title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
BUG: groupby.nth should be a filter | diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index dae42dd4f1118..d8a36b1711b6e 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -1354,9 +1354,14 @@ This shows the first or last n rows from each group.
Taking the nth row of each group
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-To select from a DataFrame or Series the nth item, use
-:meth:`~pd.core.groupby.DataFrameGroupBy.nth`. This is a reduction method, and
-will return a single row (or no row) per group if you pass an int for n:
+To select the nth item from each group, use :meth:`.DataFrameGroupBy.nth` or
+:meth:`.SeriesGroupBy.nth`. Arguments supplied can be any integer, lists of integers,
+slices, or lists of slices; see below for examples. When the nth element of a group
+does not exist, an error is *not* raised; instead, no corresponding rows are returned.
+
+In general this operation acts as a filtration. In certain cases it will also return
+one row per group, making it also a reduction. However, because it can in general
+return zero or multiple rows per group, pandas treats it as a filtration in all cases.
.. ipython:: python
@@ -1367,6 +1372,14 @@ will return a single row (or no row) per group if you pass an int for n:
g.nth(-1)
g.nth(1)
+If the nth element of a group does not exist, then no corresponding row is included
+in the result. In particular, if the specified ``n`` is larger than any group, the
+result will be an empty DataFrame.
+
+.. ipython:: python
+
+ g.nth(5)
+
If you want to select the nth not-null item, use the ``dropna`` kwarg. For a DataFrame this should be either ``'any'`` or ``'all'`` just like you would pass to dropna:
.. ipython:: python
@@ -1376,21 +1389,11 @@ If you want to select the nth not-null item, use the ``dropna`` kwarg. For a Dat
g.first()
# nth(-1) is the same as g.last()
- g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
+ g.nth(-1, dropna="any")
g.last()
g.B.nth(0, dropna="all")
-As with other methods, passing ``as_index=False``, will achieve a filtration, which returns the grouped row.
-
-.. ipython:: python
-
- df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
- g = df.groupby("A", as_index=False)
-
- g.nth(0)
- g.nth(-1)
-
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
.. ipython:: python
@@ -1400,6 +1403,13 @@ You can also select multiple rows from each group by specifying multiple nth val
# get the first, 4th, and last date index for each month
df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
+You may also use slices or lists of slices.
+
+.. ipython:: python
+
+ df.groupby([df.index.year, df.index.month]).nth[1:]
+ df.groupby([df.index.year, df.index.month]).nth[1:, :-1]
+
Enumerate group items
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 151d853166563..bf6bb5b4d36cd 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -72,7 +72,7 @@ Notable bug fixes
These are bug fixes that might have notable behavior changes.
-.. _whatsnew_200.notable_bug_fixes.notable_bug_fix1:
+.. _whatsnew_200.notable_bug_fixes.cumsum_cumprod_overflow:
:meth:`.GroupBy.cumsum` and :meth:`.GroupBy.cumprod` overflow instead of lossy casting to float
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -102,10 +102,58 @@ We return incorrect results with the 6th value.
We overflow with the 7th value, but the 6th value is still correct.
-.. _whatsnew_200.notable_bug_fixes.notable_bug_fix2:
+.. _whatsnew_200.notable_bug_fixes.groupby_nth_filter:
-notable_bug_fix2
-^^^^^^^^^^^^^^^^
+:meth:`.DataFrameGroupBy.nth` and :meth:`.SeriesGroupBy.nth` now behave as filtrations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In previous versions of pandas, :meth:`.DataFrameGroupBy.nth` and
+:meth:`.SeriesGroupBy.nth` acted as if they were aggregations. However, for most
+inputs ``n``, they may return either zero or multiple rows per group. This means
+that they are filtrations, similar to e.g. :meth:`.DataFrameGroupBy.head`. pandas
+now treats them as filtrations (:issue:`13666`).
+
+.. ipython:: python
+
+ df = pd.DataFrame({"a": [1, 1, 2, 1, 2], "b": [np.nan, 2.0, 3.0, 4.0, 5.0]})
+ gb = df.groupby("a")
+
+*Old Behavior*
+
+.. code-block:: ipython
+
+ In [5]: gb.nth(n=1)
+ Out[5]:
+ b
+ a
+ 1 2.0
+ 2 5.0
+
+*New Behavior*
+
+.. ipython:: python
+
+ gb.nth(n=1)
+
+In particular, the index of the result is derived from the input by selecting
+the appropriate rows. Also, when ``n`` is larger than the group, no rows are
+returned instead of ``NaN``.
+
+*Old Behavior*
+
+.. code-block:: ipython
+
+ In [5]: gb.nth(n=3, dropna="any")
+ Out[5]:
+ b
+ a
+ 1 NaN
+ 2 NaN
+
+*New Behavior*
+
+.. ipython:: python
+
+ gb.nth(n=3, dropna="any")
.. ---------------------------------------------------------------------------
.. _whatsnew_200.api_breaking:
diff --git a/pandas/core/groupby/base.py b/pandas/core/groupby/base.py
index a2e9c059cbcc9..0f6d39be7d32f 100644
--- a/pandas/core/groupby/base.py
+++ b/pandas/core/groupby/base.py
@@ -37,7 +37,6 @@ class OutputKey:
"mean",
"median",
"min",
- "nth",
"nunique",
"prod",
# as long as `quantile`'s signature accepts only
@@ -100,6 +99,7 @@ class OutputKey:
"indices",
"ndim",
"ngroups",
+ "nth",
"ohlc",
"pipe",
"plot",
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index f1c18b7762f66..d10931586d5e0 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2978,97 +2978,68 @@ def nth(
... 'B': [np.nan, 2, 3, 4, 5]}, columns=['A', 'B'])
>>> g = df.groupby('A')
>>> g.nth(0)
- B
- A
- 1 NaN
- 2 3.0
+ A B
+ 0 1 NaN
+ 2 2 3.0
>>> g.nth(1)
- B
- A
- 1 2.0
- 2 5.0
+ A B
+ 1 1 2.0
+ 4 2 5.0
>>> g.nth(-1)
- B
- A
- 1 4.0
- 2 5.0
+ A B
+ 3 1 4.0
+ 4 2 5.0
>>> g.nth([0, 1])
- B
- A
- 1 NaN
- 1 2.0
- 2 3.0
- 2 5.0
+ A B
+ 0 1 NaN
+ 1 1 2.0
+ 2 2 3.0
+ 4 2 5.0
>>> g.nth(slice(None, -1))
- B
- A
- 1 NaN
- 1 2.0
- 2 3.0
+ A B
+ 0 1 NaN
+ 1 1 2.0
+ 2 2 3.0
Index notation may also be used
>>> g.nth[0, 1]
- B
- A
- 1 NaN
- 1 2.0
- 2 3.0
- 2 5.0
+ A B
+ 0 1 NaN
+ 1 1 2.0
+ 2 2 3.0
+ 4 2 5.0
>>> g.nth[:-1]
- B
- A
- 1 NaN
- 1 2.0
- 2 3.0
+ A B
+ 0 1 NaN
+ 1 1 2.0
+ 2 2 3.0
- Specifying `dropna` allows count ignoring ``NaN``
+ Specifying `dropna` allows ignoring ``NaN`` values
>>> g.nth(0, dropna='any')
- B
- A
- 1 2.0
- 2 3.0
+ A B
+ 1 1 2.0
+ 2 2 3.0
- NaNs denote group exhausted when using dropna
+ When the specified ``n`` is larger than any of the groups, an
+ empty DataFrame is returned
>>> g.nth(3, dropna='any')
- B
- A
- 1 NaN
- 2 NaN
-
- Specifying `as_index=False` in `groupby` keeps the original index.
-
- >>> df.groupby('A', as_index=False).nth(1)
- A B
- 1 1 2.0
- 4 2 5.0
+ Empty DataFrame
+ Columns: [A, B]
+ Index: []
"""
if not dropna:
- with self._group_selection_context():
- mask = self._make_mask_from_positional_indexer(n)
+ mask = self._make_mask_from_positional_indexer(n)
- ids, _, _ = self.grouper.group_info
+ ids, _, _ = self.grouper.group_info
- # Drop NA values in grouping
- mask = mask & (ids != -1)
+ # Drop NA values in grouping
+ mask = mask & (ids != -1)
- out = self._mask_selected_obj(mask)
- if not self.as_index:
- return out
-
- result_index = self.grouper.result_index
- if self.axis == 0:
- out.index = result_index[ids[mask]]
- if not self.observed and isinstance(result_index, CategoricalIndex):
- out = out.reindex(result_index)
-
- out = self._reindex_output(out)
- else:
- out.columns = result_index[ids[mask]]
-
- return out.sort_index(axis=self.axis) if self.sort else out
+ out = self._mask_selected_obj(mask)
+ return out
# dropna is truthy
if not is_integer(n):
@@ -3085,7 +3056,6 @@ def nth(
# old behaviour, but with all and any support for DataFrames.
# modified in GH 7559 to have better perf
n = cast(int, n)
- max_len = n if n >= 0 else -1 - n
dropped = self.obj.dropna(how=dropna, axis=self.axis)
# get a new grouper for our dropped obj
@@ -3115,22 +3085,7 @@ def nth(
grb = dropped.groupby(
grouper, as_index=self.as_index, sort=self.sort, axis=self.axis
)
- sizes, result = grb.size(), grb.nth(n)
- mask = (sizes < max_len)._values
-
- # set the results which don't meet the criteria
- if len(result) and mask.any():
- result.loc[mask] = np.nan
-
- # reset/reindex to the original groups
- if len(self.obj) == len(dropped) or len(result) == len(
- self.grouper.result_index
- ):
- result.index = self.grouper.result_index
- else:
- result = result.reindex(self.grouper.result_index)
-
- return result
+ return grb.nth(n)
@final
def quantile(
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 1e2bcb58110dd..ca794d4ae5a3e 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -563,11 +563,7 @@ def test_observed_nth():
df = DataFrame({"cat": cat, "ser": ser})
result = df.groupby("cat", observed=False)["ser"].nth(0)
-
- index = Categorical(["a", "b", "c"], categories=["a", "b", "c"])
- expected = Series([1, np.nan, np.nan], index=index, name="ser")
- expected.index.name = "cat"
-
+ expected = df["ser"].iloc[[0]]
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index 5383a4d28c8ce..f05874c3286c7 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -405,7 +405,6 @@ def test_median_empty_bins(observed):
("last", {"df": [{"a": 1, "b": 2}, {"a": 2, "b": 4}]}),
("min", {"df": [{"a": 1, "b": 1}, {"a": 2, "b": 3}]}),
("max", {"df": [{"a": 1, "b": 2}, {"a": 2, "b": 4}]}),
- ("nth", {"df": [{"a": 1, "b": 2}, {"a": 2, "b": 4}], "args": [1]}),
("count", {"df": [{"a": 1, "b": 2}, {"a": 2, "b": 2}], "out_type": "int64"}),
],
)
diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py
index 1c8b8e3d33ecf..e3b7ad8f78750 100644
--- a/pandas/tests/groupby/test_grouping.py
+++ b/pandas/tests/groupby/test_grouping.py
@@ -851,6 +851,8 @@ def test_groupby_with_single_column(self):
exp = DataFrame(index=Index(["a", "b", "s"], name="a"))
tm.assert_frame_equal(df.groupby("a").count(), exp)
tm.assert_frame_equal(df.groupby("a").sum(), exp)
+
+ exp = df.iloc[[3, 4, 5]]
tm.assert_frame_equal(df.groupby("a").nth(1), exp)
def test_gb_key_len_equal_axis_len(self):
diff --git a/pandas/tests/groupby/test_nth.py b/pandas/tests/groupby/test_nth.py
index 187c80075f36b..de5025b998b30 100644
--- a/pandas/tests/groupby/test_nth.py
+++ b/pandas/tests/groupby/test_nth.py
@@ -23,6 +23,7 @@ def test_first_last_nth(df):
tm.assert_frame_equal(first, expected)
nth = grouped.nth(0)
+ expected = df.loc[[0, 1]]
tm.assert_frame_equal(nth, expected)
last = grouped.last()
@@ -31,12 +32,11 @@ def test_first_last_nth(df):
tm.assert_frame_equal(last, expected)
nth = grouped.nth(-1)
+ expected = df.iloc[[5, 7]]
tm.assert_frame_equal(nth, expected)
nth = grouped.nth(1)
- expected = df.loc[[2, 3], ["B", "C", "D"]].copy()
- expected.index = Index(["foo", "bar"], name="A")
- expected = expected.sort_index()
+ expected = df.iloc[[2, 3]]
tm.assert_frame_equal(nth, expected)
# it works!
@@ -47,7 +47,7 @@ def test_first_last_nth(df):
df.loc[df["A"] == "foo", "B"] = np.nan
assert isna(grouped["B"].first()["foo"])
assert isna(grouped["B"].last()["foo"])
- assert isna(grouped["B"].nth(0)["foo"])
+ assert isna(grouped["B"].nth(0).iloc[0])
# v0.14.0 whatsnew
df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
@@ -56,7 +56,7 @@ def test_first_last_nth(df):
expected = df.iloc[[1, 2]].set_index("A")
tm.assert_frame_equal(result, expected)
- expected = df.iloc[[1, 2]].set_index("A")
+ expected = df.iloc[[1, 2]]
result = g.nth(0, dropna="any")
tm.assert_frame_equal(result, expected)
@@ -82,18 +82,10 @@ def test_first_last_with_na_object(method, nulls_fixture):
@pytest.mark.parametrize("index", [0, -1])
def test_nth_with_na_object(index, nulls_fixture):
# https://github.com/pandas-dev/pandas/issues/32123
- groups = DataFrame({"a": [1, 1, 2, 2], "b": [1, 2, 3, nulls_fixture]}).groupby("a")
+ df = DataFrame({"a": [1, 1, 2, 2], "b": [1, 2, 3, nulls_fixture]})
+ groups = df.groupby("a")
result = groups.nth(index)
-
- if index == 0:
- values = [1, 3]
- else:
- values = [2, nulls_fixture]
-
- values = np.array(values, dtype=result["b"].dtype)
- idx = Index([1, 2], name="a")
- expected = DataFrame({"b": values}, index=idx)
-
+ expected = df.iloc[[0, 2]] if index == 0 else df.iloc[[1, 3]]
tm.assert_frame_equal(result, expected)
@@ -149,9 +141,7 @@ def test_first_last_nth_dtypes(df_mixed_floats):
tm.assert_frame_equal(last, expected)
nth = grouped.nth(1)
- expected = df.loc[[3, 2], ["B", "C", "D", "E", "F"]]
- expected.index = Index(["bar", "foo"], name="A")
- expected = expected.sort_index()
+ expected = df.iloc[[2, 3]]
tm.assert_frame_equal(nth, expected)
# GH 2763, first/last shifting dtypes
@@ -166,11 +156,13 @@ def test_first_last_nth_dtypes(df_mixed_floats):
def test_first_last_nth_nan_dtype():
# GH 33591
df = DataFrame({"data": ["A"], "nans": Series([np.nan], dtype=object)})
-
grouped = df.groupby("data")
+
expected = df.set_index("data").nans
tm.assert_series_equal(grouped.nans.first(), expected)
tm.assert_series_equal(grouped.nans.last(), expected)
+
+ expected = df.nans
tm.assert_series_equal(grouped.nans.nth(-1), expected)
tm.assert_series_equal(grouped.nans.nth(0), expected)
@@ -198,23 +190,21 @@ def test_nth():
df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
g = df.groupby("A")
- tm.assert_frame_equal(g.nth(0), df.iloc[[0, 2]].set_index("A"))
- tm.assert_frame_equal(g.nth(1), df.iloc[[1]].set_index("A"))
- tm.assert_frame_equal(g.nth(2), df.loc[[]].set_index("A"))
- tm.assert_frame_equal(g.nth(-1), df.iloc[[1, 2]].set_index("A"))
- tm.assert_frame_equal(g.nth(-2), df.iloc[[0]].set_index("A"))
- tm.assert_frame_equal(g.nth(-3), df.loc[[]].set_index("A"))
- tm.assert_series_equal(g.B.nth(0), df.set_index("A").B.iloc[[0, 2]])
- tm.assert_series_equal(g.B.nth(1), df.set_index("A").B.iloc[[1]])
- tm.assert_frame_equal(g[["B"]].nth(0), df.loc[[0, 2], ["A", "B"]].set_index("A"))
+ tm.assert_frame_equal(g.nth(0), df.iloc[[0, 2]])
+ tm.assert_frame_equal(g.nth(1), df.iloc[[1]])
+ tm.assert_frame_equal(g.nth(2), df.loc[[]])
+ tm.assert_frame_equal(g.nth(-1), df.iloc[[1, 2]])
+ tm.assert_frame_equal(g.nth(-2), df.iloc[[0]])
+ tm.assert_frame_equal(g.nth(-3), df.loc[[]])
+ tm.assert_series_equal(g.B.nth(0), df.B.iloc[[0, 2]])
+ tm.assert_series_equal(g.B.nth(1), df.B.iloc[[1]])
+ tm.assert_frame_equal(g[["B"]].nth(0), df[["B"]].iloc[[0, 2]])
- exp = df.set_index("A")
- tm.assert_frame_equal(g.nth(0, dropna="any"), exp.iloc[[1, 2]])
- tm.assert_frame_equal(g.nth(-1, dropna="any"), exp.iloc[[1, 2]])
+ tm.assert_frame_equal(g.nth(0, dropna="any"), df.iloc[[1, 2]])
+ tm.assert_frame_equal(g.nth(-1, dropna="any"), df.iloc[[1, 2]])
- exp["B"] = np.nan
- tm.assert_frame_equal(g.nth(7, dropna="any"), exp.iloc[[1, 2]])
- tm.assert_frame_equal(g.nth(2, dropna="any"), exp.iloc[[1, 2]])
+ tm.assert_frame_equal(g.nth(7, dropna="any"), df.iloc[:0])
+ tm.assert_frame_equal(g.nth(2, dropna="any"), df.iloc[:0])
# out of bounds, regression from 0.13.1
# GH 6621
@@ -263,13 +253,6 @@ def test_nth():
assert expected.iloc[0] == v
assert expected2.iloc[0] == v
- # this is NOT the same as .first (as sorted is default!)
- # as it keeps the order in the series (and not the group order)
- # related GH 7287
- expected = s.groupby(g, sort=False).first()
- result = s.groupby(g, sort=False).nth(0, dropna="all")
- tm.assert_series_equal(result, expected)
-
with pytest.raises(ValueError, match="For a DataFrame"):
s.groupby(g, sort=False).nth(0, dropna=True)
@@ -277,21 +260,21 @@ def test_nth():
df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
g = df.groupby("A")
result = g.B.nth(0, dropna="all")
- expected = g.B.first()
+ expected = df.B.iloc[[1, 2]]
tm.assert_series_equal(result, expected)
# test multiple nth values
df = DataFrame([[1, np.nan], [1, 3], [1, 4], [5, 6], [5, 7]], columns=["A", "B"])
g = df.groupby("A")
- tm.assert_frame_equal(g.nth(0), df.iloc[[0, 3]].set_index("A"))
- tm.assert_frame_equal(g.nth([0]), df.iloc[[0, 3]].set_index("A"))
- tm.assert_frame_equal(g.nth([0, 1]), df.iloc[[0, 1, 3, 4]].set_index("A"))
- tm.assert_frame_equal(g.nth([0, -1]), df.iloc[[0, 2, 3, 4]].set_index("A"))
- tm.assert_frame_equal(g.nth([0, 1, 2]), df.iloc[[0, 1, 2, 3, 4]].set_index("A"))
- tm.assert_frame_equal(g.nth([0, 1, -1]), df.iloc[[0, 1, 2, 3, 4]].set_index("A"))
- tm.assert_frame_equal(g.nth([2]), df.iloc[[2]].set_index("A"))
- tm.assert_frame_equal(g.nth([3, 4]), df.loc[[]].set_index("A"))
+ tm.assert_frame_equal(g.nth(0), df.iloc[[0, 3]])
+ tm.assert_frame_equal(g.nth([0]), df.iloc[[0, 3]])
+ tm.assert_frame_equal(g.nth([0, 1]), df.iloc[[0, 1, 3, 4]])
+ tm.assert_frame_equal(g.nth([0, -1]), df.iloc[[0, 2, 3, 4]])
+ tm.assert_frame_equal(g.nth([0, 1, 2]), df.iloc[[0, 1, 2, 3, 4]])
+ tm.assert_frame_equal(g.nth([0, 1, -1]), df.iloc[[0, 1, 2, 3, 4]])
+ tm.assert_frame_equal(g.nth([2]), df.iloc[[2]])
+ tm.assert_frame_equal(g.nth([3, 4]), df.loc[[]])
business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
df = DataFrame(1, index=business_dates, columns=["a", "b"])
@@ -318,12 +301,12 @@ def test_nth():
tm.assert_frame_equal(result, expected)
-def test_nth_multi_index(three_group):
+def test_nth_multi_grouper(three_group):
# PR 9090, related to issue 8979
- # test nth on MultiIndex, should match .first()
+ # test nth on multiple groupers
grouped = three_group.groupby(["A", "B"])
result = grouped.nth(0)
- expected = grouped.first()
+ expected = three_group.iloc[[0, 3, 4, 7]]
tm.assert_frame_equal(result, expected)
@@ -504,13 +487,7 @@ def test_nth_multi_index_as_expected():
)
grouped = three_group.groupby(["A", "B"])
result = grouped.nth(0)
- expected = DataFrame(
- {"C": ["dull", "dull", "dull", "dull"]},
- index=MultiIndex.from_arrays(
- [["bar", "bar", "foo", "foo"], ["one", "two", "one", "two"]],
- names=["A", "B"],
- ),
- )
+ expected = three_group.iloc[[0, 3, 4, 7]]
tm.assert_frame_equal(result, expected)
@@ -567,7 +544,7 @@ def test_groupby_head_tail_axis_1(op, n, expected_cols):
def test_group_selection_cache():
# GH 12839 nth, head, and tail should return same result consistently
df = DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
- expected = df.iloc[[0, 2]].set_index("A")
+ expected = df.iloc[[0, 2]]
g = df.groupby("A")
result1 = g.head(n=2)
@@ -598,13 +575,11 @@ def test_nth_empty():
# GH 16064
df = DataFrame(index=[0], columns=["a", "b", "c"])
result = df.groupby("a").nth(10)
- expected = DataFrame(index=Index([], name="a"), columns=["b", "c"])
+ expected = df.iloc[:0]
tm.assert_frame_equal(result, expected)
result = df.groupby(["a", "b"]).nth(10)
- expected = DataFrame(
- index=MultiIndex([[], []], [[], []], names=["a", "b"]), columns=["c"]
- )
+ expected = df.iloc[:0]
tm.assert_frame_equal(result, expected)
@@ -616,15 +591,11 @@ def test_nth_column_order():
columns=["A", "C", "B"],
)
result = df.groupby("A").nth(0)
- expected = DataFrame(
- [["b", 100.0], ["c", 200.0]], columns=["C", "B"], index=Index([1, 2], name="A")
- )
+ expected = df.iloc[[0, 3]]
tm.assert_frame_equal(result, expected)
result = df.groupby("A").nth(-1, dropna="any")
- expected = DataFrame(
- [["a", 50.0], ["d", 150.0]], columns=["C", "B"], index=Index([1, 2], name="A")
- )
+ expected = df.iloc[[1, 4]]
tm.assert_frame_equal(result, expected)
@@ -636,9 +607,7 @@ def test_nth_nan_in_grouper(dropna):
columns=list("abc"),
)
result = df.groupby("a").nth(0, dropna=dropna)
- expected = DataFrame(
- [[2, 3], [6, 7]], columns=list("bc"), index=Index(["abc", "def"], name="a")
- )
+ expected = df.iloc[[1, 3]]
tm.assert_frame_equal(result, expected)
@@ -772,29 +741,21 @@ def test_groupby_nth_with_column_axis():
columns=["C", "B", "A"],
)
result = df.groupby(df.iloc[1], axis=1).nth(0)
- expected = DataFrame(
- [
- [6, 4],
- [7, 8],
- ],
- index=["z", "y"],
- columns=[7, 8],
- )
- expected.columns.name = "y"
+ expected = df.iloc[:, [0, 2]]
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
"start, stop, expected_values, expected_columns",
[
- (None, None, [0, 1, 2, 3, 4], [5, 5, 5, 6, 6]),
- (None, 1, [0, 3], [5, 6]),
- (None, 9, [0, 1, 2, 3, 4], [5, 5, 5, 6, 6]),
- (None, -1, [0, 1, 3], [5, 5, 6]),
- (1, None, [1, 2, 4], [5, 5, 6]),
- (1, -1, [1], [5]),
- (-1, None, [2, 4], [5, 6]),
- (-1, 2, [4], [6]),
+ (None, None, [0, 1, 2, 3, 4], list("ABCDE")),
+ (None, 1, [0, 3], list("AD")),
+ (None, 9, [0, 1, 2, 3, 4], list("ABCDE")),
+ (None, -1, [0, 1, 3], list("ABD")),
+ (1, None, [1, 2, 4], list("BCE")),
+ (1, -1, [1], list("B")),
+ (-1, None, [2, 4], list("CE")),
+ (-1, 2, [4], list("E")),
],
)
@pytest.mark.parametrize("method", ["call", "index"])
@@ -807,7 +768,7 @@ def test_nth_slices_with_column_axis(
"call": lambda start, stop: gb.nth(slice(start, stop)),
"index": lambda start, stop: gb.nth[start:stop],
}[method](start, stop)
- expected = DataFrame([expected_values], columns=expected_columns)
+ expected = DataFrame([expected_values], columns=[expected_columns])
tm.assert_frame_equal(result, expected)
@@ -824,7 +785,7 @@ def test_head_tail_dropna_true():
result = df.groupby(["X", "Y"]).tail(n=1)
tm.assert_frame_equal(result, expected)
- result = df.groupby(["X", "Y"]).nth(n=0).reset_index()
+ result = df.groupby(["X", "Y"]).nth(n=0)
tm.assert_frame_equal(result, expected)
@@ -839,5 +800,5 @@ def test_head_tail_dropna_false():
result = df.groupby(["X", "Y"], dropna=False).tail(n=1)
tm.assert_frame_equal(result, expected)
- result = df.groupby(["X", "Y"], dropna=False).nth(n=0).reset_index()
+ result = df.groupby(["X", "Y"], dropna=False).nth(n=0)
tm.assert_frame_equal(result, expected)
| - [x] closes #13666
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49262 | 2022-10-23T12:54:15Z | 2022-11-11T01:39:41Z | 2022-11-11T01:39:41Z | 2022-11-11T13:00:00Z |
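The filtration semantics this PR gives `nth` can be emulated on any pandas version with `cumcount`. This is a hedged sketch, not the PR's implementation — real `nth` also handles negative `n`, lists, slices, and `dropna`, which this simple mask does not:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2, 1, 2], "b": [np.nan, 2.0, 3.0, 4.0, 5.0]})

# Emulate groupby(...).nth(n) as a filtration for a single non-negative n:
# keep the row at position n within each group, preserving the original
# index; groups without an nth row contribute no rows (no NaN padding).
n = 1
mask = df.groupby("a").cumcount() == n
result = df[mask]
print(result)
```

Note how the result keeps the original row labels (1 and 4) and both columns, matching the "New Behavior" block in the whatsnew entry above, rather than reducing to one row per group key.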
ENH: Add pre-commit check for setup.cfg options.extras_require | diff --git a/ci/deps/actions-38-minimum_versions.yaml b/ci/deps/actions-38-minimum_versions.yaml
index 7cf6d777ae607..8e0ccd77b19a6 100644
--- a/ci/deps/actions-38-minimum_versions.yaml
+++ b/ci/deps/actions-38-minimum_versions.yaml
@@ -36,6 +36,7 @@ dependencies:
- numba=0.53.1
- numexpr=2.7.3
- odfpy=1.4.1
+ - qtpy=2.2.0
- openpyxl=3.0.7
- pandas-gbq=0.15.0
- psycopg2=2.8.6
@@ -54,3 +55,6 @@ dependencies:
- xlrd=2.0.1
- xlsxwriter=1.4.3
- zstandard=0.15.2
+
+ - pip:
+ - pyqt5==5.15.1
diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py
index 699d1b565fc71..abad188f06720 100644
--- a/pandas/compat/_optional.py
+++ b/pandas/compat/_optional.py
@@ -46,6 +46,8 @@
"xlsxwriter": "1.4.3",
"zstandard": "0.15.2",
"tzdata": "2022.1",
+ "qtpy": "2.2.0",
+ "pyqt5": "5.15.1",
}
# A mapping from import name to package name (on PyPI) for packages where
diff --git a/scripts/validate_min_versions_in_sync.py b/scripts/validate_min_versions_in_sync.py
index cb6a204094bf5..ad0375a4320a2 100755
--- a/scripts/validate_min_versions_in_sync.py
+++ b/scripts/validate_min_versions_in_sync.py
@@ -4,6 +4,7 @@
ci/deps/actions-.*-minimum_versions.yaml
pandas/compat/_optional.py
+setup.cfg
TODO: doc/source/getting_started/install.rst
@@ -13,6 +14,7 @@
"""
from __future__ import annotations
+import configparser
import pathlib
import sys
@@ -21,6 +23,7 @@
pathlib.Path("ci/deps").absolute().glob("actions-*-minimum_versions.yaml")
)
CODE_PATH = pathlib.Path("pandas/compat/_optional.py").resolve()
+SETUP_PATH = pathlib.Path("setup.cfg").resolve()
EXCLUDE_DEPS = {"tzdata"}
# pandas package is not available
# in pre-commit environment
@@ -57,29 +60,76 @@ def get_versions_from_ci(content: list[str]) -> tuple[dict[str, str], dict[str,
seen_required = True
elif "# optional dependencies" in line:
seen_optional = True
+ elif "- pip:" in line:
+ continue
elif seen_required and line.strip():
- package, version = line.strip().split("=")
+ if "==" in line:
+ package, version = line.strip().split("==")
+
+ else:
+ package, version = line.strip().split("=")
package = package[2:]
if package in EXCLUDE_DEPS:
continue
if not seen_optional:
- required_deps[package] = version
+ required_deps[package.casefold()] = version
else:
- optional_deps[package] = version
+ optional_deps[package.casefold()] = version
return required_deps, optional_deps
+def get_versions_from_setup() -> dict[str, str]:
+ install_map = _optional.INSTALL_MAPPING
+ optional_dependencies = {}
+
+ parser = configparser.ConfigParser()
+ parser.read(SETUP_PATH)
+ setup_optional = parser["options.extras_require"]["all"]
+ dependencies = setup_optional[1:].split("\n")
+
+ # remove test dependencies
+ test = parser["options.extras_require"]["test"]
+ test_dependencies = set(test[1:].split("\n"))
+ dependencies = [
+ package for package in dependencies if package not in test_dependencies
+ ]
+
+ for dependency in dependencies:
+ package, version = dependency.strip().split(">=")
+ optional_dependencies[install_map.get(package, package).casefold()] = version
+
+ for item in EXCLUDE_DEPS:
+ optional_dependencies.pop(item)
+
+ return optional_dependencies
+
+
def main():
with open(CI_PATH, encoding="utf-8") as f:
_, ci_optional = get_versions_from_ci(f.readlines())
code_optional = get_versions_from_code()
- diff = set(ci_optional.items()).symmetric_difference(code_optional.items())
+ setup_optional = get_versions_from_setup()
+
+ diff = (ci_optional.items() | code_optional.items() | setup_optional.items()) - (
+ ci_optional.items() & code_optional.items() & setup_optional.items()
+ )
+
if diff:
- sys.stdout.write(
+ packages = {package for package, _ in diff}
+ out = sys.stdout
+ out.write(
f"The follow minimum version differences were found between "
- f"{CI_PATH} and {CODE_PATH}. Please ensure these are aligned: "
- f"{diff}\n"
+ f"{CI_PATH}, {CODE_PATH} and {SETUP_PATH}. "
+ f"Please ensure these are aligned: \n\n"
)
+
+ for package in packages:
+ out.write(
+ f"{package}\n"
+ f"{CI_PATH}: {ci_optional.get(package, 'Not specified')}\n"
+ f"{CODE_PATH}: {code_optional.get(package, 'Not specified')}\n"
+ f"{SETUP_PATH}: {setup_optional.get(package, 'Not specified')}\n\n"
+ )
sys.exit(1)
sys.exit(0)
diff --git a/setup.cfg b/setup.cfg
index 5680db30ec50d..785143c7b647c 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -60,7 +60,7 @@ test =
# see: doc/source/getting_started/install.rst
performance =
bottleneck>=1.3.2
- numba>=0.53.0
+ numba>=0.53.1
numexpr>=2.7.1
timezone =
tzdata>=2022.1
@@ -68,12 +68,11 @@ computation =
scipy>=1.7.1
xarray>=0.19.0
fss =
- fsspec>=2021.7.0
+ fsspec>=2021.07.0
aws =
- boto3>=1.22.7
- s3fs>=0.4.0
+ s3fs>=2021.08.0
gcp =
- gcsfs>=2021.05.0
+ gcsfs>=2021.07.0
pandas-gbq>=0.15.0
excel =
odfpy>=1.4.1
@@ -105,7 +104,7 @@ html =
xml =
lxml>=4.6.3
plot =
- matplotlib>=3.3.2
+ matplotlib>=3.6.1
output_formatting =
jinja2>=3.0.0
tabulate>=0.8.9
@@ -123,19 +122,18 @@ compression =
all =
beautifulsoup4>=4.9.3
blosc>=1.21.0
- bottleneck>=1.3.1
- boto3>=1.22.7
+ bottleneck>=1.3.2
brotlipy>=0.7.0
- fastparquet>=0.4.0
- fsspec>=2021.7.0
- gcsfs>=2021.05.0
+ fastparquet>=0.6.3
+ fsspec>=2021.07.0
+ gcsfs>=2021.07.0
html5lib>=1.1
- hypothesis>=5.5.3
+ hypothesis>=6.13.0
jinja2>=3.0.0
lxml>=4.6.3
- matplotlib>=3.3.2
- numba>=0.53.0
- numexpr>=2.7.1
+ matplotlib>=3.6.1
+ numba>=0.53.1
+ numexpr>=2.7.3
odfpy>=1.4.1
openpyxl>=3.0.7
pandas-gbq>=0.15.0
@@ -151,7 +149,7 @@ all =
pyxlsb>=1.0.8
qtpy>=2.2.0
scipy>=1.7.1
- s3fs>=0.4.0
+ s3fs>=2021.08.0
SQLAlchemy>=1.4.16
tables>=3.6.1
tabulate>=0.8.9
| Augmented `scripts/validate_min_versions_in_sync.py` to ensure that the minimum versions it checks are also aligned with those specified in `setup.cfg`.
- [x] closes #48949
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49261 | 2022-10-23T12:17:43Z | 2022-11-07T19:38:43Z | 2022-11-07T19:38:43Z | 2022-11-07T19:54:18Z |
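The two pieces this PR adds to the validation script — parsing `[options.extras_require]` with `configparser` and the three-way set difference over version mappings — can be sketched in isolation. The package names and pins below are illustrative, and `read_string` stands in for reading `setup.cfg` from disk:

```python
import configparser

# Minimal stand-in for the [options.extras_require] section of setup.cfg.
SETUP_CFG = """
[options.extras_require]
all =
    bottleneck>=1.3.2
    numba>=0.53.1
    numexpr>=2.7.3
"""

parser = configparser.ConfigParser()
parser.read_string(SETUP_CFG)

# The option value is a newline-separated block; split it into
# "package>=version" entries and build a {package: version} mapping,
# casefolding names so later comparisons are case-insensitive.
entries = parser["options.extras_require"]["all"].strip().split("\n")
versions = {}
for entry in entries:
    package, version = entry.strip().split(">=")
    versions[package.casefold()] = version

# The script then flags packages whose pins disagree across sources:
# union of all (package, version) pairs minus the pairs common to all.
ci = {"numba": "0.53.1"}
code = {"numba": "0.53.1"}
setup = {"numba": "0.53.0"}
diff = (ci.items() | code.items() | setup.items()) - (
    ci.items() & code.items() & setup.items()
)
print(versions, diff)
```

Because `dict.items()` views support set operations, a package pinned inconsistently shows up in `diff` once per conflicting pin, which is what the new per-package error report iterates over.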
DEPR: enforce deprecation MultiIndex.is_lexsorted and MultiIndex.lexsort_depth | diff --git a/doc/source/whatsnew/v0.15.2.rst b/doc/source/whatsnew/v0.15.2.rst
index 2dae76dd6b461..fd4946c9765e1 100644
--- a/doc/source/whatsnew/v0.15.2.rst
+++ b/doc/source/whatsnew/v0.15.2.rst
@@ -25,6 +25,7 @@ API changes
a lexically sorted index will have a better performance. (:issue:`2646`)
.. ipython:: python
+ :okexcept:
:okwarning:
df = pd.DataFrame({'jim':[0, 0, 1, 1],
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 6239ddf9442e7..2ba3bd9470ff3 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -177,6 +177,7 @@ Removal of prior version deprecations/changes
- Remove arguments ``names`` and ``dtype`` from :meth:`Index.copy` and ``levels`` and ``codes`` from :meth:`MultiIndex.copy` (:issue:`35853`, :issue:`36685`)
- Remove argument ``inplace`` from :meth:`MultiIndex.set_levels` and :meth:`MultiIndex.set_codes` (:issue:`35626`)
- Disallow passing positional arguments to :meth:`MultiIndex.set_levels` and :meth:`MultiIndex.set_codes` (:issue:`41485`)
+- Removed :meth:`MultiIndex.is_lexsorted` and :meth:`MultiIndex.lexsort_depth` (:issue:`38701`)
- Removed argument ``how`` from :meth:`PeriodIndex.astype`, use :meth:`PeriodIndex.to_timestamp` instead (:issue:`37982`)
- Removed argument ``try_cast`` from :meth:`DataFrame.mask`, :meth:`DataFrame.where`, :meth:`Series.mask` and :meth:`Series.where` (:issue:`38836`)
- Removed argument ``is_copy`` from :meth:`DataFrame.take` and :meth:`Series.take` (:issue:`30615`)
diff --git a/pandas/conftest.py b/pandas/conftest.py
index d9f481d843b37..24d878cf1cde9 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -154,7 +154,6 @@ def pytest_collection_modifyitems(items, config) -> None:
("DataFrame.append", "The frame.append method is deprecated"),
("Series.append", "The series.append method is deprecated"),
("dtypes.common.is_categorical", "is_categorical is deprecated"),
- ("MultiIndex._is_lexsorted", "MultiIndex.is_lexsorted is deprecated"),
# Docstring divides by zero to show behavior difference
("missing.mask_zero_div_zero", "divide by zero encountered"),
# Docstring demonstrates the call raises a warning
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 59a0179f93c10..761869d2b5954 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1810,15 +1810,6 @@ def to_flat_index(self) -> Index: # type: ignore[override]
"""
return Index(self._values, tupleize_cols=False)
- def is_lexsorted(self) -> bool:
- warnings.warn(
- "MultiIndex.is_lexsorted is deprecated as a public function, "
- "users should use MultiIndex.is_monotonic_increasing instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self._is_lexsorted()
-
def _is_lexsorted(self) -> bool:
"""
Return True if the codes are lexicographically sorted.
@@ -1832,37 +1823,29 @@ def _is_lexsorted(self) -> bool:
In the below examples, the first level of the MultiIndex is sorted because
a<b<c, so there is no need to look at the next level.
- >>> pd.MultiIndex.from_arrays([['a', 'b', 'c'], ['d', 'e', 'f']]).is_lexsorted()
+ >>> pd.MultiIndex.from_arrays([['a', 'b', 'c'],
+ ... ['d', 'e', 'f']])._is_lexsorted()
True
- >>> pd.MultiIndex.from_arrays([['a', 'b', 'c'], ['d', 'f', 'e']]).is_lexsorted()
+ >>> pd.MultiIndex.from_arrays([['a', 'b', 'c'],
+ ... ['d', 'f', 'e']])._is_lexsorted()
True
In case there is a tie, the lexicographical sorting looks
at the next level of the MultiIndex.
- >>> pd.MultiIndex.from_arrays([[0, 1, 1], ['a', 'b', 'c']]).is_lexsorted()
+ >>> pd.MultiIndex.from_arrays([[0, 1, 1], ['a', 'b', 'c']])._is_lexsorted()
True
- >>> pd.MultiIndex.from_arrays([[0, 1, 1], ['a', 'c', 'b']]).is_lexsorted()
+ >>> pd.MultiIndex.from_arrays([[0, 1, 1], ['a', 'c', 'b']])._is_lexsorted()
False
>>> pd.MultiIndex.from_arrays([['a', 'a', 'b', 'b'],
- ... ['aa', 'bb', 'aa', 'bb']]).is_lexsorted()
+ ... ['aa', 'bb', 'aa', 'bb']])._is_lexsorted()
True
>>> pd.MultiIndex.from_arrays([['a', 'a', 'b', 'b'],
- ... ['bb', 'aa', 'aa', 'bb']]).is_lexsorted()
+ ... ['bb', 'aa', 'aa', 'bb']])._is_lexsorted()
False
"""
return self._lexsort_depth == self.nlevels
- @property
- def lexsort_depth(self) -> int:
- warnings.warn(
- "MultiIndex.lexsort_depth is deprecated as a public function, "
- "users should use MultiIndex.is_monotonic_increasing instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self._lexsort_depth
-
@cache_readonly
def _lexsort_depth(self) -> int:
"""
diff --git a/pandas/tests/indexes/multi/test_lexsort.py b/pandas/tests/indexes/multi/test_lexsort.py
index 0aadbdb5c32da..fc16a4197a3a4 100644
--- a/pandas/tests/indexes/multi/test_lexsort.py
+++ b/pandas/tests/indexes/multi/test_lexsort.py
@@ -1,5 +1,4 @@
from pandas import MultiIndex
-import pandas._testing as tm
class TestIsLexsorted:
@@ -22,14 +21,6 @@ def test_is_lexsorted(self):
assert not index._is_lexsorted()
assert index._lexsort_depth == 0
- def test_is_lexsorted_deprecation(self):
- # GH 32259
- with tm.assert_produces_warning(
- FutureWarning,
- match="MultiIndex.is_lexsorted is deprecated as a public function",
- ):
- MultiIndex.from_arrays([["a", "b", "c"], ["d", "f", "e"]]).is_lexsorted()
-
class TestLexsortDepth:
def test_lexsort_depth(self):
@@ -53,11 +44,3 @@ def test_lexsort_depth(self):
levels=levels, codes=[[0, 0, 1, 0, 1, 1], [0, 1, 0, 2, 2, 1]], sortorder=0
)
assert index._lexsort_depth == 0
-
- def test_lexsort_depth_deprecation(self):
- # GH 32259
- with tm.assert_produces_warning(
- FutureWarning,
- match="MultiIndex.lexsort_depth is deprecated as a public function",
- ):
- MultiIndex.from_arrays([["a", "b", "c"], ["d", "f", "e"]]).lexsort_depth
diff --git a/pandas/tests/indexes/multi/test_setops.py b/pandas/tests/indexes/multi/test_setops.py
index 61e602f4c41fe..eaa4e0a7b5256 100644
--- a/pandas/tests/indexes/multi/test_setops.py
+++ b/pandas/tests/indexes/multi/test_setops.py
@@ -683,9 +683,7 @@ def test_intersection_lexsort_depth(levels1, levels2, codes1, codes2, names):
mi1 = MultiIndex(levels=levels1, codes=codes1, names=names)
mi2 = MultiIndex(levels=levels2, codes=codes2, names=names)
mi_int = mi1.intersection(mi2)
-
- with tm.assert_produces_warning(FutureWarning, match="MultiIndex.lexsort_depth"):
- assert mi_int.lexsort_depth == 2
+ assert mi_int._lexsort_depth == 2
@pytest.mark.parametrize("val", [pd.NA, 100])
| - [x] Added an entry in the latest `doc/source/whatsnew/v2.0.0.rst` file if fixing a bug or adding a new feature.
Deprecation introduced in #38701 | https://api.github.com/repos/pandas-dev/pandas/pulls/49260 | 2022-10-23T12:01:40Z | 2022-10-23T22:15:40Z | 2022-10-23T22:15:40Z | 2022-10-26T10:17:55Z |
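The filtration behavior that the doc change in this record describes can be seen with a small frame. A minimal sketch (assumes pandas is installed; the frame and column names are illustrative, not from the PR):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 1, 2, 1, 2], "B": [None, 2.0, 3.0, 4.0, 5.0]})
g = df.groupby("A")

# Every group has a 0th row, so one row per group comes back.
first_rows = g.nth(0)

# No group has six rows, so instead of raising, nth returns an empty result.
too_big = g.nth(5)
```

Because `nth` acts as a filtration, out-of-range `n` silently yields no rows for a group rather than an error.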
DEPR: disallow non-keyword arguments | diff --git a/asv_bench/benchmarks/io/stata.py b/asv_bench/benchmarks/io/stata.py
index 4ae2745af8bff..1ff929d6dbdea 100644
--- a/asv_bench/benchmarks/io/stata.py
+++ b/asv_bench/benchmarks/io/stata.py
@@ -38,13 +38,13 @@ def setup(self, convert_dates):
)
self.df["float32_"] = np.array(np.random.randn(N), dtype=np.float32)
self.convert_dates = {"index": convert_dates}
- self.df.to_stata(self.fname, self.convert_dates)
+ self.df.to_stata(self.fname, convert_dates=self.convert_dates)
def time_read_stata(self, convert_dates):
read_stata(self.fname)
def time_write_stata(self, convert_dates):
- self.df.to_stata(self.fname, self.convert_dates)
+ self.df.to_stata(self.fname, convert_dates=self.convert_dates)
class StataMissing(Stata):
@@ -54,7 +54,7 @@ def setup(self, convert_dates):
missing_data = np.random.randn(self.N)
missing_data[missing_data < 0] = np.nan
self.df[f"missing_{i}"] = missing_data
- self.df.to_stata(self.fname, self.convert_dates)
+ self.df.to_stata(self.fname, convert_dates=self.convert_dates)
from ..pandas_vb_common import setup # noqa: F401 isort:skip
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 0880b8e2cac12..921c8af0aa346 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -183,6 +183,16 @@ Removal of prior version deprecations/changes
- Disallow passing non-round floats to :class:`Timestamp` with ``unit="M"`` or ``unit="Y"`` (:issue:`47266`)
- Remove keywords ``convert_float`` and ``mangle_dupe_cols`` from :func:`read_excel` (:issue:`41176`)
- Disallow passing non-keyword arguments to :func:`read_excel` except ``io`` and ``sheet_name`` (:issue:`34418`)
+- Disallow passing non-keyword arguments to :meth:`Series.mask` and :meth:`DataFrame.mask` except ``cond`` and ``other`` (:issue:`41580`)
+- Disallow passing non-keyword arguments to :meth:`DataFrame.to_stata` except for ``path`` (:issue:`48128`)
+- Disallow passing non-keyword arguments to :meth:`DataFrame.where` and :meth:`Series.where` except for ``cond`` and ``other`` (:issue:`41523`)
+- Disallow passing non-keyword arguments to :meth:`Series.set_axis` and :meth:`DataFrame.set_axis` except for ``labels`` (:issue:`41491`)
+- Disallow passing non-keyword arguments to :meth:`Series.rename_axis` and :meth:`DataFrame.rename_axis` except for ``mapper`` (:issue:`47587`)
+- Disallow passing non-keyword arguments to :meth:`Series.clip` and :meth:`DataFrame.clip` (:issue:`41511`)
+- Disallow passing non-keyword arguments to :meth:`Series.bfill`, :meth:`Series.ffill`, :meth:`DataFrame.bfill` and :meth:`DataFrame.ffill` (:issue:`41508`)
+- Disallow passing non-keyword arguments to :meth:`DataFrame.replace`, :meth:`Series.replace` except for ``to_replace`` and ``value`` (:issue:`47587`)
+- Disallow passing non-keyword arguments to :meth:`DataFrame.sort_values` except for ``by`` (:issue:`41505`)
+- Disallow passing non-keyword arguments to :meth:`Series.sort_values` (:issue:`41505`)
- Removed :meth:`.Rolling.validate`, :meth:`.Expanding.validate`, and :meth:`.ExponentialMovingWindow.validate` (:issue:`43665`)
- Removed :attr:`Rolling.win_type` returning ``"freq"`` (:issue:`38963`)
- Removed :attr:`Rolling.is_datetimelike` (:issue:`38963`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index fb333aff66b72..0cac5d621fd0c 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2621,10 +2621,10 @@ def _from_arrays(
compression_options=_shared_docs["compression_options"] % "path",
)
@deprecate_kwarg(old_arg_name="fname", new_arg_name="path")
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self", "path"])
def to_stata(
self,
path: FilePath | WriteBuffer[bytes],
+ *,
convert_dates: dict[Hashable, str] | None = None,
write_index: bool = True,
byteorder: str | None = None,
@@ -2635,7 +2635,6 @@ def to_stata(
convert_strl: Sequence[Hashable] | None = None,
compression: CompressionOptions = "infer",
storage_options: StorageOptions = None,
- *,
value_labels: dict[Hashable, dict[float, str]] | None = None,
) -> None:
"""
@@ -5141,7 +5140,6 @@ def set_axis(
...
# error: Signature of "set_axis" incompatible with supertype "NDFrame"
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self", "labels"])
@Appender(
"""
Examples
@@ -5183,9 +5181,9 @@ def set_axis(
def set_axis(
self,
labels,
+ *,
axis: Axis = 0,
inplace: bool | lib.NoDefault = lib.no_default,
- *,
copy: bool | lib.NoDefault = lib.no_default,
):
return super().set_axis(labels, axis=axis, inplace=inplace, copy=copy)
@@ -5719,14 +5717,12 @@ def replace(
...
# error: Signature of "replace" incompatible with supertype "NDFrame"
- @deprecate_nonkeyword_arguments(
- version=None, allowed_args=["self", "to_replace", "value"]
- )
@doc(NDFrame.replace, **_shared_doc_kwargs)
def replace( # type: ignore[override]
self,
to_replace=None,
value=lib.no_default,
+ *,
inplace: bool = False,
limit: int | None = None,
regex: bool = False,
@@ -6868,12 +6864,12 @@ def sort_values(
# TODO: Just move the sort_values doc here.
# error: Signature of "sort_values" incompatible with supertype "NDFrame"
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self", "by"])
@Substitution(**_shared_doc_kwargs)
@Appender(NDFrame.sort_values.__doc__)
def sort_values( # type: ignore[override]
self,
by: IndexLabel,
+ *,
axis: Axis = 0,
ascending: bool | list[bool] | tuple[bool, ...] = True,
inplace: bool = False,
@@ -11776,10 +11772,9 @@ def ffill(
) -> DataFrame | None:
...
- # error: Signature of "ffill" incompatible with supertype "NDFrame"
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self"])
- def ffill( # type: ignore[override]
+ def ffill(
self,
+ *,
axis: None | Axis = None,
inplace: bool = False,
limit: None | int = None,
@@ -11820,10 +11815,9 @@ def bfill(
) -> DataFrame | None:
...
- # error: Signature of "bfill" incompatible with supertype "NDFrame"
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self"])
- def bfill( # type: ignore[override]
+ def bfill(
self,
+ *,
axis: None | Axis = None,
inplace: bool = False,
limit: None | int = None,
@@ -11831,19 +11825,16 @@ def bfill( # type: ignore[override]
) -> DataFrame | None:
return super().bfill(axis=axis, inplace=inplace, limit=limit, downcast=downcast)
- @deprecate_nonkeyword_arguments(
- version=None, allowed_args=["self", "lower", "upper"]
- )
def clip(
self: DataFrame,
lower: float | None = None,
upper: float | None = None,
+ *,
axis: Axis | None = None,
inplace: bool = False,
- *args,
**kwargs,
) -> DataFrame | None:
- return super().clip(lower, upper, axis, inplace, *args, **kwargs)
+ return super().clip(lower, upper, axis=axis, inplace=inplace, **kwargs)
@deprecate_nonkeyword_arguments(version=None, allowed_args=["self", "method"])
def interpolate(
@@ -11909,13 +11900,11 @@ def where(
# error: Signature of "where" incompatible with supertype "NDFrame"
@deprecate_kwarg(old_arg_name="errors", new_arg_name=None)
- @deprecate_nonkeyword_arguments(
- version=None, allowed_args=["self", "cond", "other"]
- )
def where( # type: ignore[override]
self,
cond,
other=lib.no_default,
+ *,
inplace: bool = False,
axis: Axis | None = None,
level: Level = None,
@@ -11970,13 +11959,11 @@ def mask(
# error: Signature of "mask" incompatible with supertype "NDFrame"
@deprecate_kwarg(old_arg_name="errors", new_arg_name=None)
- @deprecate_nonkeyword_arguments(
- version=None, allowed_args=["self", "cond", "other"]
- )
def mask( # type: ignore[override]
self,
cond,
other=lib.no_default,
+ *,
inplace: bool = False,
axis: Axis | None = None,
level: Level = None,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7a1026d32d4f3..27e2ff4b8914e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -750,13 +750,12 @@ def set_axis(
) -> NDFrameT | None:
...
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self", "labels"])
def set_axis(
self: NDFrameT,
labels,
+ *,
axis: Axis = 0,
inplace: bool_t | lib.NoDefault = lib.no_default,
- *,
copy: bool_t | lib.NoDefault = lib.no_default,
) -> NDFrameT | None:
"""
@@ -1154,10 +1153,10 @@ def rename_axis(
...
@rewrite_axis_style_signature("mapper", [("copy", True)])
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self", "mapper"])
def rename_axis(
self: NDFrameT,
mapper: IndexLabel | lib.NoDefault = lib.no_default,
+ *,
inplace: bool_t = False,
**kwargs,
) -> NDFrameT | None:
@@ -4813,9 +4812,9 @@ def sort_values(
) -> NDFrameT | None:
...
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self"])
def sort_values(
self: NDFrameT,
+ *,
axis: Axis = 0,
ascending: bool_t | Sequence[bool_t] = True,
inplace: bool_t = False,
@@ -7007,10 +7006,10 @@ def ffill(
) -> NDFrameT | None:
...
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self"])
@doc(klass=_shared_doc_kwargs["klass"])
def ffill(
self: NDFrameT,
+ *,
axis: None | Axis = None,
inplace: bool_t = False,
limit: None | int = None,
@@ -7063,10 +7062,10 @@ def bfill(
) -> NDFrameT | None:
...
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self"])
@doc(klass=_shared_doc_kwargs["klass"])
def bfill(
self: NDFrameT,
+ *,
axis: None | Axis = None,
inplace: bool_t = False,
limit: None | int = None,
@@ -7125,9 +7124,6 @@ def replace(
) -> NDFrameT | None:
...
- @deprecate_nonkeyword_arguments(
- version=None, allowed_args=["self", "to_replace", "value"]
- )
@doc(
_shared_docs["replace"],
klass=_shared_doc_kwargs["klass"],
@@ -7138,6 +7134,7 @@ def replace(
self: NDFrameT,
to_replace=None,
value=lib.no_default,
+ *,
inplace: bool_t = False,
limit: int | None = None,
regex: bool_t = False,
@@ -8000,9 +7997,9 @@ def clip(
self: NDFrameT,
lower=None,
upper=None,
+ *,
axis: Axis | None = None,
inplace: bool_t = False,
- *args,
**kwargs,
) -> NDFrameT | None:
"""
@@ -8105,7 +8102,7 @@ def clip(
"""
inplace = validate_bool_kwarg(inplace, "inplace")
- axis = nv.validate_clip_with_axis(axis, args, kwargs)
+ axis = nv.validate_clip_with_axis(axis, (), kwargs)
if axis is not None:
axis = self._get_axis_number(axis)
@@ -9827,9 +9824,6 @@ def where(
...
@deprecate_kwarg(old_arg_name="errors", new_arg_name=None)
- @deprecate_nonkeyword_arguments(
- version=None, allowed_args=["self", "cond", "other"]
- )
@doc(
klass=_shared_doc_kwargs["klass"],
cond="True",
@@ -9841,6 +9835,7 @@ def where(
self: NDFrameT,
cond,
other=np.nan,
+ *,
inplace: bool_t = False,
axis: Axis | None = None,
level: Level = None,
@@ -10035,9 +10030,6 @@ def mask(
...
@deprecate_kwarg(old_arg_name="errors", new_arg_name=None)
- @deprecate_nonkeyword_arguments(
- version=None, allowed_args=["self", "cond", "other"]
- )
@doc(
where,
klass=_shared_doc_kwargs["klass"],
@@ -10050,6 +10042,7 @@ def mask(
self: NDFrameT,
cond,
other=lib.no_default,
+ *,
inplace: bool_t = False,
axis: Axis | None = None,
level: Level = None,
diff --git a/pandas/core/series.py b/pandas/core/series.py
index b7d12158fd909..bafc90ee4cf5b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3523,10 +3523,9 @@ def sort_values(
) -> None:
...
- # error: Signature of "sort_values" incompatible with supertype "NDFrame"
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self"])
- def sort_values( # type: ignore[override]
+ def sort_values(
self,
+ *,
axis: Axis = 0,
ascending: bool | int | Sequence[bool] | Sequence[int] = True,
inplace: bool = False,
@@ -4992,7 +4991,6 @@ def set_axis(
...
# error: Signature of "set_axis" incompatible with supertype "NDFrame"
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self", "labels"])
@Appender(
"""
Examples
@@ -5021,6 +5019,7 @@ def set_axis(
def set_axis( # type: ignore[override]
self,
labels,
+ *,
axis: Axis = 0,
inplace: bool | lib.NoDefault = lib.no_default,
copy: bool | lib.NoDefault = lib.no_default,
@@ -5313,9 +5312,6 @@ def replace(
...
# error: Signature of "replace" incompatible with supertype "NDFrame"
- @deprecate_nonkeyword_arguments(
- version=None, allowed_args=["self", "to_replace", "value"]
- )
@doc(
NDFrame.replace,
klass=_shared_doc_kwargs["klass"],
@@ -5326,6 +5322,7 @@ def replace( # type: ignore[override]
self,
to_replace=None,
value=lib.no_default,
+ *,
inplace: bool = False,
limit: int | None = None,
regex: bool = False,
@@ -5945,10 +5942,9 @@ def ffill(
) -> Series | None:
...
- # error: Signature of "ffill" incompatible with supertype "NDFrame"
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self"])
- def ffill( # type: ignore[override]
+ def ffill(
self,
+ *,
axis: None | Axis = None,
inplace: bool = False,
limit: None | int = None,
@@ -5989,10 +5985,9 @@ def bfill(
) -> Series | None:
...
- # error: Signature of "bfill" incompatible with supertype "NDFrame"
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self"])
- def bfill( # type: ignore[override]
+ def bfill(
self,
+ *,
axis: None | Axis = None,
inplace: bool = False,
limit: None | int = None,
@@ -6000,19 +5995,16 @@ def bfill( # type: ignore[override]
) -> Series | None:
return super().bfill(axis=axis, inplace=inplace, limit=limit, downcast=downcast)
- @deprecate_nonkeyword_arguments(
- version=None, allowed_args=["self", "lower", "upper"]
- )
def clip(
self: Series,
lower=None,
upper=None,
+ *,
axis: Axis | None = None,
inplace: bool = False,
- *args,
**kwargs,
) -> Series | None:
- return super().clip(lower, upper, axis, inplace, *args, **kwargs)
+ return super().clip(lower, upper, axis=axis, inplace=inplace, **kwargs)
@deprecate_nonkeyword_arguments(version=None, allowed_args=["self", "method"])
def interpolate(
@@ -6078,13 +6070,11 @@ def where(
# error: Signature of "where" incompatible with supertype "NDFrame"
@deprecate_kwarg(old_arg_name="errors", new_arg_name=None)
- @deprecate_nonkeyword_arguments(
- version=None, allowed_args=["self", "cond", "other"]
- )
def where( # type: ignore[override]
self,
cond,
other=lib.no_default,
+ *,
inplace: bool = False,
axis: Axis | None = None,
level: Level = None,
@@ -6139,13 +6129,11 @@ def mask(
# error: Signature of "mask" incompatible with supertype "NDFrame"
@deprecate_kwarg(old_arg_name="errors", new_arg_name=None)
- @deprecate_nonkeyword_arguments(
- version=None, allowed_args=["self", "cond", "other"]
- )
def mask( # type: ignore[override]
self,
cond,
other=lib.no_default,
+ *,
inplace: bool = False,
axis: Axis | None = None,
level: Level = None,
diff --git a/pandas/tests/frame/indexing/test_mask.py b/pandas/tests/frame/indexing/test_mask.py
index 3a31123da7679..e8a49ab868425 100644
--- a/pandas/tests/frame/indexing/test_mask.py
+++ b/pandas/tests/frame/indexing/test_mask.py
@@ -92,23 +92,6 @@ def test_mask_dtype_bool_conversion(self):
result = bools.mask(mask)
tm.assert_frame_equal(result, expected)
- def test_mask_pos_args_deprecation(self, frame_or_series):
- # https://github.com/pandas-dev/pandas/issues/41485
- obj = DataFrame({"a": range(5)})
- expected = DataFrame({"a": [-1, 1, -1, 3, -1]})
- obj = tm.get_obj(obj, frame_or_series)
- expected = tm.get_obj(expected, frame_or_series)
-
- cond = obj % 2 == 0
- msg = (
- r"In a future version of pandas all arguments of "
- f"{frame_or_series.__name__}.mask except for "
- r"the arguments 'cond' and 'other' will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = obj.mask(cond, -1, False)
- tm.assert_equal(result, expected)
-
def test_mask_stringdtype(frame_or_series):
# GH 40824
diff --git a/pandas/tests/frame/indexing/test_where.py b/pandas/tests/frame/indexing/test_where.py
index ea559230d1595..1212659ea24e2 100644
--- a/pandas/tests/frame/indexing/test_where.py
+++ b/pandas/tests/frame/indexing/test_where.py
@@ -863,20 +863,6 @@ def test_where_duplicate_axes_mixed_dtypes():
tm.assert_frame_equal(c.astype("f8"), d.astype("f8"))
-def test_where_non_keyword_deprecation(frame_or_series):
- # GH 41485
- obj = frame_or_series(range(5))
- msg = (
- "In a future version of pandas all arguments of "
- f"{frame_or_series.__name__}.where except for the arguments 'cond' "
- "and 'other' will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = obj.where(obj > 1, 10, False)
- expected = frame_or_series([10, 10, 2, 3, 4])
- tm.assert_equal(expected, result)
-
-
def test_where_columns_casting():
# GH 42295
diff --git a/pandas/tests/frame/methods/test_clip.py b/pandas/tests/frame/methods/test_clip.py
index c851e65a7ad4f..f8d9adf44dbc2 100644
--- a/pandas/tests/frame/methods/test_clip.py
+++ b/pandas/tests/frame/methods/test_clip.py
@@ -164,15 +164,3 @@ def test_clip_with_na_args(self, float_frame):
result = df.clip(lower=t, axis=0)
expected = DataFrame({"col_0": [9, -3, 0, 6, 5], "col_1": [2, -4, 6, 8, 3]})
tm.assert_frame_equal(result, expected)
-
- def test_clip_pos_args_deprecation(self):
- # https://github.com/pandas-dev/pandas/issues/41485
- df = DataFrame({"a": [1, 2, 3]})
- msg = (
- r"In a future version of pandas all arguments of DataFrame.clip except "
- r"for the arguments 'lower' and 'upper' will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = df.clip(0, 1, 0)
- expected = DataFrame({"a": [1, 1, 1]})
- tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_fillna.py b/pandas/tests/frame/methods/test_fillna.py
index ccd564b46cffa..869cd32aa9ef9 100644
--- a/pandas/tests/frame/methods/test_fillna.py
+++ b/pandas/tests/frame/methods/test_fillna.py
@@ -399,18 +399,6 @@ def test_ffill(self, datetime_frame):
datetime_frame.ffill(), datetime_frame.fillna(method="ffill")
)
- def test_ffill_pos_args_deprecation(self):
- # https://github.com/pandas-dev/pandas/issues/41485
- df = DataFrame({"a": [1, 2, 3]})
- msg = (
- r"In a future version of pandas all arguments of DataFrame.ffill "
- r"will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = df.ffill(0)
- expected = DataFrame({"a": [1, 2, 3]})
- tm.assert_frame_equal(result, expected)
-
def test_bfill(self, datetime_frame):
datetime_frame["A"][:5] = np.nan
datetime_frame["A"][-5:] = np.nan
@@ -419,18 +407,6 @@ def test_bfill(self, datetime_frame):
datetime_frame.bfill(), datetime_frame.fillna(method="bfill")
)
- def test_bfill_pos_args_deprecation(self):
- # https://github.com/pandas-dev/pandas/issues/41485
- df = DataFrame({"a": [1, 2, 3]})
- msg = (
- r"In a future version of pandas all arguments of DataFrame.bfill "
- r"will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = df.bfill(0)
- expected = DataFrame({"a": [1, 2, 3]})
- tm.assert_frame_equal(result, expected)
-
def test_frame_pad_backfill_limit(self):
index = np.arange(10)
df = DataFrame(np.random.randn(10, 4), index=index)
diff --git a/pandas/tests/frame/methods/test_set_axis.py b/pandas/tests/frame/methods/test_set_axis.py
index f105a38e6fdd0..8e597e1e9fa69 100644
--- a/pandas/tests/frame/methods/test_set_axis.py
+++ b/pandas/tests/frame/methods/test_set_axis.py
@@ -168,26 +168,3 @@ class TestSeriesSetAxis(SharedSetAxisTests):
def obj(self):
ser = Series(np.arange(4), index=[1, 3, 5, 7], dtype="int64")
return ser
-
-
-def test_nonkeyword_arguments_deprecation_warning():
- # https://github.com/pandas-dev/pandas/issues/41485
- df = DataFrame({"a": [1, 2, 3]})
- msg = (
- r"In a future version of pandas all arguments of DataFrame\.set_axis "
- r"except for the argument 'labels' will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = df.set_axis([1, 2, 4], 0)
- expected = DataFrame({"a": [1, 2, 3]}, index=[1, 2, 4])
- tm.assert_frame_equal(result, expected)
-
- ser = Series([1, 2, 3])
- msg = (
- r"In a future version of pandas all arguments of Series\.set_axis "
- r"except for the argument 'labels' will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = ser.set_axis([1, 2, 4], 0)
- expected = Series([1, 2, 3], index=[1, 2, 4])
- tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_sort_values.py b/pandas/tests/frame/methods/test_sort_values.py
index 51b590263f893..d7f1d900db052 100644
--- a/pandas/tests/frame/methods/test_sort_values.py
+++ b/pandas/tests/frame/methods/test_sort_values.py
@@ -860,18 +860,6 @@ def test_sort_column_level_and_index_label(
tm.assert_frame_equal(result, expected)
- def test_sort_values_pos_args_deprecation(self):
- # https://github.com/pandas-dev/pandas/issues/41485
- df = DataFrame({"a": [1, 2, 3]})
- msg = (
- r"In a future version of pandas all arguments of DataFrame\.sort_values "
- r"except for the argument 'by' will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = df.sort_values("a", 0)
- expected = DataFrame({"a": [1, 2, 3]})
- tm.assert_frame_equal(result, expected)
-
def test_sort_values_validate_ascending_for_value_error(self):
# GH41634
df = DataFrame({"D": [23, 7, 21]})
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 566b2e4cd9353..368e9d5f6e6a1 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -816,8 +816,7 @@ def test_big_dates(self, datapath):
# {c : c[-2:] for c in columns}
with tm.ensure_clean() as path:
expected.index.name = "index"
- with tm.assert_produces_warning(FutureWarning, match="keyword-only"):
- expected.to_stata(path, date_conversion)
+ expected.to_stata(path, convert_dates=date_conversion)
written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(
written_and_read_again.set_index("index"),
diff --git a/pandas/tests/series/methods/test_clip.py b/pandas/tests/series/methods/test_clip.py
index bc6d5aeb0a581..b123e8a12a852 100644
--- a/pandas/tests/series/methods/test_clip.py
+++ b/pandas/tests/series/methods/test_clip.py
@@ -137,15 +137,3 @@ def test_clip_with_timestamps_and_oob_datetimes(self):
expected = Series([Timestamp.min, Timestamp.max], dtype="object")
tm.assert_series_equal(result, expected)
-
- def test_clip_pos_args_deprecation(self):
- # https://github.com/pandas-dev/pandas/issues/41485
- ser = Series([1, 2, 3])
- msg = (
- r"In a future version of pandas all arguments of Series.clip except "
- r"for the arguments 'lower' and 'upper' will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = ser.clip(0, 1, 0)
- expected = Series([1, 1, 1])
- tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/series/methods/test_fillna.py b/pandas/tests/series/methods/test_fillna.py
index ac12b513aad4e..26416c7a2b483 100644
--- a/pandas/tests/series/methods/test_fillna.py
+++ b/pandas/tests/series/methods/test_fillna.py
@@ -888,18 +888,6 @@ def test_ffill(self):
ts[2] = np.NaN
tm.assert_series_equal(ts.ffill(), ts.fillna(method="ffill"))
- def test_ffill_pos_args_deprecation(self):
- # https://github.com/pandas-dev/pandas/issues/41485
- ser = Series([1, 2, 3])
- msg = (
- r"In a future version of pandas all arguments of Series.ffill "
- r"will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = ser.ffill(0)
- expected = Series([1, 2, 3])
- tm.assert_series_equal(result, expected)
-
def test_ffill_mixed_dtypes_without_missing_data(self):
# GH#14956
series = Series([datetime(2015, 1, 1, tzinfo=pytz.utc), 1])
@@ -911,18 +899,6 @@ def test_bfill(self):
ts[2] = np.NaN
tm.assert_series_equal(ts.bfill(), ts.fillna(method="bfill"))
- def test_bfill_pos_args_deprecation(self):
- # https://github.com/pandas-dev/pandas/issues/41485
- ser = Series([1, 2, 3])
- msg = (
- r"In a future version of pandas all arguments of Series.bfill "
- r"will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = ser.bfill(0)
- expected = Series([1, 2, 3])
- tm.assert_series_equal(result, expected)
-
def test_pad_nan(self):
x = Series(
[np.nan, 1.0, np.nan, 3.0, np.nan], ["z", "a", "b", "c", "d"], dtype=float
diff --git a/pandas/tests/series/methods/test_sort_values.py b/pandas/tests/series/methods/test_sort_values.py
index adc578d948163..b5f589b3b2514 100644
--- a/pandas/tests/series/methods/test_sort_values.py
+++ b/pandas/tests/series/methods/test_sort_values.py
@@ -187,18 +187,6 @@ def test_sort_values_ignore_index(
tm.assert_series_equal(result_ser, expected)
tm.assert_series_equal(ser, Series(original_list))
- def test_sort_values_pos_args_deprecation(self):
- # https://github.com/pandas-dev/pandas/issues/41485
- ser = Series([1, 2, 3])
- msg = (
- r"In a future version of pandas all arguments of Series\.sort_values "
- r"will be keyword-only"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = ser.sort_values(0)
- expected = Series([1, 2, 3])
- tm.assert_series_equal(result, expected)
-
def test_mergesort_decending_stability(self):
# GH 28697
s = Series([1, 2, 1, 3], ["first", "b", "second", "c"])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49259 | 2022-10-23T03:22:02Z | 2022-10-24T20:14:26Z | 2022-10-24T20:14:26Z | 2022-10-24T20:22:27Z |
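The pattern applied throughout the diff in this record is Python's bare `*` marker in a signature, which makes every parameter after it keyword-only, so the `deprecate_nonkeyword_arguments` decorator can be dropped. A standalone sketch (the function below is a toy with the same shape as the enforced signatures, not the real pandas method):

```python
def clip(lower=None, upper=None, *, axis=None, inplace=False):
    # Everything after the bare * must be passed by keyword.
    return (lower, upper, axis, inplace)

clip(0, 1)           # OK: lower and upper stay positional
clip(0, 1, axis=0)   # OK: axis passed by keyword

try:
    clip(0, 1, 0)    # axis passed positionally
except TypeError:
    # Raises instead of warning, which is the enforcement this PR performs.
    pass
```

This is why the removed tests asserting a `FutureWarning` are deleted rather than updated: the positional call now fails outright with a `TypeError`.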
DEPR: drop setting categorical codes | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 0880b8e2cac12..0d6660b4a27f1 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -198,6 +198,7 @@ Removal of prior version deprecations/changes
- Removed ``pandas.SparseArray`` in favor of :class:`arrays.SparseArray` (:issue:`30642`)
- Removed ``pandas.SparseSeries`` and ``pandas.SparseDataFrame`` (:issue:`30642`)
- Enforced disallowing a string column label into ``times`` in :meth:`DataFrame.ewm` (:issue:`43265`)
+- Removed setting Categorical._codes directly (:issue:`41429`)
- Enforced :meth:`Rolling.count` with ``min_periods=None`` to default to the size of the window (:issue:`31302`)
-
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 48371b7f14b28..15a7c8d52b724 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2059,16 +2059,6 @@ def _values_for_rank(self):
def _codes(self) -> np.ndarray:
return self._ndarray
- @_codes.setter
- def _codes(self, value: np.ndarray) -> None:
- warn(
- "Setting the codes on a Categorical is deprecated and will raise in "
- "a future version. Create a new Categorical object instead",
- FutureWarning,
- stacklevel=find_stack_level(),
- ) # GH#40606
- NDArrayBacked.__init__(self, value, self.dtype)
-
def _box_func(self, i: int):
if i == -1:
return np.NaN
diff --git a/pandas/tests/arrays/categorical/test_api.py b/pandas/tests/arrays/categorical/test_api.py
index e3fd73aaa9b1c..03bd1c522838d 100644
--- a/pandas/tests/arrays/categorical/test_api.py
+++ b/pandas/tests/arrays/categorical/test_api.py
@@ -516,15 +516,6 @@ def test_set_categories_inplace(self, factor):
tm.assert_index_equal(cat.categories, Index(["a", "b", "c", "d"]))
- def test_codes_setter_deprecated(self):
- cat = Categorical([1, 2, 3, 1, 2, 3, 3, 2, 1, 1, 1])
- new_codes = cat._codes + 1
- with tm.assert_produces_warning(FutureWarning):
- # GH#40606
- cat._codes = new_codes
-
- assert cat._codes is new_codes
-
class TestPrivateCategoricalAPI:
def test_codes_immutable(self):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49258 | 2022-10-23T02:08:55Z | 2022-10-24T20:30:14Z | 2022-10-24T20:30:14Z | 2022-10-25T05:03:54Z |
DEPR: enforce deprecation of Categorical.replace | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 0880b8e2cac12..46fbb41a86d2f 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -187,6 +187,7 @@ Removal of prior version deprecations/changes
- Removed :attr:`Rolling.win_type` returning ``"freq"`` (:issue:`38963`)
- Removed :attr:`Rolling.is_datetimelike` (:issue:`38963`)
- Removed deprecated :meth:`Timedelta.delta`, :meth:`Timedelta.is_populated`, and :attr:`Timedelta.freq` (:issue:`46430`, :issue:`46476`)
+- Removed deprecated :meth:`Categorical.replace`, use :meth:`Series.replace` instead (:issue:`44929`)
- Removed the ``numeric_only`` keyword from :meth:`Categorical.min` and :meth:`Categorical.max` in favor of ``skipna`` (:issue:`48821`)
- Removed :func:`is_extension_type` in favor of :func:`is_extension_array_dtype` (:issue:`29457`)
- Removed :meth:`Index.get_value` (:issue:`33907`)
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 074b6068a4518..d9f481d843b37 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -154,7 +154,6 @@ def pytest_collection_modifyitems(items, config) -> None:
("DataFrame.append", "The frame.append method is deprecated"),
("Series.append", "The series.append method is deprecated"),
("dtypes.common.is_categorical", "is_categorical is deprecated"),
- ("Categorical.replace", "Categorical.replace is deprecated"),
("MultiIndex._is_lexsorted", "MultiIndex.is_lexsorted is deprecated"),
# Docstring divides by zero to show behavior difference
("missing.mask_zero_div_zero", "divide by zero encountered"),
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 48371b7f14b28..adeb53feaf5b6 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2561,53 +2561,6 @@ def isin(self, values) -> npt.NDArray[np.bool_]:
code_values = code_values[null_mask | (code_values >= 0)]
return algorithms.isin(self.codes, code_values)
- @overload
- def replace(
- self, to_replace, value, *, inplace: Literal[False] = ...
- ) -> Categorical:
- ...
-
- @overload
- def replace(self, to_replace, value, *, inplace: Literal[True]) -> None:
- ...
-
- @deprecate_nonkeyword_arguments(version=None, allowed_args=["self", "value"])
- def replace(self, to_replace, value, inplace: bool = False) -> Categorical | None:
- """
- Replaces all instances of one value with another
-
- Parameters
- ----------
- to_replace: object
- The value to be replaced
-
- value: object
- The value to replace it with
-
- inplace: bool
- Whether the operation is done in-place
-
- Returns
- -------
- None if inplace is True, otherwise the new Categorical after replacement
-
-
- Examples
- --------
- >>> s = pd.Categorical([1, 2, 1, 3])
- >>> s.replace(1, 3)
- [3, 2, 3, 3]
- Categories (2, int64): [2, 3]
- """
- # GH#44929 deprecation
- warn(
- "Categorical.replace is deprecated and will be removed in a future "
- "version. Use Series.replace directly instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self._replace(to_replace=to_replace, value=value, inplace=inplace)
-
def _replace(self, *, to_replace, value, inplace: bool = False):
inplace = validate_bool_kwarg(inplace, "inplace")
cat = self if inplace else self.copy()
diff --git a/pandas/tests/arrays/categorical/test_replace.py b/pandas/tests/arrays/categorical/test_replace.py
index a50b1eddd99be..a3ba420c84a17 100644
--- a/pandas/tests/arrays/categorical/test_replace.py
+++ b/pandas/tests/arrays/categorical/test_replace.py
@@ -55,9 +55,7 @@ def test_replace_categorical(to_replace, value, result, expected_error_msg):
# GH#26988
cat = Categorical(["a", "b"])
expected = Categorical(result)
- with tm.assert_produces_warning(FutureWarning, match="Series.replace"):
- # GH#44929 replace->_replace
- result = cat.replace(to_replace, value)
+ result = pd.Series(cat).replace(to_replace, value)._values
tm.assert_categorical_equal(result, expected)
if to_replace == "b": # the "c" test is supposed to be unchanged
@@ -65,8 +63,5 @@ def test_replace_categorical(to_replace, value, result, expected_error_msg):
# ensure non-inplace call does not affect original
tm.assert_categorical_equal(cat, expected)
- with tm.assert_produces_warning(FutureWarning, match="Series.replace"):
- # GH#44929 replace->_replace
- cat.replace(to_replace, value, inplace=True)
-
+ pd.Series(cat).replace(to_replace, value, inplace=True)
tm.assert_categorical_equal(cat, expected)
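For orientation (not part of the diff): the migration path the whatsnew entry describes — `Series.replace` in place of the removed `Categorical.replace` — mirrors the updated test above and can be sketched as:

```python
import pandas as pd

cat = pd.Categorical(["a", "b"])
# The former cat.replace("b", "c") becomes a round-trip through Series:
ser = pd.Series(cat).replace("b", "c")
print(list(ser))  # ['a', 'c']
```

Note that newer pandas versions may emit warnings about `replace` on categorical data, but the replaced values come back the same.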
| - [x] Added an entry in the latest `doc/source/whatsnew/v2.0.0.rst` file if fixing a bug or adding a new feature.
Deprecation added in #44929 | https://api.github.com/repos/pandas-dev/pandas/pulls/49255 | 2022-10-22T22:55:14Z | 2022-10-23T17:27:42Z | 2022-10-23T17:27:42Z | 2022-10-26T10:17:55Z |
DEPR: is_categorical | diff --git a/doc/redirects.csv b/doc/redirects.csv
index 8a55c48996e84..35bf56b9b824c 100644
--- a/doc/redirects.csv
+++ b/doc/redirects.csv
@@ -101,7 +101,6 @@ generated/pandas.api.types.infer_dtype,../reference/api/pandas.api.types.infer_d
generated/pandas.api.types.is_bool_dtype,../reference/api/pandas.api.types.is_bool_dtype
generated/pandas.api.types.is_bool,../reference/api/pandas.api.types.is_bool
generated/pandas.api.types.is_categorical_dtype,../reference/api/pandas.api.types.is_categorical_dtype
-generated/pandas.api.types.is_categorical,../reference/api/pandas.api.types.is_categorical
generated/pandas.api.types.is_complex_dtype,../reference/api/pandas.api.types.is_complex_dtype
generated/pandas.api.types.is_complex,../reference/api/pandas.api.types.is_complex
generated/pandas.api.types.is_datetime64_any_dtype,../reference/api/pandas.api.types.is_datetime64_any_dtype
diff --git a/doc/source/reference/arrays.rst b/doc/source/reference/arrays.rst
index ad12ade41589e..17510a0b7d479 100644
--- a/doc/source/reference/arrays.rst
+++ b/doc/source/reference/arrays.rst
@@ -659,7 +659,6 @@ Scalar introspection
:toctree: api/
api.types.is_bool
- api.types.is_categorical
api.types.is_complex
api.types.is_float
api.types.is_hashable
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 2ba3bd9470ff3..3e23108e28676 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -164,6 +164,7 @@ Removal of prior version deprecations/changes
- Removed deprecated :meth:`CategoricalIndex.take_nd` (:issue:`30702`)
- Removed deprecated :meth:`Index.is_type_compatible` (:issue:`42113`)
- Removed deprecated :meth:`Index.is_mixed`, check ``index.inferred_type`` directly instead (:issue:`32922`)
+- Removed deprecated :func:`pandas.api.types.is_categorical`; use :func:`pandas.api.types.is_categorical_dtype` instead (:issue:`33385`)
- Removed deprecated :meth:`Index.asi8` (:issue:`37877`)
- Enforced deprecation disallowing passing a timezone-aware :class:`Timestamp` and ``dtype="datetime64[ns]"`` to :class:`Series` or :class:`DataFrame` constructors (:issue:`41555`)
- Enforced deprecation disallowing passing a sequence of timezone-aware values and ``dtype="datetime64[ns]"`` to :class:`Series` or :class:`DataFrame` constructors (:issue:`41555`)
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 24d878cf1cde9..94db244e19945 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -153,7 +153,6 @@ def pytest_collection_modifyitems(items, config) -> None:
# Deprecations where the docstring will emit a warning
("DataFrame.append", "The frame.append method is deprecated"),
("Series.append", "The series.append method is deprecated"),
- ("dtypes.common.is_categorical", "is_categorical is deprecated"),
# Docstring divides by zero to show behavior difference
("missing.mask_zero_div_zero", "divide by zero encountered"),
# Docstring demonstrates the call raises a warning
diff --git a/pandas/core/dtypes/api.py b/pandas/core/dtypes/api.py
index 0456073eaa7c6..00300c5c74e51 100644
--- a/pandas/core/dtypes/api.py
+++ b/pandas/core/dtypes/api.py
@@ -2,7 +2,6 @@
is_array_like,
is_bool,
is_bool_dtype,
- is_categorical,
is_categorical_dtype,
is_complex,
is_complex_dtype,
@@ -45,7 +44,6 @@
"is_array_like",
"is_bool",
"is_bool_dtype",
- "is_categorical",
"is_categorical_dtype",
"is_complex",
"is_complex_dtype",
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 61931b2a94490..3c2aa1f6bab5d 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -7,7 +7,6 @@
Any,
Callable,
)
-import warnings
import numpy as np
@@ -22,7 +21,6 @@
ArrayLike,
DtypeObj,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.base import _registry as registry
from pandas.core.dtypes.dtypes import (
@@ -32,10 +30,7 @@
IntervalDtype,
PeriodDtype,
)
-from pandas.core.dtypes.generic import (
- ABCCategorical,
- ABCIndex,
-)
+from pandas.core.dtypes.generic import ABCIndex
from pandas.core.dtypes.inference import (
is_array_like,
is_bool,
@@ -275,47 +270,6 @@ def is_scipy_sparse(arr) -> bool:
return _is_scipy_sparse(arr)
-def is_categorical(arr) -> bool:
- """
- Check whether an array-like is a Categorical instance.
-
- .. deprecated:: 1.1.0
- Use ``is_categorical_dtype`` instead.
-
- Parameters
- ----------
- arr : array-like
- The array-like to check.
-
- Returns
- -------
- boolean
- Whether or not the array-like is of a Categorical instance.
-
- Examples
- --------
- >>> is_categorical([1, 2, 3])
- False
-
- Categoricals, Series Categoricals, and CategoricalIndex will return True.
-
- >>> cat = pd.Categorical([1, 2, 3])
- >>> is_categorical(cat)
- True
- >>> is_categorical(pd.Series(cat))
- True
- >>> is_categorical(pd.CategoricalIndex([1, 2, 3]))
- True
- """
- warnings.warn(
- "is_categorical is deprecated and will be removed in a future version. "
- "Use is_categorical_dtype instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return isinstance(arr, ABCCategorical) or is_categorical_dtype(arr)
-
-
def is_datetime64_dtype(arr_or_dtype) -> bool:
"""
Check whether an array-like or dtype is of the datetime64 dtype.
@@ -1772,7 +1726,6 @@ def is_all_strings(value: ArrayLike) -> bool:
"is_array_like",
"is_bool",
"is_bool_dtype",
- "is_categorical",
"is_categorical_dtype",
"is_complex",
"is_complex_dtype",
diff --git a/pandas/tests/api/test_types.py b/pandas/tests/api/test_types.py
index 80eb9a2593f40..8c729fd19cbc7 100644
--- a/pandas/tests/api/test_types.py
+++ b/pandas/tests/api/test_types.py
@@ -10,7 +10,6 @@ class TestTypes(Base):
allowed = [
"is_bool",
"is_bool_dtype",
- "is_categorical",
"is_categorical_dtype",
"is_complex",
"is_complex_dtype",
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 9f1ad2840ec87..589e2e04d668a 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -208,22 +208,6 @@ def test_is_scipy_sparse():
assert not com.is_scipy_sparse(SparseArray([1, 2, 3]))
-def test_is_categorical():
- cat = pd.Categorical([1, 2, 3])
- with tm.assert_produces_warning(FutureWarning):
- assert com.is_categorical(cat)
- assert com.is_categorical(pd.Series(cat))
- assert com.is_categorical(pd.CategoricalIndex([1, 2, 3]))
-
- assert not com.is_categorical([1, 2, 3])
-
-
-def test_is_categorical_deprecation():
- # GH#33385
- with tm.assert_produces_warning(FutureWarning):
- com.is_categorical([1, 2, 3])
-
-
def test_is_datetime64_dtype():
assert not com.is_datetime64_dtype(object)
assert not com.is_datetime64_dtype([1, 2, 3])
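As an aside (not part of this diff): the replacement named in the whatsnew entry, `is_categorical_dtype`, covers the same cases the removed `is_categorical` tests exercised — though it may itself emit deprecation warnings on much newer pandas versions:

```python
import pandas as pd
from pandas.api.types import is_categorical_dtype

cat = pd.Categorical([1, 2, 3])
# Categoricals, categorical Series, and CategoricalIndex all match:
assert is_categorical_dtype(cat)
assert is_categorical_dtype(pd.Series(cat))
assert is_categorical_dtype(pd.CategoricalIndex([1, 2, 3]))
# A plain list does not:
assert not is_categorical_dtype([1, 2, 3])
```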
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py
index 7f6ec8b328c87..d054fb59d8561 100644
--- a/pandas/tests/dtypes/test_dtypes.py
+++ b/pandas/tests/dtypes/test_dtypes.py
@@ -9,7 +9,6 @@
from pandas.core.dtypes.base import _registry as registry
from pandas.core.dtypes.common import (
is_bool_dtype,
- is_categorical,
is_categorical_dtype,
is_datetime64_any_dtype,
is_datetime64_dtype,
@@ -179,13 +178,6 @@ def test_basic(self, dtype):
assert is_categorical_dtype(s)
assert not is_categorical_dtype(np.dtype("float64"))
- with tm.assert_produces_warning(FutureWarning):
- # GH#33385 deprecated
- assert is_categorical(s.dtype)
- assert is_categorical(s)
- assert not is_categorical(np.dtype("float64"))
- assert not is_categorical(1.0)
-
def test_tuple_categories(self):
categories = [(1, "a"), (2, "b"), (3, "c")]
result = CategoricalDtype(categories)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49253 | 2022-10-22T22:46:33Z | 2022-10-24T05:49:05Z | 2022-10-24T05:49:05Z | 2022-10-24T14:31:51Z |
DEPR: Enforce deprecation of mad and tshift | diff --git a/asv_bench/benchmarks/groupby.py b/asv_bench/benchmarks/groupby.py
index 3007d2d1e126c..97f48a3a6f69f 100644
--- a/asv_bench/benchmarks/groupby.py
+++ b/asv_bench/benchmarks/groupby.py
@@ -35,7 +35,6 @@
"pct_change",
"min",
"var",
- "mad",
"describe",
"std",
"quantile",
@@ -52,7 +51,6 @@
"cummax",
"pct_change",
"var",
- "mad",
"describe",
"std",
},
@@ -437,7 +435,6 @@ class GroupByMethods:
"first",
"head",
"last",
- "mad",
"max",
"min",
"median",
@@ -483,7 +480,7 @@ def setup(self, dtype, method, application, ncols):
if method == "describe":
ngroups = 20
- elif method in ["mad", "skew"]:
+ elif method == "skew":
ngroups = 100
else:
ngroups = 1000
diff --git a/asv_bench/benchmarks/stat_ops.py b/asv_bench/benchmarks/stat_ops.py
index 92a78b7c2f63d..19fa7f7a06cf2 100644
--- a/asv_bench/benchmarks/stat_ops.py
+++ b/asv_bench/benchmarks/stat_ops.py
@@ -2,7 +2,7 @@
import pandas as pd
-ops = ["mean", "sum", "median", "std", "skew", "kurt", "mad", "prod", "sem", "var"]
+ops = ["mean", "sum", "median", "std", "skew", "kurt", "prod", "sem", "var"]
class FrameOps:
@@ -11,9 +11,6 @@ class FrameOps:
param_names = ["op", "dtype", "axis"]
def setup(self, op, dtype, axis):
- if op == "mad" and dtype == "Int64":
- # GH-33036, GH#33600
- raise NotImplementedError
values = np.random.randn(100000, 4)
if dtype == "Int64":
values = values.astype(int)
diff --git a/doc/redirects.csv b/doc/redirects.csv
index 42f91a8b9884f..8a55c48996e84 100644
--- a/doc/redirects.csv
+++ b/doc/redirects.csv
@@ -186,7 +186,6 @@ generated/pandas.core.groupby.DataFrameGroupBy.filter,../reference/api/pandas.co
generated/pandas.core.groupby.DataFrameGroupBy.hist,../reference/api/pandas.core.groupby.DataFrameGroupBy.hist
generated/pandas.core.groupby.DataFrameGroupBy.idxmax,../reference/api/pandas.core.groupby.DataFrameGroupBy.idxmax
generated/pandas.core.groupby.DataFrameGroupBy.idxmin,../reference/api/pandas.core.groupby.DataFrameGroupBy.idxmin
-generated/pandas.core.groupby.DataFrameGroupBy.mad,../reference/api/pandas.core.groupby.DataFrameGroupBy.mad
generated/pandas.core.groupby.DataFrameGroupBy.pct_change,../reference/api/pandas.core.groupby.DataFrameGroupBy.pct_change
generated/pandas.core.groupby.DataFrameGroupBy.plot,../reference/api/pandas.core.groupby.DataFrameGroupBy.plot
generated/pandas.core.groupby.DataFrameGroupBy.quantile,../reference/api/pandas.core.groupby.DataFrameGroupBy.quantile
@@ -196,7 +195,6 @@ generated/pandas.core.groupby.DataFrameGroupBy.shift,../reference/api/pandas.cor
generated/pandas.core.groupby.DataFrameGroupBy.size,../reference/api/pandas.core.groupby.DataFrameGroupBy.size
generated/pandas.core.groupby.DataFrameGroupBy.skew,../reference/api/pandas.core.groupby.DataFrameGroupBy.skew
generated/pandas.core.groupby.DataFrameGroupBy.take,../reference/api/pandas.core.groupby.DataFrameGroupBy.take
-generated/pandas.core.groupby.DataFrameGroupBy.tshift,../reference/api/pandas.core.groupby.DataFrameGroupBy.tshift
generated/pandas.core.groupby.GroupBy.agg,../reference/api/pandas.core.groupby.GroupBy.agg
generated/pandas.core.groupby.GroupBy.aggregate,../reference/api/pandas.core.groupby.GroupBy.aggregate
generated/pandas.core.groupby.GroupBy.all,../reference/api/pandas.core.groupby.GroupBy.all
@@ -415,7 +413,6 @@ generated/pandas.DataFrame.le,../reference/api/pandas.DataFrame.le
generated/pandas.DataFrame.loc,../reference/api/pandas.DataFrame.loc
generated/pandas.DataFrame.lookup,../reference/api/pandas.DataFrame.lookup
generated/pandas.DataFrame.lt,../reference/api/pandas.DataFrame.lt
-generated/pandas.DataFrame.mad,../reference/api/pandas.DataFrame.mad
generated/pandas.DataFrame.mask,../reference/api/pandas.DataFrame.mask
generated/pandas.DataFrame.max,../reference/api/pandas.DataFrame.max
generated/pandas.DataFrame.mean,../reference/api/pandas.DataFrame.mean
@@ -528,7 +525,6 @@ generated/pandas.DataFrame.transform,../reference/api/pandas.DataFrame.transform
generated/pandas.DataFrame.transpose,../reference/api/pandas.DataFrame.transpose
generated/pandas.DataFrame.truediv,../reference/api/pandas.DataFrame.truediv
generated/pandas.DataFrame.truncate,../reference/api/pandas.DataFrame.truncate
-generated/pandas.DataFrame.tshift,../reference/api/pandas.DataFrame.tshift
generated/pandas.DataFrame.tz_convert,../reference/api/pandas.DataFrame.tz_convert
generated/pandas.DataFrame.tz_localize,../reference/api/pandas.DataFrame.tz_localize
generated/pandas.DataFrame.unstack,../reference/api/pandas.DataFrame.unstack
@@ -1097,7 +1093,6 @@ generated/pandas.Series.last_valid_index,../reference/api/pandas.Series.last_val
generated/pandas.Series.le,../reference/api/pandas.Series.le
generated/pandas.Series.loc,../reference/api/pandas.Series.loc
generated/pandas.Series.lt,../reference/api/pandas.Series.lt
-generated/pandas.Series.mad,../reference/api/pandas.Series.mad
generated/pandas.Series.map,../reference/api/pandas.Series.map
generated/pandas.Series.mask,../reference/api/pandas.Series.mask
generated/pandas.Series.max,../reference/api/pandas.Series.max
@@ -1266,7 +1261,6 @@ generated/pandas.Series.transform,../reference/api/pandas.Series.transform
generated/pandas.Series.transpose,../reference/api/pandas.Series.transpose
generated/pandas.Series.truediv,../reference/api/pandas.Series.truediv
generated/pandas.Series.truncate,../reference/api/pandas.Series.truncate
-generated/pandas.Series.tshift,../reference/api/pandas.Series.tshift
generated/pandas.Series.tz_convert,../reference/api/pandas.Series.tz_convert
generated/pandas.Series.tz_localize,../reference/api/pandas.Series.tz_localize
generated/pandas.Series.unique,../reference/api/pandas.Series.unique
diff --git a/doc/source/reference/frame.rst b/doc/source/reference/frame.rst
index e71ee80767d29..cc38f6cc42972 100644
--- a/doc/source/reference/frame.rst
+++ b/doc/source/reference/frame.rst
@@ -151,7 +151,6 @@ Computations / descriptive stats
DataFrame.eval
DataFrame.kurt
DataFrame.kurtosis
- DataFrame.mad
DataFrame.max
DataFrame.mean
DataFrame.median
@@ -268,7 +267,6 @@ Time Series-related
DataFrame.asof
DataFrame.shift
DataFrame.slice_shift
- DataFrame.tshift
DataFrame.first_valid_index
DataFrame.last_valid_index
DataFrame.resample
diff --git a/doc/source/reference/groupby.rst b/doc/source/reference/groupby.rst
index 7c6bf485c0599..54b2e893bfd08 100644
--- a/doc/source/reference/groupby.rst
+++ b/doc/source/reference/groupby.rst
@@ -84,7 +84,6 @@ Function application
DataFrameGroupBy.idxmax
DataFrameGroupBy.idxmin
DataFrameGroupBy.last
- DataFrameGroupBy.mad
DataFrameGroupBy.max
DataFrameGroupBy.mean
DataFrameGroupBy.median
@@ -108,7 +107,6 @@ Function application
DataFrameGroupBy.var
DataFrameGroupBy.tail
DataFrameGroupBy.take
- DataFrameGroupBy.tshift
DataFrameGroupBy.value_counts
``SeriesGroupBy`` computations / descriptive stats
@@ -138,7 +136,6 @@ Function application
SeriesGroupBy.idxmin
SeriesGroupBy.is_monotonic_increasing
SeriesGroupBy.is_monotonic_decreasing
- SeriesGroupBy.mad
SeriesGroupBy.max
SeriesGroupBy.mean
SeriesGroupBy.median
@@ -165,7 +162,6 @@ Function application
SeriesGroupBy.var
SeriesGroupBy.tail
SeriesGroupBy.take
- SeriesGroupBy.tshift
SeriesGroupBy.value_counts
Plotting and visualization
diff --git a/doc/source/reference/series.rst b/doc/source/reference/series.rst
index fcdc9ea9b95da..0beac55c8b86c 100644
--- a/doc/source/reference/series.rst
+++ b/doc/source/reference/series.rst
@@ -148,7 +148,6 @@ Computations / descriptive stats
Series.diff
Series.factorize
Series.kurt
- Series.mad
Series.max
Series.mean
Series.median
@@ -269,7 +268,6 @@ Time Series-related
Series.tz_localize
Series.at_time
Series.between_time
- Series.tshift
Series.slice_shift
Accessors
diff --git a/doc/source/user_guide/basics.rst b/doc/source/user_guide/basics.rst
index a34d4891b9d77..0883113474f54 100644
--- a/doc/source/user_guide/basics.rst
+++ b/doc/source/user_guide/basics.rst
@@ -556,7 +556,6 @@ optional ``level`` parameter which applies only if the object has a
``count``, Number of non-NA observations
``sum``, Sum of values
``mean``, Mean of values
- ``mad``, Mean absolute deviation
``median``, Arithmetic median of values
``min``, Minimum
``max``, Maximum
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index e281e250d608e..cb7a694ea1d6f 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -196,7 +196,8 @@ Removal of prior version deprecations/changes
- Removed ``pandas.SparseSeries`` and ``pandas.SparseDataFrame`` (:issue:`30642`)
- Enforced disallowing a string column label into ``times`` in :meth:`DataFrame.ewm` (:issue:`43265`)
- Enforced :meth:`Rolling.count` with ``min_periods=None`` to default to the size of the window (:issue:`31302`)
--
+- Removed the deprecated method ``mad`` from pandas classes (:issue:`11787`)
+- Removed the deprecated method ``tshift`` from pandas classes (:issue:`11631`)
.. ---------------------------------------------------------------------------
.. _whatsnew_200.performance:
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index bbb954c1a4e80..4f9af2d0c01d6 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -557,7 +557,7 @@ def apply_str(self) -> DataFrame | Series:
sig = inspect.getfullargspec(func)
arg_names = (*sig.args, *sig.kwonlyargs)
if self.axis != 0 and (
- "axis" not in arg_names or f in ("corrwith", "mad", "skew")
+ "axis" not in arg_names or f in ("corrwith", "skew")
):
raise ValueError(f"Operation {f} does not support axis=1")
elif "axis" in arg_names:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7a1026d32d4f3..f46b429a5fc75 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -250,7 +250,7 @@ class NDFrame(PandasObject, indexing.IndexingMixin):
_internal_names_set: set[str] = set(_internal_names)
_accessors: set[str] = set()
_hidden_attrs: frozenset[str] = frozenset(
- ["_AXIS_NAMES", "_AXIS_NUMBERS", "get_values", "tshift"]
+ ["_AXIS_NAMES", "_AXIS_NUMBERS", "get_values"]
)
_metadata: list[str] = []
_is_copy: weakref.ReferenceType[NDFrame] | None = None
@@ -10122,8 +10122,6 @@ def shift(
Index.shift : Shift values of Index.
DatetimeIndex.shift : Shift values of DatetimeIndex.
PeriodIndex.shift : Shift values of PeriodIndex.
- tshift : Shift the time index, using the index's frequency if
- available.
Examples
--------
@@ -10272,49 +10270,6 @@ def slice_shift(self: NDFrameT, periods: int = 1, axis: Axis = 0) -> NDFrameT:
new_obj = new_obj.set_axis(shifted_axis, axis=axis, copy=False)
return new_obj.__finalize__(self, method="slice_shift")
- @final
- def tshift(self: NDFrameT, periods: int = 1, freq=None, axis: Axis = 0) -> NDFrameT:
- """
- Shift the time index, using the index's frequency if available.
-
- .. deprecated:: 1.1.0
- Use `shift` instead.
-
- Parameters
- ----------
- periods : int
- Number of periods to move, can be positive or negative.
- freq : DateOffset, timedelta, or str, default None
- Increment to use from the tseries module
- or time rule expressed as a string (e.g. 'EOM').
- axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0
- Corresponds to the axis that contains the Index.
- For `Series` this parameter is unused and defaults to 0.
-
- Returns
- -------
- shifted : Series/DataFrame
-
- Notes
- -----
- If freq is not specified then tries to use the freq or inferred_freq
- attributes of the index. If neither of those attributes exist, a
- ValueError is thrown
- """
- warnings.warn(
- (
- "tshift is deprecated and will be removed in a future version. "
- "Please use shift instead."
- ),
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
- if freq is None:
- freq = "infer"
-
- return self.shift(periods, freq, axis)
-
def truncate(
self: NDFrameT,
before=None,
@@ -11544,70 +11499,6 @@ def prod(
product = prod
- def mad(
- self,
- axis: Axis | None = None,
- skipna: bool_t = True,
- level: Level | None = None,
- ) -> Series | float:
- """
- {desc}
-
- .. deprecated:: 1.5.0
- mad is deprecated.
-
- Parameters
- ----------
- axis : {axis_descr}
- Axis for the function to be applied on.
- For `Series` this parameter is unused and defaults to 0.
- skipna : bool, default True
- Exclude NA/null values when computing the result.
- level : int or level name, default None
- If the axis is a MultiIndex (hierarchical), count along a
- particular level, collapsing into a {name1}.
-
- Returns
- -------
- {name1} or {name2} (if level specified)\
- {see_also}\
- {examples}
- """
- msg = (
- "The 'mad' method is deprecated and will be removed in a future version. "
- "To compute the same result, you may do `(df - df.mean()).abs().mean()`."
- )
- warnings.warn(msg, FutureWarning, stacklevel=find_stack_level())
-
- if not is_bool(skipna):
- warnings.warn(
- "Passing None for skipna is deprecated and will raise in a future"
- "version. Pass True instead. Only boolean values will be allowed "
- "in the future.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- skipna = True
- if axis is None:
- axis = self._stat_axis_number
- if level is not None:
- warnings.warn(
- "Using the level keyword in DataFrame and Series aggregations is "
- "deprecated and will be removed in a future version. Use groupby "
- "instead. df.mad(level=1) should use df.groupby(level=1).mad()",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self._agg_by_level("mad", axis=axis, level=level, skipna=skipna)
-
- data = self._get_numeric_data()
- if axis == 0:
- # error: Unsupported operand types for - ("NDFrame" and "float")
- demeaned = data - data.mean(axis=0) # type: ignore[operator]
- else:
- demeaned = data.sub(data.mean(axis=1), axis=0)
- return np.abs(demeaned).mean(axis=axis, skipna=skipna)
-
@classmethod
def _add_numeric_operations(cls) -> None:
"""
@@ -11664,21 +11555,6 @@ def all(
setattr(cls, "all", all)
- @doc(
- NDFrame.mad.__doc__,
- desc="Return the mean absolute deviation of the values "
- "over the requested axis.",
- name1=name1,
- name2=name2,
- axis_descr=axis_descr,
- see_also="",
- examples="",
- )
- def mad(self, axis: Axis | None = None, skipna: bool_t = True, level=None):
- return NDFrame.mad(self, axis, skipna, level)
-
- setattr(cls, "mad", mad)
-
@doc(
_num_ddof_doc,
desc="Return unbiased standard error of the mean over requested "
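For orientation (not part of the diff): the removed `mad` docstring's own deprecation message pointed at `(df - df.mean()).abs().mean()` as the equivalent computation, which can be checked directly:

```python
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, 3.0, 4.0]})
# Equivalent of the removed df.mad(), per the old deprecation message:
# mean is 2.5, absolute deviations are [1.5, 0.5, 0.5, 1.5], so MAD is 1.0.
mad = (df - df.mean()).abs().mean()
print(mad["a"])  # 1.0
```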
diff --git a/pandas/core/groupby/base.py b/pandas/core/groupby/base.py
index a953dab2115da..42630845bf6b2 100644
--- a/pandas/core/groupby/base.py
+++ b/pandas/core/groupby/base.py
@@ -36,7 +36,6 @@ class OutputKey:
"idxmax",
"idxmin",
"last",
- "mad",
"max",
"mean",
"median",
@@ -86,7 +85,6 @@ def maybe_normalize_deprecated_kernels(kernel) -> Literal["bfill", "ffill"]:
"pct_change",
"rank",
"shift",
- "tshift",
]
)
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 47452d885543e..16732f5421df7 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -880,18 +880,6 @@ def skew(
)
return result
- @doc(Series.mad.__doc__)
- def mad(
- self, axis: Axis | None = None, skipna: bool = True, level: Level | None = None
- ) -> Series:
- result = self._op_via_apply("mad", axis=axis, skipna=skipna, level=level)
- return result
-
- @doc(Series.tshift.__doc__)
- def tshift(self, periods: int = 1, freq=None) -> Series:
- result = self._op_via_apply("tshift", periods=periods, freq=freq)
- return result
-
@property
@doc(Series.plot.__doc__)
def plot(self):
@@ -2275,18 +2263,6 @@ def skew(
)
return result
- @doc(DataFrame.mad.__doc__)
- def mad(
- self, axis: Axis | None = None, skipna: bool = True, level: Level | None = None
- ) -> DataFrame:
- result = self._op_via_apply("mad", axis=axis, skipna=skipna, level=level)
- return result
-
- @doc(DataFrame.tshift.__doc__)
- def tshift(self, periods: int = 1, freq=None, axis: Axis = 0) -> DataFrame:
- result = self._op_via_apply("tshift", periods=periods, freq=freq, axis=axis)
- return result
-
@property
@doc(DataFrame.plot.__doc__)
def plot(self) -> GroupByPlot:
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 6c975058c5b76..a0f83e13c4ece 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -3877,8 +3877,6 @@ def shift(self, periods: int = 1, freq=None, axis: Axis = 0, fill_value=None):
See Also
--------
Index.shift : Shift values of Index.
- tshift : Shift the time index, using the index’s frequency
- if available.
"""
if freq is not None or axis != 0:
f = lambda x: x.shift(periods, freq, axis, fill_value)
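As a reading aid (not part of this diff): the removed `tshift` simply called `shift` with `freq="infer"` when no frequency was given, so the migration is a one-line change:

```python
import pandas as pd

ser = pd.Series([1, 2, 3], index=pd.date_range("2022-01-01", periods=3, freq="D"))
# The removed ser.tshift(1) shifted the index by its own frequency;
# the equivalent is shift with freq="infer" (values stay put, index moves):
shifted = ser.shift(1, freq="infer")
print(shifted.index[0])  # 2022-01-02 00:00:00
```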
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 04f18369f4fcc..bf3f74330e8cb 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -789,7 +789,6 @@ def apply(
result_values.append(res)
# getattr pattern for __name__ is needed for functools.partial objects
if len(group_keys) == 0 and getattr(f, "__name__", None) in [
- "mad",
"skew",
"sum",
"prod",
diff --git a/pandas/tests/apply/common.py b/pandas/tests/apply/common.py
index 91b831bcbb684..b4d153df54059 100644
--- a/pandas/tests/apply/common.py
+++ b/pandas/tests/apply/common.py
@@ -1,10 +1,7 @@
from pandas.core.groupby.base import transformation_kernels
-# tshift only works on time index and is deprecated
# There is no Series.cumcount or DataFrame.cumcount
series_transform_kernels = [
- x for x in sorted(transformation_kernels) if x not in ["tshift", "cumcount"]
-]
-frame_transform_kernels = [
- x for x in sorted(transformation_kernels) if x not in ["tshift", "cumcount"]
+ x for x in sorted(transformation_kernels) if x != "cumcount"
]
+frame_transform_kernels = [x for x in sorted(transformation_kernels) if x != "cumcount"]
diff --git a/pandas/tests/apply/test_frame_transform.py b/pandas/tests/apply/test_frame_transform.py
index c7a99400ab8e1..f884e8a7daf67 100644
--- a/pandas/tests/apply/test_frame_transform.py
+++ b/pandas/tests/apply/test_frame_transform.py
@@ -147,17 +147,14 @@ def test_transform_bad_dtype(op, frame_or_series, request):
obj = DataFrame({"A": 3 * [object]}) # DataFrame that will fail on most transforms
obj = tm.get_obj(obj, frame_or_series)
- # tshift is deprecated
- warn = None if op != "tshift" else FutureWarning
- with tm.assert_produces_warning(warn):
- with pytest.raises(TypeError, match="unsupported operand|not supported"):
- obj.transform(op)
- with pytest.raises(TypeError, match="Transform function failed"):
- obj.transform([op])
- with pytest.raises(TypeError, match="Transform function failed"):
- obj.transform({"A": op})
- with pytest.raises(TypeError, match="Transform function failed"):
- obj.transform({"A": [op]})
+ with pytest.raises(TypeError, match="unsupported operand|not supported"):
+ obj.transform(op)
+ with pytest.raises(TypeError, match="Transform function failed"):
+ obj.transform([op])
+ with pytest.raises(TypeError, match="Transform function failed"):
+ obj.transform({"A": op})
+ with pytest.raises(TypeError, match="Transform function failed"):
+ obj.transform({"A": [op]})
@pytest.mark.parametrize("op", frame_kernels_raise)
diff --git a/pandas/tests/arrays/categorical/test_operators.py b/pandas/tests/arrays/categorical/test_operators.py
index 9642691b5c578..70636d633b674 100644
--- a/pandas/tests/arrays/categorical/test_operators.py
+++ b/pandas/tests/arrays/categorical/test_operators.py
@@ -374,8 +374,6 @@ def test_numeric_like_ops(self):
with pytest.raises(TypeError, match=msg):
getattr(s, op)(numeric_only=False)
- # mad technically works because it takes always the numeric data
-
def test_numeric_like_ops_series(self):
# numpy ops
s = Series(Categorical([1, 2, 3, 4]))
diff --git a/pandas/tests/frame/conftest.py b/pandas/tests/frame/conftest.py
index 8dbed84b85837..2cfa295d939a8 100644
--- a/pandas/tests/frame/conftest.py
+++ b/pandas/tests/frame/conftest.py
@@ -277,7 +277,6 @@ def frame_of_index_cols():
"sem",
"var",
"std",
- "mad",
]
)
def reduction_functions(request):
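`mad` is dropped from the reduction fixture above because the method itself was deprecated; its result is still easy to reproduce by hand. A minimal sketch of the equivalence the deprecation message suggested:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# Mean absolute deviation, computed manually instead of the removed Series.mad:
mad = (s - s.mean()).abs().mean()
print(mad)  # 1.0
```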
diff --git a/pandas/tests/frame/methods/test_shift.py b/pandas/tests/frame/methods/test_shift.py
index bfc3c8e0a25eb..f76deca9048be 100644
--- a/pandas/tests/frame/methods/test_shift.py
+++ b/pandas/tests/frame/methods/test_shift.py
@@ -447,64 +447,6 @@ def test_shift_axis1_multiple_blocks_with_int_fill(self):
tm.assert_frame_equal(result, expected)
- @pytest.mark.filterwarnings("ignore:tshift is deprecated:FutureWarning")
- def test_tshift(self, datetime_frame, frame_or_series):
- # TODO(2.0): remove this test when tshift deprecation is enforced
-
- # PeriodIndex
- ps = tm.makePeriodFrame()
- ps = tm.get_obj(ps, frame_or_series)
- shifted = ps.tshift(1)
- unshifted = shifted.tshift(-1)
-
- tm.assert_equal(unshifted, ps)
-
- shifted2 = ps.tshift(freq="B")
- tm.assert_equal(shifted, shifted2)
-
- shifted3 = ps.tshift(freq=offsets.BDay())
- tm.assert_equal(shifted, shifted3)
-
- msg = "Given freq M does not match PeriodIndex freq B"
- with pytest.raises(ValueError, match=msg):
- ps.tshift(freq="M")
-
- # DatetimeIndex
- dtobj = tm.get_obj(datetime_frame, frame_or_series)
- shifted = dtobj.tshift(1)
- unshifted = shifted.tshift(-1)
-
- tm.assert_equal(dtobj, unshifted)
-
- shifted2 = dtobj.tshift(freq=dtobj.index.freq)
- tm.assert_equal(shifted, shifted2)
-
- inferred_ts = DataFrame(
- datetime_frame.values,
- Index(np.asarray(datetime_frame.index)),
- columns=datetime_frame.columns,
- )
- inferred_ts = tm.get_obj(inferred_ts, frame_or_series)
- shifted = inferred_ts.tshift(1)
-
- expected = dtobj.tshift(1)
- expected.index = expected.index._with_freq(None)
- tm.assert_equal(shifted, expected)
-
- unshifted = shifted.tshift(-1)
- tm.assert_equal(unshifted, inferred_ts)
-
- no_freq = dtobj.iloc[[0, 5, 7]]
- msg = "Freq was not set in the index hence cannot be inferred"
- with pytest.raises(ValueError, match=msg):
- no_freq.tshift()
-
- def test_tshift_deprecated(self, datetime_frame, frame_or_series):
- # GH#11631
- dtobj = tm.get_obj(datetime_frame, frame_or_series)
- with tm.assert_produces_warning(FutureWarning):
- dtobj.tshift()
-
def test_period_index_frame_shift_with_freq(self, frame_or_series):
ps = tm.makePeriodFrame()
ps = tm.get_obj(ps, frame_or_series)
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index 6654ecec78c94..744d06d6cf339 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -69,7 +69,6 @@ def assert_stat_op_calc(
skipna_alternative : function, default None
NaN-safe version of alternative
"""
- warn = FutureWarning if opname == "mad" else None
f = getattr(frame, opname)
if check_dates:
@@ -91,9 +90,8 @@ def wrapper(x):
return alternative(x.values)
skipna_wrapper = tm._make_skipna_wrapper(alternative, skipna_alternative)
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result0 = f(axis=0, skipna=False)
- result1 = f(axis=1, skipna=False)
+ result0 = f(axis=0, skipna=False)
+ result1 = f(axis=1, skipna=False)
tm.assert_series_equal(
result0, frame.apply(wrapper), check_dtype=check_dtype, rtol=rtol, atol=atol
)
@@ -106,9 +104,8 @@ def wrapper(x):
else:
skipna_wrapper = alternative
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result0 = f(axis=0)
- result1 = f(axis=1)
+ result0 = f(axis=0)
+ result1 = f(axis=1)
tm.assert_series_equal(
result0,
frame.apply(skipna_wrapper),
@@ -130,18 +127,14 @@ def wrapper(x):
assert lcd_dtype == result1.dtype
# bad axis
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- with pytest.raises(ValueError, match="No axis named 2"):
- f(axis=2)
+ with pytest.raises(ValueError, match="No axis named 2"):
+ f(axis=2)
# all NA case
if has_skipna:
all_na = frame * np.NaN
- with tm.assert_produces_warning(
- warn, match="The 'mad' method is deprecated", raise_on_extra_warnings=False
- ):
- r0 = getattr(all_na, opname)(axis=0)
- r1 = getattr(all_na, opname)(axis=1)
+ r0 = getattr(all_na, opname)(axis=0)
+ r1 = getattr(all_na, opname)(axis=1)
if opname in ["sum", "prod"]:
unit = 1 if opname == "prod" else 0 # result for empty sum/prod
expected = Series(unit, index=r0.index, dtype=r0.dtype)
@@ -167,7 +160,6 @@ class TestDataFrameAnalytics:
"min",
"max",
"nunique",
- "mad",
"var",
"std",
"sem",
@@ -176,13 +168,9 @@ class TestDataFrameAnalytics:
],
)
def test_stat_op_api_float_string_frame(self, float_string_frame, axis, opname):
- warn = FutureWarning if opname == "mad" else None
- with tm.assert_produces_warning(
- warn, match="The 'mad' method is deprecated", raise_on_extra_warnings=False
- ):
- getattr(float_string_frame, opname)(axis=axis)
- if opname not in ("nunique", "mad"):
- getattr(float_string_frame, opname)(axis=axis, numeric_only=True)
+ getattr(float_string_frame, opname)(axis=axis)
+ if opname != "nunique":
+ getattr(float_string_frame, opname)(axis=axis, numeric_only=True)
@pytest.mark.filterwarnings("ignore:Dropping of nuisance:FutureWarning")
@pytest.mark.parametrize("axis", [0, 1])
@@ -213,9 +201,6 @@ def count(s):
def nunique(s):
return len(algorithms.unique1d(s.dropna()))
- def mad(x):
- return np.abs(x - x.mean()).mean()
-
def var(x):
return np.var(x, ddof=1)
@@ -253,7 +238,6 @@ def sem(x):
"product", np.prod, float_frame_with_na, skipna_alternative=np.nanprod
)
- assert_stat_op_calc("mad", mad, float_frame_with_na)
assert_stat_op_calc("var", var, float_frame_with_na)
assert_stat_op_calc("std", std, float_frame_with_na)
assert_stat_op_calc("sem", sem, float_frame_with_na)
@@ -1490,14 +1474,6 @@ def test_frame_any_with_timedelta(self):
expected = Series(data=[False, True])
tm.assert_series_equal(result, expected)
- def test_reductions_deprecation_skipna_none(self, frame_or_series):
- # GH#44580
- obj = frame_or_series([1, 2, 3])
- with tm.assert_produces_warning(
- FutureWarning, match="skipna", raise_on_extra_warnings=False
- ):
- obj.mad(skipna=None)
-
def test_reductions_deprecation_level_argument(
self, frame_or_series, reduction_functions
):
@@ -1515,8 +1491,6 @@ def test_reductions_skipna_none_raises(
request.node.add_marker(
pytest.mark.xfail(reason="Count does not accept skipna")
)
- elif reduction_functions == "mad":
- pytest.skip("Mad is deprecated: GH#11787")
obj = frame_or_series([1, 2, 3])
msg = 'For argument "skipna" expected type bool, received type NoneType.'
with pytest.raises(ValueError, match=msg):
@@ -1718,68 +1692,6 @@ def test_minmax_extensionarray(method, numeric_only):
tm.assert_series_equal(result, expected)
-def test_mad_nullable_integer(any_signed_int_ea_dtype):
- # GH#33036
- df = DataFrame(np.random.randn(100, 4).astype(np.int64))
- df2 = df.astype(any_signed_int_ea_dtype)
-
- with tm.assert_produces_warning(
- FutureWarning, match="The 'mad' method is deprecated"
- ):
- result = df2.mad()
- expected = df.mad()
- tm.assert_series_equal(result, expected)
-
- with tm.assert_produces_warning(
- FutureWarning, match="The 'mad' method is deprecated"
- ):
- result = df2.mad(axis=1)
- expected = df.mad(axis=1)
- tm.assert_series_equal(result, expected)
-
- # case with NAs present
- df2.iloc[::2, 1] = pd.NA
-
- with tm.assert_produces_warning(
- FutureWarning, match="The 'mad' method is deprecated"
- ):
- result = df2.mad()
- expected = df.mad()
- expected[1] = df.iloc[1::2, 1].mad()
- tm.assert_series_equal(result, expected)
-
- with tm.assert_produces_warning(
- FutureWarning, match="The 'mad' method is deprecated"
- ):
- result = df2.mad(axis=1)
- expected = df.mad(axis=1)
- expected[::2] = df.T.loc[[0, 2, 3], ::2].mad()
- tm.assert_series_equal(result, expected)
-
-
-@pytest.mark.xfail(reason="GH#42895 caused by lack of 2D EA")
-def test_mad_nullable_integer_all_na(any_signed_int_ea_dtype):
- # GH#33036
- df = DataFrame(np.random.randn(100, 4).astype(np.int64))
- df2 = df.astype(any_signed_int_ea_dtype)
-
- # case with all-NA row/column
- msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- df2.iloc[:, 1] = pd.NA # FIXME(GH#44199): this doesn't operate in-place
- df2.iloc[:, 1] = pd.array([pd.NA] * len(df2), dtype=any_signed_int_ea_dtype)
-
- with tm.assert_produces_warning(
- FutureWarning, match="The 'mad' method is deprecated"
- ):
- result = df2.mad()
- expected = df.mad()
-
- expected[1] = pd.NA
- expected = expected.astype("Float64")
- tm.assert_series_equal(result, expected)
-
-
@pytest.mark.parametrize("meth", ["max", "min", "sum", "mean", "median"])
def test_groupby_regular_arithmetic_equivalent(meth):
# GH#40660
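The deleted `test_mad_nullable_integer` checks exercised `mad` column-by-column; the same numbers fall out of a manual per-group computation using only the public `groupby(...).apply` API. A sketch:

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [1.0, 3.0, 2.0, 6.0]})

# Per-group mean absolute deviation without the removed GroupBy.mad:
result = df.groupby("key")["val"].apply(lambda x: (x - x.mean()).abs().mean())
print(result.to_dict())  # {'a': 1.0, 'b': 2.0}
```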
diff --git a/pandas/tests/generic/test_finalize.py b/pandas/tests/generic/test_finalize.py
index 3c40218ef9024..7634f783117d6 100644
--- a/pandas/tests/generic/test_finalize.py
+++ b/pandas/tests/generic/test_finalize.py
@@ -395,22 +395,6 @@
(pd.DataFrame, frame_data, operator.methodcaller("where", np.array([[True]]))),
(pd.Series, ([1, 2],), operator.methodcaller("mask", np.array([True, False]))),
(pd.DataFrame, frame_data, operator.methodcaller("mask", np.array([[True]]))),
- pytest.param(
- (
- pd.Series,
- (1, pd.date_range("2000", periods=4)),
- operator.methodcaller("tshift"),
- ),
- marks=pytest.mark.filterwarnings("ignore::FutureWarning"),
- ),
- pytest.param(
- (
- pd.DataFrame,
- ({"A": [1, 1, 1, 1]}, pd.date_range("2000", periods=4)),
- operator.methodcaller("tshift"),
- ),
- marks=pytest.mark.filterwarnings("ignore::FutureWarning"),
- ),
(pd.Series, ([1, 2],), operator.methodcaller("truncate", before=0)),
(pd.DataFrame, frame_data, operator.methodcaller("truncate", before=0)),
(
diff --git a/pandas/tests/groupby/__init__.py b/pandas/tests/groupby/__init__.py
index c63aa568a15dc..446d9da437771 100644
--- a/pandas/tests/groupby/__init__.py
+++ b/pandas/tests/groupby/__init__.py
@@ -22,6 +22,4 @@ def get_groupby_method_args(name, obj):
return (0.5,)
if name == "corrwith":
return (obj,)
- if name == "tshift":
- return (0, 0)
return ()
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 3e1ee02aabce7..ad7368a69c0f5 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -1348,13 +1348,11 @@ def test_groupby_aggregate_directory(reduction_func):
# GH#32793
if reduction_func in ["corrwith", "nth"]:
return None
- warn = FutureWarning if reduction_func == "mad" else None
obj = DataFrame([[0, 1], [0, np.nan]])
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result_reduced_series = obj.groupby(0).agg(reduction_func)
- result_reduced_frame = obj.groupby(0).agg({1: reduction_func})
+ result_reduced_series = obj.groupby(0).agg(reduction_func)
+ result_reduced_frame = obj.groupby(0).agg({1: reduction_func})
if reduction_func in ["size", "ngroup"]:
# names are different: None / 1
diff --git a/pandas/tests/groupby/test_allowlist.py b/pandas/tests/groupby/test_allowlist.py
index 93df7d1f1d4a0..1dba199dc8f22 100644
--- a/pandas/tests/groupby/test_allowlist.py
+++ b/pandas/tests/groupby/test_allowlist.py
@@ -28,12 +28,11 @@
"median",
"mean",
"skew",
- "mad",
"std",
"var",
"sem",
]
-AGG_FUNCTIONS_WITH_SKIPNA = ["skew", "mad"]
+AGG_FUNCTIONS_WITH_SKIPNA = ["skew"]
@pytest.fixture
@@ -79,8 +78,6 @@ def test_regression_allowlist_methods(raw_frame, op, level, axis, skipna, sort):
# GH6944
# GH 17537
# explicitly test the allowlist methods
- warn = FutureWarning if op == "mad" else None
-
if axis == 0:
frame = raw_frame
else:
@@ -88,20 +85,15 @@ def test_regression_allowlist_methods(raw_frame, op, level, axis, skipna, sort):
if op in AGG_FUNCTIONS_WITH_SKIPNA:
grouped = frame.groupby(level=level, axis=axis, sort=sort)
- with tm.assert_produces_warning(
- warn, match="The 'mad' method is deprecated", raise_on_extra_warnings=False
- ):
- result = getattr(grouped, op)(skipna=skipna)
- with tm.assert_produces_warning(FutureWarning):
- expected = getattr(frame, op)(level=level, axis=axis, skipna=skipna)
+ result = getattr(grouped, op)(skipna=skipna)
+ expected = getattr(frame, op)(level=level, axis=axis, skipna=skipna)
if sort:
expected = expected.sort_index(axis=axis, level=level)
tm.assert_frame_equal(result, expected)
else:
grouped = frame.groupby(level=level, axis=axis, sort=sort)
- with tm.assert_produces_warning(FutureWarning):
- result = getattr(grouped, op)()
- expected = getattr(frame, op)(level=level, axis=axis)
+ result = getattr(grouped, op)()
+ expected = getattr(frame, op)(level=level, axis=axis)
if sort:
expected = expected.sort_index(axis=axis, level=level)
tm.assert_frame_equal(result, expected)
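The allowlist regression test above compares each remaining grouped aggregation against the equivalent frame-level computation. The same check can be sketched with public API only, comparing a grouped reduction to the computation done group by group:

```python
import pandas as pd

df = pd.DataFrame({"x": [1.0, 2.0, 3.0, 10.0]}, index=["a", "a", "b", "b"])
grouped = df.groupby(level=0)

# Grouped aggregation vs. applying the same reduction to each group:
result = grouped.var()
expected = grouped.apply(lambda g: g.var())
```

Both paths give the same per-group variances (`0.5` for group `"a"`, `24.5` for group `"b"`).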
@@ -203,10 +195,8 @@ def test_tab_completion(mframe):
"shift",
"skew",
"take",
- "tshift",
"pct_change",
"any",
- "mad",
"corr",
"corrwith",
"cov",
@@ -272,19 +262,6 @@ def test_groupby_selection_with_methods(df, method):
tm.assert_frame_equal(res, exp)
-@pytest.mark.filterwarnings("ignore:tshift is deprecated:FutureWarning")
-def test_groupby_selection_tshift_raises(df):
- rng = date_range("2014", periods=len(df))
- df.index = rng
-
- g = df.groupby(["A"])[["C"]]
-
- # check that the index cache is cleared
- with pytest.raises(ValueError, match="Freq was not set in the index"):
- # GH#35937
- g.tshift()
-
-
def test_groupby_selection_other_methods(df):
# some methods which require DatetimeIndex
rng = date_range("2014", periods=len(df))
diff --git a/pandas/tests/groupby/test_api_consistency.py b/pandas/tests/groupby/test_api_consistency.py
index 1e82c2b6ac6e2..155f86c23e106 100644
--- a/pandas/tests/groupby/test_api_consistency.py
+++ b/pandas/tests/groupby/test_api_consistency.py
@@ -98,8 +98,6 @@ def test_series_consistency(request, groupby_func):
exclude_expected = {"kwargs", "bool_only", "level", "axis"}
elif groupby_func in ("count",):
exclude_expected = {"level"}
- elif groupby_func in ("tshift",):
- exclude_expected = {"axis"}
elif groupby_func in ("diff",):
exclude_result = {"axis"}
elif groupby_func in ("max", "min"):
diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py
index 47ea6a99ffea9..0cd89a205bb82 100644
--- a/pandas/tests/groupby/test_apply.py
+++ b/pandas/tests/groupby/test_apply.py
@@ -1048,8 +1048,6 @@ def test_apply_with_timezones_aware():
def test_apply_is_unchanged_when_other_methods_are_called_first(reduction_func):
# GH #34656
# GH #34271
- warn = FutureWarning if reduction_func == "mad" else None
-
df = DataFrame(
{
"a": [99, 99, 99, 88, 88, 88],
@@ -1071,8 +1069,7 @@ def test_apply_is_unchanged_when_other_methods_are_called_first(reduction_func):
# Check output when another method is called before .apply()
grp = df.groupby(by="a")
args = get_groupby_method_args(reduction_func, df)
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- _ = getattr(grp, reduction_func)(*args)
+ _ = getattr(grp, reduction_func)(*args)
result = grp.apply(sum)
tm.assert_frame_equal(result, expected)
@@ -1338,7 +1335,6 @@ def test_result_name_when_one_group(name):
[
("apply", lambda gb: gb.values[-1]),
("apply", lambda gb: gb["b"].iloc[0]),
- ("agg", "mad"),
("agg", "skew"),
("agg", "prod"),
("agg", "sum"),
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index e99d1325a7e4f..a3821fc2216ec 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -49,7 +49,6 @@ def f(a):
"idxmax": np.NaN,
"idxmin": np.NaN,
"last": np.NaN,
- "mad": np.NaN,
"max": np.NaN,
"mean": np.NaN,
"median": np.NaN,
@@ -1365,7 +1364,6 @@ def test_series_groupby_on_2_categoricals_unobserved(reduction_func, observed, r
reason="TODO: implemented SeriesGroupBy.corrwith. See GH 32293"
)
request.node.add_marker(mark)
- warn = FutureWarning if reduction_func == "mad" else None
df = DataFrame(
{
@@ -1380,8 +1378,7 @@ def test_series_groupby_on_2_categoricals_unobserved(reduction_func, observed, r
series_groupby = df.groupby(["cat_1", "cat_2"], observed=observed)["value"]
agg = getattr(series_groupby, reduction_func)
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result = agg(*args)
+ result = agg(*args)
assert len(result) == expected_length
@@ -1400,7 +1397,6 @@ def test_series_groupby_on_2_categoricals_unobserved_zeroes_or_nans(
reason="TODO: implemented SeriesGroupBy.corrwith. See GH 32293"
)
request.node.add_marker(mark)
- warn = FutureWarning if reduction_func == "mad" else None
df = DataFrame(
{
@@ -1414,8 +1410,7 @@ def test_series_groupby_on_2_categoricals_unobserved_zeroes_or_nans(
series_groupby = df.groupby(["cat_1", "cat_2"], observed=False)["value"]
agg = getattr(series_groupby, reduction_func)
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result = agg(*args)
+ result = agg(*args)
zero_or_nan = _results_for_groupbys_with_missing_categories[reduction_func]
@@ -1438,7 +1433,6 @@ def test_dataframe_groupby_on_2_categoricals_when_observed_is_true(reduction_fun
# does not return the categories that are not in df when observed=True
if reduction_func == "ngroup":
pytest.skip("ngroup does not return the Categories on the index")
- warn = FutureWarning if reduction_func == "mad" else None
df = DataFrame(
{
@@ -1452,8 +1446,7 @@ def test_dataframe_groupby_on_2_categoricals_when_observed_is_true(reduction_fun
df_grp = df.groupby(["cat_1", "cat_2"], observed=True)
args = get_groupby_method_args(reduction_func, df)
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- res = getattr(df_grp, reduction_func)(*args)
+ res = getattr(df_grp, reduction_func)(*args)
for cat in unobserved_cats:
assert cat not in res.index
@@ -1470,7 +1463,6 @@ def test_dataframe_groupby_on_2_categoricals_when_observed_is_false(
if reduction_func == "ngroup":
pytest.skip("ngroup does not return the Categories on the index")
- warn = FutureWarning if reduction_func == "mad" else None
df = DataFrame(
{
@@ -1484,8 +1476,7 @@ def test_dataframe_groupby_on_2_categoricals_when_observed_is_false(
df_grp = df.groupby(["cat_1", "cat_2"], observed=observed)
args = get_groupby_method_args(reduction_func, df)
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- res = getattr(df_grp, reduction_func)(*args)
+ res = getattr(df_grp, reduction_func)(*args)
expected = _results_for_groupbys_with_missing_categories[reduction_func]
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index cdbb121819c5e..2b583431dcd71 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -323,23 +323,6 @@ def test_idxmin(self, gb):
result = gb.idxmin()
tm.assert_frame_equal(result, expected)
- def test_mad(self, gb, gni):
- # mad
- expected = DataFrame([[0], [np.nan]], columns=["B"], index=[1, 3])
- expected.index.name = "A"
- with tm.assert_produces_warning(
- FutureWarning, match="The 'mad' method is deprecated"
- ):
- result = gb.mad()
- tm.assert_frame_equal(result, expected)
-
- expected = DataFrame([[1, 0.0], [3, np.nan]], columns=["A", "B"], index=[0, 1])
- with tm.assert_produces_warning(
- FutureWarning, match="The 'mad' method is deprecated"
- ):
- result = gni.mad()
- tm.assert_frame_equal(result, expected)
-
def test_describe(self, df, gb, gni):
# describe
expected_index = Index([1, 3], name="A")
@@ -560,8 +543,6 @@ def test_idxmin_idxmax_axis1():
def test_axis1_numeric_only(request, groupby_func, numeric_only):
if groupby_func in ("idxmax", "idxmin"):
pytest.skip("idxmax and idx_min tested in test_idxmin_idxmax_axis1")
- if groupby_func in ("mad", "tshift"):
- pytest.skip("mad and tshift are deprecated")
if groupby_func in ("corrwith", "skew"):
msg = "GH#47723 groupby.corrwith and skew do not correctly implement axis=1"
request.node.add_marker(pytest.mark.xfail(reason=msg))
@@ -1460,7 +1441,7 @@ def test_deprecate_numeric_only(
@pytest.mark.parametrize("dtype", [bool, int, float, object])
def test_deprecate_numeric_only_series(dtype, groupby_func, request):
# GH#46560
- if groupby_func in ("backfill", "mad", "pad", "tshift"):
+ if groupby_func in ("backfill", "pad"):
pytest.skip("method is deprecated")
elif groupby_func == "corrwith":
msg = "corrwith is not implemented on SeriesGroupBy"
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 74b4d5dc19ca1..26f269d3d4384 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -688,11 +688,9 @@ def test_ops_not_as_index(reduction_func):
if reduction_func in ("corrwith", "nth", "ngroup"):
pytest.skip(f"GH 5755: Test not applicable for {reduction_func}")
- warn = FutureWarning if reduction_func == "mad" else None
df = DataFrame(np.random.randint(0, 5, size=(100, 2)), columns=["a", "b"])
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- expected = getattr(df.groupby("a"), reduction_func)()
+ expected = getattr(df.groupby("a"), reduction_func)()
if reduction_func == "size":
expected = expected.rename("size")
expected = expected.reset_index()
@@ -703,20 +701,16 @@ def test_ops_not_as_index(reduction_func):
g = df.groupby("a", as_index=False)
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result = getattr(g, reduction_func)()
+ result = getattr(g, reduction_func)()
tm.assert_frame_equal(result, expected)
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result = g.agg(reduction_func)
+ result = g.agg(reduction_func)
tm.assert_frame_equal(result, expected)
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result = getattr(g["b"], reduction_func)()
+ result = getattr(g["b"], reduction_func)()
tm.assert_frame_equal(result, expected)
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result = g["b"].agg(reduction_func)
+ result = g["b"].agg(reduction_func)
tm.assert_frame_equal(result, expected)
@@ -1877,7 +1871,7 @@ def test_pivot_table_values_key_error():
)
@pytest.mark.parametrize("method", ["attr", "agg", "apply"])
@pytest.mark.parametrize(
- "op", ["idxmax", "idxmin", "mad", "min", "max", "sum", "prod", "skew"]
+ "op", ["idxmax", "idxmin", "min", "max", "sum", "prod", "skew"]
)
@pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning")
@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
@@ -1888,16 +1882,10 @@ def test_empty_groupby(columns, keys, values, method, op, request, using_array_m
if (
isinstance(values, Categorical)
and not isinstance(columns, list)
- and op in ["sum", "prod", "skew", "mad"]
+ and op in ["sum", "prod", "skew"]
):
# handled below GH#41291
-
- if using_array_manager and op == "mad":
- right_msg = "Cannot interpret 'CategoricalDtype.* as a data type"
- msg = "Regex pattern \"'Categorical' does not implement.*" + right_msg
- mark = pytest.mark.xfail(raises=AssertionError, match=msg)
- request.node.add_marker(mark)
-
+ pass
elif (
isinstance(values, Categorical)
and len(keys) == 1
@@ -1931,19 +1919,6 @@ def test_empty_groupby(columns, keys, values, method, op, request, using_array_m
)
request.node.add_marker(mark)
- elif (
- op == "mad"
- and not isinstance(columns, list)
- and isinstance(values, pd.DatetimeIndex)
- and values.tz is not None
- and using_array_manager
- ):
- mark = pytest.mark.xfail(
- raises=TypeError,
- match=r"Cannot interpret 'datetime64\[ns, US/Eastern\]' as a data type",
- )
- request.node.add_marker(mark)
-
elif isinstance(values, BooleanArray) and op in ["sum", "prod"]:
# We expect to get Int64 back for these
override_dtype = "Int64"
@@ -1963,14 +1938,10 @@ def test_empty_groupby(columns, keys, values, method, op, request, using_array_m
gb = df.groupby(keys, group_keys=False)[columns]
def get_result():
- warn = FutureWarning if op == "mad" else None
- with tm.assert_produces_warning(
- warn, match="The 'mad' method is deprecated", raise_on_extra_warnings=False
- ):
- if method == "attr":
- return getattr(gb, op)()
- else:
- return getattr(gb, method)(op)
+ if method == "attr":
+ return getattr(gb, op)()
+ else:
+ return getattr(gb, method)(op)
if columns == "C":
# i.e. SeriesGroupBy
@@ -1987,13 +1958,10 @@ def get_result():
get_result()
return
- if op in ["prod", "sum", "skew", "mad"]:
+ if op in ["prod", "sum", "skew"]:
if isinstance(values, Categorical):
# GH#41291
- if op == "mad":
- # mad calls mean, which Categorical doesn't implement
- msg = "does not support reduction 'mean'"
- elif op == "skew":
+ if op == "skew":
msg = f"does not support reduction '{op}'"
else:
msg = "category type does not support"
@@ -2044,7 +2012,7 @@ def get_result():
return
if (
- op in ["mad", "min", "max", "skew"]
+ op in ["min", "max", "skew"]
and isinstance(values, Categorical)
and len(keys) == 1
):
@@ -2307,13 +2275,9 @@ def test_groupby_duplicate_index():
tm.assert_series_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:.*is deprecated.*:FutureWarning")
def test_group_on_empty_multiindex(transformation_func, request):
# GH 47787
# With one row, those are transforms so the schema should be the same
- if transformation_func == "tshift":
- mark = pytest.mark.xfail(raises=NotImplementedError)
- request.node.add_marker(mark)
df = DataFrame(
data=[[1, Timestamp("today"), 3, 4]],
columns=["col_1", "col_2", "col_3", "col_4"],
@@ -2323,8 +2287,6 @@ def test_group_on_empty_multiindex(transformation_func, request):
df = df.set_index(["col_1", "col_2"])
if transformation_func == "fillna":
args = ("ffill",)
- elif transformation_func == "tshift":
- args = (1, "D")
else:
args = ()
result = df.iloc[:0].groupby(["col_1"]).transform(transformation_func, *args)
@@ -2351,24 +2313,17 @@ def test_group_on_empty_multiindex(transformation_func, request):
MultiIndex.from_tuples((("a", "a"), ("a", "a")), names=["foo", "bar"]),
],
)
-@pytest.mark.filterwarnings("ignore:tshift is deprecated:FutureWarning")
def test_dup_labels_output_shape(groupby_func, idx):
if groupby_func in {"size", "ngroup", "cumcount"}:
pytest.skip(f"Not applicable for {groupby_func}")
# TODO(2.0) Remove after pad/backfill deprecation enforced
groupby_func = maybe_normalize_deprecated_kernels(groupby_func)
- warn = FutureWarning if groupby_func in ("mad", "tshift") else None
df = DataFrame([[1, 1]], columns=idx)
grp_by = df.groupby([0])
- if groupby_func == "tshift":
- df.index = [Timestamp("today")]
- # args.extend([1, "D"])
args = get_groupby_method_args(groupby_func, df)
-
- with tm.assert_produces_warning(warn, match="is deprecated"):
- result = getattr(grp_by, groupby_func)(*args)
+ result = getattr(grp_by, groupby_func)(*args)
assert result.shape == (1, 2)
tm.assert_index_equal(result.columns, idx)
diff --git a/pandas/tests/groupby/test_groupby_subclass.py b/pandas/tests/groupby/test_groupby_subclass.py
index fddf0c86d0ab1..b8aa2a1c9656d 100644
--- a/pandas/tests/groupby/test_groupby_subclass.py
+++ b/pandas/tests/groupby/test_groupby_subclass.py
@@ -20,7 +20,6 @@
tm.SubclassedSeries(np.arange(0, 10), name="A"),
],
)
-@pytest.mark.filterwarnings("ignore:tshift is deprecated:FutureWarning")
def test_groupby_preserves_subclass(obj, groupby_func):
# GH28330 -- preserve subclass through groupby operations
@@ -28,7 +27,6 @@ def test_groupby_preserves_subclass(obj, groupby_func):
pytest.skip(f"Not applicable for Series and {groupby_func}")
# TODO(2.0) Remove after pad/backfill deprecation enforced
groupby_func = maybe_normalize_deprecated_kernels(groupby_func)
- warn = FutureWarning if groupby_func in ("mad", "tshift") else None
grouped = obj.groupby(np.arange(0, 10))
@@ -37,9 +35,8 @@ def test_groupby_preserves_subclass(obj, groupby_func):
args = get_groupby_method_args(groupby_func, obj)
- with tm.assert_produces_warning(warn, match="is deprecated"):
- result1 = getattr(grouped, groupby_func)(*args)
- result2 = grouped.agg(groupby_func, *args)
+ result1 = getattr(grouped, groupby_func)(*args)
+ result2 = grouped.agg(groupby_func, *args)
# Reduction or transformation kernels should preserve type
slices = {"ngroup", "cumcount", "size"}
diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py
index 8a2bd64a3deb0..2b4eba539ec82 100644
--- a/pandas/tests/groupby/transform/test_transform.py
+++ b/pandas/tests/groupby/transform/test_transform.py
@@ -173,13 +173,10 @@ def test_transform_axis_1(request, transformation_func):
msg = "ngroup fails with axis=1: #45986"
request.node.add_marker(pytest.mark.xfail(reason=msg))
- warn = FutureWarning if transformation_func == "tshift" else None
-
df = DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]}, index=["x", "y"])
args = get_groupby_method_args(transformation_func, df)
- with tm.assert_produces_warning(warn):
- result = df.groupby([0, 0, 1], axis=1).transform(transformation_func, *args)
- expected = df.T.groupby([0, 0, 1]).transform(transformation_func, *args).T
+ result = df.groupby([0, 0, 1], axis=1).transform(transformation_func, *args)
+ expected = df.T.groupby([0, 0, 1]).transform(transformation_func, *args).T
if transformation_func in ["diff", "shift"]:
# Result contains nans, so transpose coerces to float
@@ -200,22 +197,13 @@ def test_transform_axis_1_reducer(request, reduction_func):
):
marker = pytest.mark.xfail(reason="transform incorrectly fails - GH#45986")
request.node.add_marker(marker)
- if reduction_func == "mad":
- warn = FutureWarning
- msg = "The 'mad' method is deprecated"
- elif reduction_func in ("sem", "std"):
- warn = FutureWarning
- msg = "The default value of numeric_only"
- else:
- warn = None
- msg = ""
+ warn = FutureWarning if reduction_func in ("sem", "std") else None
+ msg = "The default value of numeric_only"
df = DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]}, index=["x", "y"])
with tm.assert_produces_warning(warn, match=msg):
result = df.groupby([0, 0, 1], axis=1).transform(reduction_func)
- warn = FutureWarning if reduction_func == "mad" else None
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- expected = df.T.groupby([0, 0, 1]).transform(reduction_func).T
+ expected = df.T.groupby([0, 0, 1]).transform(reduction_func).T
tm.assert_equal(result, expected)
@@ -402,12 +390,6 @@ def mock_op(x):
counter += 1
return Series(counter, index=x.index)
- elif transformation_func == "tshift":
- msg = (
- "Current behavior of groupby.tshift is inconsistent with other "
- "transformations. See GH34452 for more details"
- )
- request.node.add_marker(pytest.mark.xfail(reason=msg))
else:
test_op = lambda x: x.transform(transformation_func)
mock_op = lambda x: getattr(x, transformation_func)()
@@ -1152,7 +1134,6 @@ def test_transform_invalid_name_raises():
)
def test_transform_agg_by_name(request, reduction_func, obj):
func = reduction_func
- warn = FutureWarning if func == "mad" else None
g = obj.groupby(np.repeat([0, 1], 3))
@@ -1162,8 +1143,7 @@ def test_transform_agg_by_name(request, reduction_func, obj):
)
args = get_groupby_method_args(reduction_func, obj)
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result = g.transform(func, *args)
+ result = g.transform(func, *args)
# this is the *definition* of a transformation
tm.assert_index_equal(result.index, obj.index)
@@ -1353,7 +1333,6 @@ def test_null_group_str_reducer(request, dropna, reduction_func):
if reduction_func == "corrwith":
msg = "incorrectly raises"
request.node.add_marker(pytest.mark.xfail(reason=msg))
- warn = FutureWarning if reduction_func == "mad" else None
index = [1, 2, 3, 4] # test transform preserves non-standard index
df = DataFrame({"A": [1, 1, np.nan, np.nan], "B": [1, 2, 2, 3]}, index=index)
@@ -1377,10 +1356,7 @@ def test_null_group_str_reducer(request, dropna, reduction_func):
expected_gb = df.groupby("A", dropna=False)
buffer = []
for idx, group in expected_gb:
- with tm.assert_produces_warning(
- warn, match="The 'mad' method is deprecated"
- ):
- res = getattr(group["B"], reduction_func)()
+ res = getattr(group["B"], reduction_func)()
buffer.append(Series(res, index=group.index))
expected = concat(buffer).to_frame("B")
if dropna:
@@ -1391,17 +1367,12 @@ def test_null_group_str_reducer(request, dropna, reduction_func):
else:
expected.iloc[[2, 3]] = np.nan
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result = gb.transform(reduction_func, *args)
+ result = gb.transform(reduction_func, *args)
tm.assert_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:tshift is deprecated:FutureWarning")
def test_null_group_str_transformer(request, dropna, transformation_func):
# GH 17093
- if transformation_func == "tshift":
- msg = "tshift requires timeseries"
- request.node.add_marker(pytest.mark.xfail(reason=msg))
df = DataFrame({"A": [1, 1, np.nan], "B": [1, 2, 2]}, index=[1, 2, 3])
args = get_groupby_method_args(transformation_func, df)
gb = df.groupby("A", dropna=dropna)
@@ -1438,7 +1409,6 @@ def test_null_group_str_reducer_series(request, dropna, reduction_func):
if reduction_func == "corrwith":
msg = "corrwith not implemented for SeriesGroupBy"
request.node.add_marker(pytest.mark.xfail(reason=msg))
- warn = FutureWarning if reduction_func == "mad" else None
# GH 17093
index = [1, 2, 3, 4] # test transform preserves non-standard index
@@ -1463,10 +1433,7 @@ def test_null_group_str_reducer_series(request, dropna, reduction_func):
expected_gb = ser.groupby([1, 1, np.nan, np.nan], dropna=False)
buffer = []
for idx, group in expected_gb:
- with tm.assert_produces_warning(
- warn, match="The 'mad' method is deprecated"
- ):
- res = getattr(group, reduction_func)()
+ res = getattr(group, reduction_func)()
buffer.append(Series(res, index=group.index))
expected = concat(buffer)
if dropna:
@@ -1474,17 +1441,12 @@ def test_null_group_str_reducer_series(request, dropna, reduction_func):
expected = expected.astype(dtype)
expected.iloc[[2, 3]] = np.nan
- with tm.assert_produces_warning(warn, match="The 'mad' method is deprecated"):
- result = gb.transform(reduction_func, *args)
+ result = gb.transform(reduction_func, *args)
tm.assert_series_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:tshift is deprecated:FutureWarning")
def test_null_group_str_transformer_series(request, dropna, transformation_func):
# GH 17093
- if transformation_func == "tshift":
- msg = "tshift requires timeseries"
- request.node.add_marker(pytest.mark.xfail(reason=msg))
ser = Series([1, 2, 2], index=[1, 2, 3])
args = get_groupby_method_args(transformation_func, ser)
gb = ser.groupby([1, 1, np.nan], dropna=dropna)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49252 | 2022-10-22T21:52:25Z | 2022-10-23T17:31:33Z | 2022-10-23T17:31:33Z | 2022-10-23T18:11:58Z |
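The groupby.rst change in this PR describes ``nth`` acting as a filtration: missing positions yield no rows rather than an error. A minimal sketch of that behavior (the frame here is a hypothetical example, not from the PR; on pandas >= 2.0 the result keeps the original row index):

```python
import pandas as pd

# Hypothetical frame to illustrate nth-as-filtration.
df = pd.DataFrame({"A": [1, 1, 2, 1, 2], "B": [10, 20, 30, 40, 50]})
g = df.groupby("A")

first_rows = g.nth(0)  # first row of each group (index 0 and 2 on pandas >= 2.0)
missing = g.nth(5)     # no group has a 6th row: empty result, no error raised
```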
DEPR: DataFrame.median/mean with numeric_only=None and dt64 columns | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index e281e250d608e..87b1e371ab4d8 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -185,6 +185,7 @@ Removal of prior version deprecations/changes
- Removed :attr:`Rolling.is_datetimelike` (:issue:`38963`)
- Removed deprecated :meth:`Timedelta.delta`, :meth:`Timedelta.is_populated`, and :attr:`Timedelta.freq` (:issue:`46430`, :issue:`46476`)
- Removed the ``numeric_only`` keyword from :meth:`Categorical.min` and :meth:`Categorical.max` in favor of ``skipna`` (:issue:`48821`)
+- Changed behavior of :meth:`DataFrame.median` and :meth:`DataFrame.mean` with ``numeric_only=None`` to not exclude datetime-like columns THIS NOTE WILL BE IRRELEVANT ONCE ``numeric_only=None`` DEPRECATION IS ENFORCED (:issue:`29941`)
- Removed :func:`is_extension_type` in favor of :func:`is_extension_array_dtype` (:issue:`29457`)
- Removed :meth:`Index.get_value` (:issue:`33907`)
- Remove :meth:`DataFrameGroupBy.pad` and :meth:`DataFrameGroupBy.backfill` (:issue:`45076`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index fb333aff66b72..ede84aad504d3 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -123,7 +123,6 @@
is_1d_only_ea_dtype,
is_bool_dtype,
is_dataclass,
- is_datetime64_any_dtype,
is_dict_like,
is_dtype_equal,
is_extension_array_dtype,
@@ -10789,29 +10788,6 @@ def _reduce(
assert filter_type is None or filter_type == "bool", filter_type
out_dtype = "bool" if filter_type == "bool" else None
- if numeric_only is None and name in ["mean", "median"]:
- own_dtypes = [arr.dtype for arr in self._mgr.arrays]
-
- dtype_is_dt = np.array(
- [is_datetime64_any_dtype(dtype) for dtype in own_dtypes],
- dtype=bool,
- )
- if dtype_is_dt.any():
- warnings.warn(
- "DataFrame.mean and DataFrame.median with numeric_only=None "
- "will include datetime64 and datetime64tz columns in a "
- "future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- # Non-copy equivalent to
- # dt64_cols = self.dtypes.apply(is_datetime64_any_dtype)
- # cols = self.columns[~dt64_cols]
- # self = self[cols]
- predicate = lambda x: not is_datetime64_any_dtype(x.dtype)
- mgr = self._mgr._get_data_subset(predicate)
- self = type(self)(mgr)
-
# TODO: Make other agg func handle axis=None properly GH#21597
axis = self._get_axis_number(axis)
labels = self._get_agg_axis(axis)
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index 6654ecec78c94..92cdaf3319195 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -73,14 +73,13 @@ def assert_stat_op_calc(
f = getattr(frame, opname)
if check_dates:
- expected_warning = FutureWarning if opname in ["mean", "median"] else None
df = DataFrame({"b": date_range("1/1/2001", periods=2)})
- with tm.assert_produces_warning(expected_warning):
+ with tm.assert_produces_warning(None):
result = getattr(df, opname)()
assert isinstance(result, Series)
df["a"] = range(len(df))
- with tm.assert_produces_warning(expected_warning):
+ with tm.assert_produces_warning(None):
result = getattr(df, opname)()
assert isinstance(result, Series)
assert len(result)
@@ -390,21 +389,19 @@ def test_nunique(self):
def test_mean_mixed_datetime_numeric(self, tz):
# https://github.com/pandas-dev/pandas/issues/24752
df = DataFrame({"A": [1, 1], "B": [Timestamp("2000", tz=tz)] * 2})
- with tm.assert_produces_warning(FutureWarning):
- result = df.mean()
- expected = Series([1.0], index=["A"])
+ result = df.mean()
+ expected = Series([1.0, Timestamp("2000", tz=tz)], index=["A", "B"])
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("tz", [None, "UTC"])
- def test_mean_excludes_datetimes(self, tz):
+ def test_mean_includes_datetimes(self, tz):
# https://github.com/pandas-dev/pandas/issues/24752
- # Our long-term desired behavior is unclear, but the behavior in
- # 0.24.0rc1 was buggy.
+ # Behavior in 0.24.0rc1 was buggy.
+ # As of 2.0 with numeric_only=None we do *not* drop datetime columns
df = DataFrame({"A": [Timestamp("2000", tz=tz)] * 2})
- with tm.assert_produces_warning(FutureWarning):
- result = df.mean()
+ result = df.mean()
- expected = Series(dtype=np.float64)
+ expected = Series([Timestamp("2000", tz=tz)], index=["A"])
tm.assert_series_equal(result, expected)
def test_mean_mixed_string_decimal(self):
@@ -857,6 +854,7 @@ def test_mean_corner(self, float_frame, float_string_frame):
def test_mean_datetimelike(self):
# GH#24757 check that datetimelike are excluded by default, handled
# correctly with numeric_only=True
+ # As of 2.0, datetimelike are *not* excluded with numeric_only=None
df = DataFrame(
{
@@ -870,10 +868,9 @@ def test_mean_datetimelike(self):
expected = Series({"A": 1.0})
tm.assert_series_equal(result, expected)
- with tm.assert_produces_warning(FutureWarning):
- # in the future datetime columns will be included
+ with tm.assert_produces_warning(FutureWarning, match="Select only valid"):
result = df.mean()
- expected = Series({"A": 1.0, "C": df.loc[1, "C"]})
+ expected = Series({"A": 1.0, "B": df.loc[1, "B"], "C": df.loc[1, "C"]})
tm.assert_series_equal(result, expected)
def test_mean_datetimelike_numeric_only_false(self):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49250 | 2022-10-22T18:42:16Z | 2022-10-30T12:21:10Z | 2022-10-30T12:21:10Z | 2022-10-30T16:19:50Z |
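The enforced change above means ``DataFrame.mean``/``median`` no longer silently drop datetime columns when ``numeric_only`` is left unset. A small sketch (on pandas >= 2.0 the result includes ``"B"`` as a ``Timestamp``; older versions dropped it with a ``FutureWarning``):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 1], "B": [pd.Timestamp("2000")] * 2})

# On pandas >= 2.0 this includes the datetime column "B" in the result;
# the numeric column "A" averages to 1.0 either way.
result = df.mean()
```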
read_json engine keyword and pyarrow integration | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 1c3cdd9f4cffd..d0cd5c300248b 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -2069,6 +2069,8 @@ is ``None``. To explicitly force ``Series`` parsing, pass ``typ=series``
* ``lines`` : reads file as one json object per line.
* ``encoding`` : The encoding to use to decode py3 bytes.
* ``chunksize`` : when used in combination with ``lines=True``, return a JsonReader which reads in ``chunksize`` lines per iteration.
+* ``engine``: Either ``"ujson"``, the built-in JSON parser, or ``"pyarrow"`` which dispatches to pyarrow's ``pyarrow.json.read_json``.
+ The ``"pyarrow"`` engine is only available when ``lines=True``.
The parser will raise one of ``ValueError/TypeError/AssertionError`` if the JSON is not parseable.
@@ -2250,6 +2252,16 @@ For line-delimited json files, pandas can also return an iterator which reads in
for chunk in reader:
print(chunk)
+Line-limited json can also be read using the pyarrow reader by specifying ``engine="pyarrow"``.
+
+.. ipython:: python
+
+ from io import BytesIO
+ df = pd.read_json(BytesIO(jsonl.encode()), lines=True, engine="pyarrow")
+ df
+
+.. versionadded:: 2.0.0
+
.. _io.table_schema:
Table schema
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index b006d3820889f..51b4d0468a19a 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -298,6 +298,7 @@ Other enhancements
- Added :meth:`DatetimeIndex.as_unit` and :meth:`TimedeltaIndex.as_unit` to convert to different resolutions; supported resolutions are "s", "ms", "us", and "ns" (:issue:`50616`)
- Added :meth:`Series.dt.unit` and :meth:`Series.dt.as_unit` to convert to different resolutions; supported resolutions are "s", "ms", "us", and "ns" (:issue:`51223`)
- Added new argument ``dtype`` to :func:`read_sql` to be consistent with :func:`read_sql_query` (:issue:`50797`)
+- Added new argument ``engine`` to :func:`read_json` to support parsing JSON with pyarrow by specifying ``engine="pyarrow"`` (:issue:`48893`)
- Added support for SQLAlchemy 2.0 (:issue:`40686`)
-
diff --git a/pandas/_typing.py b/pandas/_typing.py
index 8d3044a978291..87979aba9ada4 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -324,6 +324,9 @@ def closed(self) -> bool:
# read_csv engines
CSVEngine = Literal["c", "python", "pyarrow", "python-fwf"]
+# read_json engines
+JSONEngine = Literal["ujson", "pyarrow"]
+
# read_xml parsers
XMLParsers = Literal["lxml", "etree"]
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 338e831ed184f..1494b11125319 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -21,7 +21,10 @@
import numpy as np
-from pandas._config import using_nullable_dtypes
+from pandas._config import (
+ get_option,
+ using_nullable_dtypes,
+)
from pandas._libs import lib
from pandas._libs.json import (
@@ -34,11 +37,13 @@
DtypeArg,
FilePath,
IndexLabel,
+ JSONEngine,
JSONSerializable,
ReadBuffer,
StorageOptions,
WriteBuffer,
)
+from pandas.compat._optional import import_optional_dependency
from pandas.errors import AbstractMethodError
from pandas.util._decorators import doc
@@ -401,6 +406,7 @@ def read_json(
nrows: int | None = ...,
storage_options: StorageOptions = ...,
use_nullable_dtypes: bool = ...,
+ engine: JSONEngine = ...,
) -> JsonReader[Literal["frame"]]:
...
@@ -425,6 +431,7 @@ def read_json(
nrows: int | None = ...,
storage_options: StorageOptions = ...,
use_nullable_dtypes: bool = ...,
+ engine: JSONEngine = ...,
) -> JsonReader[Literal["series"]]:
...
@@ -449,6 +456,7 @@ def read_json(
nrows: int | None = ...,
storage_options: StorageOptions = ...,
use_nullable_dtypes: bool = ...,
+ engine: JSONEngine = ...,
) -> Series:
...
@@ -473,6 +481,7 @@ def read_json(
nrows: int | None = ...,
storage_options: StorageOptions = ...,
use_nullable_dtypes: bool = ...,
+ engine: JSONEngine = ...,
) -> DataFrame:
...
@@ -500,6 +509,7 @@ def read_json(
nrows: int | None = None,
storage_options: StorageOptions = None,
use_nullable_dtypes: bool | lib.NoDefault = lib.no_default,
+ engine: JSONEngine = "ujson",
) -> DataFrame | Series | JsonReader:
"""
Convert a JSON string to pandas object.
@@ -653,6 +663,12 @@ def read_json(
.. versionadded:: 2.0
+ engine : {{"ujson", "pyarrow"}}, default "ujson"
+ Parser engine to use. The ``"pyarrow"`` engine is only available when
+ ``lines=True``.
+
+ .. versionadded:: 2.0
+
Returns
-------
Series or DataFrame
@@ -771,6 +787,7 @@ def read_json(
storage_options=storage_options,
encoding_errors=encoding_errors,
use_nullable_dtypes=use_nullable_dtypes,
+ engine=engine,
)
if chunksize:
@@ -807,6 +824,7 @@ def __init__(
storage_options: StorageOptions = None,
encoding_errors: str | None = "strict",
use_nullable_dtypes: bool = False,
+ engine: JSONEngine = "ujson",
) -> None:
self.orient = orient
@@ -818,6 +836,7 @@ def __init__(
self.precise_float = precise_float
self.date_unit = date_unit
self.encoding = encoding
+ self.engine = engine
self.compression = compression
self.storage_options = storage_options
self.lines = lines
@@ -828,17 +847,32 @@ def __init__(
self.handles: IOHandles[str] | None = None
self.use_nullable_dtypes = use_nullable_dtypes
+ if self.engine not in {"pyarrow", "ujson"}:
+ raise ValueError(
+ f"The engine type {self.engine} is currently not supported."
+ )
if self.chunksize is not None:
self.chunksize = validate_integer("chunksize", self.chunksize, 1)
if not self.lines:
raise ValueError("chunksize can only be passed if lines=True")
+ if self.engine == "pyarrow":
+ raise ValueError(
+ "currently pyarrow engine doesn't support chunksize parameter"
+ )
if self.nrows is not None:
self.nrows = validate_integer("nrows", self.nrows, 0)
if not self.lines:
raise ValueError("nrows can only be passed if lines=True")
-
- data = self._get_data_from_filepath(filepath_or_buffer)
- self.data = self._preprocess_data(data)
+ if self.engine == "pyarrow":
+ if not self.lines:
+ raise ValueError(
+ "currently pyarrow engine only supports "
+ "the line-delimited JSON format"
+ )
+ self.data = filepath_or_buffer
+ elif self.engine == "ujson":
+ data = self._get_data_from_filepath(filepath_or_buffer)
+ self.data = self._preprocess_data(data)
def _preprocess_data(self, data):
"""
@@ -923,23 +957,45 @@ def read(self) -> DataFrame | Series:
"""
obj: DataFrame | Series
with self:
- if self.lines:
- if self.chunksize:
- obj = concat(self)
- elif self.nrows:
- lines = list(islice(self.data, self.nrows))
- lines_json = self._combine_lines(lines)
- obj = self._get_object_parser(lines_json)
+ if self.engine == "pyarrow":
+ pyarrow_json = import_optional_dependency("pyarrow.json")
+ pa_table = pyarrow_json.read_json(self.data)
+ if self.use_nullable_dtypes:
+ if get_option("mode.dtype_backend") == "pyarrow":
+ from pandas.arrays import ArrowExtensionArray
+
+ return DataFrame(
+ {
+ col_name: ArrowExtensionArray(pa_col)
+ for col_name, pa_col in zip(
+ pa_table.column_names, pa_table.itercolumns()
+ )
+ }
+ )
+ elif get_option("mode.dtype_backend") == "pandas":
+ from pandas.io._util import _arrow_dtype_mapping
+
+ mapping = _arrow_dtype_mapping()
+ return pa_table.to_pandas(types_mapper=mapping.get)
+ return pa_table.to_pandas()
+ elif self.engine == "ujson":
+ if self.lines:
+ if self.chunksize:
+ obj = concat(self)
+ elif self.nrows:
+ lines = list(islice(self.data, self.nrows))
+ lines_json = self._combine_lines(lines)
+ obj = self._get_object_parser(lines_json)
+ else:
+ data = ensure_str(self.data)
+ data_lines = data.split("\n")
+ obj = self._get_object_parser(self._combine_lines(data_lines))
else:
- data = ensure_str(self.data)
- data_lines = data.split("\n")
- obj = self._get_object_parser(self._combine_lines(data_lines))
- else:
- obj = self._get_object_parser(self.data)
- if self.use_nullable_dtypes:
- return obj.convert_dtypes(infer_objects=False)
- else:
- return obj
+ obj = self._get_object_parser(self.data)
+ if self.use_nullable_dtypes:
+ return obj.convert_dtypes(infer_objects=False)
+ else:
+ return obj
def _get_object_parser(self, json) -> DataFrame | Series:
"""
diff --git a/pandas/tests/io/json/conftest.py b/pandas/tests/io/json/conftest.py
index 4e848cd48b42d..f3736252e850a 100644
--- a/pandas/tests/io/json/conftest.py
+++ b/pandas/tests/io/json/conftest.py
@@ -7,3 +7,10 @@ def orient(request):
Fixture for orients excluding the table format.
"""
return request.param
+
+
+@pytest.fixture(params=["ujson", "pyarrow"])
+def engine(request):
+ if request.param == "pyarrow":
+ pytest.importorskip("pyarrow.json")
+ return request.param
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index f59e1e8cbe43d..db839353934b0 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -1956,3 +1956,19 @@ def test_read_json_nullable_series(self, string_storage, dtype_backend, orient):
expected = Series(ArrowExtensionArray(pa.array(expected, from_pandas=True)))
tm.assert_series_equal(result, expected)
+
+
+def test_invalid_engine():
+ # GH 48893
+ ser = Series(range(1))
+ out = ser.to_json()
+ with pytest.raises(ValueError, match="The engine type foo"):
+ read_json(out, engine="foo")
+
+
+def test_pyarrow_engine_lines_false():
+ # GH 48893
+ ser = Series(range(1))
+ out = ser.to_json()
+ with pytest.raises(ValueError, match="currently pyarrow engine only supports"):
+ read_json(out, engine="pyarrow", lines=False)
diff --git a/pandas/tests/io/json/test_readlines.py b/pandas/tests/io/json/test_readlines.py
index a76627fb08147..9b36423be73dd 100644
--- a/pandas/tests/io/json/test_readlines.py
+++ b/pandas/tests/io/json/test_readlines.py
@@ -27,14 +27,29 @@ def test_read_jsonl():
tm.assert_frame_equal(result, expected)
-def test_read_datetime():
+def test_read_jsonl_engine_pyarrow(datapath, engine):
+ result = read_json(
+ datapath("io", "json", "data", "line_delimited.json"),
+ lines=True,
+ engine=engine,
+ )
+ expected = DataFrame({"a": [1, 3, 5], "b": [2, 4, 6]})
+ tm.assert_frame_equal(result, expected)
+
+
+def test_read_datetime(request, engine):
# GH33787
+ if engine == "pyarrow":
+ # GH 48893
+ reason = "Pyarrow only supports a file path as an input and line delimited json"
+ request.node.add_marker(pytest.mark.xfail(reason=reason, raises=ValueError))
+
df = DataFrame(
[([1, 2], ["2020-03-05", "2020-04-08T09:58:49+00:00"], "hector")],
columns=["accounts", "date", "name"],
)
json_line = df.to_json(lines=True, orient="records")
- result = read_json(json_line)
+ result = read_json(json_line, engine=engine)
expected = DataFrame(
[[1, "2020-03-05", "hector"], [2, "2020-04-08T09:58:49+00:00", "hector"]],
columns=["accounts", "date", "name"],
@@ -90,55 +105,95 @@ def test_to_jsonl_count_new_lines():
@pytest.mark.parametrize("chunksize", [1, 1.0])
-def test_readjson_chunks(lines_json_df, chunksize):
+def test_readjson_chunks(request, lines_json_df, chunksize, engine):
# Basic test that read_json(chunks=True) gives the same result as
# read_json(chunks=False)
# GH17048: memory usage when lines=True
+ if engine == "pyarrow":
+ # GH 48893
+ reason = (
+ "Pyarrow only supports a file path as an input and line delimited json"
+ "and doesn't support chunksize parameter."
+ )
+ request.node.add_marker(pytest.mark.xfail(reason=reason, raises=ValueError))
+
unchunked = read_json(StringIO(lines_json_df), lines=True)
- with read_json(StringIO(lines_json_df), lines=True, chunksize=chunksize) as reader:
+ with read_json(
+ StringIO(lines_json_df), lines=True, chunksize=chunksize, engine=engine
+ ) as reader:
chunked = pd.concat(reader)
tm.assert_frame_equal(chunked, unchunked)
-def test_readjson_chunksize_requires_lines(lines_json_df):
+def test_readjson_chunksize_requires_lines(lines_json_df, engine):
msg = "chunksize can only be passed if lines=True"
with pytest.raises(ValueError, match=msg):
- with read_json(StringIO(lines_json_df), lines=False, chunksize=2) as _:
+ with read_json(
+ StringIO(lines_json_df), lines=False, chunksize=2, engine=engine
+ ) as _:
pass
-def test_readjson_chunks_series():
+def test_readjson_chunks_series(request, engine):
+ if engine == "pyarrow":
+ # GH 48893
+ reason = (
+ "Pyarrow only supports a file path as an input and line delimited json"
+ "and doesn't support chunksize parameter."
+ )
+ request.node.add_marker(pytest.mark.xfail(reason=reason))
+
# Test reading line-format JSON to Series with chunksize param
s = pd.Series({"A": 1, "B": 2})
strio = StringIO(s.to_json(lines=True, orient="records"))
- unchunked = read_json(strio, lines=True, typ="Series")
+ unchunked = read_json(strio, lines=True, typ="Series", engine=engine)
strio = StringIO(s.to_json(lines=True, orient="records"))
- with read_json(strio, lines=True, typ="Series", chunksize=1) as reader:
+ with read_json(
+ strio, lines=True, typ="Series", chunksize=1, engine=engine
+ ) as reader:
chunked = pd.concat(reader)
tm.assert_series_equal(chunked, unchunked)
-def test_readjson_each_chunk(lines_json_df):
+def test_readjson_each_chunk(request, lines_json_df, engine):
+ if engine == "pyarrow":
+ # GH 48893
+ reason = (
+ "Pyarrow only supports a file path as an input and line delimited json"
+ "and doesn't support chunksize parameter."
+ )
+ request.node.add_marker(pytest.mark.xfail(reason=reason, raises=ValueError))
+
# Other tests check that the final result of read_json(chunksize=True)
# is correct. This checks the intermediate chunks.
- with read_json(StringIO(lines_json_df), lines=True, chunksize=2) as reader:
+ with read_json(
+ StringIO(lines_json_df), lines=True, chunksize=2, engine=engine
+ ) as reader:
chunks = list(reader)
assert chunks[0].shape == (2, 2)
assert chunks[1].shape == (1, 2)
-def test_readjson_chunks_from_file():
+def test_readjson_chunks_from_file(request, engine):
+ if engine == "pyarrow":
+ # GH 48893
+ reason = (
+ "Pyarrow only supports a file path as an input and line delimited json"
+ "and doesn't support chunksize parameter."
+ )
+ request.node.add_marker(pytest.mark.xfail(reason=reason, raises=ValueError))
+
with tm.ensure_clean("test.json") as path:
df = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
df.to_json(path, lines=True, orient="records")
- with read_json(path, lines=True, chunksize=1) as reader:
+ with read_json(path, lines=True, chunksize=1, engine=engine) as reader:
chunked = pd.concat(reader)
- unchunked = read_json(path, lines=True)
+ unchunked = read_json(path, lines=True, engine=engine)
tm.assert_frame_equal(unchunked, chunked)
@@ -171,11 +226,13 @@ def test_readjson_chunks_closes(chunksize):
@pytest.mark.parametrize("chunksize", [0, -1, 2.2, "foo"])
-def test_readjson_invalid_chunksize(lines_json_df, chunksize):
+def test_readjson_invalid_chunksize(lines_json_df, chunksize, engine):
msg = r"'chunksize' must be an integer >=1"
with pytest.raises(ValueError, match=msg):
- with read_json(StringIO(lines_json_df), lines=True, chunksize=chunksize) as _:
+ with read_json(
+ StringIO(lines_json_df), lines=True, chunksize=chunksize, engine=engine
+ ) as _:
pass
@@ -205,19 +262,27 @@ def test_readjson_chunks_multiple_empty_lines(chunksize):
tm.assert_frame_equal(orig, test, obj=f"chunksize: {chunksize}")
-def test_readjson_unicode(monkeypatch):
+def test_readjson_unicode(request, monkeypatch, engine):
+ if engine == "pyarrow":
+ # GH 48893
+ reason = (
+ "Pyarrow only supports a file path as an input and line delimited json"
+ "and doesn't support chunksize parameter."
+ )
+ request.node.add_marker(pytest.mark.xfail(reason=reason, raises=ValueError))
+
with tm.ensure_clean("test.json") as path:
monkeypatch.setattr("locale.getpreferredencoding", lambda do_setlocale: "cp949")
with open(path, "w", encoding="utf-8") as f:
f.write('{"£©µÀÆÖÞßéöÿ":["АБВГДабвгд가"]}')
- result = read_json(path)
+ result = read_json(path, engine=engine)
expected = DataFrame({"£©µÀÆÖÞßéöÿ": ["АБВГДабвгд가"]})
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("nrows", [1, 2])
-def test_readjson_nrows(nrows):
+def test_readjson_nrows(nrows, engine):
# GH 33916
# Test reading line-format JSON to Series with nrows param
jsonl = """{"a": 1, "b": 2}
@@ -230,20 +295,30 @@ def test_readjson_nrows(nrows):
@pytest.mark.parametrize("nrows,chunksize", [(2, 2), (4, 2)])
-def test_readjson_nrows_chunks(nrows, chunksize):
+def test_readjson_nrows_chunks(request, nrows, chunksize, engine):
# GH 33916
# Test reading line-format JSON to Series with nrows and chunksize param
+ if engine == "pyarrow":
+ # GH 48893
+ reason = (
+ "Pyarrow only supports a file path as an input and line delimited json"
+ "and doesn't support chunksize parameter."
+ )
+ request.node.add_marker(pytest.mark.xfail(reason=reason, raises=ValueError))
+
jsonl = """{"a": 1, "b": 2}
{"a": 3, "b": 4}
{"a": 5, "b": 6}
{"a": 7, "b": 8}"""
- with read_json(jsonl, lines=True, nrows=nrows, chunksize=chunksize) as reader:
+ with read_json(
+ jsonl, lines=True, nrows=nrows, chunksize=chunksize, engine=engine
+ ) as reader:
chunked = pd.concat(reader)
expected = DataFrame({"a": [1, 3, 5, 7], "b": [2, 4, 6, 8]}).iloc[:nrows]
tm.assert_frame_equal(chunked, expected)
-def test_readjson_nrows_requires_lines():
+def test_readjson_nrows_requires_lines(engine):
# GH 33916
# Test ValuError raised if nrows is set without setting lines in read_json
jsonl = """{"a": 1, "b": 2}
@@ -252,12 +327,20 @@ def test_readjson_nrows_requires_lines():
{"a": 7, "b": 8}"""
msg = "nrows can only be passed if lines=True"
with pytest.raises(ValueError, match=msg):
- read_json(jsonl, lines=False, nrows=2)
+ read_json(jsonl, lines=False, nrows=2, engine=engine)
-def test_readjson_lines_chunks_fileurl(datapath):
+def test_readjson_lines_chunks_fileurl(request, datapath, engine):
# GH 27135
# Test reading line-format JSON from file url
+ if engine == "pyarrow":
+ # GH 48893
+ reason = (
+ "Pyarrow only supports a file path as an input and line delimited json"
+ "and doesn't support chunksize parameter."
+ )
+ request.node.add_marker(pytest.mark.xfail(reason=reason, raises=ValueError))
+
df_list_expected = [
DataFrame([[1, 2]], columns=["a", "b"], index=[0]),
DataFrame([[3, 4]], columns=["a", "b"], index=[1]),
@@ -265,7 +348,7 @@ def test_readjson_lines_chunks_fileurl(datapath):
]
os_path = datapath("io", "json", "data", "line_delimited.json")
file_url = Path(os_path).as_uri()
- with read_json(file_url, lines=True, chunksize=1) as url_reader:
+ with read_json(file_url, lines=True, chunksize=1, engine=engine) as url_reader:
for index, chuck in enumerate(url_reader):
tm.assert_frame_equal(chuck, df_list_expected[index])
| - [ ] closes #48893
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49249 | 2022-10-22T18:21:12Z | 2023-02-10T12:26:44Z | 2023-02-10T12:26:44Z | 2023-02-10T14:50:15Z |
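The ``engine`` keyword added by this PR only applies to line-delimited input. As a sketch, the same JSONL payload reads with the default ``"ujson"`` engine on any recent pandas; on pandas >= 2.0 with pyarrow installed, ``engine="pyarrow"`` could be passed as well (and is rejected when ``lines=False``):

```python
from io import StringIO
import pandas as pd

jsonl = '{"a": 1, "b": 2}\n{"a": 3, "b": 4}\n{"a": 5, "b": 6}'

# Default engine; engine="pyarrow" is valid here only with lines=True
# and requires the optional pyarrow dependency.
df = pd.read_json(StringIO(jsonl), lines=True)
```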
DEPR: to_native_types, set_value, iteritems, union_many, to_perioddelta | diff --git a/doc/redirects.csv b/doc/redirects.csv
index 8a55c48996e84..5cf310e062dc3 100644
--- a/doc/redirects.csv
+++ b/doc/redirects.csv
@@ -399,10 +399,8 @@ generated/pandas.DataFrame.isna,../reference/api/pandas.DataFrame.isna
generated/pandas.DataFrame.isnull,../reference/api/pandas.DataFrame.isnull
generated/pandas.DataFrame.items,../reference/api/pandas.DataFrame.items
generated/pandas.DataFrame.__iter__,../reference/api/pandas.DataFrame.__iter__
-generated/pandas.DataFrame.iteritems,../reference/api/pandas.DataFrame.iteritems
generated/pandas.DataFrame.iterrows,../reference/api/pandas.DataFrame.iterrows
generated/pandas.DataFrame.itertuples,../reference/api/pandas.DataFrame.itertuples
-generated/pandas.DataFrame.ix,../reference/api/pandas.DataFrame.ix
generated/pandas.DataFrame.join,../reference/api/pandas.DataFrame.join
generated/pandas.DataFrame.keys,../reference/api/pandas.DataFrame.keys
generated/pandas.DataFrame.kurt,../reference/api/pandas.DataFrame.kurt
@@ -571,7 +569,6 @@ generated/pandas.DatetimeIndex.strftime,../reference/api/pandas.DatetimeIndex.st
generated/pandas.DatetimeIndex.time,../reference/api/pandas.DatetimeIndex.time
generated/pandas.DatetimeIndex.timetz,../reference/api/pandas.DatetimeIndex.timetz
generated/pandas.DatetimeIndex.to_frame,../reference/api/pandas.DatetimeIndex.to_frame
-generated/pandas.DatetimeIndex.to_perioddelta,../reference/api/pandas.DatetimeIndex.to_perioddelta
generated/pandas.DatetimeIndex.to_period,../reference/api/pandas.DatetimeIndex.to_period
generated/pandas.DatetimeIndex.to_pydatetime,../reference/api/pandas.DatetimeIndex.to_pydatetime
generated/pandas.DatetimeIndex.to_series,../reference/api/pandas.DatetimeIndex.to_series
@@ -704,7 +701,6 @@ generated/pandas.Index.rename,../reference/api/pandas.Index.rename
generated/pandas.Index.repeat,../reference/api/pandas.Index.repeat
generated/pandas.Index.searchsorted,../reference/api/pandas.Index.searchsorted
generated/pandas.Index.set_names,../reference/api/pandas.Index.set_names
-generated/pandas.Index.set_value,../reference/api/pandas.Index.set_value
generated/pandas.Index.shape,../reference/api/pandas.Index.shape
generated/pandas.Index.shift,../reference/api/pandas.Index.shift
generated/pandas.Index.size,../reference/api/pandas.Index.size
@@ -723,7 +719,6 @@ generated/pandas.Index.to_flat_index,../reference/api/pandas.Index.to_flat_index
generated/pandas.Index.to_frame,../reference/api/pandas.Index.to_frame
generated/pandas.Index.to_list,../reference/api/pandas.Index.to_list
generated/pandas.Index.tolist,../reference/api/pandas.Index.tolist
-generated/pandas.Index.to_native_types,../reference/api/pandas.Index.to_native_types
generated/pandas.Index.to_numpy,../reference/api/pandas.Index.to_numpy
generated/pandas.Index.to_series,../reference/api/pandas.Index.to_series
generated/pandas.Index.transpose,../reference/api/pandas.Index.transpose
@@ -1083,8 +1078,6 @@ generated/pandas.Series.is_unique,../reference/api/pandas.Series.is_unique
generated/pandas.Series.item,../reference/api/pandas.Series.item
generated/pandas.Series.items,../reference/api/pandas.Series.items
generated/pandas.Series.__iter__,../reference/api/pandas.Series.__iter__
-generated/pandas.Series.iteritems,../reference/api/pandas.Series.iteritems
-generated/pandas.Series.ix,../reference/api/pandas.Series.ix
generated/pandas.Series.keys,../reference/api/pandas.Series.keys
generated/pandas.Series.kurt,../reference/api/pandas.Series.kurt
generated/pandas.Series.kurtosis,../reference/api/pandas.Series.kurtosis
diff --git a/doc/source/reference/frame.rst b/doc/source/reference/frame.rst
index cc38f6cc42972..ea3267d306813 100644
--- a/doc/source/reference/frame.rst
+++ b/doc/source/reference/frame.rst
@@ -63,7 +63,6 @@ Indexing, iteration
DataFrame.insert
DataFrame.__iter__
DataFrame.items
- DataFrame.iteritems
DataFrame.keys
DataFrame.iterrows
DataFrame.itertuples
diff --git a/doc/source/reference/indexing.rst b/doc/source/reference/indexing.rst
index a52ee92ea5921..93897723d5d71 100644
--- a/doc/source/reference/indexing.rst
+++ b/doc/source/reference/indexing.rst
@@ -110,7 +110,6 @@ Conversion
Index.map
Index.ravel
Index.to_list
- Index.to_native_types
Index.to_series
Index.to_frame
Index.view
@@ -393,7 +392,6 @@ Conversion
:toctree: api/
DatetimeIndex.to_period
- DatetimeIndex.to_perioddelta
DatetimeIndex.to_pydatetime
DatetimeIndex.to_series
DatetimeIndex.to_frame
diff --git a/doc/source/reference/series.rst b/doc/source/reference/series.rst
index 0beac55c8b86c..9f38fc384a0b9 100644
--- a/doc/source/reference/series.rst
+++ b/doc/source/reference/series.rst
@@ -66,7 +66,6 @@ Indexing, iteration
Series.iloc
Series.__iter__
Series.items
- Series.iteritems
Series.keys
Series.pop
Series.item
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 00e57d738ca6e..04f97f206a312 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -165,6 +165,12 @@ Removal of prior version deprecations/changes
- Removed deprecated :meth:`Index.is_type_compatible` (:issue:`42113`)
- Removed deprecated :meth:`Index.is_mixed`, check ``index.inferred_type`` directly instead (:issue:`32922`)
- Removed deprecated :meth:`Index.asi8` (:issue:`37877`)
+- Removed deprecated :meth:`DataFrame._AXIS_NUMBERS`, :meth:`DataFrame._AXIS_NAMES`, :meth:`Series._AXIS_NUMBERS`, :meth:`Series._AXIS_NAMES` (:issue:`33637`)
+- Removed deprecated :meth:`Index.to_native_types`, use ``obj.astype(str)`` instead (:issue:`36418`)
+- Removed deprecated :meth:`Series.iteritems`, :meth:`DataFrame.iteritems`, use ``obj.items`` instead (:issue:`45321`)
+- Removed deprecated :meth:`DatetimeIndex.union_many` (:issue:`45018`)
+- Removed deprecated :meth:`RangeIndex._start`, :meth:`RangeIndex._stop`, :meth:`RangeIndex._step`, use ``start``, ``stop``, ``step`` instead (:issue:`30482`)
+- Removed deprecated :meth:`DatetimeIndex.to_perioddelta`, Use ``dtindex - dtindex.to_period(freq).to_timestamp()`` instead (:issue:`34853`)
- Enforced deprecation disallowing passing a timezone-aware :class:`Timestamp` and ``dtype="datetime64[ns]"`` to :class:`Series` or :class:`DataFrame` constructors (:issue:`41555`)
- Enforced deprecation disallowing passing a sequence of timezone-aware values and ``dtype="datetime64[ns]"`` to to :class:`Series` or :class:`DataFrame` constructors (:issue:`41555`)
- Enforced deprecation disallowing using ``.astype`` to convert a ``datetime64[ns]`` :class:`Series`, :class:`DataFrame`, or :class:`DatetimeIndex` to timezone-aware dtype, use ``obj.tz_localize`` or ``ser.dt.tz_localize`` instead (:issue:`39258`)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 19ef100f24f1b..07d689d737c87 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -89,10 +89,7 @@
if TYPE_CHECKING:
from pandas import DataFrame
- from pandas.core.arrays import (
- PeriodArray,
- TimedeltaArray,
- )
+ from pandas.core.arrays import PeriodArray
_midnight = time(0, 0)
@@ -1194,38 +1191,6 @@ def to_period(self, freq=None) -> PeriodArray:
return PeriodArray._from_datetime64(self._ndarray, freq, tz=self.tz)
- def to_perioddelta(self, freq) -> TimedeltaArray:
- """
- Calculate deltas between self values and self converted to Periods at a freq.
-
- Used for vectorized offsets.
-
- Parameters
- ----------
- freq : Period frequency
-
- Returns
- -------
- TimedeltaArray/Index
- """
- # Deprecaation GH#34853
- warnings.warn(
- "to_perioddelta is deprecated and will be removed in a "
- "future version. "
- "Use `dtindex - dtindex.to_period(freq).to_timestamp()` instead.",
- FutureWarning,
- # stacklevel chosen to be correct for when called from DatetimeIndex
- stacklevel=find_stack_level(),
- )
- from pandas.core.arrays.timedeltas import TimedeltaArray
-
- if self._ndarray.dtype != "M8[ns]":
- raise NotImplementedError("Only supported for nanosecond resolution.")
-
- i8delta = self.asi8 - self.to_period(freq).to_timestamp().asi8
- m8delta = i8delta.view("m8[ns]")
- return TimedeltaArray(m8delta)
-
# -----------------------------------------------------------------
# Properties - Vectorized Timestamp Properties/Methods
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index fb333aff66b72..f3913239534f3 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1328,44 +1328,6 @@ def items(self) -> Iterable[tuple[Hashable, Series]]:
for i, k in enumerate(self.columns):
yield k, self._ixs(i, axis=1)
- _shared_docs[
- "iteritems"
- ] = r"""
- Iterate over (column name, Series) pairs.
-
- .. deprecated:: 1.5.0
- iteritems is deprecated and will be removed in a future version.
- Use .items instead.
-
- Iterates over the DataFrame columns, returning a tuple with
- the column name and the content as a Series.
-
- Yields
- ------
- label : object
- The column names for the DataFrame being iterated over.
- content : Series
- The column entries belonging to each label, as a Series.
-
- See Also
- --------
- DataFrame.iter : Recommended alternative.
- DataFrame.iterrows : Iterate over DataFrame rows as
- (index, Series) pairs.
- DataFrame.itertuples : Iterate over DataFrame rows as namedtuples
- of the values.
- """
-
- @Appender(_shared_docs["iteritems"])
- def iteritems(self) -> Iterable[tuple[Hashable, Series]]:
- warnings.warn(
- "iteritems is deprecated and will be removed in a future version. "
- "Use .items instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- yield from self.items()
-
def iterrows(self) -> Iterable[tuple[Hashable, Series]]:
"""
Iterate over DataFrame rows as (index, Series) pairs.
@@ -11628,18 +11590,6 @@ def isin(self, values: Series | DataFrame | Sequence | Mapping) -> DataFrame:
)
columns = properties.AxisProperty(axis=0, doc="The column labels of the DataFrame.")
- @property
- def _AXIS_NUMBERS(self) -> dict[str, int]:
- """.. deprecated:: 1.1.0"""
- super()._AXIS_NUMBERS
- return {"index": 0, "columns": 1}
-
- @property
- def _AXIS_NAMES(self) -> dict[int, str]:
- """.. deprecated:: 1.1.0"""
- super()._AXIS_NAMES
- return {0: "index", 1: "columns"}
-
# ----------------------------------------------------------------------
# Add plotting methods to DataFrame
plot = CachedAccessor("plot", pandas.plotting.PlotAccessor)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f46b429a5fc75..e8ee68305ed67 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -494,25 +494,6 @@ def _data(self):
_info_axis_name: Literal["index", "columns"]
_AXIS_LEN: int
- @property
- def _AXIS_NUMBERS(self) -> dict[str, int]:
- """.. deprecated:: 1.1.0"""
- warnings.warn(
- "_AXIS_NUMBERS has been deprecated.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return {"index": 0}
-
- @property
- def _AXIS_NAMES(self) -> dict[int, str]:
- """.. deprecated:: 1.1.0"""
- level = self.ndim + 1
- warnings.warn(
- "_AXIS_NAMES has been deprecated.", FutureWarning, stacklevel=level
- )
- return {0: "index"}
-
@final
def _construct_axes_dict(self, axes: Sequence[Axis] | None = None, **kwargs):
"""Return an axes dictionary for myself."""
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index e89fa7806cba9..4c79461326eb2 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1460,45 +1460,6 @@ def _format_with_header(self, header: list[str_t], na_rep: str_t) -> list[str_t]
result = trim_front(format_array(values, None, justify="left"))
return header + result
- @final
- def to_native_types(self, slicer=None, **kwargs) -> np.ndarray:
- """
- Format specified values of `self` and return them.
-
- .. deprecated:: 1.2.0
-
- Parameters
- ----------
- slicer : int, array-like
- An indexer into `self` that specifies which values
- are used in the formatting process.
- kwargs : dict
- Options for specifying how the values should be formatted.
- These options include the following:
-
- 1) na_rep : str
- The value that serves as a placeholder for NULL values
- 2) quoting : bool or None
- Whether or not there are quoted values in `self`
- 3) date_format : str
- The format used to represent date-like values.
-
- Returns
- -------
- numpy.ndarray
- Formatted values.
- """
- warnings.warn(
- "The 'to_native_types' method is deprecated and will be removed in "
- "a future version. Use 'astype(str)' instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- values = self
- if slicer is not None:
- values = values[slicer]
- return values._format_native_types(**kwargs)
-
def _format_native_types(
self, *, na_rep: str_t = "", quoting=None, **kwargs
) -> npt.NDArray[np.object_]:
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index b08a0d3a60526..2a387abe0e6f9 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -63,7 +63,6 @@
import pandas.core.common as com
from pandas.core.indexes.base import (
Index,
- get_unanimous_names,
maybe_extract_name,
)
from pandas.core.indexes.datetimelike import DatetimeTimedeltaMixin
@@ -75,7 +74,6 @@
DataFrame,
Float64Index,
PeriodIndex,
- TimedeltaIndex,
)
@@ -229,7 +227,6 @@ class DatetimeIndex(DatetimeTimedeltaMixin):
floor
ceil
to_period
- to_perioddelta
to_pydatetime
to_series
to_frame
@@ -295,13 +292,6 @@ def to_period(self, freq=None) -> PeriodIndex:
arr = self._data.to_period(freq)
return PeriodIndex._simple_new(arr, name=self.name)
- @doc(DatetimeArray.to_perioddelta)
- def to_perioddelta(self, freq) -> TimedeltaIndex:
- from pandas.core.indexes.api import TimedeltaIndex
-
- arr = self._data.to_perioddelta(freq)
- return TimedeltaIndex._simple_new(arr, name=self.name)
-
@doc(DatetimeArray.to_julian_date)
def to_julian_date(self) -> Float64Index:
from pandas.core.indexes.api import Float64Index
@@ -439,43 +429,6 @@ def _can_range_setop(self, other) -> bool:
return False
return super()._can_range_setop(other)
- def union_many(self, others):
- """
- A bit of a hack to accelerate unioning a collection of indexes.
- """
- warnings.warn(
- "DatetimeIndex.union_many is deprecated and will be removed in "
- "a future version. Use obj.union instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
- this = self
-
- for other in others:
- if not isinstance(this, DatetimeIndex):
- this = Index.union(this, other)
- continue
-
- if not isinstance(other, DatetimeIndex):
- try:
- other = DatetimeIndex(other)
- except TypeError:
- pass
-
- this, other = this._maybe_utc_convert(other)
-
- if len(self) and len(other) and this._can_fast_union(other):
- # union already has fastpath handling for empty cases
- this = this._fast_union(other)
- else:
- this = Index.union(this, other)
-
- res_name = get_unanimous_names(self, *others)[0]
- if this.name != res_name:
- return this.rename(res_name)
- return this
-
def _maybe_utc_convert(self, other: Index) -> tuple[DatetimeIndex, Index]:
this = self
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index f0f6dfe4eb147..1e368fc86b74e 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -4,7 +4,6 @@
Callable,
Hashable,
)
-import warnings
import numpy as np
@@ -20,7 +19,6 @@
cache_readonly,
doc,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
is_dtype_equal,
@@ -359,16 +357,6 @@ class IntegerIndex(NumericIndex):
_is_backward_compat_public_numeric_index: bool = False
- @property
- def asi8(self) -> npt.NDArray[np.int64]:
- # do not cache or you'll create a memory leak
- warnings.warn(
- "Index.asi8 is deprecated and will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self._values.view(self._default_dtype)
-
class Int64Index(IntegerIndex):
_index_descr_args = {
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index de50f86756c7a..f15c244d8b628 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -12,7 +12,6 @@
List,
cast,
)
-import warnings
import numpy as np
@@ -31,7 +30,6 @@
cache_readonly,
doc,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
ensure_platform_int,
@@ -238,11 +236,6 @@ def _format_with_header(self, header: list[str], na_rep: str) -> list[str]:
return header + [f"{x:<{max_length}}" for x in self._range]
# --------------------------------------------------------------------
- _deprecation_message = (
- "RangeIndex.{} is deprecated and will be "
- "removed in a future version. Use RangeIndex.{} "
- "instead"
- )
@property
def start(self) -> int:
@@ -252,21 +245,6 @@ def start(self) -> int:
# GH 25710
return self._range.start
- @property
- def _start(self) -> int:
- """
- The value of the `start` parameter (``0`` if this was not supplied).
-
- .. deprecated:: 0.25.0
- Use ``start`` instead.
- """
- warnings.warn(
- self._deprecation_message.format("_start", "start"),
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.start
-
@property
def stop(self) -> int:
"""
@@ -274,22 +252,6 @@ def stop(self) -> int:
"""
return self._range.stop
- @property
- def _stop(self) -> int:
- """
- The value of the `stop` parameter.
-
- .. deprecated:: 0.25.0
- Use ``stop`` instead.
- """
- # GH 25710
- warnings.warn(
- self._deprecation_message.format("_stop", "stop"),
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.stop
-
@property
def step(self) -> int:
"""
@@ -298,22 +260,6 @@ def step(self) -> int:
# GH 25710
return self._range.step
- @property
- def _step(self) -> int:
- """
- The value of the `step` parameter (``1`` if this was not supplied).
-
- .. deprecated:: 0.25.0
- Use ``step`` instead.
- """
- # GH 25710
- warnings.warn(
- self._deprecation_message.format("_step", "step"),
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.step
-
@cache_readonly
def nbytes(self) -> int:
"""
diff --git a/pandas/core/series.py b/pandas/core/series.py
index b7d12158fd909..d3d3ead06cc91 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1782,37 +1782,6 @@ def items(self) -> Iterable[tuple[Hashable, Any]]:
"""
return zip(iter(self.index), iter(self))
- def iteritems(self) -> Iterable[tuple[Hashable, Any]]:
- """
- Lazily iterate over (index, value) tuples.
-
- .. deprecated:: 1.5.0
- iteritems is deprecated and will be removed in a future version.
- Use .items instead.
-
- This method returns an iterable tuple (index, value). This is
- convenient if you want to create a lazy iterator.
-
- Returns
- -------
- iterable
- Iterable of tuples containing the (index, value) pairs from a
- Series.
-
- See Also
- --------
- Series.items : Recommended alternative.
- DataFrame.items : Iterate over (column name, Series) pairs.
- DataFrame.iterrows : Iterate over DataFrame rows as (index, Series) pairs.
- """
- warnings.warn(
- "iteritems is deprecated and will be removed in a future version. "
- "Use .items instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.items()
-
# ----------------------------------------------------------------------
# Misc public methods
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index ea895e5656ccb..8f9d38044e7ef 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -780,25 +780,6 @@ def test_astype_object(self, arr1d):
assert asobj.dtype == "O"
assert list(asobj) == list(dti)
- def test_to_perioddelta(self, datetime_index, freqstr):
- # GH#23113
- dti = datetime_index
- arr = DatetimeArray(dti)
-
- msg = "to_perioddelta is deprecated and will be removed"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- # Deprecation GH#34853
- expected = dti.to_perioddelta(freq=freqstr)
- with tm.assert_produces_warning(FutureWarning, match=msg):
- # stacklevel is chosen to be "correct" for DatetimeIndex, not
- # DatetimeArray
- result = arr.to_perioddelta(freq=freqstr)
- assert isinstance(result, TimedeltaArray)
-
- # placeholder until these become actual EA subclasses and we can use
- # an EA-specific tm.assert_ function
- tm.assert_index_equal(pd.Index(result), pd.Index(expected))
-
def test_to_period(self, datetime_index, freqstr):
dti = datetime_index
arr = DatetimeArray(dti)
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index bc6c676568f73..1ab20c282b23a 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -382,8 +382,3 @@ def test_inspect_getmembers(self):
df = DataFrame()
with tm.assert_produces_warning(None):
inspect.getmembers(df)
-
- def test_dataframe_iteritems_deprecated(self):
- df = DataFrame([1])
- with tm.assert_produces_warning(FutureWarning):
- next(df.iteritems())
diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py
index a6c516b51c6c5..5b04241688a76 100644
--- a/pandas/tests/generic/test_generic.py
+++ b/pandas/tests/generic/test_generic.py
@@ -438,22 +438,6 @@ def test_axis_classmethods(self, frame_or_series):
assert obj._get_axis_name(v) == box._get_axis_name(v)
assert obj._get_block_manager_axis(v) == box._get_block_manager_axis(v)
- def test_axis_names_deprecated(self, frame_or_series):
- # GH33637
- box = frame_or_series
- obj = box(dtype=object)
- msg = "_AXIS_NAMES has been deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- obj._AXIS_NAMES
-
- def test_axis_numbers_deprecated(self, frame_or_series):
- # GH33637
- box = frame_or_series
- obj = box(dtype=object)
- msg = "_AXIS_NUMBERS has been deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- obj._AXIS_NUMBERS
-
def test_flags_identity(self, frame_or_series):
obj = Series([1, 2])
if frame_or_series is DataFrame:
diff --git a/pandas/tests/indexes/datetimes/test_formats.py b/pandas/tests/indexes/datetimes/test_formats.py
index 197038dbadaf7..01e6c0ad9b5c3 100644
--- a/pandas/tests/indexes/datetimes/test_formats.py
+++ b/pandas/tests/indexes/datetimes/test_formats.py
@@ -13,25 +13,7 @@
import pandas._testing as tm
-def test_to_native_types_method_deprecated():
- index = pd.date_range(freq="1D", periods=3, start="2017-01-01")
- expected = np.array(["2017-01-01", "2017-01-02", "2017-01-03"], dtype=object)
-
- with tm.assert_produces_warning(FutureWarning):
- result = index.to_native_types()
-
- tm.assert_numpy_array_equal(result, expected)
-
- # Make sure slicing works
- expected = np.array(["2017-01-01", "2017-01-03"], dtype=object)
-
- with tm.assert_produces_warning(FutureWarning):
- result = index.to_native_types([0, 2])
-
- tm.assert_numpy_array_equal(result, expected)
-
-
-def test_to_native_types():
+def test_format_native_types():
index = pd.date_range(freq="1D", periods=3, start="2017-01-01")
# First, with no arguments.
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index be8d70c127e8b..a4ae7c5fd6fa3 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -27,13 +27,6 @@
START, END = datetime(2009, 1, 1), datetime(2010, 1, 1)
-def test_union_many_deprecated():
- dti = date_range("2016-01-01", periods=3)
-
- with tm.assert_produces_warning(FutureWarning):
- dti.union_many([dti, dti])
-
-
class TestDatetimeIndexSetOps:
tz = [
None,
diff --git a/pandas/tests/indexes/ranges/test_range.py b/pandas/tests/indexes/ranges/test_range.py
index 1c65c369a7e99..003c69a6a11a6 100644
--- a/pandas/tests/indexes/ranges/test_range.py
+++ b/pandas/tests/indexes/ranges/test_range.py
@@ -77,13 +77,6 @@ def test_start_stop_step_attrs(self, index, start, stop, step):
assert index.stop == stop
assert index.step == step
- @pytest.mark.parametrize("attr_name", ["_start", "_stop", "_step"])
- def test_deprecated_start_stop_step_attrs(self, attr_name, simple_index):
- # GH 26581
- idx = simple_index
- with tm.assert_produces_warning(FutureWarning):
- getattr(idx, attr_name)
-
def test_copy(self):
i = RangeIndex(5, name="Foo")
i_copy = i.copy()
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index 0aab381d6e076..5a66597bdb314 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -205,11 +205,6 @@ def test_series_datetimelike_attribute_access_invalid(self):
with pytest.raises(AttributeError, match=msg):
ser.weekday
- def test_series_iteritems_deprecated(self):
- ser = Series([1])
- with tm.assert_produces_warning(FutureWarning):
- next(ser.iteritems())
-
@pytest.mark.parametrize(
"kernel, has_numeric_only",
[
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49244 | 2022-10-22T01:26:23Z | 2022-10-24T20:42:07Z | 2022-10-24T20:42:07Z | 2022-10-24T20:47:39Z |
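The record above removes several long-deprecated APIs. A short sketch of the documented replacements (hedged: written against the post-removal pandas API, where `Series.iteritems` is gone in favor of `Series.items`, and `Index.to_native_types` in favor of `astype(str)`):

```python
import pandas as pd

ser = pd.Series([10, 20], index=["a", "b"])
# Replacement for the removed ser.iteritems(): lazily yields (index, value).
pairs = list(ser.items())

idx = pd.Index([1.5, 2.5])
# Replacement for the removed idx.to_native_types(): format values as strings.
as_str = idx.astype(str)
```

The `DatetimeIndex.to_perioddelta` removal follows the same pattern; per the whatsnew entry its replacement is the expression `dtindex - dtindex.to_period(freq).to_timestamp()`.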
STYLE: fix pylint consider-using-dict-items warnings | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 074b6068a4518..0282278183dd3 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -653,7 +653,7 @@ def index(request):
@pytest.fixture(
params=[
- key for key in indices_dict if not isinstance(indices_dict[key], MultiIndex)
+ key for key, value in indices_dict.items() if not isinstance(value, MultiIndex)
]
)
def index_flat(request):
@@ -671,12 +671,12 @@ def index_flat(request):
@pytest.fixture(
params=[
key
- for key in indices_dict
+ for key, value in indices_dict.items()
if not (
key in ["int", "uint", "range", "empty", "repeats", "bool-dtype"]
or key.startswith("num_")
)
- and not isinstance(indices_dict[key], MultiIndex)
+ and not isinstance(value, MultiIndex)
]
)
def index_with_missing(request):
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index 81ffa74693156..abd1182214f5f 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -1646,9 +1646,8 @@ def _clean_options(
validate_header_arg(options["header"])
- for arg in _deprecated_defaults.keys():
+ for arg, depr_default in _deprecated_defaults.items():
parser_default = _c_parser_defaults.get(arg, parser_defaults[arg])
- depr_default = _deprecated_defaults[arg]
if result.get(arg, depr_default) != depr_default.default_value:
msg = (
f"The {arg} argument has been deprecated and will be "
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 037aeb5339818..5e9b70aeb2a82 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -1853,8 +1853,8 @@ def _do_convert_missing(self, data: DataFrame, convert_missing: bool) -> DataFra
replacements[colname] = replacement
if replacements:
- for col in replacements:
- data[col] = replacements[col]
+ for col, value in replacements.items():
+ data[col] = value
return data
def _insert_strls(self, data: DataFrame) -> DataFrame:
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index 69087b6822f2e..8e6aa43ff434c 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -301,7 +301,7 @@ def test_astype_categorical(self, dtype):
d = {"A": list("abbc"), "B": list("bccd"), "C": list("cdde")}
df = DataFrame(d)
result = df.astype(dtype)
- expected = DataFrame({k: Categorical(d[k], dtype=dtype) for k in d})
+ expected = DataFrame({k: Categorical(v, dtype=dtype) for k, v in d.items()})
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("cls", [CategoricalDtype, DatetimeTZDtype, IntervalDtype])
diff --git a/pandas/tests/plotting/frame/test_frame.py b/pandas/tests/plotting/frame/test_frame.py
index 06e04f06c38d1..44f57b02d0f0a 100644
--- a/pandas/tests/plotting/frame/test_frame.py
+++ b/pandas/tests/plotting/frame/test_frame.py
@@ -1827,11 +1827,11 @@ def test_memory_leak(self):
# force a garbage collection
gc.collect()
msg = "weakly-referenced object no longer exists"
- for key in results:
+ for result_value in results.values():
# check that every plot was collected
with pytest.raises(ReferenceError, match=msg):
# need to actually access something to get an error
- results[key].lines
+ result_value.lines
def test_df_gridspec_patterns(self):
# GH 10819
diff --git a/pyproject.toml b/pyproject.toml
index f18b54a8dcd03..236d944b0c954 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -80,7 +80,6 @@ disable = [
# pylint type "C": convention, for programming standard violation
"consider-iterating-dictionary",
- "consider-using-dict-items",
"consider-using-f-string",
"disallowed-name",
"import-outside-toplevel",
| Related to https://github.com/pandas-dev/pandas/issues/48855
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
`consider-using-dict-items` is also removed from the ignored warnings in `pyproject.toml`.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49243 | 2022-10-22T00:22:36Z | 2022-10-24T20:44:38Z | 2022-10-24T20:44:38Z | 2022-10-25T11:28:03Z |
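The pylint `consider-using-dict-items` warning fixed throughout the record above flags loops that iterate a dict's keys only to re-index the dict for each value. A minimal stdlib-only sketch of the before/after pattern:

```python
d = {"alpha": 1, "beta": 2}

# Pattern pylint flags with consider-using-dict-items:
# iterating keys and looking each value up again via d[k].
doubled_flagged = {k: d[k] * 2 for k in d}

# Preferred form: iterate .items() so each value is bound directly,
# avoiding the redundant per-key lookup.
doubled = {k: v * 2 for k, v in d.items()}
```

Both forms produce the same mapping; the `.items()` version is what the diff above switches `pandas/conftest.py`, `readers.py`, and `stata.py` to.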
DEPR: Series(dt64_naive, dtype=dt64tz) | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 1050eed40fbb4..be4a7f6390e37 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -165,6 +165,7 @@ Removal of prior version deprecations/changes
- Removed deprecated :meth:`Index.is_mixed`, check ``index.inferred_type`` directly instead (:issue:`32922`)
- Removed deprecated :func:`pandas.api.types.is_categorical`; use :func:`pandas.api.types.is_categorical_dtype` instead (:issue:`33385`)
- Removed deprecated :meth:`Index.asi8` (:issue:`37877`)
+- Enforced deprecation changing behavior when passing ``datetime64[ns]`` dtype data and timezone-aware dtype to :class:`Series`, interpreting the values as wall-times instead of UTC times, matching :class:`DatetimeIndex` behavior (:issue:`41662`)
- Removed deprecated :meth:`DataFrame._AXIS_NUMBERS`, :meth:`DataFrame._AXIS_NAMES`, :meth:`Series._AXIS_NUMBERS`, :meth:`Series._AXIS_NAMES` (:issue:`33637`)
- Removed deprecated :meth:`Index.to_native_types`, use ``obj.astype(str)`` instead (:issue:`36418`)
- Removed deprecated :meth:`Series.iteritems`, :meth:`DataFrame.iteritems`, use ``obj.items`` instead (:issue:`45321`)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 07d689d737c87..8395d54224f1d 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -2123,10 +2123,15 @@ def _sequence_to_dt64ns(
# Convert tz-naive to UTC
# TODO: if tz is UTC, are there situations where we *don't* want a
# copy? tz_localize_to_utc always makes one.
+ shape = data.shape
+ if data.ndim > 1:
+ data = data.ravel()
+
data = tzconversion.tz_localize_to_utc(
data.view("i8"), tz, ambiguous=ambiguous, creso=data_unit
)
data = data.view(new_dtype)
+ data = data.reshape(shape)
assert data.dtype == new_dtype, data.dtype
result = data
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index b7db95269439c..447006572f22d 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -55,10 +55,7 @@
is_object_dtype,
is_timedelta64_ns_dtype,
)
-from pandas.core.dtypes.dtypes import (
- DatetimeTZDtype,
- PandasDtype,
-)
+from pandas.core.dtypes.dtypes import PandasDtype
from pandas.core.dtypes.generic import (
ABCExtensionArray,
ABCIndex,
@@ -800,16 +797,6 @@ def _try_cast(
elif isinstance(dtype, ExtensionDtype):
# create an extension array from its dtype
- if isinstance(dtype, DatetimeTZDtype):
- # We can't go through _from_sequence because it handles dt64naive
- # data differently; _from_sequence treats naive as wall times,
- # while maybe_cast_to_datetime treats it as UTC
- # see test_maybe_promote_any_numpy_dtype_with_datetimetz
- # TODO(2.0): with deprecations enforced, should be able to remove
- # special case.
- return maybe_cast_to_datetime(arr, dtype)
- # TODO: copy?
-
array_type = dtype.construct_array_type()._from_sequence
subarr = array_type(arr, dtype=dtype, copy=copy)
return subarr
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 9830d22f3e2e5..ec313f91d2721 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -65,7 +65,6 @@
is_complex,
is_complex_dtype,
is_datetime64_dtype,
- is_datetime64tz_dtype,
is_dtype_equal,
is_extension_array_dtype,
is_float,
@@ -1314,13 +1313,15 @@ def try_timedelta(v: np.ndarray) -> np.ndarray:
def maybe_cast_to_datetime(
- value: ExtensionArray | np.ndarray | list, dtype: DtypeObj | None
+ value: ExtensionArray | np.ndarray | list, dtype: np.dtype | None
) -> ExtensionArray | np.ndarray:
"""
try to cast the array/value to a datetimelike dtype, converting float
nan to iNaT
We allow a list *only* when dtype is not None.
+
+ Caller is responsible for handling ExtensionDtype cases.
"""
from pandas.core.arrays.datetimes import sequence_to_datetimes
from pandas.core.arrays.timedeltas import TimedeltaArray
@@ -1332,18 +1333,22 @@ def maybe_cast_to_datetime(
# TODO: _from_sequence would raise ValueError in cases where
# _ensure_nanosecond_dtype raises TypeError
dtype = cast(np.dtype, dtype)
- dtype = _ensure_nanosecond_dtype(dtype)
+ # Incompatible types in assignment (expression has type "Union[dtype[Any],
+ # ExtensionDtype]", variable has type "Optional[dtype[Any]]")
+ dtype = _ensure_nanosecond_dtype(dtype) # type: ignore[assignment]
res = TimedeltaArray._from_sequence(value, dtype=dtype)
return res
if dtype is not None:
is_datetime64 = is_datetime64_dtype(dtype)
- is_datetime64tz = is_datetime64tz_dtype(dtype)
vdtype = getattr(value, "dtype", None)
- if is_datetime64 or is_datetime64tz:
- dtype = _ensure_nanosecond_dtype(dtype)
+ if is_datetime64:
+ # Incompatible types in assignment (expression has type
+ # "Union[dtype[Any], ExtensionDtype]", variable has type
+ # "Optional[dtype[Any]]")
+ dtype = _ensure_nanosecond_dtype(dtype) # type: ignore[assignment]
value = np.array(value, copy=False)
@@ -1352,59 +1357,22 @@ def maybe_cast_to_datetime(
_disallow_mismatched_datetimelike(value, dtype)
try:
- if is_datetime64:
- dta = sequence_to_datetimes(value)
- # GH 25843: Remove tz information since the dtype
- # didn't specify one
-
- if dta.tz is not None:
- raise ValueError(
- "Cannot convert timezone-aware data to "
- "timezone-naive dtype. Use "
- "pd.Series(values).dt.tz_localize(None) instead."
- )
-
- # TODO(2.0): Do this astype in sequence_to_datetimes to
- # avoid potential extra copy?
- dta = dta.astype(dtype, copy=False)
- value = dta
- elif is_datetime64tz:
- dtype = cast(DatetimeTZDtype, dtype)
- # The string check can be removed once issue #13712
- # is solved. String data that is passed with a
- # datetime64tz is assumed to be naive which should
- # be localized to the timezone.
- is_dt_string = is_string_dtype(value.dtype)
- dta = sequence_to_datetimes(value)
- if dta.tz is not None:
- value = dta.astype(dtype, copy=False)
- elif is_dt_string:
- # Strings here are naive, so directly localize
- # equiv: dta.astype(dtype) # though deprecated
-
- value = dta.tz_localize(dtype.tz)
- else:
- # Numeric values are UTC at this point,
- # so localize and convert
- # equiv: Series(dta).astype(dtype) # though deprecated
- if getattr(vdtype, "kind", None) == "M":
- # GH#24559, GH#33401 deprecate behavior inconsistent
- # with DatetimeArray/DatetimeIndex
- warnings.warn(
- "In a future version, constructing a Series "
- "from datetime64[ns] data and a "
- "DatetimeTZDtype will interpret the data "
- "as wall-times instead of "
- "UTC times, matching the behavior of "
- "DatetimeIndex. To treat the data as UTC "
- "times, use pd.Series(data).dt"
- ".tz_localize('UTC').tz_convert(dtype.tz) "
- "or pd.Series(data.view('int64'), dtype=dtype)",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
- value = dta.tz_localize("UTC").tz_convert(dtype.tz)
+ dta = sequence_to_datetimes(value)
+ # GH 25843: Remove tz information since the dtype
+ # didn't specify one
+
+ if dta.tz is not None:
+ raise ValueError(
+ "Cannot convert timezone-aware data to "
+ "timezone-naive dtype. Use "
+ "pd.Series(values).dt.tz_localize(None) instead."
+ )
+
+ # TODO(2.0): Do this astype in sequence_to_datetimes to
+ # avoid potential extra copy?
+ dta = dta.astype(dtype, copy=False)
+ value = dta
+
except OutOfBoundsDatetime:
raise
except ParserError:
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index c1d0ab730fe7e..054663fcd0626 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -43,6 +43,7 @@
is_named_tuple,
is_object_dtype,
)
+from pandas.core.dtypes.dtypes import ExtensionDtype
from pandas.core.dtypes.generic import (
ABCDataFrame,
ABCSeries,
@@ -1054,7 +1055,15 @@ def _convert_object_array(
def convert(arr):
if dtype != np.dtype("O"):
arr = lib.maybe_convert_objects(arr)
- arr = maybe_cast_to_datetime(arr, dtype)
+
+ if isinstance(dtype, ExtensionDtype):
+ # TODO: test(s) that get here
+ # TODO: try to de-duplicate this convert function with
+ # core.construction functions
+ cls = dtype.construct_array_type()
+ arr = cls._from_sequence(arr, dtype=dtype, copy=False)
+ else:
+ arr = maybe_cast_to_datetime(arr, dtype)
return arr
arrays = [convert(arr) for arr in content]
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 3faddfeca38bd..35ebd152f447c 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -1241,14 +1241,14 @@ def test_construction_consistency(self):
result = Series(ser.dt.tz_convert("UTC"), dtype=ser.dtype)
tm.assert_series_equal(result, ser)
- msg = "will interpret the data as wall-times"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- # deprecate behavior inconsistent with DatetimeIndex GH#33401
- result = Series(ser.values, dtype=ser.dtype)
- tm.assert_series_equal(result, ser)
+ # Pre-2.0 dt64 values were treated as utc, which was inconsistent
+ # with DatetimeIndex, which treats them as wall times, see GH#33401
+ result = Series(ser.values, dtype=ser.dtype)
+ expected = Series(ser.values).dt.tz_localize(ser.dtype.tz)
+ tm.assert_series_equal(result, expected)
with tm.assert_produces_warning(None):
- # one suggested alternative to the deprecated usage
+ # one suggested alternative to the deprecated (changed in 2.0) usage
middle = Series(ser.values).dt.tz_localize("UTC")
result = middle.dt.tz_convert(ser.dtype.tz)
tm.assert_series_equal(result, ser)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49242 | 2022-10-21T23:42:51Z | 2022-10-25T18:37:32Z | 2022-10-25T18:37:31Z | 2022-10-25T19:28:49Z |
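The updated test in this diff replaces the deprecated `Series(values, dtype=tz_dtype)` usage with an explicit localize step. A minimal sketch of that recommended alternative (assuming pandas is installed; the example data is illustrative, not from the PR):

```python
import pandas as pd

# Naive datetime64[ns] values. Pre-2.0, Series(values, dtype=tz_dtype)
# interpreted these as UTC times, while DatetimeIndex treated them as
# wall times (GH#33401). The explicit, version-stable spelling is to
# localize after construction:
naive = pd.Series(pd.to_datetime(["2013-01-01", "2013-01-02"]))
aware = naive.dt.tz_localize("US/Eastern")  # treat values as wall times
print(aware.dtype)
```

This is the same `ser.dt.tz_localize(...)` pattern the modified `test_construction_consistency` uses to build its expected result.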
CI/TST: pip extras install | diff --git a/.github/workflows/package-checks.yml b/.github/workflows/package-checks.yml
new file mode 100644
index 0000000000000..762cb509be136
--- /dev/null
+++ b/.github/workflows/package-checks.yml
@@ -0,0 +1,56 @@
+name: Package Checks
+
+on:
+ push:
+ branches:
+ - main
+ - 1.5.x
+ pull_request:
+ branches:
+ - main
+ - 1.5.x
+
+permissions:
+ contents: read
+
+jobs:
+ pip:
+ runs-on: ubuntu-latest
+ strategy:
+ matrix:
+ extra: ["test", "performance", "timezone", "computation", "fss", "aws", "gcp", "excel", "parquet", "feather", "hdf5", "spss", "postgresql", "mysql", "sql-other", "html", "xml", "plot", "output_formatting", "clipboard", "compression", "all"]
+ fail-fast: false
+ name: Install Extras - ${{ matrix.extra }}
+ concurrency:
+ # https://github.community/t/concurrecy-not-work-for-push/183068/7
+ group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-pip-extras-${{ matrix.extra }}
+ cancel-in-progress: true
+
+ steps:
+ - name: Checkout
+ uses: actions/checkout@v3
+ with:
+ fetch-depth: 0
+
+ - name: Setup Python
+ id: setup_python
+ uses: actions/setup-python@v3
+ with:
+ python-version: '3.8'
+
+ # Hacky patch to disable building cython extensions.
+ # This job should only check that the extras successfully install.
+ - name: Disable building ext_modules
+ run: |
+ sed -i '/ext_modules=/d' setup.py
+ shell: bash -el {0}
+
+ - name: Install required dependencies
+ run: |
+ python -m pip install --upgrade pip setuptools wheel python-dateutil pytz numpy cython
+ shell: bash -el {0}
+
+ - name: Pip install with extra
+ run: |
+ python -m pip install -e .[${{ matrix.extra }}] --no-build-isolation
+ shell: bash -el {0}
diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index 11c419c399877..d4d7ee5efcbb0 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -415,7 +415,7 @@ depending on required compatibility.
Dependency Minimum Version optional_extra Notes
========================= ================== ================ =============================================================
PyTables 3.6.1 hdf5 HDF5-based reading / writing
-blosc 1.21.0 hdf5 Compression for HDF5
+blosc 1.21.0 hdf5 Compression for HDF5; only available on ``conda``
zlib hdf5 Compression for HDF5
fastparquet 0.6.3 - Parquet reading / writing (pyarrow is default)
pyarrow 6.0.0 parquet, feather Parquet, ORC, and feather reading / writing
diff --git a/scripts/validate_min_versions_in_sync.py b/scripts/validate_min_versions_in_sync.py
index ad0375a4320a2..a69bdb95c0f9b 100755
--- a/scripts/validate_min_versions_in_sync.py
+++ b/scripts/validate_min_versions_in_sync.py
@@ -24,7 +24,7 @@
)
CODE_PATH = pathlib.Path("pandas/compat/_optional.py").resolve()
SETUP_PATH = pathlib.Path("setup.cfg").resolve()
-EXCLUDE_DEPS = {"tzdata"}
+EXCLUDE_DEPS = {"tzdata", "blosc"}
# pandas package is not available
# in pre-commit environment
sys.path.append("pandas/compat")
@@ -38,10 +38,11 @@
def get_versions_from_code() -> dict[str, str]:
+ """Min versions for checking within pandas code."""
install_map = _optional.INSTALL_MAPPING
versions = _optional.VERSIONS
for item in EXCLUDE_DEPS:
- versions.pop(item)
+ versions.pop(item, None)
return {
install_map.get(k, k).casefold(): v
for k, v in versions.items()
@@ -50,6 +51,7 @@ def get_versions_from_code() -> dict[str, str]:
def get_versions_from_ci(content: list[str]) -> tuple[dict[str, str], dict[str, str]]:
+ """Min versions in CI job for testing all optional dependencies."""
# Don't parse with pyyaml because it ignores comments we're looking for
seen_required = False
seen_optional = False
@@ -79,6 +81,7 @@ def get_versions_from_ci(content: list[str]) -> tuple[dict[str, str], dict[str,
def get_versions_from_setup() -> dict[str, str]:
+ """Min versions in setup.cfg for pip install pandas[extra]."""
install_map = _optional.INSTALL_MAPPING
optional_dependencies = {}
@@ -99,7 +102,7 @@ def get_versions_from_setup() -> dict[str, str]:
optional_dependencies[install_map.get(package, package).casefold()] = version
for item in EXCLUDE_DEPS:
- optional_dependencies.pop(item)
+ optional_dependencies.pop(item, None)
return optional_dependencies
diff --git a/setup.cfg b/setup.cfg
index 785143c7b647c..62bebac8a2885 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -85,7 +85,8 @@ parquet =
feather =
pyarrow>=6.0.0
hdf5 =
- blosc>=1.20.1
+ # blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)
+ # blosc>=1.20.1
tables>=3.6.1
spss =
pyreadstat>=1.1.2
@@ -121,7 +122,8 @@ compression =
# `all ` should be kept as the complete set of pandas optional dependencies for general use.
all =
beautifulsoup4>=4.9.3
- blosc>=1.21.0
+ # blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)
+ # blosc>=1.21.0
bottleneck>=1.3.2
brotlipy>=0.7.0
fastparquet>=0.6.3
| - [x] closes #48942 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
`(c-)blosc` is only available on conda, so removing it from the pip extras
https://www.pytables.org/usersguide/installation.html#prerequisites | https://api.github.com/repos/pandas-dev/pandas/pulls/49241 | 2022-10-21T22:51:16Z | 2022-11-16T22:19:13Z | 2022-11-16T22:19:13Z | 2022-11-18T20:23:19Z |
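The workflow above disables building C extensions with a self-described "hacky" sed one-liner. A standalone illustration of that sed pattern, run against a throwaway file rather than the real `setup.py` (assumes GNU sed for the `-i` in-place flag):

```shell
# Create a toy setup.py-like file, then delete any line mentioning
# ext_modules=, mirroring the workflow's "Disable building ext_modules" step.
printf 'name="demo"\next_modules=[ext]\nversion="1.0"\n' > toy_setup.py
sed -i '/ext_modules=/d' toy_setup.py
cat toy_setup.py
```

After the `sed` call only the `name` and `version` lines remain, so a subsequent `pip install -e .` would not attempt to compile Cython extensions.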
DEPR: categorical.mode | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 164dc74836508..48371b7f14b28 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2353,29 +2353,6 @@ def max(self, *, skipna: bool = True, **kwargs):
pointer = self._codes.max()
return self._wrap_reduction_result(None, pointer)
- def mode(self, dropna: bool = True) -> Categorical:
- """
- Returns the mode(s) of the Categorical.
-
- Always returns `Categorical` even if only one value.
-
- Parameters
- ----------
- dropna : bool, default True
- Don't consider counts of NaN/NaT.
-
- Returns
- -------
- modes : `Categorical` (sorted)
- """
- warn(
- "Categorical.mode is deprecated and will be removed in a future version. "
- "Use Series.mode instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self._mode(dropna=dropna)
-
def _mode(self, dropna: bool = True) -> Categorical:
codes = self._codes
mask = None
diff --git a/pandas/tests/arrays/categorical/test_analytics.py b/pandas/tests/arrays/categorical/test_analytics.py
index 39590bcc6636e..e9f4be11ee4b7 100644
--- a/pandas/tests/arrays/categorical/test_analytics.py
+++ b/pandas/tests/arrays/categorical/test_analytics.py
@@ -158,10 +158,8 @@ def test_numpy_min_max_axis_equals_none(self, method, expected):
],
)
def test_mode(self, values, categories, exp_mode):
- s = Categorical(values, categories=categories, ordered=True)
- msg = "Use Series.mode instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = s.mode()
+ cat = Categorical(values, categories=categories, ordered=True)
+ res = Series(cat).mode()._values
exp = Categorical(exp_mode, categories=categories, ordered=True)
tm.assert_categorical_equal(res, exp)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index b6891dac9034b..a6b765117f616 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -2296,21 +2296,17 @@ def test_uint64_overflow(self):
def test_categorical(self):
c = Categorical([1, 2])
exp = c
- msg = "Categorical.mode is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = c.mode()
+ res = Series(c).mode()._values
tm.assert_categorical_equal(res, exp)
c = Categorical([1, "a", "a"])
exp = Categorical(["a"], categories=[1, "a"])
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = c.mode()
+ res = Series(c).mode()._values
tm.assert_categorical_equal(res, exp)
c = Categorical([1, 1, 2, 3, 3])
exp = Categorical([1, 3], categories=[1, 2, 3])
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = c.mode()
+ res = Series(c).mode()._values
tm.assert_categorical_equal(res, exp)
def test_index(self):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49238 | 2022-10-21T20:28:15Z | 2022-10-22T11:42:11Z | 2022-10-22T11:42:11Z | 2022-10-22T21:45:22Z |
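The replacement pattern this removal enforces, as used in the updated tests above — a short sketch, assuming pandas is installed:

```python
from pandas import Categorical, Series

cat = Categorical([1, 1, 2, 3, 3], categories=[1, 2, 3])
# Categorical.mode() is removed; wrap the Categorical in a Series and
# call Series.mode() instead. The tests unwrap with ._values to get a
# Categorical back for comparison.
modes = Series(cat).mode()._values
print(list(modes))  # both 1 and 3 appear twice, so both are modes
```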
STYLE: fix pylint no-else-break warnings | diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index daf89319a4520..ddd73375f8871 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -704,7 +704,7 @@ def _next_line(self) -> list[Scalar]:
self._is_line_empty(self.data[self.pos - 1]) or line
):
break
- elif self.skip_blank_lines:
+ if self.skip_blank_lines:
ret = self._remove_empty_lines([line])
if ret:
line = ret[0]
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 783b5f8cd63ae..0f24e3f31cc4b 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -971,18 +971,18 @@ def _query_iterator(
[], columns=columns, coerce_float=coerce_float
)
break
- else:
- has_read_data = True
- self.frame = DataFrame.from_records(
- data, columns=columns, coerce_float=coerce_float
- )
- self._harmonize_columns(parse_dates=parse_dates)
+ has_read_data = True
+ self.frame = DataFrame.from_records(
+ data, columns=columns, coerce_float=coerce_float
+ )
- if self.index is not None:
- self.frame.set_index(self.index, inplace=True)
+ self._harmonize_columns(parse_dates=parse_dates)
+
+ if self.index is not None:
+ self.frame.set_index(self.index, inplace=True)
- yield self.frame
+ yield self.frame
def read(
self,
@@ -1489,16 +1489,16 @@ def _query_iterator(
parse_dates=parse_dates,
)
break
- else:
- has_read_data = True
- yield _wrap_result(
- data,
- columns,
- index_col=index_col,
- coerce_float=coerce_float,
- parse_dates=parse_dates,
- dtype=dtype,
- )
+
+ has_read_data = True
+ yield _wrap_result(
+ data,
+ columns,
+ index_col=index_col,
+ coerce_float=coerce_float,
+ parse_dates=parse_dates,
+ dtype=dtype,
+ )
def read_query(
self,
@@ -2053,16 +2053,16 @@ def _query_iterator(
[], columns=columns, coerce_float=coerce_float
)
break
- else:
- has_read_data = True
- yield _wrap_result(
- data,
- columns,
- index_col=index_col,
- coerce_float=coerce_float,
- parse_dates=parse_dates,
- dtype=dtype,
- )
+
+ has_read_data = True
+ yield _wrap_result(
+ data,
+ columns,
+ index_col=index_col,
+ coerce_float=coerce_float,
+ parse_dates=parse_dates,
+ dtype=dtype,
+ )
def read_query(
self,
diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py
index 70fca27c837f3..4d5feafb5ebd2 100644
--- a/pandas/plotting/_matplotlib/converter.py
+++ b/pandas/plotting/_matplotlib/converter.py
@@ -404,9 +404,8 @@ def __call__(self):
if num <= interval * (max_millis_ticks - 1):
self._interval = interval
break
- else:
- # We went through the whole loop without breaking, default to 1
- self._interval = 1000.0
+ # We went through the whole loop without breaking, default to 1
+ self._interval = 1000.0
estimate = (nmax - nmin) / (self._get_unit() * self._get_interval())
diff --git a/pandas/tests/io/test_user_agent.py b/pandas/tests/io/test_user_agent.py
index ac4ca5dce6dc3..3b552805198b5 100644
--- a/pandas/tests/io/test_user_agent.py
+++ b/pandas/tests/io/test_user_agent.py
@@ -233,9 +233,8 @@ def responder(request):
if wait_time > kill_time:
server_process.kill()
break
- else:
- wait_time += 0.1
- time.sleep(0.1)
+ wait_time += 0.1
+ time.sleep(0.1)
server_process.close()
diff --git a/pyproject.toml b/pyproject.toml
index 09ea61c06380c..f18b54a8dcd03 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -122,7 +122,6 @@ disable = [
"inconsistent-return-statements",
"invalid-sequence-index",
"literal-comparison",
- "no-else-break",
"no-else-continue",
"no-else-raise",
"no-else-return",
| Related to https://github.com/pandas-dev/pandas/issues/48855.
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
The `no-else-break` is also removed from the ignored warnings in `pyproject.toml`.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49237 | 2022-10-21T19:00:45Z | 2022-10-21T23:31:29Z | 2022-10-21T23:31:29Z | 2022-10-21T23:31:53Z |
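The refactor pattern applied throughout this diff, shown in isolation (the function and data are illustrative, not from the PR): when an `if` branch ends in `break`, wrapping the fallthrough work in `elif`/`else` is redundant, which is exactly what pylint's `no-else-break` check flags.

```python
def first_match(values, predicate):
    """Return the first value satisfying predicate, or None."""
    match = None
    for value in values:
        if predicate(value):
            match = value
            break
        # Before the fix this line sat under an `else:`; since the branch
        # above ends in `break`, control can only reach here when the
        # condition was false, so the `else` wrapper adds nothing.
        match = None
    return match
```

This mirrors the `sql.py` hunks above: the break path stays guarded, and the non-break work is simply dedented one level.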
DEPR: enforce DatetimeArray.astype deprecations | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 4ac737bb6b29a..6f3602b1d0202 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -148,6 +148,8 @@ Removal of prior version deprecations/changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Enforced deprecation disallowing passing a timezone-aware :class:`Timestamp` and ``dtype="datetime64[ns]"`` to :class:`Series` or :class:`DataFrame` constructors (:issue:`41555`)
- Enforced deprecation disallowing passing a sequence of timezone-aware values and ``dtype="datetime64[ns]"`` to :class:`Series` or :class:`DataFrame` constructors (:issue:`41555`)
+- Enforced deprecation disallowing using ``.astype`` to convert a ``datetime64[ns]`` :class:`Series`, :class:`DataFrame`, or :class:`DatetimeIndex` to timezone-aware dtype, use ``obj.tz_localize`` or ``ser.dt.tz_localize`` instead (:issue:`39258`)
+- Enforced deprecation disallowing using ``.astype`` to convert a timezone-aware :class:`Series`, :class:`DataFrame`, or :class:`DatetimeIndex` to timezone-naive ``datetime64[ns]`` dtype, use ``obj.tz_localize(None)`` or ``obj.tz_convert("UTC").tz_localize(None)`` instead (:issue:`39258`)
- Removed Date parser functions :func:`~pandas.io.date_converters.parse_date_time`,
:func:`~pandas.io.date_converters.parse_date_fields`, :func:`~pandas.io.date_converters.parse_all_fields`
and :func:`~pandas.io.date_converters.generic_parser` (:issue:`24518`)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index ca0a745c180e9..19ef100f24f1b 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -56,14 +56,12 @@
from pandas.util._exceptions import find_stack_level
from pandas.util._validators import validate_inclusive
-from pandas.core.dtypes.astype import astype_dt64_to_dt64tz
from pandas.core.dtypes.common import (
DT64NS_DTYPE,
INT64_DTYPE,
is_bool_dtype,
is_datetime64_any_dtype,
is_datetime64_dtype,
- is_datetime64_ns_dtype,
is_datetime64tz_dtype,
is_dtype_equal,
is_extension_array_dtype,
@@ -660,15 +658,29 @@ def astype(self, dtype, copy: bool = True):
return type(self)._simple_new(res_values, dtype=res_values.dtype)
# TODO: preserve freq?
- elif is_datetime64_ns_dtype(dtype):
- return astype_dt64_to_dt64tz(self, dtype, copy, via_utc=False)
-
elif self.tz is not None and isinstance(dtype, DatetimeTZDtype):
# tzaware unit conversion e.g. datetime64[s, UTC]
np_dtype = np.dtype(dtype.str)
res_values = astype_overflowsafe(self._ndarray, np_dtype, copy=copy)
- return type(self)._simple_new(res_values, dtype=dtype)
- # TODO: preserve freq?
+ return type(self)._simple_new(res_values, dtype=dtype, freq=self.freq)
+
+ elif self.tz is None and isinstance(dtype, DatetimeTZDtype):
+ # pre-2.0 this did self.tz_localize(dtype.tz), which did not match
+ # the Series behavior
+ raise TypeError(
+ "Cannot use .astype to convert from timezone-naive dtype to "
+ "timezone-aware dtype. Use obj.tz_localize instead."
+ )
+
+ elif self.tz is not None and is_datetime64_dtype(dtype):
+ # pre-2.0 behavior for DTA/DTI was
+ # values.tz_convert("UTC").tz_localize(None), which did not match
+ # the Series behavior
+ raise TypeError(
+ "Cannot use .astype to convert from timezone-aware dtype to "
+ "timezone-naive dtype. Use obj.tz_localize(None) or "
+ "obj.tz_convert('UTC').tz_localize(None) instead."
+ )
elif (
self.tz is None
diff --git a/pandas/core/dtypes/astype.py b/pandas/core/dtypes/astype.py
index ad3dc0a876e00..718badc2e4085 100644
--- a/pandas/core/dtypes/astype.py
+++ b/pandas/core/dtypes/astype.py
@@ -7,10 +7,8 @@
import inspect
from typing import (
TYPE_CHECKING,
- cast,
overload,
)
-import warnings
import numpy as np
@@ -27,7 +25,6 @@
IgnoreRaise,
)
from pandas.errors import IntCastingNaNError
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
is_datetime64_dtype,
@@ -39,17 +36,13 @@
pandas_dtype,
)
from pandas.core.dtypes.dtypes import (
- DatetimeTZDtype,
ExtensionDtype,
PandasDtype,
)
from pandas.core.dtypes.missing import isna
if TYPE_CHECKING:
- from pandas.core.arrays import (
- DatetimeArray,
- ExtensionArray,
- )
+ from pandas.core.arrays import ExtensionArray
_dtype_obj = np.dtype(object)
@@ -227,7 +220,13 @@ def astype_array(values: ArrayLike, dtype: DtypeObj, copy: bool = False) -> Arra
raise TypeError(msg)
if is_datetime64tz_dtype(dtype) and is_datetime64_dtype(values.dtype):
- return astype_dt64_to_dt64tz(values, dtype, copy, via_utc=True)
+ # Series.astype behavior pre-2.0 did
+ # values.tz_localize("UTC").tz_convert(dtype.tz)
+ # which did not match the DTA/DTI behavior.
+ raise TypeError(
+ "Cannot use .astype to convert from timezone-naive dtype to "
+ "timezone-aware dtype. Use ser.dt.tz_localize instead."
+ )
if is_dtype_equal(values.dtype, dtype):
if copy:
@@ -351,80 +350,3 @@ def astype_td64_unit_conversion(
mask = isna(values)
np.putmask(result, mask, np.nan)
return result
-
-
-def astype_dt64_to_dt64tz(
- values: ArrayLike, dtype: DtypeObj, copy: bool, via_utc: bool = False
-) -> DatetimeArray:
- # GH#33401 we have inconsistent behaviors between
- # Datetimeindex[naive].astype(tzaware)
- # Series[dt64].astype(tzaware)
- # This collects them in one place to prevent further fragmentation.
-
- from pandas.core.construction import ensure_wrapped_if_datetimelike
-
- values = ensure_wrapped_if_datetimelike(values)
- values = cast("DatetimeArray", values)
- aware = isinstance(dtype, DatetimeTZDtype)
-
- if via_utc:
- # Series.astype behavior
-
- # caller is responsible for checking this
- assert values.tz is None and aware
- dtype = cast(DatetimeTZDtype, dtype)
-
- if copy:
- # this should be the only copy
- values = values.copy()
-
- warnings.warn(
- "Using .astype to convert from timezone-naive dtype to "
- "timezone-aware dtype is deprecated and will raise in a "
- "future version. Use ser.dt.tz_localize instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
- # GH#33401 this doesn't match DatetimeArray.astype, which
- # goes through the `not via_utc` path
- return values.tz_localize("UTC").tz_convert(dtype.tz)
-
- else:
- # DatetimeArray/DatetimeIndex.astype behavior
- if values.tz is None and aware:
- dtype = cast(DatetimeTZDtype, dtype)
- warnings.warn(
- "Using .astype to convert from timezone-naive dtype to "
- "timezone-aware dtype is deprecated and will raise in a "
- "future version. Use obj.tz_localize instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
- return values.tz_localize(dtype.tz)
-
- elif aware:
- # GH#18951: datetime64_tz dtype but not equal means different tz
- dtype = cast(DatetimeTZDtype, dtype)
- result = values.tz_convert(dtype.tz)
- if copy:
- result = result.copy()
- return result
-
- elif values.tz is not None:
- warnings.warn(
- "Using .astype to convert from timezone-aware dtype to "
- "timezone-naive dtype is deprecated and will raise in a "
- "future version. Use obj.tz_localize(None) or "
- "obj.tz_convert('UTC').tz_localize(None) instead",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
- result = values.tz_convert("UTC").tz_localize(None)
- if copy:
- result = result.copy()
- return result
-
- raise NotImplementedError("dtype_equal case should be handled elsewhere")
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index b27d90e43d860..24779c6e0c89d 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -364,15 +364,22 @@ def test_astype_copies(self, dtype, other):
ser = pd.Series([1, 2], dtype=dtype)
orig = ser.copy()
- warn = None
+ err = False
if (dtype == "datetime64[ns]") ^ (other == "datetime64[ns]"):
# deprecated in favor of tz_localize
- warn = FutureWarning
-
- with tm.assert_produces_warning(warn):
+ err = True
+
+ if err:
+ if dtype == "datetime64[ns]":
+ msg = "Use ser.dt.tz_localize instead"
+ else:
+ msg = "from timezone-aware dtype to timezone-naive dtype"
+ with pytest.raises(TypeError, match=msg):
+ ser.astype(other)
+ else:
t = ser.astype(other)
- t[:] = pd.NaT
- tm.assert_series_equal(ser, orig)
+ t[:] = pd.NaT
+ tm.assert_series_equal(ser, orig)
@pytest.mark.parametrize("dtype", [int, np.int32, np.int64, "uint32", "uint64"])
def test_astype_int(self, dtype):
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index bebc44505f02a..69087b6822f2e 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -611,27 +611,10 @@ def test_astype_dt64tz(self, timezone_frame):
result = timezone_frame.astype(object)
tm.assert_frame_equal(result, expected)
- with tm.assert_produces_warning(FutureWarning):
+ msg = "Cannot use .astype to convert from timezone-aware dtype to timezone-"
+ with pytest.raises(TypeError, match=msg):
# dt64tz->dt64 deprecated
- result = timezone_frame.astype("datetime64[ns]")
- expected = DataFrame(
- {
- "A": date_range("20130101", periods=3),
- "B": (
- date_range("20130101", periods=3, tz="US/Eastern")
- .tz_convert("UTC")
- .tz_localize(None)
- ),
- "C": (
- date_range("20130101", periods=3, tz="CET")
- .tz_convert("UTC")
- .tz_localize(None)
- ),
- }
- )
- expected.iloc[1, 1] = NaT
- expected.iloc[1, 2] = NaT
- tm.assert_frame_equal(result, expected)
+ timezone_frame.astype("datetime64[ns]")
def test_astype_dt64tz_to_str(self, timezone_frame):
# str formatting
diff --git a/pandas/tests/indexes/datetimes/methods/test_astype.py b/pandas/tests/indexes/datetimes/methods/test_astype.py
index e7823f0c90b1a..a9a35f26d58a3 100644
--- a/pandas/tests/indexes/datetimes/methods/test_astype.py
+++ b/pandas/tests/indexes/datetimes/methods/test_astype.py
@@ -62,20 +62,14 @@ def test_astype_with_tz(self):
# with tz
rng = date_range("1/1/2000", periods=10, tz="US/Eastern")
- with tm.assert_produces_warning(FutureWarning):
+ msg = "Cannot use .astype to convert from timezone-aware"
+ with pytest.raises(TypeError, match=msg):
# deprecated
- result = rng.astype("datetime64[ns]")
- with tm.assert_produces_warning(FutureWarning):
+ rng.astype("datetime64[ns]")
+ with pytest.raises(TypeError, match=msg):
# check DatetimeArray while we're here deprecated
rng._data.astype("datetime64[ns]")
- expected = (
- date_range("1/1/2000", periods=10, tz="US/Eastern")
- .tz_convert("UTC")
- .tz_localize(None)
- )
- tm.assert_index_equal(result, expected)
-
def test_astype_tzaware_to_tzaware(self):
# GH 18951: tz-aware to tz-aware
idx = date_range("20170101", periods=4, tz="US/Pacific")
@@ -88,17 +82,14 @@ def test_astype_tznaive_to_tzaware(self):
# GH 18951: tz-naive to tz-aware
idx = date_range("20170101", periods=4)
idx = idx._with_freq(None) # tz_localize does not preserve freq
- with tm.assert_produces_warning(FutureWarning):
+ msg = "Cannot use .astype to convert from timezone-naive"
+ with pytest.raises(TypeError, match=msg):
# dt64->dt64tz deprecated
- result = idx.astype("datetime64[ns, US/Eastern]")
- with tm.assert_produces_warning(FutureWarning):
+ idx.astype("datetime64[ns, US/Eastern]")
+ with pytest.raises(TypeError, match=msg):
# dt64->dt64tz deprecated
idx._data.astype("datetime64[ns, US/Eastern]")
- expected = date_range("20170101", periods=4, tz="US/Eastern")
- expected = expected._with_freq(None)
- tm.assert_index_equal(result, expected)
-
def test_astype_str_nat(self):
# GH 13149, GH 13209
# verify that we are returning NaT as a string (and not unicode)
@@ -171,15 +162,10 @@ def test_astype_datetime64(self):
assert result is idx
idx_tz = DatetimeIndex(["2016-05-16", "NaT", NaT, np.NaN], tz="EST", name="idx")
- with tm.assert_produces_warning(FutureWarning):
+ msg = "Cannot use .astype to convert from timezone-aware"
+ with pytest.raises(TypeError, match=msg):
# dt64tz->dt64 deprecated
result = idx_tz.astype("datetime64[ns]")
- expected = DatetimeIndex(
- ["2016-05-16 05:00:00", "NaT", "NaT", "NaT"],
- dtype="datetime64[ns]",
- name="idx",
- )
- tm.assert_index_equal(result, expected)
def test_astype_object(self):
rng = date_range("1/1/2000", periods=20)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 4b0821a50e09b..ff08b72b4a10d 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -244,8 +244,9 @@ def test_constructor_dtypes_datetime(self, tz_naive_fixture, attr, klass):
index = index.tz_localize(tz_naive_fixture)
dtype = index.dtype
- warn = None if tz_naive_fixture is None else FutureWarning
- # astype dt64 -> dt64tz deprecated
+ # As of 2.0 astype raises on dt64.astype(dt64tz)
+ err = tz_naive_fixture is not None
+ msg = "Cannot use .astype to convert from timezone-naive dtype to"
if attr == "asi8":
result = DatetimeIndex(arg).tz_localize(tz_naive_fixture)
@@ -254,11 +255,15 @@ def test_constructor_dtypes_datetime(self, tz_naive_fixture, attr, klass):
tm.assert_index_equal(result, index)
if attr == "asi8":
- with tm.assert_produces_warning(warn):
+ if err:
+ with pytest.raises(TypeError, match=msg):
+ DatetimeIndex(arg).astype(dtype)
+ else:
result = DatetimeIndex(arg).astype(dtype)
+ tm.assert_index_equal(result, index)
else:
result = klass(arg, dtype=dtype)
- tm.assert_index_equal(result, index)
+ tm.assert_index_equal(result, index)
if attr == "asi8":
result = DatetimeIndex(list(arg)).tz_localize(tz_naive_fixture)
@@ -267,11 +272,15 @@ def test_constructor_dtypes_datetime(self, tz_naive_fixture, attr, klass):
tm.assert_index_equal(result, index)
if attr == "asi8":
- with tm.assert_produces_warning(warn):
+ if err:
+ with pytest.raises(TypeError, match=msg):
+ DatetimeIndex(list(arg)).astype(dtype)
+ else:
result = DatetimeIndex(list(arg)).astype(dtype)
+ tm.assert_index_equal(result, index)
else:
result = klass(list(arg), dtype=dtype)
- tm.assert_index_equal(result, index)
+ tm.assert_index_equal(result, index)
@pytest.mark.parametrize("attr", ["values", "asi8"])
@pytest.mark.parametrize("klass", [Index, TimedeltaIndex])
diff --git a/pandas/tests/series/methods/test_astype.py b/pandas/tests/series/methods/test_astype.py
index 498225307b52e..9b57f0f634a6c 100644
--- a/pandas/tests/series/methods/test_astype.py
+++ b/pandas/tests/series/methods/test_astype.py
@@ -211,15 +211,14 @@ def test_astype_datetime64tz(self):
tm.assert_series_equal(result, expected)
# astype - datetime64[ns, tz]
- with tm.assert_produces_warning(FutureWarning):
+ msg = "Cannot use .astype to convert from timezone-naive"
+ with pytest.raises(TypeError, match=msg):
# dt64->dt64tz astype deprecated
- result = Series(ser.values).astype("datetime64[ns, US/Eastern]")
- tm.assert_series_equal(result, ser)
+ Series(ser.values).astype("datetime64[ns, US/Eastern]")
- with tm.assert_produces_warning(FutureWarning):
+ with pytest.raises(TypeError, match=msg):
# dt64->dt64tz astype deprecated
- result = Series(ser.values).astype(ser.dtype)
- tm.assert_series_equal(result, ser)
+ Series(ser.values).astype(ser.dtype)
result = ser.astype("datetime64[ns, CET]")
expected = Series(date_range("20130101 06:00:00", periods=3, tz="CET"))
diff --git a/pandas/tests/series/methods/test_convert_dtypes.py b/pandas/tests/series/methods/test_convert_dtypes.py
index 0fd508b08f1db..dd28f28f89bcb 100644
--- a/pandas/tests/series/methods/test_convert_dtypes.py
+++ b/pandas/tests/series/methods/test_convert_dtypes.py
@@ -155,18 +155,19 @@ class TestSeriesConvertDtypes:
def test_convert_dtypes(
self, data, maindtype, params, expected_default, expected_other
):
- warn = None
if (
hasattr(data, "dtype")
and data.dtype == "M8[ns]"
and isinstance(maindtype, pd.DatetimeTZDtype)
):
# this astype is deprecated in favor of tz_localize
- warn = FutureWarning
+ msg = "Cannot use .astype to convert from timezone-naive dtype"
+ with pytest.raises(TypeError, match=msg):
+ pd.Series(data, dtype=maindtype)
+ return
if maindtype is not None:
- with tm.assert_produces_warning(warn):
- series = pd.Series(data, dtype=maindtype)
+ series = pd.Series(data, dtype=maindtype)
else:
series = pd.Series(data)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49235 | 2022-10-21T16:10:18Z | 2022-10-21T20:13:34Z | 2022-10-21T20:13:34Z | 2022-10-21T20:17:24Z |
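The patch above turns the deprecated timezone-naive/aware `astype` conversion into a hard `TypeError` and the tests now expect `tz_localize` instead. A minimal sketch of the resulting user-facing behavior (assuming pandas >= 2.0; on older versions the `astype` call only warns):

```python
import pandas as pd

# After this change, .astype can no longer convert between timezone-naive
# and timezone-aware datetime dtypes; tz_localize is the supported route.
idx = pd.date_range("20170101", periods=4)  # tz-naive DatetimeIndex

try:
    idx.astype("datetime64[ns, US/Eastern]")  # TypeError on pandas >= 2.0
except TypeError:
    pass

# Supported replacement: localize the naive index explicitly.
localized = idx.tz_localize("US/Eastern")
print(localized.dtype)  # datetime64[ns, US/Eastern]
```

`tz_localize` keeps the wall-clock times and attaches the zone, matching the error message's suggested migration path.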
DEPR: PeriodIndex.astype how keyword | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 4ac737bb6b29a..74a50f435de5d 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -156,6 +156,7 @@ Removal of prior version deprecations/changes
- Remove arguments ``names`` and ``dtype`` from :meth:`Index.copy` and ``levels`` and ``codes`` from :meth:`MultiIndex.copy` (:issue:`35853`, :issue:`36685`)
- Remove argument ``inplace`` from :meth:`MultiIndex.set_levels` and :meth:`MultiIndex.set_codes` (:issue:`35626`)
- Disallow passing positional arguments to :meth:`MultiIndex.set_levels` and :meth:`MultiIndex.set_codes` (:issue:`41485`)
+- Removed argument ``how`` from :meth:`PeriodIndex.astype`, use :meth:`PeriodIndex.to_timestamp` instead (:issue:`37982`)
- Removed argument ``try_cast`` from :meth:`DataFrame.mask`, :meth:`DataFrame.where`, :meth:`Series.mask` and :meth:`Series.where` (:issue:`38836`)
- Disallow passing non-round floats to :class:`Timestamp` with ``unit="M"`` or ``unit="Y"`` (:issue:`47266`)
- Removed deprecated :meth:`Timedelta.delta`, :meth:`Timedelta.is_populated`, and :attr:`Timedelta.freq` (:issue:`46430`, :issue:`46476`)
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index a5408f19456dd..889d8f33cdfae 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -5,7 +5,6 @@
timedelta,
)
from typing import Hashable
-import warnings
import numpy as np
@@ -29,13 +28,8 @@
cache_readonly,
doc,
)
-from pandas.util._exceptions import find_stack_level
-from pandas.core.dtypes.common import (
- is_datetime64_any_dtype,
- is_integer,
- pandas_dtype,
-)
+from pandas.core.dtypes.common import is_integer
from pandas.core.dtypes.dtypes import PeriodDtype
from pandas.core.dtypes.missing import is_valid_na_for_dtype
@@ -349,32 +343,6 @@ def asof_locs(self, where: Index, mask: npt.NDArray[np.bool_]) -> np.ndarray:
return super().asof_locs(where, mask)
- @doc(Index.astype)
- def astype(self, dtype, copy: bool = True, how=lib.no_default):
- dtype = pandas_dtype(dtype)
-
- if how is not lib.no_default:
- # GH#37982
- warnings.warn(
- "The 'how' keyword in PeriodIndex.astype is deprecated and "
- "will be removed in a future version. "
- "Use index.to_timestamp(how=how) instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- else:
- how = "start"
-
- if is_datetime64_any_dtype(dtype):
- # 'how' is index-specific, isn't part of the EA interface.
- # GH#45038 implement this for PeriodArray (but without "how")
- # once the "how" deprecation is enforced we can just dispatch
- # directly to PeriodArray.
- tz = getattr(dtype, "tz", None)
- return self.to_timestamp(how=how).tz_localize(tz)
-
- return super().astype(dtype, copy=copy)
-
@property
def is_full(self) -> bool:
"""
diff --git a/pandas/tests/indexes/period/methods/test_astype.py b/pandas/tests/indexes/period/methods/test_astype.py
index fbc1d3702115e..9720b751b87ce 100644
--- a/pandas/tests/indexes/period/methods/test_astype.py
+++ b/pandas/tests/indexes/period/methods/test_astype.py
@@ -8,7 +8,6 @@
NaT,
Period,
PeriodIndex,
- Timedelta,
period_range,
)
import pandas._testing as tm
@@ -149,30 +148,7 @@ def test_astype_array_fallback(self):
def test_period_astype_to_timestamp(self):
pi = PeriodIndex(["2011-01", "2011-02", "2011-03"], freq="M")
- exp = DatetimeIndex(["2011-01-01", "2011-02-01", "2011-03-01"], freq="MS")
- with tm.assert_produces_warning(FutureWarning):
- # how keyword deprecated GH#37982
- res = pi.astype("datetime64[ns]", how="start")
- tm.assert_index_equal(res, exp)
- assert res.freq == exp.freq
-
- exp = DatetimeIndex(["2011-01-31", "2011-02-28", "2011-03-31"])
- exp = exp + Timedelta(1, "D") - Timedelta(1, "ns")
- with tm.assert_produces_warning(FutureWarning):
- # how keyword deprecated GH#37982
- res = pi.astype("datetime64[ns]", how="end")
- tm.assert_index_equal(res, exp)
- assert res.freq == exp.freq
-
exp = DatetimeIndex(["2011-01-01", "2011-02-01", "2011-03-01"], tz="US/Eastern")
res = pi.astype("datetime64[ns, US/Eastern]")
tm.assert_index_equal(res, exp)
assert res.freq == exp.freq
-
- exp = DatetimeIndex(["2011-01-31", "2011-02-28", "2011-03-31"], tz="US/Eastern")
- exp = exp + Timedelta(1, "D") - Timedelta(1, "ns")
- with tm.assert_produces_warning(FutureWarning):
- # how keyword deprecated GH#37982
- res = pi.astype("datetime64[ns, US/Eastern]", how="end")
- tm.assert_index_equal(res, exp)
- assert res.freq == exp.freq
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49234 | 2022-10-21T15:01:21Z | 2022-10-21T20:09:58Z | 2022-10-21T20:09:58Z | 2022-10-21T20:16:57Z |
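The removal above drops the `how` keyword from `PeriodIndex.astype`; the whatsnew entry points to `to_timestamp(how=...)` as the replacement. A minimal migration sketch (assuming pandas >= 2.0):

```python
import pandas as pd

# Old (removed): pi.astype("datetime64[ns]", how="end")
# New: call to_timestamp directly, which still accepts how=.
pi = pd.PeriodIndex(["2011-01", "2011-02", "2011-03"], freq="M")

start = pi.to_timestamp(how="start")  # first instant of each period
end = pi.to_timestamp(how="end")      # last instant of each period

print(start[0])  # 2011-01-01 00:00:00
```

Per the test retained in the diff, `pi.astype("datetime64[ns, US/Eastern]")` without `how` continues to work, defaulting to period start.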
CLN/FIX/PERF: Don't buffer entire Stata file into memory | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 91cd3335d9db6..3c3a655626bb6 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -6033,6 +6033,14 @@ values will have ``object`` data type.
``int64`` for all integer types and ``float64`` for floating point data. By default,
the Stata data types are preserved when importing.
+.. note::
+
+ All :class:`~pandas.io.stata.StataReader` objects, whether created by :func:`~pandas.read_stata`
+ (when using ``iterator=True`` or ``chunksize``) or instantiated by hand, must be used as context
+ managers (e.g. the ``with`` statement).
+ While the :meth:`~pandas.io.stata.StataReader.close` method is available, its use is unsupported.
+ It is not part of the public API and will be removed in the future without warning.
+
.. ipython:: python
:suppress:
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index bdbde438217b9..a8d6f3fce5bb7 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -857,6 +857,7 @@ Deprecations
- Deprecated :meth:`Series.backfill` in favor of :meth:`Series.bfill` (:issue:`33396`)
- Deprecated :meth:`DataFrame.pad` in favor of :meth:`DataFrame.ffill` (:issue:`33396`)
- Deprecated :meth:`DataFrame.backfill` in favor of :meth:`DataFrame.bfill` (:issue:`33396`)
+- Deprecated :meth:`~pandas.io.stata.StataReader.close`. Use :class:`~pandas.io.stata.StataReader` as a context manager instead (:issue:`49228`)
.. ---------------------------------------------------------------------------
.. _whatsnew_200.prior_deprecations:
@@ -1163,6 +1164,8 @@ Performance improvements
- Fixed a reference leak in :func:`read_hdf` (:issue:`37441`)
- Fixed a memory leak in :meth:`DataFrame.to_json` and :meth:`Series.to_json` when serializing datetimes and timedeltas (:issue:`40443`)
- Decreased memory usage in many :class:`DataFrameGroupBy` methods (:issue:`51090`)
+- Memory improvement in :class:`StataReader` when reading seekable files (:issue:`48922`)
+
.. ---------------------------------------------------------------------------
.. _whatsnew_200.bug_fixes:
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index eacc036f2740d..5cc13892224c5 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -23,6 +23,7 @@
TYPE_CHECKING,
Any,
AnyStr,
+ Callable,
Final,
Hashable,
Sequence,
@@ -182,10 +183,10 @@
>>> df = pd.DataFrame(values, columns=["i"]) # doctest: +SKIP
>>> df.to_stata('filename.dta') # doctest: +SKIP
->>> itr = pd.read_stata('filename.dta', chunksize=10000) # doctest: +SKIP
->>> for chunk in itr:
-... # Operate on a single chunk, e.g., chunk.mean()
-... pass # doctest: +SKIP
+>>> with pd.read_stata('filename.dta', chunksize=10000) as itr: # doctest: +SKIP
+>>> for chunk in itr:
+... # Operate on a single chunk, e.g., chunk.mean()
+... pass # doctest: +SKIP
"""
_read_method_doc = f"""\
@@ -1114,6 +1115,8 @@ def __init__(self) -> None:
class StataReader(StataParser, abc.Iterator):
__doc__ = _stata_reader_doc
+ _path_or_buf: IO[bytes] # Will be assigned by `_open_file`.
+
def __init__(
self,
path_or_buf: FilePath | ReadBuffer[bytes],
@@ -1129,7 +1132,7 @@ def __init__(
storage_options: StorageOptions = None,
) -> None:
super().__init__()
- self.col_sizes: list[int] = []
+ self._col_sizes: list[int] = []
# Arguments to the reader (can be temporarily overridden in
# calls to read).
@@ -1140,15 +1143,20 @@ def __init__(
self._preserve_dtypes = preserve_dtypes
self._columns = columns
self._order_categoricals = order_categoricals
+ self._original_path_or_buf = path_or_buf
+ self._compression = compression
+ self._storage_options = storage_options
self._encoding = ""
self._chunksize = chunksize
self._using_iterator = False
+ self._entered = False
if self._chunksize is None:
self._chunksize = 1
elif not isinstance(chunksize, int) or chunksize <= 0:
raise ValueError("chunksize must be a positive integer when set.")
# State variables for the file
+ self._close_file: Callable[[], None] | None = None
self._has_string_data = False
self._missing_values = False
self._can_read_value_labels = False
@@ -1159,21 +1167,48 @@ def __init__(
self._lines_read = 0
self._native_byteorder = _set_endianness(sys.byteorder)
- with get_handle(
- path_or_buf,
+
+ def _ensure_open(self) -> None:
+ """
+ Ensure the file has been opened and its header data read.
+ """
+ if not hasattr(self, "_path_or_buf"):
+ self._open_file()
+
+ def _open_file(self) -> None:
+ """
+ Open the file (with compression options, etc.), and read header information.
+ """
+ if not self._entered:
+ warnings.warn(
+ "StataReader is being used without using a context manager. "
+ "Using StataReader as a context manager is the only supported method.",
+ ResourceWarning,
+ stacklevel=find_stack_level(),
+ )
+ handles = get_handle(
+ self._original_path_or_buf,
"rb",
- storage_options=storage_options,
+ storage_options=self._storage_options,
is_text=False,
- compression=compression,
- ) as handles:
- # Copy to BytesIO, and ensure no encoding
- self.path_or_buf = BytesIO(handles.handle.read())
+ compression=self._compression,
+ )
+ if hasattr(handles.handle, "seekable") and handles.handle.seekable():
+ # If the handle is directly seekable, use it without an extra copy.
+ self._path_or_buf = handles.handle
+ self._close_file = handles.close
+ else:
+ # Copy to memory, and ensure no encoding.
+ with handles:
+ self._path_or_buf = BytesIO(handles.handle.read())
+ self._close_file = self._path_or_buf.close
self._read_header()
self._setup_dtype()
def __enter__(self) -> StataReader:
"""enter context manager"""
+ self._entered = True
return self
def __exit__(
@@ -1182,119 +1217,142 @@ def __exit__(
exc_value: BaseException | None,
traceback: TracebackType | None,
) -> None:
- """exit context manager"""
- self.close()
+ if self._close_file:
+ self._close_file()
def close(self) -> None:
- """close the handle if its open"""
- self.path_or_buf.close()
+ """Close the handle if its open.
+
+ .. deprecated:: 2.0.0
+
+ The close method is not part of the public API.
+ The only supported way to use StataReader is to use it as a context manager.
+ """
+ warnings.warn(
+ "The StataReader.close() method is not part of the public API and "
+ "will be removed in a future version without notice. "
+ "Using StataReader as a context manager is the only supported method.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ if self._close_file:
+ self._close_file()
def _set_encoding(self) -> None:
"""
Set string encoding which depends on file version
"""
- if self.format_version < 118:
+ if self._format_version < 118:
self._encoding = "latin-1"
else:
self._encoding = "utf-8"
+ def _read_int8(self) -> int:
+ return struct.unpack("b", self._path_or_buf.read(1))[0]
+
+ def _read_uint8(self) -> int:
+ return struct.unpack("B", self._path_or_buf.read(1))[0]
+
+ def _read_uint16(self) -> int:
+ return struct.unpack(f"{self._byteorder}H", self._path_or_buf.read(2))[0]
+
+ def _read_uint32(self) -> int:
+ return struct.unpack(f"{self._byteorder}I", self._path_or_buf.read(4))[0]
+
+ def _read_uint64(self) -> int:
+ return struct.unpack(f"{self._byteorder}Q", self._path_or_buf.read(8))[0]
+
+ def _read_int16(self) -> int:
+ return struct.unpack(f"{self._byteorder}h", self._path_or_buf.read(2))[0]
+
+ def _read_int32(self) -> int:
+ return struct.unpack(f"{self._byteorder}i", self._path_or_buf.read(4))[0]
+
+ def _read_int64(self) -> int:
+ return struct.unpack(f"{self._byteorder}q", self._path_or_buf.read(8))[0]
+
+ def _read_char8(self) -> bytes:
+ return struct.unpack("c", self._path_or_buf.read(1))[0]
+
+ def _read_int16_count(self, count: int) -> tuple[int, ...]:
+ return struct.unpack(
+ f"{self._byteorder}{'h' * count}",
+ self._path_or_buf.read(2 * count),
+ )
+
def _read_header(self) -> None:
- first_char = self.path_or_buf.read(1)
- if struct.unpack("c", first_char)[0] == b"<":
+ first_char = self._read_char8()
+ if first_char == b"<":
self._read_new_header()
else:
self._read_old_header(first_char)
- self.has_string_data = len([x for x in self.typlist if type(x) is int]) > 0
+ self._has_string_data = len([x for x in self._typlist if type(x) is int]) > 0
# calculate size of a data record
- self.col_sizes = [self._calcsize(typ) for typ in self.typlist]
+ self._col_sizes = [self._calcsize(typ) for typ in self._typlist]
def _read_new_header(self) -> None:
# The first part of the header is common to 117 - 119.
- self.path_or_buf.read(27) # stata_dta><header><release>
- self.format_version = int(self.path_or_buf.read(3))
- if self.format_version not in [117, 118, 119]:
- raise ValueError(_version_error.format(version=self.format_version))
+ self._path_or_buf.read(27) # stata_dta><header><release>
+ self._format_version = int(self._path_or_buf.read(3))
+ if self._format_version not in [117, 118, 119]:
+ raise ValueError(_version_error.format(version=self._format_version))
self._set_encoding()
- self.path_or_buf.read(21) # </release><byteorder>
- self.byteorder = ">" if self.path_or_buf.read(3) == b"MSF" else "<"
- self.path_or_buf.read(15) # </byteorder><K>
- nvar_type = "H" if self.format_version <= 118 else "I"
- nvar_size = 2 if self.format_version <= 118 else 4
- self.nvar = struct.unpack(
- self.byteorder + nvar_type, self.path_or_buf.read(nvar_size)
- )[0]
- self.path_or_buf.read(7) # </K><N>
-
- self.nobs = self._get_nobs()
- self.path_or_buf.read(11) # </N><label>
- self._data_label = self._get_data_label()
- self.path_or_buf.read(19) # </label><timestamp>
- self.time_stamp = self._get_time_stamp()
- self.path_or_buf.read(26) # </timestamp></header><map>
- self.path_or_buf.read(8) # 0x0000000000000000
- self.path_or_buf.read(8) # position of <map>
-
- self._seek_vartypes = (
- struct.unpack(self.byteorder + "q", self.path_or_buf.read(8))[0] + 16
- )
- self._seek_varnames = (
- struct.unpack(self.byteorder + "q", self.path_or_buf.read(8))[0] + 10
- )
- self._seek_sortlist = (
- struct.unpack(self.byteorder + "q", self.path_or_buf.read(8))[0] + 10
- )
- self._seek_formats = (
- struct.unpack(self.byteorder + "q", self.path_or_buf.read(8))[0] + 9
- )
- self._seek_value_label_names = (
- struct.unpack(self.byteorder + "q", self.path_or_buf.read(8))[0] + 19
+ self._path_or_buf.read(21) # </release><byteorder>
+ self._byteorder = ">" if self._path_or_buf.read(3) == b"MSF" else "<"
+ self._path_or_buf.read(15) # </byteorder><K>
+ self._nvar = (
+ self._read_uint16() if self._format_version <= 118 else self._read_uint32()
)
+ self._path_or_buf.read(7) # </K><N>
+
+ self._nobs = self._get_nobs()
+ self._path_or_buf.read(11) # </N><label>
+ self._data_label = self._get_data_label()
+ self._path_or_buf.read(19) # </label><timestamp>
+ self._time_stamp = self._get_time_stamp()
+ self._path_or_buf.read(26) # </timestamp></header><map>
+ self._path_or_buf.read(8) # 0x0000000000000000
+ self._path_or_buf.read(8) # position of <map>
+
+ self._seek_vartypes = self._read_int64() + 16
+ self._seek_varnames = self._read_int64() + 10
+ self._seek_sortlist = self._read_int64() + 10
+ self._seek_formats = self._read_int64() + 9
+ self._seek_value_label_names = self._read_int64() + 19
# Requires version-specific treatment
self._seek_variable_labels = self._get_seek_variable_labels()
- self.path_or_buf.read(8) # <characteristics>
- self.data_location = (
- struct.unpack(self.byteorder + "q", self.path_or_buf.read(8))[0] + 6
- )
- self.seek_strls = (
- struct.unpack(self.byteorder + "q", self.path_or_buf.read(8))[0] + 7
- )
- self.seek_value_labels = (
- struct.unpack(self.byteorder + "q", self.path_or_buf.read(8))[0] + 14
- )
+ self._path_or_buf.read(8) # <characteristics>
+ self._data_location = self._read_int64() + 6
+ self._seek_strls = self._read_int64() + 7
+ self._seek_value_labels = self._read_int64() + 14
- self.typlist, self.dtyplist = self._get_dtypes(self._seek_vartypes)
+ self._typlist, self._dtyplist = self._get_dtypes(self._seek_vartypes)
- self.path_or_buf.seek(self._seek_varnames)
- self.varlist = self._get_varlist()
+ self._path_or_buf.seek(self._seek_varnames)
+ self._varlist = self._get_varlist()
- self.path_or_buf.seek(self._seek_sortlist)
- self.srtlist = struct.unpack(
- self.byteorder + ("h" * (self.nvar + 1)),
- self.path_or_buf.read(2 * (self.nvar + 1)),
- )[:-1]
+ self._path_or_buf.seek(self._seek_sortlist)
+ self._srtlist = self._read_int16_count(self._nvar + 1)[:-1]
- self.path_or_buf.seek(self._seek_formats)
- self.fmtlist = self._get_fmtlist()
+ self._path_or_buf.seek(self._seek_formats)
+ self._fmtlist = self._get_fmtlist()
- self.path_or_buf.seek(self._seek_value_label_names)
- self.lbllist = self._get_lbllist()
+ self._path_or_buf.seek(self._seek_value_label_names)
+ self._lbllist = self._get_lbllist()
- self.path_or_buf.seek(self._seek_variable_labels)
+ self._path_or_buf.seek(self._seek_variable_labels)
self._variable_labels = self._get_variable_labels()
# Get data type information, works for versions 117-119.
def _get_dtypes(
self, seek_vartypes: int
) -> tuple[list[int | str], list[str | np.dtype]]:
- self.path_or_buf.seek(seek_vartypes)
- raw_typlist = [
- struct.unpack(self.byteorder + "H", self.path_or_buf.read(2))[0]
- for _ in range(self.nvar)
- ]
+ self._path_or_buf.seek(seek_vartypes)
+ raw_typlist = [self._read_uint16() for _ in range(self._nvar)]
def f(typ: int) -> int | str:
if typ <= 2045:
@@ -1320,112 +1378,110 @@ def g(typ: int) -> str | np.dtype:
def _get_varlist(self) -> list[str]:
# 33 in order formats, 129 in formats 118 and 119
- b = 33 if self.format_version < 118 else 129
- return [self._decode(self.path_or_buf.read(b)) for _ in range(self.nvar)]
+ b = 33 if self._format_version < 118 else 129
+ return [self._decode(self._path_or_buf.read(b)) for _ in range(self._nvar)]
# Returns the format list
def _get_fmtlist(self) -> list[str]:
- if self.format_version >= 118:
+ if self._format_version >= 118:
b = 57
- elif self.format_version > 113:
+ elif self._format_version > 113:
b = 49
- elif self.format_version > 104:
+ elif self._format_version > 104:
b = 12
else:
b = 7
- return [self._decode(self.path_or_buf.read(b)) for _ in range(self.nvar)]
+ return [self._decode(self._path_or_buf.read(b)) for _ in range(self._nvar)]
# Returns the label list
def _get_lbllist(self) -> list[str]:
- if self.format_version >= 118:
+ if self._format_version >= 118:
b = 129
- elif self.format_version > 108:
+ elif self._format_version > 108:
b = 33
else:
b = 9
- return [self._decode(self.path_or_buf.read(b)) for _ in range(self.nvar)]
+ return [self._decode(self._path_or_buf.read(b)) for _ in range(self._nvar)]
def _get_variable_labels(self) -> list[str]:
- if self.format_version >= 118:
+ if self._format_version >= 118:
vlblist = [
- self._decode(self.path_or_buf.read(321)) for _ in range(self.nvar)
+ self._decode(self._path_or_buf.read(321)) for _ in range(self._nvar)
]
- elif self.format_version > 105:
+ elif self._format_version > 105:
vlblist = [
- self._decode(self.path_or_buf.read(81)) for _ in range(self.nvar)
+ self._decode(self._path_or_buf.read(81)) for _ in range(self._nvar)
]
else:
vlblist = [
- self._decode(self.path_or_buf.read(32)) for _ in range(self.nvar)
+ self._decode(self._path_or_buf.read(32)) for _ in range(self._nvar)
]
return vlblist
def _get_nobs(self) -> int:
- if self.format_version >= 118:
- return struct.unpack(self.byteorder + "Q", self.path_or_buf.read(8))[0]
+ if self._format_version >= 118:
+ return self._read_uint64()
else:
- return struct.unpack(self.byteorder + "I", self.path_or_buf.read(4))[0]
+ return self._read_uint32()
def _get_data_label(self) -> str:
- if self.format_version >= 118:
- strlen = struct.unpack(self.byteorder + "H", self.path_or_buf.read(2))[0]
- return self._decode(self.path_or_buf.read(strlen))
- elif self.format_version == 117:
- strlen = struct.unpack("b", self.path_or_buf.read(1))[0]
- return self._decode(self.path_or_buf.read(strlen))
- elif self.format_version > 105:
- return self._decode(self.path_or_buf.read(81))
+ if self._format_version >= 118:
+ strlen = self._read_uint16()
+ return self._decode(self._path_or_buf.read(strlen))
+ elif self._format_version == 117:
+ strlen = self._read_int8()
+ return self._decode(self._path_or_buf.read(strlen))
+ elif self._format_version > 105:
+ return self._decode(self._path_or_buf.read(81))
else:
- return self._decode(self.path_or_buf.read(32))
+ return self._decode(self._path_or_buf.read(32))
def _get_time_stamp(self) -> str:
- if self.format_version >= 118:
- strlen = struct.unpack("b", self.path_or_buf.read(1))[0]
- return self.path_or_buf.read(strlen).decode("utf-8")
- elif self.format_version == 117:
- strlen = struct.unpack("b", self.path_or_buf.read(1))[0]
- return self._decode(self.path_or_buf.read(strlen))
- elif self.format_version > 104:
- return self._decode(self.path_or_buf.read(18))
+ if self._format_version >= 118:
+ strlen = self._read_int8()
+ return self._path_or_buf.read(strlen).decode("utf-8")
+ elif self._format_version == 117:
+ strlen = self._read_int8()
+ return self._decode(self._path_or_buf.read(strlen))
+ elif self._format_version > 104:
+ return self._decode(self._path_or_buf.read(18))
else:
raise ValueError()
def _get_seek_variable_labels(self) -> int:
- if self.format_version == 117:
- self.path_or_buf.read(8) # <variable_labels>, throw away
+ if self._format_version == 117:
+ self._path_or_buf.read(8) # <variable_labels>, throw away
# Stata 117 data files do not follow the described format. This is
# a work around that uses the previous label, 33 bytes for each
# variable, 20 for the closing tag and 17 for the opening tag
- return self._seek_value_label_names + (33 * self.nvar) + 20 + 17
- elif self.format_version >= 118:
- return struct.unpack(self.byteorder + "q", self.path_or_buf.read(8))[0] + 17
+ return self._seek_value_label_names + (33 * self._nvar) + 20 + 17
+ elif self._format_version >= 118:
+ return self._read_int64() + 17
else:
raise ValueError()
def _read_old_header(self, first_char: bytes) -> None:
- self.format_version = struct.unpack("b", first_char)[0]
- if self.format_version not in [104, 105, 108, 111, 113, 114, 115]:
- raise ValueError(_version_error.format(version=self.format_version))
+ self._format_version = int(first_char[0])
+ if self._format_version not in [104, 105, 108, 111, 113, 114, 115]:
+ raise ValueError(_version_error.format(version=self._format_version))
self._set_encoding()
- self.byteorder = (
- ">" if struct.unpack("b", self.path_or_buf.read(1))[0] == 0x1 else "<"
- )
- self.filetype = struct.unpack("b", self.path_or_buf.read(1))[0]
- self.path_or_buf.read(1) # unused
+ self._byteorder = ">" if self._read_int8() == 0x1 else "<"
+ self._filetype = self._read_int8()
+ self._path_or_buf.read(1) # unused
- self.nvar = struct.unpack(self.byteorder + "H", self.path_or_buf.read(2))[0]
- self.nobs = self._get_nobs()
+ self._nvar = self._read_uint16()
+ self._nobs = self._get_nobs()
self._data_label = self._get_data_label()
- self.time_stamp = self._get_time_stamp()
+ self._time_stamp = self._get_time_stamp()
# descriptors
- if self.format_version > 108:
- typlist = [ord(self.path_or_buf.read(1)) for _ in range(self.nvar)]
+ if self._format_version > 108:
+ typlist = [int(c) for c in self._path_or_buf.read(self._nvar)]
else:
- buf = self.path_or_buf.read(self.nvar)
+ buf = self._path_or_buf.read(self._nvar)
typlistb = np.frombuffer(buf, dtype=np.uint8)
typlist = []
for tp in typlistb:
@@ -1435,32 +1491,29 @@ def _read_old_header(self, first_char: bytes) -> None:
typlist.append(tp - 127) # bytes
try:
- self.typlist = [self.TYPE_MAP[typ] for typ in typlist]
+ self._typlist = [self.TYPE_MAP[typ] for typ in typlist]
except ValueError as err:
invalid_types = ",".join([str(x) for x in typlist])
raise ValueError(f"cannot convert stata types [{invalid_types}]") from err
try:
- self.dtyplist = [self.DTYPE_MAP[typ] for typ in typlist]
+ self._dtyplist = [self.DTYPE_MAP[typ] for typ in typlist]
except ValueError as err:
invalid_dtypes = ",".join([str(x) for x in typlist])
raise ValueError(f"cannot convert stata dtypes [{invalid_dtypes}]") from err
- if self.format_version > 108:
- self.varlist = [
- self._decode(self.path_or_buf.read(33)) for _ in range(self.nvar)
+ if self._format_version > 108:
+ self._varlist = [
+ self._decode(self._path_or_buf.read(33)) for _ in range(self._nvar)
]
else:
- self.varlist = [
- self._decode(self.path_or_buf.read(9)) for _ in range(self.nvar)
+ self._varlist = [
+ self._decode(self._path_or_buf.read(9)) for _ in range(self._nvar)
]
- self.srtlist = struct.unpack(
- self.byteorder + ("h" * (self.nvar + 1)),
- self.path_or_buf.read(2 * (self.nvar + 1)),
- )[:-1]
+ self._srtlist = self._read_int16_count(self._nvar + 1)[:-1]
- self.fmtlist = self._get_fmtlist()
+ self._fmtlist = self._get_fmtlist()
- self.lbllist = self._get_lbllist()
+ self._lbllist = self._get_lbllist()
self._variable_labels = self._get_variable_labels()
@@ -1469,25 +1522,19 @@ def _read_old_header(self, first_char: bytes) -> None:
# the size of the next read, which you discard. You then continue
# like this until you read 5 bytes of zeros.
- if self.format_version > 104:
+ if self._format_version > 104:
while True:
- data_type = struct.unpack(
- self.byteorder + "b", self.path_or_buf.read(1)
- )[0]
- if self.format_version > 108:
- data_len = struct.unpack(
- self.byteorder + "i", self.path_or_buf.read(4)
- )[0]
+ data_type = self._read_int8()
+ if self._format_version > 108:
+ data_len = self._read_int32()
else:
- data_len = struct.unpack(
- self.byteorder + "h", self.path_or_buf.read(2)
- )[0]
+ data_len = self._read_int16()
if data_type == 0:
break
- self.path_or_buf.read(data_len)
+ self._path_or_buf.read(data_len)
# necessary data to continue parsing
- self.data_location = self.path_or_buf.tell()
+ self._data_location = self._path_or_buf.tell()
def _setup_dtype(self) -> np.dtype:
"""Map between numpy and state dtypes"""
@@ -1495,12 +1542,12 @@ def _setup_dtype(self) -> np.dtype:
return self._dtype
dtypes = [] # Convert struct data types to numpy data type
- for i, typ in enumerate(self.typlist):
+ for i, typ in enumerate(self._typlist):
if typ in self.NUMPY_TYPE_MAP:
typ = cast(str, typ) # only strs in NUMPY_TYPE_MAP
- dtypes.append(("s" + str(i), self.byteorder + self.NUMPY_TYPE_MAP[typ]))
+ dtypes.append((f"s{i}", f"{self._byteorder}{self.NUMPY_TYPE_MAP[typ]}"))
else:
- dtypes.append(("s" + str(i), "S" + str(typ)))
+ dtypes.append((f"s{i}", f"S{typ}"))
self._dtype = np.dtype(dtypes)
return self._dtype
@@ -1508,7 +1555,7 @@ def _setup_dtype(self) -> np.dtype:
def _calcsize(self, fmt: int | str) -> int:
if isinstance(fmt, int):
return fmt
- return struct.calcsize(self.byteorder + fmt)
+ return struct.calcsize(self._byteorder + fmt)
def _decode(self, s: bytes) -> str:
# have bytes not strings, so must decode
@@ -1532,82 +1579,85 @@ def _decode(self, s: bytes) -> str:
return s.decode("latin-1")
def _read_value_labels(self) -> None:
+ self._ensure_open()
if self._value_labels_read:
# Don't read twice
return
- if self.format_version <= 108:
+ if self._format_version <= 108:
# Value labels are not supported in version 108 and earlier.
self._value_labels_read = True
- self.value_label_dict: dict[str, dict[float, str]] = {}
+ self._value_label_dict: dict[str, dict[float, str]] = {}
return
- if self.format_version >= 117:
- self.path_or_buf.seek(self.seek_value_labels)
+ if self._format_version >= 117:
+ self._path_or_buf.seek(self._seek_value_labels)
else:
assert self._dtype is not None
- offset = self.nobs * self._dtype.itemsize
- self.path_or_buf.seek(self.data_location + offset)
+ offset = self._nobs * self._dtype.itemsize
+ self._path_or_buf.seek(self._data_location + offset)
self._value_labels_read = True
- self.value_label_dict = {}
+ self._value_label_dict = {}
while True:
- if self.format_version >= 117:
- if self.path_or_buf.read(5) == b"</val": # <lbl>
+ if self._format_version >= 117:
+ if self._path_or_buf.read(5) == b"</val": # <lbl>
break # end of value label table
- slength = self.path_or_buf.read(4)
+ slength = self._path_or_buf.read(4)
if not slength:
break # end of value label table (format < 117)
- if self.format_version <= 117:
- labname = self._decode(self.path_or_buf.read(33))
+ if self._format_version <= 117:
+ labname = self._decode(self._path_or_buf.read(33))
else:
- labname = self._decode(self.path_or_buf.read(129))
- self.path_or_buf.read(3) # padding
+ labname = self._decode(self._path_or_buf.read(129))
+ self._path_or_buf.read(3) # padding
- n = struct.unpack(self.byteorder + "I", self.path_or_buf.read(4))[0]
- txtlen = struct.unpack(self.byteorder + "I", self.path_or_buf.read(4))[0]
+ n = self._read_uint32()
+ txtlen = self._read_uint32()
off = np.frombuffer(
- self.path_or_buf.read(4 * n), dtype=self.byteorder + "i4", count=n
+ self._path_or_buf.read(4 * n), dtype=f"{self._byteorder}i4", count=n
)
val = np.frombuffer(
- self.path_or_buf.read(4 * n), dtype=self.byteorder + "i4", count=n
+ self._path_or_buf.read(4 * n), dtype=f"{self._byteorder}i4", count=n
)
ii = np.argsort(off)
off = off[ii]
val = val[ii]
- txt = self.path_or_buf.read(txtlen)
- self.value_label_dict[labname] = {}
+ txt = self._path_or_buf.read(txtlen)
+ self._value_label_dict[labname] = {}
for i in range(n):
end = off[i + 1] if i < n - 1 else txtlen
- self.value_label_dict[labname][val[i]] = self._decode(txt[off[i] : end])
- if self.format_version >= 117:
- self.path_or_buf.read(6) # </lbl>
+ self._value_label_dict[labname][val[i]] = self._decode(
+ txt[off[i] : end]
+ )
+ if self._format_version >= 117:
+ self._path_or_buf.read(6) # </lbl>
self._value_labels_read = True
def _read_strls(self) -> None:
- self.path_or_buf.seek(self.seek_strls)
+ self._path_or_buf.seek(self._seek_strls)
# Wrap v_o in a string to allow uint64 values as keys on 32bit OS
self.GSO = {"0": ""}
while True:
- if self.path_or_buf.read(3) != b"GSO":
+ if self._path_or_buf.read(3) != b"GSO":
break
- if self.format_version == 117:
- v_o = struct.unpack(self.byteorder + "Q", self.path_or_buf.read(8))[0]
+ if self._format_version == 117:
+ v_o = self._read_uint64()
else:
- buf = self.path_or_buf.read(12)
+ buf = self._path_or_buf.read(12)
# Only tested on little endian file on little endian machine.
- v_size = 2 if self.format_version == 118 else 3
- if self.byteorder == "<":
+ v_size = 2 if self._format_version == 118 else 3
+ if self._byteorder == "<":
buf = buf[0:v_size] + buf[4 : (12 - v_size)]
else:
# This path may not be correct, impossible to test
buf = buf[0:v_size] + buf[(4 + v_size) :]
v_o = struct.unpack("Q", buf)[0]
- typ = struct.unpack("B", self.path_or_buf.read(1))[0]
- length = struct.unpack(self.byteorder + "I", self.path_or_buf.read(4))[0]
- va = self.path_or_buf.read(length)
+ typ = self._read_uint8()
+ length = self._read_uint32()
+ va = self._path_or_buf.read(length)
if typ == 130:
decoded_va = va[0:-1].decode(self._encoding)
else:
@@ -1649,14 +1699,14 @@ def read(
columns: Sequence[str] | None = None,
order_categoricals: bool | None = None,
) -> DataFrame:
+ self._ensure_open()
# Handle empty file or chunk. If reading incrementally raise
# StopIteration. If reading the whole thing return an empty
# data frame.
- if (self.nobs == 0) and (nrows is None):
+ if (self._nobs == 0) and (nrows is None):
self._can_read_value_labels = True
self._data_read = True
- self.close()
- return DataFrame(columns=self.varlist)
+ return DataFrame(columns=self._varlist)
# Handle options
if convert_dates is None:
@@ -1675,16 +1725,16 @@ def read(
index_col = self._index_col
if nrows is None:
- nrows = self.nobs
+ nrows = self._nobs
- if (self.format_version >= 117) and (not self._value_labels_read):
+ if (self._format_version >= 117) and (not self._value_labels_read):
self._can_read_value_labels = True
self._read_strls()
# Read data
assert self._dtype is not None
dtype = self._dtype
- max_read_len = (self.nobs - self._lines_read) * dtype.itemsize
+ max_read_len = (self._nobs - self._lines_read) * dtype.itemsize
read_len = nrows * dtype.itemsize
read_len = min(read_len, max_read_len)
if read_len <= 0:
@@ -1692,31 +1742,30 @@ def read(
# we are reading the file incrementally
if convert_categoricals:
self._read_value_labels()
- self.close()
raise StopIteration
offset = self._lines_read * dtype.itemsize
- self.path_or_buf.seek(self.data_location + offset)
- read_lines = min(nrows, self.nobs - self._lines_read)
+ self._path_or_buf.seek(self._data_location + offset)
+ read_lines = min(nrows, self._nobs - self._lines_read)
raw_data = np.frombuffer(
- self.path_or_buf.read(read_len), dtype=dtype, count=read_lines
+ self._path_or_buf.read(read_len), dtype=dtype, count=read_lines
)
self._lines_read += read_lines
- if self._lines_read == self.nobs:
+ if self._lines_read == self._nobs:
self._can_read_value_labels = True
self._data_read = True
# if necessary, swap the byte order to native here
- if self.byteorder != self._native_byteorder:
+ if self._byteorder != self._native_byteorder:
raw_data = raw_data.byteswap().newbyteorder()
if convert_categoricals:
self._read_value_labels()
if len(raw_data) == 0:
- data = DataFrame(columns=self.varlist)
+ data = DataFrame(columns=self._varlist)
else:
data = DataFrame.from_records(raw_data)
- data.columns = Index(self.varlist)
+ data.columns = Index(self._varlist)
# If index is not specified, use actual row number rather than
# restarting at 0 for each chunk.
@@ -1725,32 +1774,28 @@ def read(
data.index = Index(rng) # set attr instead of set_index to avoid copy
if columns is not None:
- try:
- data = self._do_select_columns(data, columns)
- except ValueError:
- self.close()
- raise
+ data = self._do_select_columns(data, columns)
# Decode strings
- for col, typ in zip(data, self.typlist):
+ for col, typ in zip(data, self._typlist):
if type(typ) is int:
data[col] = data[col].apply(self._decode, convert_dtype=True)
data = self._insert_strls(data)
- cols_ = np.where([dtyp is not None for dtyp in self.dtyplist])[0]
+ cols_ = np.where([dtyp is not None for dtyp in self._dtyplist])[0]
# Convert columns (if needed) to match input type
ix = data.index
requires_type_conversion = False
data_formatted = []
for i in cols_:
- if self.dtyplist[i] is not None:
+ if self._dtyplist[i] is not None:
col = data.columns[i]
dtype = data[col].dtype
- if dtype != np.dtype(object) and dtype != self.dtyplist[i]:
+ if dtype != np.dtype(object) and dtype != self._dtyplist[i]:
requires_type_conversion = True
data_formatted.append(
- (col, Series(data[col], ix, self.dtyplist[i]))
+ (col, Series(data[col], ix, self._dtyplist[i]))
)
else:
data_formatted.append((col, data[col]))
@@ -1765,20 +1810,16 @@ def read(
def any_startswith(x: str) -> bool:
return any(x.startswith(fmt) for fmt in _date_formats)
- cols = np.where([any_startswith(x) for x in self.fmtlist])[0]
+ cols = np.where([any_startswith(x) for x in self._fmtlist])[0]
for i in cols:
col = data.columns[i]
- try:
- data[col] = _stata_elapsed_date_to_datetime_vec(
- data[col], self.fmtlist[i]
- )
- except ValueError:
- self.close()
- raise
+ data[col] = _stata_elapsed_date_to_datetime_vec(
+ data[col], self._fmtlist[i]
+ )
- if convert_categoricals and self.format_version > 108:
+ if convert_categoricals and self._format_version > 108:
data = self._do_convert_categoricals(
- data, self.value_label_dict, self.lbllist, order_categoricals
+ data, self._value_label_dict, self._lbllist, order_categoricals
)
if not preserve_dtypes:
@@ -1809,7 +1850,7 @@ def _do_convert_missing(self, data: DataFrame, convert_missing: bool) -> DataFra
# Check for missing values, and replace if found
replacements = {}
for i, colname in enumerate(data):
- fmt = self.typlist[i]
+ fmt = self._typlist[i]
if fmt not in self.VALID_RANGE:
continue
@@ -1855,7 +1896,7 @@ def _do_convert_missing(self, data: DataFrame, convert_missing: bool) -> DataFra
def _insert_strls(self, data: DataFrame) -> DataFrame:
if not hasattr(self, "GSO") or len(self.GSO) == 0:
return data
- for i, typ in enumerate(self.typlist):
+ for i, typ in enumerate(self._typlist):
if typ != "Q":
continue
# Wrap v_o in a string to allow uint64 values as keys on 32bit OS
@@ -1881,15 +1922,15 @@ def _do_select_columns(self, data: DataFrame, columns: Sequence[str]) -> DataFra
lbllist = []
for col in columns:
i = data.columns.get_loc(col)
- dtyplist.append(self.dtyplist[i])
- typlist.append(self.typlist[i])
- fmtlist.append(self.fmtlist[i])
- lbllist.append(self.lbllist[i])
-
- self.dtyplist = dtyplist
- self.typlist = typlist
- self.fmtlist = fmtlist
- self.lbllist = lbllist
+ dtyplist.append(self._dtyplist[i])
+ typlist.append(self._typlist[i])
+ fmtlist.append(self._fmtlist[i])
+ lbllist.append(self._lbllist[i])
+
+ self._dtyplist = dtyplist
+ self._typlist = typlist
+ self._fmtlist = fmtlist
+ self._lbllist = lbllist
self._column_selector_set = True
return data[columns]
@@ -1976,8 +2017,17 @@ def data_label(self) -> str:
"""
Return data label of Stata file.
"""
+ self._ensure_open()
return self._data_label
+ @property
+ def time_stamp(self) -> str:
+ """
+ Return time stamp of Stata file.
+ """
+ self._ensure_open()
+ return self._time_stamp
+
def variable_labels(self) -> dict[str, str]:
"""
Return a dict associating each variable name with corresponding label.
@@ -1986,7 +2036,8 @@ def variable_labels(self) -> dict[str, str]:
-------
dict
"""
- return dict(zip(self.varlist, self._variable_labels))
+ self._ensure_open()
+ return dict(zip(self._varlist, self._variable_labels))
def value_labels(self) -> dict[str, dict[float, str]]:
"""
@@ -1999,7 +2050,7 @@ def value_labels(self) -> dict[str, dict[float, str]]:
if not self._value_labels_read:
self._read_value_labels()
- return self.value_label_dict
+ return self._value_label_dict
@Appender(_read_stata_doc)
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 5393a15cff19b..75e9f7b744caa 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -736,10 +736,8 @@ def test_minimal_size_col(self):
original.to_stata(path, write_index=False)
with StataReader(path) as sr:
- typlist = sr.typlist
- variables = sr.varlist
- formats = sr.fmtlist
- for variable, fmt, typ in zip(variables, formats, typlist):
+ sr._ensure_open() # The `_*list` variables are initialized here
+ for variable, fmt, typ in zip(sr._varlist, sr._fmtlist, sr._typlist):
assert int(variable[1:]) == int(fmt[1:-1])
assert int(variable[1:]) == typ
@@ -1891,6 +1889,44 @@ def test_backward_compat(version, datapath):
tm.assert_frame_equal(old_dta, expected, check_dtype=False)
+def test_direct_read(datapath, monkeypatch):
+ file_path = datapath("io", "data", "stata", "stata-compat-118.dta")
+
+ # Test that opening a file path doesn't buffer the file.
+ with StataReader(file_path) as reader:
+ # Must not have been buffered to memory
+ assert not reader.read().empty
+ assert not isinstance(reader._path_or_buf, io.BytesIO)
+
+ # Test that we use a given fp exactly, if possible.
+ with open(file_path, "rb") as fp:
+ with StataReader(fp) as reader:
+ assert not reader.read().empty
+ assert reader._path_or_buf is fp
+
+ # Test that we use a given BytesIO exactly, if possible.
+ with open(file_path, "rb") as fp:
+ with io.BytesIO(fp.read()) as bio:
+ with StataReader(bio) as reader:
+ assert not reader.read().empty
+ assert reader._path_or_buf is bio
+
+
+def test_statareader_warns_when_used_without_context(datapath):
+ file_path = datapath("io", "data", "stata", "stata-compat-118.dta")
+ with tm.assert_produces_warning(
+ ResourceWarning,
+ match="without using a context manager",
+ ):
+ sr = StataReader(file_path)
+ sr.read()
+ with tm.assert_produces_warning(
+ FutureWarning,
+ match="is not part of the public API",
+ ):
+ sr.close()
+
+
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
@pytest.mark.parametrize("use_dict", [True, False])
@pytest.mark.parametrize("infer", [True, False])
| Fixes #48700
Closes #48922 (supersedes it)
Refs pandas-dev/pandas#9245
Refs pandas-dev/pandas#37639
Regressed in 6d1541e1782a7b94797d5432922e64a97934cfa4
- [x] closes #48700
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] The existing tests for e.g. roundtripping zstandard-compressed Stata files test this code path.
- [x] Added a test that checks e.g. a fp or BytesIO passed in is the same object the reader reads.
- [x] Added a test that ensures a warning is raised if StataReader isn't being used as a context manager.
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
---
This PR is a reworking of my earlier #48922, following @bashtage's suggestions. Unfortunately I couldn't quite make it fit in about 15 lines.
Compared to that PR, StataReader required some refactoring so that it behaves as it did before (see the discussion in the other PR) while also making it possible to raise a warning when data is read without a `with` block in use. (This does not cover a user whose code doesn't use `with` but calls `.close()` by hand.)
The main refactoring is that all of the internal state of the reader object is now `_private`, and public bits are accessed via properties; this allows us to make sure the data is actually read when requested, even without reading happening in `__init__`. (Accessing those properties when the StataReader hasn't been `__enter__`ed will thus also raise a ResourceWarning.) As a bonus, the properties are now well-typed. Previously, e.g. `reader.time_stamp` had to rely on inference...
While renaming those properties, I noticed there was a bunch of repeated `struct.unpack(...read...)[0]` code that would have been `black` reformatted into very unwieldy lines following the addition of the underscores, so I decided to refactor the repeated code into smaller reading utility functions.
Finally, there's basically the same fix I made in #48922 – if the underlying stream is seekable, we can use it directly.
I also added a commit that removes the implicit automatic closing of the underlying stream, since it shouldn't be necessary anymore. If that should be removed from this PR, let me know. | https://api.github.com/repos/pandas-dev/pandas/pulls/49228 | 2022-10-21T12:50:16Z | 2023-02-23T22:09:28Z | 2023-02-23T22:09:28Z | 2023-02-24T19:25:49Z |
DOC: Fixing typos in our community page | diff --git a/doc/source/development/community.rst b/doc/source/development/community.rst
index 59689a2cf51d1..c321c9b0cccf6 100644
--- a/doc/source/development/community.rst
+++ b/doc/source/development/community.rst
@@ -77,10 +77,10 @@ Any community member can open issues to:
- Ask questions, e.g. "I noticed the behavior of a certain function
changed between versions. Is this expected?".
- Ideally your questions should be related to how pandas work rather
+ Ideally, your questions should be related to how pandas works rather
than how you use pandas. `StackOverflow <https://stackoverflow.com/>`_ is
better suited for answering usage questions, and we ask that all usage
- questions are first asked on StackOverflow. Thank you for respecting are
+ questions are first asked on StackOverflow. Thank you for respecting our
time and wishes. 🙇
Maintainers and frequent contributors might also open issues to discuss the
@@ -88,7 +88,7 @@ ongoing development of the project. For example:
- Report issues with the CI, GitHub Actions, or the performance of pandas
- Open issues relating to the internals
-- Start roadmap discussion aligning on proposals what to do in future
+- Start roadmap discussion aligning on proposals for what to do in future
releases or changes to the API.
- Open issues relating to the project's website, logo, or governance
@@ -97,7 +97,7 @@ The developer mailing list
The pandas mailing list `pandas-dev@python.org <mailto://pandas-dev@python
.org>`_ is used for long form
-conversations and to engages people in the wider community who might not
+conversations and to engage people in the wider community who might not
be active on the issue tracker but we would like to include in discussions.
.. _community.slack:
@@ -114,6 +114,6 @@ mailing list or GitHub.
If this sounds like the right place for you, you are welcome to join! Email us
at `slack@pandas.pydata.org <mailto://slack@pandas.pydata.org>`_ and let us
know that you read and agree to our `Code of Conduct <https://pandas.pydata.org/community/coc.html>`_
-😉 to get an invite. And please remember the slack is not meant to replace the
+😉 to get an invite. And please remember that slack is not meant to replace the
mailing list or issue tracker - all important announcements and conversations
should still happen there.
| @Dr-Irv do you mind having a look please?
| https://api.github.com/repos/pandas-dev/pandas/pulls/49225 | 2022-10-21T04:41:31Z | 2022-10-21T13:42:05Z | 2022-10-21T13:42:05Z | 2022-10-21T13:42:06Z |
ADM/WEB: Update CoC committee members | diff --git a/web/pandas/config.yml b/web/pandas/config.yml
index 16e1357d405a0..7f581a6754642 100644
--- a/web/pandas/config.yml
+++ b/web/pandas/config.yml
@@ -102,11 +102,10 @@ maintainers:
- charlesdong1991
- dsaxton
coc:
- - Safia Abdalla
+ - Bijay Regmi
+ - Wuraola Oyewusi
+ - Мария Чакчурина
- Tom Augspurger
- - Joris Van den Bossche
- - Camille Scott
- - Nathaniel Smith
numfocus:
- Wes McKinney
- Jeff Reback
| Updating members of the CoC committee.
CC: @regmibijay @WuraolaOyewusi @chakchurina
The list will appear in our [Team](https://pandas.pydata.org/about/team.html) page. We can also add the github usernames to the config file, and instead of just listing the names, we can list the names with links like the inactive maintainers list, or we can also have the github avatars like the active maintainers list. The reason I implemented it with the names is that I didn't know the github usernames of the previous committee members. But I don't have a preference for any format. The list of members also appears in our [CoC](https://pandas.pydata.org/community/coc.html) page, and the same applies.
@TomAugspurger @jorisvandenbossche can you make the decision on who will be in the committee, so we can move forward with this please?
| https://api.github.com/repos/pandas-dev/pandas/pulls/49224 | 2022-10-21T04:08:55Z | 2022-11-15T04:55:37Z | 2022-11-15T04:55:37Z | 2022-11-15T04:55:37Z |
DEPR: is_copy arg from take | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 4ac737bb6b29a..01745ab0736d6 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -157,6 +157,7 @@ Removal of prior version deprecations/changes
- Remove argument ``inplace`` from :meth:`MultiIndex.set_levels` and :meth:`MultiIndex.set_codes` (:issue:`35626`)
- Disallow passing positional arguments to :meth:`MultiIndex.set_levels` and :meth:`MultiIndex.set_codes` (:issue:`41485`)
- Removed argument ``try_cast`` from :meth:`DataFrame.mask`, :meth:`DataFrame.where`, :meth:`Series.mask` and :meth:`Series.where` (:issue:`38836`)
+- Removed argument ``is_copy`` from :meth:`DataFrame.take` and :meth:`Series.take` (:issue:`30615`)
- Disallow passing non-round floats to :class:`Timestamp` with ``unit="M"`` or ``unit="Y"`` (:issue:`47266`)
- Removed deprecated :meth:`Timedelta.delta`, :meth:`Timedelta.is_populated`, and :attr:`Timedelta.freq` (:issue:`46430`, :issue:`46476`)
- Removed the ``numeric_only`` keyword from :meth:`Categorical.min` and :meth:`Categorical.max` in favor of ``skipna`` (:issue:`48821`)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a30b21d04ff59..5a554b417afe7 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3794,9 +3794,7 @@ def _clear_item_cache(self) -> None:
# ----------------------------------------------------------------------
# Indexing Methods
- def take(
- self: NDFrameT, indices, axis: Axis = 0, is_copy: bool_t | None = None, **kwargs
- ) -> NDFrameT:
+ def take(self: NDFrameT, indices, axis: Axis = 0, **kwargs) -> NDFrameT:
"""
Return the elements in the given *positional* indices along an axis.
@@ -3812,13 +3810,6 @@ def take(
The axis on which to select elements. ``0`` means that we are
selecting rows, ``1`` means that we are selecting columns.
For `Series` this parameter is unused and defaults to 0.
- is_copy : bool
- Before pandas 1.0, ``is_copy=False`` can be specified to ensure
- that the return value is an actual copy. Starting with pandas 1.0,
- ``take`` always returns a copy, and the keyword is therefore
- deprecated.
-
- .. deprecated:: 1.0.0
**kwargs
For compatibility with :meth:`numpy.take`. Has no effect on the
output.
@@ -3877,13 +3868,6 @@ class max_speed
1 monkey mammal NaN
3 lion mammal 80.5
"""
- if is_copy is not None:
- warnings.warn(
- "is_copy is deprecated and will be removed in a future version. "
- "'take' always returns a copy, so there is no need to specify this.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
nv.validate_take((), kwargs)
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 2bac228ef74a0..47452d885543e 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -856,12 +856,9 @@ def take(
self,
indices: TakeIndexer,
axis: Axis = 0,
- is_copy: bool | None = None,
**kwargs,
) -> Series:
- result = self._op_via_apply(
- "take", indices=indices, axis=axis, is_copy=is_copy, **kwargs
- )
+ result = self._op_via_apply("take", indices=indices, axis=axis, **kwargs)
return result
@doc(Series.skew.__doc__)
@@ -2254,12 +2251,9 @@ def take(
self,
indices: TakeIndexer,
axis: Axis | None = 0,
- is_copy: bool | None = None,
**kwargs,
) -> DataFrame:
- result = self._op_via_apply(
- "take", indices=indices, axis=axis, is_copy=is_copy, **kwargs
- )
+ result = self._op_via_apply("take", indices=indices, axis=axis, **kwargs)
return result
@doc(DataFrame.skew.__doc__)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index bf062ec53eec1..b7d12158fd909 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -889,16 +889,7 @@ def axes(self) -> list[Index]:
# Indexing Methods
@Appender(NDFrame.take.__doc__)
- def take(
- self, indices, axis: Axis = 0, is_copy: bool | None = None, **kwargs
- ) -> Series:
- if is_copy is not None:
- warnings.warn(
- "is_copy is deprecated and will be removed in a future version. "
- "'take' always returns a copy, so there is no need to specify this.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
+ def take(self, indices, axis: Axis = 0, **kwargs) -> Series:
nv.validate_take((), kwargs)
indices = ensure_platform_int(indices)
diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py
index 80d9bde784018..a6c516b51c6c5 100644
--- a/pandas/tests/generic/test_generic.py
+++ b/pandas/tests/generic/test_generic.py
@@ -429,21 +429,6 @@ def test_take_invalid_kwargs(self, frame_or_series):
with pytest.raises(ValueError, match=msg):
obj.take(indices, mode="clip")
- @pytest.mark.parametrize("is_copy", [True, False])
- def test_depr_take_kwarg_is_copy(self, is_copy, frame_or_series):
- # GH 27357
- obj = DataFrame({"A": [1, 2, 3]})
- obj = tm.get_obj(obj, frame_or_series)
-
- msg = (
- "is_copy is deprecated and will be removed in a future version. "
- "'take' always returns a copy, so there is no need to specify this."
- )
- with tm.assert_produces_warning(FutureWarning) as w:
- obj.take([0, 1], is_copy=is_copy)
-
- assert w[0].message.args[0] == msg
-
def test_axis_classmethods(self, frame_or_series):
box = frame_or_series
obj = box(dtype=object)
| - [x] Added an entry in the latest `doc/source/whatsnew/v2.0.0.rst` file if fixing a bug or adding a new feature.
Deprecation introduced in #30615. | https://api.github.com/repos/pandas-dev/pandas/pulls/49221 | 2022-10-21T00:01:51Z | 2022-10-21T20:16:51Z | 2022-10-21T20:16:51Z | 2022-10-26T10:18:05Z |
DEPR: Categorical, Index | diff --git a/doc/redirects.csv b/doc/redirects.csv
index 6a79ed5958089..42f91a8b9884f 100644
--- a/doc/redirects.csv
+++ b/doc/redirects.csv
@@ -637,7 +637,6 @@ generated/pandas.Index.argmax,../reference/api/pandas.Index.argmax
generated/pandas.Index.argmin,../reference/api/pandas.Index.argmin
generated/pandas.Index.argsort,../reference/api/pandas.Index.argsort
generated/pandas.Index.array,../reference/api/pandas.Index.array
-generated/pandas.Index.asi8,../reference/api/pandas.Index.asi8
generated/pandas.Index.asof,../reference/api/pandas.Index.asof
generated/pandas.Index.asof_locs,../reference/api/pandas.Index.asof_locs
generated/pandas.Index.astype,../reference/api/pandas.Index.astype
@@ -680,7 +679,6 @@ generated/pandas.Index.isin,../reference/api/pandas.Index.isin
generated/pandas.Index.is_integer,../reference/api/pandas.Index.is_integer
generated/pandas.Index.is_interval,../reference/api/pandas.Index.is_interval
generated/pandas.Index.is_lexsorted_for_tuple,../reference/api/pandas.Index.is_lexsorted_for_tuple
-generated/pandas.Index.is_mixed,../reference/api/pandas.Index.is_mixed
generated/pandas.Index.is_monotonic_decreasing,../reference/api/pandas.Index.is_monotonic_decreasing
generated/pandas.Index.is_monotonic,../reference/api/pandas.Index.is_monotonic
generated/pandas.Index.is_monotonic_increasing,../reference/api/pandas.Index.is_monotonic_increasing
@@ -688,7 +686,6 @@ generated/pandas.Index.isna,../reference/api/pandas.Index.isna
generated/pandas.Index.isnull,../reference/api/pandas.Index.isnull
generated/pandas.Index.is_numeric,../reference/api/pandas.Index.is_numeric
generated/pandas.Index.is_object,../reference/api/pandas.Index.is_object
-generated/pandas.Index.is_type_compatible,../reference/api/pandas.Index.is_type_compatible
generated/pandas.Index.is_unique,../reference/api/pandas.Index.is_unique
generated/pandas.Index.item,../reference/api/pandas.Index.item
generated/pandas.Index.join,../reference/api/pandas.Index.join
diff --git a/doc/source/reference/index.rst b/doc/source/reference/index.rst
index fc920db671ee5..37cb6767f760e 100644
--- a/doc/source/reference/index.rst
+++ b/doc/source/reference/index.rst
@@ -47,9 +47,7 @@ public functions related to data types in pandas.
..
.. toctree::
- api/pandas.Index.asi8
api/pandas.Index.holds_integer
- api/pandas.Index.is_type_compatible
api/pandas.Index.nlevels
api/pandas.Index.sort
diff --git a/doc/source/reference/indexing.rst b/doc/source/reference/indexing.rst
index 533f4ee19f83b..a52ee92ea5921 100644
--- a/doc/source/reference/indexing.rst
+++ b/doc/source/reference/indexing.rst
@@ -68,7 +68,6 @@ Modifying and computations
Index.is_floating
Index.is_integer
Index.is_interval
- Index.is_mixed
Index.is_numeric
Index.is_object
Index.min
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 4ac737bb6b29a..4676c9f917253 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -146,6 +146,14 @@ Deprecations
Removal of prior version deprecations/changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- Removed deprecated :meth:`Categorical.to_dense`, use ``np.asarray(cat)`` instead (:issue:`32639`)
+- Removed deprecated :meth:`Categorical.take_nd` (:issue:`27745`)
+- Removed deprecated :meth:`Categorical.mode`, use ``Series(cat).mode()`` instead (:issue:`45033`)
+- Removed deprecated :meth:`Categorical.is_dtype_equal` and :meth:`CategoricalIndex.is_dtype_equal` (:issue:`37545`)
+- Removed deprecated :meth:`CategoricalIndex.take_nd` (:issue:`30702`)
+- Removed deprecated :meth:`Index.is_type_compatible` (:issue:`42113`)
+- Removed deprecated :meth:`Index.is_mixed`, check ``index.inferred_type`` directly instead (:issue:`32922`)
+- Removed deprecated :meth:`Index.asi8` (:issue:`37877`)
- Enforced deprecation disallowing passing a timezone-aware :class:`Timestamp` and ``dtype="datetime64[ns]"`` to :class:`Series` or :class:`DataFrame` constructors (:issue:`41555`)
- Enforced deprecation disallowing passing a sequence of timezone-aware values and ``dtype="datetime64[ns]"`` to to :class:`Series` or :class:`DataFrame` constructors (:issue:`41555`)
- Removed Date parser functions :func:`~pandas.io.date_converters.parse_date_time`,
diff --git a/pandas/conftest.py b/pandas/conftest.py
index e8f982949677c..074b6068a4518 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -155,7 +155,6 @@ def pytest_collection_modifyitems(items, config) -> None:
("Series.append", "The series.append method is deprecated"),
("dtypes.common.is_categorical", "is_categorical is deprecated"),
("Categorical.replace", "Categorical.replace is deprecated"),
- ("Index.is_mixed", "Index.is_mixed is deprecated"),
("MultiIndex._is_lexsorted", "MultiIndex.is_lexsorted is deprecated"),
# Docstring divides by zero to show behavior difference
("missing.mask_zero_div_zero", "divide by zero encountered"),
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 5a1d812cda53c..164dc74836508 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2052,24 +2052,6 @@ def _values_for_rank(self):
)
return values
- def to_dense(self) -> np.ndarray:
- """
- Return my 'dense' representation
-
- For internal compatibility with numpy arrays.
-
- Returns
- -------
- dense : array
- """
- warn(
- "Categorical.to_dense is deprecated and will be removed in "
- "a future version. Use np.asarray(cat) instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return np.asarray(self)
-
# ------------------------------------------------------------------
# NDArrayBackedExtensionArray compat
@@ -2101,17 +2083,6 @@ def _unbox_scalar(self, key) -> int:
# ------------------------------------------------------------------
- def take_nd(
- self, indexer, allow_fill: bool = False, fill_value=None
- ) -> Categorical:
- # GH#27745 deprecate alias that other EAs dont have
- warn(
- "Categorical.take_nd is deprecated, use Categorical.take instead",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.take(indexer, allow_fill=allow_fill, fill_value=fill_value)
-
def __iter__(self) -> Iterator:
"""
Returns an Iterator over the values of this Categorical.
@@ -2539,18 +2510,6 @@ def _categories_match_up_to_permutation(self, other: Categorical) -> bool:
"""
return hash(self.dtype) == hash(other.dtype)
- def is_dtype_equal(self, other) -> bool:
- warn(
- "Categorical.is_dtype_equal is deprecated and will be removed "
- "in a future version",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- try:
- return self._categories_match_up_to_permutation(other)
- except (AttributeError, TypeError):
- return False
-
def describe(self) -> DataFrame:
"""
Describes this Categorical
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 1ae5b97cc0913..9d694face3941 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -680,23 +680,6 @@ def _dtype_to_subclass(cls, dtype: DtypeObj):
See each method's docstring.
"""
- @property
- def asi8(self):
- """
- Integer representation of the values.
-
- Returns
- -------
- ndarray
- An ndarray with int64 dtype.
- """
- warnings.warn(
- "Index.asi8 is deprecated and will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return None
-
@classmethod
def _simple_new(cls: type[_IndexT], values, name: Hashable = None) -> _IndexT:
"""
@@ -2451,7 +2434,6 @@ def is_boolean(self) -> bool:
is_object : Check if the Index is of the object dtype.
is_categorical : Check if the Index holds categorical data.
is_interval : Check if the Index holds Interval objects.
- is_mixed : Check if the Index holds data with mixed data types.
Examples
--------
@@ -2487,7 +2469,6 @@ def is_integer(self) -> bool:
is_object : Check if the Index is of the object dtype.
is_categorical : Check if the Index holds categorical data.
is_interval : Check if the Index holds Interval objects.
- is_mixed : Check if the Index holds data with mixed data types.
Examples
--------
@@ -2527,7 +2508,6 @@ def is_floating(self) -> bool:
is_object : Check if the Index is of the object dtype.
is_categorical : Check if the Index holds categorical data.
is_interval : Check if the Index holds Interval objects.
- is_mixed : Check if the Index holds data with mixed data types.
Examples
--------
@@ -2567,7 +2547,6 @@ def is_numeric(self) -> bool:
is_object : Check if the Index is of the object dtype.
is_categorical : Check if the Index holds categorical data.
is_interval : Check if the Index holds Interval objects.
- is_mixed : Check if the Index holds data with mixed data types.
Examples
--------
@@ -2611,7 +2590,6 @@ def is_object(self) -> bool:
is_numeric : Check if the Index only consists of numeric data.
is_categorical : Check if the Index holds categorical data.
is_interval : Check if the Index holds Interval objects.
- is_mixed : Check if the Index holds data with mixed data types.
Examples
--------
@@ -2653,7 +2631,6 @@ def is_categorical(self) -> bool:
is_numeric : Check if the Index only consists of numeric data.
is_object : Check if the Index is of the object dtype.
is_interval : Check if the Index holds Interval objects.
- is_mixed : Check if the Index holds data with mixed data types.
Examples
--------
@@ -2697,7 +2674,6 @@ def is_interval(self) -> bool:
is_numeric : Check if the Index only consists of numeric data.
is_object : Check if the Index is of the object dtype.
is_categorical : Check if the Index holds categorical data.
- is_mixed : Check if the Index holds data with mixed data types.
Examples
--------
@@ -2712,44 +2688,6 @@ def is_interval(self) -> bool:
"""
return self.inferred_type in ["interval"]
- @final
- def is_mixed(self) -> bool:
- """
- Check if the Index holds data with mixed data types.
-
- Returns
- -------
- bool
- Whether or not the Index holds data with mixed data types.
-
- See Also
- --------
- is_boolean : Check if the Index only consists of booleans.
- is_integer : Check if the Index only consists of integers.
- is_floating : Check if the Index is a floating type.
- is_numeric : Check if the Index only consists of numeric data.
- is_object : Check if the Index is of the object dtype.
- is_categorical : Check if the Index holds categorical data.
- is_interval : Check if the Index holds Interval objects.
-
- Examples
- --------
- >>> idx = pd.Index(['a', np.nan, 'b'])
- >>> idx.is_mixed()
- True
-
- >>> idx = pd.Index([1.0, 2.0, 3.0, 5.0])
- >>> idx.is_mixed()
- False
- """
- warnings.warn(
- "Index.is_mixed is deprecated and will be removed in a future version. "
- "Check index.inferred_type directly instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.inferred_type in ["mixed"]
-
@final
def holds_integer(self) -> bool:
"""
@@ -5312,18 +5250,6 @@ def _is_memory_usage_qualified(self) -> bool:
"""
return self.is_object()
- def is_type_compatible(self, kind: str_t) -> bool:
- """
- Whether the index type is compatible with the provided type.
- """
- warnings.warn(
- "Index.is_type_compatible is deprecated and will be removed in a "
- "future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return kind == self.inferred_type
-
def __contains__(self, key: Any) -> bool:
"""
Return a boolean indicating whether the provided key is in the index.
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index baa938a6f7646..58b533cb576d9 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -61,7 +61,6 @@
"ordered",
"_reverse_indexer",
"searchsorted",
- "is_dtype_equal",
"min",
"max",
],
@@ -489,16 +488,6 @@ def _maybe_cast_listlike_indexer(self, values) -> CategoricalIndex:
def _is_comparable_dtype(self, dtype: DtypeObj) -> bool:
return self.categories._is_comparable_dtype(dtype)
- def take_nd(self, *args, **kwargs) -> CategoricalIndex:
- """Alias for `take`"""
- warnings.warn(
- "CategoricalIndex.take_nd is deprecated, use CategoricalIndex.take "
- "instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.take(*args, **kwargs)
-
def map(self, mapper):
"""
Map values using input an input mapping or function.
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index dafa5e9680eec..63acb2ab24422 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -13,7 +13,6 @@
cast,
final,
)
-import warnings
import numpy as np
@@ -29,14 +28,16 @@
parsing,
to_offset,
)
-from pandas._typing import Axis
+from pandas._typing import (
+ Axis,
+ npt,
+)
from pandas.compat.numpy import function as nv
from pandas.util._decorators import (
Appender,
cache_readonly,
doc,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
is_categorical_dtype,
@@ -80,7 +81,7 @@
DatetimeLikeArrayMixin,
cache=True,
)
-@inherit_names(["mean", "asi8", "freq", "freqstr"], DatetimeLikeArrayMixin)
+@inherit_names(["mean", "freq", "freqstr"], DatetimeLikeArrayMixin)
class DatetimeIndexOpsMixin(NDArrayBackedExtensionIndex):
"""
Common ops mixin to support a unified interface datetimelike Index.
@@ -93,6 +94,10 @@ class DatetimeIndexOpsMixin(NDArrayBackedExtensionIndex):
freqstr: str | None
_resolution_obj: Resolution
+ @property
+ def asi8(self) -> npt.NDArray[np.int64]:
+ return self._data.asi8
+
# ------------------------------------------------------------------------
@cache_readonly
@@ -394,15 +399,6 @@ def _with_freq(self, freq):
arr = self._data._with_freq(freq)
return type(self)._simple_new(arr, name=self._name)
- def is_type_compatible(self, kind: str) -> bool:
- warnings.warn(
- f"{type(self).__name__}.is_type_compatible is deprecated and will be "
- "removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return kind in self._data._infer_matches
-
@property
def values(self) -> np.ndarray:
# NB: For Datetime64TZ this is lossy
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index a5408f19456dd..32f6fe06f3587 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -386,7 +386,7 @@ def is_full(self) -> bool:
if not self.is_monotonic_increasing:
raise ValueError("Index is not monotonic")
values = self.asi8
- return ((values[1:] - values[:-1]) < 2).all()
+ return bool(((values[1:] - values[:-1]) < 2).all())
@property
def inferred_type(self) -> str:
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index 9b158a6b5aed2..da47aa549dfa3 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -158,7 +158,7 @@ def to_numeric(
elif isinstance(arg, ABCIndex):
is_index = True
if needs_i8_conversion(arg.dtype):
- values = arg.asi8
+ values = arg.view("i8")
else:
values = arg.values
elif isinstance(arg, (list, tuple)):
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 80b782d582561..139909a77f94a 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -16,6 +16,7 @@
Hashable,
Iterator,
Sized,
+ cast,
)
import warnings
@@ -442,7 +443,8 @@ def _insert_on_column(self, result: DataFrame, obj: DataFrame) -> None:
def _index_array(self):
# TODO: why do we get here with e.g. MultiIndex?
if needs_i8_conversion(self._on.dtype):
- return self._on.asi8
+ idx = cast("PeriodIndex | DatetimeIndex | TimedeltaIndex", self._on)
+ return idx.asi8
return None
def _resolve_output(self, out: DataFrame, obj: DataFrame) -> DataFrame:
diff --git a/pandas/tests/arrays/categorical/test_api.py b/pandas/tests/arrays/categorical/test_api.py
index 377ab530d8733..e3fd73aaa9b1c 100644
--- a/pandas/tests/arrays/categorical/test_api.py
+++ b/pandas/tests/arrays/categorical/test_api.py
@@ -330,12 +330,6 @@ def test_set_categories(self):
tm.assert_numpy_array_equal(np.asarray(c), np.asarray(c2))
- def test_to_dense_deprecated(self):
- cat = Categorical(["a", "b", "c", "a"], ordered=True)
-
- with tm.assert_produces_warning(FutureWarning):
- cat.to_dense()
-
@pytest.mark.parametrize(
"values, categories, new_categories",
[
diff --git a/pandas/tests/arrays/categorical/test_dtypes.py b/pandas/tests/arrays/categorical/test_dtypes.py
index ddaf6aa46bc4f..7447daadbdd48 100644
--- a/pandas/tests/arrays/categorical/test_dtypes.py
+++ b/pandas/tests/arrays/categorical/test_dtypes.py
@@ -15,13 +15,6 @@
class TestCategoricalDtypes:
- def test_is_dtype_equal_deprecated(self):
- # GH#37545
- c1 = Categorical(list("aabca"), categories=list("abc"), ordered=False)
-
- with tm.assert_produces_warning(FutureWarning):
- c1.is_dtype_equal(c1)
-
def test_categories_match_up_to_permutation(self):
# test dtype comparisons between cats
diff --git a/pandas/tests/arrays/categorical/test_take.py b/pandas/tests/arrays/categorical/test_take.py
index fbdbea1dae3b2..fb79fe4923522 100644
--- a/pandas/tests/arrays/categorical/test_take.py
+++ b/pandas/tests/arrays/categorical/test_take.py
@@ -1,10 +1,7 @@
import numpy as np
import pytest
-from pandas import (
- Categorical,
- Index,
-)
+from pandas import Categorical
import pandas._testing as tm
@@ -84,12 +81,3 @@ def test_take_fill_value_new_raises(self):
xpr = r"Cannot setitem on a Categorical with a new category \(d\)"
with pytest.raises(TypeError, match=xpr):
cat.take([0, 1, -1], fill_value="d", allow_fill=True)
-
- def test_take_nd_deprecated(self):
- cat = Categorical(["a", "b", "c"])
- with tm.assert_produces_warning(FutureWarning):
- cat.take_nd([0, 1])
-
- ci = Index(cat)
- with tm.assert_produces_warning(FutureWarning):
- ci.take_nd([0, 1])
diff --git a/pandas/tests/indexes/test_any_index.py b/pandas/tests/indexes/test_any_index.py
index 1745e5680c721..6868279776a91 100644
--- a/pandas/tests/indexes/test_any_index.py
+++ b/pandas/tests/indexes/test_any_index.py
@@ -67,20 +67,6 @@ def test_ravel_deprecation(index):
index.ravel()
-def test_is_type_compatible_deprecation(index):
- # GH#42113
- msg = "is_type_compatible is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- index.is_type_compatible(index.inferred_type)
-
-
-def test_is_mixed_deprecated(index):
- # GH#32922
- msg = "Index.is_mixed is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- index.is_mixed()
-
-
class TestConversion:
def test_to_series(self, index):
# assert that we are creating a copy of the index
diff --git a/pandas/tests/indexes/test_common.py b/pandas/tests/indexes/test_common.py
index 8635aed3559d4..834be77cbdcda 100644
--- a/pandas/tests/indexes/test_common.py
+++ b/pandas/tests/indexes/test_common.py
@@ -23,7 +23,6 @@
MultiIndex,
PeriodIndex,
RangeIndex,
- TimedeltaIndex,
)
import pandas._testing as tm
from pandas.core.api import NumericIndex
@@ -423,16 +422,6 @@ def test_astype_preserves_name(self, index, dtype):
else:
assert result.name == index.name
- def test_asi8_deprecation(self, index):
- # GH#37877
- if isinstance(index, (DatetimeIndex, TimedeltaIndex, PeriodIndex)):
- warn = None
- else:
- warn = FutureWarning
-
- with tm.assert_produces_warning(warn):
- index.asi8
-
def test_hasnans_isnans(self, index_flat):
# GH#11343, added tests for hasnans / isnans
index = index_flat
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49220 | 2022-10-20T23:34:49Z | 2022-10-21T20:19:42Z | 2022-10-21T20:19:42Z | 2022-10-21T20:27:00Z |
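The methods removed in the diff above all followed the same deprecate-then-remove pattern: an alias warns with `FutureWarning` and forwards to the replacement for a release cycle before being deleted. As a minimal stdlib-only illustration (the `Catalog` class here is hypothetical, not pandas code):

```python
import warnings


class Catalog:
    def take(self, indexer):
        return [("item", i) for i in indexer]

    def take_nd(self, indexer):
        # deprecated alias: warn, then forward to the supported method;
        # a later release removes this method entirely, as in the diff above
        warnings.warn(
            "Catalog.take_nd is deprecated, use Catalog.take instead",
            FutureWarning,
            stacklevel=2,
        )
        return self.take(indexer)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = Catalog().take_nd([0, 1])

assert result == [("item", 0), ("item", 1)]
assert caught and issubclass(caught[0].category, FutureWarning)
```

The accompanying test-suite changes then drop the `tm.assert_produces_warning(FutureWarning)` checks, since there is no longer a deprecated alias to exercise.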
Fix code triggering superfluous-parens pylint messages | diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py
index f0f3e7f19db50..bea3da589d86e 100644
--- a/pandas/core/computation/pytables.py
+++ b/pandas/core/computation/pytables.py
@@ -470,7 +470,7 @@ def visit_Attribute(self, node, **kwargs):
# try to get the value to see if we are another expression
try:
resolved = resolved.value
- except (AttributeError):
+ except AttributeError:
pass
try:
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index db3c86b734cf4..bf473f5ea428e 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -1723,7 +1723,7 @@ def _apply(
f"Function {repr(func)} must return a DataFrame or ndarray "
f"when passed to `Styler.apply` with axis=None"
)
- if not (data.shape == result.shape):
+ if data.shape != result.shape:
raise ValueError(
f"Function {repr(func)} returned ndarray with wrong shape.\n"
f"Result has shape: {result.shape}\n"
@@ -3310,9 +3310,9 @@ def bar(
"(eg: color=['#d65f5f', '#5fba7d'])"
)
- if not (0 <= width <= 100):
+ if not 0 <= width <= 100:
raise ValueError(f"`width` must be a value in [0, 100], got {width}")
- elif not (0 <= height <= 100):
+ elif not 0 <= height <= 100:
raise ValueError(f"`height` must be a value in [0, 100], got {height}")
if subset is None:
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index a95d1b9d7cce6..30712ec1f0836 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -4945,7 +4945,7 @@ def _unconvert_index(data, kind: str, encoding: str, errors: str) -> np.ndarray
elif kind == "date":
try:
index = np.asarray([date.fromordinal(v) for v in data], dtype=object)
- except (ValueError):
+ except ValueError:
index = np.asarray([date.fromtimestamp(v) for v in data], dtype=object)
elif kind in ("integer", "float", "bool"):
index = np.asarray(data)
diff --git a/pandas/tests/groupby/test_allowlist.py b/pandas/tests/groupby/test_allowlist.py
index 93df7d1f1d4a0..d75853159b21c 100644
--- a/pandas/tests/groupby/test_allowlist.py
+++ b/pandas/tests/groupby/test_allowlist.py
@@ -315,9 +315,9 @@ def test_all_methods_categorized(mframe):
new_names -= transformation_kernels
new_names -= groupby_other_methods
- assert not (reduction_kernels & transformation_kernels)
- assert not (reduction_kernels & groupby_other_methods)
- assert not (transformation_kernels & groupby_other_methods)
+ assert not reduction_kernels & transformation_kernels
+ assert not reduction_kernels & groupby_other_methods
+ assert not transformation_kernels & groupby_other_methods
# new public method?
if new_names:
@@ -341,7 +341,7 @@ def test_all_methods_categorized(mframe):
all_categorized = reduction_kernels | transformation_kernels | groupby_other_methods
print(names)
print(all_categorized)
- if not (names == all_categorized):
+ if names != all_categorized:
msg = f"""
Some methods which are supposed to be on the Grouper class
are missing:
diff --git a/pandas/tests/indexes/datetimelike.py b/pandas/tests/indexes/datetimelike.py
index ecdbf01fd41c1..f836672cc8557 100644
--- a/pandas/tests/indexes/datetimelike.py
+++ b/pandas/tests/indexes/datetimelike.py
@@ -49,7 +49,7 @@ def test_str(self, simple_index):
# test the string repr
idx = simple_index
idx.name = "foo"
- assert not (f"length={len(idx)}" in str(idx))
+ assert f"length={len(idx)}" not in str(idx)
assert "'foo'" in str(idx)
assert type(idx).__name__ in str(idx)
diff --git a/pandas/tests/indexes/timedeltas/test_indexing.py b/pandas/tests/indexes/timedeltas/test_indexing.py
index bdf299f6dbbdf..ff4b8564f86ca 100644
--- a/pandas/tests/indexes/timedeltas/test_indexing.py
+++ b/pandas/tests/indexes/timedeltas/test_indexing.py
@@ -374,7 +374,7 @@ def test_contains(self):
# GH#13603
td = to_timedelta(range(5), unit="d") + offsets.Hour(1)
for v in [NaT, None, float("nan"), np.nan]:
- assert not (v in td)
+ assert v not in td
td = to_timedelta([NaT])
for v in [NaT, None, float("nan"), np.nan]:
diff --git a/pandas/tests/io/formats/style/test_style.py b/pandas/tests/io/formats/style/test_style.py
index 192fec048a930..77a996b1f92d6 100644
--- a/pandas/tests/io/formats/style/test_style.py
+++ b/pandas/tests/io/formats/style/test_style.py
@@ -352,7 +352,7 @@ def test_clear(mi_styler_comp):
# test vars have same vales on obj and clean copy after clearing
styler.clear()
- for attr in [a for a in styler.__dict__ if not (callable(a))]:
+ for attr in [a for a in styler.__dict__ if not callable(a)]:
res = getattr(styler, attr) == getattr(clean_copy, attr)
assert all(res) if hasattr(res, "__iter__") else res
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index a2f2dce70c7f4..0b3f6395e51d7 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -985,7 +985,7 @@ def test_to_string_truncate_indices(self, index, h, w):
if w == 20:
assert has_horizontally_truncated_repr(df)
else:
- assert not (has_horizontally_truncated_repr(df))
+ assert not has_horizontally_truncated_repr(df)
with option_context("display.max_rows", 15, "display.max_columns", 15):
if h == 20 and w == 20:
assert has_doubly_truncated_repr(df)
diff --git a/pandas/tests/io/generate_legacy_storage_files.py b/pandas/tests/io/generate_legacy_storage_files.py
index 8f03655ec27cc..b66631a7d943e 100644
--- a/pandas/tests/io/generate_legacy_storage_files.py
+++ b/pandas/tests/io/generate_legacy_storage_files.py
@@ -326,7 +326,7 @@ def write_legacy_file():
# force our cwd to be the first searched
sys.path.insert(0, ".")
- if not (3 <= len(sys.argv) <= 4):
+ if not 3 <= len(sys.argv) <= 4:
exit(
"Specify output directory and storage type: generate_legacy_"
"storage_files.py <output_dir> <storage_type> "
diff --git a/pandas/tests/scalar/timedelta/test_arithmetic.py b/pandas/tests/scalar/timedelta/test_arithmetic.py
index f5cfc6fecb5d0..72ee89a4b5108 100644
--- a/pandas/tests/scalar/timedelta/test_arithmetic.py
+++ b/pandas/tests/scalar/timedelta/test_arithmetic.py
@@ -1055,13 +1055,13 @@ def __gt__(self, other):
t = Timedelta("1s")
- assert not (t == "string")
- assert not (t == 1)
- assert not (t == CustomClass())
- assert not (t == CustomClass(cmp_result=False))
+ assert t != "string"
+ assert t != 1
+ assert t != CustomClass()
+ assert t != CustomClass(cmp_result=False)
assert t < CustomClass(cmp_result=True)
- assert not (t < CustomClass(cmp_result=False))
+ assert not t < CustomClass(cmp_result=False)
assert t == CustomClass(cmp_result=True)
diff --git a/pandas/tests/scalar/timestamp/test_comparisons.py b/pandas/tests/scalar/timestamp/test_comparisons.py
index 510c29bd3893d..2c9b029bf109e 100644
--- a/pandas/tests/scalar/timestamp/test_comparisons.py
+++ b/pandas/tests/scalar/timestamp/test_comparisons.py
@@ -324,5 +324,5 @@ def __eq__(self, other) -> bool:
for left, right in [(inf, timestamp), (timestamp, inf)]:
assert left > right or left < right
assert left >= right or left <= right
- assert not (left == right)
+ assert not left == right
assert left != right
diff --git a/pandas/tests/series/methods/test_reindex.py b/pandas/tests/series/methods/test_reindex.py
index b64c7bec6ea39..60ada18410415 100644
--- a/pandas/tests/series/methods/test_reindex.py
+++ b/pandas/tests/series/methods/test_reindex.py
@@ -55,7 +55,7 @@ def test_reindex(datetime_series, string_series):
# return a copy the same index here
result = datetime_series.reindex()
- assert not (result is datetime_series)
+ assert result is not datetime_series
def test_reindex_nan():
diff --git a/pandas/tests/tseries/offsets/test_ticks.py b/pandas/tests/tseries/offsets/test_ticks.py
index 7a174b89baa9b..7e7f6dc86b8f9 100644
--- a/pandas/tests/tseries/offsets/test_ticks.py
+++ b/pandas/tests/tseries/offsets/test_ticks.py
@@ -90,11 +90,10 @@ def test_tick_equality(cls, n, m):
left = cls(n)
right = cls(m)
assert left != right
- assert not (left == right)
right = cls(n)
assert left == right
- assert not (left != right)
+ assert not left != right
if n != 0:
assert cls(n) != cls(-n)
diff --git a/pyproject.toml b/pyproject.toml
index b8568f1839f42..10e6a2bfe8436 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -91,7 +91,6 @@ disable = [
"missing-function-docstring",
"missing-module-docstring",
"singleton-comparison",
- "superfluous-parens",
"too-many-lines",
"typevar-name-incorrect-variance",
"ungrouped-imports",
@@ -139,7 +138,6 @@ disable = [
"too-many-branches",
"too-many-instance-attributes",
"too-many-locals",
- "too-many-locals",
"too-many-nested-blocks",
"too-many-public-methods",
"too-many-return-statements",
| - [ ] Contributes to #48855
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Several locations used superfluous parentheses to clarify operator precedence in potentially ambiguous situations. The parentheses were removed, and the code was refactored for clarity where relevant.
This code structure was a little confusing:
```
assert not (left == right)
assert left != right
```
I decided to remove the first line altogether, since it seems unnecessary. If it does serve a purpose, I can add it back.
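For reference, the rewrite is safe because, for a class with a well-behaved `__eq__`, `not (a == b)` and `a != b` agree; a minimal sketch (the `Thing` class is a made-up example, not from the pandas test suite):

```python
# `assert not (left == right)` parses as `not (left == right)` even without
# the parentheses, and since Python 3 derives __ne__ from __eq__ by default,
# `left != right` is an equivalent but clearer assertion.
class Thing:
    def __init__(self, n):
        self.n = n

    def __eq__(self, other):
        return isinstance(other, Thing) and self.n == other.n


left, right = Thing(1), Thing(2)
assert not left == right  # parentheses were redundant here
assert left != right      # the clearer spelling used in this PR
assert Thing(5) == Thing(5)
```

The two assertions only diverge for classes that define an inconsistent custom `__ne__`, which is why the redundant line could be dropped.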
A duplicate `"too-many-locals"` was also removed from 'pyproject.toml'. | https://api.github.com/repos/pandas-dev/pandas/pulls/49219 | 2022-10-20T22:42:11Z | 2022-10-24T20:51:20Z | 2022-10-24T20:51:20Z | 2022-10-24T20:51:27Z |
DOC: fixed typo | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 17c7653072526..945b738e52480 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -4004,7 +4004,7 @@ and data values from the values and assembles them into a ``data.frame``:
name_paths = paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
columns = list()
for (idx in seq(data_paths)) {
- # NOTE: matrices returned by h5read have to be transposed to to obtain
+ # NOTE: matrices returned by h5read have to be transposed to obtain
# required Fortran order!
data <- data.frame(t(h5read(h5File, data_paths[idx])))
names <- t(h5read(h5File, name_paths[idx]))
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master | flake8 --diff``
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/14892 | 2016-12-16T02:17:03Z | 2016-12-16T09:18:26Z | 2016-12-16T09:18:26Z | 2016-12-16T16:17:02Z |
DOC: Add documentation about cpplint | diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index 38718bc5ca19a..ecc2a5e723c45 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -113,11 +113,12 @@ want to clone your fork to your machine::
This creates the directory `pandas-yourname` and connects your repository to
the upstream (main project) *pandas* repository.
-The testing suite will run automatically on Travis-CI once your pull request is
-submitted. However, if you wish to run the test suite on a branch prior to
-submitting the pull request, then Travis-CI needs to be hooked up to your
-GitHub repository. Instructions for doing so are `here
-<http://about.travis-ci.org/docs/user/getting-started/>`__.
+The testing suite will run automatically on Travis-CI and Appveyor once your
+pull request is submitted. However, if you wish to run the test suite on a
+branch prior to submitting the pull request, then Travis-CI and/or AppVeyor
+need to be hooked up to your GitHub repository. Instructions for doing so
+are `here <http://about.travis-ci.org/docs/user/getting-started/>`__ for
+Travis-CI and `here <https://www.appveyor.com/docs/>`__ for AppVeyor.
Creating a branch
-----------------
@@ -142,7 +143,7 @@ To update this branch, you need to retrieve the changes from the master branch::
git fetch upstream
git rebase upstream/master
-This will replay your commits on top of the lastest pandas git master. If this
+This will replay your commits on top of the latest pandas git master. If this
leads to merge conflicts, you must resolve these before submitting your pull
request. If you have uncommitted changes, you will need to ``stash`` them prior
to updating. This will effectively store your changes and they can be reapplied
@@ -396,7 +397,7 @@ evocations, sphinx will try to only build the pages that have been modified.
If you want to do a full clean build, do::
python make.py clean
- python make.py build
+ python make.py html
Starting with *pandas* 0.13.1 you can tell ``make.py`` to compile only a single section
of the docs, greatly reducing the turn-around time for checking your changes.
@@ -442,18 +443,80 @@ Contributing to the code base
Code standards
--------------
+Writing good code is not just about what you write. It is also about *how* you
+write it. During testing on Travis-CI, several tools will be run to check your
+code for stylistic errors. Generating any warnings will cause the test to fail.
+Thus, good style is a requirement for submitting code to *pandas*.
+
+In addition, because a lot of people use our library, it is important that we
+do not make sudden changes to the code that could have the potential to break
+a lot of user code as a result, that is, we need it to be as *backwards compatible*
+as possible to avoid mass breakages.
+
+Additional standards are outlined on the `code style wiki
+page <https://github.com/pandas-dev/pandas/wiki/Code-Style-and-Conventions>`_.
+
+C (cpplint)
+~~~~~~~~~~~
+
+*pandas* uses the `Google <https://google.github.io/styleguide/cppguide.html>`_
+standard. Google provides an open source style checker called ``cpplint``, but we
+use a fork of it that can be found `here <https://github.com/cpplint/cpplint>`_.
+Here are *some* of the more common ``cpplint`` issues:
+
+ - we restrict line-length to 80 characters to promote readability
+ - every header file must include a header guard to avoid name collisions if re-included
+
+Travis-CI will run the `cpplint <https://pypi.python.org/pypi/cpplint>`_ tool
+and report any stylistic errors in your code. Therefore, it is helpful before
+submitting code to run the check yourself::
+
+ cpplint --extensions=c,h --headers=h --filter=-readability/casting,-runtime/int,-build/include_subdir modified-c-file
+
+You can also run this command on an entire directory if necessary::
+
+ cpplint --extensions=c,h --headers=h --filter=-readability/casting,-runtime/int,-build/include_subdir --recursive modified-c-directory
+
+To make your commits compliant with this standard, you can install the
+`ClangFormat <http://clang.llvm.org/docs/ClangFormat.html>`_ tool, which can be
+downloaded `here <http://llvm.org/builds/>`_. To configure, in your home directory,
+run the following command::
+
+ clang-format style=google -dump-config > .clang-format
+
+Then modify the file to ensure that any indentation width parameters are at least four.
+Once configured, you can run the tool as follows::
+
+ clang-format modified-c-file
+
+This will output what your file will look like if the changes are made, and to apply
+them, just run the following command::
+
+ clang-format -i modified-c-file
+
+To run the tool on an entire directory, you can run the following analogous commands::
+
+ clang-format modified-c-directory/*.c modified-c-directory/*.h
+ clang-format -i modified-c-directory/*.c modified-c-directory/*.h
+
+Do note that this tool is best-effort, meaning that it will try to correct as
+many errors as possible, but it may not correct *all* of them. Thus, it is
+recommended that you run ``cpplint`` to double check and make any other style
+fixes manually.
+
+Python (PEP8)
+~~~~~~~~~~~~~
+
*pandas* uses the `PEP8 <http://www.python.org/dev/peps/pep-0008/>`_ standard.
There are several tools to ensure you abide by this standard. Here are *some* of
the more common ``PEP8`` issues:
- - we restrict line-length to 80 characters to promote readability
+ - we restrict line-length to 79 characters to promote readability
- passing arguments should have spaces after commas, e.g. ``foo(arg1, arg2, kw1='bar')``
-The Travis-CI will run `flake8 <http://pypi.python.org/pypi/flake8>`_ tool and report
-any stylistic errors in your code. Generating any warnings will cause the build to fail;
-thus these are part of the requirements for submitting code to *pandas*.
-
-It is helpful before submitting code to run this yourself on the diff::
+Travis-CI will run the `flake8 <http://pypi.python.org/pypi/flake8>`_ tool
+and report any stylistic errors in your code. Therefore, it is helpful before
+submitting code to run the check yourself on the diff::
git diff master | flake8 --diff
@@ -466,8 +529,8 @@ and make these changes with::
pep8radius master --diff --in-place
-Additional standards are outlined on the `code style wiki
-page <https://github.com/pandas-dev/pandas/wiki/Code-Style-and-Conventions>`_.
+Backwards Compatibility
+~~~~~~~~~~~~~~~~~~~~~~~
Please try to maintain backward compatibility. *pandas* has lots of users with lots of
existing code, so don't break it if at all possible. If you think breakage is required,
| 1) Fix minor spelling and command errors.
2) Add that we also use AppVeyor for testing.
3) With the introduction of `cpplint`, document it and explain how to use it.
Follow-up to #14814.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14890 | 2016-12-15T21:40:07Z | 2016-12-16T11:11:01Z | 2016-12-16T11:11:01Z | 2016-12-16T15:35:09Z |
BUG: regression in DataFrame.combine_first with integer columns (GH14687) | diff --git a/doc/source/whatsnew/v0.19.2.txt b/doc/source/whatsnew/v0.19.2.txt
index dabc6036fc9ba..291b0ffde145e 100644
--- a/doc/source/whatsnew/v0.19.2.txt
+++ b/doc/source/whatsnew/v0.19.2.txt
@@ -78,7 +78,7 @@ Bug Fixes
- Bug in clipboard functions on linux with python2 with unicode and separators (:issue:`13747`)
- Bug in clipboard functions on Windows 10 and python 3 (:issue:`14362`, :issue:`12807`)
- Bug in ``.to_clipboard()`` and Excel compat (:issue:`12529`)
-
+- Bug in ``DataFrame.combine_first()`` for integer columns (:issue:`14687`).
- Bug in ``pd.read_csv()`` in which the ``dtype`` parameter was not being respected for empty data (:issue:`14712`)
- Bug in ``pd.read_csv()`` in which the ``nrows`` parameter was not being respected for large input when using the C engine for parsing (:issue:`7626`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0d4bcd781cf74..78d0f47d473c8 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3665,10 +3665,8 @@ def combine(self, other, func, fill_value=None, overwrite=True):
otherSeries[other_mask] = fill_value
# if we have different dtypes, possibily promote
- if notnull(series).all():
- new_dtype = this_dtype
- otherSeries = otherSeries.astype(new_dtype)
- else:
+ new_dtype = this_dtype
+ if not is_dtype_equal(this_dtype, other_dtype):
new_dtype = _find_common_type([this_dtype, other_dtype])
if not is_dtype_equal(this_dtype, new_dtype):
series = series.astype(new_dtype)
diff --git a/pandas/tests/frame/test_combine_concat.py b/pandas/tests/frame/test_combine_concat.py
index 5b5236843643d..c6b69dad3e6b5 100644
--- a/pandas/tests/frame/test_combine_concat.py
+++ b/pandas/tests/frame/test_combine_concat.py
@@ -725,3 +725,13 @@ def test_combine_first_period(self):
exp = pd.DataFrame({'P': exp_dts}, index=[1, 2, 3, 4, 5, 7])
tm.assert_frame_equal(res, exp)
self.assertEqual(res['P'].dtype, 'object')
+
+ def test_combine_first_int(self):
+ # GH14687 - integer series that do not align exactly
+
+ df1 = pd.DataFrame({'a': [0, 1, 3, 5]}, dtype='int64')
+ df2 = pd.DataFrame({'a': [1, 4]}, dtype='int64')
+
+ res = df1.combine_first(df2)
+ tm.assert_frame_equal(res, df1)
+ self.assertEqual(res['a'].dtype, 'int64')
| closes #14687
| https://api.github.com/repos/pandas-dev/pandas/pulls/14886 | 2016-12-15T10:53:24Z | 2016-12-16T09:43:42Z | 2016-12-16T09:43:42Z | 2016-12-16T09:43:42Z |
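A minimal sketch of the behavior the test above pins down (run against a recent pandas, outside the original patch): `combine_first` prefers the calling frame's values and only fills its missing entries from the other frame, so a hole-free integer frame should come back unchanged and keep an integer dtype rather than being upcast to float.

```python
import pandas as pd

# Two integer frames whose indexes do not align exactly:
# df1 covers rows 0-3, df2 only rows 0-1.
df1 = pd.DataFrame({'a': [0, 1, 3, 5]}, dtype='int64')
df2 = pd.DataFrame({'a': [1, 4]}, dtype='int64')

# df1 has no missing values, so nothing is taken from df2 and the
# result should equal df1 with an integer dtype preserved.
res = df1.combine_first(df2)
print(res['a'].dtype)
```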
TST: Test datetime array assignment with different units (#7492) | diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py
index 9758c2b9c805e..c6c3b4f43b55a 100644
--- a/pandas/tests/frame/test_timeseries.py
+++ b/pandas/tests/frame/test_timeseries.py
@@ -366,6 +366,21 @@ def test_operation_on_NaT(self):
exp = pd.Series([pd.NaT], index=["foo"])
tm.assert_series_equal(res, exp)
+ def test_datetime_assignment_with_NaT_and_diff_time_units(self):
+ # GH 7492
+ data_ns = np.array([1, 'nat'], dtype='datetime64[ns]')
+ result = pd.Series(data_ns).to_frame()
+ result['new'] = data_ns
+ expected = pd.DataFrame({0: [1, None],
+ 'new': [1, None]}, dtype='datetime64[ns]')
+ tm.assert_frame_equal(result, expected)
+ # OutOfBoundsDatetime error shouldn't occur
+ data_s = np.array([1, 'nat'], dtype='datetime64[s]')
+ result['new'] = data_s
+ expected = pd.DataFrame({0: [1, None],
+ 'new': [1e9, None]}, dtype='datetime64[ns]')
+ tm.assert_frame_equal(result, expected)
+
if __name__ == '__main__':
import nose
| - [x] closes #7492
- [x] tests added / passed
- [x] passes ``git diff upstream/master | flake8 --diff``
The example in this issue currently works in master. Added a test to confirm. Based on a quick skim of the recent PRs, doesn't look like it was fixed recently.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14884 | 2016-12-15T06:06:56Z | 2016-12-17T23:13:43Z | 2016-12-17T23:13:43Z | 2016-12-21T04:20:11Z |
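The unit semantics the test above relies on can be sketched on their own (recent pandas/numpy assumed): the same integer payload means different instants depending on the datetime64 unit, which is why the `[s]` array is rescaled on assignment instead of raising `OutOfBoundsDatetime`.

```python
import numpy as np
import pandas as pd

# A value of 1 in a datetime64[s] array means "1 second after the epoch";
# the same integer in a datetime64[ns] array means "1 nanosecond after it".
data_s = np.array([1, 'nat'], dtype='datetime64[s]')
s = pd.Series(data_s)
print(s.iloc[0])           # one second past the epoch
print(pd.isna(s.iloc[1]))  # the 'nat' entry is missing
```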
DOC: add floats and ints missing as acceptable arguments for pandas.t… | diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index 326bc5be3fd8f..21e1c9744aa88 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -183,7 +183,7 @@ def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False,
Parameters
----------
- arg : string, datetime, list, tuple, 1-d array, Series
+ arg : integer, float, string, datetime, list, tuple, 1-d array, Series
.. versionadded: 0.18.1
| - [ ] closes #14556
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master | flake8 --diff``
- [ ] whatsnew entry
…o_datetime | https://api.github.com/repos/pandas-dev/pandas/pulls/14864 | 2016-12-12T10:21:00Z | 2016-12-12T11:10:35Z | 2016-12-12T11:10:35Z | 2017-03-24T20:14:03Z |
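A small illustration of the documented behavior (recent pandas assumed, not part of the patch): integer and float arguments to `to_datetime` are interpreted as offsets from the epoch, scaled by the `unit` keyword.

```python
import pandas as pd

# One billion seconds after 1970-01-01.
ts = pd.to_datetime(1_000_000_000, unit='s')
print(ts)  # 2001-09-09 01:46:40
```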
TST: Parse dates with empty space (#6428) | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 75f36c5274cd2..17c7653072526 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -867,6 +867,12 @@ data columns:
index_col=0) #index is the nominal column
df
+.. note::
+ If a column or index contains an unparseable date, the entire column or
+ index will be returned unaltered as an object data type. For non-standard
+ datetime parsing, use :func:`to_datetime` after ``pd.read_csv``.
+
+
.. note::
read_csv has a fast_path for parsing datetime strings in iso8601 format,
e.g "2000-01-01T00:01:02+00:00" and similar variations. If you can arrange
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 3cd23150bb0bf..200943324ce66 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -167,6 +167,10 @@
* dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result
'foo'
+ If a column or index contains an unparseable date, the entire column or
+ index will be returned unaltered as an object data type. For non-standard
+ datetime parsing, use ``pd.to_datetime`` after ``pd.read_csv``
+
Note: A fast-path exists for iso8601-formatted dates.
infer_datetime_format : boolean, default False
If True and parse_dates is enabled, pandas will attempt to infer the format
diff --git a/pandas/io/tests/test_date_converters.py b/pandas/io/tests/test_date_converters.py
index 95fd2d52db009..3a0dd4eaa09e5 100644
--- a/pandas/io/tests/test_date_converters.py
+++ b/pandas/io/tests/test_date_converters.py
@@ -138,6 +138,19 @@ def date_parser(date, time):
names=['datetime', 'prn']))
assert_frame_equal(df, df_correct)
+ def test_parse_date_column_with_empty_string(self):
+ # GH 6428
+ data = """case,opdate
+ 7,10/18/2006
+ 7,10/18/2008
+ 621, """
+ result = read_csv(StringIO(data), parse_dates=['opdate'])
+ expected_data = [[7, '10/18/2006'],
+ [7, '10/18/2008'],
+ [621, ' ']]
+ expected = DataFrame(expected_data, columns=['case', 'opdate'])
+ assert_frame_equal(result, expected)
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 2c3e5ca126209..beacc21912edc 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -947,6 +947,18 @@ def test_to_datetime_on_datetime64_series(self):
result = to_datetime(s)
self.assertEqual(result[0], s[0])
+ def test_to_datetime_with_space_in_series(self):
+ # GH 6428
+ s = Series(['10/18/2006', '10/18/2008', ' '])
+ tm.assertRaises(ValueError, lambda: to_datetime(s, errors='raise'))
+ result_coerce = to_datetime(s, errors='coerce')
+ expected_coerce = Series([datetime(2006, 10, 18),
+ datetime(2008, 10, 18),
+ pd.NaT])
+ tm.assert_series_equal(result_coerce, expected_coerce)
+ result_ignore = to_datetime(s, errors='ignore')
+ tm.assert_series_equal(result_ignore, s)
+
def test_to_datetime_with_apply(self):
# this is only locale tested with US/None locales
_skip_if_has_locale()
| - [x] closes #6428
- [x] tests added / passed
- [x] passes ``git diff upstream/master | flake8 --diff``
Parsing a date with an empty space no longer returns today's date on master 0.19.1. Not sure when this was fixed, but it doesn't seem like it occurred recently.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14862 | 2016-12-12T05:07:13Z | 2016-12-14T11:07:26Z | 2016-12-14T11:07:26Z | 2016-12-21T04:19:30Z |
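The `errors` handling exercised by the new test can be sketched like this (run against a recent pandas, where `errors='coerce'` is still the supported spelling):

```python
import pandas as pd

s = pd.Series(['10/18/2006', '10/18/2008', ' '])

# errors='coerce' turns unparseable entries (here the blank string)
# into NaT instead of raising a ValueError.
out = pd.to_datetime(s, errors='coerce')
print(out.isna().tolist())
```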
TST: Test DatetimeIndex weekend offset | diff --git a/pandas/tests/indexes/test_datetimelike.py b/pandas/tests/indexes/test_datetimelike.py
index 68db163be6fde..0017271fe6c97 100644
--- a/pandas/tests/indexes/test_datetimelike.py
+++ b/pandas/tests/indexes/test_datetimelike.py
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
-from datetime import datetime, timedelta, time
+from datetime import datetime, timedelta, time, date
import numpy as np
@@ -348,6 +348,19 @@ def test_construction_outofbounds(self):
# can't create DatetimeIndex
DatetimeIndex(dates)
+ def test_construction_with_ndarray(self):
+ # GH 5152
+ dates = [datetime(2013, 10, 7),
+ datetime(2013, 10, 8),
+ datetime(2013, 10, 9)]
+ data = DatetimeIndex(dates, freq=pd.tseries.frequencies.BDay()).values
+ result = DatetimeIndex(data, freq=pd.tseries.frequencies.BDay())
+ expected = DatetimeIndex(['2013-10-07',
+ '2013-10-08',
+ '2013-10-09'],
+ freq='B')
+ tm.assert_index_equal(result, expected)
+
def test_astype(self):
# GH 13149, GH 13209
idx = DatetimeIndex(['2016-05-16', 'NaT', NaT, np.NaN])
@@ -748,6 +761,26 @@ def test_difference_freq(self):
tm.assert_index_equal(idx_diff, expected)
tm.assert_attr_equal('freq', idx_diff, expected)
+ def test_week_of_month_frequency(self):
+ # GH 5348: "ValueError: Could not evaluate WOM-1SUN" shouldn't raise
+ d1 = date(2002, 9, 1)
+ d2 = date(2013, 10, 27)
+ d3 = date(2012, 9, 30)
+ idx1 = DatetimeIndex([d1, d2])
+ idx2 = DatetimeIndex([d3])
+ result_append = idx1.append(idx2)
+ expected = DatetimeIndex([d1, d2, d3])
+ tm.assert_index_equal(result_append, expected)
+ result_union = idx1.union(idx2)
+ expected = DatetimeIndex([d1, d3, d2])
+ tm.assert_index_equal(result_union, expected)
+
+ # GH 5115
+ result = date_range("2013-1-1", periods=4, freq='WOM-1SAT')
+ dates = ['2013-01-05', '2013-02-02', '2013-03-02', '2013-04-06']
+ expected = DatetimeIndex(dates, freq='WOM-1SAT')
+ tm.assert_index_equal(result, expected)
+
class TestPeriodIndex(DatetimeLike, tm.TestCase):
_holder = PeriodIndex
| closes #5348
closes #5115
closes #5152
The original issues pass on master as of 0.19.1. These related issues might have been fixed as part of #5004, but it doesn't look like any recent PRs fixed this.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14853 | 2016-12-10T21:23:26Z | 2016-12-11T14:06:54Z | 2016-12-11T14:06:54Z | 2016-12-21T04:17:50Z |
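The week-of-month alias checked above can be demonstrated on its own (recent pandas assumed): `'WOM-1SAT'` means "the first Saturday of each month".

```python
import pandas as pd

# Four consecutive "first Saturday of the month" dates from 2013-01-01.
idx = pd.date_range('2013-01-01', periods=4, freq='WOM-1SAT')
print([str(d.date()) for d in idx])
```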
TST: skip testing on windows for specific formatting which sometimes hang | diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 628095a2fcbd3..2dfeb7da07a3d 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -915,23 +915,12 @@ def test_format(self):
self._check_method_works(Index.format)
# GH 14626
- # our formatting is different by definition when we have
- # ms vs us precision (e.g. trailing zeros);
- # so don't compare this case
- def datetime_now_without_trailing_zeros():
- now = datetime.now()
-
- while str(now).endswith("000"):
- now = datetime.now()
-
- return now
-
- index = Index([datetime_now_without_trailing_zeros()])
-
# windows has different precision on datetime.datetime.now (it doesn't
# include us since the default for Timestamp shows these but Index
- # formating does not we are skipping
- if not is_platform_windows():
+ # formating does not we are skipping)
+ now = datetime.now()
+ if not str(now).endswith("000"):
+ index = Index([now])
formatted = index.format()
expected = [str(index[0])]
self.assertEqual(formatted, expected)
| xref #14626 | https://api.github.com/repos/pandas-dev/pandas/pulls/14851 | 2016-12-10T14:52:35Z | 2016-12-10T15:31:53Z | 2016-12-10T15:31:53Z | 2016-12-10T15:31:53Z |
DOC: astype now takes dict mapping col names to datatypes (#14761) | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index e7db814483905..2e8abe0a5c329 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -1757,6 +1757,7 @@ then the more *general* one will be used as the result of the operation.
# conversion of dtypes
df3.astype('float32').dtypes
+
Convert a subset of columns to a specified type using :meth:`~DataFrame.astype`
.. ipython:: python
@@ -1766,6 +1767,17 @@ Convert a subset of columns to a specified type using :meth:`~DataFrame.astype`
dft
dft.dtypes
+.. versionadded:: 0.19.0
+
+Convert certain columns to a specific dtype by passing a dict to :meth:`~DataFrame.astype`
+
+.. ipython:: python
+
+ dft1 = pd.DataFrame({'a': [1,0,1], 'b': [4,5,6], 'c': [7, 8, 9]})
+ dft1 = dft1.astype({'a': np.bool, 'c': np.float64})
+ dft1
+ dft1.dtypes
+
.. note::
When trying to convert a subset of columns to a specified type using :meth:`~DataFrame.astype` and :meth:`~DataFrame.loc`, upcasting occurs.
| - [x] closes https://github.com/pandas-dev/pandas/issues/14761
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master | flake8 --diff``
- [x] whatsnew entry
Updating documentation to reflect change | https://api.github.com/repos/pandas-dev/pandas/pulls/14837 | 2016-12-09T12:06:00Z | 2016-12-22T13:37:19Z | 2016-12-22T13:37:19Z | 2016-12-22T15:18:51Z |
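The dict form of `astype` documented above, sketched against a recent pandas (plain `bool` is used instead of the since-removed `np.bool` alias):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 0, 1], 'b': [4, 5, 6], 'c': [7, 8, 9]})

# The dict maps column names to target dtypes; unnamed columns
# such as 'b' keep their original dtype.
out = df.astype({'a': bool, 'c': np.float64})
print(out.dtypes)
```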
BF(TST): use = (native) instead of < (little endian) for target data types | diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index 6eb73876c11dd..b6d1d4bb09f56 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -1453,7 +1453,7 @@ def test_as_recarray(self):
FutureWarning, check_stacklevel=False):
data = 'a,b\n1,a\n2,b'
expected = np.array([(1, 'a'), (2, 'b')],
- dtype=[('a', '<i8'), ('b', 'O')])
+ dtype=[('a', '=i8'), ('b', 'O')])
out = self.read_csv(StringIO(data), as_recarray=True)
tm.assert_numpy_array_equal(out, expected)
@@ -1462,7 +1462,7 @@ def test_as_recarray(self):
FutureWarning, check_stacklevel=False):
data = 'a,b\n1,a\n2,b'
expected = np.array([(1, 'a'), (2, 'b')],
- dtype=[('a', '<i8'), ('b', 'O')])
+ dtype=[('a', '=i8'), ('b', 'O')])
out = self.read_csv(StringIO(data), as_recarray=True, index_col=0)
tm.assert_numpy_array_equal(out, expected)
@@ -1471,7 +1471,7 @@ def test_as_recarray(self):
FutureWarning, check_stacklevel=False):
data = '1,a\n2,b'
expected = np.array([(1, 'a'), (2, 'b')],
- dtype=[('a', '<i8'), ('b', 'O')])
+ dtype=[('a', '=i8'), ('b', 'O')])
out = self.read_csv(StringIO(data), names=['a', 'b'],
header=None, as_recarray=True)
tm.assert_numpy_array_equal(out, expected)
@@ -1482,7 +1482,7 @@ def test_as_recarray(self):
FutureWarning, check_stacklevel=False):
data = 'b,a\n1,a\n2,b'
expected = np.array([(1, 'a'), (2, 'b')],
- dtype=[('b', '<i8'), ('a', 'O')])
+ dtype=[('b', '=i8'), ('a', 'O')])
out = self.read_csv(StringIO(data), as_recarray=True)
tm.assert_numpy_array_equal(out, expected)
@@ -1490,7 +1490,7 @@ def test_as_recarray(self):
with tm.assert_produces_warning(
FutureWarning, check_stacklevel=False):
data = 'a\n1'
- expected = np.array([(1,)], dtype=[('a', '<i8')])
+ expected = np.array([(1,)], dtype=[('a', '=i8')])
out = self.read_csv(StringIO(data), as_recarray=True, squeeze=True)
tm.assert_numpy_array_equal(out, expected)
@@ -1500,7 +1500,7 @@ def test_as_recarray(self):
data = 'a,b\n1,a\n2,b'
conv = lambda x: int(x) + 1
expected = np.array([(2, 'a'), (3, 'b')],
- dtype=[('a', '<i8'), ('b', 'O')])
+ dtype=[('a', '=i8'), ('b', 'O')])
out = self.read_csv(StringIO(data), as_recarray=True,
converters={'a': conv})
tm.assert_numpy_array_equal(out, expected)
@@ -1509,7 +1509,7 @@ def test_as_recarray(self):
with tm.assert_produces_warning(
FutureWarning, check_stacklevel=False):
data = 'a,b\n1,a\n2,b'
- expected = np.array([(1,), (2,)], dtype=[('a', '<i8')])
+ expected = np.array([(1,), (2,)], dtype=[('a', '=i8')])
out = self.read_csv(StringIO(data), as_recarray=True,
usecols=['a'])
tm.assert_numpy_array_equal(out, expected)
diff --git a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py
index ed441f2f85572..b9f999a6c6ffe 100644
--- a/pandas/tests/series/test_datetime_values.py
+++ b/pandas/tests/series/test_datetime_values.py
@@ -326,7 +326,7 @@ def test_strftime(self):
period_index = period_range('20150301', periods=5)
result = period_index.strftime("%Y/%m/%d")
expected = np.array(['2015/03/01', '2015/03/02', '2015/03/03',
- '2015/03/04', '2015/03/05'], dtype='<U10')
+ '2015/03/04', '2015/03/05'], dtype='=U10')
self.assert_numpy_array_equal(result, expected)
s = Series([datetime(2013, 1, 1, 2, 32, 59), datetime(2013, 1, 2, 14,
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 67b203d011d1a..2c3e5ca126209 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -2991,7 +2991,7 @@ def test_map(self):
f = lambda x: x.strftime('%Y%m%d')
result = rng.map(f)
- exp = np.array([f(x) for x in rng], dtype='<U8')
+ exp = np.array([f(x) for x in rng], dtype='=U8')
tm.assert_almost_equal(result, exp)
def test_iteration_preserves_tz(self):
| - [ ] closes #14830
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master | flake8 --diff``
- [ ] whatsnew entry
It is still WIP -- just thought I'd submit a PR to see how Travis etc. reacts on typical platforms | https://api.github.com/repos/pandas-dev/pandas/pulls/14832 | 2016-12-08T17:59:51Z | 2016-12-09T19:34:32Z | 2016-12-09T19:34:32Z | 2017-01-12T14:44:57Z |
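The point of the `'<i8'` → `'=i8'` change can be shown in isolation: `'<'` requests little-endian byte order explicitly, while `'='` requests the machine's native order, so the two only agree on little-endian hosts.

```python
import numpy as np

# Native-order int64: equal to np.int64 on every platform, whereas
# '<i8' would differ from the parsed output on big-endian machines.
native = np.dtype('=i8')
print(native == np.dtype(np.int64))  # True regardless of endianness
```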
frame_methods benchmarking sum instead of mean | diff --git a/asv_bench/benchmarks/frame_methods.py b/asv_bench/benchmarks/frame_methods.py
index 3daffb9d3a1cc..8cbf5b8d97b70 100644
--- a/asv_bench/benchmarks/frame_methods.py
+++ b/asv_bench/benchmarks/frame_methods.py
@@ -299,7 +299,7 @@ def time_apply_axis_1(self):
self.df.apply((lambda x: (x + 1)), axis=1)
def time_apply_lambda_mean(self):
- self.df.apply((lambda x: x.sum()))
+ self.df.apply((lambda x: x.mean()))
def time_apply_np_mean(self):
self.df.apply(np.mean)
| Not sure if the function name or the method call is incorrect, but it looks like this should be benchmarking `mean` instead of `sum`
| https://api.github.com/repos/pandas-dev/pandas/pulls/14824 | 2016-12-08T05:49:12Z | 2016-12-10T21:31:12Z | 2016-12-10T21:31:12Z | 2016-12-21T04:16:25Z |
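A quick sanity check of the two spellings being benchmarked (illustrative data, recent pandas assumed) -- the lambda and the numpy function should produce the same per-column means:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [4.0, 5.0, 6.0]})

# The benchmark's two variants: a Python lambda vs. np.mean directly.
res_lambda = df.apply(lambda x: x.mean())
res_np = df.apply(np.mean)
print(res_lambda.equals(res_np))
```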
Fixed KDE Plot to drop the missing values | diff --git a/doc/source/whatsnew/v0.19.2.txt b/doc/source/whatsnew/v0.19.2.txt
index 231297df3fb8f..caf195afc4966 100644
--- a/doc/source/whatsnew/v0.19.2.txt
+++ b/doc/source/whatsnew/v0.19.2.txt
@@ -79,4 +79,6 @@ Bug Fixes
- Explicit check in ``to_stata`` and ``StataWriter`` for out-of-range values when writing doubles (:issue:`14618`)
+- Bug in ``.plot(kind='kde')`` which did not drop missing values to generate the KDE Plot, instead generating an empty plot. (:issue:`14821`)
+
- Bug in ``unstack()`` if called with a list of column(s) as an argument, regardless of the dtypes of all columns, they get coerced to ``object`` (:issue:`11847`)
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index e752197c6ad77..73119fec88198 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -569,7 +569,11 @@ def test_kde_missing_vals(self):
_skip_if_no_scipy_gaussian_kde()
s = Series(np.random.uniform(size=50))
s[0] = np.nan
- _check_plot_works(s.plot.kde)
+ axes = _check_plot_works(s.plot.kde)
+ # check if the values have any missing values
+ # GH14821
+ self.assertTrue(any(~np.isnan(axes.lines[0].get_xdata())),
+ msg='Missing Values not dropped')
@slow
def test_hist_kwargs(self):
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 21e8b64a3656a..bd9933b12b580 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -2153,9 +2153,10 @@ def _args_adjust(self):
def _get_ind(self, y):
if self.ind is None:
- sample_range = max(y) - min(y)
- ind = np.linspace(min(y) - 0.5 * sample_range,
- max(y) + 0.5 * sample_range, 1000)
+ # np.nanmax() and np.nanmin() ignores the missing values
+ sample_range = np.nanmax(y) - np.nanmin(y)
+ ind = np.linspace(np.nanmin(y) - 0.5 * sample_range,
+ np.nanmax(y) + 0.5 * sample_range, 1000)
else:
ind = self.ind
return ind
| - [x] closes https://github.com/pandas-dev/pandas/issues/14821
- [x] tests added / passed
- [ ] passes ``git diff upstream/master | flake8 --diff``
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/14820 | 2016-12-08T01:38:48Z | 2016-12-15T15:40:38Z | 2016-12-15T15:40:38Z | 2016-12-24T02:07:26Z |
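The core of the fix -- building the KDE evaluation grid with NaN-aware reductions -- can be sketched without matplotlib or scipy (sample values here are illustrative):

```python
import numpy as np

y = np.array([0.2, np.nan, 0.8, 0.5])

# np.nanmin/np.nanmax skip missing values, so the evaluation grid
# stays finite even when the input series contains NaN; plain
# min()/max() would propagate NaN and yield an empty-looking plot.
sample_range = np.nanmax(y) - np.nanmin(y)
ind = np.linspace(np.nanmin(y) - 0.5 * sample_range,
                  np.nanmax(y) + 0.5 * sample_range, 1000)
print(np.isfinite(ind).all())
```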
Changed pandas-qt python2/3 friendly qtpandas. | diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index d23e0ca59254d..b96660be97d71 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -111,5 +111,5 @@ Visualizing Data in Qt applications
-----------------------------------
There is no support for such visualization in pandas. However, the external
-package `pandas-qt <https://github.com/datalyze-solutions/pandas-qt>`_ does
+package `qtpandas <https://github.com/draperjames/qtpandas>`_ does
provide this functionality.
| Just changed the link from the abandoned, Python 2-only pandas-qt repository to the new, functional, Python 2/3-friendly qtpandas. | https://api.github.com/repos/pandas-dev/pandas/pulls/14818 | 2016-12-07T20:38:44Z | 2016-12-29T14:56:29Z | 2016-12-29T14:56:29Z | 2016-12-29T22:27:38Z |
BUG: Prevent addition overflow with TimedeltaIndex | diff --git a/asv_bench/benchmarks/algorithms.py b/asv_bench/benchmarks/algorithms.py
index 20d149493951f..fe657936c403e 100644
--- a/asv_bench/benchmarks/algorithms.py
+++ b/asv_bench/benchmarks/algorithms.py
@@ -18,7 +18,7 @@ def setup(self):
self.float = pd.Float64Index(np.random.randn(N).repeat(5))
# Convenience naming.
- self.checked_add = pd.core.nanops._checked_add_with_arr
+ self.checked_add = pd.core.algorithms.checked_add_with_arr
self.arr = np.arange(1000000)
self.arrpos = np.arange(1000000)
@@ -26,6 +26,9 @@ def setup(self):
self.arrmixed = np.array([1, -1]).repeat(500000)
self.strings = tm.makeStringIndex(100000)
+ self.arr_nan = np.random.choice([True, False], size=1000000)
+ self.arrmixed_nan = np.random.choice([True, False], size=1000000)
+
# match
self.uniques = tm.makeStringIndex(1000).values
self.all = self.uniques.repeat(10)
@@ -69,6 +72,16 @@ def time_add_overflow_neg_arr(self):
def time_add_overflow_mixed_arr(self):
self.checked_add(self.arr, self.arrmixed)
+ def time_add_overflow_first_arg_nan(self):
+ self.checked_add(self.arr, self.arrmixed, arr_mask=self.arr_nan)
+
+ def time_add_overflow_second_arg_nan(self):
+ self.checked_add(self.arr, self.arrmixed, b_mask=self.arrmixed_nan)
+
+ def time_add_overflow_both_arg_nan(self):
+ self.checked_add(self.arr, self.arrmixed, arr_mask=self.arr_nan,
+ b_mask=self.arrmixed_nan)
+
class Hashing(object):
goal_time = 0.2
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index e264fb15f3e67..8c486370c5eed 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -213,6 +213,7 @@ Performance Improvements
Bug Fixes
~~~~~~~~~
+- Bug in ``TimedeltaIndex`` addition where overflow was being allowed without error (:issue:`14816`)
- Bug in ``astype()`` where ``inf`` values were incorrectly converted to integers. Now raises error now with ``astype()`` for Series and DataFrames (:issue:`14265`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 0d4d4143e6b9b..b2702ea0acca7 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -27,6 +27,7 @@
_ensure_float64,
_ensure_int64,
is_list_like)
+from pandas.compat.numpy import _np_version_under1p10
from pandas.types.missing import isnull
import pandas.core.common as com
@@ -550,6 +551,95 @@ def rank(values, axis=0, method='average', na_option='keep',
return ranks
+
+def checked_add_with_arr(arr, b, arr_mask=None, b_mask=None):
+ """
+ Perform array addition that checks for underflow and overflow.
+
+ Performs the addition of an int64 array and an int64 integer (or array)
+ but checks that they do not result in overflow first. For elements that
+ are indicated to be NaN, whether or not there is overflow for that element
+ is automatically ignored.
+
+ Parameters
+ ----------
+ arr : array addend.
+ b : array or scalar addend.
+ arr_mask : boolean array or None
+ array indicating which elements to exclude from checking
+ b_mask : boolean array or boolean or None
+ array or scalar indicating which element(s) to exclude from checking
+
+ Returns
+ -------
+ sum : An array for elements x + b for each element x in arr if b is
+ a scalar or an array for elements x + y for each element pair
+ (x, y) in (arr, b).
+
+ Raises
+ ------
+ OverflowError if any x + y exceeds the maximum or minimum int64 value.
+ """
+ def _broadcast(arr_or_scalar, shape):
+ """
+ Helper function to broadcast arrays / scalars to the desired shape.
+ """
+ if _np_version_under1p10:
+ if lib.isscalar(arr_or_scalar):
+ out = np.empty(shape)
+ out.fill(arr_or_scalar)
+ else:
+ out = arr_or_scalar
+ else:
+ out = np.broadcast_to(arr_or_scalar, shape)
+ return out
+
+ # For performance reasons, we broadcast 'b' to the new array 'b2'
+ # so that it has the same size as 'arr'.
+ b2 = _broadcast(b, arr.shape)
+ if b_mask is not None:
+ # We do the same broadcasting for b_mask as well.
+ b2_mask = _broadcast(b_mask, arr.shape)
+ else:
+ b2_mask = None
+
+ # For elements that are NaN, regardless of their value, we should
+ # ignore whether they overflow or not when doing the checked add.
+ if arr_mask is not None and b2_mask is not None:
+ not_nan = np.logical_not(arr_mask | b2_mask)
+ elif arr_mask is not None:
+ not_nan = np.logical_not(arr_mask)
+ elif b_mask is not None:
+ not_nan = np.logical_not(b2_mask)
+ else:
+ not_nan = np.empty(arr.shape, dtype=bool)
+ not_nan.fill(True)
+
+ # gh-14324: For each element in 'arr' and its corresponding element
+ # in 'b2', we check the sign of the element in 'b2'. If it is positive,
+ # we then check whether its sum with the element in 'arr' exceeds
+ # np.iinfo(np.int64).max. If so, we have an overflow error. If it
+ # it is negative, we then check whether its sum with the element in
+ # 'arr' exceeds np.iinfo(np.int64).min. If so, we have an overflow
+ # error as well.
+ mask1 = b2 > 0
+ mask2 = b2 < 0
+
+ if not mask1.any():
+ to_raise = ((np.iinfo(np.int64).min - b2 > arr) & not_nan).any()
+ elif not mask2.any():
+ to_raise = ((np.iinfo(np.int64).max - b2 < arr) & not_nan).any()
+ else:
+ to_raise = (((np.iinfo(np.int64).max -
+ b2[mask1] < arr[mask1]) & not_nan[mask1]).any() or
+ ((np.iinfo(np.int64).min -
+ b2[mask2] > arr[mask2]) & not_nan[mask2]).any())
+
+ if to_raise:
+ raise OverflowError("Overflow in int64 addition")
+ return arr + b
+
+
_rank1d_functions = {
'float64': algos.rank_1d_float64,
'int64': algos.rank_1d_int64,
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index d7d68ad536be5..a76e348b7dee2 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -11,7 +11,6 @@
import pandas.hashtable as _hash
from pandas import compat, lib, algos, tslib
-from pandas.compat.numpy import _np_version_under1p10
from pandas.types.common import (_ensure_int64, _ensure_object,
_ensure_float64, _get_dtype,
is_float, is_scalar,
@@ -810,57 +809,3 @@ def unique1d(values):
table = _hash.PyObjectHashTable(len(values))
uniques = table.unique(_ensure_object(values))
return uniques
-
-
-def _checked_add_with_arr(arr, b):
- """
- Performs the addition of an int64 array and an int64 integer (or array)
- but checks that they do not result in overflow first.
-
- Parameters
- ----------
- arr : array addend.
- b : array or scalar addend.
-
- Returns
- -------
- sum : An array for elements x + b for each element x in arr if b is
- a scalar or an array for elements x + y for each element pair
- (x, y) in (arr, b).
-
- Raises
- ------
- OverflowError if any x + y exceeds the maximum or minimum int64 value.
- """
- # For performance reasons, we broadcast 'b' to the new array 'b2'
- # so that it has the same size as 'arr'.
- if _np_version_under1p10:
- if lib.isscalar(b):
- b2 = np.empty(arr.shape)
- b2.fill(b)
- else:
- b2 = b
- else:
- b2 = np.broadcast_to(b, arr.shape)
-
- # gh-14324: For each element in 'arr' and its corresponding element
- # in 'b2', we check the sign of the element in 'b2'. If it is positive,
- # we then check whether its sum with the element in 'arr' exceeds
- # np.iinfo(np.int64).max. If so, we have an overflow error. If it
- # it is negative, we then check whether its sum with the element in
- # 'arr' exceeds np.iinfo(np.int64).min. If so, we have an overflow
- # error as well.
- mask1 = b2 > 0
- mask2 = b2 < 0
-
- if not mask1.any():
- to_raise = (np.iinfo(np.int64).min - b2 > arr).any()
- elif not mask2.any():
- to_raise = (np.iinfo(np.int64).max - b2 < arr).any()
- else:
- to_raise = ((np.iinfo(np.int64).max - b2[mask1] < arr[mask1]).any() or
- (np.iinfo(np.int64).min - b2[mask2] > arr[mask2]).any())
-
- if to_raise:
- raise OverflowError("Overflow in int64 addition")
- return arr + b
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index f89f41abd0d35..d0c909b9c1b30 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -1129,6 +1129,55 @@ def test_ensure_platform_int():
assert (result is arr)
+def test_int64_add_overflow():
+ # see gh-14068
+ msg = "Overflow in int64 addition"
+ m = np.iinfo(np.int64).max
+ n = np.iinfo(np.int64).min
+
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ algos.checked_add_with_arr(np.array([m, m]), m)
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ algos.checked_add_with_arr(np.array([m, m]), np.array([m, m]))
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ algos.checked_add_with_arr(np.array([n, n]), n)
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ algos.checked_add_with_arr(np.array([n, n]), np.array([n, n]))
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ algos.checked_add_with_arr(np.array([m, n]), np.array([n, n]))
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ algos.checked_add_with_arr(np.array([m, m]), np.array([m, m]),
+ arr_mask=np.array([False, True]))
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ algos.checked_add_with_arr(np.array([m, m]), np.array([m, m]),
+ b_mask=np.array([False, True]))
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ algos.checked_add_with_arr(np.array([m, m]), np.array([m, m]),
+ arr_mask=np.array([False, True]),
+ b_mask=np.array([False, True]))
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ with tm.assert_produces_warning(RuntimeWarning):
+ algos.checked_add_with_arr(np.array([m, m]),
+ np.array([np.nan, m]))
+
+ # Check that the nan boolean arrays override whether or not
+ # the addition overflows. We don't check the result but just
+ # the fact that an OverflowError is not raised.
+ with tm.assertRaises(AssertionError):
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ algos.checked_add_with_arr(np.array([m, m]), np.array([m, m]),
+ arr_mask=np.array([True, True]))
+ with tm.assertRaises(AssertionError):
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ algos.checked_add_with_arr(np.array([m, m]), np.array([m, m]),
+ b_mask=np.array([True, True]))
+ with tm.assertRaises(AssertionError):
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ algos.checked_add_with_arr(np.array([m, m]), np.array([m, m]),
+ arr_mask=np.array([True, False]),
+ b_mask=np.array([False, True]))
+
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index be634228b1b6e..dd3a49de55d73 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -1002,28 +1002,6 @@ def prng(self):
return np.random.RandomState(1234)
-def test_int64_add_overflow():
- # see gh-14068
- msg = "Overflow in int64 addition"
- m = np.iinfo(np.int64).max
- n = np.iinfo(np.int64).min
-
- with tm.assertRaisesRegexp(OverflowError, msg):
- nanops._checked_add_with_arr(np.array([m, m]), m)
- with tm.assertRaisesRegexp(OverflowError, msg):
- nanops._checked_add_with_arr(np.array([m, m]), np.array([m, m]))
- with tm.assertRaisesRegexp(OverflowError, msg):
- nanops._checked_add_with_arr(np.array([n, n]), n)
- with tm.assertRaisesRegexp(OverflowError, msg):
- nanops._checked_add_with_arr(np.array([n, n]), np.array([n, n]))
- with tm.assertRaisesRegexp(OverflowError, msg):
- nanops._checked_add_with_arr(np.array([m, n]), np.array([n, n]))
- with tm.assertRaisesRegexp(OverflowError, msg):
- with tm.assert_produces_warning(RuntimeWarning):
- nanops._checked_add_with_arr(np.array([m, m]),
- np.array([np.nan, m]))
-
-
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure', '-s'
diff --git a/pandas/tseries/base.py b/pandas/tseries/base.py
index 8e24f430108b3..63e56e09e91fe 100644
--- a/pandas/tseries/base.py
+++ b/pandas/tseries/base.py
@@ -16,6 +16,7 @@
ABCPeriodIndex, ABCIndexClass)
from pandas.types.missing import isnull
from pandas.core import common as com, algorithms
+from pandas.core.algorithms import checked_add_with_arr
from pandas.core.common import AbstractMethodError
import pandas.formats.printing as printing
@@ -688,7 +689,8 @@ def _add_delta_td(self, other):
# return the i8 result view
inc = tslib._delta_to_nanoseconds(other)
- new_values = (self.asi8 + inc).view('i8')
+ new_values = checked_add_with_arr(self.asi8, inc,
+ arr_mask=self._isnan).view('i8')
if self.hasnans:
new_values[self._isnan] = tslib.iNaT
return new_values.view('i8')
@@ -703,7 +705,9 @@ def _add_delta_tdi(self, other):
self_i8 = self.asi8
other_i8 = other.asi8
- new_values = self_i8 + other_i8
+ new_values = checked_add_with_arr(self_i8, other_i8,
+ arr_mask=self._isnan,
+ b_mask=other._isnan)
if self.hasnans or other.hasnans:
mask = (self._isnan) | (other._isnan)
new_values[mask] = tslib.iNaT
diff --git a/pandas/tseries/tdi.py b/pandas/tseries/tdi.py
index 7e77d8baf3b2c..1585aac0c8ead 100644
--- a/pandas/tseries/tdi.py
+++ b/pandas/tseries/tdi.py
@@ -20,8 +20,8 @@
import pandas.compat as compat
from pandas.compat import u
from pandas.tseries.frequencies import to_offset
+from pandas.core.algorithms import checked_add_with_arr
from pandas.core.base import _shared_docs
-from pandas.core.nanops import _checked_add_with_arr
from pandas.indexes.base import _index_shared_docs
import pandas.core.common as com
import pandas.types.concat as _concat
@@ -347,7 +347,7 @@ def _add_datelike(self, other):
else:
other = Timestamp(other)
i8 = self.asi8
- result = _checked_add_with_arr(i8, other.value)
+ result = checked_add_with_arr(i8, other.value)
result = self._maybe_mask_results(result, fill_value=tslib.iNaT)
return DatetimeIndex(result, name=self.name, copy=False)
diff --git a/pandas/tseries/tests/test_timedeltas.py b/pandas/tseries/tests/test_timedeltas.py
index ca957ca0394d1..fc95b17b9b52d 100644
--- a/pandas/tseries/tests/test_timedeltas.py
+++ b/pandas/tseries/tests/test_timedeltas.py
@@ -1958,11 +1958,34 @@ def test_add_overflow(self):
with tm.assertRaisesRegexp(OverflowError, msg):
Timestamp('2000') + to_timedelta(106580, 'D')
+ _NaT = int(pd.NaT) + 1
msg = "Overflow in int64 addition"
with tm.assertRaisesRegexp(OverflowError, msg):
to_timedelta([106580], 'D') + Timestamp('2000')
with tm.assertRaisesRegexp(OverflowError, msg):
Timestamp('2000') + to_timedelta([106580], 'D')
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ to_timedelta([_NaT]) - Timedelta('1 days')
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ to_timedelta(['5 days', _NaT]) - Timedelta('1 days')
+ with tm.assertRaisesRegexp(OverflowError, msg):
+ (to_timedelta([_NaT, '5 days', '1 hours']) -
+ to_timedelta(['7 seconds', _NaT, '4 hours']))
+
+ # These should not overflow!
+ exp = TimedeltaIndex([pd.NaT])
+ result = to_timedelta([pd.NaT]) - Timedelta('1 days')
+ tm.assert_index_equal(result, exp)
+
+ exp = TimedeltaIndex(['4 days', pd.NaT])
+ result = to_timedelta(['5 days', pd.NaT]) - Timedelta('1 days')
+ tm.assert_index_equal(result, exp)
+
+ exp = TimedeltaIndex([pd.NaT, pd.NaT, '5 hours'])
+ result = (to_timedelta([pd.NaT, '5 days', '1 hours']) +
+ to_timedelta(['7 seconds', pd.NaT, '4 hours']))
+ tm.assert_index_equal(result, exp)
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
| Expands checked-add array addition introduced in #14237 to include all other addition cases (i.e.
`TimedeltaIndex` and `Timedelta`). Follow-up to #14453. | https://api.github.com/repos/pandas-dev/pandas/pulls/14816 | 2016-12-07T15:50:04Z | 2016-12-17T23:25:08Z | 2016-12-17T23:25:08Z | 2016-12-24T02:15:57Z |
COMPAT: numpy compat with 1-ndim object array compat and broadcasting | diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 96b447cda4bc4..7c5ad04cc90b0 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1176,6 +1176,13 @@ def na_op(x, y):
yrav = y.ravel()
mask = notnull(xrav) & notnull(yrav)
xrav = xrav[mask]
+
+ # we may need to manually
+ # broadcast a 1 element array
+ if yrav.shape != mask.shape:
+ yrav = np.empty(mask.shape, dtype=yrav.dtype)
+ yrav.fill(yrav.item())
+
yrav = yrav[mask]
if np.prod(xrav.shape) and np.prod(yrav.shape):
with np.errstate(all='ignore'):
| xref #14808 | https://api.github.com/repos/pandas-dev/pandas/pulls/14809 | 2016-12-06T11:58:47Z | 2016-12-06T13:00:30Z | 2016-12-06T13:00:30Z | 2016-12-06T13:00:30Z |
BUG: fix hash collisions for from int overflow | diff --git a/pandas/src/hash.pyx b/pandas/src/hash.pyx
index a393e0df96954..06ed947808e39 100644
--- a/pandas/src/hash.pyx
+++ b/pandas/src/hash.pyx
@@ -40,7 +40,8 @@ def hash_object_array(ndarray[object] arr, object key, object encoding='utf8'):
Py_ssize_t i, l, n
ndarray[uint64_t] result
bytes data, k
- uint8_t *kb, *lens
+ uint8_t *kb
+ uint64_t *lens
char **vecs, *cdata
object val
@@ -55,7 +56,7 @@ def hash_object_array(ndarray[object] arr, object key, object encoding='utf8'):
# create an array of bytes
vecs = <char **> malloc(n * sizeof(char *))
- lens = <uint8_t*> malloc(n * sizeof(uint8_t))
+ lens = <uint64_t*> malloc(n * sizeof(uint64_t))
cdef list datas = []
for i in range(n):
diff --git a/pandas/tools/tests/test_hashing.py b/pandas/tools/tests/test_hashing.py
index 4e05ae7007c80..6e5f30fb7a52d 100644
--- a/pandas/tools/tests/test_hashing.py
+++ b/pandas/tools/tests/test_hashing.py
@@ -142,7 +142,36 @@ def test_alternate_encoding(self):
obj = Series(list('abc'))
self.check_equal(obj, encoding='ascii')
- def test_long_strings(self):
-
- obj = Index(tm.rands_array(nchars=10000, size=100))
- self.check_equal(obj)
+ def test_same_len_hash_collisions(self):
+
+ for l in range(8):
+ length = 2**(l + 8) + 1
+ s = tm.rands_array(length, 2)
+ result = hash_array(s, 'utf8')
+ self.assertFalse(result[0] == result[1])
+
+ for l in range(8):
+ length = 2**(l + 8)
+ s = tm.rands_array(length, 2)
+ result = hash_array(s, 'utf8')
+ self.assertFalse(result[0] == result[1])
+
+ def test_hash_collisions(self):
+
+ # hash collisions are bad
+ # https://github.com/pandas-dev/pandas/issues/14711#issuecomment-264885726
+ L = ['Ingrid-9Z9fKIZmkO7i7Cn51Li34pJm44fgX6DYGBNj3VPlOH50m7HnBlPxfIwFMrcNJNMP6PSgLmwWnInciMWrCSAlLEvt7JkJl4IxiMrVbXSa8ZQoVaq5xoQPjltuJEfwdNlO6jo8qRRHvD8sBEBMQASrRa6TsdaPTPCBo3nwIBpE7YzzmyH0vMBhjQZLx1aCT7faSEx7PgFxQhHdKFWROcysamgy9iVj8DO2Fmwg1NNl93rIAqC3mdqfrCxrzfvIY8aJdzin2cHVzy3QUJxZgHvtUtOLxoqnUHsYbNTeq0xcLXpTZEZCxD4PGubIuCNf32c33M7HFsnjWSEjE2yVdWKhmSVodyF8hFYVmhYnMCztQnJrt3O8ZvVRXd5IKwlLexiSp4h888w7SzAIcKgc3g5XQJf6MlSMftDXm9lIsE1mJNiJEv6uY6pgvC3fUPhatlR5JPpVAHNSbSEE73MBzJrhCAbOLXQumyOXigZuPoME7QgJcBalliQol7YZ9', # noqa
+ 'Tim-b9MddTxOWW2AT1Py6vtVbZwGAmYCjbp89p8mxsiFoVX4FyDOF3wFiAkyQTUgwg9sVqVYOZo09Dh1AzhFHbgij52ylF0SEwgzjzHH8TGY8Lypart4p4onnDoDvVMBa0kdthVGKl6K0BDVGzyOXPXKpmnMF1H6rJzqHJ0HywfwS4XYpVwlAkoeNsiicHkJUFdUAhG229INzvIAiJuAHeJDUoyO4DCBqtoZ5TDend6TK7Y914yHlfH3g1WZu5LksKv68VQHJriWFYusW5e6ZZ6dKaMjTwEGuRgdT66iU5nqWTHRH8WSzpXoCFwGcTOwyuqPSe0fTe21DVtJn1FKj9F9nEnR9xOvJUO7E0piCIF4Ad9yAIDY4DBimpsTfKXCu1vdHpKYerzbndfuFe5AhfMduLYZJi5iAw8qKSwR5h86ttXV0Mc0QmXz8dsRvDgxjXSmupPxBggdlqUlC828hXiTPD7am0yETBV0F3bEtvPiNJfremszcV8NcqAoARMe'] # noqa
+
+ # these should be different!
+ result1 = hash_array(np.asarray(L[0:1], dtype=object), 'utf8')
+ expected1 = np.array([14963968704024874985], dtype=np.uint64)
+ self.assert_numpy_array_equal(result1, expected1)
+
+ result2 = hash_array(np.asarray(L[1:2], dtype=object), 'utf8')
+ expected2 = np.array([16428432627716348016], dtype=np.uint64)
+ self.assert_numpy_array_equal(result2, expected2)
+
+ result = hash_array(np.asarray(L, dtype=object), 'utf8')
+ self.assert_numpy_array_equal(
+ result, np.concatenate([expected1, expected2], axis=0))
| hash collisions brought up: https://github.com/pandas-dev/pandas/issues/14711#issuecomment-264885726
closes #14804
| https://api.github.com/repos/pandas-dev/pandas/pulls/14805 | 2016-12-06T01:18:20Z | 2016-12-06T13:15:08Z | 2016-12-06T13:15:08Z | 2016-12-06T13:15:08Z |
DOC: Fix grammar and formatting typos | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0053135e1fd85..0d4bcd781cf74 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2483,7 +2483,7 @@ def assign(self, **kwargs):
Notes
-----
Since ``kwargs`` is a dictionary, the order of your
- arguments may not be preserved. The make things predicatable,
+ arguments may not be preserved. To make things predicatable,
the columns are inserted in alphabetical order, at the end of
your DataFrame. Assigning multiple columns within the same
``assign`` is possible, but you cannot reference other columns
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b7e43d6fe01e8..64e3d60e1fe14 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3635,14 +3635,17 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
require that you also specify an `order` (int),
e.g. df.interpolate(method='polynomial', order=4).
These use the actual numerical values of the index.
- * 'krogh', 'piecewise_polynomial', 'spline', 'pchip' and 'akima' are all
- wrappers around the scipy interpolation methods of similar
- names. These use the actual numerical values of the index. See
- the scipy documentation for more on their behavior
- `here <http://docs.scipy.org/doc/scipy/reference/interpolate.html#univariate-interpolation>`__ # noqa
- `and here <http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html>`__ # noqa
+ * 'krogh', 'piecewise_polynomial', 'spline', 'pchip' and 'akima'
+ are all wrappers around the scipy interpolation methods of
+ similar names. These use the actual numerical values of the
+ index. For more information on their behavior, see the
+ `scipy documentation
+ <http://docs.scipy.org/doc/scipy/reference/interpolate.html#univariate-interpolation>`__
+ and `tutorial documentation
+ <http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html>`__
* 'from_derivatives' refers to BPoly.from_derivatives which
- replaces 'piecewise_polynomial' interpolation method in scipy 0.18
+ replaces 'piecewise_polynomial' interpolation method in
+ scipy 0.18
.. versionadded:: 0.18.1
@@ -3656,7 +3659,7 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
* 1: fill row-by-row
limit : int, default None.
Maximum number of consecutive NaNs to fill.
- limit_direction : {'forward', 'backward', 'both'}, defaults to 'forward'
+ limit_direction : {'forward', 'backward', 'both'}, default 'forward'
If limit is specified, consecutive NaNs will be filled in this
direction.
@@ -4159,6 +4162,9 @@ def resample(self, rule, how=None, axis=0, fill_method=None, closed=None,
.. versionadded:: 0.19.0
+ Notes
+ -----
+
To learn more about the offset strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
@@ -4346,7 +4352,7 @@ def rank(self, axis=0, method='average', numeric_only=None,
Parameters
----------
- axis: {0 or 'index', 1 or 'columns'}, default 0
+ axis : {0 or 'index', 1 or 'columns'}, default 0
index to direct ranking
method : {'average', 'min', 'max', 'first', 'dense'}
* average: average rank of group
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 7c5ad04cc90b0..80de3cd85d4db 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1006,7 +1006,7 @@ def wrapper(self, other):
Parameters
----------
-other: Series or scalar value
+other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill missing (NaN) values with this value. If both Series are
missing, the result will be missing
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 958cf183578dd..7018865e5b3ec 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2033,9 +2033,9 @@ def reorder_levels(self, order):
Parameters
----------
- order: list of int representing new level order.
+ order : list of int representing new level order.
(reference level by number or key)
- axis: where to reorder levels
+ axis : where to reorder levels
Returns
-------
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index d46dc4d355b4c..e4cf896a89f57 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -2893,8 +2893,9 @@ def hist_frame(data, column=None, by=None, grid=True, xlabelsize=None,
invisible
figsize : tuple
The size of the figure to create in inches by default
- layout: (optional) a tuple (rows, columns) for the layout of the histograms
- bins: integer, default 10
+ layout : tuple, optional
+ Tuple of (rows, columns) for the layout of the histograms
+ bins : integer, default 10
Number of histogram bins to be used
kwds : other plotting keyword arguments
To be passed to hist function
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 0824072cc383f..3edf75fbb82ae 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -2000,7 +2000,7 @@ def date_range(start=None, end=None, periods=None, freq='D', tz=None,
Frequency strings can have multiples, e.g. '5H'
tz : string or None
Time zone name for returning localized DatetimeIndex, for example
- Asia/Hong_Kong
+ Asia/Hong_Kong
normalize : bool, default False
Normalize start/end dates to midnight before generating date range
name : str, default None
| One grammatical typo, and the rest address small formatting typos when rendering online. | https://api.github.com/repos/pandas-dev/pandas/pulls/14803 | 2016-12-05T07:00:13Z | 2016-12-08T16:41:52Z | 2016-12-08T16:41:52Z | 2016-12-21T04:15:21Z |
DOC: minor format fix | diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst
index 3e84d15caf50b..a9d0ab5476b66 100644
--- a/doc/source/cookbook.rst
+++ b/doc/source/cookbook.rst
@@ -107,10 +107,8 @@ Splitting
df = pd.DataFrame(
{'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
- dflow = df[df.AAA <= 5]
- dfhigh = df[df.AAA > 5]
-
- dflow; dfhigh
+ dflow = df[df.AAA <= 5]; dflow
+ dfhigh = df[df.AAA > 5]; dfhigh
Building Criteria
*****************
In the original version, dflow is not displayed.
See: http://pandas.pydata.org/pandas-docs/stable/cookbook.html#splitting
| https://api.github.com/repos/pandas-dev/pandas/pulls/14802 | 2016-12-05T04:28:29Z | 2016-12-05T08:57:01Z | 2016-12-05T08:57:01Z | 2016-12-05T08:57:14Z |
DOC: add section on groupby().rolling/expanding/resample | diff --git a/doc/source/computation.rst b/doc/source/computation.rst
index 1414d2dd3c8dc..d727424750be5 100644
--- a/doc/source/computation.rst
+++ b/doc/source/computation.rst
@@ -214,6 +214,11 @@ computing common *window* or *rolling* statistics. Among these are count, sum,
mean, median, correlation, variance, covariance, standard deviation, skewness,
and kurtosis.
+Starting in version 0.18.1, the ``rolling()`` and ``expanding()``
+functions can be used directly from DataFrameGroupBy objects,
+see the :ref:`groupby docs <groupby.transform.window_resample>`.
+
+
.. note::
The API for window statistics is quite similar to the way one works with ``GroupBy`` objects, see the documentation :ref:`here <groupby>`
diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index c5a77770085d6..ff97775afc2e2 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -614,6 +614,54 @@ and that the transformed data contains no NAs.
grouped.ffill()
+
+.. _groupby.transform.window_resample:
+
+New syntax to window and resample operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. versionadded:: 0.18.1
+
+Working with the resample, expanding or rolling operations on the groupby
+level used to require the application of helper functions. However,
+now it is possible to use ``resample()``, ``expanding()`` and
+``rolling()`` as methods on groupbys.
+
+The example below will apply the ``rolling()`` method on the samples of
+the column B based on the groups of column A.
+
+.. ipython:: python
+
+ df = pd.DataFrame({'A': [1] * 10 + [5] * 10,
+ 'B': np.arange(20)})
+ df
+
+ df.groupby('A').rolling(4).B.mean()
+
+
+The ``expanding()`` method will accumulate a given operation
+(``sum()`` in the example) for all the members of each particular
+group.
+
+.. ipython:: python
+
+ df.groupby('A').expanding().sum()
+
+
+Suppose you want to use the ``resample()`` method to get a daily
+frequency in each group of your dataframe and wish to complete the
+missing values with the ``ffill()`` method.
+
+.. ipython:: python
+
+ df = pd.DataFrame({'date': pd.date_range(start='2016-01-01',
+ periods=4,
+ freq='W'),
+ 'group': [1, 1, 2, 2],
+ 'val': [5, 6, 7, 8]}).set_index('date')
+ df
+
+ df.groupby('group').resample('1D').ffill()
+
.. _groupby.filter:
Filtration
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 854de443ac5ee..9253124f7e8b2 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1288,6 +1288,9 @@ limited to, financial applications.
``.resample()`` is a time-based groupby, followed by a reduction method on each of its groups.
See some :ref:`cookbook examples <cookbook.resample>` for some advanced strategies
+Starting in version 0.18.1, the ``resample()`` function can be used directly from
+DataFrameGroupBy objects, see the :ref:`groupby docs <groupby.transform.window_resample>`.
+
.. note::
``.resample()`` is similar to using a ``.rolling()`` operation with a time-based offset, see a discussion :ref:`here <stats.moments.ts-versus-resampling>`
| - [x] closes #14759
This is my first PR here. I'm not sure about the missing entries in the API reference and need some guidance on that, as well as on the best place to put the xref in computation.rst and timeseries.rst.
Thanks.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14801 | 2016-12-05T02:52:52Z | 2016-12-10T11:12:01Z | 2016-12-10T11:12:01Z | 2016-12-10T11:21:49Z |
ENH: add timedelta as valid type for interpolate with method='time' | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index d89103309e990..f534c67273560 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -59,6 +59,7 @@ Other enhancements
- ``Series`` provides a ``to_excel`` method to output Excel files (:issue:`8825`)
- The ``usecols`` argument in ``pd.read_csv`` now accepts a callable function as a value (:issue:`14154`)
- ``pd.DataFrame.plot`` now prints a title above each subplot if ``suplots=True`` and ``title`` is a list of strings (:issue:`14753`)
+- ``pd.Series.interpolate`` now supports timedelta as an index type with ``method='time'`` (:issue:`6424`)
.. _whatsnew_0200.api_breaking:
diff --git a/pandas/core/missing.py b/pandas/core/missing.py
index b847415f274db..f1191ff1c7009 100644
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -12,7 +12,8 @@
is_float_dtype, is_datetime64_dtype,
is_integer_dtype, _ensure_float64,
is_scalar,
- _DATELIKE_DTYPES)
+ _DATELIKE_DTYPES,
+ needs_i8_conversion)
from pandas.types.missing import isnull
@@ -187,7 +188,7 @@ def _interp_limit(invalid, fw_limit, bw_limit):
if method in ('values', 'index'):
inds = np.asarray(xvalues)
# hack for DatetimeIndex, #1646
- if issubclass(inds.dtype.type, np.datetime64):
+ if needs_i8_conversion(inds.dtype.type):
inds = inds.view(np.int64)
if inds.dtype == np.object_:
inds = lib.maybe_convert_objects(inds)
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index 4e6c58df54dfd..5666a07cad4b8 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -891,6 +891,23 @@ def test_spline_error(self):
with tm.assertRaises(ValueError):
s.interpolate(method='spline', order=0)
+ def test_interp_timedelta64(self):
+ # GH 6424
+ df = Series([1, np.nan, 3],
+ index=pd.to_timedelta([1, 2, 3]))
+ result = df.interpolate(method='time')
+ expected = Series([1., 2., 3.],
+ index=pd.to_timedelta([1, 2, 3]))
+ assert_series_equal(result, expected)
+
+ # test for non uniform spacing
+ df = Series([1, np.nan, 3],
+ index=pd.to_timedelta([1, 2, 4]))
+ result = df.interpolate(method='time')
+ expected = Series([1., 1.666667, 3.],
+ index=pd.to_timedelta([1, 2, 4]))
+ assert_series_equal(result, expected)
+
if __name__ == '__main__':
import nose
| - [x] closes #6424
- [x] tests added / passed
- [x] passes ``git diff upstream/master | flake8 --diff``
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/14799 | 2016-12-05T00:24:22Z | 2016-12-10T11:19:49Z | 2016-12-10T11:19:49Z | 2016-12-10T11:20:03Z |
Small typos | diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index efcde100d1ce7..abb7018b2e25b 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -155,14 +155,14 @@ class DateOffset(object):
DateOffsets can be created to move dates forward a given number of
valid dates. For example, Bday(2) can be added to a date to move
it two business days forward. If the date does not start on a
- valid date, first it is moved to a valid date. Thus psedo code
+ valid date, first it is moved to a valid date. Thus pseudo code
is:
def __add__(date):
date = rollback(date) # does nothing if date is valid
return date + <n number of periods>
- When a date offset is created for a negitive number of periods,
+ When a date offset is created for a negative number of periods,
the date is first rolled forward. The pseudo code is:
def __add__(date):
| https://api.github.com/repos/pandas-dev/pandas/pulls/14789 | 2016-12-02T19:18:56Z | 2016-12-02T22:27:27Z | 2016-12-02T22:27:27Z | 2016-12-02T22:27:35Z | |
DOC: warning section on memory overflow when joining/merging dataframes on index with duplicate keys | diff --git a/doc/source/merging.rst b/doc/source/merging.rst
index c6541a26c72b4..f95987afd4c77 100644
--- a/doc/source/merging.rst
+++ b/doc/source/merging.rst
@@ -692,6 +692,29 @@ either the left or right tables, the values in the joined table will be
p.plot([left, right], result,
labels=['left', 'right'], vertical=False);
plt.close('all');
+
+Here is another example with duplicate join keys in DataFrames:
+
+.. ipython:: python
+
+ left = pd.DataFrame({'A' : [1,2], 'B' : [2, 2]})
+
+ right = pd.DataFrame({'A' : [4,5,6], 'B': [2,2,2]})
+
+ result = pd.merge(left, right, on='B', how='outer')
+
+.. ipython:: python
+ :suppress:
+
+ @savefig merging_merge_on_key_dup.png
+ p.plot([left, right], result,
+ labels=['left', 'right'], vertical=False);
+ plt.close('all');
+
+.. warning::
+
+ Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions,
+ may result in memory overflow. It is the user' s responsibility to manage duplicate values in keys before joining large DataFrames.
.. _merging.indicator:
| - [ ] closes #14736
| https://api.github.com/repos/pandas-dev/pandas/pulls/14788 | 2016-12-02T09:41:58Z | 2016-12-11T20:26:35Z | 2016-12-11T20:26:35Z | 2016-12-11T20:26:55Z |
Fix typo at pandas/core/generic.py | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 27ca817c19a63..7868969f477b0 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1095,7 +1095,7 @@ def to_hdf(self, path_or_buf, key, **kwargs):
----------
path_or_buf : the path (string) or HDFStore object
key : string
- indentifier for the group in the store
+ identifier for the group in the store
mode : optional, {'a', 'w', 'r+'}, default 'a'
``'w'``
| `indentifier` -> `identifier`
| https://api.github.com/repos/pandas-dev/pandas/pulls/14787 | 2016-12-02T04:11:03Z | 2016-12-02T12:51:50Z | 2016-12-02T12:51:50Z | 2016-12-15T09:37:59Z |
API: add dtype param to read_excel | diff --git a/doc/source/io.rst b/doc/source/io.rst
index f524d37d0de60..f22374553e9c3 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -2538,6 +2538,20 @@ missing data to recover integer dtype:
cfun = lambda x: int(x) if x else -1
read_excel('path_to_file.xls', 'Sheet1', converters={'MyInts': cfun})
+dtype Specifications
+++++++++++++++++++++
+
+.. versionadded:: 0.20
+
+As an alternative to converters, the type for an entire column can
+be specified using the `dtype` keyword, which takes a dictionary
+mapping column names to types. To interpret data with
+no type inference, use the type ``str`` or ``object``.
+
+.. code-block:: python
+
+ read_excel('path_to_file.xls', dtype={'MyInts': 'int64', 'MyText': str})
+
.. _io.excel_writer:
Writing Excel Files
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 6fe0ad8092a03..06517c1489861 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -22,8 +22,8 @@ New features
~~~~~~~~~~~~
-``read_csv`` supports ``dtype`` keyword for python engine
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+``dtype`` keyword for data io
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``dtype`` keyword argument in the :func:`read_csv` function for specifying the types of parsed columns
is now supported with the ``'python'`` engine (:issue:`14295`). See the :ref:`io docs <io.dtypes>` for more information.
@@ -35,7 +35,7 @@ The ``dtype`` keyword argument in the :func:`read_csv` function for specifying t
pd.read_csv(StringIO(data), engine='python', dtype={'a':'float64', 'b':'object'}).dtypes
The ``dtype`` keyword argument is also now supported in the :func:`read_fwf` function for parsing
-fixed-width text files.
+fixed-width text files, and :func:`read_excel` for parsing Excel files.
.. ipython:: python
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index d3171ceedfc03..6b7c597ecfcdc 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -87,6 +87,14 @@
either be integers or column labels, values are functions that take one
input argument, the Excel cell content, and return the transformed
content.
+dtype : Type name or dict of column -> type, default None
+ Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32}
+ Use `str` or `object` to preserve and not interpret dtype.
+ If converters are specified, they will be applied INSTEAD
+ of dtype conversion.
+
+ .. versionadded:: 0.20.0
+
true_values : list, default None
Values to consider as True
@@ -184,8 +192,8 @@ def read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0,
index_col=None, names=None, parse_cols=None, parse_dates=False,
date_parser=None, na_values=None, thousands=None,
convert_float=True, has_index_names=None, converters=None,
- true_values=None, false_values=None, engine=None, squeeze=False,
- **kwds):
+ dtype=None, true_values=None, false_values=None, engine=None,
+ squeeze=False, **kwds):
if not isinstance(io, ExcelFile):
io = ExcelFile(io, engine=engine)
@@ -195,7 +203,7 @@ def read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0,
index_col=index_col, parse_cols=parse_cols, parse_dates=parse_dates,
date_parser=date_parser, na_values=na_values, thousands=thousands,
convert_float=convert_float, has_index_names=has_index_names,
- skip_footer=skip_footer, converters=converters,
+ skip_footer=skip_footer, converters=converters, dtype=dtype,
true_values=true_values, false_values=false_values, squeeze=squeeze,
**kwds)
@@ -318,7 +326,7 @@ def _parse_excel(self, sheetname=0, header=0, skiprows=None, names=None,
parse_cols=None, parse_dates=False, date_parser=None,
na_values=None, thousands=None, convert_float=True,
true_values=None, false_values=None, verbose=False,
- squeeze=False, **kwds):
+ dtype=None, squeeze=False, **kwds):
skipfooter = kwds.pop('skipfooter', None)
if skipfooter is not None:
@@ -501,6 +509,7 @@ def _parse_cell(cell_contents, cell_typ):
skiprows=skiprows,
skipfooter=skip_footer,
squeeze=squeeze,
+ dtype=dtype,
**kwds)
output[asheetname] = parser.read()
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 580a3398bb66a..ef839297c80d3 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -18,7 +18,7 @@
from pandas.types.common import (is_integer, _ensure_object,
is_list_like, is_integer_dtype,
is_float, is_dtype_equal,
- is_object_dtype,
+ is_object_dtype, is_string_dtype,
is_scalar, is_categorical_dtype)
from pandas.types.missing import isnull
from pandas.types.cast import _astype_nansafe
@@ -1329,7 +1329,7 @@ def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False,
try_num_bool=False)
else:
# skip inference if specified dtype is object
- try_num_bool = not (cast_type and is_object_dtype(cast_type))
+ try_num_bool = not (cast_type and is_string_dtype(cast_type))
# general type inference and conversion
cvals, na_count = self._infer_types(
diff --git a/pandas/io/tests/data/testdtype.xls b/pandas/io/tests/data/testdtype.xls
new file mode 100644
index 0000000000000..f63357524324f
Binary files /dev/null and b/pandas/io/tests/data/testdtype.xls differ
diff --git a/pandas/io/tests/data/testdtype.xlsm b/pandas/io/tests/data/testdtype.xlsm
new file mode 100644
index 0000000000000..20e658288d5ac
Binary files /dev/null and b/pandas/io/tests/data/testdtype.xlsm differ
diff --git a/pandas/io/tests/data/testdtype.xlsx b/pandas/io/tests/data/testdtype.xlsx
new file mode 100644
index 0000000000000..7c65263c373a3
Binary files /dev/null and b/pandas/io/tests/data/testdtype.xlsx differ
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 49a508dd22023..9c909398d2d88 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -373,6 +373,33 @@ def test_reader_converters(self):
actual = self.get_exceldf(basename, 'Sheet1', converters=converters)
tm.assert_frame_equal(actual, expected)
+ def test_reader_dtype(self):
+ # GH 8212
+ basename = 'testdtype'
+ actual = self.get_exceldf(basename)
+
+ expected = DataFrame({
+ 'a': [1, 2, 3, 4],
+ 'b': [2.5, 3.5, 4.5, 5.5],
+ 'c': [1, 2, 3, 4],
+ 'd': [1.0, 2.0, np.nan, 4.0]}).reindex(
+ columns=['a', 'b', 'c', 'd'])
+
+ tm.assert_frame_equal(actual, expected)
+
+ actual = self.get_exceldf(basename,
+ dtype={'a': 'float64',
+ 'b': 'float32',
+ 'c': str})
+
+ expected['a'] = expected['a'].astype('float64')
+ expected['b'] = expected['b'].astype('float32')
+ expected['c'] = ['001', '002', '003', '004']
+ tm.assert_frame_equal(actual, expected)
+
+ with tm.assertRaises(ValueError):
+ actual = self.get_exceldf(basename, dtype={'d': 'int64'})
+
def test_reading_all_sheets(self):
# Test reading all sheetnames by setting sheetname to None,
# Ensure a dict is returned.
| - [x] closes #8212
- [ ] tests added / passed
- [x] passes ``git diff upstream/master | flake8 --diff``
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/14786 | 2016-12-02T02:02:22Z | 2016-12-03T12:37:32Z | 2016-12-03T12:37:32Z | 2016-12-03T12:37:32Z |
Fix a simple typo | diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 7cff1104c50be..96b447cda4bc4 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -421,7 +421,7 @@ def _validate(self, lvalues, rvalues, name):
# if tz's must be equal (same or None)
if getattr(lvalues, 'tz', None) != getattr(rvalues, 'tz', None):
- raise ValueError("Incompatbile tz's on datetime subtraction "
+ raise ValueError("Incompatible tz's on datetime subtraction "
"ops")
elif ((self.is_timedelta_lhs or self.is_offset_lhs) and
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 72973105ff3bd..aa59a74606674 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1947,7 +1947,7 @@ def test_append_raise(self):
self.assertRaises(TypeError, store.append,
'df', Series(np.arange(10)))
- # appending an incompatbile table
+ # appending an incompatible table
df = tm.makeDataFrame()
store.append('df', df)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master | flake8 --diff``
- [ ] whatsnew entry
Insignificant PR, it's just fixing a single typo on an exception. | https://api.github.com/repos/pandas-dev/pandas/pulls/14785 | 2016-12-02T01:21:12Z | 2016-12-02T12:52:55Z | 2016-12-02T12:52:55Z | 2016-12-15T09:38:06Z |
ENH: Pandas Series provide to_excel method | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 638abd5421862..929664840f583 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -692,6 +692,7 @@ Serialization / IO / Conversion
Series.to_pickle
Series.to_csv
Series.to_dict
+ Series.to_excel
Series.to_frame
Series.to_xarray
Series.to_hdf
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index f172f70932d60..93c177428d0c2 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -49,8 +49,8 @@ Other enhancements
^^^^^^^^^^^^^^^^^^
- ``pd.read_excel`` now preserves sheet order when using ``sheetname=None`` (:issue:`9930`)
-
- ``pd.cut`` and ``pd.qcut`` now support datetime64 and timedelta64 dtypes (issue:`14714`)
+- ``Series`` provides a ``to_excel`` method to output Excel files (:issue:`8825`)
.. _whatsnew_0200.api_breaking:
@@ -105,4 +105,4 @@ Performance Improvements
.. _whatsnew_0200.bug_fixes:
Bug Fixes
-~~~~~~~~~
+~~~~~~~~~
\ No newline at end of file
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index bf1ff28cd63b1..4cb26ab2f5886 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -105,7 +105,8 @@
axes_single_arg="{0 or 'index', 1 or 'columns'}",
optional_by="""
by : str or list of str
- Name or list of names which refer to the axis items.""")
+ Name or list of names which refer to the axis items.""",
+ versionadded_to_excel='')
_numeric_only_doc = """numeric_only : boolean, default None
Include only float, int, boolean data. If None, will attempt to use
@@ -1385,65 +1386,11 @@ def to_csv(self, path_or_buf=None, sep=",", na_rep='', float_format=None,
if path_or_buf is None:
return formatter.path_or_buf.getvalue()
+ @Appender(_shared_docs['to_excel'] % _shared_doc_kwargs)
def to_excel(self, excel_writer, sheet_name='Sheet1', na_rep='',
float_format=None, columns=None, header=True, index=True,
index_label=None, startrow=0, startcol=0, engine=None,
merge_cells=True, encoding=None, inf_rep='inf', verbose=True):
- """
- Write DataFrame to a excel sheet
-
- Parameters
- ----------
- excel_writer : string or ExcelWriter object
- File path or existing ExcelWriter
- sheet_name : string, default 'Sheet1'
- Name of sheet which will contain DataFrame
- na_rep : string, default ''
- Missing data representation
- float_format : string, default None
- Format string for floating point numbers
- columns : sequence, optional
- Columns to write
- header : boolean or list of string, default True
- Write out column names. If a list of string is given it is
- assumed to be aliases for the column names
- index : boolean, default True
- Write row names (index)
- index_label : string or sequence, default None
- Column label for index column(s) if desired. If None is given, and
- `header` and `index` are True, then the index names are used. A
- sequence should be given if the DataFrame uses MultiIndex.
- startrow :
- upper left cell row to dump data frame
- startcol :
- upper left cell column to dump data frame
- engine : string, default None
- write engine to use - you can also set this via the options
- ``io.excel.xlsx.writer``, ``io.excel.xls.writer``, and
- ``io.excel.xlsm.writer``.
- merge_cells : boolean, default True
- Write MultiIndex and Hierarchical Rows as merged cells.
- encoding: string, default None
- encoding of the resulting excel file. Only necessary for xlwt,
- other writers support unicode natively.
- inf_rep : string, default 'inf'
- Representation for infinity (there is no native representation for
- infinity in Excel)
-
- Notes
- -----
- If passing an existing ExcelWriter object, then the sheet will be added
- to the existing workbook. This can be used to save different
- DataFrames to one workbook:
-
- >>> writer = ExcelWriter('output.xlsx')
- >>> df1.to_excel(writer,'Sheet1')
- >>> df2.to_excel(writer,'Sheet2')
- >>> writer.save()
-
- For compatibility with to_csv, to_excel serializes lists and dicts to
- strings before writing.
- """
from pandas.io.excel import ExcelWriter
need_save = False
if encoding is None:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7868969f477b0..c25ff7d8a318b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1016,6 +1016,62 @@ def __setstate__(self, state):
# ----------------------------------------------------------------------
# I/O Methods
+ _shared_docs['to_excel'] = """
+ Write %(klass)s to a excel sheet
+ %(versionadded_to_excel)s
+ Parameters
+ ----------
+ excel_writer : string or ExcelWriter object
+ File path or existing ExcelWriter
+ sheet_name : string, default 'Sheet1'
+ Name of sheet which will contain DataFrame
+ na_rep : string, default ''
+ Missing data representation
+ float_format : string, default None
+ Format string for floating point numbers
+ columns : sequence, optional
+ Columns to write
+ header : boolean or list of string, default True
+ Write out column names. If a list of string is given it is
+ assumed to be aliases for the column names
+ index : boolean, default True
+ Write row names (index)
+ index_label : string or sequence, default None
+ Column label for index column(s) if desired. If None is given, and
+ `header` and `index` are True, then the index names are used. A
+ sequence should be given if the DataFrame uses MultiIndex.
+ startrow :
+ upper left cell row to dump data frame
+ startcol :
+ upper left cell column to dump data frame
+ engine : string, default None
+ write engine to use - you can also set this via the options
+ ``io.excel.xlsx.writer``, ``io.excel.xls.writer``, and
+ ``io.excel.xlsm.writer``.
+ merge_cells : boolean, default True
+ Write MultiIndex and Hierarchical Rows as merged cells.
+ encoding: string, default None
+ encoding of the resulting excel file. Only necessary for xlwt,
+ other writers support unicode natively.
+ inf_rep : string, default 'inf'
+ Representation for infinity (there is no native representation for
+ infinity in Excel)
+
+ Notes
+ -----
+ If passing an existing ExcelWriter object, then the sheet will be added
+ to the existing workbook. This can be used to save different
+ DataFrames to one workbook:
+
+ >>> writer = ExcelWriter('output.xlsx')
+ >>> df1.to_excel(writer,'Sheet1')
+ >>> df2.to_excel(writer,'Sheet2')
+ >>> writer.save()
+
+ For compatibility with to_csv, to_excel serializes lists and dicts to
+ strings before writing.
+ """
+
def to_json(self, path_or_buf=None, orient=None, date_format='epoch',
double_precision=10, force_ascii=True, date_unit='ms',
default_handler=None, lines=False):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 56a3933bded3b..dc4eb3f0eba26 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -80,7 +80,8 @@
inplace="""inplace : boolean, default False
If True, performs operation inplace and returns None.""",
unique='np.ndarray', duplicated='Series',
- optional_by='')
+ optional_by='',
+ versionadded_to_excel='\n.. versionadded:: 0.20.0\n')
def _coerce_method(converter):
@@ -2619,6 +2620,19 @@ def to_csv(self, path=None, index=True, sep=",", na_rep='',
if path is None:
return result
+ @Appender(generic._shared_docs['to_excel'] % _shared_doc_kwargs)
+ def to_excel(self, excel_writer, sheet_name='Sheet1', na_rep='',
+ float_format=None, columns=None, header=True, index=True,
+ index_label=None, startrow=0, startcol=0, engine=None,
+ merge_cells=True, encoding=None, inf_rep='inf', verbose=True):
+ df = self.to_frame()
+ df.to_excel(excel_writer=excel_writer, sheet_name=sheet_name,
+ na_rep=na_rep, float_format=float_format, columns=columns,
+ header=header, index=index, index_label=index_label,
+ startrow=startrow, startcol=startcol, engine=engine,
+ merge_cells=merge_cells, encoding=encoding,
+ inf_rep=inf_rep, verbose=verbose)
+
def dropna(self, axis=0, inplace=False, **kwargs):
"""
Return Series without null values
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 9c909398d2d88..7a1b5655cfbf7 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -1078,6 +1078,12 @@ def test_roundtrip(self):
recons = read_excel(path, index_col=0)
tm.assert_frame_equal(self.frame, recons)
+ # GH 8825 Pandas Series should provide to_excel method
+ s = self.frame["A"]
+ s.to_excel(path)
+ recons = read_excel(path, index_col=0)
+ tm.assert_frame_equal(s.to_frame(), recons)
+
def test_mixed(self):
_skip_if_no_xlrd()
| - [x] closes #8825
- [x] tests added / passed
- [ ] passes ``git diff upstream/master | flake8 --diff``
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/14780 | 2016-12-01T07:51:08Z | 2016-12-05T14:39:14Z | 2016-12-05T14:39:14Z | 2016-12-05T14:39:26Z |
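The PR above wires `Series.to_excel` up as a thin wrapper around `DataFrame.to_excel` via `to_frame()`. A minimal round-trip sketch of that behavior (assumes pandas >= 0.20 and an installed Excel engine such as openpyxl; the filename is illustrative):

```python
import pandas as pd

# A named Series is written as a one-column sheet, because
# Series.to_excel simply delegates to self.to_frame().to_excel(...)
s = pd.Series([1.5, 2.5, 3.5], index=["x", "y", "z"], name="A")
s.to_excel("output.xlsx", sheet_name="Sheet1")

# Reading it back recovers the one-column frame
recons = pd.read_excel("output.xlsx", index_col=0)
```

Because the call is forwarded argument-for-argument, every `DataFrame.to_excel` keyword (`na_rep`, `float_format`, `startrow`, ...) is accepted unchanged.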
DOC: Remove SparseSeries from SparseArray doc | diff --git a/pandas/sparse/array.py b/pandas/sparse/array.py
index a15def65cad7e..4bb36446c9ff7 100644
--- a/pandas/sparse/array.py
+++ b/pandas/sparse/array.py
@@ -547,8 +547,8 @@ def astype(self, dtype=None, copy=True):
def copy(self, deep=True):
"""
- Make a copy of the SparseSeries. Only the actual sparse values need to
- be copied
+ Make a copy of the SparseArray. Only the actual sparse values need to
+ be copied.
"""
if deep:
values = self.sp_values.copy()
@@ -559,9 +559,9 @@ def copy(self, deep=True):
def count(self):
"""
- Compute sum of non-NA/null observations in SparseSeries. If the
+ Compute sum of non-NA/null observations in SparseArray. If the
fill_value is not NaN, the "sparse" locations will be included in the
- observation count
+ observation count.
Returns
-------
| Title is self-explanatory. Looks like a failed copy-paste.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14769 | 2016-11-30T06:43:32Z | 2016-11-30T11:03:10Z | 2016-11-30T11:03:10Z | 2016-11-30T16:52:51Z |
DOC/TST: dtype param in read_fwf | diff --git a/doc/source/io.rst b/doc/source/io.rst
index b1c151def26af..f524d37d0de60 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1268,11 +1268,22 @@ is whitespace).
df = pd.read_fwf('bar.csv', header=None, index_col=0)
df
+.. versionadded:: 0.20.0
+
+``read_fwf`` supports the ``dtype`` parameter for specifying the types of
+parsed columns to be different from the inferred type.
+
+.. ipython:: python
+
+ pd.read_fwf('bar.csv', header=None, index_col=0).dtypes
+ pd.read_fwf('bar.csv', header=None, dtype={2: 'object'}).dtypes
+
.. ipython:: python
:suppress:
os.remove('bar.csv')
+
Indexes
'''''''
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index ff086380fdb05..6fe0ad8092a03 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -34,6 +34,15 @@ The ``dtype`` keyword argument in the :func:`read_csv` function for specifying t
pd.read_csv(StringIO(data), engine='python').dtypes
pd.read_csv(StringIO(data), engine='python', dtype={'a':'float64', 'b':'object'}).dtypes
+The ``dtype`` keyword argument is also now supported in the :func:`read_fwf` function for parsing
+fixed-width text files.
+
+.. ipython:: python
+
+ data = "a b\n1 2\n3 4"
+ pd.read_fwf(StringIO(data)).dtypes
+ pd.read_fwf(StringIO(data), dtype={'a':'float64', 'b':'object'}).dtypes
+
.. _whatsnew_0200.enhancements.other:
Other enhancements
diff --git a/pandas/io/tests/parser/test_read_fwf.py b/pandas/io/tests/parser/test_read_fwf.py
index 11b10211650d6..42b1116280a1e 100644
--- a/pandas/io/tests/parser/test_read_fwf.py
+++ b/pandas/io/tests/parser/test_read_fwf.py
@@ -345,3 +345,23 @@ def test_variable_width_unicode(self):
header=None, encoding='utf8')
tm.assert_frame_equal(expected, read_fwf(
BytesIO(test.encode('utf8')), header=None, encoding='utf8'))
+
+ def test_dtype(self):
+ data = ''' a b c
+1 2 3.2
+3 4 5.2
+'''
+ colspecs = [(0, 5), (5, 10), (10, None)]
+ result = pd.read_fwf(StringIO(data), colspecs=colspecs)
+ expected = pd.DataFrame({
+ 'a': [1, 3],
+ 'b': [2, 4],
+ 'c': [3.2, 5.2]}, columns=['a', 'b', 'c'])
+ tm.assert_frame_equal(result, expected)
+
+ expected['a'] = expected['a'].astype('float64')
+ expected['b'] = expected['b'].astype(str)
+ expected['c'] = expected['c'].astype('int32')
+ result = pd.read_fwf(StringIO(data), colspecs=colspecs,
+ dtype={'a': 'float64', 'b': str, 'c': 'int32'})
+ tm.assert_frame_equal(result, expected)
| - [x] closes #7141
- [ ] tests added / passed
- [x] passes ``git diff upstream/master | flake8 --diff``
- [x] whatsnew entry
#14295 actually implemented the behavior; this PR just adds docs and tests. | https://api.github.com/repos/pandas-dev/pandas/pulls/14768 | 2016-11-30T01:29:13Z | 2016-11-30T15:11:05Z | 2016-11-30T15:11:05Z | 2016-11-30T15:11:14Z |
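The documented behavior can be sketched directly from the PR's own test: with explicit `colspecs`, `read_fwf` accepts a `dtype` mapping to override the inferred column types, mirroring `read_csv` (a minimal sketch; the fixed-width layout here is illustrative):

```python
from io import StringIO
import pandas as pd

# Three fixed-width columns: [0, 5), [5, 10), [10, end)
data = ("a    b    c\n"
        "1    2    3.2\n"
        "3    4    5.2\n")
colspecs = [(0, 5), (5, 10), (10, None)]

# Without dtype, a and b are inferred as int64, c as float64
inferred = pd.read_fwf(StringIO(data), colspecs=colspecs)

# With dtype, parsed columns are overridden per name, as in read_csv
result = pd.read_fwf(StringIO(data), colspecs=colspecs,
                     dtype={"a": "float64", "b": str})
```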
BLD: clean .pxi when cleaning | diff --git a/setup.py b/setup.py
index 8d2e2669852ea..2bef65c9719dc 100755
--- a/setup.py
+++ b/setup.py
@@ -293,6 +293,11 @@ def initialize_options(self):
if d == '__pycache__':
self._clean_trees.append(pjoin(root, d))
+ # clean the generated pxi files
+ for pxifile in _pxifiles:
+ pxifile = pxifile.replace(".pxi.in", ".pxi")
+ self._clean_me.append(pxifile)
+
for d in ('build', 'dist'):
if os.path.exists(d):
self._clean_trees.append(d)
| https://api.github.com/repos/pandas-dev/pandas/pulls/14766 | 2016-11-29T23:37:09Z | 2016-11-29T23:37:29Z | 2016-11-29T23:37:29Z | 2016-11-29T23:37:29Z | |
added read_msgpack() to index | diff --git a/doc/source/api.rst b/doc/source/api.rst
index a510f663d19ee..638abd5421862 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -27,6 +27,7 @@ Flat File
read_table
read_csv
read_fwf
+ read_msgpack
Clipboard
~~~~~~~~~
| - [x] closes #14702
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master | flake8 --diff``
- [ ] whatsnew entry
This is my first time contributing to pandas; let me know if there's anything I should do differently. | https://api.github.com/repos/pandas-dev/pandas/pulls/14765 | 2016-11-29T21:50:28Z | 2016-11-29T23:33:47Z | 2016-11-29T23:33:47Z | 2016-12-05T19:11:39Z |
ENH: Introduce UnsortedIndexError GH11897 | diff --git a/doc/source/advanced.rst b/doc/source/advanced.rst
index 0c843dd39b56f..7b6b2a09f6037 100644
--- a/doc/source/advanced.rst
+++ b/doc/source/advanced.rst
@@ -528,12 +528,14 @@ return a copy of the data rather than a view:
jim joe
1 z 0.64094
+.. _advanced.unsorted:
+
Furthermore if you try to index something that is not fully lexsorted, this can raise:
.. code-block:: ipython
In [5]: dfm.loc[(0,'y'):(1, 'z')]
- KeyError: 'Key length (2) was greater than MultiIndex lexsort depth (1)'
+ UnsortedIndexError: 'Key length (2) was greater than MultiIndex lexsort depth (1)'
The ``is_lexsorted()`` method on an ``Index`` show if the index is sorted, and the ``lexsort_depth`` property returns the sort depth:
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 0bfd755aae40c..25664fec313ae 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -50,6 +50,11 @@ Other enhancements
- ``Series.sort_index`` accepts parameters ``kind`` and ``na_position`` (:issue:`13589`, :issue:`14444`)
- ``pd.read_excel`` now preserves sheet order when using ``sheetname=None`` (:issue:`9930`)
+
+- New ``UnsortedIndexError`` (subclass of ``KeyError``) raised when indexing/slicing into an
+ unsorted MultiIndex (:issue:`11897`). This allows differentiation between errors due to lack
+ of sorting or an incorrect key. See :ref:`here <advanced.unsorted>`
+
- ``pd.cut`` and ``pd.qcut`` now support datetime64 and timedelta64 dtypes (issue:`14714`)
- ``Series`` provides a ``to_excel`` method to output Excel files (:issue:`8825`)
- The ``usecols`` argument in ``pd.read_csv`` now accepts a callable function as a value (:issue:`14154`)
@@ -70,6 +75,9 @@ Backwards incompatible API changes
Other API Changes
^^^^^^^^^^^^^^^^^
+- Change error message text when indexing via a
+ boolean ``Series`` that has an incompatible index (:issue:`14491`)
+
.. _whatsnew_0200.deprecations:
Deprecations
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 295947bbc1166..fddac1f29d454 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -97,6 +97,16 @@ class UnsupportedFunctionCall(ValueError):
pass
+class UnsortedIndexError(KeyError):
+ """ Error raised when attempting to get a slice of a MultiIndex
+ and the index has not been lexsorted. Subclass of `KeyError`.
+
+ .. versionadded:: 0.20.0
+
+ """
+ pass
+
+
class AbstractMethodError(NotImplementedError):
"""Raise this error instead of NotImplementedError for abstract methods
while keeping compatibility with Python 2 and Python 3.
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 660e8c9446202..c4ae3dcca8367 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1814,7 +1814,9 @@ def check_bool_indexer(ax, key):
result = result.reindex(ax)
mask = isnull(result._values)
if mask.any():
- raise IndexingError('Unalignable boolean Series key provided')
+ raise IndexingError('Unalignable boolean Series provided as '
+ 'indexer (index of the boolean Series and of '
+ 'the indexed object do not match')
result = result.astype(bool)._values
elif is_sparse(result):
result = result.to_dense()
diff --git a/pandas/indexes/multi.py b/pandas/indexes/multi.py
index 45b6cad89d020..132543e0e386c 100644
--- a/pandas/indexes/multi.py
+++ b/pandas/indexes/multi.py
@@ -25,7 +25,8 @@
from pandas.core.common import (_values_from_object,
is_bool_indexer,
is_null_slice,
- PerformanceWarning)
+ PerformanceWarning,
+ UnsortedIndexError)
from pandas.core.base import FrozenList
@@ -1936,9 +1937,10 @@ def get_locs(self, tup):
# must be lexsorted to at least as many levels
if not self.is_lexsorted_for_tuple(tup):
- raise KeyError('MultiIndex Slicing requires the index to be fully '
- 'lexsorted tuple len ({0}), lexsort depth '
- '({1})'.format(len(tup), self.lexsort_depth))
+ raise UnsortedIndexError('MultiIndex Slicing requires the index '
+ 'to be fully lexsorted tuple len ({0}), '
+ 'lexsort depth ({1})'
+ .format(len(tup), self.lexsort_depth))
# indexer
# this is the list of all values that we want to select
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index e1e714719092a..ccbe65e58a1a5 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -8,7 +8,7 @@
from pandas import (DataFrame, date_range, period_range, MultiIndex, Index,
CategoricalIndex, compat)
-from pandas.core.common import PerformanceWarning
+from pandas.core.common import PerformanceWarning, UnsortedIndexError
from pandas.indexes.base import InvalidIndexError
from pandas.compat import range, lrange, u, PY3, long, lzip
@@ -2535,3 +2535,19 @@ def test_dropna(self):
msg = "invalid how option: xxx"
with tm.assertRaisesRegexp(ValueError, msg):
idx.dropna(how='xxx')
+
+ def test_unsortedindex(self):
+ # GH 11897
+ mi = pd.MultiIndex.from_tuples([('z', 'a'), ('x', 'a'), ('y', 'b'),
+ ('x', 'b'), ('y', 'a'), ('z', 'b')],
+ names=['one', 'two'])
+ df = pd.DataFrame([[i, 10 * i] for i in lrange(6)], index=mi,
+ columns=['one', 'two'])
+
+ with assertRaises(UnsortedIndexError):
+ df.loc(axis=0)['z', :]
+ df.sort_index(inplace=True)
+ self.assertEqual(len(df.loc(axis=0)['z', :]), 2)
+
+ with assertRaises(KeyError):
+ df.loc(axis=0)['q', :]
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 9ca1fd2a76817..bc95ff329d686 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -23,7 +23,7 @@
MultiIndex, Timestamp, Timedelta)
from pandas.formats.printing import pprint_thing
from pandas import concat
-from pandas.core.common import PerformanceWarning
+from pandas.core.common import PerformanceWarning, UnsortedIndexError
import pandas.util.testing as tm
from pandas import date_range
@@ -2230,7 +2230,7 @@ def f():
df = df.sortlevel(level=1, axis=0)
self.assertEqual(df.index.lexsort_depth, 0)
with tm.assertRaisesRegexp(
- KeyError,
+ UnsortedIndexError,
'MultiIndex Slicing requires the index to be fully '
r'lexsorted tuple len \(2\), lexsort depth \(0\)'):
df.loc[(slice(None), df.loc[:, ('a', 'bar')] > 5), :]
@@ -2417,7 +2417,7 @@ def test_per_axis_per_level_doc_examples(self):
def f():
df.loc['A1', (slice(None), 'foo')]
- self.assertRaises(KeyError, f)
+ self.assertRaises(UnsortedIndexError, f)
df = df.sortlevel(axis=1)
# slicing
@@ -3480,8 +3480,12 @@ def test_iloc_mask(self):
('index', '.loc'): '0b11',
('index', '.iloc'): ('iLocation based boolean indexing '
'cannot use an indexable as a mask'),
- ('locs', ''): 'Unalignable boolean Series key provided',
- ('locs', '.loc'): 'Unalignable boolean Series key provided',
+ ('locs', ''): 'Unalignable boolean Series provided as indexer '
+ '(index of the boolean Series and of the indexed '
+ 'object do not match',
+ ('locs', '.loc'): 'Unalignable boolean Series provided as indexer '
+ '(index of the boolean Series and of the '
+ 'indexed object do not match',
('locs', '.iloc'): ('iLocation based boolean indexing on an '
'integer type is not available'),
}
| closes #11897
closes #14491
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master | flake8 --diff``
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/14762 | 2016-11-29T15:18:34Z | 2016-12-09T16:36:30Z | 2016-12-09T16:36:29Z | 2016-12-09T16:40:25Z |
ENH: Added more options for formats.style.bar | diff --git a/doc/source/style.ipynb b/doc/source/style.ipynb
index 2cacbb19d81bb..427b18b988aef 100644
--- a/doc/source/style.ipynb
+++ b/doc/source/style.ipynb
@@ -2,7 +2,9 @@
"cells": [
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "collapsed": true
+ },
"source": [
"# Styling\n",
"\n",
@@ -87,9 +89,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style"
@@ -107,9 +107,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style.highlight_null().render().split('\\n')[:10]"
@@ -160,9 +158,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"s = df.style.applymap(color_negative_red)\n",
@@ -208,9 +204,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style.apply(highlight_max)"
@@ -234,9 +228,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style.\\\n",
@@ -290,9 +282,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style.apply(highlight_max, color='darkorange', axis=None)"
@@ -340,9 +330,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style.apply(highlight_max, subset=['B', 'C', 'D'])"
@@ -358,9 +346,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style.applymap(color_negative_red,\n",
@@ -393,9 +379,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style.format(\"{:.2%}\")"
@@ -411,9 +395,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style.format({'B': \"{:0<4.0f}\", 'D': '{:+.2f}'})"
@@ -429,9 +411,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style.format({\"B\": lambda x: \"±{:.2f}\".format(abs(x))})"
@@ -454,9 +434,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style.highlight_null(null_color='red')"
@@ -472,9 +450,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"import seaborn as sns\n",
@@ -495,9 +471,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"# Uses the full color range\n",
@@ -507,9 +481,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"# Compress the color range\n",
@@ -523,67 +495,128 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "You can include \"bar charts\" in your DataFrame."
+ "There's also `.highlight_min` and `.highlight_max`."
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
- "df.style.bar(subset=['A', 'B'], color='#d65f5f')"
+ "df.style.highlight_max(axis=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "There's also `.highlight_min` and `.highlight_max`."
+ "Use `Styler.set_properties` when the style doesn't actually depend on the values."
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
- "df.style.highlight_max(axis=0)"
+ "df.style.set_properties(**{'background-color': 'black',\n",
+ " 'color': 'lawngreen',\n",
+ " 'border-color': 'white'})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Bar charts"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can include \"bar charts\" in your DataFrame."
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
- "df.style.highlight_min(axis=0)"
+ "df.style.bar(subset=['A', 'B'], color='#d65f5f')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "Use `Styler.set_properties` when the style doesn't actually depend on the values."
+ "New in version 0.20.0 is the ability to customize further the bar chart: You can now have the `df.style.bar` be centered on zero or midpoint value (in addition to the already existing way of having the min value at the left side of the cell), and you can pass a list of `[color_negative, color_positive]`.\n",
+ "\n",
+ "Here's how you can change the above with the new `align='mid'` option:"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
- "df.style.set_properties(**{'background-color': 'black',\n",
- " 'color': 'lawngreen',\n",
- " 'border-color': 'white'})"
+ "df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The following example aims to give a highlight of the behavior of the new align options:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import pandas as pd\n",
+ "from IPython.display import HTML\n",
+ "\n",
+ "# Test series\n",
+ "test1 = pd.Series([-100,-60,-30,-20], name='All Negative')\n",
+ "test2 = pd.Series([10,20,50,100], name='All Positive')\n",
+ "test3 = pd.Series([-10,-5,0,90], name='Both Pos and Neg')\n",
+ "\n",
+ "head = \"\"\"\n",
+ "<table>\n",
+ " <thead>\n",
+ " <th>Align</th>\n",
+ " <th>All Negative</th>\n",
+ " <th>All Positive</th>\n",
+ " <th>Both Neg and Pos</th>\n",
+ " </thead>\n",
+ " </tbody>\n",
+ "\n",
+ "\"\"\"\n",
+ "\n",
+ "aligns = ['left','zero','mid']\n",
+ "for align in aligns:\n",
+ " row = \"<tr><th>{}</th>\".format(align)\n",
+ " for serie in [test1,test2,test3]:\n",
+ " s = serie.copy()\n",
+ " s.name=''\n",
+ " row += \"<td>{}</td>\".format(s.to_frame().style.bar(align=align, \n",
+ " color=['#d65f5f', '#5fba7d'], \n",
+ " width=100).render()) #testn['width']\n",
+ " row += '</tr>'\n",
+ " head += row\n",
+ " \n",
+ "head+= \"\"\"\n",
+ "</tbody>\n",
+ "</table>\"\"\"\n",
+ " \n",
+ "\n",
+ "HTML(head)"
]
},
{
@@ -603,9 +636,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df2 = -df\n",
@@ -616,9 +647,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"style2 = df2.style\n",
@@ -671,9 +700,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"with pd.option_context('display.precision', 2):\n",
@@ -693,9 +720,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style\\\n",
@@ -728,9 +753,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"df.style.set_caption('Colormaps, with a caption.')\\\n",
@@ -756,9 +779,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"from IPython.display import HTML\n",
@@ -854,9 +875,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"from IPython.html import widgets\n",
@@ -892,9 +911,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"np.random.seed(25)\n",
@@ -993,9 +1010,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"%mkdir templates"
@@ -1012,9 +1027,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"%%file templates/myhtml.tpl\n",
@@ -1065,9 +1078,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"MyStyler(df)"
@@ -1083,9 +1094,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"HTML(MyStyler(df).render(table_title=\"Extending Example\"))"
@@ -1101,9 +1110,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"EasyStyler = Styler.from_custom_template(\"templates\", \"myhtml.tpl\")\n",
@@ -1120,9 +1127,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"with open(\"template_structure.html\") as f:\n",
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 6e4756c3c5245..4a95e580af9fd 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -524,6 +524,8 @@ Other Enhancements
- ``parallel_coordinates()`` has gained a ``sort_labels`` keyword arg that sorts class labels and the colours assigned to them (:issue:`15908`)
- Options added to allow one to turn on/off using ``bottleneck`` and ``numexpr``, see :ref:`here <basics.accelerate>` (:issue:`16157`)
+- ``DataFrame.style.bar()`` now accepts two more options to further customize the bar chart. Bar alignment is set with ``align='left'|'mid'|'zero'``, the default is "left", which is backward compatible; You can now pass a list of ``color=[color_negative, color_positive]``. (:issue:`14757`)
+
.. _ISO 8601 duration: https://en.wikipedia.org/wiki/ISO_8601#Durations
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 83062e7d764cd..f1ff2966dca48 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -23,6 +23,7 @@
import numpy as np
import pandas as pd
+from pandas.api.types import is_list_like
from pandas.compat import range
from pandas.core.config import get_option
from pandas.core.generic import _shared_docs
@@ -868,30 +869,141 @@ def set_properties(self, subset=None, **kwargs):
return self.applymap(f, subset=subset)
@staticmethod
- def _bar(s, color, width):
+ def _bar_left(s, color, width, base):
+ """
+ The minimum value is aligned at the left of the cell
+ Parameters
+ ----------
+ color: 2-tuple/list, of [``color_negative``, ``color_positive``]
+ width: float
+ A number between 0 or 100. The largest value will cover ``width``
+ percent of the cell's width
+ base: str
+ The base css format of the cell, e.g.:
+ ``base = 'width: 10em; height: 80%;'``
+ Returns
+ -------
+ self : Styler
+ """
normed = width * (s - s.min()) / (s.max() - s.min())
-
- base = 'width: 10em; height: 80%;'
- attrs = (base + 'background: linear-gradient(90deg,{c} {w}%, '
+ zero_normed = width * (0 - s.min()) / (s.max() - s.min())
+ attrs = (base + 'background: linear-gradient(90deg,{c} {w:.1f}%, '
'transparent 0%)')
- return [attrs.format(c=color, w=x) if x != 0 else base for x in normed]
- def bar(self, subset=None, axis=0, color='#d65f5f', width=100):
+ return [base if x == 0 else attrs.format(c=color[0], w=x)
+ if x < zero_normed
+ else attrs.format(c=color[1], w=x) if x >= zero_normed
+ else base for x in normed]
+
+ @staticmethod
+ def _bar_center_zero(s, color, width, base):
+ """
+ Creates a bar chart where the zero is centered in the cell
+ Parameters
+ ----------
+ color: 2-tuple/list, of [``color_negative``, ``color_positive``]
+ width: float
+ A number between 0 or 100. The largest value will cover ``width``
+ percent of the cell's width
+ base: str
+ The base css format of the cell, e.g.:
+ ``base = 'width: 10em; height: 80%;'``
+ Returns
+ -------
+ self : Styler
+ """
+
+ # Either the min or the max should reach the edge
+ # (50%, centered on zero)
+ m = max(abs(s.min()), abs(s.max()))
+
+ normed = s * 50 * width / (100.0 * m)
+
+ attrs_neg = (base + 'background: linear-gradient(90deg, transparent 0%'
+ ', transparent {w:.1f}%, {c} {w:.1f}%, '
+ '{c} 50%, transparent 50%)')
+
+ attrs_pos = (base + 'background: linear-gradient(90deg, transparent 0%'
+ ', transparent 50%, {c} 50%, {c} {w:.1f}%, '
+ 'transparent {w:.1f}%)')
+
+ return [attrs_pos.format(c=color[1], w=(50 + x)) if x >= 0
+ else attrs_neg.format(c=color[0], w=(50 + x))
+ for x in normed]
+
+ @staticmethod
+ def _bar_center_mid(s, color, width, base):
+ """
+ Creates a bar chart where the midpoint is centered in the cell
+ Parameters
+ ----------
+ color: 2-tuple/list, of [``color_negative``, ``color_positive``]
+ width: float
+ A number between 0 or 100. The largest value will cover ``width``
+ percent of the cell's width
+ base: str
+ The base css format of the cell, e.g.:
+ ``base = 'width: 10em; height: 80%;'``
+ Returns
+ -------
+ self : Styler
+ """
+
+ if s.min() >= 0:
+ # In this case, we place the zero at the left, and the max() should
+ # be at width
+ zero = 0.0
+ slope = width / s.max()
+ elif s.max() <= 0:
+ # In this case, we place the zero at the right, and the min()
+ # should be at 100-width
+ zero = 100.0
+ slope = width / -s.min()
+ else:
+ slope = width / (s.max() - s.min())
+ zero = (100.0 + width) / 2.0 - slope * s.max()
+
+ normed = zero + slope * s
+
+ attrs_neg = (base + 'background: linear-gradient(90deg, transparent 0%'
+ ', transparent {w:.1f}%, {c} {w:.1f}%, '
+ '{c} {zero:.1f}%, transparent {zero:.1f}%)')
+
+ attrs_pos = (base + 'background: linear-gradient(90deg, transparent 0%'
+ ', transparent {zero:.1f}%, {c} {zero:.1f}%, '
+ '{c} {w:.1f}%, transparent {w:.1f}%)')
+
+ return [attrs_pos.format(c=color[1], zero=zero, w=x) if x > zero
+ else attrs_neg.format(c=color[0], zero=zero, w=x)
+ for x in normed]
+
+ def bar(self, subset=None, axis=0, color='#d65f5f', width=100,
+ align='left'):
"""
Color the background ``color`` proptional to the values in each column.
Excludes non-numeric data by default.
-
.. versionadded:: 0.17.1
-
Parameters
----------
subset: IndexSlice, default None
a valid slice for ``data`` to limit the style application to
axis: int
- color: str
+ color: str or 2-tuple/list
+ If a str is passed, the color is the same for both
+ negative and positive numbers. If 2-tuple/list is used, the
+ first element is the color_negative and the second is the
+ color_positive (eg: ['#d65f5f', '#5fba7d'])
width: float
A number between 0 or 100. The largest value will cover ``width``
percent of the cell's width
+ align : {'left', 'zero',' mid'}, default 'left'
+ - 'left' : the min value starts at the left of the cell
+ - 'zero' : a value of zero is located at the center of the cell
+ - 'mid' : the center of the cell is at (max-min)/2, or
+ if values are all negative (positive) the zero is aligned
+ at the right (left) of the cell
+
+ .. versionadded:: 0.20.0
Returns
-------
@@ -899,8 +1011,32 @@ def bar(self, subset=None, axis=0, color='#d65f5f', width=100):
"""
subset = _maybe_numeric_slice(self.data, subset)
subset = _non_reducing_slice(subset)
- self.apply(self._bar, subset=subset, axis=axis, color=color,
- width=width)
+
+ base = 'width: 10em; height: 80%;'
+
+ if not(is_list_like(color)):
+ color = [color, color]
+ elif len(color) == 1:
+ color = [color[0], color[0]]
+ elif len(color) > 2:
+ msg = ("Must pass `color` as string or a list-like"
+ " of length 2: [`color_negative`, `color_positive`]\n"
+ "(eg: color=['#d65f5f', '#5fba7d'])")
+ raise ValueError(msg)
+
+ if align == 'left':
+ self.apply(self._bar_left, subset=subset, axis=axis, color=color,
+ width=width, base=base)
+ elif align == 'zero':
+ self.apply(self._bar_center_zero, subset=subset, axis=axis,
+ color=color, width=width, base=base)
+ elif align == 'mid':
+ self.apply(self._bar_center_mid, subset=subset, axis=axis,
+ color=color, width=width, base=base)
+ else:
+ msg = ("`align` must be one of {'left', 'zero',' mid'}")
+ raise ValueError(msg)
+
return self
def highlight_max(self, subset=None, color='yellow', axis=0):
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index f421c0f8e6d69..9219ac1c9c26b 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -266,7 +266,7 @@ def test_empty(self):
{'props': [['', '']], 'selector': 'row1_col0'}]
assert result == expected
- def test_bar(self):
+ def test_bar_align_left(self):
df = pd.DataFrame({'A': [0, 1, 2]})
result = df.style.bar()._compute().ctx
expected = {
@@ -299,7 +299,7 @@ def test_bar(self):
result = df.style.bar(color='red', width=50)._compute().ctx
assert result == expected
- def test_bar_0points(self):
+ def test_bar_align_left_0points(self):
df = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
result = df.style.bar()._compute().ctx
expected = {(0, 0): ['width: 10em', ' height: 80%'],
@@ -349,6 +349,118 @@ def test_bar_0points(self):
', transparent 0%)']}
assert result == expected
+ def test_bar_align_mid_pos_and_neg(self):
+ df = pd.DataFrame({'A': [-10, 0, 20, 90]})
+
+ result = df.style.bar(align='mid', color=[
+ '#d65f5f', '#5fba7d'])._compute().ctx
+
+ expected = {(0, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 0.0%, #d65f5f 0.0%, '
+ '#d65f5f 10.0%, transparent 10.0%)'],
+ (1, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 10.0%, '
+ '#d65f5f 10.0%, #d65f5f 10.0%, '
+ 'transparent 10.0%)'],
+ (2, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 10.0%, #5fba7d 10.0%'
+ ', #5fba7d 30.0%, transparent 30.0%)'],
+ (3, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 10.0%, '
+ '#5fba7d 10.0%, #5fba7d 100.0%, '
+ 'transparent 100.0%)']}
+
+ self.assertEqual(result, expected)
+
+ def test_bar_align_mid_all_pos(self):
+ df = pd.DataFrame({'A': [10, 20, 50, 100]})
+
+ result = df.style.bar(align='mid', color=[
+ '#d65f5f', '#5fba7d'])._compute().ctx
+
+ expected = {(0, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 0.0%, #5fba7d 0.0%, '
+ '#5fba7d 10.0%, transparent 10.0%)'],
+ (1, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 0.0%, #5fba7d 0.0%, '
+ '#5fba7d 20.0%, transparent 20.0%)'],
+ (2, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 0.0%, #5fba7d 0.0%, '
+ '#5fba7d 50.0%, transparent 50.0%)'],
+ (3, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 0.0%, #5fba7d 0.0%, '
+ '#5fba7d 100.0%, transparent 100.0%)']}
+
+ self.assertEqual(result, expected)
+
+ def test_bar_align_mid_all_neg(self):
+ df = pd.DataFrame({'A': [-100, -60, -30, -20]})
+
+ result = df.style.bar(align='mid', color=[
+ '#d65f5f', '#5fba7d'])._compute().ctx
+
+ expected = {(0, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 0.0%, '
+ '#d65f5f 0.0%, #d65f5f 100.0%, '
+ 'transparent 100.0%)'],
+ (1, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 40.0%, '
+ '#d65f5f 40.0%, #d65f5f 100.0%, '
+ 'transparent 100.0%)'],
+ (2, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 70.0%, '
+ '#d65f5f 70.0%, #d65f5f 100.0%, '
+ 'transparent 100.0%)'],
+ (3, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 80.0%, '
+ '#d65f5f 80.0%, #d65f5f 100.0%, '
+ 'transparent 100.0%)']}
+ assert result == expected
+
+ def test_bar_align_zero_pos_and_neg(self):
+ # See https://github.com/pandas-dev/pandas/pull/14757
+ df = pd.DataFrame({'A': [-10, 0, 20, 90]})
+
+ result = df.style.bar(align='zero', color=[
+ '#d65f5f', '#5fba7d'], width=90)._compute().ctx
+
+ expected = {(0, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 45.0%, '
+ '#d65f5f 45.0%, #d65f5f 50%, '
+ 'transparent 50%)'],
+ (1, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 50%, '
+ '#5fba7d 50%, #5fba7d 50.0%, '
+ 'transparent 50.0%)'],
+ (2, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 50%, #5fba7d 50%, '
+ '#5fba7d 60.0%, transparent 60.0%)'],
+ (3, 0): ['width: 10em', ' height: 80%',
+ 'background: linear-gradient(90deg, '
+ 'transparent 0%, transparent 50%, #5fba7d 50%, '
+ '#5fba7d 95.0%, transparent 95.0%)']}
+ assert result == expected
+
+ def test_bar_bad_align_raises(self):
+ df = pd.DataFrame({'A': [-100, -60, -30, -20]})
+ with pytest.raises(ValueError):
+ df.style.bar(align='poorly', color=['#d65f5f', '#5fba7d'])
+
def test_highlight_null(self, null_color='red'):
df = pd.DataFrame({'A': [0, np.nan]})
result = df.style.highlight_null()._compute().ctx
| - [x] 4 tests added and passed
- [x] passes ``git diff upstream/master | flake8 --diff``
- [x] Update documentation
You can now have `df.style.bar` centered on zero or on the midpoint value (in
addition to the already existing behavior of placing the minimum value at the left side of the cell).
----
## Documentation:
I have produced the following example that I think does a pretty good job of explaining the new capabilities
```python
import pandas as pd
from IPython.display import HTML
# Case 1: s.max() <= 0 (all values negative):
test1 = pd.Series([-100, -60, -30, -20], name='All Negative')
# Case 2: s.min() >= 0 (all values positive):
test2 = pd.Series([10, 20, 50, 100], name='All Positive')
# Case 3: both negative and positive values:
test3 = pd.Series([-10, -5, 0, 90], name='Both Pos and Neg')
head = """
<table>
<thead>
<th>Align</th>
<th>All Negative</th>
<th>All Positive</th>
<th>Both Neg and Pos</th>
</thead>
<tbody>
"""
aligns = ['left','zero','mid']
for align in aligns:
row = "<tr><th>{}</th>".format(align)
for serie in [test1,test2,test3]:
s = serie.copy()
s.name=''
row += "<td>{}</td>".format(s.to_frame().style.bar(align=align,
color=['#d65f5f', '#5fba7d'],
width=100).render()) #testn['width']
row += '</tr>'
head += row
head+= """
</tbody>
</table>"""
HTML(head)
```

What do you think?
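For reference, the positioning math behind `align='mid'` (the branches in `_bar_center_mid` above) can be sketched in plain Python. `bar_mid_positions` is an illustrative helper name, not part of the pandas API:

```python
def bar_mid_positions(values, width=100.0):
    """Return (zero, positions): where the zero line sits and where each
    bar ends, both as percentages of the cell width (align='mid')."""
    lo, hi = min(values), max(values)
    if lo >= 0:
        # All non-negative: zero pinned to the left, max reaches `width`.
        zero, slope = 0.0, width / hi
    elif hi <= 0:
        # All non-positive: zero pinned to the right, min reaches 100 - width.
        zero, slope = 100.0, width / -lo
    else:
        # Mixed signs: scale so the min..max range spans `width` percent.
        slope = width / (hi - lo)
        zero = (100.0 + width) / 2.0 - slope * hi
    return zero, [zero + slope * v for v in values]

# Matches the stops in test_bar_align_mid_pos_and_neg:
zero, pos = bar_mid_positions([-10, 0, 20, 90])  # zero == 10.0, pos == [0.0, 10.0, 30.0, 100.0]
```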
| https://api.github.com/repos/pandas-dev/pandas/pulls/14757 | 2016-11-28T16:02:35Z | 2017-05-02T01:46:44Z | 2017-05-02T01:46:44Z | 2017-07-04T10:20:42Z |
BF: (re)raise the exception always unless returning | diff --git a/pandas/io/tests/json/test_pandas.py b/pandas/io/tests/json/test_pandas.py
index ba02e9186f1df..e6e6f33669e17 100644
--- a/pandas/io/tests/json/test_pandas.py
+++ b/pandas/io/tests/json/test_pandas.py
@@ -167,7 +167,7 @@ def _check_orient(df, orient, dtype=None, numpy=False,
if raise_ok is not None:
if isinstance(detail, raise_ok):
return
- raise
+ raise
if sort is not None and sort in unser.columns:
unser = unser.sort_values(sort)
| The coding logic flaw is obvious (it leaves `unser`, which is used later, undefined), so no dedicated test was added.
Otherwise it leads to masking of the original error while testing on i386, which then fails with
`UnboundLocalError: local variable 'unser' referenced before assignment`
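A minimal sketch of the control flow being fixed, with `int()` standing in for the risky JSON round-trip (names are illustrative, not from the pandas test):

```python
def check_orient(payload, raise_ok=None):
    try:
        unser = int(payload)              # stand-in for the risky deserialization
    except ValueError as detail:
        if raise_ok is not None and isinstance(detail, raise_ok):
            return None                   # tolerated failure: bail out early
        # Every other path must re-raise; if the `raise` sits inside the
        # `if raise_ok` block (the bug), execution falls through and the
        # `return unser` below references an unbound name.
        raise
    return unser
```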
More detail: https://buildd.debian.org/status/fetch.php?pkg=pandas&arch=i386&ver=0.19.1-1&stamp=1479504883 | https://api.github.com/repos/pandas-dev/pandas/pulls/14756 | 2016-11-28T02:48:34Z | 2016-11-28T17:30:16Z | 2016-11-28T17:30:16Z | 2016-11-28T17:30:19Z |
TST: Correct results with np.size and crosstab (#4003) | diff --git a/pandas/tools/tests/test_pivot.py b/pandas/tools/tests/test_pivot.py
index 26f80f463d609..5e800c02c9509 100644
--- a/pandas/tools/tests/test_pivot.py
+++ b/pandas/tools/tests/test_pivot.py
@@ -1281,6 +1281,41 @@ def test_crosstab_with_categorial_columns(self):
columns=expected_columns)
tm.assert_frame_equal(result, expected)
+ def test_crosstab_with_numpy_size(self):
+ # GH 4003
+ df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 6,
+ 'B': ['A', 'B', 'C'] * 8,
+ 'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 4,
+ 'D': np.random.randn(24),
+ 'E': np.random.randn(24)})
+ result = pd.crosstab(index=[df['A'], df['B']],
+ columns=[df['C']],
+ margins=True,
+ aggfunc=np.size,
+ values=df['D'])
+ expected_index = pd.MultiIndex(levels=[['All', 'one', 'three', 'two'],
+ ['', 'A', 'B', 'C']],
+ labels=[[1, 1, 1, 2, 2, 2, 3, 3, 3, 0],
+ [1, 2, 3, 1, 2, 3, 1, 2, 3, 0]],
+ names=['A', 'B'])
+ expected_column = pd.Index(['bar', 'foo', 'All'],
+ dtype='object',
+ name='C')
+ expected_data = np.array([[2., 2., 4.],
+ [2., 2., 4.],
+ [2., 2., 4.],
+ [2., np.nan, 2.],
+ [np.nan, 2., 2.],
+ [2., np.nan, 2.],
+ [np.nan, 2., 2.],
+ [2., np.nan, 2.],
+ [np.nan, 2., 2.],
+ [12., 12., 24.]])
+ expected = pd.DataFrame(expected_data,
+ index=expected_index,
+ columns=expected_column)
+ tm.assert_frame_equal(result, expected)
+
if __name__ == '__main__':
import nose
| - [x] closes #4003
- [x] tests added / passed
- [x] passes ``git diff upstream/master | flake8 --diff``
| https://api.github.com/repos/pandas-dev/pandas/pulls/14755 | 2016-11-27T01:45:54Z | 2016-12-10T21:35:41Z | 2016-12-10T21:35:41Z | 2016-12-21T04:13:37Z |
TST: Same return values in drop_duplicates for Series and DataFrames(#… | diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index da8cf120b8ed4..a5cd0bbc28369 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -949,6 +949,21 @@ def test_duplicated_drop_duplicates_index(self):
s.drop_duplicates(inplace=True)
tm.assert_series_equal(s, original)
+ def test_drop_duplicates_series_vs_dataframe(self):
+ # GH 14192
+ df = pd.DataFrame({'a': [1, 1, 1, 'one', 'one'],
+ 'b': [2, 2, np.nan, np.nan, np.nan],
+ 'c': [3, 3, np.nan, np.nan, 'three'],
+ 'd': [1, 2, 3, 4, 4],
+ 'e': [datetime(2015, 1, 1), datetime(2015, 1, 1),
+ datetime(2015, 2, 1), pd.NaT, pd.NaT]
+ })
+ for column in df.columns:
+ for keep in ['first', 'last', False]:
+ dropped_frame = df[[column]].drop_duplicates(keep=keep)
+ dropped_series = df[column].drop_duplicates(keep=keep)
+ tm.assert_frame_equal(dropped_frame, dropped_series.to_frame())
+
def test_fillna(self):
# # GH 11343
# though Index.fillna and Series.fillna has separate impl,
| - [x] closes #14192
- [x] tests added / passed
- [x] passes ``git diff upstream/master | flake8 --diff``
…14192) | https://api.github.com/repos/pandas-dev/pandas/pulls/14754 | 2016-11-27T00:49:49Z | 2016-12-15T09:02:51Z | 2016-12-15T09:02:51Z | 2016-12-21T04:12:19Z |
ENH: Add the ability to have a separate title for each subplot when plotting | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 0bfd755aae40c..aeafc76876bbd 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -53,6 +53,7 @@ Other enhancements
- ``pd.cut`` and ``pd.qcut`` now support datetime64 and timedelta64 dtypes (issue:`14714`)
- ``Series`` provides a ``to_excel`` method to output Excel files (:issue:`8825`)
- The ``usecols`` argument in ``pd.read_csv`` now accepts a callable function as a value (:issue:`14154`)
+- ``pd.DataFrame.plot`` now prints a title above each subplot if ``suplots=True`` and ``title`` is a list of strings (:issue:`14753`)
.. _whatsnew_0200.api_breaking:
diff --git a/pandas/tests/plotting/test_misc.py b/pandas/tests/plotting/test_misc.py
index a484217da5969..6c313f5937602 100644
--- a/pandas/tests/plotting/test_misc.py
+++ b/pandas/tests/plotting/test_misc.py
@@ -16,13 +16,11 @@
from pandas.tests.plotting.common import (TestPlotBase, _check_plot_works,
_ok_for_gaussian_kde)
-
""" Test cases for misc plot functions """
@tm.mplskip
class TestSeriesPlots(TestPlotBase):
-
def setUp(self):
TestPlotBase.setUp(self)
import matplotlib as mpl
@@ -54,7 +52,6 @@ def test_bootstrap_plot(self):
@tm.mplskip
class TestDataFramePlots(TestPlotBase):
-
@slow
def test_scatter_plot_legacy(self):
tm._skip_if_no_scipy()
@@ -277,6 +274,32 @@ def test_radviz(self):
handles, labels = ax.get_legend_handles_labels()
self._check_colors(handles, facecolors=colors)
+ @slow
+ def test_subplot_titles(self):
+ df = self.iris.drop('Name', axis=1).head()
+ # Use the column names as the subplot titles
+ title = list(df.columns)
+
+ # Case len(title) == len(df)
+ plot = df.plot(subplots=True, title=title)
+ self.assertEqual([p.get_title() for p in plot], title)
+
+ # Case len(title) > len(df)
+ self.assertRaises(ValueError, df.plot, subplots=True,
+ title=title + ["kittens > puppies"])
+
+ # Case len(title) < len(df)
+ self.assertRaises(ValueError, df.plot, subplots=True, title=title[:2])
+
+ # Case subplots=False and title is of type list
+ self.assertRaises(ValueError, df.plot, subplots=False, title=title)
+
+ # Case df with 3 numeric columns but layout of (2,2)
+ plot = df.drop('SepalWidth', axis=1).plot(subplots=True, layout=(2, 2),
+ title=title[:-1])
+ title_list = [ax.get_title() for sublist in plot for ax in sublist]
+ self.assertEqual(title_list, title[:3] + [''])
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index e4cf896a89f57..21e8b64a3656a 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -1217,8 +1217,25 @@ def _adorn_subplots(self):
if self.title:
if self.subplots:
- self.fig.suptitle(self.title)
+ if is_list_like(self.title):
+ if len(self.title) != self.nseries:
+ msg = ('The length of `title` must equal the number '
+ 'of columns if using `title` of type `list` '
+ 'and `subplots=True`.\n'
+ 'length of title = {}\n'
+ 'number of columns = {}').format(
+ len(self.title), self.nseries)
+ raise ValueError(msg)
+
+ for (ax, title) in zip(self.axes, self.title):
+ ax.set_title(title)
+ else:
+ self.fig.suptitle(self.title)
else:
+ if is_list_like(self.title):
+ msg = ('Using `title` of type `list` is not supported '
+ 'unless `subplots=True` is passed')
+ raise ValueError(msg)
self.axes[0].set_title(self.title)
def _apply_axis_properties(self, axis, rot=None, fontsize=None):
@@ -2555,8 +2572,10 @@ def _plot(data, x=None, y=None, subplots=False,
figsize : a tuple (width, height) in inches
use_index : boolean, default True
Use index as ticks for x axis
- title : string
- Title to use for the plot
+ title : string or list
+ Title to use for the plot. If a string is passed, print the string at
+ the top of the figure. If a list is passed and `subplots` is True,
+ print each item in the list above the corresponding subplot.
grid : boolean, default None (matlab style default)
Axis grid lines
legend : False/True/'reverse'
| - [ ] tests added / passed
- [ ] passes ``git diff upstream/master | flake8 --diff``
- [ ] whatsnew entry
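The validation added in `_adorn_subplots` can be sketched standalone; `validate_subplot_titles` is an illustrative name, not pandas API:

```python
def validate_subplot_titles(title, nseries, subplots):
    """Mirror of the title/subplots checks added in _adorn_subplots (sketch)."""
    if isinstance(title, (list, tuple)):
        if not subplots:
            raise ValueError("Using `title` of type `list` is not supported "
                             "unless `subplots=True` is passed")
        if len(title) != nseries:
            raise ValueError("The length of `title` must equal the number of "
                             "columns if using `title` of type `list` and "
                             "`subplots=True`.")
    return title
```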
| https://api.github.com/repos/pandas-dev/pandas/pulls/14753 | 2016-11-26T20:18:16Z | 2016-12-09T08:32:33Z | 2016-12-09T08:32:33Z | 2016-12-09T08:32:47Z |
TST: add test to confirm GH14606 (specify category dtype for empty) | diff --git a/pandas/io/tests/parser/dtypes.py b/pandas/io/tests/parser/dtypes.py
index 18c37b31f6480..b9ab79c3b9d54 100644
--- a/pandas/io/tests/parser/dtypes.py
+++ b/pandas/io/tests/parser/dtypes.py
@@ -241,6 +241,9 @@ def test_empty_dtype(self):
result = self.read_csv(StringIO(data), header=0,
dtype='category')
tm.assert_frame_equal(result, expected)
+ result = self.read_csv(StringIO(data), header=0,
+ dtype={'a': 'category', 'b': 'category'})
+ tm.assert_frame_equal(result, expected)
expected = pd.DataFrame(columns=['a', 'b'], dtype='datetime64[ns]')
result = self.read_csv(StringIO(data), header=0,
| Closes #14606
Issue #14606 was fixed by PR #14717; this adds one more specific test to confirm it. | https://api.github.com/repos/pandas-dev/pandas/pulls/14752 | 2016-11-26T11:41:54Z | 2016-12-10T11:02:54Z | 2016-12-10T11:02:54Z | 2016-12-10T11:02:54Z |
BUG: Improve error message for skipfooter malformed rows in Python engine | diff --git a/doc/source/whatsnew/v0.19.2.txt b/doc/source/whatsnew/v0.19.2.txt
index d2394ff25ddd4..6ee6271929008 100644
--- a/doc/source/whatsnew/v0.19.2.txt
+++ b/doc/source/whatsnew/v0.19.2.txt
@@ -32,6 +32,7 @@ Bug Fixes
- Bug in ``pd.read_csv`` where reading files fails, if the number of headers is equal to the number of lines in the file (:issue:`14515`)
- Bug in ``pd.read_csv`` for the Python engine in which an unhelpful error message was being raised when multi-char delimiters were not being respected with quotes (:issue:`14582`)
- Fix bugs (:issue:`14734`, :issue:`13654`) in ``pd.read_sas`` and ``pandas.io.sas.sas7bdat.SAS7BDATReader`` that caused problems when reading a SAS file incrementally.
+- Bug in ``pd.read_csv`` for the Python engine in which an unhelpful error message was being raised when ``skipfooter`` was not being respected by Python's CSV library (:issue:`13879`)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 94eb015701004..580a3398bb66a 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -2411,14 +2411,23 @@ def _next_line(self):
try:
orig_line = next(self.data)
except csv.Error as e:
+ msg = str(e)
+
if 'NULL byte' in str(e):
- raise csv.Error(
- 'NULL byte detected. This byte '
- 'cannot be processed in Python\'s '
- 'native csv library at the moment, '
- 'so please pass in engine=\'c\' instead.')
- else:
- raise
+ msg = ('NULL byte detected. This byte '
+ 'cannot be processed in Python\'s '
+ 'native csv library at the moment, '
+ 'so please pass in engine=\'c\' instead')
+
+ if self.skipfooter > 0:
+ reason = ('Error could possibly be due to '
+ 'parsing errors in the skipped footer rows '
+ '(the skipfooter keyword is only applied '
+ 'after Python\'s csv library has parsed '
+ 'all rows).')
+ msg += '. ' + reason
+
+ raise csv.Error(msg)
line = self._check_comments([orig_line])[0]
self.pos += 1
if (not self.skip_blank_lines and
diff --git a/pandas/io/tests/parser/python_parser_only.py b/pandas/io/tests/parser/python_parser_only.py
index 55801b4a9788e..ad62aaa275127 100644
--- a/pandas/io/tests/parser/python_parser_only.py
+++ b/pandas/io/tests/parser/python_parser_only.py
@@ -221,3 +221,18 @@ def test_multi_char_sep_quotes(self):
with tm.assertRaisesRegexp(ValueError, msg):
self.read_csv(StringIO(data), sep=',,',
quoting=csv.QUOTE_NONE)
+
+ def test_skipfooter_bad_row(self):
+ # see gh-13879
+
+ data = 'a,b,c\ncat,foo,bar\ndog,foo,"baz'
+ msg = 'parsing errors in the skipped footer rows'
+
+ with tm.assertRaisesRegexp(csv.Error, msg):
+ self.read_csv(StringIO(data), skipfooter=1)
+
+ # We expect no match, so there should be an assertion
+ # error out of the inner context manager.
+ with tm.assertRaises(AssertionError):
+ with tm.assertRaisesRegexp(csv.Error, msg):
+ self.read_csv(StringIO(data))
| Python's native CSV library does not respect the `skipfooter` parameter, so if one of those skipped
rows is malformed, it will still raise an error. Append useful information to the error message when that situation is potentially applicable.
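The root cause in one stdlib-only sketch (not the pandas code path): the csv library parses every row first, and `skipfooter` trims rows only afterwards, so a malformed footer row hits the parser before any slicing happens.

```python
import csv
import io

def read_with_skipfooter(text, skipfooter=0):
    # csv.reader consumes *all* rows, footer included, before we can drop any;
    # a malformed footer row would therefore raise inside this call.
    rows = list(csv.reader(io.StringIO(text)))
    return rows[:len(rows) - skipfooter] if skipfooter else rows
```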
Closes #13879. | https://api.github.com/repos/pandas-dev/pandas/pulls/14749 | 2016-11-26T00:53:25Z | 2016-11-29T21:18:28Z | 2016-11-29T21:18:28Z | 2016-11-29T21:24:00Z |
DOC: missing ref in timeseries.rst | diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 4132d25e9be48..854de443ac5ee 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1286,12 +1286,11 @@ secondly data into 5-minutely data). This is extremely common in, but not
limited to, financial applications.
``.resample()`` is a time-based groupby, followed by a reduction method on each of its groups.
+See some :ref:`cookbook examples <cookbook.resample>` for some advanced strategies
.. note::
- ``.resample()`` is similar to using a ``.rolling()`` operation with a time-based offset, see a discussion `here <stats.moments.ts-versus-resampling>`
-
-See some :ref:`cookbook examples <cookbook.resample>` for some advanced strategies
+ ``.resample()`` is similar to using a ``.rolling()`` operation with a time-based offset, see a discussion :ref:`here <stats.moments.ts-versus-resampling>`
.. ipython:: python
| https://api.github.com/repos/pandas-dev/pandas/pulls/14745 | 2016-11-25T17:00:46Z | 2016-11-25T17:00:50Z | 2016-11-25T17:00:50Z | 2016-11-25T20:39:47Z | |
Revert "TST/TEMP: fix pyqt to 4.x for plotting tests" | diff --git a/ci/requirements-2.7-64.run b/ci/requirements-2.7-64.run
index ce085a6ebf91c..42b5a789ae31a 100644
--- a/ci/requirements-2.7-64.run
+++ b/ci/requirements-2.7-64.run
@@ -16,4 +16,3 @@ bottleneck
html5lib
beautiful-soup
jinja2=2.8
-pyqt=4.11.4
diff --git a/ci/requirements-2.7.run b/ci/requirements-2.7.run
index eec7886fed38d..560d6571b8771 100644
--- a/ci/requirements-2.7.run
+++ b/ci/requirements-2.7.run
@@ -21,4 +21,3 @@ beautiful-soup=4.2.1
statsmodels
jinja2=2.8
xarray
-pyqt=4.11.4
diff --git a/ci/requirements-3.5-64.run b/ci/requirements-3.5-64.run
index 1dc88ed2c94af..96de21e3daa5e 100644
--- a/ci/requirements-3.5-64.run
+++ b/ci/requirements-3.5-64.run
@@ -10,4 +10,3 @@ numexpr
pytables
matplotlib
blosc
-pyqt=4.11.4
diff --git a/ci/requirements-3.5.run b/ci/requirements-3.5.run
index d9ce708585a33..333641caf26c4 100644
--- a/ci/requirements-3.5.run
+++ b/ci/requirements-3.5.run
@@ -18,7 +18,6 @@ pymysql
psycopg2
xarray
boto
-pyqt=4.11.4
# incompat with conda ATM
# beautiful-soup
| Reverts pydata/pandas#14240
For when https://github.com/ContinuumIO/anaconda-issues/issues/1068 is solved
| https://api.github.com/repos/pandas-dev/pandas/pulls/14744 | 2016-11-25T16:20:12Z | 2016-11-26T17:18:54Z | 2016-11-26T17:18:54Z | 2022-06-08T17:03:42Z |
SAS chunksize / iteration issues | diff --git a/doc/source/whatsnew/v0.19.2.txt b/doc/source/whatsnew/v0.19.2.txt
index d9aa92270669d..a5fca8f268d9c 100644
--- a/doc/source/whatsnew/v0.19.2.txt
+++ b/doc/source/whatsnew/v0.19.2.txt
@@ -31,6 +31,7 @@ Bug Fixes
- Allow ``nanoseconds`` in ``Timestamp.replace`` as a kwarg (:issue:`14621`)
- Bug in ``pd.read_csv`` where reading files fails, if the number of headers is equal to the number of lines in the file (:issue:`14515`)
- Bug in ``pd.read_csv`` for the Python engine in which an unhelpful error message was being raised when multi-char delimiters were not being respected with quotes (:issue:`14582`)
+- Fix bugs (:issue:`14734`, :issue:`13654`) in ``pd.read_sas`` and ``pandas.io.sas.sas7bdat.SAS7BDATReader`` that caused problems when reading a SAS file incrementally.
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 65b62601c7022..03e0cae6cc83f 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -83,4 +83,3 @@ Performance Improvements
Bug Fixes
~~~~~~~~~
-
diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py
index 2a82fd7a53222..91f417abc0502 100644
--- a/pandas/io/sas/sas7bdat.py
+++ b/pandas/io/sas/sas7bdat.py
@@ -225,6 +225,12 @@ def _get_properties(self):
self.os_name = self.os_name.decode(
self.encoding or self.default_encoding)
+ def __next__(self):
+ da = self.read(nrows=self.chunksize or 1)
+ if da is None:
+ raise StopIteration
+ return da
+
# Read a single float of the given width (4 or 8).
def _read_float(self, offset, width):
if width not in (4, 8):
@@ -591,6 +597,10 @@ def read(self, nrows=None):
if self._current_row_in_file_index >= self.row_count:
return None
+ m = self.row_count - self._current_row_in_file_index
+ if nrows > m:
+ nrows = m
+
nd = (self.column_types == b'd').sum()
ns = (self.column_types == b's').sum()
diff --git a/pandas/io/tests/sas/test_sas7bdat.py b/pandas/io/tests/sas/test_sas7bdat.py
index 06eb9774679b1..e20ea48247119 100644
--- a/pandas/io/tests/sas/test_sas7bdat.py
+++ b/pandas/io/tests/sas/test_sas7bdat.py
@@ -47,7 +47,9 @@ def test_from_buffer(self):
with open(fname, 'rb') as f:
byts = f.read()
buf = io.BytesIO(byts)
- df = pd.read_sas(buf, format="sas7bdat", encoding='utf-8')
+ rdr = pd.read_sas(buf, format="sas7bdat",
+ iterator=True, encoding='utf-8')
+ df = rdr.read()
tm.assert_frame_equal(df, df0, check_exact=False)
def test_from_iterator(self):
@@ -55,16 +57,35 @@ def test_from_iterator(self):
df0 = self.data[j]
for k in self.test_ix[j]:
fname = os.path.join(self.dirpath, "test%d.sas7bdat" % k)
- with open(fname, 'rb') as f:
- byts = f.read()
- buf = io.BytesIO(byts)
- rdr = pd.read_sas(buf, format="sas7bdat",
- iterator=True, encoding='utf-8')
+ rdr = pd.read_sas(fname, iterator=True, encoding='utf-8')
df = rdr.read(2)
tm.assert_frame_equal(df, df0.iloc[0:2, :])
df = rdr.read(3)
tm.assert_frame_equal(df, df0.iloc[2:5, :])
+ def test_iterator_loop(self):
+ # github #13654
+ for j in 0, 1:
+ for k in self.test_ix[j]:
+ for chunksize in 3, 5, 10, 11:
+ fname = os.path.join(self.dirpath, "test%d.sas7bdat" % k)
+ rdr = pd.read_sas(fname, chunksize=10, encoding='utf-8')
+ y = 0
+ for x in rdr:
+ y += x.shape[0]
+ self.assertTrue(y == rdr.row_count)
+
+ def test_iterator_read_too_much(self):
+ # github #14734
+ k = self.test_ix[0][0]
+ fname = os.path.join(self.dirpath, "test%d.sas7bdat" % k)
+ rdr = pd.read_sas(fname, format="sas7bdat",
+ iterator=True, encoding='utf-8')
+ d1 = rdr.read(rdr.row_count + 20)
+ rdr = pd.read_sas(fname, iterator=True, encoding="utf-8")
+ d2 = rdr.read(rdr.row_count + 20)
+ tm.assert_frame_equal(d1, d2)
+
def test_encoding_options():
dirpath = tm.get_data_path()
diff --git a/pandas/io/tests/sas/test_xport.py b/pandas/io/tests/sas/test_xport.py
index d0627a80f9604..fe2f7cb4bf4be 100644
--- a/pandas/io/tests/sas/test_xport.py
+++ b/pandas/io/tests/sas/test_xport.py
@@ -35,6 +35,13 @@ def test1_basic(self):
# Read full file
data = read_sas(self.file01, format="xport")
tm.assert_frame_equal(data, data_csv)
+ num_rows = data.shape[0]
+
+ # Test reading beyond end of file
+ reader = read_sas(self.file01, format="xport", iterator=True)
+ data = reader.read(num_rows + 100)
+ self.assertTrue(data.shape[0] == num_rows)
+ reader.close()
# Test incremental read with `read` method.
reader = read_sas(self.file01, format="xport", iterator=True)
@@ -48,6 +55,14 @@ def test1_basic(self):
reader.close()
tm.assert_frame_equal(data, data_csv.iloc[0:10, :])
+ # Test read in loop
+ m = 0
+ reader = read_sas(self.file01, format="xport", chunksize=100)
+ for x in reader:
+ m += x.shape[0]
+ reader.close()
+ self.assertTrue(m == num_rows)
+
# Read full file with `read_sas` method
data = read_sas(self.file01)
tm.assert_frame_equal(data, data_csv)
| closes #14734
closes #13654
- [X] tests added / passed
- [X] passes ``git diff upstream/master | flake8 --diff``
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/14743 | 2016-11-25T15:10:12Z | 2016-11-28T09:48:59Z | 2016-11-28T09:48:59Z | 2016-11-28T09:49:08Z |
MAINT: Cleanup pandas/src/parser | diff --git a/ci/lint.sh b/ci/lint.sh
index d6390a16b763e..7ab97bfc6d328 100755
--- a/ci/lint.sh
+++ b/ci/lint.sh
@@ -35,6 +35,18 @@ if [ "$LINT" ]; then
done
echo "Linting *.pxi.in DONE"
+ # readability/casting: Warnings about C casting instead of C++ casting
+ # runtime/int: Warnings about using C number types instead of C++ ones
+ # build/include_subdir: Warnings about prefacing included header files with directory
+ pip install cpplint
+
+ echo "Linting *.c and *.h"
+ cpplint --extensions=c,h --headers=h --filter=-readability/casting,-runtime/int,-build/include_subdir --recursive pandas/src/parser
+ if [ $? -ne "0" ]; then
+ RET=1
+ fi
+ echo "Linting *.c and *.h DONE"
+
echo "Check for invalid testing"
grep -r -E --include '*.py' --exclude nosetester.py --exclude testing.py '(numpy|np)\.testing' pandas
if [ $? = "0" ]; then
diff --git a/pandas/src/parser/io.c b/pandas/src/parser/io.c
index 566de72804968..562d6033ce3eb 100644
--- a/pandas/src/parser/io.c
+++ b/pandas/src/parser/io.c
@@ -1,12 +1,20 @@
-#include "io.h"
+/*
+Copyright (c) 2016, PyData Development Team
+All rights reserved.
+
+Distributed under the terms of the BSD Simplified License.
+
+The full license is in the LICENSE file, distributed with this software.
+*/
- /*
- On-disk FILE, uncompressed
- */
+#include "io.h"
+/*
+ On-disk FILE, uncompressed
+*/
void *new_file_source(char *fname, size_t buffer_size) {
- file_source *fs = (file_source *) malloc(sizeof(file_source));
+ file_source *fs = (file_source *)malloc(sizeof(file_source));
fs->fp = fopen(fname, "rb");
if (fs->fp == NULL) {
@@ -18,7 +26,7 @@ void *new_file_source(char *fname, size_t buffer_size) {
fs->initial_file_pos = ftell(fs->fp);
// Only allocate this heap memory if we are not memory-mapping the file
- fs->buffer = (char*) malloc((buffer_size + 1) * sizeof(char));
+ fs->buffer = (char *)malloc((buffer_size + 1) * sizeof(char));
if (fs->buffer == NULL) {
return NULL;
@@ -27,25 +35,11 @@ void *new_file_source(char *fname, size_t buffer_size) {
memset(fs->buffer, 0, buffer_size + 1);
fs->buffer[buffer_size] = '\0';
- return (void *) fs;
+ return (void *)fs;
}
-
-// XXX handle on systems without the capability
-
-
-/*
- * void *new_file_buffer(FILE *f, int buffer_size)
- *
- * Allocate a new file_buffer.
- * Returns NULL if the memory allocation fails or if the call to mmap fails.
- *
- * buffer_size is ignored.
- */
-
-
-void* new_rd_source(PyObject *obj) {
- rd_source *rds = (rd_source *) malloc(sizeof(rd_source));
+void *new_rd_source(PyObject *obj) {
+ rd_source *rds = (rd_source *)malloc(sizeof(rd_source));
/* hold on to this object */
Py_INCREF(obj);
@@ -53,7 +47,7 @@ void* new_rd_source(PyObject *obj) {
rds->buffer = NULL;
rds->position = 0;
- return (void*) rds;
+ return (void *)rds;
}
/*
@@ -63,9 +57,7 @@ void* new_rd_source(PyObject *obj) {
*/
int del_file_source(void *fs) {
- // fseek(FS(fs)->fp, FS(fs)->initial_file_pos, SEEK_SET);
- if (fs == NULL)
- return 0;
+ if (fs == NULL) return 0;
/* allocated on the heap */
free(FS(fs)->buffer);
@@ -89,13 +81,11 @@ int del_rd_source(void *rds) {
*/
-
-void* buffer_file_bytes(void *source, size_t nbytes,
- size_t *bytes_read, int *status) {
+void *buffer_file_bytes(void *source, size_t nbytes, size_t *bytes_read,
+ int *status) {
file_source *src = FS(source);
- *bytes_read = fread((void*) src->buffer, sizeof(char), nbytes,
- src->fp);
+ *bytes_read = fread((void *)src->buffer, sizeof(char), nbytes, src->fp);
if (*bytes_read == 0) {
*status = REACHED_EOF;
@@ -103,13 +93,11 @@ void* buffer_file_bytes(void *source, size_t nbytes,
*status = 0;
}
- return (void*) src->buffer;
-
+ return (void *)src->buffer;
}
-
-void* buffer_rd_bytes(void *source, size_t nbytes,
- size_t *bytes_read, int *status) {
+void *buffer_rd_bytes(void *source, size_t nbytes, size_t *bytes_read,
+ int *status) {
PyGILState_STATE state;
PyObject *result, *func, *args, *tmp;
@@ -125,21 +113,18 @@ void* buffer_rd_bytes(void *source, size_t nbytes,
args = Py_BuildValue("(i)", nbytes);
func = PyObject_GetAttrString(src->obj, "read");
- /* printf("%s\n", PyBytes_AsString(PyObject_Repr(func))); */
/* TODO: does this release the GIL? */
result = PyObject_CallObject(func, args);
Py_XDECREF(args);
Py_XDECREF(func);
- /* PyObject_Print(PyObject_Type(result), stdout, 0); */
if (result == NULL) {
PyGILState_Release(state);
*bytes_read = 0;
*status = CALLING_READ_FAILED;
return NULL;
- }
- else if (!PyBytes_Check(result)) {
+ } else if (!PyBytes_Check(result)) {
tmp = PyUnicode_AsUTF8String(result);
Py_XDECREF(result);
result = tmp;
@@ -154,8 +139,7 @@ void* buffer_rd_bytes(void *source, size_t nbytes,
/* hang on to the Python object */
src->buffer = result;
- retval = (void*) PyBytes_AsString(result);
-
+ retval = (void *)PyBytes_AsString(result);
PyGILState_Release(state);
@@ -165,21 +149,18 @@ void* buffer_rd_bytes(void *source, size_t nbytes,
return retval;
}
-
#ifdef HAVE_MMAP
-#include <sys/stat.h>
#include <sys/mman.h>
+#include <sys/stat.h>
-void *new_mmap(char *fname)
-{
+void *new_mmap(char *fname) {
struct stat buf;
int fd;
memory_map *mm;
- /* off_t position; */
off_t filesize;
- mm = (memory_map *) malloc(sizeof(memory_map));
+ mm = (memory_map *)malloc(sizeof(memory_map));
mm->fp = fopen(fname, "rb");
fd = fileno(mm->fp);
@@ -187,20 +168,19 @@ void *new_mmap(char *fname)
fprintf(stderr, "new_file_buffer: fstat() failed. errno =%d\n", errno);
return NULL;
}
- filesize = buf.st_size; /* XXX This might be 32 bits. */
-
+ filesize = buf.st_size; /* XXX This might be 32 bits. */
if (mm == NULL) {
/* XXX Eventually remove this print statement. */
fprintf(stderr, "new_file_buffer: malloc() failed.\n");
return NULL;
}
- mm->size = (off_t) filesize;
+ mm->size = (off_t)filesize;
mm->line_number = 0;
mm->fileno = fd;
mm->position = ftell(mm->fp);
- mm->last_pos = (off_t) filesize;
+ mm->last_pos = (off_t)filesize;
mm->memmap = mmap(NULL, filesize, PROT_READ, MAP_SHARED, fd, 0);
if (mm->memmap == NULL) {
@@ -210,30 +190,20 @@ void *new_mmap(char *fname)
mm = NULL;
}
- return (void*) mm;
+ return (void *)mm;
}
-
-int del_mmap(void *src)
-{
+int del_mmap(void *src) {
munmap(MM(src)->memmap, MM(src)->size);
fclose(MM(src)->fp);
-
- /*
- * With a memory mapped file, there is no need to do
- * anything if restore == RESTORE_INITIAL.
- */
- /* if (restore == RESTORE_FINAL) { */
- /* fseek(FB(fb)->file, FB(fb)->current_pos, SEEK_SET); */
- /* } */
free(src);
return 0;
}
-void* buffer_mmap_bytes(void *source, size_t nbytes,
- size_t *bytes_read, int *status) {
+void *buffer_mmap_bytes(void *source, size_t nbytes, size_t *bytes_read,
+ int *status) {
void *retval;
memory_map *src = MM(source);
@@ -264,19 +234,15 @@ void* buffer_mmap_bytes(void *source, size_t nbytes,
/* kludgy */
-void *new_mmap(char *fname) {
- return NULL;
-}
+void *new_mmap(char *fname) { return NULL; }
-int del_mmap(void *src) {
- return 0;
-}
+int del_mmap(void *src) { return 0; }
/* don't use this! */
-void* buffer_mmap_bytes(void *source, size_t nbytes,
- size_t *bytes_read, int *status) {
- return NULL;
+void *buffer_mmap_bytes(void *source, size_t nbytes, size_t *bytes_read,
+ int *status) {
+ return NULL;
}
#endif
diff --git a/pandas/src/parser/io.h b/pandas/src/parser/io.h
index 2ae72ff8a7fe0..5a0c2b2b5e4a4 100644
--- a/pandas/src/parser/io.h
+++ b/pandas/src/parser/io.h
@@ -1,14 +1,23 @@
+/*
+Copyright (c) 2016, PyData Development Team
+All rights reserved.
+
+Distributed under the terms of the BSD Simplified License.
+
+The full license is in the LICENSE file, distributed with this software.
+*/
+
+#ifndef PANDAS_SRC_PARSER_IO_H_
+#define PANDAS_SRC_PARSER_IO_H_
+
#include "Python.h"
#include "tokenizer.h"
-
typedef struct _file_source {
/* The file being read. */
FILE *fp;
char *buffer;
- /* Size of the file, in bytes. */
- /* off_t size; */
/* file position when the file_buffer was created. */
off_t initial_file_pos;
@@ -16,15 +25,9 @@ typedef struct _file_source {
/* Offset in the file of the data currently in the buffer. */
off_t buffer_file_pos;
- /* Actual number of bytes in the current buffer. (Can be less than buffer_size.) */
+ /* Actual number of bytes in the current buffer. (Can be less than
+ * buffer_size.) */
off_t last_pos;
-
- /* Size (in bytes) of the buffer. */
- // off_t buffer_size;
-
- /* Pointer to the buffer. */
- // char *buffer;
-
} file_source;
#define FS(source) ((file_source *)source)
@@ -34,7 +37,6 @@ typedef struct _file_source {
#endif
typedef struct _memory_map {
-
FILE *fp;
/* Size of the file, in bytes. */
@@ -49,22 +51,20 @@ typedef struct _memory_map {
off_t position;
off_t last_pos;
char *memmap;
-
} memory_map;
-#define MM(src) ((memory_map*) src)
+#define MM(src) ((memory_map *)src)
void *new_mmap(char *fname);
int del_mmap(void *src);
-void* buffer_mmap_bytes(void *source, size_t nbytes,
- size_t *bytes_read, int *status);
-
+void *buffer_mmap_bytes(void *source, size_t nbytes, size_t *bytes_read,
+ int *status);
typedef struct _rd_source {
- PyObject* obj;
- PyObject* buffer;
+ PyObject *obj;
+ PyObject *buffer;
size_t position;
} rd_source;
@@ -77,9 +77,10 @@ void *new_rd_source(PyObject *obj);
int del_file_source(void *src);
int del_rd_source(void *src);
-void* buffer_file_bytes(void *source, size_t nbytes,
- size_t *bytes_read, int *status);
+void *buffer_file_bytes(void *source, size_t nbytes, size_t *bytes_read,
+ int *status);
-void* buffer_rd_bytes(void *source, size_t nbytes,
- size_t *bytes_read, int *status);
+void *buffer_rd_bytes(void *source, size_t nbytes, size_t *bytes_read,
+ int *status);
+#endif // PANDAS_SRC_PARSER_IO_H_
diff --git a/pandas/src/parser/tokenizer.c b/pandas/src/parser/tokenizer.c
index 450abcf6c325c..1ea62d66345bd 100644
--- a/pandas/src/parser/tokenizer.c
+++ b/pandas/src/parser/tokenizer.c
@@ -9,61 +9,33 @@ See LICENSE for the license
*/
- /*
- Low-level ascii-file processing for pandas. Combines some elements from
- Python's built-in csv module and Warren Weckesser's textreader project on
- GitHub. See Python Software Foundation License and BSD licenses for these.
+/*
- */
+Low-level ascii-file processing for pandas. Combines some elements from
+Python's built-in csv module and Warren Weckesser's textreader project on
+GitHub. See Python Software Foundation License and BSD licenses for these.
+*/
#include "tokenizer.h"
#include <ctype.h>
-#include <math.h>
#include <float.h>
-
-
-//#define READ_ERROR_OUT_OF_MEMORY 1
-
-
-/*
-* restore:
-* RESTORE_NOT (0):
-* Free memory, but leave the file position wherever it
-* happend to be.
-* RESTORE_INITIAL (1):
-* Restore the file position to the location at which
-* the file_buffer was created.
-* RESTORE_FINAL (2):
-* Put the file position at the next byte after the
-* data read from the file_buffer.
-*
-#define RESTORE_NOT 0
-#define RESTORE_INITIAL 1
-#define RESTORE_FINAL 2
-*/
+#include <math.h>
static void *safe_realloc(void *buffer, size_t size) {
void *result;
- // OS X is weird
+ // OSX is weird.
// http://stackoverflow.com/questions/9560609/
// different-realloc-behaviour-in-linux-and-osx
result = realloc(buffer, size);
- TRACE(("safe_realloc: buffer = %p, size = %zu, result = %p\n", buffer, size, result))
+ TRACE(("safe_realloc: buffer = %p, size = %zu, result = %p\n", buffer, size,
+ result))
-/* if (result != NULL) {
- // errno gets set to 12 on my OS Xmachine in some cases even when the
- // realloc succeeds. annoying
- errno = 0;
- } else {
- return buffer;
- }*/
return result;
}
-
void coliter_setup(coliter_t *self, parser_t *parser, int i, int start) {
// column i, starting at 0
self->words = parser->words;
@@ -73,7 +45,7 @@ void coliter_setup(coliter_t *self, parser_t *parser, int i, int start) {
coliter_t *coliter_new(parser_t *self, int i) {
// column i, starting at 0
- coliter_t *iter = (coliter_t*) malloc(sizeof(coliter_t));
+ coliter_t *iter = (coliter_t *)malloc(sizeof(coliter_t));
if (NULL == iter) {
return NULL;
@@ -83,36 +55,28 @@ coliter_t *coliter_new(parser_t *self, int i) {
return iter;
}
-
- /* int64_t str_to_int64(const char *p_item, int64_t int_min, int64_t int_max, int *error); */
- /* uint64_t str_to_uint64(const char *p_item, uint64_t uint_max, int *error); */
-
-
-static void free_if_not_null(void **ptr) {
+static void free_if_not_null(void **ptr) {
TRACE(("free_if_not_null %p\n", *ptr))
if (*ptr != NULL) {
free(*ptr);
*ptr = NULL;
}
- }
-
-
-
- /*
+}
- Parser / tokenizer
+/*
- */
+ Parser / tokenizer
+*/
-static void *grow_buffer(void *buffer, int length, int *capacity,
- int space, int elsize, int *error) {
+static void *grow_buffer(void *buffer, int length, int *capacity, int space,
+ int elsize, int *error) {
int cap = *capacity;
void *newbuffer = buffer;
// Can we fit potentially nbytes tokens (+ null terminators) in the stream?
- while ( (length + space >= cap) && (newbuffer != NULL) ){
- cap = cap? cap << 1 : 2;
+ while ((length + space >= cap) && (newbuffer != NULL)) {
+ cap = cap ? cap << 1 : 2;
buffer = newbuffer;
newbuffer = safe_realloc(newbuffer, elsize * cap);
}
@@ -122,15 +86,14 @@ static void *grow_buffer(void *buffer, int length, int *capacity,
// and return the last good realloc'd buffer so it can be freed
*error = errno;
newbuffer = buffer;
- } else {
+ } else {
// realloc worked, update *capacity and set *error to 0
// sigh, multiple return values
*capacity = cap;
*error = 0;
}
return newbuffer;
- }
-
+}
void parser_set_default_options(parser_t *self) {
self->decimal = '.';
@@ -139,7 +102,7 @@ void parser_set_default_options(parser_t *self) {
// For tokenization
self->state = START_RECORD;
- self->delimiter = ','; // XXX
+ self->delimiter = ','; // XXX
self->delim_whitespace = 0;
self->doublequote = 0;
@@ -161,17 +124,13 @@ void parser_set_default_options(parser_t *self) {
self->thousands = '\0';
self->skipset = NULL;
- self-> skip_first_N_rows = -1;
+ self->skip_first_N_rows = -1;
self->skip_footer = 0;
}
-int get_parser_memory_footprint(parser_t *self) {
- return 0;
-}
+int get_parser_memory_footprint(parser_t *self) { return 0; }
-parser_t* parser_new() {
- return (parser_t*) calloc(1, sizeof(parser_t));
-}
+parser_t *parser_new() { return (parser_t *)calloc(1, sizeof(parser_t)); }
int parser_clear_data_buffers(parser_t *self) {
free_if_not_null((void *)&self->stream);
@@ -183,14 +142,14 @@ int parser_clear_data_buffers(parser_t *self) {
}
int parser_cleanup(parser_t *self) {
- int status = 0;
+ int status = 0;
// XXX where to put this
- free_if_not_null((void *) &self->error_msg);
- free_if_not_null((void *) &self->warn_msg);
+ free_if_not_null((void *)&self->error_msg);
+ free_if_not_null((void *)&self->warn_msg);
if (self->skipset != NULL) {
- kh_destroy_int64((kh_int64_t*) self->skipset);
+ kh_destroy_int64((kh_int64_t *)self->skipset);
self->skipset = NULL;
}
@@ -207,8 +166,6 @@ int parser_cleanup(parser_t *self) {
return status;
}
-
-
int parser_init(parser_t *self) {
int sz;
@@ -225,7 +182,7 @@ int parser_init(parser_t *self) {
self->warn_msg = NULL;
// token stream
- self->stream = (char*) malloc(STREAM_INIT_SIZE * sizeof(char));
+ self->stream = (char *)malloc(STREAM_INIT_SIZE * sizeof(char));
if (self->stream == NULL) {
parser_cleanup(self);
return PARSER_OUT_OF_MEMORY;
@@ -235,16 +192,16 @@ int parser_init(parser_t *self) {
// word pointers and metadata
sz = STREAM_INIT_SIZE / 10;
- sz = sz? sz : 1;
- self->words = (char**) malloc(sz * sizeof(char*));
- self->word_starts = (int*) malloc(sz * sizeof(int));
+ sz = sz ? sz : 1;
+ self->words = (char **)malloc(sz * sizeof(char *));
+ self->word_starts = (int *)malloc(sz * sizeof(int));
self->words_cap = sz;
self->words_len = 0;
// line pointers and metadata
- self->line_start = (int*) malloc(sz * sizeof(int));
+ self->line_start = (int *)malloc(sz * sizeof(int));
- self->line_fields = (int*) malloc(sz * sizeof(int));
+ self->line_fields = (int *)malloc(sz * sizeof(int));
self->lines_cap = sz;
self->lines = 0;
@@ -253,7 +210,6 @@ int parser_init(parser_t *self) {
if (self->stream == NULL || self->words == NULL ||
self->word_starts == NULL || self->line_start == NULL ||
self->line_fields == NULL) {
-
parser_cleanup(self);
return PARSER_OUT_OF_MEMORY;
@@ -279,7 +235,6 @@ int parser_init(parser_t *self) {
return 0;
}
-
void parser_free(parser_t *self) {
// opposite of parser_init
parser_cleanup(self);
@@ -292,20 +247,21 @@ static int make_stream_space(parser_t *self, size_t nbytes) {
// Can we fit potentially nbytes tokens (+ null terminators) in the stream?
- /* TRACE(("maybe growing buffers\n")); */
-
/*
TOKEN STREAM
*/
- orig_ptr = (void *) self->stream;
- TRACE(("\n\nmake_stream_space: nbytes = %zu. grow_buffer(self->stream...)\n", nbytes))
- self->stream = (char*) grow_buffer((void *) self->stream,
- self->stream_len,
- &self->stream_cap, nbytes * 2,
- sizeof(char), &status);
- TRACE(("make_stream_space: self->stream=%p, self->stream_len = %zu, self->stream_cap=%zu, status=%zu\n",
- self->stream, self->stream_len, self->stream_cap, status))
+ orig_ptr = (void *)self->stream;
+ TRACE(
+ ("\n\nmake_stream_space: nbytes = %zu. grow_buffer(self->stream...)\n",
+ nbytes))
+ self->stream = (char *)grow_buffer((void *)self->stream, self->stream_len,
+ &self->stream_cap, nbytes * 2,
+ sizeof(char), &status);
+ TRACE(
+ ("make_stream_space: self->stream=%p, self->stream_len = %zu, "
+ "self->stream_cap=%zu, status=%zu\n",
+ self->stream, self->stream_len, self->stream_cap, status))
if (status != 0) {
return PARSER_OUT_OF_MEMORY;
@@ -313,95 +269,86 @@ static int make_stream_space(parser_t *self, size_t nbytes) {
// realloc sets errno when moving buffer?
if (self->stream != orig_ptr) {
- // uff
- /* TRACE(("Moving word pointers\n")) */
-
self->pword_start = self->stream + self->word_start;
- for (i = 0; i < self->words_len; ++i)
- {
+ for (i = 0; i < self->words_len; ++i) {
self->words[i] = self->stream + self->word_starts[i];
}
}
-
/*
WORD VECTORS
*/
cap = self->words_cap;
- self->words = (char**) grow_buffer((void *) self->words,
- self->words_len,
- &self->words_cap, nbytes,
- sizeof(char*), &status);
- TRACE(("make_stream_space: grow_buffer(self->self->words, %zu, %zu, %zu, %d)\n",
- self->words_len, self->words_cap, nbytes, status))
+ self->words =
+ (char **)grow_buffer((void *)self->words, self->words_len,
+ &self->words_cap, nbytes, sizeof(char *), &status);
+ TRACE(
+ ("make_stream_space: grow_buffer(self->self->words, %zu, %zu, %zu, "
+ "%d)\n",
+ self->words_len, self->words_cap, nbytes, status))
if (status != 0) {
return PARSER_OUT_OF_MEMORY;
}
-
// realloc took place
if (cap != self->words_cap) {
- TRACE(("make_stream_space: cap != self->words_cap, nbytes = %d, self->words_cap=%d\n", nbytes, self->words_cap))
- newptr = safe_realloc((void *) self->word_starts, sizeof(int) * self->words_cap);
+ TRACE(
+ ("make_stream_space: cap != self->words_cap, nbytes = %d, "
+ "self->words_cap=%d\n",
+ nbytes, self->words_cap))
+ newptr = safe_realloc((void *)self->word_starts,
+ sizeof(int) * self->words_cap);
if (newptr == NULL) {
return PARSER_OUT_OF_MEMORY;
} else {
- self->word_starts = (int*) newptr;
+ self->word_starts = (int *)newptr;
}
}
-
/*
LINE VECTORS
*/
- /*
- printf("Line_start: ");
-
- for (j = 0; j < self->lines + 1; ++j) {
- printf("%d ", self->line_fields[j]);
- }
- printf("\n");
-
- printf("lines_cap: %d\n", self->lines_cap);
- */
cap = self->lines_cap;
- self->line_start = (int*) grow_buffer((void *) self->line_start,
- self->lines + 1,
- &self->lines_cap, nbytes,
- sizeof(int), &status);
- TRACE(("make_stream_space: grow_buffer(self->line_start, %zu, %zu, %zu, %d)\n",
- self->lines + 1, self->lines_cap, nbytes, status))
+ self->line_start =
+ (int *)grow_buffer((void *)self->line_start, self->lines + 1,
+ &self->lines_cap, nbytes, sizeof(int), &status);
+ TRACE((
+ "make_stream_space: grow_buffer(self->line_start, %zu, %zu, %zu, %d)\n",
+ self->lines + 1, self->lines_cap, nbytes, status))
if (status != 0) {
return PARSER_OUT_OF_MEMORY;
}
// realloc took place
if (cap != self->lines_cap) {
- TRACE(("make_stream_space: cap != self->lines_cap, nbytes = %d\n", nbytes))
- newptr = safe_realloc((void *) self->line_fields, sizeof(int) * self->lines_cap);
+ TRACE(("make_stream_space: cap != self->lines_cap, nbytes = %d\n",
+ nbytes))
+ newptr = safe_realloc((void *)self->line_fields,
+ sizeof(int) * self->lines_cap);
if (newptr == NULL) {
return PARSER_OUT_OF_MEMORY;
} else {
- self->line_fields = (int*) newptr;
+ self->line_fields = (int *)newptr;
}
}
- /* TRACE(("finished growing buffers\n")); */
-
return 0;
}
-
static int push_char(parser_t *self, char c) {
- /* TRACE(("pushing %c \n", c)) */
- TRACE(("push_char: self->stream[%zu] = %x, stream_cap=%zu\n", self->stream_len+1, c, self->stream_cap))
+ TRACE(("push_char: self->stream[%zu] = %x, stream_cap=%zu\n",
+ self->stream_len + 1, c, self->stream_cap))
if (self->stream_len >= self->stream_cap) {
- TRACE(("push_char: ERROR!!! self->stream_len(%d) >= self->stream_cap(%d)\n",
- self->stream_len, self->stream_cap))
- self->error_msg = (char*) malloc(64);
- sprintf(self->error_msg, "Buffer overflow caught - possible malformed input file.\n");
+ TRACE(
+ ("push_char: ERROR!!! self->stream_len(%d) >= "
+ "self->stream_cap(%d)\n",
+ self->stream_len, self->stream_cap))
+ int bufsize = 100;
+ self->error_msg = (char *)malloc(bufsize);
+ snprintf(self->error_msg, bufsize,
+ "Buffer overflow caught - possible malformed input file.\n");
return PARSER_OUT_OF_MEMORY;
}
self->stream[self->stream_len++] = c;
@@ -410,11 +357,15 @@ static int push_char(parser_t *self, char c) {
int P_INLINE end_field(parser_t *self) {
// XXX cruft
-// self->numeric_field = 0;
if (self->words_len >= self->words_cap) {
- TRACE(("end_field: ERROR!!! self->words_len(%zu) >= self->words_cap(%zu)\n", self->words_len, self->words_cap))
- self->error_msg = (char*) malloc(64);
- sprintf(self->error_msg, "Buffer overflow caught - possible malformed input file.\n");
+ TRACE(
+ ("end_field: ERROR!!! self->words_len(%zu) >= "
+ "self->words_cap(%zu)\n",
+ self->words_len, self->words_cap))
+ int bufsize = 100;
+ self->error_msg = (char *)malloc(bufsize);
+ snprintf(self->error_msg, bufsize,
+ "Buffer overflow caught - possible malformed input file.\n");
return PARSER_OUT_OF_MEMORY;
}
@@ -426,8 +377,8 @@ int P_INLINE end_field(parser_t *self) {
TRACE(("end_field: Char diff: %d\n", self->pword_start - self->words[0]));
- TRACE(("end_field: Saw word %s at: %d. Total: %d\n",
- self->pword_start, self->word_start, self->words_len + 1))
+ TRACE(("end_field: Saw word %s at: %d. Total: %d\n", self->pword_start,
+ self->word_start, self->words_len + 1))
self->word_starts[self->words_len] = self->word_start;
self->words_len++;
@@ -442,29 +393,29 @@ int P_INLINE end_field(parser_t *self) {
return 0;
}
-
static void append_warning(parser_t *self, const char *msg) {
int ex_length;
int length = strlen(msg);
void *newptr;
if (self->warn_msg == NULL) {
- self->warn_msg = (char*) malloc(length + 1);
- strcpy(self->warn_msg, msg);
+ self->warn_msg = (char *)malloc(length + 1);
+ strncpy(self->warn_msg, msg, strlen(msg) + 1);
} else {
ex_length = strlen(self->warn_msg);
newptr = safe_realloc(self->warn_msg, ex_length + length + 1);
if (newptr != NULL) {
- self->warn_msg = (char*) newptr;
- strcpy(self->warn_msg + ex_length, msg);
+ self->warn_msg = (char *)newptr;
+ strncpy(self->warn_msg + ex_length, msg, strlen(msg) + 1);
}
}
}
static int end_line(parser_t *self) {
+ char *msg;
int fields;
int ex_fields = self->expected_fields;
- char *msg;
+ int bufsize = 100; // for error or warning messages
fields = self->line_fields[self->lines];
@@ -478,11 +429,10 @@ static int end_line(parser_t *self) {
}
}
- if (self->state == START_FIELD_IN_SKIP_LINE || \
- self->state == IN_FIELD_IN_SKIP_LINE || \
- self->state == IN_QUOTED_FIELD_IN_SKIP_LINE || \
- self->state == QUOTE_IN_QUOTED_FIELD_IN_SKIP_LINE
- ) {
+ if (self->state == START_FIELD_IN_SKIP_LINE ||
+ self->state == IN_FIELD_IN_SKIP_LINE ||
+ self->state == IN_QUOTED_FIELD_IN_SKIP_LINE ||
+ self->state == QUOTE_IN_QUOTED_FIELD_IN_SKIP_LINE) {
TRACE(("end_line: Skipping row %d\n", self->file_lines));
// increment file line count
self->file_lines++;
@@ -495,9 +445,8 @@ static int end_line(parser_t *self) {
return 0;
}
- if (!(self->lines <= self->header_end + 1)
- && (self->expected_fields < 0 && fields > ex_fields)
- && !(self->usecols)) {
+ if (!(self->lines <= self->header_end + 1) &&
+ (self->expected_fields < 0 && fields > ex_fields) && !(self->usecols)) {
// increment file line count
self->file_lines++;
@@ -509,8 +458,9 @@ static int end_line(parser_t *self) {
// file_lines is now the actual file line number (starting at 1)
if (self->error_bad_lines) {
- self->error_msg = (char*) malloc(100);
- sprintf(self->error_msg, "Expected %d fields in line %d, saw %d\n",
+ self->error_msg = (char *)malloc(bufsize);
+ snprintf(self->error_msg, bufsize,
+ "Expected %d fields in line %d, saw %d\n",
ex_fields, self->file_lines, fields);
TRACE(("Error at line %d, %d fields\n", self->file_lines, fields));
@@ -520,9 +470,10 @@ static int end_line(parser_t *self) {
// simply skip bad lines
if (self->warn_bad_lines) {
// pass up error message
- msg = (char*) malloc(100);
- sprintf(msg, "Skipping line %d: expected %d fields, saw %d\n",
- self->file_lines, ex_fields, fields);
+ msg = (char *)malloc(bufsize);
+ snprintf(msg, bufsize,
+ "Skipping line %d: expected %d fields, saw %d\n",
+ self->file_lines, ex_fields, fields);
append_warning(self, msg);
free(msg);
}
@@ -530,14 +481,13 @@ static int end_line(parser_t *self) {
} else {
// missing trailing delimiters
if ((self->lines >= self->header_end + 1) && fields < ex_fields) {
-
// might overrun the buffer when closing fields
if (make_stream_space(self, ex_fields - fields) < 0) {
self->error_msg = "out of memory";
return -1;
}
- while (fields < ex_fields){
+ while (fields < ex_fields) {
end_field(self);
fields++;
}
@@ -549,15 +499,21 @@ static int end_line(parser_t *self) {
// good line, set new start point
if (self->lines >= self->lines_cap) {
- TRACE(("end_line: ERROR!!! self->lines(%zu) >= self->lines_cap(%zu)\n", self->lines, self->lines_cap)) \
- self->error_msg = (char*) malloc(100); \
- sprintf(self->error_msg, "Buffer overflow caught - possible malformed input file.\n"); \
- return PARSER_OUT_OF_MEMORY; \
+ TRACE((
+ "end_line: ERROR!!! self->lines(%zu) >= self->lines_cap(%zu)\n",
+ self->lines, self->lines_cap))
+ int bufsize = 100;
+ self->error_msg = (char *)malloc(bufsize);
+ snprintf(self->error_msg, bufsize,
+ "Buffer overflow caught - "
+ "possible malformed input file.\n");
+ return PARSER_OUT_OF_MEMORY;
}
- self->line_start[self->lines] = (self->line_start[self->lines - 1] +
- fields);
+ self->line_start[self->lines] =
+ (self->line_start[self->lines - 1] + fields);
- TRACE(("end_line: new line start: %d\n", self->line_start[self->lines]));
+ TRACE(
+ ("end_line: new line start: %d\n", self->line_start[self->lines]));
// new line start with 0 fields
self->line_fields[self->lines] = 0;
@@ -574,10 +530,10 @@ int parser_add_skiprow(parser_t *self, int64_t row) {
int ret = 0;
if (self->skipset == NULL) {
- self->skipset = (void*) kh_init_int64();
+ self->skipset = (void *)kh_init_int64();
}
- set = (kh_int64_t*) self->skipset;
+ set = (kh_int64_t *)self->skipset;
k = kh_put_int64(set, row, &ret);
set->keys[k] = row;
@@ -601,18 +557,21 @@ static int parser_buffer_bytes(parser_t *self, size_t nbytes) {
status = 0;
self->datapos = 0;
self->data = self->cb_io(self->source, nbytes, &bytes_read, &status);
- TRACE(("parser_buffer_bytes self->cb_io: nbytes=%zu, datalen: %d, status=%d\n",
- nbytes, bytes_read, status));
+ TRACE((
+ "parser_buffer_bytes self->cb_io: nbytes=%zu, datalen: %d, status=%d\n",
+ nbytes, bytes_read, status));
self->datalen = bytes_read;
if (status != REACHED_EOF && self->data == NULL) {
- self->error_msg = (char*) malloc(200);
+ int bufsize = 200;
+ self->error_msg = (char *)malloc(bufsize);
if (status == CALLING_READ_FAILED) {
- sprintf(self->error_msg, ("Calling read(nbytes) on source failed. "
- "Try engine='python'."));
+ snprintf(self->error_msg, bufsize,
+ "Calling read(nbytes) on source failed. "
+ "Try engine='python'.");
} else {
- sprintf(self->error_msg, "Unknown error in IO callback");
+ snprintf(self->error_msg, bufsize, "Unknown error in IO callback");
}
return -1;
}
@@ -622,93 +581,96 @@ static int parser_buffer_bytes(parser_t *self, size_t nbytes) {
return status;
}
-
/*
Tokenization macros and state machine code
*/
-// printf("pushing %c\n", c);
-
-#define PUSH_CHAR(c) \
- TRACE(("PUSH_CHAR: Pushing %c, slen= %d, stream_cap=%zu, stream_len=%zu\n", c, slen, self->stream_cap, self->stream_len)) \
- if (slen >= maxstreamsize) { \
- TRACE(("PUSH_CHAR: ERROR!!! slen(%d) >= maxstreamsize(%d)\n", slen, maxstreamsize)) \
- self->error_msg = (char*) malloc(100); \
- sprintf(self->error_msg, "Buffer overflow caught - possible malformed input file.\n"); \
- return PARSER_OUT_OF_MEMORY; \
- } \
- *stream++ = c; \
+#define PUSH_CHAR(c) \
+ TRACE( \
+ ("PUSH_CHAR: Pushing %c, slen= %d, stream_cap=%zu, stream_len=%zu\n", \
+ c, slen, self->stream_cap, self->stream_len)) \
+ if (slen >= maxstreamsize) { \
+ TRACE(("PUSH_CHAR: ERROR!!! slen(%d) >= maxstreamsize(%d)\n", slen, \
+ maxstreamsize)) \
+ int bufsize = 100; \
+ self->error_msg = (char *)malloc(bufsize); \
+ snprintf(self->error_msg, bufsize, \
+ "Buffer overflow caught - possible malformed input file.\n");\
+ return PARSER_OUT_OF_MEMORY; \
+ } \
+ *stream++ = c; \
slen++;
// This is a little bit of a hack but works for now
-#define END_FIELD() \
- self->stream_len = slen; \
- if (end_field(self) < 0) { \
- goto parsingerror; \
- } \
- stream = self->stream + self->stream_len; \
+#define END_FIELD() \
+ self->stream_len = slen; \
+ if (end_field(self) < 0) { \
+ goto parsingerror; \
+ } \
+ stream = self->stream + self->stream_len; \
slen = self->stream_len;
-#define END_LINE_STATE(STATE) \
- self->stream_len = slen; \
- if (end_line(self) < 0) { \
- goto parsingerror; \
- } \
- stream = self->stream + self->stream_len; \
- slen = self->stream_len; \
- self->state = STATE; \
- if (line_limit > 0 && self->lines == start_lines + line_limit) { \
- goto linelimit; \
- \
- }
-
-#define END_LINE_AND_FIELD_STATE(STATE) \
- self->stream_len = slen; \
- if (end_line(self) < 0) { \
- goto parsingerror; \
- } \
- if (end_field(self) < 0) { \
- goto parsingerror; \
- } \
- stream = self->stream + self->stream_len; \
- slen = self->stream_len; \
- self->state = STATE; \
- if (line_limit > 0 && self->lines == start_lines + line_limit) { \
- goto linelimit; \
- \
+#define END_LINE_STATE(STATE) \
+ self->stream_len = slen; \
+ if (end_line(self) < 0) { \
+ goto parsingerror; \
+ } \
+ stream = self->stream + self->stream_len; \
+ slen = self->stream_len; \
+ self->state = STATE; \
+ if (line_limit > 0 && self->lines == start_lines + line_limit) { \
+ goto linelimit; \
+ }
+
+#define END_LINE_AND_FIELD_STATE(STATE) \
+ self->stream_len = slen; \
+ if (end_line(self) < 0) { \
+ goto parsingerror; \
+ } \
+ if (end_field(self) < 0) { \
+ goto parsingerror; \
+ } \
+ stream = self->stream + self->stream_len; \
+ slen = self->stream_len; \
+ self->state = STATE; \
+ if (line_limit > 0 && self->lines == start_lines + line_limit) { \
+ goto linelimit; \
}
#define END_LINE() END_LINE_STATE(START_RECORD)
#define IS_WHITESPACE(c) ((c == ' ' || c == '\t'))
-#define IS_TERMINATOR(c) ((self->lineterminator == '\0' && c == '\n') || \
- (self->lineterminator != '\0' && \
- c == self->lineterminator))
+#define IS_TERMINATOR(c) \
+ ((self->lineterminator == '\0' && c == '\n') || \
+ (self->lineterminator != '\0' && c == self->lineterminator))
#define IS_QUOTE(c) ((c == self->quotechar && self->quoting != QUOTE_NONE))
// don't parse '\r' with a custom line terminator
#define IS_CARRIAGE(c) ((self->lineterminator == '\0' && c == '\r'))
-#define IS_COMMENT_CHAR(c) ((self->commentchar != '\0' && c == self->commentchar))
+#define IS_COMMENT_CHAR(c) \
+ ((self->commentchar != '\0' && c == self->commentchar))
#define IS_ESCAPE_CHAR(c) ((self->escapechar != '\0' && c == self->escapechar))
-#define IS_SKIPPABLE_SPACE(c) ((!self->delim_whitespace && c == ' ' && \
- self->skipinitialspace))
+#define IS_SKIPPABLE_SPACE(c) \
+ ((!self->delim_whitespace && c == ' ' && self->skipinitialspace))
// applied when in a field
-#define IS_DELIMITER(c) ((!self->delim_whitespace && c == self->delimiter) || \
- (self->delim_whitespace && IS_WHITESPACE(c)))
+#define IS_DELIMITER(c) \
+ ((!self->delim_whitespace && c == self->delimiter) || \
+ (self->delim_whitespace && IS_WHITESPACE(c)))
#define _TOKEN_CLEANUP() \
self->stream_len = slen; \
self->datapos = i; \
- TRACE(("_TOKEN_CLEANUP: datapos: %d, datalen: %d\n", self->datapos, self->datalen));
+ TRACE(("_TOKEN_CLEANUP: datapos: %d, datalen: %d\n", self->datapos, \
+ self->datalen));
#define CHECK_FOR_BOM() \
if (*buf == '\xef' && *(buf + 1) == '\xbb' && *(buf + 2) == '\xbf') { \
@@ -718,16 +680,14 @@ static int parser_buffer_bytes(parser_t *self, size_t nbytes) {
int skip_this_line(parser_t *self, int64_t rownum) {
if (self->skipset != NULL) {
- return ( kh_get_int64((kh_int64_t*) self->skipset, self->file_lines) !=
- ((kh_int64_t*)self->skipset)->n_buckets );
- }
- else {
- return ( rownum <= self->skip_first_N_rows );
+ return (kh_get_int64((kh_int64_t *)self->skipset, self->file_lines) !=
+ ((kh_int64_t *)self->skipset)->n_buckets);
+ } else {
+ return (rownum <= self->skip_first_N_rows);
}
}
-int tokenize_bytes(parser_t *self, size_t line_limit, int start_lines)
-{
+int tokenize_bytes(parser_t *self, size_t line_limit, int start_lines) {
int i, slen;
long maxstreamsize;
char c;
@@ -749,368 +709,364 @@ int tokenize_bytes(parser_t *self, size_t line_limit, int start_lines)
CHECK_FOR_BOM();
}
- for (i = self->datapos; i < self->datalen; ++i)
- {
+ for (i = self->datapos; i < self->datalen; ++i) {
// next character in file
c = *buf++;
- TRACE(("tokenize_bytes - Iter: %d Char: 0x%x Line %d field_count %d, state %d\n",
- i, c, self->file_lines + 1, self->line_fields[self->lines],
- self->state));
-
- switch(self->state) {
-
- case START_FIELD_IN_SKIP_LINE:
- if (IS_TERMINATOR(c)) {
- END_LINE();
- } else if (IS_CARRIAGE(c)) {
- self->file_lines++;
- self->state = EAT_CRNL_NOP;
- } else if (IS_QUOTE(c)) {
- self->state = IN_QUOTED_FIELD_IN_SKIP_LINE;
- } else if (IS_DELIMITER(c)) {
- // Do nothing, we're starting a new field again.
- } else {
- self->state = IN_FIELD_IN_SKIP_LINE;
- }
- break;
+ TRACE(
+ ("tokenize_bytes - Iter: %d Char: 0x%x Line %d field_count %d, "
+ "state %d\n",
+ i, c, self->file_lines + 1, self->line_fields[self->lines],
+ self->state));
- case IN_FIELD_IN_SKIP_LINE:
- if (IS_TERMINATOR(c)) {
- END_LINE();
- } else if (IS_CARRIAGE(c)) {
- self->file_lines++;
- self->state = EAT_CRNL_NOP;
- } else if (IS_DELIMITER(c)) {
- self->state = START_FIELD_IN_SKIP_LINE;
- }
- break;
-
- case IN_QUOTED_FIELD_IN_SKIP_LINE:
- if (IS_QUOTE(c)) {
- if (self->doublequote) {
- self->state = QUOTE_IN_QUOTED_FIELD_IN_SKIP_LINE;
+ switch (self->state) {
+ case START_FIELD_IN_SKIP_LINE:
+ if (IS_TERMINATOR(c)) {
+ END_LINE();
+ } else if (IS_CARRIAGE(c)) {
+ self->file_lines++;
+ self->state = EAT_CRNL_NOP;
+ } else if (IS_QUOTE(c)) {
+ self->state = IN_QUOTED_FIELD_IN_SKIP_LINE;
+ } else if (IS_DELIMITER(c)) {
+ // Do nothing, we're starting a new field again.
} else {
self->state = IN_FIELD_IN_SKIP_LINE;
}
- }
- break;
-
- case QUOTE_IN_QUOTED_FIELD_IN_SKIP_LINE:
- if (IS_QUOTE(c)) {
- self->state = IN_QUOTED_FIELD_IN_SKIP_LINE;
- } else if (IS_TERMINATOR(c)) {
- END_LINE();
- } else if (IS_CARRIAGE(c)) {
- self->file_lines++;
- self->state = EAT_CRNL_NOP;
- } else if (IS_DELIMITER(c)) {
- self->state = START_FIELD_IN_SKIP_LINE;
- } else {
- self->state = IN_FIELD_IN_SKIP_LINE;
- }
- break;
-
- case WHITESPACE_LINE:
- if (IS_TERMINATOR(c)) {
- self->file_lines++;
- self->state = START_RECORD;
- break;
- } else if (IS_CARRIAGE(c)) {
- self->file_lines++;
- self->state = EAT_CRNL_NOP;
break;
- } else if (!self->delim_whitespace) {
- if (IS_WHITESPACE(c) && c != self->delimiter) {
- ;
- } else { // backtrack
- // use i + 1 because buf has been incremented but not i
- do {
- --buf;
- --i;
- } while (i + 1 > self->datapos && !IS_TERMINATOR(*buf));
- // reached a newline rather than the beginning
- if (IS_TERMINATOR(*buf)) {
- ++buf; // move pointer to first char after newline
- ++i;
- }
- self->state = START_FIELD;
+ case IN_FIELD_IN_SKIP_LINE:
+ if (IS_TERMINATOR(c)) {
+ END_LINE();
+ } else if (IS_CARRIAGE(c)) {
+ self->file_lines++;
+ self->state = EAT_CRNL_NOP;
+ } else if (IS_DELIMITER(c)) {
+ self->state = START_FIELD_IN_SKIP_LINE;
}
break;
- }
- // fall through
- case EAT_WHITESPACE:
- if (IS_TERMINATOR(c)) {
- END_LINE();
- self->state = START_RECORD;
- break;
- } else if (IS_CARRIAGE(c)) {
- self->state = EAT_CRNL;
- break;
- } else if (!IS_WHITESPACE(c)) {
- self->state = START_FIELD;
- // fall through to subsequent state
- } else {
- // if whitespace char, keep slurping
+ case IN_QUOTED_FIELD_IN_SKIP_LINE:
+ if (IS_QUOTE(c)) {
+ if (self->doublequote) {
+ self->state = QUOTE_IN_QUOTED_FIELD_IN_SKIP_LINE;
+ } else {
+ self->state = IN_FIELD_IN_SKIP_LINE;
+ }
+ }
break;
- }
- case START_RECORD:
- // start of record
- if (skip_this_line(self, self->file_lines)) {
+ case QUOTE_IN_QUOTED_FIELD_IN_SKIP_LINE:
if (IS_QUOTE(c)) {
self->state = IN_QUOTED_FIELD_IN_SKIP_LINE;
+ } else if (IS_TERMINATOR(c)) {
+ END_LINE();
+ } else if (IS_CARRIAGE(c)) {
+ self->file_lines++;
+ self->state = EAT_CRNL_NOP;
+ } else if (IS_DELIMITER(c)) {
+ self->state = START_FIELD_IN_SKIP_LINE;
} else {
self->state = IN_FIELD_IN_SKIP_LINE;
-
- if (IS_TERMINATOR(c)) {
- END_LINE();
- }
}
break;
- } else if (IS_TERMINATOR(c)) {
- // \n\r possible?
- if (self->skip_empty_lines) {
+
+ case WHITESPACE_LINE:
+ if (IS_TERMINATOR(c)) {
self->file_lines++;
- } else {
- END_LINE();
- }
- break;
- } else if (IS_CARRIAGE(c)) {
- if (self->skip_empty_lines) {
+ self->state = START_RECORD;
+ break;
+ } else if (IS_CARRIAGE(c)) {
self->file_lines++;
self->state = EAT_CRNL_NOP;
- } else {
+ break;
+ } else if (!self->delim_whitespace) {
+ if (IS_WHITESPACE(c) && c != self->delimiter) {
+ } else { // backtrack
+ // use i + 1 because buf has been incremented but not i
+ do {
+ --buf;
+ --i;
+ } while (i + 1 > self->datapos && !IS_TERMINATOR(*buf));
+
+ // reached a newline rather than the beginning
+ if (IS_TERMINATOR(*buf)) {
+ ++buf; // move pointer to first char after newline
+ ++i;
+ }
+ self->state = START_FIELD;
+ }
+ break;
+ }
+ // fall through
+
+ case EAT_WHITESPACE:
+ if (IS_TERMINATOR(c)) {
+ END_LINE();
+ self->state = START_RECORD;
+ break;
+ } else if (IS_CARRIAGE(c)) {
self->state = EAT_CRNL;
+ break;
+ } else if (!IS_WHITESPACE(c)) {
+ self->state = START_FIELD;
+ // fall through to subsequent state
+ } else {
+ // if whitespace char, keep slurping
+ break;
}
- break;
- } else if (IS_COMMENT_CHAR(c)) {
- self->state = EAT_LINE_COMMENT;
- break;
- } else if (IS_WHITESPACE(c)) {
- if (self->delim_whitespace) {
+
+ case START_RECORD:
+ // start of record
+ if (skip_this_line(self, self->file_lines)) {
+ if (IS_QUOTE(c)) {
+ self->state = IN_QUOTED_FIELD_IN_SKIP_LINE;
+ } else {
+ self->state = IN_FIELD_IN_SKIP_LINE;
+
+ if (IS_TERMINATOR(c)) {
+ END_LINE();
+ }
+ }
+ break;
+ } else if (IS_TERMINATOR(c)) {
+ // \n\r possible?
if (self->skip_empty_lines) {
- self->state = WHITESPACE_LINE;
+ self->file_lines++;
} else {
- self->state = EAT_WHITESPACE;
+ END_LINE();
}
break;
- } else if (c != self->delimiter && self->skip_empty_lines) {
- self->state = WHITESPACE_LINE;
+ } else if (IS_CARRIAGE(c)) {
+ if (self->skip_empty_lines) {
+ self->file_lines++;
+ self->state = EAT_CRNL_NOP;
+ } else {
+ self->state = EAT_CRNL;
+ }
+ break;
+ } else if (IS_COMMENT_CHAR(c)) {
+ self->state = EAT_LINE_COMMENT;
break;
+ } else if (IS_WHITESPACE(c)) {
+ if (self->delim_whitespace) {
+ if (self->skip_empty_lines) {
+ self->state = WHITESPACE_LINE;
+ } else {
+ self->state = EAT_WHITESPACE;
+ }
+ break;
+ } else if (c != self->delimiter && self->skip_empty_lines) {
+ self->state = WHITESPACE_LINE;
+ break;
+ }
+ // fall through
}
- // fall through
- }
- // normal character - fall through
- // to handle as START_FIELD
- self->state = START_FIELD;
+ // normal character - fall through
+ // to handle as START_FIELD
+ self->state = START_FIELD;
- case START_FIELD:
- // expecting field
- if (IS_TERMINATOR(c)) {
- END_FIELD();
- END_LINE();
- } else if (IS_CARRIAGE(c)) {
- END_FIELD();
- self->state = EAT_CRNL;
- } else if (IS_QUOTE(c)) {
- // start quoted field
- self->state = IN_QUOTED_FIELD;
- } else if (IS_ESCAPE_CHAR(c)) {
- // possible escaped character
- self->state = ESCAPED_CHAR;
- } else if (IS_SKIPPABLE_SPACE(c)) {
- // ignore space at start of field
- ;
- } else if (IS_DELIMITER(c)) {
- if (self->delim_whitespace) {
- self->state = EAT_WHITESPACE;
- } else {
- // save empty field
+ case START_FIELD:
+ // expecting field
+ if (IS_TERMINATOR(c)) {
+ END_FIELD();
+ END_LINE();
+ } else if (IS_CARRIAGE(c)) {
+ END_FIELD();
+ self->state = EAT_CRNL;
+ } else if (IS_QUOTE(c)) {
+ // start quoted field
+ self->state = IN_QUOTED_FIELD;
+ } else if (IS_ESCAPE_CHAR(c)) {
+ // possible escaped character
+ self->state = ESCAPED_CHAR;
+ } else if (IS_SKIPPABLE_SPACE(c)) {
+ // ignore space at start of field
+ } else if (IS_DELIMITER(c)) {
+ if (self->delim_whitespace) {
+ self->state = EAT_WHITESPACE;
+ } else {
+ // save empty field
+ END_FIELD();
+ }
+ } else if (IS_COMMENT_CHAR(c)) {
END_FIELD();
+ self->state = EAT_COMMENT;
+ } else {
+ // begin new unquoted field
+ PUSH_CHAR(c);
+ self->state = IN_FIELD;
}
- } else if (IS_COMMENT_CHAR(c)) {
- END_FIELD();
- self->state = EAT_COMMENT;
- } else {
- // begin new unquoted field
- // if (self->delim_whitespace && \
- // self->quoting == QUOTE_NONNUMERIC) {
- // self->numeric_field = 1;
- // }
+ break;
+ case ESCAPED_CHAR:
PUSH_CHAR(c);
self->state = IN_FIELD;
- }
- break;
+ break;
- case ESCAPED_CHAR:
- PUSH_CHAR(c);
- self->state = IN_FIELD;
- break;
+ case EAT_LINE_COMMENT:
+ if (IS_TERMINATOR(c)) {
+ self->file_lines++;
+ self->state = START_RECORD;
+ } else if (IS_CARRIAGE(c)) {
+ self->file_lines++;
+ self->state = EAT_CRNL_NOP;
+ }
+ break;
- case EAT_LINE_COMMENT:
- if (IS_TERMINATOR(c)) {
- self->file_lines++;
- self->state = START_RECORD;
- } else if (IS_CARRIAGE(c)) {
- self->file_lines++;
- self->state = EAT_CRNL_NOP;
- }
- break;
+ case IN_FIELD:
+ // in unquoted field
+ if (IS_TERMINATOR(c)) {
+ END_FIELD();
+ END_LINE();
+ } else if (IS_CARRIAGE(c)) {
+ END_FIELD();
+ self->state = EAT_CRNL;
+ } else if (IS_ESCAPE_CHAR(c)) {
+ // possible escaped character
+ self->state = ESCAPED_CHAR;
+ } else if (IS_DELIMITER(c)) {
+ // end of field - end of line not reached yet
+ END_FIELD();
- case IN_FIELD:
- // in unquoted field
- if (IS_TERMINATOR(c)) {
- END_FIELD();
- END_LINE();
- } else if (IS_CARRIAGE(c)) {
- END_FIELD();
- self->state = EAT_CRNL;
- } else if (IS_ESCAPE_CHAR(c)) {
- // possible escaped character
- self->state = ESCAPED_CHAR;
- } else if (IS_DELIMITER(c)) {
- // end of field - end of line not reached yet
- END_FIELD();
-
- if (self->delim_whitespace) {
- self->state = EAT_WHITESPACE;
+ if (self->delim_whitespace) {
+ self->state = EAT_WHITESPACE;
+ } else {
+ self->state = START_FIELD;
+ }
+ } else if (IS_COMMENT_CHAR(c)) {
+ END_FIELD();
+ self->state = EAT_COMMENT;
} else {
- self->state = START_FIELD;
+ // normal character - save in field
+ PUSH_CHAR(c);
}
- } else if (IS_COMMENT_CHAR(c)) {
- END_FIELD();
- self->state = EAT_COMMENT;
- } else {
- // normal character - save in field
- PUSH_CHAR(c);
- }
- break;
+ break;
- case IN_QUOTED_FIELD:
- // in quoted field
- if (IS_ESCAPE_CHAR(c)) {
- // possible escape character
- self->state = ESCAPE_IN_QUOTED_FIELD;
- } else if (IS_QUOTE(c)) {
- if (self->doublequote) {
- // double quote - " represented by ""
- self->state = QUOTE_IN_QUOTED_FIELD;
+ case IN_QUOTED_FIELD:
+ // in quoted field
+ if (IS_ESCAPE_CHAR(c)) {
+ // possible escape character
+ self->state = ESCAPE_IN_QUOTED_FIELD;
+ } else if (IS_QUOTE(c)) {
+ if (self->doublequote) {
+ // double quote - " represented by ""
+ self->state = QUOTE_IN_QUOTED_FIELD;
+ } else {
+ // end of quote part of field
+ self->state = IN_FIELD;
+ }
} else {
- // end of quote part of field
- self->state = IN_FIELD;
+ // normal character - save in field
+ PUSH_CHAR(c);
}
- } else {
- // normal character - save in field
- PUSH_CHAR(c);
- }
- break;
-
- case ESCAPE_IN_QUOTED_FIELD:
- PUSH_CHAR(c);
- self->state = IN_QUOTED_FIELD;
- break;
-
- case QUOTE_IN_QUOTED_FIELD:
- // double quote - seen a quote in an quoted field
- if (IS_QUOTE(c)) {
- // save "" as "
+ break;
+ case ESCAPE_IN_QUOTED_FIELD:
PUSH_CHAR(c);
self->state = IN_QUOTED_FIELD;
- } else if (IS_DELIMITER(c)) {
- // end of field - end of line not reached yet
- END_FIELD();
-
- if (self->delim_whitespace) {
- self->state = EAT_WHITESPACE;
- } else {
- self->state = START_FIELD;
- }
- } else if (IS_TERMINATOR(c)) {
- END_FIELD();
- END_LINE();
- } else if (IS_CARRIAGE(c)) {
- END_FIELD();
- self->state = EAT_CRNL;
- } else if (!self->strict) {
- PUSH_CHAR(c);
- self->state = IN_FIELD;
- } else {
- self->error_msg = (char*) malloc(50);
- sprintf(self->error_msg,
- "delimiter expected after "
- "quote in quote");
- goto parsingerror;
- }
- break;
+ break;
- case EAT_COMMENT:
- if (IS_TERMINATOR(c)) {
- END_LINE();
- } else if (IS_CARRIAGE(c)) {
- self->state = EAT_CRNL;
- }
- break;
+ case QUOTE_IN_QUOTED_FIELD:
+        // double quote - seen a quote in a quoted field
+ if (IS_QUOTE(c)) {
+ // save "" as "
- // only occurs with non-custom line terminator,
- // which is why we directly check for '\n'
- case EAT_CRNL:
- if (c == '\n') {
- END_LINE();
- } else if (IS_DELIMITER(c)){
+ PUSH_CHAR(c);
+ self->state = IN_QUOTED_FIELD;
+ } else if (IS_DELIMITER(c)) {
+ // end of field - end of line not reached yet
+ END_FIELD();
- if (self->delim_whitespace) {
- END_LINE_STATE(EAT_WHITESPACE);
+ if (self->delim_whitespace) {
+ self->state = EAT_WHITESPACE;
+ } else {
+ self->state = START_FIELD;
+ }
+ } else if (IS_TERMINATOR(c)) {
+ END_FIELD();
+ END_LINE();
+ } else if (IS_CARRIAGE(c)) {
+ END_FIELD();
+ self->state = EAT_CRNL;
+ } else if (!self->strict) {
+ PUSH_CHAR(c);
+ self->state = IN_FIELD;
} else {
- // Handle \r-delimited files
- END_LINE_AND_FIELD_STATE(START_FIELD);
+ int bufsize = 100;
+ self->error_msg = (char *)malloc(bufsize);
+ snprintf(self->error_msg, bufsize,
+ "delimiter expected after quote in quote");
+ goto parsingerror;
}
- } else {
- if (self->delim_whitespace) {
- /* XXX
- * first character of a new record--need to back up and reread
- * to handle properly...
- */
- i--; buf--; // back up one character (HACK!)
- END_LINE_STATE(START_RECORD);
- } else {
- // \r line terminator
- // UGH. we don't actually want
- // to consume the token. fix this later
- self->stream_len = slen;
- if (end_line(self) < 0) {
- goto parsingerror;
- }
+ break;
- stream = self->stream + self->stream_len;
- slen = self->stream_len;
- self->state = START_RECORD;
+ case EAT_COMMENT:
+ if (IS_TERMINATOR(c)) {
+ END_LINE();
+ } else if (IS_CARRIAGE(c)) {
+ self->state = EAT_CRNL;
+ }
+ break;
+
+ // only occurs with non-custom line terminator,
+ // which is why we directly check for '\n'
+ case EAT_CRNL:
+ if (c == '\n') {
+ END_LINE();
+ } else if (IS_DELIMITER(c)) {
+ if (self->delim_whitespace) {
+ END_LINE_STATE(EAT_WHITESPACE);
+ } else {
+ // Handle \r-delimited files
+ END_LINE_AND_FIELD_STATE(START_FIELD);
+ }
+ } else {
+ if (self->delim_whitespace) {
+ /* XXX
+           * first character of a new record--need to back up and reread
+           * to handle properly...
+ */
+ i--;
+ buf--; // back up one character (HACK!)
+ END_LINE_STATE(START_RECORD);
+ } else {
+ // \r line terminator
+ // UGH. we don't actually want
+ // to consume the token. fix this later
+ self->stream_len = slen;
+ if (end_line(self) < 0) {
+ goto parsingerror;
+ }
+
+ stream = self->stream + self->stream_len;
+ slen = self->stream_len;
+ self->state = START_RECORD;
- --i; buf--; // let's try this character again (HACK!)
- if (line_limit > 0 && self->lines == start_lines + line_limit) {
- goto linelimit;
+ --i;
+ buf--; // let's try this character again (HACK!)
+ if (line_limit > 0 &&
+ self->lines == start_lines + line_limit) {
+ goto linelimit;
+ }
}
}
- }
- break;
+ break;
- // only occurs with non-custom line terminator,
- // which is why we directly check for '\n'
- case EAT_CRNL_NOP: // inside an ignored comment line
- self->state = START_RECORD;
- // \r line terminator -- parse this character again
- if (c != '\n' && !IS_DELIMITER(c)) {
- --i;
- --buf;
- }
- break;
- default:
- break;
+ // only occurs with non-custom line terminator,
+ // which is why we directly check for '\n'
+ case EAT_CRNL_NOP: // inside an ignored comment line
+ self->state = START_RECORD;
+ // \r line terminator -- parse this character again
+ if (c != '\n' && !IS_DELIMITER(c)) {
+ --i;
+ --buf;
+ }
+ break;
+ default:
+ break;
}
}
@@ -1134,39 +1090,41 @@ int tokenize_bytes(parser_t *self, size_t line_limit, int start_lines)
}
static int parser_handle_eof(parser_t *self) {
- TRACE(("handling eof, datalen: %d, pstate: %d\n", self->datalen, self->state))
+ int bufsize = 100;
- if (self->datalen != 0)
- return -1;
+ TRACE(
+ ("handling eof, datalen: %d, pstate: %d\n", self->datalen, self->state))
- switch (self->state) {
- case START_RECORD:
- case WHITESPACE_LINE:
- case EAT_CRNL_NOP:
- case EAT_LINE_COMMENT:
- return 0;
+ if (self->datalen != 0) return -1;
- case ESCAPE_IN_QUOTED_FIELD:
- case IN_QUOTED_FIELD:
- self->error_msg = (char*)malloc(100);
- sprintf(self->error_msg, "EOF inside string starting at line %d",
- self->file_lines);
- return -1;
+ switch (self->state) {
+ case START_RECORD:
+ case WHITESPACE_LINE:
+ case EAT_CRNL_NOP:
+ case EAT_LINE_COMMENT:
+ return 0;
- case ESCAPED_CHAR:
- self->error_msg = (char*)malloc(100);
- sprintf(self->error_msg, "EOF following escape character");
- return -1;
+ case ESCAPE_IN_QUOTED_FIELD:
+ case IN_QUOTED_FIELD:
+ self->error_msg = (char *)malloc(bufsize);
+ snprintf(self->error_msg, bufsize,
+ "EOF inside string starting at line %d", self->file_lines);
+ return -1;
- case IN_FIELD:
- case START_FIELD:
- case QUOTE_IN_QUOTED_FIELD:
- if (end_field(self) < 0)
+ case ESCAPED_CHAR:
+ self->error_msg = (char *)malloc(bufsize);
+ snprintf(self->error_msg, bufsize,
+ "EOF following escape character");
return -1;
- break;
- default:
- break;
+ case IN_FIELD:
+ case START_FIELD:
+ case QUOTE_IN_QUOTED_FIELD:
+ if (end_field(self) < 0) return -1;
+ break;
+
+ default:
+ break;
}
if (end_line(self) < 0)
@@ -1183,19 +1141,19 @@ int parser_consume_rows(parser_t *self, size_t nrows) {
}
/* do nothing */
- if (nrows == 0)
- return 0;
+ if (nrows == 0) return 0;
/* cannot guarantee that nrows + 1 has been observed */
word_deletions = self->line_start[nrows - 1] + self->line_fields[nrows - 1];
char_count = (self->word_starts[word_deletions - 1] +
strlen(self->words[word_deletions - 1]) + 1);
- TRACE(("parser_consume_rows: Deleting %d words, %d chars\n", word_deletions, char_count));
+ TRACE(("parser_consume_rows: Deleting %d words, %d chars\n", word_deletions,
+ char_count));
/* move stream, only if something to move */
if (char_count < self->stream_len) {
- memmove((void*) self->stream, (void*) (self->stream + char_count),
+ memmove((void *)self->stream, (void *)(self->stream + char_count),
self->stream_len - char_count);
}
/* buffer counts */
@@ -1213,26 +1171,14 @@ int parser_consume_rows(parser_t *self, size_t nrows) {
/* move current word pointer to stream */
self->pword_start -= char_count;
self->word_start -= char_count;
- /*
- printf("Line_start: ");
- for (i = 0; i < self->lines + 1; ++i) {
- printf("%d ", self->line_fields[i]);
- }
- printf("\n");
- */
+
/* move line metadata */
- for (i = 0; i < self->lines - nrows + 1; ++i)
- {
+ for (i = 0; i < self->lines - nrows + 1; ++i) {
offset = i + nrows;
self->line_start[i] = self->line_start[offset] - word_deletions;
-
- /* TRACE(("First word in line %d is now %s\n", i, */
- /* self->words[self->line_start[i]])); */
-
self->line_fields[i] = self->line_fields[offset];
}
self->lines -= nrows;
- /* self->line_fields[self->lines] = 0; */
return 0;
}
@@ -1256,47 +1202,50 @@ int parser_trim_buffers(parser_t *self) {
new_cap = _next_pow2(self->words_len) + 1;
if (new_cap < self->words_cap) {
TRACE(("parser_trim_buffers: new_cap < self->words_cap\n"));
- newptr = safe_realloc((void*) self->words, new_cap * sizeof(char*));
+ newptr = safe_realloc((void *)self->words, new_cap * sizeof(char *));
if (newptr == NULL) {
return PARSER_OUT_OF_MEMORY;
} else {
- self->words = (char**) newptr;
+ self->words = (char **)newptr;
}
- newptr = safe_realloc((void*) self->word_starts, new_cap * sizeof(int));
+ newptr = safe_realloc((void *)self->word_starts, new_cap * sizeof(int));
if (newptr == NULL) {
return PARSER_OUT_OF_MEMORY;
} else {
- self->word_starts = (int*) newptr;
+ self->word_starts = (int *)newptr;
self->words_cap = new_cap;
}
}
/* trim stream */
new_cap = _next_pow2(self->stream_len) + 1;
- TRACE(("parser_trim_buffers: new_cap = %zu, stream_cap = %zu, lines_cap = %zu\n",
- new_cap, self->stream_cap, self->lines_cap));
+ TRACE(
+ ("parser_trim_buffers: new_cap = %zu, stream_cap = %zu, lines_cap = "
+ "%zu\n",
+ new_cap, self->stream_cap, self->lines_cap));
if (new_cap < self->stream_cap) {
- TRACE(("parser_trim_buffers: new_cap < self->stream_cap, calling safe_realloc\n"));
- newptr = safe_realloc((void*) self->stream, new_cap);
+ TRACE(
+ ("parser_trim_buffers: new_cap < self->stream_cap, calling "
+ "safe_realloc\n"));
+ newptr = safe_realloc((void *)self->stream, new_cap);
if (newptr == NULL) {
return PARSER_OUT_OF_MEMORY;
} else {
- // Update the pointers in the self->words array (char **) if `safe_realloc`
- // moved the `self->stream` buffer. This block mirrors a similar block in
+            // Update the pointers in the self->words array (char **) if
+            // `safe_realloc` moved the `self->stream` buffer. This block
+            // mirrors a similar block in `make_stream_space`.
if (self->stream != newptr) {
- /* TRACE(("Moving word pointers\n")) */
- self->pword_start = (char*) newptr + self->word_start;
+ self->pword_start = (char *)newptr + self->word_start;
- for (i = 0; i < self->words_len; ++i)
- {
- self->words[i] = (char*) newptr + self->word_starts[i];
+ for (i = 0; i < self->words_len; ++i) {
+ self->words[i] = (char *)newptr + self->word_starts[i];
}
}
self->stream = newptr;
self->stream_cap = new_cap;
-
}
}
@@ -1304,17 +1253,17 @@ int parser_trim_buffers(parser_t *self) {
new_cap = _next_pow2(self->lines) + 1;
if (new_cap < self->lines_cap) {
TRACE(("parser_trim_buffers: new_cap < self->lines_cap\n"));
- newptr = safe_realloc((void*) self->line_start, new_cap * sizeof(int));
+ newptr = safe_realloc((void *)self->line_start, new_cap * sizeof(int));
if (newptr == NULL) {
return PARSER_OUT_OF_MEMORY;
} else {
- self->line_start = (int*) newptr;
+ self->line_start = (int *)newptr;
}
- newptr = safe_realloc((void*) self->line_fields, new_cap * sizeof(int));
+ newptr = safe_realloc((void *)self->line_fields, new_cap * sizeof(int));
if (newptr == NULL) {
return PARSER_OUT_OF_MEMORY;
} else {
- self->line_fields = (int*) newptr;
+ self->line_fields = (int *)newptr;
self->lines_cap = new_cap;
}
}
@@ -1326,12 +1275,10 @@ void debug_print_parser(parser_t *self) {
int j, line;
char *token;
- for (line = 0; line < self->lines; ++line)
- {
+ for (line = 0; line < self->lines; ++line) {
printf("(Parsed) Line %d: ", line);
- for (j = 0; j < self->line_fields[j]; ++j)
- {
+ for (j = 0; j < self->line_fields[j]; ++j) {
token = self->words[j + self->line_start[line]];
printf("%s ", token);
}
@@ -1339,13 +1286,6 @@ void debug_print_parser(parser_t *self) {
}
}
-/*int clear_parsed_lines(parser_t *self, size_t nlines) {
- // TODO. move data up in stream, shift relevant word pointers
-
- return 0;
-}*/
-
-
/*
nrows : number of rows to tokenize (or until reach EOF)
all : tokenize all the data vs. certain number of rows
@@ -1359,12 +1299,12 @@ int _tokenize_helper(parser_t *self, size_t nrows, int all) {
return 0;
}
- TRACE(("_tokenize_helper: Asked to tokenize %d rows, datapos=%d, datalen=%d\n", \
- (int) nrows, self->datapos, self->datalen));
+ TRACE((
+ "_tokenize_helper: Asked to tokenize %d rows, datapos=%d, datalen=%d\n",
+ (int)nrows, self->datapos, self->datalen));
while (1) {
- if (!all && self->lines - start_lines >= nrows)
- break;
+ if (!all && self->lines - start_lines >= nrows) break;
if (self->datapos == self->datalen) {
status = parser_buffer_bytes(self, self->chunksize);
@@ -1379,15 +1319,19 @@ int _tokenize_helper(parser_t *self, size_t nrows, int all) {
}
}
- TRACE(("_tokenize_helper: Trying to process %d bytes, datalen=%d, datapos= %d\n",
- self->datalen - self->datapos, self->datalen, self->datapos));
+ TRACE(
+ ("_tokenize_helper: Trying to process %d bytes, datalen=%d, "
+ "datapos= %d\n",
+ self->datalen - self->datapos, self->datalen, self->datapos));
status = tokenize_bytes(self, nrows, start_lines);
if (status < 0) {
// XXX
- TRACE(("_tokenize_helper: Status %d returned from tokenize_bytes, breaking\n",
- status));
+ TRACE(
+ ("_tokenize_helper: Status %d returned from tokenize_bytes, "
+ "breaking\n",
+ status));
status = -1;
break;
}
@@ -1406,86 +1350,11 @@ int tokenize_all_rows(parser_t *self) {
return status;
}
-/* SEL - does not look like this routine is used anywhere
-void test_count_lines(char *fname) {
- clock_t start = clock();
-
- char *buffer, *tmp;
- size_t bytes, lines = 0;
- int i;
- FILE *fp = fopen(fname, "rb");
-
- buffer = (char*) malloc(CHUNKSIZE * sizeof(char));
-
- while(1) {
- tmp = buffer;
- bytes = fread((void *) buffer, sizeof(char), CHUNKSIZE, fp);
- // printf("Read %d bytes\n", bytes);
-
- if (bytes == 0) {
- break;
- }
-
- for (i = 0; i < bytes; ++i)
- {
- if (*tmp++ == '\n') {
- lines++;
- }
- }
- }
-
-
- printf("Saw %d lines\n", (int) lines);
-
- free(buffer);
- fclose(fp);
-
- printf("Time elapsed: %f\n", ((double)clock() - start) / CLOCKS_PER_SEC);
-}*/
-
-
P_INLINE void uppercase(char *p) {
- for ( ; *p; ++p) *p = toupper(*p);
-}
-
-/* SEL - does not look like these routines are used anywhere
-P_INLINE void lowercase(char *p) {
- for ( ; *p; ++p) *p = tolower(*p);
+ for (; *p; ++p) *p = toupper(*p);
}
-int P_INLINE to_complex(char *item, double *p_real, double *p_imag, char sci, char decimal)
-{
- char *p_end;
-
- *p_real = xstrtod(item, &p_end, decimal, sci, '\0', FALSE);
- if (*p_end == '\0') {
- *p_imag = 0.0;
- return errno == 0;
- }
- if (*p_end == 'i' || *p_end == 'j') {
- *p_imag = *p_real;
- *p_real = 0.0;
- ++p_end;
- }
- else {
- if (*p_end == '+') {
- ++p_end;
- }
- *p_imag = xstrtod(p_end, &p_end, decimal, sci, '\0', FALSE);
- if (errno || ((*p_end != 'i') && (*p_end != 'j'))) {
- return FALSE;
- }
- ++p_end;
- }
- while(*p_end == ' ') {
- ++p_end;
- }
- return *p_end == '\0';
-}*/
-
-
-int P_INLINE to_longlong(char *item, long long *p_value)
-{
+int P_INLINE to_longlong(char *item, long long *p_value) {
char *p_end;
// Try integer conversion. We explicitly give the base to be 10. If
@@ -1500,65 +1369,26 @@ int P_INLINE to_longlong(char *item, long long *p_value)
return (errno == 0) && (!*p_end);
}
-/* does not look like this routine is used anywhere
-int P_INLINE to_longlong_thousands(char *item, long long *p_value, char tsep)
-{
- int i, pos, status, n = strlen(item), count = 0;
- char *tmp;
- char *p_end;
-
- for (i = 0; i < n; ++i)
- {
- if (*(item + i) == tsep) {
- count++;
- }
- }
-
- if (count == 0) {
- return to_longlong(item, p_value);
- }
-
- tmp = (char*) malloc((n - count + 1) * sizeof(char));
- if (tmp == NULL) {
- return 0;
- }
-
- pos = 0;
- for (i = 0; i < n; ++i)
- {
- if (item[i] != tsep)
- tmp[pos++] = item[i];
- }
-
- tmp[pos] = '\0';
-
- status = to_longlong(tmp, p_value);
- free(tmp);
-
- return status;
-}*/
-
int to_boolean(const char *item, uint8_t *val) {
char *tmp;
int i, status = 0;
+ int bufsize = sizeof(char) * (strlen(item) + 1);
static const char *tstrs[1] = {"TRUE"};
static const char *fstrs[1] = {"FALSE"};
- tmp = malloc(sizeof(char) * (strlen(item) + 1));
- strcpy(tmp, item);
+ tmp = malloc(bufsize);
+ strncpy(tmp, item, bufsize);
uppercase(tmp);
- for (i = 0; i < 1; ++i)
- {
+ for (i = 0; i < 1; ++i) {
if (strcmp(tmp, tstrs[i]) == 0) {
*val = 1;
goto done;
}
}
- for (i = 0; i < 1; ++i)
- {
+ for (i = 0; i < 1; ++i) {
if (strcmp(tmp, fstrs[i]) == 0) {
*val = 0;
goto done;
@@ -1572,27 +1402,19 @@ int to_boolean(const char *item, uint8_t *val) {
return status;
}
-// #define TEST
-
#ifdef TEST
-int main(int argc, char *argv[])
-{
+int main(int argc, char *argv[]) {
double x, y;
long long xi;
int status;
char *s;
- //s = "0.10e-3-+5.5e2i";
- // s = "1-0j";
- // status = to_complex(s, &x, &y, 'e', '.');
s = "123,789";
status = to_longlong_thousands(s, &xi, ',');
printf("s = '%s'\n", s);
printf("status = %d\n", status);
- printf("x = %d\n", (int) xi);
-
- // printf("x = %lg, y = %lg\n", x, y);
+ printf("x = %d\n", (int)xi);
return 0;
}
@@ -1621,10 +1443,12 @@ int main(int argc, char *argv[])
// may be used to endorse or promote products derived from this software
// without specific prior written permission.
//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+// AND
// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
+// ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+// LIABLE
// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
// OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
@@ -1643,197 +1467,185 @@ int main(int argc, char *argv[])
// * Add tsep argument for thousands separator
//
-double xstrtod(const char *str, char **endptr, char decimal,
- char sci, char tsep, int skip_trailing)
-{
- double number;
- int exponent;
- int negative;
- char *p = (char *) str;
- double p10;
- int n;
- int num_digits;
- int num_decimals;
-
- errno = 0;
-
- // Skip leading whitespace
- while (isspace(*p)) p++;
-
- // Handle optional sign
- negative = 0;
- switch (*p)
- {
- case '-': negative = 1; // Fall through to increment position
- case '+': p++;
- }
-
- number = 0.;
- exponent = 0;
- num_digits = 0;
- num_decimals = 0;
-
- // Process string of digits
- while (isdigit(*p))
- {
- number = number * 10. + (*p - '0');
- p++;
- num_digits++;
-
- p += (tsep != '\0' && *p == tsep);
- }
-
- // Process decimal part
- if (*p == decimal)
- {
- p++;
-
- while (isdigit(*p))
- {
- number = number * 10. + (*p - '0');
- p++;
- num_digits++;
- num_decimals++;
- }
-
- exponent -= num_decimals;
- }
-
- if (num_digits == 0)
- {
- errno = ERANGE;
- return 0.0;
- }
-
- // Correct for sign
- if (negative) number = -number;
-
- // Process an exponent string
- if (toupper(*p) == toupper(sci))
- {
- // Handle optional sign
+double xstrtod(const char *str, char **endptr, char decimal, char sci,
+ char tsep, int skip_trailing) {
+ double number;
+ int exponent;
+ int negative;
+ char *p = (char *)str;
+ double p10;
+ int n;
+ int num_digits;
+ int num_decimals;
+
+ errno = 0;
+
+ // Skip leading whitespace.
+ while (isspace(*p)) p++;
+
+ // Handle optional sign.
negative = 0;
- switch (*++p)
- {
- case '-': negative = 1; // Fall through to increment pos
- case '+': p++;
+ switch (*p) {
+ case '-':
+ negative = 1; // Fall through to increment position.
+ case '+':
+ p++;
}
- // Process string of digits
+ number = 0.;
+ exponent = 0;
num_digits = 0;
- n = 0;
- while (isdigit(*p))
- {
- n = n * 10 + (*p - '0');
- num_digits++;
- p++;
+ num_decimals = 0;
+
+ // Process string of digits.
+ while (isdigit(*p)) {
+ number = number * 10. + (*p - '0');
+ p++;
+ num_digits++;
+
+ p += (tsep != '\0' && *p == tsep);
}
- if (negative)
- exponent -= n;
- else
- exponent += n;
+ // Process decimal part.
+ if (*p == decimal) {
+ p++;
+
+ while (isdigit(*p)) {
+ number = number * 10. + (*p - '0');
+ p++;
+ num_digits++;
+ num_decimals++;
+ }
- // If no digits, after the 'e'/'E', un-consume it
- if (num_digits == 0)
- p--;
- }
+ exponent -= num_decimals;
+ }
+ if (num_digits == 0) {
+ errno = ERANGE;
+ return 0.0;
+ }
- if (exponent < DBL_MIN_EXP || exponent > DBL_MAX_EXP)
- {
+ // Correct for sign.
+ if (negative) number = -number;
- errno = ERANGE;
- return HUGE_VAL;
- }
+ // Process an exponent string.
+ if (toupper(*p) == toupper(sci)) {
+ // Handle optional sign.
+ negative = 0;
+ switch (*++p) {
+ case '-':
+ negative = 1; // Fall through to increment pos.
+ case '+':
+ p++;
+ }
- // Scale the result
- p10 = 10.;
- n = exponent;
- if (n < 0) n = -n;
- while (n)
- {
- if (n & 1)
- {
- if (exponent < 0)
- number /= p10;
- else
- number *= p10;
+ // Process string of digits.
+ num_digits = 0;
+ n = 0;
+ while (isdigit(*p)) {
+ n = n * 10 + (*p - '0');
+ num_digits++;
+ p++;
+ }
+
+ if (negative)
+ exponent -= n;
+ else
+ exponent += n;
+
+ // If no digits, after the 'e'/'E', un-consume it
+ if (num_digits == 0) p--;
}
- n >>= 1;
- p10 *= p10;
- }
+ if (exponent < DBL_MIN_EXP || exponent > DBL_MAX_EXP) {
+ errno = ERANGE;
+ return HUGE_VAL;
+ }
- if (number == HUGE_VAL) {
- errno = ERANGE;
- }
+ // Scale the result.
+ p10 = 10.;
+ n = exponent;
+ if (n < 0) n = -n;
+ while (n) {
+ if (n & 1) {
+ if (exponent < 0)
+ number /= p10;
+ else
+ number *= p10;
+ }
+ n >>= 1;
+ p10 *= p10;
+ }
- if (skip_trailing) {
- // Skip trailing whitespace
- while (isspace(*p)) p++;
- }
+ if (number == HUGE_VAL) {
+ errno = ERANGE;
+ }
- if (endptr) *endptr = p;
+ if (skip_trailing) {
+ // Skip trailing whitespace.
+ while (isspace(*p)) p++;
+ }
+ if (endptr) *endptr = p;
- return number;
+ return number;
}
-double precise_xstrtod(const char *str, char **endptr, char decimal,
- char sci, char tsep, int skip_trailing)
-{
+double precise_xstrtod(const char *str, char **endptr, char decimal, char sci,
+ char tsep, int skip_trailing) {
double number;
int exponent;
int negative;
- char *p = (char *) str;
+ char *p = (char *)str;
int num_digits;
int num_decimals;
int max_digits = 17;
int n;
- // Cache powers of 10 in memory
- static double e[] = {1., 1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9, 1e10,
- 1e11, 1e12, 1e13, 1e14, 1e15, 1e16, 1e17, 1e18, 1e19, 1e20,
- 1e21, 1e22, 1e23, 1e24, 1e25, 1e26, 1e27, 1e28, 1e29, 1e30,
- 1e31, 1e32, 1e33, 1e34, 1e35, 1e36, 1e37, 1e38, 1e39, 1e40,
- 1e41, 1e42, 1e43, 1e44, 1e45, 1e46, 1e47, 1e48, 1e49, 1e50,
- 1e51, 1e52, 1e53, 1e54, 1e55, 1e56, 1e57, 1e58, 1e59, 1e60,
- 1e61, 1e62, 1e63, 1e64, 1e65, 1e66, 1e67, 1e68, 1e69, 1e70,
- 1e71, 1e72, 1e73, 1e74, 1e75, 1e76, 1e77, 1e78, 1e79, 1e80,
- 1e81, 1e82, 1e83, 1e84, 1e85, 1e86, 1e87, 1e88, 1e89, 1e90,
- 1e91, 1e92, 1e93, 1e94, 1e95, 1e96, 1e97, 1e98, 1e99, 1e100,
- 1e101, 1e102, 1e103, 1e104, 1e105, 1e106, 1e107, 1e108, 1e109, 1e110,
- 1e111, 1e112, 1e113, 1e114, 1e115, 1e116, 1e117, 1e118, 1e119, 1e120,
- 1e121, 1e122, 1e123, 1e124, 1e125, 1e126, 1e127, 1e128, 1e129, 1e130,
- 1e131, 1e132, 1e133, 1e134, 1e135, 1e136, 1e137, 1e138, 1e139, 1e140,
- 1e141, 1e142, 1e143, 1e144, 1e145, 1e146, 1e147, 1e148, 1e149, 1e150,
- 1e151, 1e152, 1e153, 1e154, 1e155, 1e156, 1e157, 1e158, 1e159, 1e160,
- 1e161, 1e162, 1e163, 1e164, 1e165, 1e166, 1e167, 1e168, 1e169, 1e170,
- 1e171, 1e172, 1e173, 1e174, 1e175, 1e176, 1e177, 1e178, 1e179, 1e180,
- 1e181, 1e182, 1e183, 1e184, 1e185, 1e186, 1e187, 1e188, 1e189, 1e190,
- 1e191, 1e192, 1e193, 1e194, 1e195, 1e196, 1e197, 1e198, 1e199, 1e200,
- 1e201, 1e202, 1e203, 1e204, 1e205, 1e206, 1e207, 1e208, 1e209, 1e210,
- 1e211, 1e212, 1e213, 1e214, 1e215, 1e216, 1e217, 1e218, 1e219, 1e220,
- 1e221, 1e222, 1e223, 1e224, 1e225, 1e226, 1e227, 1e228, 1e229, 1e230,
- 1e231, 1e232, 1e233, 1e234, 1e235, 1e236, 1e237, 1e238, 1e239, 1e240,
- 1e241, 1e242, 1e243, 1e244, 1e245, 1e246, 1e247, 1e248, 1e249, 1e250,
- 1e251, 1e252, 1e253, 1e254, 1e255, 1e256, 1e257, 1e258, 1e259, 1e260,
- 1e261, 1e262, 1e263, 1e264, 1e265, 1e266, 1e267, 1e268, 1e269, 1e270,
- 1e271, 1e272, 1e273, 1e274, 1e275, 1e276, 1e277, 1e278, 1e279, 1e280,
- 1e281, 1e282, 1e283, 1e284, 1e285, 1e286, 1e287, 1e288, 1e289, 1e290,
- 1e291, 1e292, 1e293, 1e294, 1e295, 1e296, 1e297, 1e298, 1e299, 1e300,
- 1e301, 1e302, 1e303, 1e304, 1e305, 1e306, 1e307, 1e308};
+ // Cache powers of 10 in memory.
+ static double e[] = {
+ 1., 1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9,
+ 1e10, 1e11, 1e12, 1e13, 1e14, 1e15, 1e16, 1e17, 1e18, 1e19,
+ 1e20, 1e21, 1e22, 1e23, 1e24, 1e25, 1e26, 1e27, 1e28, 1e29,
+ 1e30, 1e31, 1e32, 1e33, 1e34, 1e35, 1e36, 1e37, 1e38, 1e39,
+ 1e40, 1e41, 1e42, 1e43, 1e44, 1e45, 1e46, 1e47, 1e48, 1e49,
+ 1e50, 1e51, 1e52, 1e53, 1e54, 1e55, 1e56, 1e57, 1e58, 1e59,
+ 1e60, 1e61, 1e62, 1e63, 1e64, 1e65, 1e66, 1e67, 1e68, 1e69,
+ 1e70, 1e71, 1e72, 1e73, 1e74, 1e75, 1e76, 1e77, 1e78, 1e79,
+ 1e80, 1e81, 1e82, 1e83, 1e84, 1e85, 1e86, 1e87, 1e88, 1e89,
+ 1e90, 1e91, 1e92, 1e93, 1e94, 1e95, 1e96, 1e97, 1e98, 1e99,
+ 1e100, 1e101, 1e102, 1e103, 1e104, 1e105, 1e106, 1e107, 1e108, 1e109,
+ 1e110, 1e111, 1e112, 1e113, 1e114, 1e115, 1e116, 1e117, 1e118, 1e119,
+ 1e120, 1e121, 1e122, 1e123, 1e124, 1e125, 1e126, 1e127, 1e128, 1e129,
+ 1e130, 1e131, 1e132, 1e133, 1e134, 1e135, 1e136, 1e137, 1e138, 1e139,
+ 1e140, 1e141, 1e142, 1e143, 1e144, 1e145, 1e146, 1e147, 1e148, 1e149,
+ 1e150, 1e151, 1e152, 1e153, 1e154, 1e155, 1e156, 1e157, 1e158, 1e159,
+ 1e160, 1e161, 1e162, 1e163, 1e164, 1e165, 1e166, 1e167, 1e168, 1e169,
+ 1e170, 1e171, 1e172, 1e173, 1e174, 1e175, 1e176, 1e177, 1e178, 1e179,
+ 1e180, 1e181, 1e182, 1e183, 1e184, 1e185, 1e186, 1e187, 1e188, 1e189,
+ 1e190, 1e191, 1e192, 1e193, 1e194, 1e195, 1e196, 1e197, 1e198, 1e199,
+ 1e200, 1e201, 1e202, 1e203, 1e204, 1e205, 1e206, 1e207, 1e208, 1e209,
+ 1e210, 1e211, 1e212, 1e213, 1e214, 1e215, 1e216, 1e217, 1e218, 1e219,
+ 1e220, 1e221, 1e222, 1e223, 1e224, 1e225, 1e226, 1e227, 1e228, 1e229,
+ 1e230, 1e231, 1e232, 1e233, 1e234, 1e235, 1e236, 1e237, 1e238, 1e239,
+ 1e240, 1e241, 1e242, 1e243, 1e244, 1e245, 1e246, 1e247, 1e248, 1e249,
+ 1e250, 1e251, 1e252, 1e253, 1e254, 1e255, 1e256, 1e257, 1e258, 1e259,
+ 1e260, 1e261, 1e262, 1e263, 1e264, 1e265, 1e266, 1e267, 1e268, 1e269,
+ 1e270, 1e271, 1e272, 1e273, 1e274, 1e275, 1e276, 1e277, 1e278, 1e279,
+ 1e280, 1e281, 1e282, 1e283, 1e284, 1e285, 1e286, 1e287, 1e288, 1e289,
+ 1e290, 1e291, 1e292, 1e293, 1e294, 1e295, 1e296, 1e297, 1e298, 1e299,
+ 1e300, 1e301, 1e302, 1e303, 1e304, 1e305, 1e306, 1e307, 1e308};
errno = 0;
- // Skip leading whitespace
+ // Skip leading whitespace.
while (isspace(*p)) p++;
- // Handle optional sign
+ // Handle optional sign.
negative = 0;
- switch (*p)
- {
- case '-': negative = 1; // Fall through to increment position
- case '+': p++;
+ switch (*p) {
+ case '-':
+ negative = 1; // Fall through to increment position.
+ case '+':
+ p++;
}
number = 0.;
@@ -1841,66 +1653,59 @@ double precise_xstrtod(const char *str, char **endptr, char decimal,
num_digits = 0;
num_decimals = 0;
- // Process string of digits
- while (isdigit(*p))
- {
- if (num_digits < max_digits)
- {
+ // Process string of digits.
+ while (isdigit(*p)) {
+ if (num_digits < max_digits) {
number = number * 10. + (*p - '0');
num_digits++;
- }
- else
+ } else {
++exponent;
+ }
p++;
p += (tsep != '\0' && *p == tsep);
}
// Process decimal part
- if (*p == decimal)
- {
+ if (*p == decimal) {
p++;
- while (num_digits < max_digits && isdigit(*p))
- {
+ while (num_digits < max_digits && isdigit(*p)) {
number = number * 10. + (*p - '0');
p++;
num_digits++;
num_decimals++;
}
- if (num_digits >= max_digits) // consume extra decimal digits
- while (isdigit(*p))
- ++p;
+ if (num_digits >= max_digits) // Consume extra decimal digits.
+ while (isdigit(*p)) ++p;
exponent -= num_decimals;
}
- if (num_digits == 0)
- {
+ if (num_digits == 0) {
errno = ERANGE;
return 0.0;
}
- // Correct for sign
+ // Correct for sign.
if (negative) number = -number;
- // Process an exponent string
- if (toupper(*p) == toupper(sci))
- {
+ // Process an exponent string.
+ if (toupper(*p) == toupper(sci)) {
// Handle optional sign
negative = 0;
- switch (*++p)
- {
- case '-': negative = 1; // Fall through to increment pos
- case '+': p++;
+ switch (*++p) {
+ case '-':
+ negative = 1; // Fall through to increment pos.
+ case '+':
+ p++;
}
- // Process string of digits
+ // Process string of digits.
num_digits = 0;
n = 0;
- while (isdigit(*p))
- {
+ while (isdigit(*p)) {
n = n * 10 + (*p - '0');
num_digits++;
p++;
@@ -1911,33 +1716,28 @@ double precise_xstrtod(const char *str, char **endptr, char decimal,
else
exponent += n;
- // If no digits, after the 'e'/'E', un-consume it
- if (num_digits == 0)
- p--;
+ // If no digits after the 'e'/'E', un-consume it.
+ if (num_digits == 0) p--;
}
- if (exponent > 308)
- {
+ if (exponent > 308) {
errno = ERANGE;
return HUGE_VAL;
- }
- else if (exponent > 0)
+ } else if (exponent > 0) {
number *= e[exponent];
- else if (exponent < -308) // subnormal
- {
- if (exponent < -616) // prevent invalid array access
+ } else if (exponent < -308) { // Subnormal
+ if (exponent < -616) // Prevent invalid array access.
number = 0.;
number /= e[-308 - exponent];
number /= e[308];
- }
- else
+ } else {
number /= e[-exponent];
+ }
- if (number == HUGE_VAL || number == -HUGE_VAL)
- errno = ERANGE;
+ if (number == HUGE_VAL || number == -HUGE_VAL) errno = ERANGE;
if (skip_trailing) {
- // Skip trailing whitespace
+ // Skip trailing whitespace.
while (isspace(*p)) p++;
}
@@ -1945,9 +1745,8 @@ double precise_xstrtod(const char *str, char **endptr, char decimal,
return number;
}
-double round_trip(const char *p, char **q, char decimal, char sci,
- char tsep, int skip_trailing)
-{
+double round_trip(const char *p, char **q, char decimal, char sci, char tsep,
+ int skip_trailing) {
#if PY_VERSION_HEX >= 0x02070000
return PyOS_string_to_double(p, q, 0);
#else
@@ -1955,31 +1754,12 @@ double round_trip(const char *p, char **q, char decimal, char sci,
#endif
}
-/*
-float strtof(const char *str, char **endptr)
-{
- return (float) strtod(str, endptr);
-}
-
-
-long double strtold(const char *str, char **endptr)
-{
- return strtod(str, endptr);
-}
-
-double atof(const char *str)
-{
- return strtod(str, NULL);
-}
-*/
-
// End of xstrtod code
// ---------------------------------------------------------------------------
int64_t str_to_int64(const char *p_item, int64_t int_min, int64_t int_max,
- int *error, char tsep)
-{
- const char *p = (const char *) p_item;
+ int *error, char tsep) {
+ const char *p = (const char *)p_item;
int isneg = 0;
int64_t number = 0;
int d;
@@ -1993,8 +1773,7 @@ int64_t str_to_int64(const char *p_item, int64_t int_min, int64_t int_max,
if (*p == '-') {
isneg = 1;
++p;
- }
- else if (*p == '+') {
+ } else if (*p == '+') {
p++;
}
@@ -2023,11 +1802,9 @@ int64_t str_to_int64(const char *p_item, int64_t int_min, int64_t int_max,
}
if ((number > pre_min) ||
((number == pre_min) && (d - '0' <= dig_pre_min))) {
-
number = number * 10 - (d - '0');
d = *++p;
- }
- else {
+ } else {
*error = ERROR_OVERFLOW;
return 0;
}
@@ -2036,25 +1813,20 @@ int64_t str_to_int64(const char *p_item, int64_t int_min, int64_t int_max,
while (isdigit(d)) {
if ((number > pre_min) ||
((number == pre_min) && (d - '0' <= dig_pre_min))) {
-
number = number * 10 - (d - '0');
d = *++p;
- }
- else {
+ } else {
*error = ERROR_OVERFLOW;
return 0;
}
}
}
- }
- else {
+ } else {
// If number is less than pre_max, at least one more digit
// can be processed without overflowing.
int64_t pre_max = int_max / 10;
int dig_pre_max = int_max % 10;
- //printf("pre_max = %lld dig_pre_max = %d\n", pre_max, dig_pre_max);
-
// Process the digits.
d = *p;
if (tsep != '\0') {
@@ -2067,12 +1839,10 @@ int64_t str_to_int64(const char *p_item, int64_t int_min, int64_t int_max,
}
if ((number < pre_max) ||
((number == pre_max) && (d - '0' <= dig_pre_max))) {
-
number = number * 10 + (d - '0');
d = *++p;
- }
- else {
+ } else {
*error = ERROR_OVERFLOW;
return 0;
}
@@ -2081,12 +1851,10 @@ int64_t str_to_int64(const char *p_item, int64_t int_min, int64_t int_max,
while (isdigit(d)) {
if ((number < pre_max) ||
((number == pre_max) && (d - '0' <= dig_pre_max))) {
-
number = number * 10 + (d - '0');
d = *++p;
- }
- else {
+ } else {
*error = ERROR_OVERFLOW;
return 0;
}
@@ -2108,66 +1876,3 @@ int64_t str_to_int64(const char *p_item, int64_t int_min, int64_t int_max,
*error = 0;
return number;
}
-
-/* does not look like this routine is used anywhere
-uint64_t str_to_uint64(const char *p_item, uint64_t uint_max, int *error)
-{
- int d, dig_pre_max;
- uint64_t pre_max;
- const char *p = (const char *) p_item;
- uint64_t number = 0;
-
- // Skip leading spaces.
- while (isspace(*p)) {
- ++p;
- }
-
- // Handle sign.
- if (*p == '-') {
- *error = ERROR_MINUS_SIGN;
- return 0;
- }
- if (*p == '+') {
- p++;
- }
-
- // Check that there is a first digit.
- if (!isdigit(*p)) {
- // Error...
- *error = ERROR_NO_DIGITS;
- return 0;
- }
-
- // If number is less than pre_max, at least one more digit
- // can be processed without overflowing.
- pre_max = uint_max / 10;
- dig_pre_max = uint_max % 10;
-
- // Process the digits.
- d = *p;
- while (isdigit(d)) {
- if ((number < pre_max) || ((number == pre_max) && (d - '0' <= dig_pre_max))) {
- number = number * 10 + (d - '0');
- d = *++p;
- }
- else {
- *error = ERROR_OVERFLOW;
- return 0;
- }
- }
-
- // Skip trailing spaces.
- while (isspace(*p)) {
- ++p;
- }
-
- // Did we use up all the characters?
- if (*p) {
- *error = ERROR_INVALID_CHARS;
- return 0;
- }
-
- *error = 0;
- return number;
-}
-*/
diff --git a/pandas/src/parser/tokenizer.h b/pandas/src/parser/tokenizer.h
index 487c1265d9358..e01812f1c5520 100644
--- a/pandas/src/parser/tokenizer.h
+++ b/pandas/src/parser/tokenizer.h
@@ -9,29 +9,29 @@ See LICENSE for the license
*/
-#ifndef _PARSER_COMMON_H_
-#define _PARSER_COMMON_H_
+#ifndef PANDAS_SRC_PARSER_TOKENIZER_H_
+#define PANDAS_SRC_PARSER_TOKENIZER_H_
-#include "Python.h"
+#include <errno.h>
#include <stdio.h>
-#include <string.h>
#include <stdlib.h>
+#include <string.h>
#include <time.h>
-#include <errno.h>
+#include "Python.h"
#include <ctype.h>
-#define ERROR_OK 0
-#define ERROR_NO_DIGITS 1
-#define ERROR_OVERFLOW 2
-#define ERROR_INVALID_CHARS 3
-#define ERROR_MINUS_SIGN 4
+#define ERROR_OK 0
+#define ERROR_NO_DIGITS 1
+#define ERROR_OVERFLOW 2
+#define ERROR_INVALID_CHARS 3
+#define ERROR_MINUS_SIGN 4
#include "../headers/stdint.h"
#include "khash.h"
-#define CHUNKSIZE 1024*256
+#define CHUNKSIZE 1024 * 256
#define KB 1024
#define MB 1024 * KB
#define STREAM_INIT_SIZE 32
@@ -40,15 +40,15 @@ See LICENSE for the license
#define CALLING_READ_FAILED 2
#ifndef P_INLINE
- #if defined(__GNUC__)
- #define P_INLINE static __inline__
- #elif defined(_MSC_VER)
- #define P_INLINE
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define P_INLINE static inline
- #else
- #define P_INLINE
- #endif
+#if defined(__GNUC__)
+#define P_INLINE static __inline__
+#elif defined(_MSC_VER)
+#define P_INLINE
+#elif defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+#define P_INLINE static inline
+#else
+#define P_INLINE
+#endif
#endif
#if defined(_MSC_VER)
@@ -62,41 +62,34 @@ See LICENSE for the license
*/
#define FALSE 0
-#define TRUE 1
-
-/* Maximum number of columns in a file. */
-#define MAX_NUM_COLUMNS 2000
+#define TRUE 1
-/* Maximum number of characters in single field. */
-
-#define FIELD_BUFFER_SIZE 2000
+// Maximum number of columns in a file.
+#define MAX_NUM_COLUMNS 2000
+// Maximum number of characters in single field.
+#define FIELD_BUFFER_SIZE 2000
/*
* Common set of error types for the read_rows() and tokenize()
* functions.
*/
-
-#define ERROR_OUT_OF_MEMORY 1
-#define ERROR_INVALID_COLUMN_INDEX 10
+#define ERROR_OUT_OF_MEMORY 1
+#define ERROR_INVALID_COLUMN_INDEX 10
#define ERROR_CHANGED_NUMBER_OF_FIELDS 12
-#define ERROR_TOO_MANY_CHARS 21
-#define ERROR_TOO_MANY_FIELDS 22
-#define ERROR_NO_DATA 23
-
-
-/* #define VERBOSE */
+#define ERROR_TOO_MANY_CHARS 21
+#define ERROR_TOO_MANY_FIELDS 22
+#define ERROR_NO_DATA 23
+// #define VERBOSE
#if defined(VERBOSE)
#define TRACE(X) printf X;
#else
#define TRACE(X)
#endif
-
#define PARSER_OUT_OF_MEMORY -1
-
/*
* XXX Might want to couple count_rows() with read_rows() to avoid duplication
* of some file I/O.
@@ -108,7 +101,6 @@ See LICENSE for the license
*/
#define WORD_BUFFER_SIZE 4000
-
typedef enum {
START_RECORD,
START_FIELD,
@@ -131,12 +123,14 @@ typedef enum {
} ParserState;
typedef enum {
- QUOTE_MINIMAL, QUOTE_ALL, QUOTE_NONNUMERIC, QUOTE_NONE
+ QUOTE_MINIMAL,
+ QUOTE_ALL,
+ QUOTE_NONNUMERIC,
+ QUOTE_NONE
} QuoteStyle;
-
-typedef void* (*io_callback)(void *src, size_t nbytes, size_t *bytes_read,
- int *status);
+typedef void *(*io_callback)(void *src, size_t nbytes, size_t *bytes_read,
+ int *status);
typedef int (*io_cleanup)(void *src);
typedef struct parser_t {
@@ -156,38 +150,38 @@ typedef struct parser_t {
// Store words in (potentially ragged) matrix for now, hmm
char **words;
- int *word_starts; // where we are in the stream
+ int *word_starts; // where we are in the stream
int words_len;
int words_cap;
- char *pword_start; // pointer to stream start of current field
- int word_start; // position start of current field
+ char *pword_start; // pointer to stream start of current field
+ int word_start; // position start of current field
- int *line_start; // position in words for start of line
- int *line_fields; // Number of fields in each line
- int lines; // Number of (good) lines observed
- int file_lines; // Number of file lines observed (including bad or skipped)
- int lines_cap; // Vector capacity
+ int *line_start; // position in words for start of line
+ int *line_fields; // Number of fields in each line
+ int lines; // Number of (good) lines observed
+ int file_lines; // Number of file lines observed (including bad or skipped)
+ int lines_cap; // Vector capacity
// Tokenizing stuff
ParserState state;
- int doublequote; /* is " represented by ""? */
- char delimiter; /* field separator */
- int delim_whitespace; /* delimit by consuming space/tabs instead */
- char quotechar; /* quote character */
- char escapechar; /* escape character */
+ int doublequote; /* is " represented by ""? */
+ char delimiter; /* field separator */
+ int delim_whitespace; /* delimit by consuming space/tabs instead */
+ char quotechar; /* quote character */
+ char escapechar; /* escape character */
char lineterminator;
- int skipinitialspace; /* ignore spaces following delimiter? */
- int quoting; /* style of quoting to write */
+ int skipinitialspace; /* ignore spaces following delimiter? */
+ int quoting; /* style of quoting to write */
// krufty, hmm =/
int numeric_field;
char commentchar;
int allow_embedded_newline;
- int strict; /* raise exception on bad CSV */
+ int strict; /* raise exception on bad CSV */
- int usecols; // Boolean: 1: usecols provided, 0: none provided
+ int usecols; // Boolean: 1: usecols provided, 0: none provided
int expected_fields;
int error_bad_lines;
@@ -200,9 +194,9 @@ typedef struct parser_t {
// thousands separator (comma, period)
char thousands;
- int header; // Boolean: 1: has header, 0: no header
- int header_start; // header row start
- int header_end; // header row end
+ int header; // Boolean: 1: has header, 0: no header
+ int header_start; // header row start
+ int header_end; // header row end
void *skipset;
int64_t skip_first_N_rows;
@@ -216,7 +210,6 @@ typedef struct parser_t {
int skip_empty_lines;
} parser_t;
-
typedef struct coliter_t {
char **words;
int *line_start;
@@ -226,15 +219,13 @@ typedef struct coliter_t {
void coliter_setup(coliter_t *self, parser_t *parser, int i, int start);
coliter_t *coliter_new(parser_t *self, int i);
-/* #define COLITER_NEXT(iter) iter->words[iter->line_start[iter->line++] + iter->col] */
-// #define COLITER_NEXT(iter) iter.words[iter.line_start[iter.line++] + iter.col]
+#define COLITER_NEXT(iter, word) \
+ do { \
+ const int i = *iter.line_start++ + iter.col; \
+ word = i < *iter.line_start ? iter.words[i] : ""; \
+ } while (0)
-#define COLITER_NEXT(iter, word) do { \
- const int i = *iter.line_start++ + iter.col; \
- word = i < *iter.line_start ? iter.words[i]: ""; \
- } while(0)
-
-parser_t* parser_new(void);
+parser_t *parser_new(void);
int parser_init(parser_t *self);
@@ -256,24 +247,17 @@ int tokenize_nrows(parser_t *self, size_t nrows);
int tokenize_all_rows(parser_t *self);
-/*
-
- Have parsed / type-converted a chunk of data and want to free memory from the
- token stream
-
- */
-//int clear_parsed_lines(parser_t *self, size_t nlines);
-
-int64_t str_to_int64(const char *p_item, int64_t int_min,
- int64_t int_max, int *error, char tsep);
-//uint64_t str_to_uint64(const char *p_item, uint64_t uint_max, int *error);
-
-double xstrtod(const char *p, char **q, char decimal, char sci, char tsep, int skip_trailing);
-double precise_xstrtod(const char *p, char **q, char decimal, char sci, char tsep, int skip_trailing);
-double round_trip(const char *p, char **q, char decimal, char sci, char tsep, int skip_trailing);
-//int P_INLINE to_complex(char *item, double *p_real, double *p_imag, char sci, char decimal);
-//int P_INLINE to_longlong(char *item, long long *p_value);
-//int P_INLINE to_longlong_thousands(char *item, long long *p_value, char tsep);
+// Have parsed / type-converted a chunk of data
+// and want to free memory from the token stream
+
+int64_t str_to_int64(const char *p_item, int64_t int_min, int64_t int_max,
+ int *error, char tsep);
+double xstrtod(const char *p, char **q, char decimal, char sci, char tsep,
+ int skip_trailing);
+double precise_xstrtod(const char *p, char **q, char decimal, char sci,
+ char tsep, int skip_trailing);
+double round_trip(const char *p, char **q, char decimal, char sci, char tsep,
+ int skip_trailing);
int to_boolean(const char *item, uint8_t *val);
-#endif // _PARSER_COMMON_H_
+#endif // PANDAS_SRC_PARSER_TOKENIZER_H_
| Remove dead code and reformat for style using Google's C++ style guide (found <a href="https://google.github.io/styleguide/cppguide.html">here</a>).
Also adds a line of code to install a fork of Google's C++ style checker (source code <a href="https://github.com/cpplint/cpplint">here</a>) when doing style checks in Travis. | https://api.github.com/repos/pandas-dev/pandas/pulls/14740 | 2016-11-25T10:02:26Z | 2016-12-06T18:43:13Z | 2016-12-06T18:43:13Z | 2016-12-06T22:31:59Z |
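The reformatted `xstrtod` above scales the parsed mantissa by its exponent using binary exponentiation (the `n & 1` / `p10 *= p10` loop). A stdlib-only Python sketch of that scaling step, with an illustrative function name not taken from the pandas source:

```python
def scale_by_pow10(number, exponent):
    """Scale `number` by 10**exponent via squaring, mirroring the
    final loop in xstrtod (simplified sketch, not the pandas C code)."""
    p10 = 10.0
    n = abs(exponent)
    while n:
        if n & 1:  # this bit of the exponent is set: apply current power
            if exponent < 0:
                number /= p10
            else:
                number *= p10
        n >>= 1
        p10 *= p10  # square: 10, 100, 10000, ...
    return number
```

This needs only O(log |exponent|) multiplications instead of one per unit of exponent, which is why the C version takes the same shape.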
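`str_to_int64` in the same diff guards against overflow by precomputing `int_max / 10` and `int_max % 10` (`pre_max` / `dig_pre_max`) and checking them before folding in each digit. A hedged Python sketch of that guard; the function name is illustrative, not part of the pandas source:

```python
def checked_accumulate(digits, int_max):
    """Accumulate decimal digits, refusing any digit that would push
    the running value past int_max (the str_to_int64 guard, sketched)."""
    pre_max = int_max // 10
    dig_pre_max = int_max % 10
    number = 0
    for ch in digits:
        d = ord(ch) - ord('0')
        # Safe to append d iff number*10 + d <= int_max, tested without
        # ever computing the possibly-overflowing product.
        if number < pre_max or (number == pre_max and d <= dig_pre_max):
            number = number * 10 + d
        else:
            raise OverflowError("value exceeds int_max")
    return number
```

The C version duplicates this check for the negative branch with `int_min`, since `|INT64_MIN| > INT64_MAX`.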
DOC: Update outdated caveats for Anaconda and HTML parsing (#9032) | diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst
index cfac5c257184d..8a1e06fa6d86c 100644
--- a/doc/source/gotchas.rst
+++ b/doc/source/gotchas.rst
@@ -514,40 +514,6 @@ parse HTML tables in the top-level pandas io function ``read_html``.
text from the URL over the web, i.e., IO (input-output). For very large
tables, this might not be true.
-**Issues with using** |Anaconda|_
-
- * `Anaconda`_ ships with `lxml`_ version 3.2.0; the following workaround for
- `Anaconda`_ was successfully used to deal with the versioning issues
- surrounding `lxml`_ and `BeautifulSoup4`_.
-
- .. note::
-
- Unless you have *both*:
-
- * A strong restriction on the upper bound of the runtime of some code
- that incorporates :func:`~pandas.io.html.read_html`
- * Complete knowledge that the HTML you will be parsing will be 100%
- valid at all times
-
- then you should install `html5lib`_ and things will work swimmingly
- without you having to muck around with `conda`. If you want the best of
- both worlds then install both `html5lib`_ and `lxml`_. If you do install
- `lxml`_ then you need to perform the following commands to ensure that
- lxml will work correctly:
-
- .. code-block:: sh
-
- # remove the included version
- conda remove lxml
-
- # install the latest version of lxml
- pip install 'git+git://github.com/lxml/lxml.git'
-
- # install the latest version of beautifulsoup4
- pip install 'bzr+lp:beautifulsoup'
-
- Note that you need `bzr <http://bazaar.canonical.com/en>`__ and `git
- <http://git-scm.com>`__ installed to perform the last two operations.
.. |svm| replace:: **strictly valid markup**
.. _svm: http://validator.w3.org/docs/help.html#validation_basics
@@ -561,9 +527,6 @@ parse HTML tables in the top-level pandas io function ``read_html``.
.. |lxml| replace:: **lxml**
.. _lxml: http://lxml.de
-.. |Anaconda| replace:: **Anaconda**
-.. _Anaconda: https://store.continuum.io/cshop/anaconda
-
Byte-Ordering Issues
--------------------
| - [x] closes #9032
| https://api.github.com/repos/pandas-dev/pandas/pulls/14739 | 2016-11-25T07:59:20Z | 2016-11-25T22:47:50Z | 2016-11-25T22:47:50Z | 2016-12-21T04:11:35Z |
ENH: support cut/qcut for datetime/timedelta (GH14714) | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 6fe0ad8092a03..5e94a95e38cbb 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -50,6 +50,7 @@ Other enhancements
- ``pd.read_excel`` now preserves sheet order when using ``sheetname=None`` (:issue:`9930`)
+- ``pd.cut`` and ``pd.qcut`` now support datetime64 and timedelta64 dtypes (:issue:`14714`)
.. _whatsnew_0200.api_breaking:
diff --git a/pandas/tools/tests/test_tile.py b/pandas/tools/tests/test_tile.py
index e5b9c65b515d6..33d2a01b1256e 100644
--- a/pandas/tools/tests/test_tile.py
+++ b/pandas/tools/tests/test_tile.py
@@ -12,6 +12,7 @@
from pandas.core.algorithms import quantile
from pandas.tools.tile import cut, qcut
import pandas.tools.tile as tmod
+from pandas import to_datetime, DatetimeIndex
class TestCut(tm.TestCase):
@@ -283,6 +284,35 @@ def test_single_bin(self):
result = cut(s, 1, labels=False)
tm.assert_series_equal(result, expected)
+ def test_datetime_cut(self):
+ # GH 14714
+ # testing for time data to be present as series
+ data = to_datetime(Series(['2013-01-01', '2013-01-02', '2013-01-03']))
+ result, bins = cut(data, 3, retbins=True)
+ expected = Series(['(2012-12-31 23:57:07.200000, 2013-01-01 16:00:00]',
+ '(2013-01-01 16:00:00, 2013-01-02 08:00:00]',
+ '(2013-01-02 08:00:00, 2013-01-03 00:00:00]'],
+ ).astype("category", ordered=True)
+ tm.assert_series_equal(result, expected)
+
+ # testing for time data to be present as list
+ data = [np.datetime64('2013-01-01'), np.datetime64('2013-01-02'),
+ np.datetime64('2013-01-03')]
+ result, bins = cut(data, 3, retbins=True)
+ tm.assert_series_equal(Series(result), expected)
+
+ # testing for time data to be present as ndarray
+ data = np.array([np.datetime64('2013-01-01'),
+ np.datetime64('2013-01-02'),
+ np.datetime64('2013-01-03')])
+ result, bins = cut(data, 3, retbins=True)
+ tm.assert_series_equal(Series(result), expected)
+
+ # testing for time data to be present as datetime index
+ data = DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03'])
+ result, bins = cut(data, 3, retbins=True)
+ tm.assert_series_equal(Series(result), expected)
+
def curpath():
pth, _ = os.path.split(os.path.abspath(__file__))
diff --git a/pandas/tools/tile.py b/pandas/tools/tile.py
index ef75f2f84779b..f62bac9e951a7 100644
--- a/pandas/tools/tile.py
+++ b/pandas/tools/tile.py
@@ -11,6 +11,8 @@
import pandas.core.algorithms as algos
import pandas.core.nanops as nanops
from pandas.compat import zip
+from pandas import to_timedelta, to_datetime
+from pandas.types.common import is_datetime64_dtype, is_timedelta64_dtype
import numpy as np
@@ -81,14 +83,17 @@ def cut(x, bins, right=True, labels=None, retbins=False, precision=3,
array([1, 1, 1, 1, 1], dtype=int64)
"""
# NOTE: this binning code is changed a bit from histogram for var(x) == 0
+
+ # for handling the cut for datetime and timedelta objects
+ x_is_series, series_index, name, x = _preprocess_for_cut(x)
+ x, dtype = _coerce_to_type(x)
+
if not np.iterable(bins):
if is_scalar(bins) and bins < 1:
raise ValueError("`bins` should be a positive integer.")
- try: # for array-like
- sz = x.size
- except AttributeError:
- x = np.asarray(x)
- sz = x.size
+
+ sz = x.size
+
if sz == 0:
raise ValueError('Cannot cut empty array')
# handle empty arrays. Can't determine range, so use 0-1.
@@ -114,9 +119,12 @@ def cut(x, bins, right=True, labels=None, retbins=False, precision=3,
if (np.diff(bins) < 0).any():
raise ValueError('bins must increase monotonically.')
- return _bins_to_cuts(x, bins, right=right, labels=labels,
- retbins=retbins, precision=precision,
- include_lowest=include_lowest)
+ fac, bins = _bins_to_cuts(x, bins, right=right, labels=labels,
+ precision=precision,
+ include_lowest=include_lowest, dtype=dtype)
+
+ return _postprocess_for_cut(fac, bins, retbins, x_is_series,
+ series_index, name)
def qcut(x, q, labels=None, retbins=False, precision=3):
@@ -166,26 +174,26 @@ def qcut(x, q, labels=None, retbins=False, precision=3):
>>> pd.qcut(range(5), 4, labels=False)
array([0, 0, 1, 2, 3], dtype=int64)
"""
+ x_is_series, series_index, name, x = _preprocess_for_cut(x)
+
+ x, dtype = _coerce_to_type(x)
+
if is_integer(q):
quantiles = np.linspace(0, 1, q + 1)
else:
quantiles = q
bins = algos.quantile(x, quantiles)
- return _bins_to_cuts(x, bins, labels=labels, retbins=retbins,
- precision=precision, include_lowest=True)
+ fac, bins = _bins_to_cuts(x, bins, labels=labels,
+ precision=precision, include_lowest=True,
+ dtype=dtype)
+ return _postprocess_for_cut(fac, bins, retbins, x_is_series,
+ series_index, name)
-def _bins_to_cuts(x, bins, right=True, labels=None, retbins=False,
- precision=3, name=None, include_lowest=False):
- x_is_series = isinstance(x, Series)
- series_index = None
-
- if x_is_series:
- series_index = x.index
- if name is None:
- name = x.name
- x = np.asarray(x)
+def _bins_to_cuts(x, bins, right=True, labels=None,
+ precision=3, include_lowest=False,
+ dtype=None):
side = 'left' if right else 'right'
ids = bins.searchsorted(x, side=side)
@@ -205,7 +213,8 @@ def _bins_to_cuts(x, bins, right=True, labels=None, retbins=False,
while True:
try:
levels = _format_levels(bins, precision, right=right,
- include_lowest=include_lowest)
+ include_lowest=include_lowest,
+ dtype=dtype)
except ValueError:
increases += 1
precision += 1
@@ -229,18 +238,12 @@ def _bins_to_cuts(x, bins, right=True, labels=None, retbins=False,
fac = fac.astype(np.float64)
np.putmask(fac, na_mask, np.nan)
- if x_is_series:
- fac = Series(fac, index=series_index, name=name)
-
- if not retbins:
- return fac
-
return fac, bins
def _format_levels(bins, prec, right=True,
- include_lowest=False):
- fmt = lambda v: _format_label(v, precision=prec)
+ include_lowest=False, dtype=None):
+ fmt = lambda v: _format_label(v, precision=prec, dtype=dtype)
if right:
levels = []
for a, b in zip(bins, bins[1:]):
@@ -258,12 +261,16 @@ def _format_levels(bins, prec, right=True,
else:
levels = ['[%s, %s)' % (fmt(a), fmt(b))
for a, b in zip(bins, bins[1:])]
-
return levels
-def _format_label(x, precision=3):
+def _format_label(x, precision=3, dtype=None):
fmt_str = '%%.%dg' % precision
+
+ if is_datetime64_dtype(dtype):
+ return to_datetime(x, unit='ns')
+ if is_timedelta64_dtype(dtype):
+ return to_timedelta(x, unit='ns')
if np.isinf(x):
return str(x)
elif is_float(x):
@@ -300,3 +307,55 @@ def _trim_zeros(x):
if len(x) > 1 and x[-1] == '.':
x = x[:-1]
return x
+
+
+def _coerce_to_type(x):
+ """
+ if the passed data is of datetime/timedelta type,
+ this method converts it to integer so that cut method can
+ handle it
+ """
+ dtype = None
+
+ if is_timedelta64_dtype(x):
+ x = to_timedelta(x).view(np.int64)
+ dtype = np.timedelta64
+ elif is_datetime64_dtype(x):
+ x = to_datetime(x).view(np.int64)
+ dtype = np.datetime64
+
+ return x, dtype
+
+
+def _preprocess_for_cut(x):
+ """
+ handles preprocessing for cut where we convert the passed
+ input to an array, strip the index information and store it
+ separately
+ """
+ x_is_series = isinstance(x, Series)
+ series_index = None
+ name = None
+
+ if x_is_series:
+ series_index = x.index
+ name = x.name
+
+ x = np.asarray(x)
+
+ return x_is_series, series_index, name, x
+
+
+def _postprocess_for_cut(fac, bins, retbins, x_is_series, series_index, name):
+ """
+ handles post-processing for the cut method where
+ we combine the index information if the originally passed
+ datatype was a Series
+ """
+ if x_is_series:
+ fac = Series(fac, index=series_index, name=name)
+
+ if not retbins:
+ return fac
+
+ return fac, bins
| - [x] closes #14714
- [x] tests added / passed
- [x] passes ``git diff upstream/master | flake8 --diff``
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/14737 | 2016-11-25T06:43:10Z | 2016-12-03T10:11:59Z | 2016-12-03T10:11:59Z | 2016-12-04T22:35:52Z |
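The refactoring in this PR splits `cut`'s datetime support into a coerce–bin–restore pattern: `_coerce_to_type` views datetime/timedelta data as plain integers so the generic binning code can handle them, and `_format_label` converts the edges back afterwards. A minimal pure-Python sketch of that idea, under loose assumptions (epoch seconds instead of pandas' int64 nanosecond view; the function names are illustrative, not pandas internals):

```python
from bisect import bisect_left
from datetime import datetime, timezone

def coerce_to_int(values):
    # mirror of the _coerce_to_type idea: datetime-likes become plain
    # integers (epoch seconds here; pandas uses an int64 nanosecond view)
    return [int(v.replace(tzinfo=timezone.utc).timestamp()) for v in values]

def cut_datetimes(values, bin_edges):
    # bin the integer-coded values against integer-coded edges using a
    # 'left' search, i.e. right-closed intervals (a, b], as in cut(right=True)
    int_edges = coerce_to_int(bin_edges)
    labels = ['({}, {}]'.format(bin_edges[i], bin_edges[i + 1])
              for i in range(len(bin_edges) - 1)]
    out = []
    for x in coerce_to_int(values):
        code = bisect_left(int_edges, x) - 1
        out.append(labels[code] if 0 <= code < len(labels) else None)
    return out
```

As in `_bins_to_cuts` with `right=True`, the search side is 'left', so the intervals are right-closed and a value equal to the lowest edge falls outside every bin unless `include_lowest`-style special-casing is added.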
CLN: move assignment from header into cython | diff --git a/pandas/src/datetime.pxd b/pandas/src/datetime.pxd
index 5f7de8244d17e..3f0e6a563d66b 100644
--- a/pandas/src/datetime.pxd
+++ b/pandas/src/datetime.pxd
@@ -42,9 +42,6 @@ cdef extern from "datetime.h":
object PyDateTime_FromDateAndTime(int year, int month, int day, int hour,
int minute, int second, int us)
-cdef extern from "datetime_helper.h":
- void mangle_nat(object o)
-
cdef extern from "numpy/ndarrayobject.h":
ctypedef int64_t npy_timedelta
diff --git a/pandas/src/datetime_helper.h b/pandas/src/datetime_helper.h
index d78e91e747854..11399181fa4e7 100644
--- a/pandas/src/datetime_helper.h
+++ b/pandas/src/datetime_helper.h
@@ -7,11 +7,6 @@
#define PyInt_AS_LONG PyLong_AsLong
#endif
-void mangle_nat(PyObject *val) {
- PyDateTime_GET_MONTH(val) = -1;
- PyDateTime_GET_DAY(val) = -1;
-}
-
npy_int64 get_long_attr(PyObject *o, const char *attr) {
npy_int64 long_val;
PyObject *value = PyObject_GetAttrString(o, attr);
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index bab45595cd60f..edc2d8d2e4d75 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -738,7 +738,8 @@ class NaTType(_NaT):
cdef _NaT base
base = _NaT.__new__(cls, 1, 1, 1)
- mangle_nat(base)
+ base._day = -1
+ base._month = -1
base.value = NPY_NAT
return base
| an insignificant refactoring that allows PyPy 2.7 to build and import Pandas. The original code uses functions that are macros in CPython but actual C functions in PyPy, and one cannot assign to the result of a function call.
Note that while PyPy imports Pandas, it very quickly segfaults. Apparently PyPy does not yet play well with enough of Cython; I suspect strange interactions with tp_new and tp_deallocate | https://api.github.com/repos/pandas-dev/pandas/pulls/14731 | 2016-11-24T18:18:28Z | 2016-11-25T12:52:11Z | 2016-11-25T12:52:11Z | 2016-11-25T12:52:16Z
ENH: add data hashing routines | diff --git a/asv_bench/benchmarks/algorithms.py b/asv_bench/benchmarks/algorithms.py
index 9807639143ddb..53b7d55368f6a 100644
--- a/asv_bench/benchmarks/algorithms.py
+++ b/asv_bench/benchmarks/algorithms.py
@@ -1,5 +1,6 @@
import numpy as np
import pandas as pd
+from pandas.util import testing as tm
class algorithm(object):
@@ -55,3 +56,35 @@ def time_add_overflow_neg_arr(self):
def time_add_overflow_mixed_arr(self):
self.checked_add(self.arr, self.arrmixed)
+
+
+class hashing(object):
+ goal_time = 0.2
+
+ def setup(self):
+ N = 100000
+
+ self.df = pd.DataFrame(
+ {'A': pd.Series(tm.makeStringIndex(100).take(
+ np.random.randint(0, 100, size=N))),
+ 'B': pd.Series(tm.makeStringIndex(10000).take(
+ np.random.randint(0, 10000, size=N))),
+ 'D': np.random.randn(N),
+ 'E': np.arange(N),
+ 'F': pd.date_range('20110101', freq='s', periods=N),
+ 'G': pd.timedelta_range('1 day', freq='s', periods=N),
+ })
+ self.df['C'] = self.df['B'].astype('category')
+ self.df.iloc[10:20] = np.nan
+
+ def time_frame(self):
+ self.df.hash()
+
+ def time_series_int(self):
+ self.df.E.hash()
+
+ def time_series_string(self):
+ self.df.B.hash()
+
+ def time_series_categorical(self):
+ self.df.C.hash()
diff --git a/pandas/src/hash.pyx b/pandas/src/hash.pyx
new file mode 100644
index 0000000000000..b8c309f1f7a13
--- /dev/null
+++ b/pandas/src/hash.pyx
@@ -0,0 +1,180 @@
+# cython: profile=False
+# Translated from the reference implementation
+# at https://github.com/veorq/SipHash
+
+import cython
+cimport numpy as cnp
+import numpy as np
+from numpy cimport ndarray, uint8_t, uint32_t, uint64_t
+
+from cpython cimport (PyString_Check,
+ PyBytes_Check,
+ PyUnicode_Check)
+from libc.stdlib cimport malloc, free
+
+DEF cROUNDS = 2
+DEF dROUNDS = 4
+
+
+@cython.boundscheck(False)
+def hash_object_array(ndarray[object] arr, object key, object encoding='utf8'):
+ """
+ Parameters
+ ----------
+ arr : 1-d object ndarray of objects
+ key : hash key; must be 16 bytes long once encoded
+ encoding : encoding for key & arr, default to 'utf8'
+
+ Returns
+ -------
+ 1-d uint64 ndarray of hashes
+
+ """
+ cdef:
+ Py_ssize_t i, l, n
+ ndarray[uint64_t] result
+ bytes data, k
+ uint8_t *kb, *lens
+ char **vecs, *cdata
+ object val
+
+ k = <bytes>key.encode(encoding)
+ kb = <uint8_t *>k
+ if len(k) != 16:
+ raise ValueError(
+ 'key should be a 16-byte string encoded, got {!r} (len {})'.format(
+ k, len(k)))
+
+ n = len(arr)
+
+ # create an array of bytes
+ vecs = <char **> malloc(n * sizeof(char *))
+ lens = <uint8_t*> malloc(n * sizeof(uint8_t))
+
+ cdef list datas = []
+ for i in range(n):
+ val = arr[i]
+ if PyString_Check(val):
+ data = <bytes>val.encode(encoding)
+ elif PyBytes_Check(val):
+ data = <bytes>val
+ elif PyUnicode_Check(val):
+ data = <bytes>val.encode(encoding)
+ else:
+ # non-strings
+ data = <bytes>str(val).encode(encoding)
+
+ l = len(data)
+ lens[i] = l
+ cdata = data
+
+ # keep the reference alive through the end of the
+ # function
+ datas.append(data)
+ vecs[i] = cdata
+
+ result = np.empty(n, dtype=np.uint64)
+ with nogil:
+ for i in range(n):
+ result[i] = low_level_siphash(<uint8_t *>vecs[i], lens[i], kb)
+
+ free(vecs)
+ free(lens)
+ return result
+
+cdef inline uint64_t _rotl(uint64_t x, uint64_t b) nogil:
+ return (x << b) | (x >> (64 - b))
+
+cdef inline void u32to8_le(uint8_t* p, uint32_t v) nogil:
+ p[0] = <uint8_t>(v)
+ p[1] = <uint8_t>(v >> 8)
+ p[2] = <uint8_t>(v >> 16)
+ p[3] = <uint8_t>(v >> 24)
+
+cdef inline void u64to8_le(uint8_t* p, uint64_t v) nogil:
+ u32to8_le(p, <uint32_t>v)
+ u32to8_le(p + 4, <uint32_t>(v >> 32))
+
+cdef inline uint64_t u8to64_le(uint8_t* p) nogil:
+ return (<uint64_t>p[0] |
+ <uint64_t>p[1] << 8 |
+ <uint64_t>p[2] << 16 |
+ <uint64_t>p[3] << 24 |
+ <uint64_t>p[4] << 32 |
+ <uint64_t>p[5] << 40 |
+ <uint64_t>p[6] << 48 |
+ <uint64_t>p[7] << 56)
+
+cdef inline void _sipround(uint64_t* v0, uint64_t* v1,
+ uint64_t* v2, uint64_t* v3) nogil:
+ v0[0] += v1[0]
+ v1[0] = _rotl(v1[0], 13)
+ v1[0] ^= v0[0]
+ v0[0] = _rotl(v0[0], 32)
+ v2[0] += v3[0]
+ v3[0] = _rotl(v3[0], 16)
+ v3[0] ^= v2[0]
+ v0[0] += v3[0]
+ v3[0] = _rotl(v3[0], 21)
+ v3[0] ^= v0[0]
+ v2[0] += v1[0]
+ v1[0] = _rotl(v1[0], 17)
+ v1[0] ^= v2[0]
+ v2[0] = _rotl(v2[0], 32)
+
+cpdef uint64_t siphash(bytes data, bytes key) except? 0:
+ if len(key) != 16:
+ raise ValueError(
+ 'key should be a 16-byte bytestring, got {!r} (len {})'.format(
+ key, len(key)))
+ return low_level_siphash(data, len(data), key)
+
+
+@cython.cdivision(True)
+cdef uint64_t low_level_siphash(uint8_t* data, size_t datalen,
+ uint8_t* key) nogil:
+ cdef uint64_t v0 = 0x736f6d6570736575ULL
+ cdef uint64_t v1 = 0x646f72616e646f6dULL
+ cdef uint64_t v2 = 0x6c7967656e657261ULL
+ cdef uint64_t v3 = 0x7465646279746573ULL
+ cdef uint64_t b
+ cdef uint64_t k0 = u8to64_le(key)
+ cdef uint64_t k1 = u8to64_le(key + 8)
+ cdef uint64_t m
+ cdef int i
+ cdef uint8_t* end = data + datalen - (datalen % sizeof(uint64_t))
+ cdef int left = datalen & 7
+ cdef int left_byte
+
+ b = (<uint64_t>datalen) << 56
+ v3 ^= k1
+ v2 ^= k0
+ v1 ^= k1
+ v0 ^= k0
+
+ while (data != end):
+ m = u8to64_le(data)
+ v3 ^= m
+ for i in range(cROUNDS):
+ _sipround(&v0, &v1, &v2, &v3)
+ v0 ^= m
+
+ data += sizeof(uint64_t)
+
+ for i in range(left-1, -1, -1):
+ b |= (<uint64_t>data[i]) << (i * 8)
+
+ v3 ^= b
+
+ for i in range(cROUNDS):
+ _sipround(&v0, &v1, &v2, &v3)
+
+ v0 ^= b
+ v2 ^= 0xff
+
+ for i in range(dROUNDS):
+ _sipround(&v0, &v1, &v2, &v3)
+
+ b = v0 ^ v1 ^ v2 ^ v3
+
+ return b
diff --git a/pandas/tools/hashing.py b/pandas/tools/hashing.py
new file mode 100644
index 0000000000000..aa18b8bc70c37
--- /dev/null
+++ b/pandas/tools/hashing.py
@@ -0,0 +1,137 @@
+"""
+data hash pandas / numpy objects
+"""
+
+import numpy as np
+from pandas import _hash, Series, factorize, Categorical, Index
+from pandas.lib import infer_dtype
+from pandas.types.generic import ABCIndexClass, ABCSeries, ABCDataFrame
+from pandas.types.common import is_categorical_dtype
+
+# 16 byte long hashing key
+_default_hash_key = '0123456789123456'
+
+
+def hash_pandas_object(obj, index=True, encoding='utf8', hash_key=None):
+ """
+ Return a data hash of the Index/Series/DataFrame
+
+ .. versionadded:: 0.19.2
+
+ Parameters
+ ----------
+ index : boolean, default True
+ include the index in the hash (if Series/DataFrame)
+ encoding : string, default 'utf8'
+ encoding for data & key when strings
+ hash_key : string key to encode, defaults to _default_hash_key
+
+ Returns
+ -------
+ Series of uint64, same length as the object
+
+ """
+ if hash_key is None:
+ hash_key = _default_hash_key
+
+ def adder(h, hashed_to_add):
+ h = np.multiply(h, np.uint(3), h)
+ return np.add(h, hashed_to_add, h)
+
+ if isinstance(obj, ABCIndexClass):
+ h = hash_array(obj.values, encoding, hash_key).astype('uint64')
+ h = Series(h, index=obj, dtype='uint64')
+ elif isinstance(obj, ABCSeries):
+ h = hash_array(obj.values, encoding, hash_key).astype('uint64')
+ if index:
+ h = adder(h, hash_pandas_object(obj.index,
+ index=False,
+ encoding=encoding,
+ hash_key=hash_key).values)
+ h = Series(h, index=obj.index, dtype='uint64')
+ elif isinstance(obj, ABCDataFrame):
+ cols = obj.iteritems()
+ first_series = next(cols)[1]
+ h = hash_array(first_series.values, encoding,
+ hash_key).astype('uint64')
+ for _, col in cols:
+ h = adder(h, hash_array(col.values, encoding, hash_key))
+ if index:
+ h = adder(h, hash_pandas_object(obj.index,
+ index=False,
+ encoding=encoding,
+ hash_key=hash_key).values)
+
+ h = Series(h, index=obj.index, dtype='uint64')
+ else:
+ raise TypeError("Unexpected type for hashing %s" % type(obj))
+ return h
+
+
+def hash_array(vals, encoding='utf8', hash_key=None):
+ """
+ Given a 1d array, return an array of deterministic integers.
+
+ .. versionadded:: 0.19.2
+
+ Parameters
+ ----------
+ vals : ndarray
+ encoding : string, default 'utf8'
+ encoding for data & key when strings
+ hash_key : string key to encode, defaults to _default_hash_key
+
+ Returns
+ -------
+ 1d uint64 numpy array of hash values, same length as the vals
+
+ """
+
+ # work with categoricals as ints. (This check is above the complex
+ # check so that we don't ask numpy if categorical is a subdtype of
+ # complex, as it will choke.)
+ if hash_key is None:
+ hash_key = _default_hash_key
+
+ if is_categorical_dtype(vals.dtype):
+ vals = vals.codes
+
+ # we'll be working with everything as 64-bit values, so handle this
+ # 128-bit value early
+ if np.issubdtype(vals.dtype, np.complex128):
+ return hash_array(vals.real) + 23 * hash_array(vals.imag)
+
+ # MAIN LOGIC:
+ inferred = infer_dtype(vals)
+
+ # First, turn whatever array this is into unsigned 64-bit ints, if we can
+ # manage it.
+ if inferred == 'boolean':
+ vals = vals.astype('u8')
+
+ if (np.issubdtype(vals.dtype, np.datetime64) or
+ np.issubdtype(vals.dtype, np.timedelta64) or
+ np.issubdtype(vals.dtype, np.number)) and vals.dtype.itemsize <= 8:
+
+ vals = vals.view('u{}'.format(vals.dtype.itemsize)).astype('u8')
+ else:
+
+ # it's MUCH faster to categorize object dtypes, then hash and rename
+ codes, categories = factorize(vals, sort=False)
+ categories = Index(categories)
+ c = Series(Categorical(codes, categories,
+ ordered=False, fastpath=True))
+ vals = _hash.hash_object_array(categories.values,
+ hash_key,
+ encoding)
+
+ # rename & extract
+ vals = c.cat.rename_categories(Index(vals)).astype(np.uint64).values
+
+ # Then, redistribute these 64-bit ints within the space of 64-bit ints
+ vals ^= vals >> 30
+ vals *= np.uint64(0xbf58476d1ce4e5b9)
+ vals ^= vals >> 27
+ vals *= np.uint64(0x94d049bb133111eb)
+ vals ^= vals >> 31
+ return vals
diff --git a/pandas/tools/tests/test_hashing.py b/pandas/tools/tests/test_hashing.py
new file mode 100644
index 0000000000000..3e4c77244d2f7
--- /dev/null
+++ b/pandas/tools/tests/test_hashing.py
@@ -0,0 +1,143 @@
+import numpy as np
+import pandas as pd
+
+from pandas import DataFrame, Series, Index
+from pandas.tools.hashing import hash_array, hash_pandas_object
+import pandas.util.testing as tm
+
+
+class TestHashing(tm.TestCase):
+
+ _multiprocess_can_split_ = True
+
+ def setUp(self):
+ self.df = DataFrame(
+ {'i32': np.array([1, 2, 3] * 3, dtype='int32'),
+ 'f32': np.array([None, 2.5, 3.5] * 3, dtype='float32'),
+ 'cat': Series(['a', 'b', 'c'] * 3).astype('category'),
+ 'obj': Series(['d', 'e', 'f'] * 3),
+ 'bool': np.array([True, False, True] * 3),
+ 'dt': Series(pd.date_range('20130101', periods=9)),
+ 'dt_tz': Series(pd.date_range('20130101', periods=9,
+ tz='US/Eastern')),
+ 'td': Series(pd.timedelta_range('2000', periods=9))})
+
+ def test_consistency(self):
+ # check that our hash doesn't change because of a mistake
+ # in the actual code; this is the ground truth
+ result = hash_pandas_object(Index(['foo', 'bar', 'baz']))
+ expected = Series(np.array([3600424527151052760, 1374399572096150070,
+ 477881037637427054], dtype='uint64'),
+ index=['foo', 'bar', 'baz'])
+ tm.assert_series_equal(result, expected)
+
+ def test_hash_array(self):
+ for name, s in self.df.iteritems():
+ a = s.values
+ tm.assert_numpy_array_equal(hash_array(a), hash_array(a))
+
+ def check_equal(self, obj, **kwargs):
+ a = hash_pandas_object(obj, **kwargs)
+ b = hash_pandas_object(obj, **kwargs)
+ tm.assert_series_equal(a, b)
+
+ kwargs.pop('index', None)
+ a = hash_pandas_object(obj, **kwargs)
+ b = hash_pandas_object(obj, **kwargs)
+ tm.assert_series_equal(a, b)
+
+ def check_not_equal_with_index(self, obj):
+
+ # check that we are not hashing the same if
+ # we include the index
+ if not isinstance(obj, Index):
+ a = hash_pandas_object(obj, index=True)
+ b = hash_pandas_object(obj, index=False)
+ self.assertFalse((a == b).all())
+
+ def test_hash_pandas_object(self):
+
+ for obj in [Series([1, 2, 3]),
+ Series([1.0, 1.5, 3.2]),
+ Series([1.0, 1.5, np.nan]),
+ Series([1.0, 1.5, 3.2], index=[1.5, 1.1, 3.3]),
+ Series(['a', 'b', 'c']),
+ Series(['a', np.nan, 'c']),
+ Series([True, False, True]),
+ Index([1, 2, 3]),
+ Index([True, False, True]),
+ DataFrame({'x': ['a', 'b', 'c'], 'y': [1, 2, 3]}),
+ tm.makeMissingDataframe(),
+ tm.makeMixedDataFrame(),
+ tm.makeTimeDataFrame(),
+ tm.makeTimeSeries(),
+ tm.makeTimedeltaIndex(),
+ Series([1, 2, 3], index=pd.MultiIndex.from_tuples(
+ [('a', 1), ('a', 2), ('b', 1)]))]:
+ self.check_equal(obj)
+ self.check_not_equal_with_index(obj)
+
+ def test_hash_pandas_object2(self):
+ for name, s in self.df.iteritems():
+ self.check_equal(s)
+ self.check_not_equal_with_index(s)
+
+ def test_hash_pandas_empty_object(self):
+ for obj in [Series([], dtype='float64'),
+ Series([], dtype='object'),
+ Index([])]:
+ self.check_equal(obj)
+
+ # these are by-definition the same with
+ # or w/o the index as the data is empty
+
+ def test_errors(self):
+
+ for obj in [pd.Timestamp('20130101'), tm.makePanel()]:
+ def f():
+ hash_pandas_object(obj)
+
+ self.assertRaises(TypeError, f)
+
+ def test_hash_keys(self):
+ # using different hash keys, should have different hashes
+ # for the same data
+
+ # this only matters for object dtypes
+ obj = Series(list('abc'))
+ a = hash_pandas_object(obj, hash_key='9876543210123456')
+ b = hash_pandas_object(obj, hash_key='9876543210123465')
+ self.assertTrue((a != b).all())
+
+ def test_invalid_key(self):
+ # this only matters for object dtypes
+ def f():
+ hash_pandas_object(Series(list('abc')), hash_key='foo')
+ self.assertRaises(ValueError, f)
+
+ def test_mixed(self):
+ # mixed objects
+ obj = Series(['1', 2, 3])
+ self.check_equal(obj)
+ self.check_not_equal_with_index(obj)
+
+ # mixed are actually equal when stringified
+ a = hash_pandas_object(obj)
+ b = hash_pandas_object(Series(list('123')))
+ self.assert_series_equal(a, b)
+
+ def test_already_encoded(self):
+ # if already encoded then ok
+
+ obj = Series(list('abc')).str.encode('utf8')
+ self.check_equal(obj)
+
+ def test_alternate_encoding(self):
+
+ obj = Series(list('abc'))
+ self.check_equal(obj, encoding='ascii')
+
+ def test_long_strings(self):
+
+ obj = Index(tm.rands_array(nchars=10000, size=100))
+ self.check_equal(obj)
diff --git a/setup.py b/setup.py
index 2dd3fec150781..8d2e2669852ea 100755
--- a/setup.py
+++ b/setup.py
@@ -331,6 +331,7 @@ class CheckSDist(sdist_class):
'pandas/src/period.pyx',
'pandas/src/sparse.pyx',
'pandas/src/testing.pyx',
+ 'pandas/src/hash.pyx',
'pandas/io/sas/saslib.pyx']
def initialize_options(self):
@@ -501,10 +502,12 @@ def pxd(name):
'sources': ['pandas/src/parser/tokenizer.c',
'pandas/src/parser/io.c']},
_sparse={'pyxfile': 'src/sparse',
- 'depends': ([srcpath('sparse', suffix='.pyx')]
- + _pxi_dep['_sparse'])},
+ 'depends': ([srcpath('sparse', suffix='.pyx')] +
+ _pxi_dep['_sparse'])},
_testing={'pyxfile': 'src/testing',
'depends': [srcpath('testing', suffix='.pyx')]},
+ _hash={'pyxfile': 'src/hash',
+ 'depends': [srcpath('hash', suffix='.pyx')]},
)
ext_data["io.sas.saslib"] = {'pyxfile': 'io/sas/saslib'}
| xref https://github.com/dask/dask/pull/1807 | https://api.github.com/repos/pandas-dev/pandas/pulls/14729 | 2016-11-24T15:21:19Z | 2016-11-28T16:19:06Z | 2016-11-28T16:19:06Z | 2016-11-28T16:31:04Z |
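The last five lines of `hash_array` above redistribute the 64-bit values with an xorshift-multiply finalizer (the same constants as the splitmix64 generator's output mix). A pure-Python sketch of that step, with numpy's silent uint64 wrap-around written out as an explicit mask:

```python
MASK64 = (1 << 64) - 1

def mix64(x):
    # xorshift / multiply finalizer from hash_array, with explicit
    # 64-bit wrap-around in place of numpy's uint64 overflow
    x ^= x >> 30
    x = (x * 0xbf58476d1ce4e5b9) & MASK64
    x ^= x >> 27
    x = (x * 0x94d049bb133111eb) & MASK64
    x ^= x >> 31
    return x
```

Because every step is invertible (odd multipliers, xorshifts), the mix is a bijection on 64-bit integers: distinct inputs stay distinct while nearby integers scatter across the full space, which keeps the per-column `h = 3 * h + hashed` combination in `hash_pandas_object` from colliding trivially.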
Minor typo in frame.py | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f704a61042b4f..bf1ff28cd63b1 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3233,7 +3233,7 @@ def trans(v):
# try to be helpful
if isinstance(self.columns, MultiIndex):
raise ValueError('Cannot sort by column %s in a '
- 'multi-index you need to explicity '
+ 'multi-index you need to explicitly '
'provide all the levels' % str(by))
raise ValueError('Cannot sort by duplicate column %s' %
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master | flake8 --diff``
- [ ] whatsnew entry
typo "explicitly" | https://api.github.com/repos/pandas-dev/pandas/pulls/14724 | 2016-11-23T21:15:38Z | 2016-11-23T21:23:32Z | 2016-11-23T21:23:32Z | 2016-11-24T09:22:27Z |
doc: comverted --> converted | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 05148c1f7e80a..f704a61042b4f 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1346,7 +1346,7 @@ def to_csv(self, path_or_buf=None, sep=",", na_rep='', float_format=None,
file
quoting : optional constant from csv module
defaults to csv.QUOTE_MINIMAL. If you have set a `float_format`
- then floats are comverted to strings and thus csv.QUOTE_NONNUMERIC
+ then floats are converted to strings and thus csv.QUOTE_NONNUMERIC
will treat them as non-numeric
quotechar : string (length 1), default '\"'
character used to quote fields
| Just fixing a spelling error =)
| https://api.github.com/repos/pandas-dev/pandas/pulls/14722 | 2016-11-23T18:49:28Z | 2016-11-23T20:35:31Z | 2016-11-23T20:35:31Z | 2016-11-24T09:22:31Z |
DOC: fix typo in merge_asof docstring examples | diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index d2060185c3246..8d2f92ad58a88 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -371,7 +371,7 @@ def merge_asof(left, right, on=None,
By default we are taking the asof of the quotes
- >>> pd.asof_merge(trades, quotes,
+ >>> pd.merge_asof(trades, quotes,
... on='time',
... by='ticker')
time ticker price quantity bid ask
@@ -383,7 +383,7 @@ def merge_asof(left, right, on=None,
We only asof within 2ms between the quote time and the trade time
- >>> pd.asof_merge(trades, quotes,
+ >>> pd.merge_asof(trades, quotes,
... on='time',
... by='ticker',
... tolerance=pd.Timedelta('2ms'))
@@ -398,7 +398,7 @@ def merge_asof(left, right, on=None,
and we exclude exact matches on time. However *prior* data will
propagate forward
- >>> pd.asof_merge(trades, quotes,
+ >>> pd.merge_asof(trades, quotes,
... on='time',
... by='ticker',
... tolerance=pd.Timedelta('10ms'),
| Just a trivial fix to docs - incorrect function name in doctests.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14718 | 2016-11-23T00:20:44Z | 2016-11-23T08:35:09Z | 2016-11-23T08:35:09Z | 2016-11-23T08:35:09Z |
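For context, the `merge_asof` whose docstring is fixed here performs, per left-hand key, a backward search: take the last right-hand key at or before it, optionally rejecting matches further away than a tolerance. A stdlib sketch of that lookup over sorted numeric keys (the function name and list-based interface are illustrative only):

```python
from bisect import bisect_right

def asof_backward(left_keys, right_keys, tolerance=None):
    # for each left key, index of the last right key <= it; None when no
    # right key qualifies or the match is further away than `tolerance`.
    # right_keys must be sorted, as merge_asof requires sorted 'on' keys.
    out = []
    for k in left_keys:
        i = bisect_right(right_keys, k) - 1
        if i < 0 or (tolerance is not None and k - right_keys[i] > tolerance):
            out.append(None)
        else:
            out.append(i)
    return out
```

Here `asof_backward([5, 10], [1, 4, 7])` pairs 5 with the right-hand row at key 4 and 10 with the row at key 7; with `tolerance=2` the second match is dropped because 10 - 7 exceeds it.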
BUG: Respect the dtype parameter for empty CSV | diff --git a/doc/source/whatsnew/v0.19.2.txt b/doc/source/whatsnew/v0.19.2.txt
index 5a255d1e62043..49c8330490ed1 100644
--- a/doc/source/whatsnew/v0.19.2.txt
+++ b/doc/source/whatsnew/v0.19.2.txt
@@ -61,6 +61,7 @@ Bug Fixes
- Bug in ``.to_clipboard()`` and Excel compat (:issue:`12529`)
+- Bug in ``pd.read_csv()`` in which the ``dtype`` parameter was not being respected for empty data (:issue:`14712`)
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 581106924c77e..ff2874041f6f9 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -81,3 +81,4 @@ Performance Improvements
Bug Fixes
~~~~~~~~~
+
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 3fe5e5e826ebd..929b360854d5b 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -20,6 +20,7 @@
is_float,
is_scalar)
from pandas.core.index import Index, MultiIndex, RangeIndex
+from pandas.core.series import Series
from pandas.core.frame import DataFrame
from pandas.core.common import AbstractMethodError
from pandas.core.config import get_option
@@ -2791,19 +2792,27 @@ def _clean_index_names(columns, index_col):
def _get_empty_meta(columns, index_col, index_names, dtype=None):
columns = list(columns)
- if dtype is None:
- dtype = {}
+ # Convert `dtype` to a defaultdict of some kind.
+ # This will enable us to write `dtype[col_name]`
+ # without worrying about KeyError issues later on.
+ if not isinstance(dtype, dict):
+ # if dtype == None, default will be np.object.
+ default_dtype = dtype or np.object
+ dtype = defaultdict(lambda: default_dtype)
else:
- if not isinstance(dtype, dict):
- dtype = defaultdict(lambda: dtype)
+ # Save a copy of the dictionary.
+ _dtype = dtype.copy()
+ dtype = defaultdict(lambda: np.object)
+
# Convert column indexes to column names.
- dtype = dict((columns[k] if is_integer(k) else k, v)
- for k, v in compat.iteritems(dtype))
+ for k, v in compat.iteritems(_dtype):
+ col = columns[k] if is_integer(k) else k
+ dtype[col] = v
if index_col is None or index_col is False:
index = Index([])
else:
- index = [np.empty(0, dtype=dtype.get(index_name, np.object))
+ index = [Series([], dtype=dtype[index_name])
for index_name in index_names]
index = MultiIndex.from_arrays(index, names=index_names)
index_col.sort()
@@ -2811,7 +2820,7 @@ def _get_empty_meta(columns, index_col, index_names, dtype=None):
columns.pop(n - i)
col_dict = dict((col_name,
- np.empty(0, dtype=dtype.get(col_name, np.object)))
+ Series([], dtype=dtype[col_name]))
for col_name in columns)
return index, columns, col_dict
diff --git a/pandas/io/tests/parser/c_parser_only.py b/pandas/io/tests/parser/c_parser_only.py
index 75b99654dbf89..9cbe88d4032a3 100644
--- a/pandas/io/tests/parser/c_parser_only.py
+++ b/pandas/io/tests/parser/c_parser_only.py
@@ -561,3 +561,49 @@ def test_internal_null_byte(self):
result = self.read_csv(StringIO(data), names=names)
tm.assert_frame_equal(result, expected)
+
+ def test_empty_dtype(self):
+ # see gh-14712
+ data = 'a,b'
+
+ expected = pd.DataFrame(columns=['a', 'b'], dtype=np.float64)
+ result = self.read_csv(StringIO(data), header=0, dtype=np.float64)
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame({'a': pd.Categorical([]),
+ 'b': pd.Categorical([])},
+ index=[])
+ result = self.read_csv(StringIO(data), header=0,
+ dtype='category')
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame(columns=['a', 'b'], dtype='datetime64[ns]')
+ result = self.read_csv(StringIO(data), header=0,
+ dtype='datetime64[ns]')
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame({'a': pd.Series([], dtype='timedelta64[ns]'),
+ 'b': pd.Series([], dtype='timedelta64[ns]')},
+ index=[])
+ result = self.read_csv(StringIO(data), header=0,
+ dtype='timedelta64[ns]')
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame(columns=['a', 'b'])
+ expected['a'] = expected['a'].astype(np.float64)
+ result = self.read_csv(StringIO(data), header=0,
+ dtype={'a': np.float64})
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame(columns=['a', 'b'])
+ expected['a'] = expected['a'].astype(np.float64)
+ result = self.read_csv(StringIO(data), header=0,
+ dtype={0: np.float64})
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame(columns=['a', 'b'])
+ expected['a'] = expected['a'].astype(np.int32)
+ expected['b'] = expected['b'].astype(np.float64)
+ result = self.read_csv(StringIO(data), header=0,
+ dtype={'a': np.int32, 1: np.float64})
+ tm.assert_frame_equal(result, expected)
| Title is self-explanatory. Closes #14712.
Should be merged in before #14295 because the bug could also exist for the Python parser. | https://api.github.com/repos/pandas-dev/pandas/pulls/14717 | 2016-11-22T21:01:49Z | 2016-11-24T21:18:20Z | 2016-11-24T21:18:20Z | 2016-11-24T21:21:43Z |
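The crux of the parser fix above is how `dtype` is normalized in `_get_empty_meta`: a scalar dtype becomes the default for every column, while a dict is wrapped in a `defaultdict` so unlisted columns fall back to object, with integer keys treated as positional column references. That logic is plain Python and can be checked in isolation; this sketch uses dtype-name strings and an illustrative function name rather than the actual pandas internals:

```python
from collections import defaultdict

def normalize_dtype(dtype, columns):
    # scalar (or None) -> the same dtype for every column; None means object
    if not isinstance(dtype, dict):
        default = dtype or 'object'
        return defaultdict(lambda: default)
    # dict -> unlisted columns fall back to object;
    # integer keys are positional references into `columns`
    out = defaultdict(lambda: 'object')
    for k, v in dtype.items():
        col = columns[k] if isinstance(k, int) else k
        out[col] = v
    return out
```

With this in place, the empty index and columns can be built as `Series([], dtype=dtype[name])` without any `KeyError` handling, which is what the patched `_get_empty_meta` does.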
Fix num_days in PandasAutoDateLocator | diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py
index 8f8519a498a31..ca208d1adc963 100644
--- a/pandas/tseries/converter.py
+++ b/pandas/tseries/converter.py
@@ -261,7 +261,7 @@ def get_locator(self, dmin, dmax):
'Pick the best locator based on a distance.'
delta = relativedelta(dmax, dmin)
- num_days = ((delta.years * 12.0) + delta.months * 31.0) + delta.days
+ num_days = (delta.years * 12.0 + delta.months) * 31.0 + delta.days
num_sec = (delta.hours * 60.0 + delta.minutes) * 60.0 + delta.seconds
tot_sec = num_days * 86400. + num_sec
| Prior to this commit, one year is worth 12 days, not 12 * 31 days. | https://api.github.com/repos/pandas-dev/pandas/pulls/14716 | 2016-11-22T19:52:37Z | 2017-03-21T08:52:38Z | 2017-03-21T08:52:38Z | 2017-03-21T08:53:14Z |
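The one-line change above is an operator-precedence fix: in the old expression, `* 31.0` bound only to `delta.months`, so each year contributed 12 *days* rather than 12 months' worth of days. The two expressions are easy to compare directly (plain Python, with the year/month/day split passed in explicitly):

```python
def old_num_days(years, months, days):
    # buggy: years * 12 is added as days, not converted to months first
    return ((years * 12.0) + months * 31.0) + days

def new_num_days(years, months, days):
    # fixed: years become months before the * 31 day scaling
    return (years * 12.0 + months) * 31.0 + days
```

A one-year span thus scored 12.0 days before the fix and 372.0 after, which is why the locator picked far too fine a tick spacing on year-scale plots; month- and day-only spans are unaffected.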
DOC: update FAQ to note pandas-qt only works for python 2.x | diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index d23e0ca59254d..3828ee1f9d091 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -111,5 +111,4 @@ Visualizing Data in Qt applications
-----------------------------------
There is no support for such visualization in pandas. However, the external
-package `pandas-qt <https://github.com/datalyze-solutions/pandas-qt>`_ does
-provide this functionality.
+package `pandas-qt <https://github.com/datalyze-solutions/pandas-qt>`_ provides this functionality for Python 2.x.
| - [ ] closes #14703
Here is the pull request for the issue I raised [here](https://github.com/pandas-dev/pandas/issues/14703). | https://api.github.com/repos/pandas-dev/pandas/pulls/14713 | 2016-11-22T16:34:29Z | 2016-11-22T16:39:16Z | 2016-11-22T16:39:15Z | 2016-11-22T16:44:06Z
DOC: Disambiguate 'where' in boolean indexing-10min.rst (#12661) | diff --git a/doc/source/10min.rst b/doc/source/10min.rst
index 54bcd76855f32..0612e86134cf2 100644
--- a/doc/source/10min.rst
+++ b/doc/source/10min.rst
@@ -282,7 +282,7 @@ Using a single column's values to select data.
df[df.A > 0]
-A ``where`` operation for getting.
+Selecting values from a DataFrame where a boolean condition is met.
.. ipython:: python
| - [x] closes #12661
Per @jorisvandenbossche's comment in this thread (https://github.com/pandas-dev/pandas/pull/12671), remove the code markdown from 'where' so as not to confuse the example with the `where()` method.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14708 | 2016-11-22T05:53:32Z | 2016-11-23T08:31:50Z | 2016-11-23T08:31:49Z | 2016-12-21T03:55:39Z |
Backport PR #54974 on branch 2.1.x (Include pyarrow_numpy string in efficient merge implementation) | diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 4eda8d2d75408..d36ceff800c56 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -2417,7 +2417,8 @@ def _factorize_keys(
elif isinstance(lk, ExtensionArray) and lk.dtype == rk.dtype:
if (isinstance(lk.dtype, ArrowDtype) and is_string_dtype(lk.dtype)) or (
- isinstance(lk.dtype, StringDtype) and lk.dtype.storage == "pyarrow"
+ isinstance(lk.dtype, StringDtype)
+ and lk.dtype.storage in ["pyarrow", "pyarrow_numpy"]
):
import pyarrow as pa
import pyarrow.compute as pc
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 2c97e8773b0d6..c4067363d934e 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -2872,13 +2872,13 @@ def test_merge_ea_int_and_float_numpy():
tm.assert_frame_equal(result, expected.astype("float64"))
-def test_merge_arrow_string_index():
+def test_merge_arrow_string_index(any_string_dtype):
# GH#54894
pytest.importorskip("pyarrow")
- left = DataFrame({"a": ["a", "b"]}, dtype="string[pyarrow]")
- right = DataFrame({"b": 1}, index=Index(["a", "c"], dtype="string[pyarrow]"))
+ left = DataFrame({"a": ["a", "b"]}, dtype=any_string_dtype)
+ right = DataFrame({"b": 1}, index=Index(["a", "c"], dtype=any_string_dtype))
result = left.merge(right, left_on="a", right_index=True, how="left")
expected = DataFrame(
- {"a": Series(["a", "b"], dtype="string[pyarrow]"), "b": [1, np.nan]}
+ {"a": Series(["a", "b"], dtype=any_string_dtype), "b": [1, np.nan]}
)
tm.assert_frame_equal(result, expected)
| Backport PR #54974: Include pyarrow_numpy string in efficient merge implementation | https://api.github.com/repos/pandas-dev/pandas/pulls/55021 | 2023-09-05T22:32:45Z | 2023-09-06T01:01:36Z | 2023-09-06T01:01:36Z | 2023-09-06T01:01:36Z |
Backport PR #55013 on branch 2.1.x (CI: Ignore hypothesis differing executors) | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 10826f50d1fe1..b1b35448af134 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -71,6 +71,7 @@
Index,
MultiIndex,
)
+from pandas.util.version import Version
if TYPE_CHECKING:
from collections.abc import (
@@ -190,6 +191,10 @@ def pytest_collection_modifyitems(items, config) -> None:
item.add_marker(pytest.mark.arraymanager)
+hypothesis_health_checks = [hypothesis.HealthCheck.too_slow]
+if Version(hypothesis.__version__) >= Version("6.83.2"):
+ hypothesis_health_checks.append(hypothesis.HealthCheck.differing_executors)
+
# Hypothesis
hypothesis.settings.register_profile(
"ci",
@@ -201,7 +206,7 @@ def pytest_collection_modifyitems(items, config) -> None:
# 2022-02-09: Changed deadline from 500 -> None. Deadline leads to
# non-actionable, flaky CI failures (# GH 24641, 44969, 45118, 44969)
deadline=None,
- suppress_health_check=(hypothesis.HealthCheck.too_slow,),
+ suppress_health_check=tuple(hypothesis_health_checks),
)
hypothesis.settings.load_profile("ci")
| Backport PR #55013: CI: Ignore hypothesis differing executors | https://api.github.com/repos/pandas-dev/pandas/pulls/55019 | 2023-09-05T20:07:20Z | 2023-09-05T21:57:43Z | 2023-09-05T21:57:43Z | 2023-09-05T21:57:44Z |
Backport PR #54914 on branch 2.1.x (REGR: concat raising for 2 different ea dtypes) | diff --git a/doc/source/whatsnew/v2.1.1.rst b/doc/source/whatsnew/v2.1.1.rst
index 11b19b1508a71..6702f28187483 100644
--- a/doc/source/whatsnew/v2.1.1.rst
+++ b/doc/source/whatsnew/v2.1.1.rst
@@ -13,6 +13,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
+- Fixed regression in :func:`concat` when :class:`DataFrame` 's have two different extension dtypes (:issue:`54848`)
- Fixed regression in :func:`merge` when merging over a PyArrow string index (:issue:`54894`)
- Fixed regression in :func:`read_csv` when ``usecols`` is given and ``dtypes`` is a dict for ``engine="python"`` (:issue:`54868`)
- Fixed regression in :func:`read_csv` when ``delim_whitespace`` is True (:issue:`54918`, :issue:`54931`)
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index 4d33f0137d3c4..b2d463a8c6c26 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -177,7 +177,7 @@ def concatenate_managers(
values = np.concatenate(vals, axis=1) # type: ignore[arg-type]
elif is_1d_only_ea_dtype(blk.dtype):
# TODO(EA2D): special-casing not needed with 2D EAs
- values = concat_compat(vals, axis=1, ea_compat_axis=True)
+ values = concat_compat(vals, axis=0, ea_compat_axis=True)
values = ensure_block_shape(values, ndim=2)
else:
values = concat_compat(vals, axis=1)
diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py
index 3efcd930af581..5dde863f246d1 100644
--- a/pandas/tests/reshape/concat/test_concat.py
+++ b/pandas/tests/reshape/concat/test_concat.py
@@ -858,3 +858,12 @@ def test_concat_multiindex_with_category():
)
expected = expected.set_index(["c1", "c2"])
tm.assert_frame_equal(result, expected)
+
+
+def test_concat_ea_upcast():
+ # GH#54848
+ df1 = DataFrame(["a"], dtype="string")
+ df2 = DataFrame([1], dtype="Int64")
+ result = concat([df1, df2])
+ expected = DataFrame(["a", 1], index=[0, 0])
+ tm.assert_frame_equal(result, expected)
| Backport PR #54914: REGR: concat raising for 2 different ea dtypes | https://api.github.com/repos/pandas-dev/pandas/pulls/55018 | 2023-09-05T18:45:51Z | 2023-09-05T23:50:40Z | 2023-09-05T23:50:40Z | 2023-09-05T23:50:40Z |
Backport PR #54927 on branch 2.1.x (REGR: interpolate raising if fill_value is given) | diff --git a/doc/source/whatsnew/v2.1.1.rst b/doc/source/whatsnew/v2.1.1.rst
index 11b19b1508a71..9dcc829ba7db3 100644
--- a/doc/source/whatsnew/v2.1.1.rst
+++ b/doc/source/whatsnew/v2.1.1.rst
@@ -21,6 +21,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.to_sql` not roundtripping datetime columns correctly for sqlite (:issue:`54877`)
- Fixed regression in :meth:`MultiIndex.append` raising when appending overlapping :class:`IntervalIndex` levels (:issue:`54934`)
- Fixed regression in :meth:`Series.drop_duplicates` for PyArrow strings (:issue:`54904`)
+- Fixed regression in :meth:`Series.interpolate` raising when ``fill_value`` was given (:issue:`54920`)
- Fixed regression in :meth:`Series.value_counts` raising for numeric data if ``bins`` was specified (:issue:`54857`)
- Fixed regression when comparing a :class:`Series` with ``datetime64`` dtype with ``None`` (:issue:`54870`)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f6bf51f65049a..23c37fb34ec0d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8160,10 +8160,11 @@ def interpolate(
stacklevel=find_stack_level(),
)
- if "fill_value" in kwargs:
+ if method in fillna_methods and "fill_value" in kwargs:
raise ValueError(
"'fill_value' is not a valid keyword for "
- f"{type(self).__name__}.interpolate"
+ f"{type(self).__name__}.interpolate with method from "
+ f"{fillna_methods}"
)
if isinstance(obj.index, MultiIndex) and method != "linear":
diff --git a/pandas/tests/series/methods/test_interpolate.py b/pandas/tests/series/methods/test_interpolate.py
index 619690f400d98..549f429f09d35 100644
--- a/pandas/tests/series/methods/test_interpolate.py
+++ b/pandas/tests/series/methods/test_interpolate.py
@@ -858,3 +858,11 @@ def test_interpolate_asfreq_raises(self):
with pytest.raises(ValueError, match=msg):
with tm.assert_produces_warning(FutureWarning, match=msg2):
ser.interpolate(method="asfreq")
+
+ def test_interpolate_fill_value(self):
+ # GH#54920
+ pytest.importorskip("scipy")
+ ser = Series([np.nan, 0, 1, np.nan, 3, np.nan])
+ result = ser.interpolate(method="nearest", fill_value=0)
+ expected = Series([np.nan, 0, 1, 1, 3, 0])
+ tm.assert_series_equal(result, expected)
| Backport PR #54927: REGR: interpolate raising if fill_value is given | https://api.github.com/repos/pandas-dev/pandas/pulls/55017 | 2023-09-05T18:43:34Z | 2023-09-05T23:50:28Z | 2023-09-05T23:50:28Z | 2023-09-05T23:50:28Z |
Backport PR #55000 on branch 2.1.x (BUG: ArrowDtype raising for fixed size list) | diff --git a/doc/source/whatsnew/v2.1.1.rst b/doc/source/whatsnew/v2.1.1.rst
index 11b19b1508a71..64d7481117e8e 100644
--- a/doc/source/whatsnew/v2.1.1.rst
+++ b/doc/source/whatsnew/v2.1.1.rst
@@ -29,6 +29,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
+- Fixed bug for :class:`ArrowDtype` raising ``NotImplementedError`` for fixed-size list (:issue:`55000`)
- Fixed bug in :meth:`DataFrame.stack` with ``future_stack=True`` and columns a non-:class:`MultiIndex` consisting of tuples (:issue:`54948`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 53f0fb2843653..272e9928b96cb 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -2109,6 +2109,8 @@ def type(self):
return CategoricalDtypeType
elif pa.types.is_list(pa_type) or pa.types.is_large_list(pa_type):
return list
+ elif pa.types.is_fixed_size_list(pa_type):
+ return list
elif pa.types.is_map(pa_type):
return list
elif pa.types.is_struct(pa_type):
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index a2fee31d9f01b..ec2ca494b2aa1 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -3037,6 +3037,15 @@ def test_groupby_count_return_arrow_dtype(data_missing):
tm.assert_frame_equal(result, expected)
+def test_fixed_size_list():
+ # GH#55000
+ ser = pd.Series(
+ [[1, 2], [3, 4]], dtype=ArrowDtype(pa.list_(pa.int64(), list_size=2))
+ )
+ result = ser.dtype.type
+ assert result == list
+
+
def test_arrowextensiondtype_dataframe_repr():
# GH 54062
df = pd.DataFrame(
| Backport PR #55000: BUG: ArrowDtype raising for fixed size list | https://api.github.com/repos/pandas-dev/pandas/pulls/55016 | 2023-09-05T18:08:29Z | 2023-09-05T23:49:57Z | 2023-09-05T23:49:57Z | 2023-09-05T23:49:57Z |
CI: Ignore hypothesis differing executors | diff --git a/pandas/conftest.py b/pandas/conftest.py
index a4f58e99d8bcc..ac0275bf695d4 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -71,6 +71,7 @@
Index,
MultiIndex,
)
+from pandas.util.version import Version
if TYPE_CHECKING:
from collections.abc import (
@@ -191,6 +192,10 @@ def pytest_collection_modifyitems(items, config) -> None:
item.add_marker(pytest.mark.arraymanager)
+hypothesis_health_checks = [hypothesis.HealthCheck.too_slow]
+if Version(hypothesis.__version__) >= Version("6.83.2"):
+ hypothesis_health_checks.append(hypothesis.HealthCheck.differing_executors)
+
# Hypothesis
hypothesis.settings.register_profile(
"ci",
@@ -202,7 +207,7 @@ def pytest_collection_modifyitems(items, config) -> None:
# 2022-02-09: Changed deadline from 500 -> None. Deadline leads to
# non-actionable, flaky CI failures (# GH 24641, 44969, 45118, 44969)
deadline=None,
- suppress_health_check=(hypothesis.HealthCheck.too_slow,),
+ suppress_health_check=tuple(hypothesis_health_checks),
)
hypothesis.settings.load_profile("ci")
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/55013 | 2023-09-05T16:48:34Z | 2023-09-05T20:07:09Z | 2023-09-05T20:07:09Z | 2023-09-05T22:32:02Z |
BLD: Build wheels for Python 3.12 | diff --git a/.circleci/config.yml b/.circleci/config.yml
index 50f6a116a6630..ba124533e953a 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -48,7 +48,7 @@ jobs:
name: Build aarch64 wheels
no_output_timeout: 30m # Sometimes the tests won't generate any output, make sure the job doesn't get killed by that
command: |
- pip3 install cibuildwheel==2.14.1
+ pip3 install cibuildwheel==2.15.0
cibuildwheel --prerelease-pythons --output-dir wheelhouse
environment:
CIBW_BUILD: << parameters.cibw-build >>
@@ -92,5 +92,4 @@ workflows:
only: /^v.*/
matrix:
parameters:
- # TODO: Enable Python 3.12 wheels when numpy releases a version that supports Python 3.12
- cibw-build: ["cp39-manylinux_aarch64", "cp310-manylinux_aarch64", "cp311-manylinux_aarch64"]#, "cp312-manylinux_aarch64"]
+ cibw-build: ["cp39-manylinux_aarch64", "cp310-manylinux_aarch64", "cp311-manylinux_aarch64", "cp312-manylinux_aarch64"]
diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml
index 5f541f1bae1fd..97d78a1a9afe3 100644
--- a/.github/workflows/wheels.yml
+++ b/.github/workflows/wheels.yml
@@ -97,8 +97,7 @@ jobs:
- [macos-12, macosx_*]
- [windows-2022, win_amd64]
# TODO: support PyPy?
- # TODO: Enable Python 3.12 wheels when numpy releases a version that supports Python 3.12
- python: [["cp39", "3.9"], ["cp310", "3.10"], ["cp311", "3.11"]]#, ["cp312", "3.12"]]
+ python: [["cp39", "3.9"], ["cp310", "3.10"], ["cp311", "3.11"], ["cp312", "3.12"]]
env:
IS_PUSH: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') }}
IS_SCHEDULE_DISPATCH: ${{ github.event_name == 'schedule' || github.event_name == 'workflow_dispatch' }}
@@ -150,8 +149,10 @@ jobs:
uses: mamba-org/setup-micromamba@v1
with:
environment-name: wheel-env
+ # Use a fixed Python, since we might have an unreleased Python not
+ # yet present on conda-forge
create-args: >-
- python=${{ matrix.python[1] }}
+ python=3.11
anaconda-client
wheel
cache-downloads: true
@@ -167,12 +168,13 @@ jobs:
shell: pwsh
run: |
$TST_CMD = @"
- python -m pip install pytz six numpy python-dateutil tzdata>=2022.1 hypothesis>=6.46.1 pytest>=7.3.2 pytest-xdist>=2.2.0 pytest-asyncio>=0.17;
- python -m pip install --find-links=pandas\wheelhouse --no-index pandas;
+ python -m pip install hypothesis>=6.46.1 pytest>=7.3.2 pytest-xdist>=2.2.0 pytest-asyncio>=0.17;
+ python -m pip install `$(Get-Item pandas\wheelhouse\*.whl);
python -c `'import pandas as pd; pd.test(extra_args=[\"`\"--no-strict-data-files`\"\", \"`\"-m not clipboard and not single_cpu and not slow and not network and not db`\"\"])`';
"@
- docker pull python:${{ matrix.python[1] }}-windowsservercore
- docker run --env PANDAS_CI='1' -v ${PWD}:C:\pandas python:${{ matrix.python[1] }}-windowsservercore powershell -Command $TST_CMD
+ # add rc to the end of the image name if the Python version is unreleased
+ docker pull python:${{ matrix.python[1] == '3.12' && '3.12-rc' || format('{0}-windowsservercore', matrix.python[1]) }}
+ docker run --env PANDAS_CI='1' -v ${PWD}:C:\pandas python:${{ matrix.python[1] == '3.12' && '3.12-rc' || format('{0}-windowsservercore', matrix.python[1]) }} powershell -Command $TST_CMD
- uses: actions/upload-artifact@v3
with:
diff --git a/pyproject.toml b/pyproject.toml
index 845c2a63e84f0..74d6aaee286a9 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -10,7 +10,8 @@ requires = [
# we don't want to force users to compile with 1.25 though
# (Ideally, in the future, though, oldest-supported-numpy can be dropped when our min numpy is 1.25.x)
"oldest-supported-numpy>=2022.8.16; python_version<'3.12'",
- "numpy>=1.22.4; python_version>='3.12'",
+ # TODO: This needs to be updated when the official numpy 1.26 comes out
+ "numpy>=1.26.0b1; python_version>='3.12'",
"versioneer[toml]"
]
@@ -30,7 +31,9 @@ license = {file = 'LICENSE'}
requires-python = '>=3.9'
dependencies = [
"numpy>=1.22.4; python_version<'3.11'",
- "numpy>=1.23.2; python_version>='3.11'",
+ "numpy>=1.23.2; python_version=='3.11'",
+ # TODO: This needs to be updated when the official numpy 1.26 comes out
+ "numpy>=1.26.0b1; python_version>='3.12'",
"python-dateutil>=2.8.2",
"pytz>=2020.1",
"tzdata>=2022.1"
| - [ ] closes #54447 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/55010 | 2023-09-05T15:09:21Z | 2023-09-07T16:28:46Z | 2023-09-07T16:28:46Z | 2023-09-07T16:29:09Z |
CoW: Clear dead references every time we add a new one | diff --git a/pandas/_libs/internals.pyx b/pandas/_libs/internals.pyx
index 7a9a3b84fd69f..3b1a6bc7436c3 100644
--- a/pandas/_libs/internals.pyx
+++ b/pandas/_libs/internals.pyx
@@ -897,6 +897,11 @@ cdef class BlockValuesRefs:
else:
self.referenced_blocks = []
+ def _clear_dead_references(self) -> None:
+ self.referenced_blocks = [
+ ref for ref in self.referenced_blocks if ref() is not None
+ ]
+
def add_reference(self, blk: Block) -> None:
"""Adds a new reference to our reference collection.
@@ -905,6 +910,7 @@ cdef class BlockValuesRefs:
blk : Block
The block that the new references should point to.
"""
+ self._clear_dead_references()
self.referenced_blocks.append(weakref.ref(blk))
def add_index_reference(self, index: object) -> None:
@@ -915,6 +921,7 @@ cdef class BlockValuesRefs:
index : Index
The index that the new reference should point to.
"""
+ self._clear_dead_references()
self.referenced_blocks.append(weakref.ref(index))
def has_reference(self) -> bool:
@@ -927,8 +934,6 @@ cdef class BlockValuesRefs:
-------
bool
"""
- self.referenced_blocks = [
- ref for ref in self.referenced_blocks if ref() is not None
- ]
+ self._clear_dead_references()
# Checking for more references than block pointing to itself
return len(self.referenced_blocks) > 1
| - [x] closes #54352 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
I'd like to try that one before going back to a set | https://api.github.com/repos/pandas-dev/pandas/pulls/55008 | 2023-09-05T12:37:23Z | 2023-09-15T09:10:52Z | 2023-09-15T09:10:52Z | 2023-10-19T07:50:57Z |
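The pattern in this PR — pruning dead weak references every time a new one is added, rather than only when `has_reference` is called — can be sketched in pure Python. This is a simplification of the Cython `BlockValuesRefs` class, keeping only the parts the diff touches:

```python
import weakref


class BlockValuesRefs:
    """Pure-Python sketch of pandas._libs.internals.BlockValuesRefs."""

    def __init__(self) -> None:
        self.referenced_blocks = []

    def _clear_dead_references(self) -> None:
        # Drop weakrefs whose targets have been garbage-collected, so the
        # list cannot grow without bound over many add/drop cycles.
        self.referenced_blocks = [
            ref for ref in self.referenced_blocks if ref() is not None
        ]

    def add_reference(self, blk) -> None:
        # The fix: prune on every add, not just inside has_reference().
        self._clear_dead_references()
        self.referenced_blocks.append(weakref.ref(blk))

    def has_reference(self) -> bool:
        self._clear_dead_references()
        # More than one live reference means the values are shared.
        return len(self.referenced_blocks) > 1
```

Clearing eagerly bounds memory for workloads that create and drop many views of the same block between `has_reference` calls, at the cost of an extra linear scan per added reference.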
[pre-commit.ci] pre-commit autoupdate | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 9f9bcd78c07b0..c01bf65818167 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -24,7 +24,7 @@ repos:
hooks:
- id: black
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.285
+ rev: v0.0.287
hooks:
- id: ruff
args: [--exit-non-zero-on-fix]
@@ -34,7 +34,7 @@ repos:
alias: ruff-selected-autofixes
args: [--select, "ANN001,ANN204", --fix-only, --exit-non-zero-on-fix]
- repo: https://github.com/jendrikseipp/vulture
- rev: 'v2.7'
+ rev: 'v2.9.1'
hooks:
- id: vulture
entry: python scripts/run_vulture.py
@@ -84,7 +84,7 @@ repos:
'--filter=-readability/casting,-runtime/int,-build/include_subdir,-readability/fn_size'
]
- repo: https://github.com/pylint-dev/pylint
- rev: v3.0.0a6
+ rev: v3.0.0a7
hooks:
- id: pylint
stages: [manual]
@@ -124,7 +124,7 @@ repos:
types: [text] # overwrite types: [rst]
types_or: [python, rst]
- repo: https://github.com/sphinx-contrib/sphinx-lint
- rev: v0.6.7
+ rev: v0.6.8
hooks:
- id: sphinx-lint
- repo: local
diff --git a/asv_bench/benchmarks/array.py b/asv_bench/benchmarks/array.py
index 09c4acc0ab309..0229cf15fbfb8 100644
--- a/asv_bench/benchmarks/array.py
+++ b/asv_bench/benchmarks/array.py
@@ -90,7 +90,7 @@ def time_setitem(self, multiple_chunks):
self.array[i] = "foo"
def time_setitem_list(self, multiple_chunks):
- indexer = list(range(0, 50)) + list(range(-1000, 0, 50))
+ indexer = list(range(50)) + list(range(-1000, 0, 50))
self.array[indexer] = ["foo"] * len(indexer)
def time_setitem_slice(self, multiple_chunks):
diff --git a/asv_bench/benchmarks/join_merge.py b/asv_bench/benchmarks/join_merge.py
index 54bcdb0fa2843..04ac47a892a22 100644
--- a/asv_bench/benchmarks/join_merge.py
+++ b/asv_bench/benchmarks/join_merge.py
@@ -360,14 +360,14 @@ class MergeCategoricals:
def setup(self):
self.left_object = DataFrame(
{
- "X": np.random.choice(range(0, 10), size=(10000,)),
+ "X": np.random.choice(range(10), size=(10000,)),
"Y": np.random.choice(["one", "two", "three"], size=(10000,)),
}
)
self.right_object = DataFrame(
{
- "X": np.random.choice(range(0, 10), size=(10000,)),
+ "X": np.random.choice(range(10), size=(10000,)),
"Z": np.random.choice(["jjj", "kkk", "sss"], size=(10000,)),
}
)
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index f76163cbbd0a1..0589dc5b717a4 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -70,7 +70,7 @@
from collections.abc import MutableMapping
from datetime import tzinfo
- import pyarrow as pa # noqa: F811, TCH004
+ import pyarrow as pa # noqa: TCH004
from pandas._typing import (
Dtype,
diff --git a/pandas/core/indexes/api.py b/pandas/core/indexes/api.py
index 781dfae7fef64..a8ef0e034ba9b 100644
--- a/pandas/core/indexes/api.py
+++ b/pandas/core/indexes/api.py
@@ -377,5 +377,5 @@ def all_indexes_same(indexes) -> bool:
def default_index(n: int) -> RangeIndex:
- rng = range(0, n)
+ rng = range(n)
return RangeIndex._simple_new(rng, name=None)
diff --git a/pandas/tests/frame/methods/test_copy.py b/pandas/tests/frame/methods/test_copy.py
index 95fcaaa473067..e7901ed363106 100644
--- a/pandas/tests/frame/methods/test_copy.py
+++ b/pandas/tests/frame/methods/test_copy.py
@@ -56,7 +56,7 @@ def test_copy_consolidates(self):
}
)
- for i in range(0, 10):
+ for i in range(10):
df.loc[:, f"n_{i}"] = np.random.default_rng(2).integers(0, 100, size=55)
assert len(df._mgr.blocks) == 11
diff --git a/pandas/tests/frame/methods/test_reset_index.py b/pandas/tests/frame/methods/test_reset_index.py
index d99dd36f3a2e3..339e19254fd10 100644
--- a/pandas/tests/frame/methods/test_reset_index.py
+++ b/pandas/tests/frame/methods/test_reset_index.py
@@ -788,15 +788,15 @@ def test_errorreset_index_rename(float_frame):
def test_reset_index_false_index_name():
- result_series = Series(data=range(5, 10), index=range(0, 5))
+ result_series = Series(data=range(5, 10), index=range(5))
result_series.index.name = False
result_series.reset_index()
- expected_series = Series(range(5, 10), RangeIndex(range(0, 5), name=False))
+ expected_series = Series(range(5, 10), RangeIndex(range(5), name=False))
tm.assert_series_equal(result_series, expected_series)
# GH 38147
- result_frame = DataFrame(data=range(5, 10), index=range(0, 5))
+ result_frame = DataFrame(data=range(5, 10), index=range(5))
result_frame.index.name = False
result_frame.reset_index()
- expected_frame = DataFrame(range(5, 10), RangeIndex(range(0, 5), name=False))
+ expected_frame = DataFrame(range(5, 10), RangeIndex(range(5), name=False))
tm.assert_frame_equal(result_frame, expected_frame)
diff --git a/pandas/tests/frame/methods/test_sort_index.py b/pandas/tests/frame/methods/test_sort_index.py
index 228b62a418813..985a9e3602410 100644
--- a/pandas/tests/frame/methods/test_sort_index.py
+++ b/pandas/tests/frame/methods/test_sort_index.py
@@ -911,7 +911,7 @@ def test_sort_index_multiindex_sparse_column(self):
expected = DataFrame(
{
i: pd.array([0.0, 0.0, 0.0, 0.0], dtype=pd.SparseDtype("float64", 0.0))
- for i in range(0, 4)
+ for i in range(4)
},
index=MultiIndex.from_product([[1, 2], [1, 2]]),
)
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 3e2cde37c30eb..fd851ab244cb8 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -692,12 +692,12 @@ def test_constructor_error_msgs(self):
arr = np.array([[4, 5, 6]])
msg = r"Shape of passed values is \(1, 3\), indices imply \(1, 4\)"
with pytest.raises(ValueError, match=msg):
- DataFrame(index=[0], columns=range(0, 4), data=arr)
+ DataFrame(index=[0], columns=range(4), data=arr)
arr = np.array([4, 5, 6])
msg = r"Shape of passed values is \(3, 1\), indices imply \(1, 4\)"
with pytest.raises(ValueError, match=msg):
- DataFrame(index=[0], columns=range(0, 4), data=arr)
+ DataFrame(index=[0], columns=range(4), data=arr)
# higher dim raise exception
with pytest.raises(ValueError, match="Must pass 2-d input"):
@@ -2391,7 +2391,7 @@ def test_construct_with_two_categoricalindex_series(self):
def test_constructor_series_nonexact_categoricalindex(self):
# GH 42424
- ser = Series(range(0, 100))
+ ser = Series(range(100))
ser1 = cut(ser, 10).value_counts().head(5)
ser2 = cut(ser, 10).value_counts().tail(5)
result = DataFrame({"1": ser1, "2": ser2})
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index be226b4466f98..1e6d220199e22 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -1928,7 +1928,7 @@ def test_pivot_table_values_key_error():
df = DataFrame(
{
"eventDate": date_range(datetime.today(), periods=20, freq="M").tolist(),
- "thename": range(0, 20),
+ "thename": range(20),
}
)
diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py
index c9fe011f7063b..55f96bd1443de 100644
--- a/pandas/tests/groupby/test_timegrouper.py
+++ b/pandas/tests/groupby/test_timegrouper.py
@@ -842,7 +842,7 @@ def test_grouper_period_index(self):
result = period_series.groupby(period_series.index.month).sum()
expected = Series(
- range(0, periods), index=Index(range(1, periods + 1), name=index.name)
+ range(periods), index=Index(range(1, periods + 1), name=index.name)
)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/indexes/multi/test_partial_indexing.py b/pandas/tests/indexes/multi/test_partial_indexing.py
index 47efc43d5eae0..66163dad3deae 100644
--- a/pandas/tests/indexes/multi/test_partial_indexing.py
+++ b/pandas/tests/indexes/multi/test_partial_indexing.py
@@ -31,7 +31,7 @@ def df():
dr = date_range("2016-01-01", "2016-01-03", freq="12H")
abc = ["a", "b", "c"]
mi = MultiIndex.from_product([dr, abc])
- frame = DataFrame({"c1": range(0, 15)}, index=mi)
+ frame = DataFrame({"c1": range(15)}, index=mi)
return frame
diff --git a/pandas/tests/indexing/multiindex/test_getitem.py b/pandas/tests/indexing/multiindex/test_getitem.py
index 9d11827e2923e..b86e233110e88 100644
--- a/pandas/tests/indexing/multiindex/test_getitem.py
+++ b/pandas/tests/indexing/multiindex/test_getitem.py
@@ -148,7 +148,7 @@ def test_frame_getitem_simple_key_error(
def test_tuple_string_column_names():
# GH#50372
mi = MultiIndex.from_tuples([("a", "aa"), ("a", "ab"), ("b", "ba"), ("b", "bb")])
- df = DataFrame([range(0, 4), range(1, 5), range(2, 6)], columns=mi)
+ df = DataFrame([range(4), range(1, 5), range(2, 6)], columns=mi)
df["single_index"] = 0
df_flat = df.copy()
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index ca3ce6ba34515..b3c2e67f7c318 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -2044,7 +2044,7 @@ def test_read_json_dtype_backend(self, string_storage, dtype_backend, orient):
)
if orient == "values":
- expected.columns = list(range(0, 8))
+ expected.columns = list(range(8))
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index 5bb7097770820..d5f8c5200c4a3 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -1033,7 +1033,7 @@ def test_decode_floating_point(self, sign, float_number):
def test_encode_big_set(self):
s = set()
- for x in range(0, 100000):
+ for x in range(100000):
s.add(x)
# Make sure no Exception is raised.
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index db3909c147ad3..55445e44b9366 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -1012,7 +1012,7 @@ def test_timezone_aware_index(self, request, pa, timezone_aware_date_list):
def test_filter_row_groups(self, pa):
# https://github.com/pandas-dev/pandas/issues/26551
pytest.importorskip("pyarrow")
- df = pd.DataFrame({"a": list(range(0, 3))})
+ df = pd.DataFrame({"a": list(range(3))})
with tm.ensure_clean() as path:
df.to_parquet(path, engine=pa)
result = read_parquet(
@@ -1219,7 +1219,7 @@ def test_categorical(self, fp):
check_round_trip(df, fp)
def test_filter_row_groups(self, fp):
- d = {"a": list(range(0, 3))}
+ d = {"a": list(range(3))}
df = pd.DataFrame(d)
with tm.ensure_clean() as path:
df.to_parquet(path, engine=fp, compression=None, row_group_offsets=1)
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 7459aa1df8f3e..cd504616b6c5d 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -798,7 +798,7 @@ def test_missing_value_generator(self):
expected_values.insert(0, ".")
for t in types:
offset = valid_range[t][1]
- for i in range(0, 27):
+ for i in range(27):
val = StataMissingValue(offset + 1 + i)
assert val.string == expected_values[i]
diff --git a/pandas/tests/reshape/test_cut.py b/pandas/tests/reshape/test_cut.py
index b2a6ac49fdff2..81b466b059702 100644
--- a/pandas/tests/reshape/test_cut.py
+++ b/pandas/tests/reshape/test_cut.py
@@ -700,7 +700,7 @@ def test_cut_with_duplicated_index_lowest_included():
def test_cut_with_nonexact_categorical_indices():
# GH 42424
- ser = Series(range(0, 100))
+ ser = Series(range(100))
ser1 = cut(ser, 10).value_counts().head(5)
ser2 = cut(ser, 10).value_counts().tail(5)
result = DataFrame({"1": ser1, "2": ser2})
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 46da18445e135..c43fd05fd5501 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -33,7 +33,7 @@ def dropna(request):
return request.param
-@pytest.fixture(params=[([0] * 4, [1] * 4), (range(0, 3), range(1, 4))])
+@pytest.fixture(params=[([0] * 4, [1] * 4), (range(3), range(1, 4))])
def interval_values(request, closed):
left, right = request.param
return Categorical(pd.IntervalIndex.from_arrays(left, right, closed))
@@ -215,7 +215,7 @@ def test_pivot_table_dropna_categoricals(self, dropna):
{
"A": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
"B": [1, 2, 3, 1, 2, 3, 1, 2, 3],
- "C": range(0, 9),
+ "C": range(9),
}
)
diff --git a/pandas/tests/series/methods/test_reindex.py b/pandas/tests/series/methods/test_reindex.py
index bce7d2d554004..016208f2d2026 100644
--- a/pandas/tests/series/methods/test_reindex.py
+++ b/pandas/tests/series/methods/test_reindex.py
@@ -159,9 +159,9 @@ def test_reindex_inference():
def test_reindex_downcasting():
# GH4618 shifted series downcasting
- s = Series(False, index=range(0, 5))
+ s = Series(False, index=range(5))
result = s.shift(1).bfill()
- expected = Series(False, index=range(0, 5))
+ expected = Series(False, index=range(5))
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/window/test_rolling_functions.py b/pandas/tests/window/test_rolling_functions.py
index 940f0845befa2..51f801ab3761b 100644
--- a/pandas/tests/window/test_rolling_functions.py
+++ b/pandas/tests/window/test_rolling_functions.py
@@ -388,7 +388,7 @@ def test_rolling_max_resample(step):
# So that we can have 3 datapoints on last day (4, 10, and 20)
indices.append(datetime(1975, 1, 5, 1))
indices.append(datetime(1975, 1, 5, 2))
- series = Series(list(range(0, 5)) + [10, 20], index=indices)
+ series = Series(list(range(5)) + [10, 20], index=indices)
# Use floats instead of ints as values
series = series.map(lambda x: float(x))
# Sort chronologically
@@ -425,7 +425,7 @@ def test_rolling_min_resample(step):
# So that we can have 3 datapoints on last day (4, 10, and 20)
indices.append(datetime(1975, 1, 5, 1))
indices.append(datetime(1975, 1, 5, 2))
- series = Series(list(range(0, 5)) + [10, 20], index=indices)
+ series = Series(list(range(5)) + [10, 20], index=indices)
# Use floats instead of ints as values
series = series.map(lambda x: float(x))
# Sort chronologically
@@ -445,7 +445,7 @@ def test_rolling_median_resample():
# So that we can have 3 datapoints on last day (4, 10, and 20)
indices.append(datetime(1975, 1, 5, 1))
indices.append(datetime(1975, 1, 5, 2))
- series = Series(list(range(0, 5)) + [10, 20], index=indices)
+ series = Series(list(range(5)) + [10, 20], index=indices)
# Use floats instead of ints as values
series = series.map(lambda x: float(x))
# Sort chronologically
| <!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.0.285 → v0.0.287](https://github.com/astral-sh/ruff-pre-commit/compare/v0.0.285...v0.0.287)
- [github.com/jendrikseipp/vulture: v2.7 → v2.9.1](https://github.com/jendrikseipp/vulture/compare/v2.7...v2.9.1)
- [github.com/pylint-dev/pylint: v3.0.0a6 → v3.0.0a7](https://github.com/pylint-dev/pylint/compare/v3.0.0a6...v3.0.0a7)
- [github.com/sphinx-contrib/sphinx-lint: v0.6.7 → v0.6.8](https://github.com/sphinx-contrib/sphinx-lint/compare/v0.6.7...v0.6.8)
<!--pre-commit.ci end--> | https://api.github.com/repos/pandas-dev/pandas/pulls/55004 | 2023-09-04T16:43:04Z | 2023-09-04T23:52:19Z | 2023-09-04T23:52:19Z | 2023-09-05T22:38:16Z |
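Most of this autoupdate diff is ruff's autofix replacing `range(0, n)` with the equivalent `range(n)`; the two forms describe exactly the same sequence, and `range` objects compare equal when they do:

```python
# The default start of range() is 0, so the explicit form is redundant.
assert list(range(0, 5)) == list(range(5)) == [0, 1, 2, 3, 4]

# range objects compare equal when they represent the same sequence.
assert range(0, 5) == range(5)
```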
TYP: Add typing.overload signatures to DataFrame/Series.clip | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b9407ebe6624a..6d38f5f9c61e1 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8607,6 +8607,42 @@ def _clip_with_one_bound(self, threshold, method, axis, inplace):
# GH 40420
return self.where(subset, threshold, axis=axis, inplace=inplace)
+ @overload
+ def clip(
+ self,
+ lower=...,
+ upper=...,
+ *,
+ axis: Axis | None = ...,
+ inplace: Literal[False] = ...,
+ **kwargs,
+ ) -> Self:
+ ...
+
+ @overload
+ def clip(
+ self,
+ lower=...,
+ upper=...,
+ *,
+ axis: Axis | None = ...,
+ inplace: Literal[True],
+ **kwargs,
+ ) -> None:
+ ...
+
+ @overload
+ def clip(
+ self,
+ lower=...,
+ upper=...,
+ *,
+ axis: Axis | None = ...,
+ inplace: bool_t = ...,
+ **kwargs,
+ ) -> Self | None:
+ ...
+
@final
def clip(
self,
| This adds overloads so that a type checker can determine whether clip returns a Series/DataFrame or None based on the value of the inplace argument.
| https://api.github.com/repos/pandas-dev/pandas/pulls/55002 | 2023-09-04T14:43:29Z | 2023-09-04T17:23:33Z | 2023-09-04T17:23:33Z | 2023-09-04T19:05:38Z |
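The effect of these overloads can be demonstrated on a toy class; `Frame` below is a hypothetical stand-in, not pandas itself. A type checker resolves the return type from the literal value of `inplace`, while runtime behaviour is unchanged:

```python
from __future__ import annotations

from typing import Literal, overload


class Frame:
    """Hypothetical stand-in for DataFrame to illustrate the overloads."""

    @overload
    def clip(self, lower=..., upper=..., *, inplace: Literal[False] = ...) -> Frame: ...

    @overload
    def clip(self, lower=..., upper=..., *, inplace: Literal[True]) -> None: ...

    def clip(self, lower=None, upper=None, *, inplace: bool = False) -> Frame | None:
        if inplace:
            return None  # mutated in place; nothing is returned
        return Frame()  # a new object is returned


out = Frame().clip(0, 1)                  # a type checker infers Frame
none = Frame().clip(0, 1, inplace=True)   # a type checker infers None
```

Only the final (undecorated) definition exists at runtime; the `@overload` stubs are consumed by the type checker, which is why the PR can add them without touching behaviour.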
BUG: ArrowDtype raising for fixed size list | diff --git a/doc/source/whatsnew/v2.1.1.rst b/doc/source/whatsnew/v2.1.1.rst
index 11b19b1508a71..64d7481117e8e 100644
--- a/doc/source/whatsnew/v2.1.1.rst
+++ b/doc/source/whatsnew/v2.1.1.rst
@@ -29,6 +29,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
+- Fixed bug for :class:`ArrowDtype` raising ``NotImplementedError`` for fixed-size list (:issue:`55000`)
- Fixed bug in :meth:`DataFrame.stack` with ``future_stack=True`` and columns a non-:class:`MultiIndex` consisting of tuples (:issue:`54948`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index f76163cbbd0a1..4e5e4dd0fe44a 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -2148,6 +2148,8 @@ def type(self):
return CategoricalDtypeType
elif pa.types.is_list(pa_type) or pa.types.is_large_list(pa_type):
return list
+ elif pa.types.is_fixed_size_list(pa_type):
+ return list
elif pa.types.is_map(pa_type):
return list
elif pa.types.is_struct(pa_type):
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 5f1b16a44b8e9..fa6e85ba204d2 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -2992,6 +2992,15 @@ def test_groupby_count_return_arrow_dtype(data_missing):
tm.assert_frame_equal(result, expected)
+def test_fixed_size_list():
+ # GH#55000
+ ser = pd.Series(
+ [[1, 2], [3, 4]], dtype=ArrowDtype(pa.list_(pa.int64(), list_size=2))
+ )
+ result = ser.dtype.type
+ assert result == list
+
+
def test_arrowextensiondtype_dataframe_repr():
# GH 54062
df = pd.DataFrame(
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/55000 | 2023-09-04T12:08:55Z | 2023-09-05T18:07:21Z | 2023-09-05T18:07:21Z | 2023-09-05T18:15:14Z |
TYP: Add typing.overload signatures to DataFrame/Series.interpolate | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b9407ebe6624a..671cfc11df597 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7938,6 +7938,51 @@ def replace(
else:
return result.__finalize__(self, method="replace")
+ @overload
+ def interpolate(
+ self,
+ method: InterpolateOptions = ...,
+ *,
+ axis: Axis = ...,
+ limit: int | None = ...,
+ inplace: Literal[False] = ...,
+ limit_direction: Literal["forward", "backward", "both"] | None = ...,
+ limit_area: Literal["inside", "outside"] | None = ...,
+ downcast: Literal["infer"] | None | lib.NoDefault = ...,
+ **kwargs,
+ ) -> Self:
+ ...
+
+ @overload
+ def interpolate(
+ self,
+ method: InterpolateOptions = ...,
+ *,
+ axis: Axis = ...,
+ limit: int | None = ...,
+ inplace: Literal[True],
+ limit_direction: Literal["forward", "backward", "both"] | None = ...,
+ limit_area: Literal["inside", "outside"] | None = ...,
+ downcast: Literal["infer"] | None | lib.NoDefault = ...,
+ **kwargs,
+ ) -> None:
+ ...
+
+ @overload
+ def interpolate(
+ self,
+ method: InterpolateOptions = ...,
+ *,
+ axis: Axis = ...,
+ limit: int | None = ...,
+ inplace: bool_t = ...,
+ limit_direction: Literal["forward", "backward", "both"] | None = ...,
+ limit_area: Literal["inside", "outside"] | None = ...,
+ downcast: Literal["infer"] | None | lib.NoDefault = ...,
+ **kwargs,
+ ) -> Self | None:
+ ...
+
@final
def interpolate(
self,
| This adds overloads so that a type checker can determine whether interpolate returns DataFrame/Series or None based on the value of the inplace argument.
This follows the same pattern as [this pull](https://github.com/pandas-dev/pandas/pull/54281). | https://api.github.com/repos/pandas-dev/pandas/pulls/54999 | 2023-09-04T12:05:08Z | 2023-09-04T14:31:58Z | 2023-09-04T14:31:58Z | 2023-09-04T14:32:06Z |
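The inplace-dependent behavior these overloads encode, sketched on a hypothetical Series:

```python
import numpy as np
import pandas as pd

ser = pd.Series([0.0, np.nan, 2.0])

# Default inplace=False: the overload returning Self applies,
# and linear interpolation fills the gap.
filled = ser.interpolate()
assert filled.tolist() == [0.0, 1.0, 2.0]

# inplace=True: the overload returning None applies.
out = ser.interpolate(inplace=True)
assert out is None
assert ser.tolist() == [0.0, 1.0, 2.0]
```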
ENH: add calamine excel reader (close #50395) | diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index 2190136220c6c..927003b13d6be 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -46,6 +46,7 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
+ - python-calamine>=0.1.6
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/actions-311-downstream_compat.yaml b/ci/deps/actions-311-downstream_compat.yaml
index cf85345cb0cc2..00df41cce3bae 100644
--- a/ci/deps/actions-311-downstream_compat.yaml
+++ b/ci/deps/actions-311-downstream_compat.yaml
@@ -47,6 +47,7 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
+ - python-calamine>=0.1.6
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/actions-311.yaml b/ci/deps/actions-311.yaml
index 3c1630714a041..d50ea20da1e0c 100644
--- a/ci/deps/actions-311.yaml
+++ b/ci/deps/actions-311.yaml
@@ -46,6 +46,7 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
# - pytables>=3.7.0, 3.8.0 is first version that supports 3.11
+ - python-calamine>=0.1.6
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/actions-39-minimum_versions.yaml b/ci/deps/actions-39-minimum_versions.yaml
index b1cea49e22d15..10862630bd596 100644
--- a/ci/deps/actions-39-minimum_versions.yaml
+++ b/ci/deps/actions-39-minimum_versions.yaml
@@ -48,6 +48,7 @@ dependencies:
- pymysql=1.0.2
- pyreadstat=1.1.5
- pytables=3.7.0
+ - python-calamine=0.1.6
- pyxlsb=1.0.9
- s3fs=2022.05.0
- scipy=1.8.1
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index b8a119ece4b03..904b55a813a9f 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -46,6 +46,7 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
+ - python-calamine>=0.1.6
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/ci/deps/circle-310-arm64.yaml b/ci/deps/circle-310-arm64.yaml
index 71686837451b4..4060cea73e7f6 100644
--- a/ci/deps/circle-310-arm64.yaml
+++ b/ci/deps/circle-310-arm64.yaml
@@ -47,6 +47,7 @@ dependencies:
- pymysql>=1.0.2
# - pyreadstat>=1.1.5 not available on ARM
- pytables>=3.7.0
+ - python-calamine>=0.1.6
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index ae7c9d4ea9c62..2c0787397e047 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -281,6 +281,7 @@ xlrd 2.0.1 excel Reading Excel
xlsxwriter 3.0.3 excel Writing Excel
openpyxl 3.0.10 excel Reading / writing for xlsx files
pyxlsb 1.0.9 excel Reading for xlsb files
+python-calamine 0.1.6 excel Reading for xls/xlsx/xlsb/ods files
========================= ================== =============== =============================================================
HTML
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index ecd547c5ff4d6..6bd181740c78d 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -3453,7 +3453,8 @@ Excel files
The :func:`~pandas.read_excel` method can read Excel 2007+ (``.xlsx``) files
using the ``openpyxl`` Python module. Excel 2003 (``.xls``) files
can be read using ``xlrd``. Binary Excel (``.xlsb``)
-files can be read using ``pyxlsb``.
+files can be read using ``pyxlsb``. All formats can be read
+using the :ref:`calamine<io.calamine>` engine.
The :meth:`~DataFrame.to_excel` instance method is used for
saving a ``DataFrame`` to Excel. Generally the semantics are
similar to working with :ref:`csv<io.read_csv_table>` data.
@@ -3494,6 +3495,9 @@ using internally.
* For the engine odf, pandas is using :func:`odf.opendocument.load` to read in (``.ods``) files.
+* For the engine calamine, pandas is using :func:`python_calamine.load_workbook`
+ to read in (``.xlsx``), (``.xlsm``), (``.xls``), (``.xlsb``), (``.ods``) files.
+
.. code-block:: python
# Returns a DataFrame
@@ -3935,7 +3939,8 @@ The :func:`~pandas.read_excel` method can also read binary Excel files
using the ``pyxlsb`` module. The semantics and features for reading
binary Excel files mostly match what can be done for `Excel files`_ using
``engine='pyxlsb'``. ``pyxlsb`` does not recognize datetime types
-in files and will return floats instead.
+in files and will return floats instead (you can use :ref:`calamine<io.calamine>`
+if you need to recognize datetime types).
.. code-block:: python
@@ -3947,6 +3952,20 @@ in files and will return floats instead.
Currently pandas only supports *reading* binary Excel files. Writing
is not implemented.
+.. _io.calamine:
+
+Calamine (Excel and ODS files)
+------------------------------
+
+The :func:`~pandas.read_excel` method can read Excel file (``.xlsx``, ``.xlsm``, ``.xls``, ``.xlsb``)
+and OpenDocument spreadsheets (``.ods``) using the ``python-calamine`` module.
+This module is a binding for the Rust library `calamine <https://crates.io/crates/calamine>`__
+and is faster than other engines in most cases. The optional dependency 'python-calamine' needs to be installed.
+
+.. code-block:: python
+
+ # Returns a DataFrame
+ pd.read_excel("path_to_file.xlsb", engine="calamine")
.. _io.clipboard:
diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 07be496a95adc..249f08c7e387b 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -14,10 +14,27 @@ including other versions of pandas.
Enhancements
~~~~~~~~~~~~
-.. _whatsnew_220.enhancements.enhancement1:
+.. _whatsnew_220.enhancements.calamine:
-enhancement1
-^^^^^^^^^^^^
+Calamine engine for :func:`read_excel`
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``calamine`` engine was added to :func:`read_excel`.
+It uses ``python-calamine``, which provides Python bindings for the Rust library `calamine <https://crates.io/crates/calamine>`__.
+This engine supports Excel files (``.xlsx``, ``.xlsm``, ``.xls``, ``.xlsb``) and OpenDocument spreadsheets (``.ods``) (:issue:`50395`).
+
+There are two advantages of this engine:
+
+1. Calamine is often faster than other engines; some benchmarks show it up to 5x faster than 'openpyxl', 20x faster than 'odf', 4x faster than 'pyxlsb', and 1.5x faster than 'xlrd'.
+ However, 'openpyxl' and 'pyxlsb' are faster when reading only a few rows from large files because of their lazy iteration over rows.
+2. Calamine supports datetime recognition in ``.xlsb`` files, unlike 'pyxlsb', the only other pandas engine that can read ``.xlsb`` files.
+
+.. code-block:: python
+
+ pd.read_excel("path_to_file.xlsb", engine="calamine")
+
+
+For more, see :ref:`io.calamine` in the user guide on IO tools.
.. _whatsnew_220.enhancements.enhancement2:
diff --git a/environment.yml b/environment.yml
index 1a9dffb55bca7..1eb0b4cc2c7a6 100644
--- a/environment.yml
+++ b/environment.yml
@@ -47,6 +47,7 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.5
- pytables>=3.7.0
+ - python-calamine>=0.1.6
- pyxlsb>=1.0.9
- s3fs>=2022.05.0
- scipy>=1.8.1
diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py
index c5792fa1379fe..fa0e9e974ea39 100644
--- a/pandas/compat/_optional.py
+++ b/pandas/compat/_optional.py
@@ -37,6 +37,7 @@
"pyarrow": "7.0.0",
"pyreadstat": "1.1.5",
"pytest": "7.3.2",
+ "python-calamine": "0.1.6",
"pyxlsb": "1.0.9",
"s3fs": "2022.05.0",
"scipy": "1.8.1",
@@ -62,6 +63,7 @@
"lxml.etree": "lxml",
"odf": "odfpy",
"pandas_gbq": "pandas-gbq",
+ "python_calamine": "python-calamine",
"sqlalchemy": "SQLAlchemy",
"tables": "pytables",
}
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index 62455f119a02f..750b374043193 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -513,11 +513,11 @@ def use_inf_as_na_cb(key) -> None:
auto, {others}.
"""
-_xls_options = ["xlrd"]
-_xlsm_options = ["xlrd", "openpyxl"]
-_xlsx_options = ["xlrd", "openpyxl"]
-_ods_options = ["odf"]
-_xlsb_options = ["pyxlsb"]
+_xls_options = ["xlrd", "calamine"]
+_xlsm_options = ["xlrd", "openpyxl", "calamine"]
+_xlsx_options = ["xlrd", "openpyxl", "calamine"]
+_ods_options = ["odf", "calamine"]
+_xlsb_options = ["pyxlsb", "calamine"]
with cf.config_prefix("io.excel.xls"):
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index b4b0f29019c31..073115cab8695 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -159,13 +159,15 @@
of dtype conversion.
engine : str, default None
If io is not a buffer or path, this must be set to identify io.
- Supported engines: "xlrd", "openpyxl", "odf", "pyxlsb".
+ Supported engines: "xlrd", "openpyxl", "odf", "pyxlsb", "calamine".
Engine compatibility :
- "xlrd" supports old-style Excel files (.xls).
- "openpyxl" supports newer Excel file formats.
- "odf" supports OpenDocument file formats (.odf, .ods, .odt).
- "pyxlsb" supports Binary Excel files.
+ - "calamine" supports Excel (.xls, .xlsx, .xlsm, .xlsb)
+ and OpenDocument (.ods) file formats.
.. versionchanged:: 1.2.0
The engine `xlrd <https://xlrd.readthedocs.io/en/latest/>`_
@@ -394,7 +396,7 @@ def read_excel(
| Callable[[str], bool]
| None = ...,
dtype: DtypeArg | None = ...,
- engine: Literal["xlrd", "openpyxl", "odf", "pyxlsb"] | None = ...,
+ engine: Literal["xlrd", "openpyxl", "odf", "pyxlsb", "calamine"] | None = ...,
converters: dict[str, Callable] | dict[int, Callable] | None = ...,
true_values: Iterable[Hashable] | None = ...,
false_values: Iterable[Hashable] | None = ...,
@@ -433,7 +435,7 @@ def read_excel(
| Callable[[str], bool]
| None = ...,
dtype: DtypeArg | None = ...,
- engine: Literal["xlrd", "openpyxl", "odf", "pyxlsb"] | None = ...,
+ engine: Literal["xlrd", "openpyxl", "odf", "pyxlsb", "calamine"] | None = ...,
converters: dict[str, Callable] | dict[int, Callable] | None = ...,
true_values: Iterable[Hashable] | None = ...,
false_values: Iterable[Hashable] | None = ...,
@@ -472,7 +474,7 @@ def read_excel(
| Callable[[str], bool]
| None = None,
dtype: DtypeArg | None = None,
- engine: Literal["xlrd", "openpyxl", "odf", "pyxlsb"] | None = None,
+ engine: Literal["xlrd", "openpyxl", "odf", "pyxlsb", "calamine"] | None = None,
converters: dict[str, Callable] | dict[int, Callable] | None = None,
true_values: Iterable[Hashable] | None = None,
false_values: Iterable[Hashable] | None = None,
@@ -1456,13 +1458,15 @@ class ExcelFile:
.xls, .xlsx, .xlsb, .xlsm, .odf, .ods, or .odt file.
engine : str, default None
If io is not a buffer or path, this must be set to identify io.
- Supported engines: ``xlrd``, ``openpyxl``, ``odf``, ``pyxlsb``
+ Supported engines: ``xlrd``, ``openpyxl``, ``odf``, ``pyxlsb``, ``calamine``
Engine compatibility :
- ``xlrd`` supports old-style Excel files (.xls).
- ``openpyxl`` supports newer Excel file formats.
- ``odf`` supports OpenDocument file formats (.odf, .ods, .odt).
- ``pyxlsb`` supports Binary Excel files.
+ - ``calamine`` supports Excel (.xls, .xlsx, .xlsm, .xlsb)
+ and OpenDocument (.ods) file formats.
.. versionchanged:: 1.2.0
@@ -1498,6 +1502,7 @@ class ExcelFile:
... df1 = pd.read_excel(xls, "Sheet1") # doctest: +SKIP
"""
+ from pandas.io.excel._calamine import CalamineReader
from pandas.io.excel._odfreader import ODFReader
from pandas.io.excel._openpyxl import OpenpyxlReader
from pandas.io.excel._pyxlsb import PyxlsbReader
@@ -1508,6 +1513,7 @@ class ExcelFile:
"openpyxl": OpenpyxlReader,
"odf": ODFReader,
"pyxlsb": PyxlsbReader,
+ "calamine": CalamineReader,
}
def __init__(
diff --git a/pandas/io/excel/_calamine.py b/pandas/io/excel/_calamine.py
new file mode 100644
index 0000000000000..d61a9fc664164
--- /dev/null
+++ b/pandas/io/excel/_calamine.py
@@ -0,0 +1,127 @@
+from __future__ import annotations
+
+from datetime import (
+ date,
+ datetime,
+ time,
+ timedelta,
+)
+from typing import (
+ TYPE_CHECKING,
+ Any,
+ Union,
+ cast,
+)
+
+from pandas._typing import Scalar
+from pandas.compat._optional import import_optional_dependency
+from pandas.util._decorators import doc
+
+import pandas as pd
+from pandas.core.shared_docs import _shared_docs
+
+from pandas.io.excel._base import BaseExcelReader
+
+if TYPE_CHECKING:
+ from python_calamine import (
+ CalamineSheet,
+ CalamineWorkbook,
+ )
+
+ from pandas._typing import (
+ FilePath,
+ ReadBuffer,
+ StorageOptions,
+ )
+
+_CellValueT = Union[int, float, str, bool, time, date, datetime, timedelta]
+
+
+class CalamineReader(BaseExcelReader["CalamineWorkbook"]):
+ @doc(storage_options=_shared_docs["storage_options"])
+ def __init__(
+ self,
+ filepath_or_buffer: FilePath | ReadBuffer[bytes],
+ storage_options: StorageOptions | None = None,
+ engine_kwargs: dict | None = None,
+ ) -> None:
+ """
+ Reader using calamine engine (xlsx/xls/xlsb/ods).
+
+ Parameters
+ ----------
+ filepath_or_buffer : str, path to be parsed or
+ an open readable stream.
+ {storage_options}
+ engine_kwargs : dict, optional
+ Arbitrary keyword arguments passed to excel engine.
+ """
+ import_optional_dependency("python_calamine")
+ super().__init__(
+ filepath_or_buffer,
+ storage_options=storage_options,
+ engine_kwargs=engine_kwargs,
+ )
+
+ @property
+ def _workbook_class(self) -> type[CalamineWorkbook]:
+ from python_calamine import CalamineWorkbook
+
+ return CalamineWorkbook
+
+ def load_workbook(
+ self, filepath_or_buffer: FilePath | ReadBuffer[bytes], engine_kwargs: Any
+ ) -> CalamineWorkbook:
+ from python_calamine import load_workbook
+
+ return load_workbook(
+ filepath_or_buffer, **engine_kwargs # type: ignore[arg-type]
+ )
+
+ @property
+ def sheet_names(self) -> list[str]:
+ from python_calamine import SheetTypeEnum
+
+ return [
+ sheet.name
+ for sheet in self.book.sheets_metadata
+ if sheet.typ == SheetTypeEnum.WorkSheet
+ ]
+
+ def get_sheet_by_name(self, name: str) -> CalamineSheet:
+ self.raise_if_bad_sheet_by_name(name)
+ return self.book.get_sheet_by_name(name)
+
+ def get_sheet_by_index(self, index: int) -> CalamineSheet:
+ self.raise_if_bad_sheet_by_index(index)
+ return self.book.get_sheet_by_index(index)
+
+ def get_sheet_data(
+ self, sheet: CalamineSheet, file_rows_needed: int | None = None
+ ) -> list[list[Scalar]]:
+ def _convert_cell(value: _CellValueT) -> Scalar:
+ if isinstance(value, float):
+ val = int(value)
+ if val == value:
+ return val
+ else:
+ return value
+ elif isinstance(value, date):
+ return pd.Timestamp(value)
+ elif isinstance(value, timedelta):
+ return pd.Timedelta(value)
+ elif isinstance(value, time):
+ # cast needed here because Scalar doesn't include datetime.time
+ return cast(Scalar, value)
+
+ return value
+
+ rows: list[list[_CellValueT]] = sheet.to_python(skip_empty_area=False)
+ data: list[list[Scalar]] = []
+
+ for row in rows:
+ data.append([_convert_cell(cell) for cell in row])
+ if file_rows_needed is not None and len(data) >= file_rows_needed:
+ break
+
+ return data
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index 6db70c894f692..de444019e7b4c 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -54,6 +54,7 @@
),
pytest.param("pyxlsb", marks=td.skip_if_no("pyxlsb")),
pytest.param("odf", marks=td.skip_if_no("odf")),
+ pytest.param("calamine", marks=td.skip_if_no("python_calamine")),
]
@@ -67,11 +68,11 @@ def _is_valid_engine_ext_pair(engine, read_ext: str) -> bool:
return False
if engine == "odf" and read_ext != ".ods":
return False
- if read_ext == ".ods" and engine != "odf":
+ if read_ext == ".ods" and engine not in {"odf", "calamine"}:
return False
if engine == "pyxlsb" and read_ext != ".xlsb":
return False
- if read_ext == ".xlsb" and engine != "pyxlsb":
+ if read_ext == ".xlsb" and engine not in {"pyxlsb", "calamine"}:
return False
if engine == "xlrd" and read_ext != ".xls":
return False
@@ -160,9 +161,9 @@ def test_engine_kwargs(self, read_ext, engine):
"ods": {"foo": "abcd"},
}
- if read_ext[1:] in {"xls", "xlsb"}:
+ if engine in {"xlrd", "pyxlsb"}:
msg = re.escape(r"open_workbook() got an unexpected keyword argument 'foo'")
- elif read_ext[1:] == "ods":
+ elif engine == "odf":
msg = re.escape(r"load() got an unexpected keyword argument 'foo'")
else:
msg = re.escape(r"load_workbook() got an unexpected keyword argument 'foo'")
@@ -194,8 +195,8 @@ def test_usecols_int(self, read_ext):
usecols=3,
)
- def test_usecols_list(self, request, read_ext, df_ref):
- if read_ext == ".xlsb":
+ def test_usecols_list(self, request, engine, read_ext, df_ref):
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -218,8 +219,8 @@ def test_usecols_list(self, request, read_ext, df_ref):
tm.assert_frame_equal(df1, df_ref, check_names=False)
tm.assert_frame_equal(df2, df_ref, check_names=False)
- def test_usecols_str(self, request, read_ext, df_ref):
- if read_ext == ".xlsb":
+ def test_usecols_str(self, request, engine, read_ext, df_ref):
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -275,9 +276,9 @@ def test_usecols_str(self, request, read_ext, df_ref):
"usecols", [[0, 1, 3], [0, 3, 1], [1, 0, 3], [1, 3, 0], [3, 0, 1], [3, 1, 0]]
)
def test_usecols_diff_positional_int_columns_order(
- self, request, read_ext, usecols, df_ref
+ self, request, engine, read_ext, usecols, df_ref
):
- if read_ext == ".xlsb":
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -298,8 +299,8 @@ def test_usecols_diff_positional_str_columns_order(self, read_ext, usecols, df_r
result = pd.read_excel("test1" + read_ext, sheet_name="Sheet1", usecols=usecols)
tm.assert_frame_equal(result, expected, check_names=False)
- def test_read_excel_without_slicing(self, request, read_ext, df_ref):
- if read_ext == ".xlsb":
+ def test_read_excel_without_slicing(self, request, engine, read_ext, df_ref):
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -310,8 +311,8 @@ def test_read_excel_without_slicing(self, request, read_ext, df_ref):
result = pd.read_excel("test1" + read_ext, sheet_name="Sheet1", index_col=0)
tm.assert_frame_equal(result, expected, check_names=False)
- def test_usecols_excel_range_str(self, request, read_ext, df_ref):
- if read_ext == ".xlsb":
+ def test_usecols_excel_range_str(self, request, engine, read_ext, df_ref):
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -398,20 +399,26 @@ def test_excel_stop_iterator(self, read_ext):
expected = DataFrame([["aaaa", "bbbbb"]], columns=["Test", "Test1"])
tm.assert_frame_equal(parsed, expected)
- def test_excel_cell_error_na(self, request, read_ext):
- if read_ext == ".xlsb":
+ def test_excel_cell_error_na(self, request, engine, read_ext):
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
)
)
+ # https://github.com/tafia/calamine/issues/355
+ if engine == "calamine" and read_ext == ".ods":
+ request.node.add_marker(
+ pytest.mark.xfail(reason="Calamine can't extract error from ods files")
+ )
+
parsed = pd.read_excel("test3" + read_ext, sheet_name="Sheet1")
expected = DataFrame([[np.nan]], columns=["Test"])
tm.assert_frame_equal(parsed, expected)
- def test_excel_table(self, request, read_ext, df_ref):
- if read_ext == ".xlsb":
+ def test_excel_table(self, request, engine, read_ext, df_ref):
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -431,8 +438,8 @@ def test_excel_table(self, request, read_ext, df_ref):
)
tm.assert_frame_equal(df3, df1.iloc[:-1])
- def test_reader_special_dtypes(self, request, read_ext):
- if read_ext == ".xlsb":
+ def test_reader_special_dtypes(self, request, engine, read_ext):
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -571,11 +578,17 @@ def test_reader_dtype_str(self, read_ext, dtype, expected):
actual = pd.read_excel(basename + read_ext, dtype=dtype)
tm.assert_frame_equal(actual, expected)
- def test_dtype_backend(self, read_ext, dtype_backend):
+ def test_dtype_backend(self, request, engine, read_ext, dtype_backend):
# GH#36712
if read_ext in (".xlsb", ".xls"):
pytest.skip(f"No engine for filetype: '{read_ext}'")
+ # GH 54994
+ if engine == "calamine" and read_ext == ".ods":
+ request.node.add_marker(
+ pytest.mark.xfail(reason="OdsWriter produces broken file")
+ )
+
df = DataFrame(
{
"a": Series([1, 3], dtype="Int64"),
@@ -616,11 +629,17 @@ def test_dtype_backend(self, read_ext, dtype_backend):
expected = df
tm.assert_frame_equal(result, expected)
- def test_dtype_backend_and_dtype(self, read_ext):
+ def test_dtype_backend_and_dtype(self, request, engine, read_ext):
# GH#36712
if read_ext in (".xlsb", ".xls"):
pytest.skip(f"No engine for filetype: '{read_ext}'")
+ # GH 54994
+ if engine == "calamine" and read_ext == ".ods":
+ request.node.add_marker(
+ pytest.mark.xfail(reason="OdsWriter produces broken file")
+ )
+
df = DataFrame({"a": [np.nan, 1.0], "b": [2.5, np.nan]})
with tm.ensure_clean(read_ext) as file_path:
df.to_excel(file_path, sheet_name="test", index=False)
@@ -632,11 +651,17 @@ def test_dtype_backend_and_dtype(self, read_ext):
)
tm.assert_frame_equal(result, df)
- def test_dtype_backend_string(self, read_ext, string_storage):
+ def test_dtype_backend_string(self, request, engine, read_ext, string_storage):
# GH#36712
if read_ext in (".xlsb", ".xls"):
pytest.skip(f"No engine for filetype: '{read_ext}'")
+ # GH 54994
+ if engine == "calamine" and read_ext == ".ods":
+ request.node.add_marker(
+ pytest.mark.xfail(reason="OdsWriter produces broken file")
+ )
+
pa = pytest.importorskip("pyarrow")
with pd.option_context("mode.string_storage", string_storage):
@@ -800,8 +825,8 @@ def test_date_conversion_overflow(self, request, engine, read_ext):
result = pd.read_excel("testdateoverflow" + read_ext)
tm.assert_frame_equal(result, expected)
- def test_sheet_name(self, request, read_ext, df_ref):
- if read_ext == ".xlsb":
+ def test_sheet_name(self, request, read_ext, engine, df_ref):
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -869,6 +894,11 @@ def test_corrupt_bytes_raises(self, engine):
"Unsupported format, or corrupt file: Expected BOF "
"record; found b'foo'"
)
+ elif engine == "calamine":
+ from python_calamine import CalamineError
+
+ error = CalamineError
+ msg = "Cannot detect file format"
else:
error = BadZipFile
msg = "File is not a zip file"
@@ -969,6 +999,14 @@ def test_reader_seconds(self, request, engine, read_ext):
)
)
+ # GH 55045
+ if engine == "calamine" and read_ext == ".ods":
+ request.node.add_marker(
+ pytest.mark.xfail(
+ reason="ODS file contains bad datetime (seconds as text)"
+ )
+ )
+
# Test reading times with and without milliseconds. GH5945.
expected = DataFrame.from_dict(
{
@@ -994,15 +1032,21 @@ def test_reader_seconds(self, request, engine, read_ext):
actual = pd.read_excel("times_1904" + read_ext, sheet_name="Sheet1")
tm.assert_frame_equal(actual, expected)
- def test_read_excel_multiindex(self, request, read_ext):
+ def test_read_excel_multiindex(self, request, engine, read_ext):
# see gh-4679
- if read_ext == ".xlsb":
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
)
)
+ # https://github.com/tafia/calamine/issues/354
+ if engine == "calamine" and read_ext == ".ods":
+ request.node.add_marker(
+ pytest.mark.xfail(reason="Last test fails in calamine")
+ )
+
mi = MultiIndex.from_product([["foo", "bar"], ["a", "b"]])
mi_file = "testmultiindex" + read_ext
@@ -1088,10 +1132,10 @@ def test_read_excel_multiindex(self, request, read_ext):
],
)
def test_read_excel_multiindex_blank_after_name(
- self, request, read_ext, sheet_name, idx_lvl2
+ self, request, engine, read_ext, sheet_name, idx_lvl2
):
# GH34673
- if read_ext == ".xlsb":
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb (GH4679"
@@ -1212,9 +1256,9 @@ def test_read_excel_bool_header_arg(self, read_ext):
with pytest.raises(TypeError, match=msg):
pd.read_excel("test1" + read_ext, header=arg)
- def test_read_excel_skiprows(self, request, read_ext):
+ def test_read_excel_skiprows(self, request, engine, read_ext):
# GH 4903
- if read_ext == ".xlsb":
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -1267,9 +1311,9 @@ def test_read_excel_skiprows(self, request, read_ext):
)
tm.assert_frame_equal(actual, expected)
- def test_read_excel_skiprows_callable_not_in(self, request, read_ext):
+ def test_read_excel_skiprows_callable_not_in(self, request, engine, read_ext):
# GH 4903
- if read_ext == ".xlsb":
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -1397,7 +1441,7 @@ def test_trailing_blanks(self, read_ext):
def test_ignore_chartsheets_by_str(self, request, engine, read_ext):
# GH 41448
- if engine == "odf":
+ if read_ext == ".ods":
pytest.skip("chartsheets do not exist in the ODF format")
if engine == "pyxlsb":
request.node.add_marker(
@@ -1410,7 +1454,7 @@ def test_ignore_chartsheets_by_str(self, request, engine, read_ext):
def test_ignore_chartsheets_by_int(self, request, engine, read_ext):
# GH 41448
- if engine == "odf":
+ if read_ext == ".ods":
pytest.skip("chartsheets do not exist in the ODF format")
if engine == "pyxlsb":
request.node.add_marker(
@@ -1540,8 +1584,8 @@ def test_excel_passes_na_filter(self, read_ext, na_filter):
expected = DataFrame(expected, columns=["Test"])
tm.assert_frame_equal(parsed, expected)
- def test_excel_table_sheet_by_index(self, request, read_ext, df_ref):
- if read_ext == ".xlsb":
+ def test_excel_table_sheet_by_index(self, request, engine, read_ext, df_ref):
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -1569,8 +1613,8 @@ def test_excel_table_sheet_by_index(self, request, read_ext, df_ref):
tm.assert_frame_equal(df3, df1.iloc[:-1])
- def test_sheet_name(self, request, read_ext, df_ref):
- if read_ext == ".xlsb":
+ def test_sheet_name(self, request, engine, read_ext, df_ref):
+ if engine == "pyxlsb":
request.node.add_marker(
pytest.mark.xfail(
reason="Sheets containing datetimes not supported by pyxlsb"
@@ -1639,7 +1683,7 @@ def test_excel_read_binary(self, engine, read_ext):
def test_excel_read_binary_via_read_excel(self, read_ext, engine):
# GH 38424
with open("test1" + read_ext, "rb") as f:
- result = pd.read_excel(f)
+ result = pd.read_excel(f, engine=engine)
expected = pd.read_excel("test1" + read_ext, engine=engine)
tm.assert_frame_equal(result, expected)
@@ -1691,7 +1735,7 @@ def test_engine_invalid_option(self, read_ext):
def test_ignore_chartsheets(self, request, engine, read_ext):
# GH 41448
- if engine == "odf":
+ if read_ext == ".ods":
pytest.skip("chartsheets do not exist in the ODF format")
if engine == "pyxlsb":
request.node.add_marker(
@@ -1711,6 +1755,10 @@ def test_corrupt_files_closed(self, engine, read_ext):
import xlrd
errors = (BadZipFile, xlrd.biffh.XLRDError)
+ elif engine == "calamine":
+ from python_calamine import CalamineError
+
+ errors = (CalamineError,)
with tm.ensure_clean(f"corrupt{read_ext}") as file:
Path(file).write_text("corrupt", encoding="utf-8")
diff --git a/pyproject.toml b/pyproject.toml
index 74d6aaee286a9..9e579036c128b 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -69,7 +69,7 @@ computation = ['scipy>=1.8.1', 'xarray>=2022.03.0']
fss = ['fsspec>=2022.05.0']
aws = ['s3fs>=2022.05.0']
gcp = ['gcsfs>=2022.05.0', 'pandas-gbq>=0.17.5']
-excel = ['odfpy>=1.4.1', 'openpyxl>=3.0.10', 'pyxlsb>=1.0.9', 'xlrd>=2.0.1', 'xlsxwriter>=3.0.3']
+excel = ['odfpy>=1.4.1', 'openpyxl>=3.0.10', 'python-calamine>=0.1.6', 'pyxlsb>=1.0.9', 'xlrd>=2.0.1', 'xlsxwriter>=3.0.3']
parquet = ['pyarrow>=7.0.0']
feather = ['pyarrow>=7.0.0']
hdf5 = [# blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)
@@ -112,6 +112,7 @@ all = ['beautifulsoup4>=4.11.1',
'pytest>=7.3.2',
'pytest-xdist>=2.2.0',
'pytest-asyncio>=0.17.0',
+ 'python-calamine>=0.1.6',
'pyxlsb>=1.0.9',
'qtpy>=2.2.0',
'scipy>=1.8.1',
diff --git a/requirements-dev.txt b/requirements-dev.txt
index be02007a36333..ef3587b10d416 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -36,6 +36,7 @@ pyarrow>=7.0.0
pymysql>=1.0.2
pyreadstat>=1.1.5
tables>=3.7.0
+python-calamine>=0.1.6
pyxlsb>=1.0.9
s3fs>=2022.05.0
scipy>=1.8.1
diff --git a/scripts/tests/data/deps_expected_random.yaml b/scripts/tests/data/deps_expected_random.yaml
index c70025f8f019d..1ede20f5cc0d8 100644
--- a/scripts/tests/data/deps_expected_random.yaml
+++ b/scripts/tests/data/deps_expected_random.yaml
@@ -44,6 +44,7 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.2
- pytables>=3.6.1
+ - python-calamine>=0.1.6
- pyxlsb>=1.0.8
- s3fs>=2021.08.0
- scipy>=1.7.1
diff --git a/scripts/tests/data/deps_minimum.toml b/scripts/tests/data/deps_minimum.toml
index b43815a982139..501ec4f061f17 100644
--- a/scripts/tests/data/deps_minimum.toml
+++ b/scripts/tests/data/deps_minimum.toml
@@ -62,7 +62,7 @@ computation = ['scipy>=1.7.1', 'xarray>=0.21.0']
fss = ['fsspec>=2021.07.0']
aws = ['s3fs>=2021.08.0']
gcp = ['gcsfs>=2021.07.0', 'pandas-gbq>=0.15.0']
-excel = ['odfpy>=1.4.1', 'openpyxl>=3.0.7', 'pyxlsb>=1.0.8', 'xlrd>=2.0.1', 'xlsxwriter>=1.4.3']
+excel = ['odfpy>=1.4.1', 'openpyxl>=3.0.7', 'python-calamine>=0.1.6', 'pyxlsb>=1.0.8', 'xlrd>=2.0.1', 'xlsxwriter>=1.4.3']
parquet = ['pyarrow>=7.0.0']
feather = ['pyarrow>=7.0.0']
hdf5 = [# blosc only available on conda (https://github.com/Blosc/python-blosc/issues/297)
@@ -103,6 +103,7 @@ all = ['beautifulsoup4>=5.9.3',
'pytest>=7.3.2',
'pytest-xdist>=2.2.0',
'pytest-asyncio>=0.17.0',
+ 'python-calamine>=0.1.6',
'pyxlsb>=1.0.8',
'qtpy>=2.2.0',
'scipy>=1.7.1',
diff --git a/scripts/tests/data/deps_unmodified_random.yaml b/scripts/tests/data/deps_unmodified_random.yaml
index 503eb3c7c7734..14bedd1025bf8 100644
--- a/scripts/tests/data/deps_unmodified_random.yaml
+++ b/scripts/tests/data/deps_unmodified_random.yaml
@@ -44,6 +44,7 @@ dependencies:
- pymysql>=1.0.2
- pyreadstat>=1.1.2
- pytables>=3.6.1
+ - python-calamine>=0.1.6
- pyxlsb>=1.0.8
- s3fs>=2021.08.0
- scipy>=1.7.1
| - [x] closes #50395
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests)
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Based on #50581, which was closed as stale in favor of #53005, which was in turn rejected. Calamine now supports datetimes and sheet types, which is more than pyxlsb supports, for example.
A few tests currently fail with ods:
- #54994 (5 tests);
- #55045 (1 test) - a bad ods file in test_reader_seconds and, I think, a [bug](https://github.com/pandas-dev/pandas/blob/main/pandas/io/excel/_odfreader.py#L226) in the OdsReader (see `office:time-value` in the [specification](http://docs.oasis-open.org/office/v1.2/os/OpenDocument-v1.2-os-part1.html#refTable13));
- two bugs in calamine, which can be fixed in the future (one of them, a problem with the specification, is worked around with a [hack](https://github.com/pandas-dev/pandas/blob/main/pandas/io/excel/_odfreader.py#L198) in the ODFReader, for example).
My English is not good enough for writing documentation, and I would be happy to get any help with it.
cc @kostyafarber | https://api.github.com/repos/pandas-dev/pandas/pulls/54998 | 2023-09-04T12:03:34Z | 2023-09-12T20:34:57Z | 2023-09-12T20:34:57Z | 2023-09-13T00:58:17Z |
BUG: is_string_dtype(pd.Index([], dtype='O')) returns False | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index fa3cef6d9457d..2198a3107d297 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -253,6 +253,7 @@ Bug fixes
~~~~~~~~~
- Bug in :class:`AbstractHolidayCalendar` where timezone data was not propagated when computing holiday observances (:issue:`54580`)
- Bug in :class:`pandas.core.window.Rolling` where duplicate datetimelike indexes are treated as consecutive rather than equal with ``closed='left'`` and ``closed='neither'`` (:issue:`20712`)
+- Bug in :func:`pandas.api.types.is_string_dtype` while checking object array with no elements is of the string dtype (:issue:`54661`)
- Bug in :meth:`DataFrame.apply` where passing ``raw=True`` ignored ``args`` passed to the applied function (:issue:`55009`)
- Bug in :meth:`pandas.read_excel` with a ODS file without cached formatted cell for float values (:issue:`55219`)
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 9da4eac6a42c8..42e909a6b9856 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -1664,9 +1664,12 @@ def is_all_strings(value: ArrayLike) -> bool:
dtype = value.dtype
if isinstance(dtype, np.dtype):
- return dtype == np.dtype("object") and lib.is_string_array(
- np.asarray(value), skipna=False
- )
+ if len(value) == 0:
+ return dtype == np.dtype("object")
+ else:
+ return dtype == np.dtype("object") and lib.is_string_array(
+ np.asarray(value), skipna=False
+ )
elif isinstance(dtype, CategoricalDtype):
return dtype.categories.inferred_type == "string"
return dtype == "string"
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 4507857418e9e..6f6cc5a5ad5d8 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -301,14 +301,23 @@ def test_is_categorical_dtype():
assert com.is_categorical_dtype(pd.CategoricalIndex([1, 2, 3]))
-def test_is_string_dtype():
- assert not com.is_string_dtype(int)
- assert not com.is_string_dtype(pd.Series([1, 2]))
-
- assert com.is_string_dtype(str)
- assert com.is_string_dtype(object)
- assert com.is_string_dtype(np.array(["a", "b"]))
- assert com.is_string_dtype(pd.StringDtype())
+@pytest.mark.parametrize(
+ "dtype, expected",
+ [
+ (int, False),
+ (pd.Series([1, 2]), False),
+ (str, True),
+ (object, True),
+ (np.array(["a", "b"]), True),
+ (pd.StringDtype(), True),
+ (pd.Index([], dtype="O"), True),
+ ],
+)
+def test_is_string_dtype(dtype, expected):
+ # GH#54661
+
+ result = com.is_string_dtype(dtype)
+ assert result is expected
@pytest.mark.parametrize(
| - [x] closes #54661
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests)
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit)
- [x] Added an entry in `doc/source/whatsnew/v2.2.0.rst`
Corrected the definition of `is_all_strings`. Now, for object arrays with no items, `is_string_dtype` returns `True` instead of `False`. | https://api.github.com/repos/pandas-dev/pandas/pulls/54997 | 2023-09-04T11:23:34Z | 2023-10-04T15:46:01Z | 2023-10-04T15:46:01Z | 2023-10-04T15:46:08Z
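A quick sketch of the behavior this PR touches; the empty-Index case reflects the fix, while the other cases are unchanged:

```python
import numpy as np
import pandas as pd
from pandas.api.types import is_string_dtype

# Cases that were already correct
assert not is_string_dtype(int)
assert is_string_dtype(str)
assert is_string_dtype(np.array(["a", "b"]))
assert is_string_dtype(pd.StringDtype())

# The case this PR fixes: an empty object-dtype Index is string-like
print(is_string_dtype(pd.Index([], dtype="O")))  # True with the fix
```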
BUG: boolean/string value in OdsWriter (#54994) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 249f08c7e387b..08ba9e7437f04 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -248,6 +248,7 @@ I/O
^^^
- Bug in :func:`read_csv` where ``on_bad_lines="warn"`` would write to ``stderr`` instead of raise a Python warning. This now yields a :class:`.errors.ParserWarning` (:issue:`54296`)
- Bug in :func:`read_excel`, with ``engine="xlrd"`` (``xls`` files) erroring when file contains NaNs/Infs (:issue:`54564`)
+- Bug in :func:`to_excel`, with ``OdsWriter`` (``ods`` files) writing boolean/string value (:issue:`54994`)
Period
^^^^^^
diff --git a/pandas/io/excel/_odswriter.py b/pandas/io/excel/_odswriter.py
index 74cbe90acdae8..bc7dca2d95b6b 100644
--- a/pandas/io/excel/_odswriter.py
+++ b/pandas/io/excel/_odswriter.py
@@ -192,7 +192,15 @@ def _make_table_cell(self, cell) -> tuple[object, Any]:
if isinstance(val, bool):
value = str(val).lower()
pvalue = str(val).upper()
- if isinstance(val, datetime.datetime):
+ return (
+ pvalue,
+ TableCell(
+ valuetype="boolean",
+ booleanvalue=value,
+ attributes=attributes,
+ ),
+ )
+ elif isinstance(val, datetime.datetime):
# Fast formatting
value = val.isoformat()
# Slow but locale-dependent
@@ -210,17 +218,20 @@ def _make_table_cell(self, cell) -> tuple[object, Any]:
pvalue,
TableCell(valuetype="date", datevalue=value, attributes=attributes),
)
+ elif isinstance(val, str):
+ return (
+ pvalue,
+ TableCell(
+ valuetype="string",
+ stringvalue=value,
+ attributes=attributes,
+ ),
+ )
else:
- class_to_cell_type = {
- str: "string",
- int: "float",
- float: "float",
- bool: "boolean",
- }
return (
pvalue,
TableCell(
- valuetype=class_to_cell_type[type(val)],
+ valuetype="float",
value=value,
attributes=attributes,
),
diff --git a/pandas/tests/io/excel/test_odswriter.py b/pandas/tests/io/excel/test_odswriter.py
index 21d31ec8a7fb5..ecee58362f8a9 100644
--- a/pandas/tests/io/excel/test_odswriter.py
+++ b/pandas/tests/io/excel/test_odswriter.py
@@ -1,7 +1,12 @@
+from datetime import (
+ date,
+ datetime,
+)
import re
import pytest
+import pandas as pd
import pandas._testing as tm
from pandas.io.excel import ExcelWriter
@@ -47,3 +52,47 @@ def test_book_and_sheets_consistent(ext):
table = odf.table.Table(name="test_name")
writer.book.spreadsheet.addElement(table)
assert writer.sheets == {"test_name": table}
+
+
+@pytest.mark.parametrize(
+ ["value", "cell_value_type", "cell_value_attribute", "cell_value"],
+ argvalues=[
+ (True, "boolean", "boolean-value", "true"),
+ ("test string", "string", "string-value", "test string"),
+ (1, "float", "value", "1"),
+ (1.5, "float", "value", "1.5"),
+ (
+ datetime(2010, 10, 10, 10, 10, 10),
+ "date",
+ "date-value",
+ "2010-10-10T10:10:10",
+ ),
+ (date(2010, 10, 10), "date", "date-value", "2010-10-10"),
+ ],
+)
+def test_cell_value_type(ext, value, cell_value_type, cell_value_attribute, cell_value):
+ # GH#54994 ODS: cell attributes should follow specification
+ # http://docs.oasis-open.org/office/v1.2/os/OpenDocument-v1.2-os-part1.html#refTable13
+ from odf.namespaces import OFFICENS
+ from odf.table import (
+ TableCell,
+ TableRow,
+ )
+
+ table_cell_name = TableCell().qname
+
+ with tm.ensure_clean(ext) as f:
+ pd.DataFrame([[value]]).to_excel(f, header=False, index=False)
+
+ with pd.ExcelFile(f) as wb:
+ sheet = wb._reader.get_sheet_by_index(0)
+ sheet_rows = sheet.getElementsByType(TableRow)
+ sheet_cells = [
+ x
+ for x in sheet_rows[0].childNodes
+ if hasattr(x, "qname") and x.qname == table_cell_name
+ ]
+
+ cell = sheet_cells[0]
+ assert cell.attributes.get((OFFICENS, "value-type")) == cell_value_type
+ assert cell.attributes.get((OFFICENS, cell_value_attribute)) == cell_value
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index de444019e7b4c..8dd9f96a05a90 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -578,17 +578,11 @@ def test_reader_dtype_str(self, read_ext, dtype, expected):
actual = pd.read_excel(basename + read_ext, dtype=dtype)
tm.assert_frame_equal(actual, expected)
- def test_dtype_backend(self, request, engine, read_ext, dtype_backend):
+ def test_dtype_backend(self, read_ext, dtype_backend):
# GH#36712
if read_ext in (".xlsb", ".xls"):
pytest.skip(f"No engine for filetype: '{read_ext}'")
- # GH 54994
- if engine == "calamine" and read_ext == ".ods":
- request.node.add_marker(
- pytest.mark.xfail(reason="OdsWriter produces broken file")
- )
-
df = DataFrame(
{
"a": Series([1, 3], dtype="Int64"),
@@ -629,17 +623,11 @@ def test_dtype_backend(self, request, engine, read_ext, dtype_backend):
expected = df
tm.assert_frame_equal(result, expected)
- def test_dtype_backend_and_dtype(self, request, engine, read_ext):
+ def test_dtype_backend_and_dtype(self, read_ext):
# GH#36712
if read_ext in (".xlsb", ".xls"):
pytest.skip(f"No engine for filetype: '{read_ext}'")
- # GH 54994
- if engine == "calamine" and read_ext == ".ods":
- request.node.add_marker(
- pytest.mark.xfail(reason="OdsWriter produces broken file")
- )
-
df = DataFrame({"a": [np.nan, 1.0], "b": [2.5, np.nan]})
with tm.ensure_clean(read_ext) as file_path:
df.to_excel(file_path, sheet_name="test", index=False)
@@ -651,17 +639,11 @@ def test_dtype_backend_and_dtype(self, request, engine, read_ext):
)
tm.assert_frame_equal(result, df)
- def test_dtype_backend_string(self, request, engine, read_ext, string_storage):
+ def test_dtype_backend_string(self, read_ext, string_storage):
# GH#36712
if read_ext in (".xlsb", ".xls"):
pytest.skip(f"No engine for filetype: '{read_ext}'")
- # GH 54994
- if engine == "calamine" and read_ext == ".ods":
- request.node.add_marker(
- pytest.mark.xfail(reason="OdsWriter produces broken file")
- )
-
pa = pytest.importorskip("pyarrow")
with pd.option_context("mode.string_storage", string_storage):
| - [x] closes #54994
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests)
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54996 | 2023-09-04T11:22:46Z | 2023-09-13T20:05:09Z | 2023-09-13T20:05:09Z | 2023-09-14T11:25:20Z |
Backport PR #54985 on branch 2.1.x (REGR: rountripping datetime through sqlite doesn't work) | diff --git a/doc/source/whatsnew/v2.1.1.rst b/doc/source/whatsnew/v2.1.1.rst
index a6848dad6e3cd..11b19b1508a71 100644
--- a/doc/source/whatsnew/v2.1.1.rst
+++ b/doc/source/whatsnew/v2.1.1.rst
@@ -18,6 +18,7 @@ Fixed regressions
- Fixed regression in :func:`read_csv` when ``delim_whitespace`` is True (:issue:`54918`, :issue:`54931`)
- Fixed regression in :meth:`.GroupBy.get_group` raising for ``axis=1`` (:issue:`54858`)
- Fixed regression in :meth:`DataFrame.__setitem__` raising ``AssertionError`` when setting a :class:`Series` with a partial :class:`MultiIndex` (:issue:`54875`)
+- Fixed regression in :meth:`DataFrame.to_sql` not roundtripping datetime columns correctly for sqlite (:issue:`54877`)
- Fixed regression in :meth:`MultiIndex.append` raising when appending overlapping :class:`IntervalIndex` levels (:issue:`54934`)
- Fixed regression in :meth:`Series.drop_duplicates` for PyArrow strings (:issue:`54904`)
- Fixed regression in :meth:`Series.value_counts` raising for numeric data if ``bins`` was specified (:issue:`54857`)
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 7669d5aa4cea5..2b139f8ca527c 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -2091,13 +2091,11 @@ def _adapt_time(t) -> str:
adapt_date_iso = lambda val: val.isoformat()
adapt_datetime_iso = lambda val: val.isoformat()
- adapt_datetime_epoch = lambda val: int(val.timestamp())
sqlite3.register_adapter(time, _adapt_time)
sqlite3.register_adapter(date, adapt_date_iso)
sqlite3.register_adapter(datetime, adapt_datetime_iso)
- sqlite3.register_adapter(datetime, adapt_datetime_epoch)
convert_date = lambda val: date.fromisoformat(val.decode())
convert_datetime = lambda val: datetime.fromisoformat(val.decode())
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 9ec0ba0b12a76..bfa93a4ff910e 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -2962,6 +2962,13 @@ def test_read_sql_string_inference(self):
tm.assert_frame_equal(result, expected)
+ def test_roundtripping_datetimes(self):
+ # GH#54877
+ df = DataFrame({"t": [datetime(2020, 12, 31, 12)]}, dtype="datetime64[ns]")
+ df.to_sql("test", self.conn, if_exists="replace", index=False)
+ result = pd.read_sql("select * from test", self.conn).iloc[0, 0]
+ assert result == "2020-12-31 12:00:00.000000"
+
@pytest.mark.db
class TestMySQLAlchemy(_TestSQLAlchemy):
| Backport PR #54985: REGR: rountripping datetime through sqlite doesn't work | https://api.github.com/repos/pandas-dev/pandas/pulls/54993 | 2023-09-04T09:11:38Z | 2023-09-05T18:07:52Z | 2023-09-05T18:07:52Z | 2023-09-05T18:07:52Z |
DOC: Grammatically updated the tech docs | diff --git a/doc/source/getting_started/intro_tutorials/01_table_oriented.rst b/doc/source/getting_started/intro_tutorials/01_table_oriented.rst
index 2dcc8b0abe3b8..caaff3557ae40 100644
--- a/doc/source/getting_started/intro_tutorials/01_table_oriented.rst
+++ b/doc/source/getting_started/intro_tutorials/01_table_oriented.rst
@@ -106,9 +106,9 @@ between square brackets ``[]``.
</ul>
.. note::
- If you are familiar to Python
+ If you are familiar with Python
:ref:`dictionaries <python:tut-dictionaries>`, the selection of a
- single column is very similar to selection of dictionary values based on
+ single column is very similar to the selection of dictionary values based on
the key.
You can create a ``Series`` from scratch as well:
| -Added an entry in doc/source/getting_started/into_tutorials/01_table_oriented.rst
- Changed the highlighted part in the attached screenshot.

| https://api.github.com/repos/pandas-dev/pandas/pulls/54989 | 2023-09-04T02:00:13Z | 2023-09-05T18:09:45Z | 2023-09-05T18:09:45Z | 2023-09-05T18:09:53Z |
DOC: expanded pandas.DataFrame.to_sql docstring | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b9407ebe6624a..d4cb04d7b6ead 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2847,7 +2847,7 @@ def to_sql(
index : bool, default True
Write DataFrame index as a column. Uses `index_label` as the column
- name in the table.
+ name in the table. Creates a table index for this column.
index_label : str or sequence, default None
Column label for index column(s). If None is given (default) and
`index` is True, then the index names are used.
| - [x] closes #54712
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Implements the suggested fix from @mattatark to `pandas.DataFrame.to_sql`
| https://api.github.com/repos/pandas-dev/pandas/pulls/54988 | 2023-09-03T23:22:19Z | 2023-09-05T18:12:13Z | 2023-09-05T18:12:13Z | 2023-09-05T18:12:20Z |
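A minimal sketch of the documented behavior: with `index=True` (the default), `to_sql` writes the DataFrame index into the table as a column, named `"index"` when the index is unnamed:

```python
import sqlite3

import pandas as pd

# With index=True (the default), the unnamed DataFrame index is written
# to the table as a column called "index", per the clarified docstring.
conn = sqlite3.connect(":memory:")
pd.DataFrame({"a": [10, 20]}).to_sql("demo", conn, index=True)
result = pd.read_sql("SELECT * FROM demo", conn)
print(list(result.columns))  # ['index', 'a']
conn.close()
```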
DOC: fix an example in whatsnew/v0.15.2.rst | diff --git a/doc/source/whatsnew/v0.15.2.rst b/doc/source/whatsnew/v0.15.2.rst
index bb7beef449d93..acc5409b86d09 100644
--- a/doc/source/whatsnew/v0.15.2.rst
+++ b/doc/source/whatsnew/v0.15.2.rst
@@ -24,25 +24,61 @@ API changes
- Indexing in ``MultiIndex`` beyond lex-sort depth is now supported, though
a lexically sorted index will have a better performance. (:issue:`2646`)
- .. ipython:: python
- :okexcept:
- :okwarning:
+ .. code-block:: ipython
+
+ In [1]: df = pd.DataFrame({'jim':[0, 0, 1, 1],
+ ...: 'joe':['x', 'x', 'z', 'y'],
+ ...: 'jolie':np.random.rand(4)}).set_index(['jim', 'joe'])
+ ...:
- df = pd.DataFrame({'jim':[0, 0, 1, 1],
- 'joe':['x', 'x', 'z', 'y'],
- 'jolie':np.random.rand(4)}).set_index(['jim', 'joe'])
- df
- df.index.lexsort_depth
+ In [2]: df
+ Out[2]:
+ jolie
+ jim joe
+ 0 x 0.126970
+ x 0.966718
+ 1 z 0.260476
+ y 0.897237
+
+ [4 rows x 1 columns]
+
+ In [3]: df.index.lexsort_depth
+ Out[3]: 1
# in prior versions this would raise a KeyError
# will now show a PerformanceWarning
- df.loc[(1, 'z')]
+ In [4]: df.loc[(1, 'z')]
+ Out[4]:
+ jolie
+ jim joe
+ 1 z 0.260476
+
+ [1 rows x 1 columns]
# lexically sorting
- df2 = df.sort_index()
- df2
- df2.index.lexsort_depth
- df2.loc[(1,'z')]
+ In [5]: df2 = df.sort_index()
+
+ In [6]: df2
+ Out[6]:
+ jolie
+ jim joe
+ 0 x 0.126970
+ x 0.966718
+ 1 y 0.897237
+ z 0.260476
+
+ [4 rows x 1 columns]
+
+ In [7]: df2.index.lexsort_depth
+ Out[7]: 2
+
+ In [8]: df2.loc[(1,'z')]
+ Out[8]:
+ jolie
+ jim joe
+ 1 z 0.260476
+
+ [1 rows x 1 columns]
- Bug in unique of Series with ``category`` dtype, which returned all categories regardless
whether they were "used" or not (see :issue:`8559` for the discussion).
| Fix an example in doc/source/whatsnew/v0.15.2.rst, which raises an AttributeError (see https://pandas.pydata.org/docs/dev/whatsnew/v0.15.2.html#api-changes) | https://api.github.com/repos/pandas-dev/pandas/pulls/54986 | 2023-09-03T19:06:16Z | 2023-09-04T09:52:00Z | 2023-09-04T09:52:00Z | 2023-09-04T09:52:01Z |
REGR: rountripping datetime through sqlite doesn't work | diff --git a/doc/source/whatsnew/v2.1.1.rst b/doc/source/whatsnew/v2.1.1.rst
index a6848dad6e3cd..11b19b1508a71 100644
--- a/doc/source/whatsnew/v2.1.1.rst
+++ b/doc/source/whatsnew/v2.1.1.rst
@@ -18,6 +18,7 @@ Fixed regressions
- Fixed regression in :func:`read_csv` when ``delim_whitespace`` is True (:issue:`54918`, :issue:`54931`)
- Fixed regression in :meth:`.GroupBy.get_group` raising for ``axis=1`` (:issue:`54858`)
- Fixed regression in :meth:`DataFrame.__setitem__` raising ``AssertionError`` when setting a :class:`Series` with a partial :class:`MultiIndex` (:issue:`54875`)
+- Fixed regression in :meth:`DataFrame.to_sql` not roundtripping datetime columns correctly for sqlite (:issue:`54877`)
- Fixed regression in :meth:`MultiIndex.append` raising when appending overlapping :class:`IntervalIndex` levels (:issue:`54934`)
- Fixed regression in :meth:`Series.drop_duplicates` for PyArrow strings (:issue:`54904`)
- Fixed regression in :meth:`Series.value_counts` raising for numeric data if ``bins`` was specified (:issue:`54857`)
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 7669d5aa4cea5..2b139f8ca527c 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -2091,13 +2091,11 @@ def _adapt_time(t) -> str:
adapt_date_iso = lambda val: val.isoformat()
adapt_datetime_iso = lambda val: val.isoformat()
- adapt_datetime_epoch = lambda val: int(val.timestamp())
sqlite3.register_adapter(time, _adapt_time)
sqlite3.register_adapter(date, adapt_date_iso)
sqlite3.register_adapter(datetime, adapt_datetime_iso)
- sqlite3.register_adapter(datetime, adapt_datetime_epoch)
convert_date = lambda val: date.fromisoformat(val.decode())
convert_datetime = lambda val: datetime.fromisoformat(val.decode())
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 9ec0ba0b12a76..bfa93a4ff910e 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -2962,6 +2962,13 @@ def test_read_sql_string_inference(self):
tm.assert_frame_equal(result, expected)
+ def test_roundtripping_datetimes(self):
+ # GH#54877
+ df = DataFrame({"t": [datetime(2020, 12, 31, 12)]}, dtype="datetime64[ns]")
+ df.to_sql("test", self.conn, if_exists="replace", index=False)
+ result = pd.read_sql("select * from test", self.conn).iloc[0, 0]
+ assert result == "2020-12-31 12:00:00.000000"
+
@pytest.mark.db
class TestMySQLAlchemy(_TestSQLAlchemy):
- [ ] closes #54877
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This wasn't one of the deprecated converters. | https://api.github.com/repos/pandas-dev/pandas/pulls/54985 | 2023-09-03T18:49:06Z | 2023-09-04T09:09:41Z | 2023-09-04T09:09:41Z | 2023-09-04T09:09:45Z
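The regression was that a second `datetime` adapter (epoch integers) overrode the ISO one, so datetimes came back from sqlite as integers. A sketch of the roundtrip the new test covers:

```python
import sqlite3
from datetime import datetime

import pandas as pd

# With the epoch-integer adapter no longer overriding the ISO one,
# datetimes written to sqlite come back as ISO-style strings again.
conn = sqlite3.connect(":memory:")
df = pd.DataFrame({"t": [datetime(2020, 12, 31, 12)]}, dtype="datetime64[ns]")
df.to_sql("test", conn, if_exists="replace", index=False)
result = pd.read_sql("SELECT * FROM test", conn).iloc[0, 0]
print(result)  # "2020-12-31 12:00:00.000000" with the fix
conn.close()
```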
ENH: Implement masked algorithm for value_counts | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 07be496a95adc..06f0f6d046026 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -29,6 +29,7 @@ enhancement2
Other enhancements
^^^^^^^^^^^^^^^^^^
- DataFrame.apply now allows the usage of numba (via ``engine="numba"``) to JIT compile the passed function, allowing for potential speedups (:issue:`54666`)
+- Implement masked algorithms for :meth:`Series.value_counts` (:issue:`54984`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/hashtable.pyi b/pandas/_libs/hashtable.pyi
index 2bc6d74fe6aee..0ac914e86f699 100644
--- a/pandas/_libs/hashtable.pyi
+++ b/pandas/_libs/hashtable.pyi
@@ -240,7 +240,7 @@ def value_count(
values: np.ndarray,
dropna: bool,
mask: npt.NDArray[np.bool_] | None = ...,
-) -> tuple[np.ndarray, npt.NDArray[np.int64]]: ... # np.ndarray[same-as-values]
+) -> tuple[np.ndarray, npt.NDArray[np.int64], int]: ... # np.ndarray[same-as-values]
# arr and values should have same dtype
def ismember(
diff --git a/pandas/_libs/hashtable_func_helper.pxi.in b/pandas/_libs/hashtable_func_helper.pxi.in
index b9cf6011481af..19acd4acbdee7 100644
--- a/pandas/_libs/hashtable_func_helper.pxi.in
+++ b/pandas/_libs/hashtable_func_helper.pxi.in
@@ -36,7 +36,7 @@ cdef value_count_{{dtype}}(ndarray[{{dtype}}] values, bint dropna, const uint8_t
cdef value_count_{{dtype}}(const {{dtype}}_t[:] values, bint dropna, const uint8_t[:] mask=None):
{{endif}}
cdef:
- Py_ssize_t i = 0
+ Py_ssize_t i = 0, na_counter = 0, na_add = 0
Py_ssize_t n = len(values)
kh_{{ttype}}_t *table
@@ -49,9 +49,6 @@ cdef value_count_{{dtype}}(const {{dtype}}_t[:] values, bint dropna, const uint8
bint uses_mask = mask is not None
bint isna_entry = False
- if uses_mask and not dropna:
- raise NotImplementedError("uses_mask not implemented with dropna=False")
-
# we track the order in which keys are first seen (GH39009),
# khash-map isn't insertion-ordered, thus:
# table maps keys to counts
@@ -82,25 +79,31 @@ cdef value_count_{{dtype}}(const {{dtype}}_t[:] values, bint dropna, const uint8
for i in range(n):
val = {{to_c_type}}(values[i])
+ if uses_mask:
+ isna_entry = mask[i]
+
if dropna:
- if uses_mask:
- isna_entry = mask[i]
- else:
+ if not uses_mask:
isna_entry = is_nan_{{c_type}}(val)
if not dropna or not isna_entry:
- k = kh_get_{{ttype}}(table, val)
- if k != table.n_buckets:
- table.vals[k] += 1
+ if uses_mask and isna_entry:
+ na_counter += 1
else:
- k = kh_put_{{ttype}}(table, val, &ret)
- table.vals[k] = 1
- result_keys.append(val)
+ k = kh_get_{{ttype}}(table, val)
+ if k != table.n_buckets:
+ table.vals[k] += 1
+ else:
+ k = kh_put_{{ttype}}(table, val, &ret)
+ table.vals[k] = 1
+ result_keys.append(val)
{{endif}}
# collect counts in the order corresponding to result_keys:
+ if na_counter > 0:
+ na_add = 1
cdef:
- int64_t[::1] result_counts = np.empty(table.size, dtype=np.int64)
+ int64_t[::1] result_counts = np.empty(table.size + na_add, dtype=np.int64)
for i in range(table.size):
{{if dtype == 'object'}}
@@ -110,9 +113,13 @@ cdef value_count_{{dtype}}(const {{dtype}}_t[:] values, bint dropna, const uint8
{{endif}}
result_counts[i] = table.vals[k]
+ if na_counter > 0:
+ result_counts[table.size] = na_counter
+ result_keys.append(val)
+
kh_destroy_{{ttype}}(table)
- return result_keys.to_array(), result_counts.base
+ return result_keys.to_array(), result_counts.base, na_counter
@cython.wraparound(False)
@@ -399,10 +406,10 @@ def mode(ndarray[htfunc_t] values, bint dropna, const uint8_t[:] mask=None):
ndarray[htfunc_t] modes
int64_t[::1] counts
- int64_t count, max_count = -1
+ int64_t count, _, max_count = -1
Py_ssize_t nkeys, k, j = 0
- keys, counts = value_count(values, dropna, mask=mask)
+ keys, counts, _ = value_count(values, dropna, mask=mask)
nkeys = len(keys)
modes = np.empty(nkeys, dtype=values.dtype)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 1d74bb8b83e4e..c952178f4c998 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -924,7 +924,7 @@ def value_counts_internal(
else:
values = _ensure_arraylike(values, func_name="value_counts")
- keys, counts = value_counts_arraylike(values, dropna)
+ keys, counts, _ = value_counts_arraylike(values, dropna)
if keys.dtype == np.float16:
keys = keys.astype(np.float32)
@@ -949,7 +949,7 @@ def value_counts_internal(
# Called once from SparseArray, otherwise could be private
def value_counts_arraylike(
values: np.ndarray, dropna: bool, mask: npt.NDArray[np.bool_] | None = None
-) -> tuple[ArrayLike, npt.NDArray[np.int64]]:
+) -> tuple[ArrayLike, npt.NDArray[np.int64], int]:
"""
Parameters
----------
@@ -965,7 +965,7 @@ def value_counts_arraylike(
original = values
values = _ensure_data(values)
- keys, counts = htable.value_count(values, dropna, mask=mask)
+ keys, counts, na_counter = htable.value_count(values, dropna, mask=mask)
if needs_i8_conversion(original.dtype):
# datetime, timedelta, or period
@@ -975,7 +975,7 @@ def value_counts_arraylike(
keys, counts = keys[mask], counts[mask]
res_keys = _reconstruct_data(keys, original.dtype, original)
- return res_keys, counts
+ return res_keys, counts, na_counter
def duplicated(
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 2cf28c28427ab..aeca225dfb6f7 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -1044,28 +1044,22 @@ def value_counts(self, dropna: bool = True) -> Series:
)
from pandas.arrays import IntegerArray
- keys, value_counts = algos.value_counts_arraylike(
- self._data, dropna=True, mask=self._mask
+ keys, value_counts, na_counter = algos.value_counts_arraylike(
+ self._data, dropna=dropna, mask=self._mask
)
+ mask_index = np.zeros((len(value_counts),), dtype=np.bool_)
+ mask = mask_index.copy()
- if dropna:
- res = Series(value_counts, index=keys, name="count", copy=False)
- res.index = res.index.astype(self.dtype)
- res = res.astype("Int64")
- return res
+ if na_counter > 0:
+ mask_index[-1] = True
- # if we want nans, count the mask
- counts = np.empty(len(value_counts) + 1, dtype="int64")
- counts[:-1] = value_counts
- counts[-1] = self._mask.sum()
-
- index = Index(keys, dtype=self.dtype).insert(len(keys), self.dtype.na_value)
- index = index.astype(self.dtype)
-
- mask = np.zeros(len(counts), dtype="bool")
- counts_array = IntegerArray(counts, mask)
-
- return Series(counts_array, index=index, name="count", copy=False)
+ arr = IntegerArray(value_counts, mask)
+ index = Index(
+ self.dtype.construct_array_type()(
+ keys, mask_index # type: ignore[arg-type]
+ )
+ )
+ return Series(arr, index=index, name="count", copy=False)
@doc(ExtensionArray.equals)
def equals(self, other) -> bool:
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 00cbe1286c195..4d5eef960293f 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -881,7 +881,7 @@ def value_counts(self, dropna: bool = True) -> Series:
Series,
)
- keys, counts = algos.value_counts_arraylike(self.sp_values, dropna=dropna)
+ keys, counts, _ = algos.value_counts_arraylike(self.sp_values, dropna=dropna)
fcounts = self.sp_index.ngaps
if fcounts > 0 and (not self._null_fill_value or not dropna):
mask = isna(keys) if self._null_fill_value else keys == self.fill_value
diff --git a/pandas/tests/libs/test_hashtable.py b/pandas/tests/libs/test_hashtable.py
index b78e6426ca17f..2c8f4c4149528 100644
--- a/pandas/tests/libs/test_hashtable.py
+++ b/pandas/tests/libs/test_hashtable.py
@@ -586,15 +586,26 @@ def test_value_count(self, dtype, writable):
expected = (np.arange(N) + N).astype(dtype)
values = np.repeat(expected, 5)
values.flags.writeable = writable
- keys, counts = ht.value_count(values, False)
+ keys, counts, _ = ht.value_count(values, False)
tm.assert_numpy_array_equal(np.sort(keys), expected)
assert np.all(counts == 5)
+ def test_value_count_mask(self, dtype):
+ if dtype == np.object_:
+ pytest.skip("mask not implemented for object dtype")
+ values = np.array([1] * 5, dtype=dtype)
+ mask = np.zeros((5,), dtype=np.bool_)
+ mask[1] = True
+ mask[4] = True
+ keys, counts, na_counter = ht.value_count(values, False, mask=mask)
+ assert len(keys) == 2
+ assert na_counter == 2
+
def test_value_count_stable(self, dtype, writable):
# GH12679
values = np.array([2, 1, 5, 22, 3, -1, 8]).astype(dtype)
values.flags.writeable = writable
- keys, counts = ht.value_count(values, False)
+ keys, counts, _ = ht.value_count(values, False)
tm.assert_numpy_array_equal(keys, values)
assert np.all(counts == 1)
@@ -685,9 +696,9 @@ def test_unique_label_indices():
class TestHelpFunctionsWithNans:
def test_value_count(self, dtype):
values = np.array([np.nan, np.nan, np.nan], dtype=dtype)
- keys, counts = ht.value_count(values, True)
+ keys, counts, _ = ht.value_count(values, True)
assert len(keys) == 0
- keys, counts = ht.value_count(values, False)
+ keys, counts, _ = ht.value_count(values, False)
assert len(keys) == 1 and np.all(np.isnan(keys))
assert counts[0] == 3
diff --git a/pandas/tests/series/methods/test_value_counts.py b/pandas/tests/series/methods/test_value_counts.py
index f54489ac8a8b4..bde9902fec6e9 100644
--- a/pandas/tests/series/methods/test_value_counts.py
+++ b/pandas/tests/series/methods/test_value_counts.py
@@ -250,3 +250,22 @@ def test_value_counts_complex_numbers(self, input_array, expected):
# GH 17927
result = Series(input_array).value_counts()
tm.assert_series_equal(result, expected)
+
+ def test_value_counts_masked(self):
+ # GH#54984
+ dtype = "Int64"
+ ser = Series([1, 2, None, 2, None, 3], dtype=dtype)
+ result = ser.value_counts(dropna=False)
+ expected = Series(
+ [2, 2, 1, 1],
+ index=Index([2, None, 1, 3], dtype=dtype),
+ dtype=dtype,
+ name="count",
+ )
+ tm.assert_series_equal(result, expected)
+
+ result = ser.value_counts(dropna=True)
+ expected = Series(
+ [2, 1, 1], index=Index([2, 1, 3], dtype=dtype), dtype=dtype, name="count"
+ )
+ tm.assert_series_equal(result, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
this adapts the same methods we are using for unique and friends. We get a 5-10% performance improvement as the Number of NAs grows | https://api.github.com/repos/pandas-dev/pandas/pulls/54984 | 2023-09-03T17:54:23Z | 2023-09-30T18:18:59Z | 2023-09-30T18:18:59Z | 2023-10-21T15:38:29Z |
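The PR above threads an NA mask through the hashtable `value_count` routines so that masked arrays (such as the nullable `Int64` dtype) can count NA values directly instead of round-tripping through object conversion. A minimal sketch of the user-facing behavior its new `test_value_counts_masked` test exercises (assuming a pandas version that includes this change):

```python
import pandas as pd

# Nullable integer Series containing pd.NA values
ser = pd.Series([1, 2, None, 2, None, 3], dtype="Int64")

# With dropna=False, the NA values get their own row in the result
counts_with_na = ser.value_counts(dropna=False)

# With dropna=True, the NA values are excluded entirely
counts_no_na = ser.value_counts(dropna=True)
```

Here `counts_with_na` includes a count of 2 for the `pd.NA` entries, while `counts_no_na` contains only the three non-null keys; the exact ordering of tied counts is an implementation detail and is not asserted on here.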
BUG: pct_change showing unnecessary FutureWarning | diff --git a/doc/source/whatsnew/v2.1.1.rst b/doc/source/whatsnew/v2.1.1.rst
index b9bdb36fe0ed3..fe511b5cdec67 100644
--- a/doc/source/whatsnew/v2.1.1.rst
+++ b/doc/source/whatsnew/v2.1.1.rst
@@ -34,6 +34,7 @@ Bug fixes
~~~~~~~~~
- Fixed bug for :class:`ArrowDtype` raising ``NotImplementedError`` for fixed-size list (:issue:`55000`)
- Fixed bug in :meth:`DataFrame.stack` with ``future_stack=True`` and columns a non-:class:`MultiIndex` consisting of tuples (:issue:`54948`)
+- Fixed bug in :meth:`Series.pct_change` and :meth:`DataFrame.pct_change` showing unnecessary ``FutureWarning`` (:issue:`54981`)
.. ---------------------------------------------------------------------------
.. _whatsnew_211.other:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 8c1406fc305e3..fb3549be96ec1 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -11793,15 +11793,21 @@ def pct_change(
stacklevel=find_stack_level(),
)
if fill_method is lib.no_default:
- if self.isna().values.any():
- warnings.warn(
- "The default fill_method='pad' in "
- f"{type(self).__name__}.pct_change is deprecated and will be "
- "removed in a future version. Call ffill before calling "
- "pct_change to retain current behavior and silence this warning.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
+ cols = self.items() if self.ndim == 2 else [(None, self)]
+ for _, col in cols:
+ mask = col.isna().values
+ mask = mask[np.argmax(~mask) :]
+ if mask.any():
+ warnings.warn(
+ "The default fill_method='pad' in "
+ f"{type(self).__name__}.pct_change is deprecated and will be "
+ "removed in a future version. Call ffill before calling "
+ "pct_change to retain current behavior and silence this "
+ "warning.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ break
fill_method = "pad"
if limit is lib.no_default:
limit = None
diff --git a/pandas/tests/frame/methods/test_pct_change.py b/pandas/tests/frame/methods/test_pct_change.py
index d0153da038a75..ede212ae18ae9 100644
--- a/pandas/tests/frame/methods/test_pct_change.py
+++ b/pandas/tests/frame/methods/test_pct_change.py
@@ -160,3 +160,21 @@ def test_pct_change_with_duplicated_indices(fill_method):
index=["a", "b"] * 3,
)
tm.assert_frame_equal(result, expected)
+
+
+def test_pct_change_none_beginning_no_warning():
+ # GH#54481
+ df = DataFrame(
+ [
+ [1, None],
+ [2, 1],
+ [3, 2],
+ [4, 3],
+ [5, 4],
+ ]
+ )
+ result = df.pct_change()
+ expected = DataFrame(
+ {0: [np.nan, 1, 0.5, 1 / 3, 0.25], 1: [np.nan, np.nan, 1, 0.5, 1 / 3]}
+ )
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/series/methods/test_pct_change.py b/pandas/tests/series/methods/test_pct_change.py
index 4dabf7b87e2cd..6740b8756853e 100644
--- a/pandas/tests/series/methods/test_pct_change.py
+++ b/pandas/tests/series/methods/test_pct_change.py
@@ -107,3 +107,11 @@ def test_pct_change_with_duplicated_indices(fill_method):
expected = Series([np.nan, np.nan, 1.0, 0.5, 2.0, 1.0], index=["a", "b"] * 3)
tm.assert_series_equal(result, expected)
+
+
+def test_pct_change_no_warning_na_beginning():
+ # GH#54981
+ ser = Series([None, None, 1, 2, 3])
+ result = ser.pct_change()
+ expected = Series([np.nan, np.nan, np.nan, 1, 0.5])
+ tm.assert_series_equal(result, expected)
| - [ ] closes #54981 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54983 | 2023-09-03T14:57:39Z | 2023-09-06T22:40:24Z | 2023-09-06T22:40:24Z | 2023-09-06T22:40:59Z |
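The bug fixed in the PR above was that `pct_change` emitted the `fill_method` deprecation `FutureWarning` even when the only missing values sat at the start of a column, where forward-filling cannot change anything. A short sketch of the case the new tests cover (result values shown are those produced after the fix; whether a warning is raised at all depends on the installed pandas version):

```python
import numpy as np
import pandas as pd

# Leading NaNs only: pad-filling cannot alter the result here,
# so no FutureWarning about the default fill_method is warranted
ser = pd.Series([None, None, 1, 2, 3])
result = ser.pct_change()
# Expected values: [nan, nan, nan, 1.0, 0.5]
```

The first three entries stay NaN (no prior non-null value to compare against), then `(2 - 1) / 1 = 1.0` and `(3 - 2) / 2 = 0.5`.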
REG: filter not respecting the order of labels | diff --git a/doc/source/whatsnew/v2.1.1.rst b/doc/source/whatsnew/v2.1.1.rst
index 258f05d4277bd..b9bdb36fe0ed3 100644
--- a/doc/source/whatsnew/v2.1.1.rst
+++ b/doc/source/whatsnew/v2.1.1.rst
@@ -19,6 +19,7 @@ Fixed regressions
- Fixed regression in :func:`read_csv` when ``delim_whitespace`` is True (:issue:`54918`, :issue:`54931`)
- Fixed regression in :meth:`.GroupBy.get_group` raising for ``axis=1`` (:issue:`54858`)
- Fixed regression in :meth:`DataFrame.__setitem__` raising ``AssertionError`` when setting a :class:`Series` with a partial :class:`MultiIndex` (:issue:`54875`)
+- Fixed regression in :meth:`DataFrame.filter` not respecting the order of elements for ``filter`` (:issue:`54980`)
- Fixed regression in :meth:`DataFrame.to_sql` not roundtripping datetime columns correctly for sqlite (:issue:`54877`)
- Fixed regression in :meth:`MultiIndex.append` raising when appending overlapping :class:`IntervalIndex` levels (:issue:`54934`)
- Fixed regression in :meth:`Series.drop_duplicates` for PyArrow strings (:issue:`54904`)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index e6bf55a1cbadf..8c1406fc305e3 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -5718,10 +5718,12 @@ def filter(
if items is not None:
name = self._get_axis_name(axis)
+ items = Index(items).intersection(labels)
+ if len(items) == 0:
+ # Keep the dtype of labels when we are empty
+ items = items.astype(labels.dtype)
# error: Keywords must be strings
- return self.reindex( # type: ignore[misc]
- **{name: labels.intersection(items)}
- )
+ return self.reindex(**{name: items}) # type: ignore[misc]
elif like:
def f(x) -> bool_t:
diff --git a/pandas/tests/frame/methods/test_filter.py b/pandas/tests/frame/methods/test_filter.py
index 1a2fbf8a65a55..9d5e6876bb08c 100644
--- a/pandas/tests/frame/methods/test_filter.py
+++ b/pandas/tests/frame/methods/test_filter.py
@@ -137,3 +137,17 @@ def test_filter_regex_non_string(self):
result = df.filter(regex="STRING")
expected = df[["STRING"]]
tm.assert_frame_equal(result, expected)
+
+ def test_filter_keep_order(self):
+ # GH#54980
+ df = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
+ result = df.filter(items=["B", "A"])
+ expected = df[["B", "A"]]
+ tm.assert_frame_equal(result, expected)
+
+ def test_filter_different_dtype(self):
+ # GH#54980
+ df = DataFrame({1: [1, 2, 3], 2: [4, 5, 6]})
+ result = df.filter(items=["B", "A"])
+ expected = df[[]]
+ tm.assert_frame_equal(result, expected)
| - [ ] closes #54980 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54982 | 2023-09-03T14:34:30Z | 2023-09-05T22:41:52Z | 2023-09-05T22:41:52Z | 2023-09-05T22:41:56Z |
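The regression fixed in the PR above caused `DataFrame.filter` to order the result by the frame's own labels rather than by the order of the `items` argument. A small sketch of the two behaviors its new tests pin down (assuming a pandas version with this fix, i.e. 2.1.1 or later, or any pre-regression release):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})

# The order of `items` drives the column order of the result,
# not the order of df's own columns
result = df.filter(items=["B", "A"])
# result.columns -> ["B", "A"]

# Labels absent from the frame are silently dropped, so a fully
# non-matching list yields an empty selection with all rows kept
empty = df.filter(items=["C", "D"])
```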