title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
Update multiple instances of `dtype_backend` parameter description to make them all consistent | diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 22c2aa374263d..344a6c2be7a2a 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1004,11 +1004,16 @@ def convert_dtypes(
infer_objects : bool, defaults False
Whether to also infer objects to float/int if possible. Is only hit if the
object array contains pd.NA.
- dtype_backend : str, default "numpy_nullable"
- Nullable dtype implementation to use.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
- * "numpy_nullable" returns numpy-backed nullable types
- * "pyarrow" returns pyarrow-backed nullable types using ``ArrowDtype``
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
+
+ .. versionadded:: 2.0
Returns
-------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 42cd74a0ca781..9084395871675 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6737,13 +6737,14 @@ def convert_dtypes(
dtypes if the floats can be faithfully casted to integers.
.. versionadded:: 1.2.0
- dtype_backend : {"numpy_nullable", "pyarrow"}, default "numpy_nullable"
- Which dtype_backend to use, e.g. whether a DataFrame should use nullable
- dtypes for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
-
- The dtype_backends are still experimential.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index e387a7cee8c56..a50dbeb110bff 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -88,13 +88,14 @@ def to_numeric(
the dtype it is to be cast to, so if none of the dtypes
checked satisfy that specification, no downcasting will be
performed on the data.
- dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
-
- The dtype_backends are still experimential.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/io/clipboards.py b/pandas/io/clipboards.py
index 790cdc327d1ce..a15e37328e9fa 100644
--- a/pandas/io/clipboards.py
+++ b/pandas/io/clipboards.py
@@ -37,13 +37,14 @@ def read_clipboard(
A string or regex delimiter. The default of ``'\\s+'`` denotes
one or more whitespace characters.
- dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g., whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when ``'numpy_nullable'`` is set, pyarrow is used for all
- dtypes if ``'pyarrow'`` is set.
-
- The dtype_backends are still experimental.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index d3860ce4f77ca..c310b2614fa5f 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -286,13 +286,14 @@
.. versionadded:: 1.2.0
-dtype_backend : {{"numpy_nullable", "pyarrow"}}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
-
- The dtype_backends are still experimential.
+dtype_backend : {{'numpy_nullable', 'pyarrow'}}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
index 8d297b4aa4edc..045eb7201631c 100644
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -89,13 +89,14 @@ def read_feather(
.. versionadded:: 1.2.0
- dtype_backend : {{"numpy_nullable", "pyarrow"}}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
-
- The dtype_backends are still experimential.
+ dtype_backend : {{'numpy_nullable', 'pyarrow'}}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/io/html.py b/pandas/io/html.py
index 6de0eb4d995e9..67c080ac96148 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -1121,13 +1121,14 @@ def read_html(
.. versionadded:: 1.5.0
- dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
-
- The dtype_backends are still experimential.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 472c3b4f9aff9..ec0469a393873 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -662,13 +662,14 @@ def read_json(
.. versionadded:: 1.2.0
- dtype_backend : {{"numpy_nullable", "pyarrow"}}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
-
- The dtype_backends are still experimential.
+ dtype_backend : {{'numpy_nullable', 'pyarrow'}}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/io/orc.py b/pandas/io/orc.py
index 410a11b8ca01c..edba03d2d46a8 100644
--- a/pandas/io/orc.py
+++ b/pandas/io/orc.py
@@ -63,13 +63,14 @@ def read_orc(
Output always follows the ordering of the file and not the columns list.
This mirrors the original behaviour of
:external+pyarrow:py:meth:`pyarrow.orc.ORCFile.read`.
- dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
-
- The dtype_backends are still experimential.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index dd8d2ceaa7c3d..9bb000c363684 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -533,13 +533,14 @@ def read_parquet(
.. deprecated:: 2.0
- dtype_backend : {{"numpy_nullable", "pyarrow"}}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
-
- The dtype_backends are still experimential.
+ dtype_backend : {{'numpy_nullable', 'pyarrow'}}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index 90e2675da5703..d1fc29b9d317b 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -416,14 +416,14 @@
.. versionadded:: 1.2
-dtype_backend : {{'numpy_nullable', 'pyarrow'}}, defaults to NumPy backed DataFrame
- Back-end data type to use for the :class:`~pandas.DataFrame`. For
- ``'numpy_nullable'``, have NumPy arrays, nullable ``dtypes`` are used for all
- ``dtypes`` that have a
- nullable implementation when ``'numpy_nullable'`` is set, pyarrow is used for all
- dtypes if ``'pyarrow'`` is set.
+dtype_backend : {{'numpy_nullable', 'pyarrow'}}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
- The ``dtype_backends`` are still experimental.
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
@@ -1319,13 +1319,14 @@ def read_fwf(
infer_nrows : int, default 100
The number of rows to consider when letting the parser determine the
`colspecs`.
- dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
-
- The dtype_backends are still experimential.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/io/spss.py b/pandas/io/spss.py
index 980a3e0aabe60..a50bff6cf49b5 100644
--- a/pandas/io/spss.py
+++ b/pandas/io/spss.py
@@ -36,13 +36,14 @@ def read_spss(
Return a subset of the columns. If None, return all columns.
convert_categoricals : bool, default is True
Convert categorical columns into pd.Categorical.
- dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
- The dtype_backends are still experimential.
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 7946780b24da9..309d33d5ae75b 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -305,13 +305,14 @@ def read_sql_table(
chunksize : int, default None
If specified, returns an iterator where `chunksize` is the number of
rows to include in each chunk.
- dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
- The dtype_backends are still experimential.
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
@@ -443,13 +444,14 @@ def read_sql_query(
{‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’}.
.. versionadded:: 1.3.0
- dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
- The dtype_backends are still experimential.
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
@@ -576,13 +578,14 @@ def read_sql(
chunksize : int, default None
If specified, return an iterator where `chunksize` is the
number of rows to include in each chunk.
- dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
- The dtype_backends are still experimential.
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
dtype : Type name or dict of columns
@@ -1631,13 +1634,14 @@ def read_table(
chunksize : int, default None
If specified, return an iterator where `chunksize` is the number
of rows to include in each chunk.
- dtype_backend : {{"numpy_nullable", "pyarrow"}}, defaults to NumPy dtypes
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
-
- The dtype_backends are still experimential.
+ dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
diff --git a/pandas/io/xml.py b/pandas/io/xml.py
index 6421f710f80d6..ca3ba526d0c3c 100644
--- a/pandas/io/xml.py
+++ b/pandas/io/xml.py
@@ -1014,13 +1014,14 @@ def read_xml(
{storage_options}
- dtype_backend : {{"numpy_nullable", "pyarrow"}}, defaults to NumPy backed DataFrames
- Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
- arrays, nullable dtypes are used for all dtypes that have a nullable
- implementation when "numpy_nullable" is set, pyarrow is used for all
- dtypes if "pyarrow" is set.
-
- The dtype_backends are still experimential.
+ dtype_backend : {{'numpy_nullable', 'pyarrow'}}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
.. versionadded:: 2.0
| - [x] closes #53878
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Note that this supersedes PR #53881. | https://api.github.com/repos/pandas-dev/pandas/pulls/54104 | 2023-07-13T03:26:38Z | 2023-07-13T08:54:08Z | 2023-07-13T08:54:08Z | 2023-07-13T08:54:16Z |
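A minimal sketch of the behaviour the rewritten docstrings describe (assumes pandas 2.x; the resulting dtype names are the standard nullable ones):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, None], "b": ["x", "y", None]})

# dtype_backend="numpy_nullable" (the default) returns a
# nullable-dtype-backed DataFrame, as the unified docstring states
converted = df.convert_dtypes(dtype_backend="numpy_nullable")
print(converted.dtypes)  # a -> Int64, b -> string
```

With `dtype_backend="pyarrow"` the same call would instead produce `ArrowDtype` columns, provided pyarrow is installed.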
TST: Add test for concat multiindex with category | diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py
index 8208abc23551d..d0fed21b15897 100644
--- a/pandas/tests/reshape/concat/test_concat.py
+++ b/pandas/tests/reshape/concat/test_concat.py
@@ -832,3 +832,32 @@ def test_concat_mismatched_keys_length():
concat((x for x in sers), keys=(y for y in keys), axis=1)
with tm.assert_produces_warning(FutureWarning, match=msg):
concat((x for x in sers), keys=(y for y in keys), axis=0)
+
+
+def test_concat_multiindex_with_category():
+ df1 = DataFrame(
+ {
+ "c1": Series(list("abc"), dtype="category"),
+ "c2": Series(list("eee"), dtype="category"),
+ "i2": Series([1, 2, 3]),
+ }
+ )
+ df1 = df1.set_index(["c1", "c2"])
+ df2 = DataFrame(
+ {
+ "c1": Series(list("abc"), dtype="category"),
+ "c2": Series(list("eee"), dtype="category"),
+ "i2": Series([4, 5, 6]),
+ }
+ )
+ df2 = df2.set_index(["c1", "c2"])
+ result = concat([df1, df2])
+ expected = DataFrame(
+ {
+ "c1": Series(list("abcabc"), dtype="category"),
+ "c2": Series(list("eeeeee"), dtype="category"),
+ "i2": Series([1, 2, 3, 4, 5, 6]),
+ }
+ )
+ expected = expected.set_index(["c1", "c2"])
+ tm.assert_frame_equal(result, expected)
| https://github.com/pandas-dev/pandas/pull/53697#issuecomment-1633340930
cc @phofl
| https://api.github.com/repos/pandas-dev/pandas/pulls/54103 | 2023-07-13T00:55:23Z | 2023-07-13T16:20:05Z | 2023-07-13T16:20:05Z | 2023-07-25T00:15:53Z |
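The scenario the new test covers can be reproduced directly (assumes a pandas version where the underlying concat bug is fixed; the column names mirror the test):

```python
import pandas as pd

def make(vals):
    # build a frame indexed by two categorical levels
    df = pd.DataFrame(
        {
            "c1": pd.Series(list("abc"), dtype="category"),
            "c2": pd.Series(list("eee"), dtype="category"),
            "i2": pd.Series(vals),
        }
    )
    return df.set_index(["c1", "c2"])

result = pd.concat([make([1, 2, 3]), make([4, 5, 6])])
print(result["i2"].tolist())  # [1, 2, 3, 4, 5, 6]
# the categorical dtype of each MultiIndex level survives the concat
print(result.index.levels[0].dtype)  # category
```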
BUG: Timestamp(pd.NA) raising TypeError | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index b7cc254d5c7e5..1467455d86d94 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -435,6 +435,7 @@ Datetimelike
- Bug in :meth:`arrays.DatetimeArray.map` and :meth:`DatetimeIndex.map`, where the supplied callable operated array-wise instead of element-wise (:issue:`51977`)
- Bug in constructing a :class:`Series` or :class:`DataFrame` from a datetime or timedelta scalar always inferring nanosecond resolution instead of inferring from the input (:issue:`52212`)
- Bug in constructing a :class:`Timestamp` from a string representing a time without a date inferring an incorrect unit (:issue:`54097`)
+- Bug in constructing a :class:`Timestamp` with ``ts_input=pd.NA`` raising ``TypeError`` (:issue:`45481`)
- Bug in parsing datetime strings with weekday but no day e.g. "2023 Sept Thu" incorrectly raising ``AttributeError`` instead of ``ValueError`` (:issue:`52659`)
Timedelta
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index 57b7754b08289..45c4d7809fe7a 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -25,6 +25,7 @@ from cpython.datetime cimport (
import_datetime()
+from pandas._libs.missing cimport checknull_with_nat_and_na
from pandas._libs.tslibs.base cimport ABCTimestamp
from pandas._libs.tslibs.dtypes cimport (
abbrev_to_npy_unit,
@@ -57,7 +58,6 @@ from pandas._libs.tslibs.np_datetime import OutOfBoundsDatetime
from pandas._libs.tslibs.nattype cimport (
NPY_NAT,
- c_NaT as NaT,
c_nat_strings as nat_strings,
)
from pandas._libs.tslibs.parsing cimport parse_datetime_string
@@ -268,7 +268,7 @@ cdef _TSObject convert_to_tsobject(object ts, tzinfo tz, str unit,
if isinstance(ts, str):
return convert_str_to_tsobject(ts, tz, unit, dayfirst, yearfirst)
- if ts is None or ts is NaT:
+ if checknull_with_nat_and_na(ts):
obj.value = NPY_NAT
elif is_datetime64_object(ts):
reso = get_supported_reso(get_datetime64_unit(ts))
diff --git a/pandas/tests/scalar/timestamp/test_constructors.py b/pandas/tests/scalar/timestamp/test_constructors.py
index 198b6feea5f4e..b65b34f748260 100644
--- a/pandas/tests/scalar/timestamp/test_constructors.py
+++ b/pandas/tests/scalar/timestamp/test_constructors.py
@@ -18,6 +18,8 @@
from pandas.errors import OutOfBoundsDatetime
from pandas import (
+ NA,
+ NaT,
Period,
Timedelta,
Timestamp,
@@ -898,3 +900,11 @@ def test_timestamp_constructor_adjust_value_for_fold(tz, ts_input, fold, value_o
result = ts._value
expected = value_out
assert result == expected
+
+
+@pytest.mark.parametrize("na_value", [None, np.nan, np.datetime64("NaT"), NaT, NA])
+def test_timestamp_constructor_na_value(na_value):
+ # GH45481
+ result = Timestamp(na_value)
+ expected = NaT
+ assert result is expected
| - [x] closes #45481
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.1.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54102 | 2023-07-13T00:30:33Z | 2023-07-17T16:23:44Z | 2023-07-17T16:23:44Z | 2023-07-25T00:15:43Z |
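The fix widens the set of NA-like inputs that `Timestamp` maps to `NaT`. A sketch of the long-standing cases (the PR additionally makes `pd.Timestamp(pd.NA)` return `NaT` instead of raising `TypeError`):

```python
import numpy as np
import pandas as pd

# NA-like inputs that construct NaT; after the fix, pd.NA joins this set
for na in (None, float("nan"), np.datetime64("NaT"), pd.NaT):
    assert pd.Timestamp(na) is pd.NaT
print("all NA-like inputs returned NaT")
```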
REF: rename PandasArray->NumpyExtensionArray | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 1c6e23746352e..8f410d45a9dc9 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -106,7 +106,7 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
pandas.api.indexers.VariableOffsetWindowIndexer \
pandas.api.extensions.ExtensionDtype \
pandas.api.extensions.ExtensionArray \
- pandas.arrays.PandasArray \
+ pandas.arrays.NumpyExtensionArray \
pandas.api.extensions.ExtensionArray._accumulate \
pandas.api.extensions.ExtensionArray._concat_same_type \
pandas.api.extensions.ExtensionArray._formatter \
diff --git a/doc/source/development/contributing_codebase.rst b/doc/source/development/contributing_codebase.rst
index f3ff5b70d4aac..df5b69c471b09 100644
--- a/doc/source/development/contributing_codebase.rst
+++ b/doc/source/development/contributing_codebase.rst
@@ -475,7 +475,7 @@ be located.
8) Is your test for one of the pandas-provided ExtensionArrays (``Categorical``,
``DatetimeArray``, ``TimedeltaArray``, ``PeriodArray``, ``IntervalArray``,
- ``PandasArray``, ``FloatArray``, ``BoolArray``, ``StringArray``)?
+ ``NumpyExtensionArray``, ``FloatArray``, ``BoolArray``, ``StringArray``)?
This test likely belongs in one of:
- tests.arrays
diff --git a/doc/source/reference/extensions.rst b/doc/source/reference/extensions.rst
index 63eacc3f6d1d9..bff5b2b70b518 100644
--- a/doc/source/reference/extensions.rst
+++ b/doc/source/reference/extensions.rst
@@ -24,7 +24,7 @@ objects.
:template: autosummary/class_without_autosummary.rst
api.extensions.ExtensionArray
- arrays.PandasArray
+ arrays.NumpyExtensionArray
.. We need this autosummary so that methods and attributes are generated.
.. Separate block, since they aren't classes.
diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 137f2e5c12211..b817ef353f5b9 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -286,6 +286,7 @@ See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for mor
Other API changes
^^^^^^^^^^^^^^^^^
+- :class:`arrays.PandasArray` has been renamed ``NumpyExtensionArray`` and the attached dtype name changed from ``PandasDtype`` to ``NumpyEADtype``; importing ``PandasArray`` still works until the next major version (:issue:`53694`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index d1a729343e062..886c0f389ebeb 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -102,7 +102,7 @@
from pandas.core.arrays import (
BaseMaskedArray,
ExtensionArray,
- PandasArray,
+ NumpyExtensionArray,
)
from pandas.core.arrays._mixins import NDArrayBackedExtensionArray
from pandas.core.construction import extract_array
@@ -307,7 +307,7 @@ def box_expected(expected, box_cls, transpose: bool = True):
if box_cls is pd.array:
if isinstance(expected, RangeIndex):
# pd.array would return an IntegerArray
- expected = PandasArray(np.asarray(expected._values))
+ expected = NumpyExtensionArray(np.asarray(expected._values))
else:
expected = pd.array(expected, copy=False)
elif box_cls is Index:
diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py
index d296cc998134b..0591394f5d9ed 100644
--- a/pandas/_testing/asserters.py
+++ b/pandas/_testing/asserters.py
@@ -25,7 +25,7 @@
CategoricalDtype,
DatetimeTZDtype,
ExtensionDtype,
- PandasDtype,
+ NumpyEADtype,
)
from pandas.core.dtypes.missing import array_equivalent
@@ -577,12 +577,12 @@ def raise_assert_detail(
if isinstance(left, np.ndarray):
left = pprint_thing(left)
- elif isinstance(left, (CategoricalDtype, PandasDtype, StringDtype)):
+ elif isinstance(left, (CategoricalDtype, NumpyEADtype, StringDtype)):
left = repr(left)
if isinstance(right, np.ndarray):
right = pprint_thing(right)
- elif isinstance(right, (CategoricalDtype, PandasDtype, StringDtype)):
+ elif isinstance(right, (CategoricalDtype, NumpyEADtype, StringDtype)):
right = repr(right)
msg += f"""
diff --git a/pandas/arrays/__init__.py b/pandas/arrays/__init__.py
index 3a8e80a6b5d2b..32e2afc0eef52 100644
--- a/pandas/arrays/__init__.py
+++ b/pandas/arrays/__init__.py
@@ -12,7 +12,7 @@
FloatingArray,
IntegerArray,
IntervalArray,
- PandasArray,
+ NumpyExtensionArray,
PeriodArray,
SparseArray,
StringArray,
@@ -28,9 +28,26 @@
"FloatingArray",
"IntegerArray",
"IntervalArray",
- "PandasArray",
+ "NumpyExtensionArray",
"PeriodArray",
"SparseArray",
"StringArray",
"TimedeltaArray",
]
+
+
+def __getattr__(name: str):
+ if name == "PandasArray":
+ # GH#53694
+ import warnings
+
+ from pandas.util._exceptions import find_stack_level
+
+ warnings.warn(
+ "PandasArray has been renamed NumpyExtensionArray. Use that "
+ "instead. This alias will be removed in a future version.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ return NumpyExtensionArray
+ raise AttributeError(f"module 'pandas.arrays' has no attribute '{name}'")
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index e0340f99b92e1..14dee202a9d8d 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -58,7 +58,7 @@
BaseMaskedDtype,
CategoricalDtype,
ExtensionDtype,
- PandasDtype,
+ NumpyEADtype,
)
from pandas.core.dtypes.generic import (
ABCDatetimeArray,
@@ -1439,8 +1439,8 @@ def diff(arr, n: int, axis: AxisInt = 0):
else:
op = operator.sub
- if isinstance(dtype, PandasDtype):
- # PandasArray cannot necessarily hold shifted versions of itself.
+ if isinstance(dtype, NumpyEADtype):
+ # NumpyExtensionArray cannot necessarily hold shifted versions of itself.
arr = arr.to_numpy()
dtype = arr.dtype
diff --git a/pandas/core/arrays/__init__.py b/pandas/core/arrays/__init__.py
index 79be8760db931..245a171fea74b 100644
--- a/pandas/core/arrays/__init__.py
+++ b/pandas/core/arrays/__init__.py
@@ -11,7 +11,7 @@
from pandas.core.arrays.integer import IntegerArray
from pandas.core.arrays.interval import IntervalArray
from pandas.core.arrays.masked import BaseMaskedArray
-from pandas.core.arrays.numpy_ import PandasArray
+from pandas.core.arrays.numpy_ import NumpyExtensionArray
from pandas.core.arrays.period import (
PeriodArray,
period_array,
@@ -34,7 +34,7 @@
"FloatingArray",
"IntegerArray",
"IntervalArray",
- "PandasArray",
+ "NumpyExtensionArray",
"PeriodArray",
"period_array",
"SparseArray",
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index 190b74a675711..d399b4780a938 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -317,7 +317,8 @@ def fillna(self, value=None, method=None, limit: int | None = None) -> Self:
func(npvalues, limit=limit, mask=mask.T)
npvalues = npvalues.T
- # TODO: PandasArray didn't used to copy, need tests for this
+ # TODO: NumpyExtensionArray didn't used to copy, need tests
+ # for this
new_values = self._from_backing_data(npvalues)
else:
# fill with value
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 33cd5fe147d2e..7ce399ab64a29 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2642,18 +2642,18 @@ def _str_map(
# Optimization to apply the callable `f` to the categories once
# and rebuild the result by `take`ing from the result with the codes.
# Returns the same type as the object-dtype implementation though.
- from pandas.core.arrays import PandasArray
+ from pandas.core.arrays import NumpyExtensionArray
categories = self.categories
codes = self.codes
- result = PandasArray(categories.to_numpy())._str_map(f, na_value, dtype)
+ result = NumpyExtensionArray(categories.to_numpy())._str_map(f, na_value, dtype)
return take_nd(result, codes, fill_value=na_value)
def _str_get_dummies(self, sep: str = "|"):
# sep may not be in categories. Just bail on this.
- from pandas.core.arrays import PandasArray
+ from pandas.core.arrays import NumpyExtensionArray
- return PandasArray(self.astype(str))._str_get_dummies(sep)
+ return NumpyExtensionArray(self.astype(str))._str_get_dummies(sep)
# ------------------------------------------------------------------------
# GroupBy Methods
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 3274b822f3bd7..e11878dace88e 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -652,7 +652,8 @@ def _validate_listlike(self, value, allow_object: bool = False):
msg = self._validation_error_message(value, True)
raise TypeError(msg)
- # Do type inference if necessary up front (after unpacking PandasArray)
+ # Do type inference if necessary up front (after unpacking
+ # NumpyExtensionArray)
# e.g. we passed PeriodIndex.values and got an ndarray of Periods
value = extract_array(value, extract_numpy=True)
value = pd_array(value)
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 5f02053a454ed..6d01dfcf6d90b 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -17,7 +17,7 @@
from pandas.core.dtypes.astype import astype_array
from pandas.core.dtypes.cast import construct_1d_object_array_from_listlike
from pandas.core.dtypes.common import pandas_dtype
-from pandas.core.dtypes.dtypes import PandasDtype
+from pandas.core.dtypes.dtypes import NumpyEADtype
from pandas.core.dtypes.missing import isna
from pandas.core import (
@@ -48,7 +48,7 @@
# error: Definition of "_concat_same_type" in base class "NDArrayBacked" is
# incompatible with definition in base class "ExtensionArray"
-class PandasArray( # type: ignore[misc]
+class NumpyExtensionArray( # type: ignore[misc]
OpsMixin,
NDArrayBackedExtensionArray,
ObjectStringArrayMixin,
@@ -76,19 +76,21 @@ class PandasArray( # type: ignore[misc]
"""
# If you're wondering why pd.Series(cls) doesn't put the array in an
- # ExtensionBlock, search for `ABCPandasArray`. We check for
+ # ExtensionBlock, search for `ABCNumpyExtensionArray`. We check for
# that _typ to ensure that users don't unnecessarily use EAs inside
# pandas internals, which turns off things like block consolidation.
_typ = "npy_extension"
__array_priority__ = 1000
_ndarray: np.ndarray
- _dtype: PandasDtype
+ _dtype: NumpyEADtype
_internal_fill_value = np.nan
# ------------------------------------------------------------------------
# Constructors
- def __init__(self, values: np.ndarray | PandasArray, copy: bool = False) -> None:
+ def __init__(
+ self, values: np.ndarray | NumpyExtensionArray, copy: bool = False
+ ) -> None:
if isinstance(values, type(self)):
values = values._ndarray
if not isinstance(values, np.ndarray):
@@ -98,19 +100,19 @@ def __init__(self, values: np.ndarray | PandasArray, copy: bool = False) -> None
if values.ndim == 0:
# Technically we support 2, but do not advertise that fact.
- raise ValueError("PandasArray must be 1-dimensional.")
+ raise ValueError("NumpyExtensionArray must be 1-dimensional.")
if copy:
values = values.copy()
- dtype = PandasDtype(values.dtype)
+ dtype = NumpyEADtype(values.dtype)
super().__init__(values, dtype)
@classmethod
def _from_sequence(
cls, scalars, *, dtype: Dtype | None = None, copy: bool = False
- ) -> PandasArray:
- if isinstance(dtype, PandasDtype):
+ ) -> NumpyExtensionArray:
+ if isinstance(dtype, NumpyEADtype):
dtype = dtype._dtype
# error: Argument "dtype" to "asarray" has incompatible type
@@ -131,14 +133,14 @@ def _from_sequence(
result = result.copy()
return cls(result)
- def _from_backing_data(self, arr: np.ndarray) -> PandasArray:
+ def _from_backing_data(self, arr: np.ndarray) -> NumpyExtensionArray:
return type(self)(arr)
# ------------------------------------------------------------------------
# Data
@property
- def dtype(self) -> PandasDtype:
+ def dtype(self) -> NumpyEADtype:
return self._dtype
# ------------------------------------------------------------------------
@@ -151,7 +153,7 @@ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
# Lightly modified version of
# https://numpy.org/doc/stable/reference/generated/numpy.lib.mixins.NDArrayOperatorsMixin.html
# The primary modification is not boxing scalar return values
- # in PandasArray, since pandas' ExtensionArrays are 1-d.
+ # in NumpyExtensionArray, since pandas' ExtensionArrays are 1-d.
out = kwargs.get("out", ())
result = arraylike.maybe_dispatch_ufunc_to_dunder_op(
@@ -175,10 +177,12 @@ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
return result
# Defer to the implementation of the ufunc on unwrapped values.
- inputs = tuple(x._ndarray if isinstance(x, PandasArray) else x for x in inputs)
+ inputs = tuple(
+ x._ndarray if isinstance(x, NumpyExtensionArray) else x for x in inputs
+ )
if out:
kwargs["out"] = tuple(
- x._ndarray if isinstance(x, PandasArray) else x for x in out
+ x._ndarray if isinstance(x, NumpyExtensionArray) else x for x in out
)
result = getattr(ufunc, method)(*inputs, **kwargs)
@@ -499,20 +503,20 @@ def to_numpy(
# ------------------------------------------------------------------------
# Ops
- def __invert__(self) -> PandasArray:
+ def __invert__(self) -> NumpyExtensionArray:
return type(self)(~self._ndarray)
- def __neg__(self) -> PandasArray:
+ def __neg__(self) -> NumpyExtensionArray:
return type(self)(-self._ndarray)
- def __pos__(self) -> PandasArray:
+ def __pos__(self) -> NumpyExtensionArray:
return type(self)(+self._ndarray)
- def __abs__(self) -> PandasArray:
+ def __abs__(self) -> NumpyExtensionArray:
return type(self)(abs(self._ndarray))
def _cmp_method(self, other, op):
- if isinstance(other, PandasArray):
+ if isinstance(other, NumpyExtensionArray):
other = other._ndarray
other = ops.maybe_prepare_scalar_for_op(other, (len(self),))
@@ -538,7 +542,7 @@ def _cmp_method(self, other, op):
def _wrap_ndarray_result(self, result: np.ndarray):
# If we have timedelta64[ns] result, return a TimedeltaArray instead
- # of a PandasArray
+ # of a NumpyExtensionArray
if result.dtype.kind == "m" and is_supported_unit(
get_unit_from_dtype(result.dtype)
):
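To sanity-check the rename in this file, here is a minimal sketch of the wrapper's constructor and unary-op behavior, using the public `pandas.arrays` namespace and falling back to the old name on pre-rename installs (the `NEA` alias is just for this sketch):

```python
import numpy as np
import pandas as pd

# Pick whichever name the installed pandas exposes; this diff makes
# NumpyExtensionArray the canonical one.
NEA = getattr(pd.arrays, "NumpyExtensionArray", None) or pd.arrays.PandasArray

arr = NEA(np.array([1, 2, 3]))  # must wrap an ndarray, not a plain list
neg = -arr                      # __neg__ defers to the backing ndarray
print(list(neg))
```

Passing a Python list instead of an ndarray raises ``ValueError``, as exercised by ``test_constructor_no_coercion`` further down in this diff.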
diff --git a/pandas/core/arrays/string_.py b/pandas/core/arrays/string_.py
index 2640cbd7f6ba1..7c834d2e26b3a 100644
--- a/pandas/core/arrays/string_.py
+++ b/pandas/core/arrays/string_.py
@@ -43,7 +43,7 @@
IntegerArray,
IntegerDtype,
)
-from pandas.core.arrays.numpy_ import PandasArray
+from pandas.core.arrays.numpy_ import NumpyExtensionArray
from pandas.core.construction import extract_array
from pandas.core.indexers import check_array_indexer
from pandas.core.missing import isna
@@ -231,7 +231,7 @@ def tolist(self):
# error: Definition of "_concat_same_type" in base class "NDArrayBacked" is
# incompatible with definition in base class "ExtensionArray"
-class StringArray(BaseStringArray, PandasArray): # type: ignore[misc]
+class StringArray(BaseStringArray, NumpyExtensionArray): # type: ignore[misc]
"""
Extension array for string data.
@@ -294,7 +294,7 @@ class StringArray(BaseStringArray, PandasArray): # type: ignore[misc]
will convert the values to strings.
>>> pd.array(['1', 1], dtype="object")
- <PandasArray>
+ <NumpyExtensionArray>
['1', 1]
Length: 2, dtype: object
>>> pd.array(['1', 1], dtype="string")
@@ -312,7 +312,7 @@ class StringArray(BaseStringArray, PandasArray): # type: ignore[misc]
Length: 3, dtype: boolean
"""
- # undo the PandasArray hack
+ # undo the NumpyExtensionArray hack
_typ = "extension"
def __init__(self, values, copy: bool = False) -> None:
@@ -404,7 +404,7 @@ def _values_for_factorize(self):
def __setitem__(self, key, value):
value = extract_array(value, extract_numpy=True)
if isinstance(value, type(self)):
- # extract_array doesn't extract PandasArray subclasses
+ # extract_array doesn't extract NumpyExtensionArray subclasses
value = value._ndarray
key = check_array_indexer(self, key)
@@ -461,7 +461,7 @@ def astype(self, dtype, copy: bool = True):
values = arr.astype(dtype.numpy_dtype)
return FloatingArray(values, mask, copy=False)
elif isinstance(dtype, ExtensionDtype):
- # Skip the PandasArray.astype method
+ # Skip the NumpyExtensionArray.astype method
return ExtensionArray.astype(self, dtype, copy)
elif np.issubdtype(dtype, np.floating):
arr = self._ndarray.copy()
@@ -557,7 +557,7 @@ def _cmp_method(self, other, op):
# ------------------------------------------------------------------------
# String methods interface
# error: Incompatible types in assignment (expression has type "NAType",
- # base class "PandasArray" defined the type as "float")
+ # base class "NumpyExtensionArray" defined the type as "float")
_str_na_value = libmissing.NA # type: ignore[assignment]
def _str_map(
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 057b381bbdb58..e707a151fdb7f 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -518,11 +518,11 @@ def array(self) -> ExtensionArray:
Examples
--------
- For regular NumPy types like int, and float, a PandasArray
+ For regular NumPy types like int and float, a NumpyExtensionArray
is returned.
>>> pd.Series([1, 2, 3]).array
- <PandasArray>
+ <NumpyExtensionArray>
[1, 2, 3]
Length: 3, dtype: int64
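The docstring change above can be exercised directly; a small sketch (the printed class name depends on the installed pandas version):

```python
import pandas as pd

s = pd.Series([1, 2, 3])
backing = s.array  # NumPy-backed wrapper for plain int64 data
print(type(backing).__name__, list(backing))
```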
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index 9112c7e52a348..4ce6c35244e5b 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -49,7 +49,7 @@
is_object_dtype,
pandas_dtype,
)
-from pandas.core.dtypes.dtypes import PandasDtype
+from pandas.core.dtypes.dtypes import NumpyEADtype
from pandas.core.dtypes.generic import (
ABCDataFrame,
ABCExtensionArray,
@@ -150,7 +150,7 @@ def array(
numpy.array : Construct a NumPy array.
Series : Construct a pandas Series.
Index : Construct a pandas Index.
- arrays.PandasArray : ExtensionArray wrapping a NumPy array.
+ arrays.NumpyExtensionArray : ExtensionArray wrapping a NumPy array.
Series.array : Extract the array stored within a Series.
Notes
@@ -169,11 +169,11 @@ def array(
rather than a string alias or allowing it to be inferred. For example,
a future version of pandas or a 3rd-party library may include a
dedicated ExtensionArray for string data. In this event, the following
- would no longer return a :class:`arrays.PandasArray` backed by a NumPy
- array.
+ would no longer return a :class:`arrays.NumpyExtensionArray` backed by a
+ NumPy array.
>>> pd.array(['a', 'b'], dtype=str)
- <PandasArray>
+ <NumpyExtensionArray>
['a', 'b']
Length: 2, dtype: str32
@@ -182,7 +182,7 @@ def array(
specify that in the dtype.
>>> pd.array(['a', 'b'], dtype=np.dtype("<U1"))
- <PandasArray>
+ <NumpyExtensionArray>
['a', 'b']
Length: 2, dtype: str32
@@ -193,7 +193,7 @@ def array(
When data with a ``datetime64[ns]`` or ``timedelta64[ns]`` dtype is
passed, pandas will always return a ``DatetimeArray`` or ``TimedeltaArray``
- rather than a ``PandasArray``. This is for symmetry with the case of
+ rather than a ``NumpyExtensionArray``. This is for symmetry with the case of
timezone-aware data, which NumPy does not natively support.
>>> pd.array(['2015', '2016'], dtype='datetime64[ns]')
@@ -258,21 +258,21 @@ def array(
Categories (3, object): ['a' < 'b' < 'c']
If pandas does not infer a dedicated extension type a
- :class:`arrays.PandasArray` is returned.
+ :class:`arrays.NumpyExtensionArray` is returned.
>>> pd.array([1 + 1j, 3 + 2j])
- <PandasArray>
+ <NumpyExtensionArray>
[(1+1j), (3+2j)]
Length: 2, dtype: complex128
As mentioned in the "Notes" section, new extension types may be added
in the future (by pandas or 3rd party libraries), causing the return
- value to no longer be a :class:`arrays.PandasArray`. Specify the `dtype`
- as a NumPy dtype if you need to ensure there's no future change in
+ value to no longer be a :class:`arrays.NumpyExtensionArray`. Specify the
+ `dtype` as a NumPy dtype if you need to ensure there's no future change in
behavior.
>>> pd.array([1, 2], dtype=np.dtype("int32"))
- <PandasArray>
+ <NumpyExtensionArray>
[1, 2]
Length: 2, dtype: int32
@@ -291,7 +291,7 @@ def array(
FloatingArray,
IntegerArray,
IntervalArray,
- PandasArray,
+ NumpyExtensionArray,
PeriodArray,
TimedeltaArray,
)
@@ -314,7 +314,7 @@ def array(
dtype = pandas_dtype(dtype)
if isinstance(data, ExtensionArray) and (dtype is None or data.dtype == dtype):
- # e.g. TimedeltaArray[s], avoid casting to PandasArray
+ # e.g. TimedeltaArray[s], avoid casting to NumpyExtensionArray
if copy:
return data.copy()
return data
@@ -337,7 +337,7 @@ def array(
try:
return DatetimeArray._from_sequence(data, copy=copy)
except ValueError:
- # Mixture of timezones, fall back to PandasArray
+ # Mixture of timezones, fall back to NumpyExtensionArray
pass
elif inferred_dtype.startswith("timedelta"):
@@ -356,7 +356,7 @@ def array(
and getattr(data, "dtype", None) != np.float16
):
# GH#44715 Exclude np.float16 bc FloatingArray does not support it;
- # we will fall back to PandasArray.
+ # we will fall back to NumpyExtensionArray.
return FloatingArray._from_sequence(data, copy=copy)
elif inferred_dtype == "boolean":
@@ -381,7 +381,7 @@ def array(
stacklevel=find_stack_level(),
)
- return PandasArray._from_sequence(data, dtype=dtype, copy=copy)
+ return NumpyExtensionArray._from_sequence(data, dtype=dtype, copy=copy)
_typs = frozenset(
@@ -427,7 +427,7 @@ def extract_array(
For Series / Index, the underlying ExtensionArray is unboxed.
extract_numpy : bool, default False
- Whether to extract the ndarray from a PandasArray.
+ Whether to extract the ndarray from a NumpyExtensionArray.
extract_range : bool, default False
If we have a RangeIndex, return range._values if True
@@ -471,7 +471,7 @@ def extract_array(
return obj._values # type: ignore[attr-defined]
elif extract_numpy and typ == "npy_extension":
- # i.e. isinstance(obj, ABCPandasArray)
+ # i.e. isinstance(obj, ABCNumpyExtensionArray)
# error: "T" has no attribute "to_numpy"
return obj.to_numpy() # type: ignore[attr-defined]
@@ -540,11 +540,11 @@ def sanitize_array(
if isinstance(data, ma.MaskedArray):
data = sanitize_masked_array(data)
- if isinstance(dtype, PandasDtype):
- # Avoid ending up with a PandasArray
+ if isinstance(dtype, NumpyEADtype):
+ # Avoid ending up with a NumpyExtensionArray
dtype = dtype.numpy_dtype
- # extract ndarray or ExtensionArray, ensure we have no PandasArray
+ # extract ndarray or ExtensionArray, ensure we have no NumpyExtensionArray
data = extract_array(data, extract_numpy=True, extract_range=True)
if isinstance(data, np.ndarray) and data.ndim == 0:
@@ -563,7 +563,7 @@ def sanitize_array(
return data
elif isinstance(data, ABCExtensionArray):
- # it is already ensured above this is not a PandasArray
+ # it is already ensured above this is not a NumpyExtensionArray
# Until GH#49309 is fixed this check needs to come before the
# ExtensionDtype check
if dtype is not None:
@@ -688,7 +688,7 @@ def _sanitize_ndim(
f"Data must be 1-dimensional, got ndarray of shape {data.shape} instead"
)
if is_object_dtype(dtype) and isinstance(dtype, ExtensionDtype):
- # i.e. PandasDtype("O")
+ # i.e. NumpyEADtype("O")
result = com.asarray_tuplesafe(data, dtype=np.dtype("object"))
cls = dtype.construct_array_type()
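The `pd.array` fallback described in the docstring edits above can be sketched as follows; complex data has no dedicated extension type, so it lands in the NumPy-backed wrapper, while pinning a NumPy dtype keeps the return type stable across versions:

```python
import numpy as np
import pandas as pd

# No dedicated extension type for complex -> NumPy-backed wrapper
# (repr name depends on the installed pandas version).
arr = pd.array([1 + 1j, 3 + 2j])
print(type(arr).__name__)

# Pinning a NumPy dtype avoids future inference changes.
pinned = pd.array([1, 2], dtype=np.dtype("int32"))
print(pinned.dtype)
```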
diff --git a/pandas/core/dtypes/astype.py b/pandas/core/dtypes/astype.py
index 64df3827d7a3d..ac3a44276ac6d 100644
--- a/pandas/core/dtypes/astype.py
+++ b/pandas/core/dtypes/astype.py
@@ -24,7 +24,7 @@
)
from pandas.core.dtypes.dtypes import (
ExtensionDtype,
- PandasDtype,
+ NumpyEADtype,
)
if TYPE_CHECKING:
@@ -230,8 +230,8 @@ def astype_array_safe(
raise TypeError(msg)
dtype = pandas_dtype(dtype)
- if isinstance(dtype, PandasDtype):
- # Ensure we don't end up with a PandasArray
+ if isinstance(dtype, NumpyEADtype):
+ # Ensure we don't end up with a NumpyExtensionArray
dtype = dtype.numpy_dtype
try:
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 04e2b00744156..05ebe8295f817 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -86,7 +86,7 @@
BaseMaskedArray,
DatetimeArray,
IntervalArray,
- PandasArray,
+ NumpyExtensionArray,
PeriodArray,
SparseArray,
)
@@ -1382,7 +1382,7 @@ def _get_common_dtype(self, dtypes: list[DtypeObj]) -> DtypeObj | None:
return IntervalDtype(common, closed=closed)
-class PandasDtype(ExtensionDtype):
+class NumpyEADtype(ExtensionDtype):
"""
A Pandas ExtensionDtype for NumPy dtypes.
@@ -1401,19 +1401,19 @@ class PandasDtype(ExtensionDtype):
_metadata = ("_dtype",)
- def __init__(self, dtype: npt.DTypeLike | PandasDtype | None) -> None:
- if isinstance(dtype, PandasDtype):
+ def __init__(self, dtype: npt.DTypeLike | NumpyEADtype | None) -> None:
+ if isinstance(dtype, NumpyEADtype):
# make constructor idempotent
dtype = dtype.numpy_dtype
self._dtype = np.dtype(dtype)
def __repr__(self) -> str:
- return f"PandasDtype({repr(self.name)})"
+ return f"NumpyEADtype({repr(self.name)})"
@property
def numpy_dtype(self) -> np.dtype:
"""
- The NumPy dtype this PandasDtype wraps.
+ The NumPy dtype this NumpyEADtype wraps.
"""
return self._dtype
@@ -1441,19 +1441,19 @@ def _is_boolean(self) -> bool:
return self.kind == "b"
@classmethod
- def construct_from_string(cls, string: str) -> PandasDtype:
+ def construct_from_string(cls, string: str) -> NumpyEADtype:
try:
dtype = np.dtype(string)
except TypeError as err:
if not isinstance(string, str):
msg = f"'construct_from_string' expects a string, got {type(string)}"
else:
- msg = f"Cannot construct a 'PandasDtype' from '{string}'"
+ msg = f"Cannot construct a 'NumpyEADtype' from '{string}'"
raise TypeError(msg) from err
return cls(dtype)
@classmethod
- def construct_array_type(cls) -> type_t[PandasArray]:
+ def construct_array_type(cls) -> type_t[NumpyExtensionArray]:
"""
Return the array type associated with this dtype.
@@ -1461,9 +1461,9 @@ def construct_array_type(cls) -> type_t[PandasArray]:
-------
type
"""
- from pandas.core.arrays import PandasArray
+ from pandas.core.arrays import NumpyExtensionArray
- return PandasArray
+ return NumpyExtensionArray
@property
def kind(self) -> str:
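The renamed dtype class is reachable from the public API as the ``.dtype`` of the wrapper array; a quick sketch of the round-trip it provides (class name in the first print depends on pandas version):

```python
import numpy as np
import pandas as pd

# Complex input has no dedicated extension type, so pd.array returns the
# NumPy-backed wrapper whose .dtype is the EA dtype renamed in this file.
arr = pd.array([1 + 1j, 3 + 2j])
ea_dtype = arr.dtype
print(type(ea_dtype).__name__)  # NumpyEADtype (PandasDtype before the rename)
print(ea_dtype.numpy_dtype)     # the wrapped np.dtype: complex128
print(ea_dtype.kind)
```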
diff --git a/pandas/core/dtypes/generic.py b/pandas/core/dtypes/generic.py
index 5904ba4895aef..9718ad600cb80 100644
--- a/pandas/core/dtypes/generic.py
+++ b/pandas/core/dtypes/generic.py
@@ -24,7 +24,7 @@
from pandas.core.arrays import (
DatetimeArray,
ExtensionArray,
- PandasArray,
+ NumpyExtensionArray,
PeriodArray,
TimedeltaArray,
)
@@ -141,7 +141,7 @@ def _subclasscheck(cls, inst) -> bool:
{"extension", "categorical", "periodarray", "datetimearray", "timedeltaarray"},
),
)
-ABCPandasArray = cast(
- "Type[PandasArray]",
- create_pandas_abc_type("ABCPandasArray", "_typ", ("npy_extension",)),
+ABCNumpyExtensionArray = cast(
+ "Type[NumpyExtensionArray]",
+ create_pandas_abc_type("ABCNumpyExtensionArray", "_typ", ("npy_extension",)),
)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 4b6b59898c199..4cf93aebf7de5 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -5099,9 +5099,9 @@ def values(self) -> ArrayLike:
def array(self) -> ExtensionArray:
array = self._data
if isinstance(array, np.ndarray):
- from pandas.core.arrays.numpy_ import PandasArray
+ from pandas.core.arrays.numpy_ import NumpyExtensionArray
- array = PandasArray(array)
+ array = NumpyExtensionArray(array)
return array
@property
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index 5591253618f5f..3b77540efcdd2 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -37,7 +37,7 @@
)
from pandas.core.dtypes.dtypes import (
ExtensionDtype,
- PandasDtype,
+ NumpyEADtype,
SparseDtype,
)
from pandas.core.dtypes.generic import (
@@ -56,7 +56,7 @@
from pandas.core.arrays import (
DatetimeArray,
ExtensionArray,
- PandasArray,
+ NumpyExtensionArray,
TimedeltaArray,
)
from pandas.core.construction import (
@@ -331,7 +331,8 @@ def convert(self, copy: bool | None) -> Self:
def _convert(arr):
if is_object_dtype(arr.dtype):
- # extract PandasArray for tests that patch PandasArray._typ
+ # extract NumpyExtensionArray for tests that patch
+ # NumpyExtensionArray._typ
arr = np.asarray(arr)
result = lib.maybe_convert_objects(
arr,
@@ -1022,7 +1023,7 @@ def as_array(
if isinstance(dtype, SparseDtype):
dtype = dtype.subtype
- elif isinstance(dtype, PandasDtype):
+ elif isinstance(dtype, NumpyEADtype):
dtype = dtype.numpy_dtype
elif isinstance(dtype, ExtensionDtype):
dtype = np.dtype("object")
@@ -1148,7 +1149,7 @@ def array_values(self):
"""The array that Series.array returns"""
arr = self.array
if isinstance(arr, np.ndarray):
- arr = PandasArray(arr)
+ arr = NumpyExtensionArray(arr)
return arr
@property
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 45a4f765df082..525bf110b2465 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -64,14 +64,14 @@
DatetimeTZDtype,
ExtensionDtype,
IntervalDtype,
- PandasDtype,
+ NumpyEADtype,
PeriodDtype,
SparseDtype,
)
from pandas.core.dtypes.generic import (
ABCDataFrame,
ABCIndex,
- ABCPandasArray,
+ ABCNumpyExtensionArray,
ABCSeries,
)
from pandas.core.dtypes.missing import (
@@ -101,7 +101,7 @@
DatetimeArray,
ExtensionArray,
IntervalArray,
- PandasArray,
+ NumpyExtensionArray,
PeriodArray,
TimedeltaArray,
)
@@ -1391,9 +1391,9 @@ def pad_or_backfill(
copy, refs = self._get_refs_and_copy(using_cow, inplace)
- # Dispatch to the PandasArray method.
- # We know self.array_values is a PandasArray bc EABlock overrides
- vals = cast(PandasArray, self.array_values)
+ # Dispatch to the NumpyExtensionArray method.
+ # We know self.array_values is a NumpyExtensionArray bc EABlock overrides
+ vals = cast(NumpyExtensionArray, self.array_values)
if axis == 1:
vals = vals.T
new_values = vals.pad_or_backfill(
@@ -2167,7 +2167,7 @@ def is_view(self) -> bool:
@property
def array_values(self) -> ExtensionArray:
- return PandasArray(self.values)
+ return NumpyExtensionArray(self.values)
def get_values(self, dtype: DtypeObj | None = None) -> np.ndarray:
if dtype == _dtype_obj:
@@ -2265,7 +2265,7 @@ def maybe_coerce_values(values: ArrayLike) -> ArrayLike:
-------
values : np.ndarray or ExtensionArray
"""
- # Caller is responsible for ensuring PandasArray is already extracted.
+ # Caller is responsible for ensuring NumpyExtensionArray is already extracted.
if isinstance(values, np.ndarray):
values = ensure_wrapped_if_datetimelike(values)
@@ -2297,7 +2297,7 @@ def get_block_type(dtype: DtypeObj) -> type[Block]:
elif isinstance(dtype, PeriodDtype):
return NDArrayBackedExtensionBlock
elif isinstance(dtype, ExtensionDtype):
- # Note: need to be sure PandasArray is unwrapped before we get here
+ # Note: need to be sure NumpyExtensionArray is unwrapped before we get here
return ExtensionBlock
# We use kind checks because it is much more performant
@@ -2330,7 +2330,7 @@ def new_block(
refs: BlockValuesRefs | None = None,
) -> Block:
# caller is responsible for ensuring:
- # - values is NOT a PandasArray
+ # - values is NOT a NumpyExtensionArray
# - check_ndim/ensure_block_shape already checked
# - maybe_coerce_values already called/unnecessary
klass = get_block_type(values.dtype)
@@ -2383,16 +2383,16 @@ def extract_pandas_array(
values: ArrayLike, dtype: DtypeObj | None, ndim: int
) -> tuple[ArrayLike, DtypeObj | None]:
"""
- Ensure that we don't allow PandasArray / PandasDtype in internals.
+ Ensure that we don't allow NumpyExtensionArray / NumpyEADtype in internals.
"""
# For now, blocks should be backed by ndarrays when possible.
- if isinstance(values, ABCPandasArray):
+ if isinstance(values, ABCNumpyExtensionArray):
values = values.to_numpy()
if ndim and ndim > 1:
# TODO(EA2D): special case not needed with 2D EAs
values = np.atleast_2d(values)
- if isinstance(dtype, PandasDtype):
+ if isinstance(dtype, NumpyEADtype):
dtype = dtype.numpy_dtype
return values, dtype
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index ee85bc5a87834..2290cd86f35e6 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -118,7 +118,7 @@ def arrays_to_mgr(
# - all(len(x) == len(index) for x in arrays)
# - all(x.ndim == 1 for x in arrays)
# - all(isinstance(x, (np.ndarray, ExtensionArray)) for x in arrays)
- # - all(type(x) is not PandasArray for x in arrays)
+ # - all(type(x) is not NumpyExtensionArray for x in arrays)
else:
index = ensure_index(index)
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index be030f7ad7a47..ac2dd08d47427 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -2067,7 +2067,7 @@ def create_block_manager_from_column_arrays(
# assert isinstance(axes, list)
# assert all(isinstance(x, Index) for x in axes)
# assert all(isinstance(x, (np.ndarray, ExtensionArray)) for x in arrays)
- # assert all(type(x) is not PandasArray for x in arrays)
+ # assert all(type(x) is not NumpyExtensionArray for x in arrays)
# assert all(x.ndim == 1 for x in arrays)
# assert all(len(x) == len(axes[1]) for x in arrays)
# assert len(arrays) == len(axes[0])
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 079366a942f8e..223b04ac9a733 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -767,15 +767,15 @@ def _values(self):
Overview:
- dtype | values | _values | array |
- ----------- | ------------- | ------------- | ------------- |
- Numeric | ndarray | ndarray | PandasArray |
- Category | Categorical | Categorical | Categorical |
- dt64[ns] | ndarray[M8ns] | DatetimeArray | DatetimeArray |
- dt64[ns tz] | ndarray[M8ns] | DatetimeArray | DatetimeArray |
- td64[ns] | ndarray[m8ns] | TimedeltaArray| ndarray[m8ns] |
- Period | ndarray[obj] | PeriodArray | PeriodArray |
- Nullable | EA | EA | EA |
+ dtype | values | _values | array |
+ ----------- | ------------- | ------------- | --------------------- |
+ Numeric | ndarray | ndarray | NumpyExtensionArray |
+ Category | Categorical | Categorical | Categorical |
+ dt64[ns] | ndarray[M8ns] | DatetimeArray | DatetimeArray |
+ dt64[ns tz] | ndarray[M8ns] | DatetimeArray | DatetimeArray |
+ td64[ns] | ndarray[m8ns] | TimedeltaArray| TimedeltaArray |
+ Period | ndarray[obj] | PeriodArray | PeriodArray |
+ Nullable | EA | EA | EA |
"""
return self._mgr.internal_values()
@@ -889,7 +889,7 @@ def view(self, dtype: Dtype | None = None) -> Series:
4 2
dtype: int8
"""
- # self.array instead of self._values so we piggyback on PandasArray
+ # self.array instead of self._values so we piggyback on NumpyExtensionArray
# implementation
res_values = self.array.view(dtype)
res_ser = self._constructor(res_values, index=self.index, copy=False)
diff --git a/pandas/core/strings/__init__.py b/pandas/core/strings/__init__.py
index eb650477c2b6b..d4ce75f768c5d 100644
--- a/pandas/core/strings/__init__.py
+++ b/pandas/core/strings/__init__.py
@@ -23,6 +23,6 @@
# BaseStringArrayMethods
# - ObjectStringArrayMixin
# - StringArray
-# - PandasArray
+# - NumpyExtensionArray
# - Categorical
# - ArrowStringArray
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index afe904f02ea8b..801968bd59f4e 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -67,7 +67,7 @@
from pandas.arrays import (
DatetimeArray,
IntegerArray,
- PandasArray,
+ NumpyExtensionArray,
)
from pandas.core import algorithms
from pandas.core.algorithms import unique
@@ -393,7 +393,7 @@ def _convert_listlike_datetimes(
"""
if isinstance(arg, (list, tuple)):
arg = np.array(arg, dtype="O")
- elif isinstance(arg, PandasArray):
+ elif isinstance(arg, NumpyExtensionArray):
arg = np.array(arg)
arg_dtype = getattr(arg, "dtype", None)
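For the `to_datetime` change, the unwrapping path can be sketched like so; the wrapper is simply converted back to an ndarray before parsing:

```python
import pandas as pd

ser = pd.Series(["2015", "2016"])
wrapped = ser.array            # the NumPy-backed wrapper handled above
out = pd.to_datetime(wrapped)  # unwrapped to an ndarray internally
print(out)
```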
diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py
index 924a6db4b901b..60bcb97aaa364 100644
--- a/pandas/tests/api/test_api.py
+++ b/pandas/tests/api/test_api.py
@@ -373,3 +373,11 @@ def test_testing(self):
def test_util_in_top_level(self):
with pytest.raises(AttributeError, match="foo"):
pd.util.foo
+
+
+def test_pandas_array_alias():
+ msg = "PandasArray has been renamed NumpyExtensionArray"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ res = pd.arrays.PandasArray
+
+ assert res is pd.arrays.NumpyExtensionArray
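Outside the test suite, the deprecation added here looks roughly like this to end users (a sketch; the old attribute may disappear entirely in later majors, hence the ``getattr`` fallback):

```python
import warnings
import pandas as pd

with warnings.catch_warnings():
    # Accessing the old name emits FutureWarning on versions that keep
    # the alias; silence it for the lookup below.
    warnings.simplefilter("ignore", FutureWarning)
    cls = getattr(pd.arrays, "NumpyExtensionArray", None) or getattr(
        pd.arrays, "PandasArray"
    )
print(cls.__name__)
```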
diff --git a/pandas/tests/arithmetic/common.py b/pandas/tests/arithmetic/common.py
index f3173e8f0eb57..b608df1554154 100644
--- a/pandas/tests/arithmetic/common.py
+++ b/pandas/tests/arithmetic/common.py
@@ -13,7 +13,7 @@
import pandas._testing as tm
from pandas.core.arrays import (
BooleanArray,
- PandasArray,
+ NumpyExtensionArray,
)
@@ -95,8 +95,8 @@ def assert_invalid_comparison(left, right, box):
def xbox2(x):
# Eventually we'd like this to be tighter, but for now we'll
- # just exclude PandasArray[bool]
- if isinstance(x, PandasArray):
+ # just exclude NumpyExtensionArray[bool]
+ if isinstance(x, NumpyExtensionArray):
return x._ndarray
if isinstance(x, BooleanArray):
# NB: we are assuming no pd.NAs for now
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index 6a0584485be42..e6c743c76a2c1 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -1570,7 +1570,7 @@ def test_dt64arr_add_sub_offset_array(
# Same thing but boxing other
other = tm.box_expected(other, box_with_array)
if box_with_array is pd.array and op is roperator.radd:
- # We expect a PandasArray, not ndarray[object] here
+ # We expect a NumpyExtensionArray, not ndarray[object] here
expected = pd.array(expected, dtype=object)
with tm.assert_produces_warning(PerformanceWarning):
res = op(dtarr, other)
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index c2f835212529f..0ffe1ddc3dfb7 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -27,7 +27,7 @@
timedelta_range,
)
import pandas._testing as tm
-from pandas.core.arrays import PandasArray
+from pandas.core.arrays import NumpyExtensionArray
from pandas.tests.arithmetic.common import (
assert_invalid_addsub_type,
assert_invalid_comparison,
@@ -1716,7 +1716,7 @@ def test_td64_div_object_mixed_result(self, box_with_array):
expected = Index([1.0, np.timedelta64("NaT", "ns"), orig[0], 1.5], dtype=object)
expected = tm.box_expected(expected, box_with_array, transpose=False)
- if isinstance(expected, PandasArray):
+ if isinstance(expected, NumpyExtensionArray):
expected = expected.to_numpy()
tm.assert_equal(res, expected)
if box_with_array is DataFrame:
@@ -1727,7 +1727,7 @@ def test_td64_div_object_mixed_result(self, box_with_array):
expected = Index([1, np.timedelta64("NaT", "ns"), orig[0], 1], dtype=object)
expected = tm.box_expected(expected, box_with_array, transpose=False)
- if isinstance(expected, PandasArray):
+ if isinstance(expected, NumpyExtensionArray):
expected = expected.to_numpy()
tm.assert_equal(res, expected)
if box_with_array is DataFrame:
diff --git a/pandas/tests/arrays/numpy_/test_numpy.py b/pandas/tests/arrays/numpy_/test_numpy.py
index c748d487a2f9c..4217745e60e76 100644
--- a/pandas/tests/arrays/numpy_/test_numpy.py
+++ b/pandas/tests/arrays/numpy_/test_numpy.py
@@ -1,15 +1,15 @@
"""
-Additional tests for PandasArray that aren't covered by
+Additional tests for NumpyExtensionArray that aren't covered by
the interface tests.
"""
import numpy as np
import pytest
-from pandas.core.dtypes.dtypes import PandasDtype
+from pandas.core.dtypes.dtypes import NumpyEADtype
import pandas as pd
import pandas._testing as tm
-from pandas.arrays import PandasArray
+from pandas.arrays import NumpyExtensionArray
@pytest.fixture(
@@ -33,7 +33,7 @@ def any_numpy_array(request):
# ----------------------------------------------------------------------------
-# PandasDtype
+# NumpyEADtype
@pytest.mark.parametrize(
@@ -52,7 +52,7 @@ def any_numpy_array(request):
],
)
def test_is_numeric(dtype, expected):
- dtype = PandasDtype(dtype)
+ dtype = NumpyEADtype(dtype)
assert dtype._is_numeric is expected
@@ -72,25 +72,25 @@ def test_is_numeric(dtype, expected):
],
)
def test_is_boolean(dtype, expected):
- dtype = PandasDtype(dtype)
+ dtype = NumpyEADtype(dtype)
assert dtype._is_boolean is expected
def test_repr():
- dtype = PandasDtype(np.dtype("int64"))
- assert repr(dtype) == "PandasDtype('int64')"
+ dtype = NumpyEADtype(np.dtype("int64"))
+ assert repr(dtype) == "NumpyEADtype('int64')"
def test_constructor_from_string():
- result = PandasDtype.construct_from_string("int64")
- expected = PandasDtype(np.dtype("int64"))
+ result = NumpyEADtype.construct_from_string("int64")
+ expected = NumpyEADtype(np.dtype("int64"))
assert result == expected
def test_dtype_univalent(any_numpy_dtype):
- dtype = PandasDtype(any_numpy_dtype)
+ dtype = NumpyEADtype(any_numpy_dtype)
- result = PandasDtype(dtype)
+ result = NumpyEADtype(dtype)
assert result == dtype
@@ -100,40 +100,40 @@ def test_dtype_univalent(any_numpy_dtype):
def test_constructor_no_coercion():
with pytest.raises(ValueError, match="NumPy array"):
- PandasArray([1, 2, 3])
+ NumpyExtensionArray([1, 2, 3])
def test_series_constructor_with_copy():
ndarray = np.array([1, 2, 3])
- ser = pd.Series(PandasArray(ndarray), copy=True)
+ ser = pd.Series(NumpyExtensionArray(ndarray), copy=True)
assert ser.values is not ndarray
def test_series_constructor_with_astype():
ndarray = np.array([1, 2, 3])
- result = pd.Series(PandasArray(ndarray), dtype="float64")
+ result = pd.Series(NumpyExtensionArray(ndarray), dtype="float64")
expected = pd.Series([1.0, 2.0, 3.0], dtype="float64")
tm.assert_series_equal(result, expected)
def test_from_sequence_dtype():
arr = np.array([1, 2, 3], dtype="int64")
- result = PandasArray._from_sequence(arr, dtype="uint64")
- expected = PandasArray(np.array([1, 2, 3], dtype="uint64"))
+ result = NumpyExtensionArray._from_sequence(arr, dtype="uint64")
+ expected = NumpyExtensionArray(np.array([1, 2, 3], dtype="uint64"))
tm.assert_extension_array_equal(result, expected)
def test_constructor_copy():
arr = np.array([0, 1])
- result = PandasArray(arr, copy=True)
+ result = NumpyExtensionArray(arr, copy=True)
assert not tm.shares_memory(result, arr)
def test_constructor_with_data(any_numpy_array):
nparr = any_numpy_array
- arr = PandasArray(nparr)
+ arr = NumpyExtensionArray(nparr)
assert arr.dtype.numpy_dtype == nparr.dtype
@@ -142,7 +142,7 @@ def test_constructor_with_data(any_numpy_array):
def test_to_numpy():
- arr = PandasArray(np.array([1, 2, 3]))
+ arr = NumpyExtensionArray(np.array([1, 2, 3]))
result = arr.to_numpy()
assert result is arr._ndarray
@@ -167,7 +167,7 @@ def test_setitem_series():
def test_setitem(any_numpy_array):
nparr = any_numpy_array
- arr = PandasArray(nparr, copy=True)
+ arr = NumpyExtensionArray(nparr, copy=True)
arr[0] = arr[1]
nparr[0] = nparr[1]
@@ -181,14 +181,14 @@ def test_setitem(any_numpy_array):
def test_bad_reduce_raises():
arr = np.array([1, 2, 3], dtype="int64")
- arr = PandasArray(arr)
+ arr = NumpyExtensionArray(arr)
msg = "cannot perform not_a_method with type int"
with pytest.raises(TypeError, match=msg):
arr._reduce(msg)
def test_validate_reduction_keyword_args():
- arr = PandasArray(np.array([1, 2, 3]))
+ arr = NumpyExtensionArray(np.array([1, 2, 3]))
msg = "the 'keepdims' parameter is not supported .*all"
with pytest.raises(ValueError, match=msg):
arr.all(keepdims=True)
@@ -217,7 +217,7 @@ def test_np_max_nested_tuples():
def test_np_reduce_2d():
raw = np.arange(12).reshape(4, 3)
- arr = PandasArray(raw)
+ arr = NumpyExtensionArray(raw)
res = np.maximum.reduce(arr, axis=0)
tm.assert_extension_array_equal(res, arr[-1])
@@ -232,24 +232,24 @@ def test_np_reduce_2d():
@pytest.mark.parametrize("ufunc", [np.abs, np.negative, np.positive])
def test_ufunc_unary(ufunc):
- arr = PandasArray(np.array([-1.0, 0.0, 1.0]))
+ arr = NumpyExtensionArray(np.array([-1.0, 0.0, 1.0]))
result = ufunc(arr)
- expected = PandasArray(ufunc(arr._ndarray))
+ expected = NumpyExtensionArray(ufunc(arr._ndarray))
tm.assert_extension_array_equal(result, expected)
# same thing but with the 'out' keyword
- out = PandasArray(np.array([-9.0, -9.0, -9.0]))
+ out = NumpyExtensionArray(np.array([-9.0, -9.0, -9.0]))
ufunc(arr, out=out)
tm.assert_extension_array_equal(out, expected)
def test_ufunc():
- arr = PandasArray(np.array([-1.0, 0.0, 1.0]))
+ arr = NumpyExtensionArray(np.array([-1.0, 0.0, 1.0]))
r1, r2 = np.divmod(arr, np.add(arr, 2))
e1, e2 = np.divmod(arr._ndarray, np.add(arr._ndarray, 2))
- e1 = PandasArray(e1)
- e2 = PandasArray(e2)
+ e1 = NumpyExtensionArray(e1)
+ e2 = NumpyExtensionArray(e2)
tm.assert_extension_array_equal(r1, e1)
tm.assert_extension_array_equal(r2, e2)
@@ -257,23 +257,23 @@ def test_ufunc():
def test_basic_binop():
# Just a basic smoke test. The EA interface tests exercise this
# more thoroughly.
- x = PandasArray(np.array([1, 2, 3]))
+ x = NumpyExtensionArray(np.array([1, 2, 3]))
result = x + x
- expected = PandasArray(np.array([2, 4, 6]))
+ expected = NumpyExtensionArray(np.array([2, 4, 6]))
tm.assert_extension_array_equal(result, expected)
@pytest.mark.parametrize("dtype", [None, object])
def test_setitem_object_typecode(dtype):
- arr = PandasArray(np.array(["a", "b", "c"], dtype=dtype))
+ arr = NumpyExtensionArray(np.array(["a", "b", "c"], dtype=dtype))
arr[0] = "t"
- expected = PandasArray(np.array(["t", "b", "c"], dtype=dtype))
+ expected = NumpyExtensionArray(np.array(["t", "b", "c"], dtype=dtype))
tm.assert_extension_array_equal(arr, expected)
def test_setitem_no_coercion():
# https://github.com/pandas-dev/pandas/issues/28150
- arr = PandasArray(np.array([1, 2, 3]))
+ arr = NumpyExtensionArray(np.array([1, 2, 3]))
with pytest.raises(ValueError, match="int"):
arr[0] = "a"
@@ -285,7 +285,7 @@ def test_setitem_no_coercion():
def test_setitem_preserves_views():
# GH#28150, see also extension test of the same name
- arr = PandasArray(np.array([1, 2, 3]))
+ arr = NumpyExtensionArray(np.array([1, 2, 3]))
view1 = arr.view()
view2 = arr[:]
view3 = np.asarray(arr)
@@ -303,22 +303,22 @@ def test_setitem_preserves_views():
@pytest.mark.parametrize("dtype", [np.int64, np.uint64])
def test_quantile_empty(dtype):
# we should get back np.nans, not -1s
- arr = PandasArray(np.array([], dtype=dtype))
+ arr = NumpyExtensionArray(np.array([], dtype=dtype))
idx = pd.Index([0.0, 0.5])
result = arr._quantile(idx, interpolation="linear")
- expected = PandasArray(np.array([np.nan, np.nan]))
+ expected = NumpyExtensionArray(np.array([np.nan, np.nan]))
tm.assert_extension_array_equal(result, expected)
def test_factorize_unsigned():
- # don't raise when calling factorize on unsigned int PandasArray
+ # don't raise when calling factorize on unsigned int NumpyExtensionArray
arr = np.array([1, 2, 3], dtype=np.uint64)
- obj = PandasArray(arr)
+ obj = NumpyExtensionArray(arr)
res_codes, res_unique = obj.factorize()
exp_codes, exp_unique = pd.factorize(arr)
tm.assert_numpy_array_equal(res_codes, exp_codes)
- tm.assert_extension_array_equal(res_unique, PandasArray(exp_unique))
+ tm.assert_extension_array_equal(res_unique, NumpyExtensionArray(exp_unique))
diff --git a/pandas/tests/arrays/test_array.py b/pandas/tests/arrays/test_array.py
index d5c1d5bbd03b0..b8b5e3588d48f 100644
--- a/pandas/tests/arrays/test_array.py
+++ b/pandas/tests/arrays/test_array.py
@@ -19,7 +19,7 @@
TimedeltaArray,
)
from pandas.core.arrays import (
- PandasArray,
+ NumpyExtensionArray,
period_array,
)
from pandas.tests.extension.decimal import (
@@ -48,11 +48,11 @@ def test_dt64_array(dtype_unit):
[
# Basic NumPy defaults.
([1, 2], None, IntegerArray._from_sequence([1, 2])),
- ([1, 2], object, PandasArray(np.array([1, 2], dtype=object))),
+ ([1, 2], object, NumpyExtensionArray(np.array([1, 2], dtype=object))),
(
[1, 2],
np.dtype("float32"),
- PandasArray(np.array([1.0, 2.0], dtype=np.dtype("float32"))),
+ NumpyExtensionArray(np.array([1.0, 2.0], dtype=np.dtype("float32"))),
),
(np.array([1, 2], dtype="int64"), None, IntegerArray._from_sequence([1, 2])),
(
@@ -61,19 +61,20 @@ def test_dt64_array(dtype_unit):
FloatingArray._from_sequence([1.0, 2.0]),
),
# String alias passes through to NumPy
- ([1, 2], "float32", PandasArray(np.array([1, 2], dtype="float32"))),
- ([1, 2], "int64", PandasArray(np.array([1, 2], dtype=np.int64))),
- # GH#44715 FloatingArray does not support float16, so fall back to PandasArray
+ ([1, 2], "float32", NumpyExtensionArray(np.array([1, 2], dtype="float32"))),
+ ([1, 2], "int64", NumpyExtensionArray(np.array([1, 2], dtype=np.int64))),
+ # GH#44715 FloatingArray does not support float16, so fall
+ # back to NumpyExtensionArray
(
np.array([1, 2], dtype=np.float16),
None,
- PandasArray(np.array([1, 2], dtype=np.float16)),
+ NumpyExtensionArray(np.array([1, 2], dtype=np.float16)),
),
# idempotency with e.g. pd.array(pd.array([1, 2], dtype="int64"))
(
- PandasArray(np.array([1, 2], dtype=np.int32)),
+ NumpyExtensionArray(np.array([1, 2], dtype=np.int32)),
None,
- PandasArray(np.array([1, 2], dtype=np.int32)),
+ NumpyExtensionArray(np.array([1, 2], dtype=np.int32)),
),
# Period alias
(
@@ -148,7 +149,7 @@ def test_dt64_array(dtype_unit):
TimedeltaArray._from_sequence(["1H", "2H"]),
),
(
- # preserve non-nano, i.e. don't cast to PandasArray
+ # preserve non-nano, i.e. don't cast to NumpyExtensionArray
TimedeltaArray._simple_new(
np.arange(5, dtype=np.int64).view("m8[s]"), dtype=np.dtype("m8[s]")
),
@@ -158,7 +159,7 @@ def test_dt64_array(dtype_unit):
),
),
(
- # preserve non-nano, i.e. don't cast to PandasArray
+ # preserve non-nano, i.e. don't cast to NumpyExtensionArray
TimedeltaArray._simple_new(
np.arange(5, dtype=np.int64).view("m8[s]"), dtype=np.dtype("m8[s]")
),
@@ -184,7 +185,11 @@ def test_dt64_array(dtype_unit):
([0, 1], "Sparse[int64]", SparseArray([0, 1], dtype="int64")),
# IntegerNA
([1, None], "Int16", pd.array([1, None], dtype="Int16")),
- (pd.Series([1, 2]), None, PandasArray(np.array([1, 2], dtype=np.int64))),
+ (
+ pd.Series([1, 2]),
+ None,
+ NumpyExtensionArray(np.array([1, 2], dtype=np.int64)),
+ ),
# String
(
["a", None],
@@ -200,7 +205,7 @@ def test_dt64_array(dtype_unit):
([True, None], "boolean", BooleanArray._from_sequence([True, None])),
([True, None], pd.BooleanDtype(), BooleanArray._from_sequence([True, None])),
# Index
- (pd.Index([1, 2]), None, PandasArray(np.array([1, 2], dtype=np.int64))),
+ (pd.Index([1, 2]), None, NumpyExtensionArray(np.array([1, 2], dtype=np.int64))),
# Series[EA] returns the EA
(
pd.Series(pd.Categorical(["a", "b"], categories=["a", "b", "c"])),
@@ -351,13 +356,13 @@ def test_array_inference(data, expected):
)
def test_array_inference_fails(data):
result = pd.array(data)
- expected = PandasArray(np.array(data, dtype=object))
+ expected = NumpyExtensionArray(np.array(data, dtype=object))
tm.assert_extension_array_equal(result, expected)
@pytest.mark.parametrize("data", [np.array(0)])
def test_nd_raises(data):
- with pytest.raises(ValueError, match="PandasArray must be 1-dimensional"):
+ with pytest.raises(ValueError, match="NumpyExtensionArray must be 1-dimensional"):
pd.array(data, dtype="int64")
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 9e402af931199..7df17c42134e9 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -23,7 +23,7 @@
import pandas._testing as tm
from pandas.core.arrays import (
DatetimeArray,
- PandasArray,
+ NumpyExtensionArray,
PeriodArray,
TimedeltaArray,
)
@@ -425,7 +425,7 @@ def test_setitem(self):
pd.Series,
np.array,
list,
- PandasArray,
+ NumpyExtensionArray,
],
)
def test_setitem_object_dtype(self, box, arr1d):
@@ -439,7 +439,7 @@ def test_setitem_object_dtype(self, box, arr1d):
elif box is np.array:
# if we do np.array(x).astype(object) then dt64 and td64 cast to ints
vals = np.array(vals.astype(object))
- elif box is PandasArray:
+ elif box is NumpyExtensionArray:
vals = box(np.asarray(vals, dtype=object))
else:
vals = box(vals).astype(object)
@@ -1291,7 +1291,7 @@ def test_period_index_construction_from_strings(klass):
def test_from_pandas_array(dtype):
# GH#24615
data = np.array([1, 2, 3], dtype=dtype)
- arr = PandasArray(data)
+ arr = NumpyExtensionArray(data)
cls = {"M8[ns]": DatetimeArray, "m8[ns]": TimedeltaArray}[dtype]
diff --git a/pandas/tests/arrays/test_ndarray_backed.py b/pandas/tests/arrays/test_ndarray_backed.py
index c48fb7e78d45b..1fe7cc9b03e8a 100644
--- a/pandas/tests/arrays/test_ndarray_backed.py
+++ b/pandas/tests/arrays/test_ndarray_backed.py
@@ -10,7 +10,7 @@
from pandas.core.arrays import (
Categorical,
DatetimeArray,
- PandasArray,
+ NumpyExtensionArray,
TimedeltaArray,
)
@@ -65,11 +65,11 @@ def test_empty_td64(self):
assert result.shape == shape
def test_empty_pandas_array(self):
- arr = PandasArray(np.array([1, 2]))
+ arr = NumpyExtensionArray(np.array([1, 2]))
dtype = arr.dtype
shape = (3, 9)
- result = PandasArray._empty(shape, dtype=dtype)
- assert isinstance(result, PandasArray)
+ result = NumpyExtensionArray._empty(shape, dtype=dtype)
+ assert isinstance(result, NumpyExtensionArray)
assert result.dtype == dtype
assert result.shape == shape
diff --git a/pandas/tests/base/test_conversion.py b/pandas/tests/base/test_conversion.py
index eda7871e5ab0a..0e618ea20bf67 100644
--- a/pandas/tests/base/test_conversion.py
+++ b/pandas/tests/base/test_conversion.py
@@ -15,7 +15,7 @@
from pandas.core.arrays import (
DatetimeArray,
IntervalArray,
- PandasArray,
+ NumpyExtensionArray,
PeriodArray,
SparseArray,
TimedeltaArray,
@@ -222,7 +222,7 @@ def test_values_consistent(arr, expected_type, dtype):
def test_numpy_array(arr):
ser = Series(arr)
result = ser.array
- expected = PandasArray(arr)
+ expected = NumpyExtensionArray(arr)
tm.assert_extension_array_equal(result, expected)
@@ -234,7 +234,7 @@ def test_numpy_array_all_dtypes(any_numpy_dtype):
elif np.dtype(any_numpy_dtype).kind == "m":
assert isinstance(result, TimedeltaArray)
else:
- assert isinstance(result, PandasArray)
+ assert isinstance(result, NumpyExtensionArray)
@pytest.mark.parametrize(
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 85fbac186b369..aefed29a490ca 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -34,7 +34,7 @@ def to_numpy_dtypes(dtypes):
return [getattr(np, dt) for dt in dtypes if isinstance(dt, str)]
-class TestPandasDtype:
+class TestNumpyEADtype:
# Passing invalid dtype, both as a string or object, must raise TypeError
# Per issue GH15520
@pytest.mark.parametrize("box", [pd.Timestamp, "pd.Timestamp", list])
diff --git a/pandas/tests/dtypes/test_generic.py b/pandas/tests/dtypes/test_generic.py
index 9a5bd5b1d047b..6f516b0564edc 100644
--- a/pandas/tests/dtypes/test_generic.py
+++ b/pandas/tests/dtypes/test_generic.py
@@ -33,7 +33,7 @@ class TestABCClasses:
"ABCPeriodArray",
pd.arrays.PeriodArray([2000, 2001, 2002], dtype="period[D]"),
),
- ("ABCPandasArray", pd.arrays.PandasArray(np.array([0, 1, 2]))),
+ ("ABCNumpyExtensionArray", pd.arrays.NumpyExtensionArray(np.array([0, 1, 2]))),
("ABCPeriodIndex", period_index),
("ABCCategoricalIndex", categorical_df.index),
("ABCSeries", pd.Series([1, 2, 3])),
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index cadc3a46e0ba4..9931e71c16254 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -1362,7 +1362,8 @@ def test_infer_dtype_period_array(self, klass, skipna):
pd.NaT,
]
)
- # with pd.array this becomes PandasArray which ends up as "unknown-array"
+ # with pd.array this becomes NumpyExtensionArray which ends up
+ # as "unknown-array"
exp = "unknown-array" if klass is pd.array else "mixed"
assert lib.infer_dtype(values, skipna=skipna) == exp
diff --git a/pandas/tests/extension/base/constructors.py b/pandas/tests/extension/base/constructors.py
index 1f85c89ef38be..26716922da8fa 100644
--- a/pandas/tests/extension/base/constructors.py
+++ b/pandas/tests/extension/base/constructors.py
@@ -112,7 +112,7 @@ def test_pandas_array(self, data):
def test_pandas_array_dtype(self, data):
# ... but specifying dtype will override idempotency
result = pd.array(data, dtype=np.dtype(object))
- expected = pd.arrays.PandasArray(np.asarray(data, dtype=object))
+ expected = pd.arrays.NumpyExtensionArray(np.asarray(data, dtype=object))
self.assert_equal(result, expected)
def test_construct_empty_dataframe(self, dtype):
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index 0392597769930..0d4624087fffd 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -12,7 +12,7 @@
classes (if they are relevant for the extension interface for all dtypes), or
be added to the array-specific tests in `pandas/tests/arrays/`.
-Note: we do not bother with base.BaseIndexTests because PandasArray
+Note: we do not bother with base.BaseIndexTests because NumpyExtensionArray
will never be held in an Index.
"""
import numpy as np
@@ -21,19 +21,19 @@
from pandas.core.dtypes.cast import can_hold_element
from pandas.core.dtypes.dtypes import (
ExtensionDtype,
- PandasDtype,
+ NumpyEADtype,
)
import pandas as pd
import pandas._testing as tm
from pandas.api.types import is_object_dtype
-from pandas.core.arrays.numpy_ import PandasArray
+from pandas.core.arrays.numpy_ import NumpyExtensionArray
from pandas.core.internals import blocks
from pandas.tests.extension import base
def _can_hold_element_patched(obj, element) -> bool:
- if isinstance(element, PandasArray):
+ if isinstance(element, NumpyExtensionArray):
element = element.to_numpy()
return can_hold_element(obj, element)
@@ -43,15 +43,15 @@ def _can_hold_element_patched(obj, element) -> bool:
def _assert_attr_equal(attr: str, left, right, obj: str = "Attributes"):
"""
- patch tm.assert_attr_equal so PandasDtype("object") is closed enough to
+ patch tm.assert_attr_equal so NumpyEADtype("object") is closed enough to
np.dtype("object")
"""
if attr == "dtype":
lattr = getattr(left, "dtype", None)
rattr = getattr(right, "dtype", None)
- if isinstance(lattr, PandasDtype) and not isinstance(rattr, PandasDtype):
+ if isinstance(lattr, NumpyEADtype) and not isinstance(rattr, NumpyEADtype):
left = left.astype(lattr.numpy_dtype)
- elif isinstance(rattr, PandasDtype) and not isinstance(lattr, PandasDtype):
+ elif isinstance(rattr, NumpyEADtype) and not isinstance(lattr, NumpyEADtype):
right = right.astype(rattr.numpy_dtype)
orig_assert_attr_equal(attr, left, right, obj)
@@ -59,7 +59,7 @@ def _assert_attr_equal(attr: str, left, right, obj: str = "Attributes"):
@pytest.fixture(params=["float", "object"])
def dtype(request):
- return PandasDtype(np.dtype(request.param))
+ return NumpyEADtype(np.dtype(request.param))
@pytest.fixture
@@ -67,20 +67,20 @@ def allow_in_pandas(monkeypatch):
"""
A monkeypatch to tells pandas to let us in.
- By default, passing a PandasArray to an index / series / frame
- constructor will unbox that PandasArray to an ndarray, and treat
+ By default, passing a NumpyExtensionArray to an index / series / frame
+ constructor will unbox that NumpyExtensionArray to an ndarray, and treat
it as a non-EA column. We don't want people using EAs without
reason.
- The mechanism for this is a check against ABCPandasArray
+ The mechanism for this is a check against ABCNumpyExtensionArray
in each constructor.
But, for testing, we need to allow them in pandas. So we patch
- the _typ of PandasArray, so that we evade the ABCPandasArray
+ the _typ of NumpyExtensionArray, so that we evade the ABCNumpyExtensionArray
check.
"""
with monkeypatch.context() as m:
- m.setattr(PandasArray, "_typ", "extension")
+ m.setattr(NumpyExtensionArray, "_typ", "extension")
m.setattr(blocks, "can_hold_element", _can_hold_element_patched)
m.setattr(tm.asserters, "assert_attr_equal", _assert_attr_equal)
yield
@@ -90,14 +90,14 @@ def allow_in_pandas(monkeypatch):
def data(allow_in_pandas, dtype):
if dtype.numpy_dtype == "object":
return pd.Series([(i,) for i in range(100)]).array
- return PandasArray(np.arange(1, 101, dtype=dtype._dtype))
+ return NumpyExtensionArray(np.arange(1, 101, dtype=dtype._dtype))
@pytest.fixture
def data_missing(allow_in_pandas, dtype):
if dtype.numpy_dtype == "object":
- return PandasArray(np.array([np.nan, (1,)], dtype=object))
- return PandasArray(np.array([np.nan, 1.0]))
+ return NumpyExtensionArray(np.array([np.nan, (1,)], dtype=object))
+ return NumpyExtensionArray(np.array([np.nan, 1.0]))
@pytest.fixture
@@ -123,8 +123,8 @@ def data_for_sorting(allow_in_pandas, dtype):
if dtype.numpy_dtype == "object":
# Use an empty tuple for first element, then remove,
# to disable np.array's shape inference.
- return PandasArray(np.array([(), (2,), (3,), (1,)], dtype=object)[1:])
- return PandasArray(np.array([1, 2, 0]))
+ return NumpyExtensionArray(np.array([(), (2,), (3,), (1,)], dtype=object)[1:])
+ return NumpyExtensionArray(np.array([1, 2, 0]))
@pytest.fixture
@@ -135,8 +135,8 @@ def data_missing_for_sorting(allow_in_pandas, dtype):
A < B and NA missing.
"""
if dtype.numpy_dtype == "object":
- return PandasArray(np.array([(1,), np.nan, (0,)], dtype=object))
- return PandasArray(np.array([1, np.nan, 0]))
+ return NumpyExtensionArray(np.array([(1,), np.nan, (0,)], dtype=object))
+ return NumpyExtensionArray(np.array([1, np.nan, 0]))
@pytest.fixture
@@ -151,7 +151,7 @@ def data_for_grouping(allow_in_pandas, dtype):
a, b, c = (1,), (2,), (3,)
else:
a, b, c = np.arange(3)
- return PandasArray(
+ return NumpyExtensionArray(
np.array([b, b, np.nan, np.nan, a, a, b, c], dtype=dtype.numpy_dtype)
)
@@ -159,7 +159,7 @@ def data_for_grouping(allow_in_pandas, dtype):
@pytest.fixture
def skip_numpy_object(dtype, request):
"""
- Tests for PandasArray with nested data. Users typically won't create
+ Tests for NumpyExtensionArray with nested data. Users typically won't create
these objects via `pd.array`, but they can show up through `.array`
on a Series with nested data. Many of the base tests fail, as they aren't
appropriate for nested data.
@@ -179,13 +179,13 @@ class BaseNumPyTests:
@classmethod
def assert_series_equal(cls, left, right, *args, **kwargs):
# base class tests hard-code expected values with numpy dtypes,
- # whereas we generally want the corresponding PandasDtype
+ # whereas we generally want the corresponding NumpyEADtype
if (
isinstance(right, pd.Series)
and not isinstance(right.dtype, ExtensionDtype)
- and isinstance(left.dtype, PandasDtype)
+ and isinstance(left.dtype, NumpyEADtype)
):
- right = right.astype(PandasDtype(right.dtype))
+ right = right.astype(NumpyEADtype(right.dtype))
return tm.assert_series_equal(left, right, *args, **kwargs)
@@ -210,7 +210,7 @@ def test_check_dtype(self, data, request):
if data.dtype.numpy_dtype == "object":
request.node.add_marker(
pytest.mark.xfail(
- reason=f"PandasArray expectedly clashes with a "
+ reason=f"NumpyExtensionArray expectedly clashes with a "
f"NumPy name: {data.dtype.numpy_dtype}"
)
)
@@ -219,7 +219,7 @@ def test_check_dtype(self, data, request):
def test_is_not_object_type(self, dtype, request):
if dtype.numpy_dtype == "object":
# Different from BaseDtypeTests.test_is_not_object_type
- # because PandasDtype(object) is an object type
+ # because NumpyEADtype(object) is an object type
assert is_object_dtype(dtype)
else:
super().test_is_not_object_type(dtype)
@@ -264,7 +264,7 @@ def test_searchsorted(self, data_for_sorting, as_series):
# Test setup fails.
super().test_searchsorted(data_for_sorting, as_series)
- @pytest.mark.xfail(reason="PandasArray.diff may fail on dtype")
+ @pytest.mark.xfail(reason="NumpyExtensionArray.diff may fail on dtype")
def test_diff(self, data, periods):
return super().test_diff(data, periods)
@@ -277,7 +277,7 @@ def test_insert(self, data, request):
@skip_nested
def test_insert_invalid(self, data, invalid_scalar):
- # PandasArray[object] can hold anything, so skip
+ # NumpyExtensionArray[object] can hold anything, so skip
super().test_insert_invalid(data, invalid_scalar)
@@ -377,7 +377,7 @@ def test_setitem_scalar_key_sequence_raise(self, data):
# Failed: DID NOT RAISE <class 'ValueError'>
super().test_setitem_scalar_key_sequence_raise(data)
- # TODO: there is some issue with PandasArray, therefore,
+ # TODO: there is some issue with NumpyExtensionArray, therefore,
# skip the setitem test for now, and fix it later (GH 31446)
@skip_nested
@@ -432,7 +432,7 @@ def test_setitem_with_expansion_dataframe_column(self, data, full_indexer):
key = full_indexer(df)
result.loc[key, "data"] = df["data"]
- # base class method has expected = df; PandasArray behaves oddly because
+ # base class method has expected = df; NumpyExtensionArray behaves oddly because
# we patch _typ for these tests.
if data.dtype.numpy_dtype != object:
if not isinstance(key, slice) or key != slice(None):
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index 9005798d66d17..722f61de3f43a 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -1248,7 +1248,7 @@ def test_iloc_setitem_nullable_2d_values(self):
df.loc[:] = df.values[:, ::-1]
tm.assert_frame_equal(df, orig)
- df.loc[:] = pd.core.arrays.PandasArray(df.values[:, ::-1])
+ df.loc[:] = pd.core.arrays.NumpyExtensionArray(df.values[:, ::-1])
tm.assert_frame_equal(df, orig)
df.iloc[:] = df.iloc[:, :]
diff --git a/pandas/tests/frame/methods/test_to_dict_of_blocks.py b/pandas/tests/frame/methods/test_to_dict_of_blocks.py
index cc4860beea491..906e74230a762 100644
--- a/pandas/tests/frame/methods/test_to_dict_of_blocks.py
+++ b/pandas/tests/frame/methods/test_to_dict_of_blocks.py
@@ -8,7 +8,7 @@
MultiIndex,
)
import pandas._testing as tm
-from pandas.core.arrays import PandasArray
+from pandas.core.arrays import NumpyExtensionArray
pytestmark = td.skip_array_manager_invalid_test
@@ -55,7 +55,7 @@ def test_to_dict_of_blocks_item_cache(request, using_copy_on_write):
request.node.add_marker(pytest.mark.xfail(reason="CoW - not yet implemented"))
# Calling to_dict_of_blocks should not poison item_cache
df = DataFrame({"a": [1, 2, 3, 4], "b": ["a", "b", "c", "d"]})
- df["c"] = PandasArray(np.array([1, 2, None, 3], dtype=object))
+ df["c"] = NumpyExtensionArray(np.array([1, 2, None, 3], dtype=object))
mgr = df._mgr
assert len(mgr.blocks) == 3 # i.e. not consolidated
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index 00df0530fe70f..d8e46d4b606c6 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -378,7 +378,7 @@ def test_strange_column_corruption_issue(self, using_copy_on_write):
assert first == second == 0
def test_constructor_no_pandas_array(self):
- # Ensure that PandasArray isn't allowed inside Series
+ # Ensure that NumpyExtensionArray isn't allowed inside Series
# See https://github.com/pandas-dev/pandas/issues/23995 for more.
arr = Series([1, 2, 3]).array
result = DataFrame({"A": arr})
@@ -390,12 +390,14 @@ def test_constructor_no_pandas_array(self):
def test_add_column_with_pandas_array(self):
# GH 26390
df = DataFrame({"a": [1, 2, 3, 4], "b": ["a", "b", "c", "d"]})
- df["c"] = pd.arrays.PandasArray(np.array([1, 2, None, 3], dtype=object))
+ df["c"] = pd.arrays.NumpyExtensionArray(np.array([1, 2, None, 3], dtype=object))
df2 = DataFrame(
{
"a": [1, 2, 3, 4],
"b": ["a", "b", "c", "d"],
- "c": pd.arrays.PandasArray(np.array([1, 2, None, 3], dtype=object)),
+ "c": pd.arrays.NumpyExtensionArray(
+ np.array([1, 2, None, 3], dtype=object)
+ ),
}
)
assert type(df["c"]._mgr.blocks[0]) == NumpyBlock
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 8a4d1624fcb30..c71a0dd5f92b2 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -30,7 +30,7 @@
from pandas.core.dtypes.dtypes import (
DatetimeTZDtype,
IntervalDtype,
- PandasDtype,
+ NumpyEADtype,
PeriodDtype,
)
@@ -188,7 +188,7 @@ def test_datetimelike_values_with_object_dtype(self, kind, frame_or_series):
assert obj._mgr.arrays[0].dtype == object
assert isinstance(obj._mgr.arrays[0].ravel()[0], scalar_type)
- obj = frame_or_series(frame_or_series(arr), dtype=PandasDtype(object))
+ obj = frame_or_series(frame_or_series(arr), dtype=NumpyEADtype(object))
assert obj._mgr.arrays[0].dtype == object
assert isinstance(obj._mgr.arrays[0].ravel()[0], scalar_type)
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index 47e7092743b00..8a4b5fd5f2e01 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -1407,23 +1407,23 @@ def test_block_shape():
def test_make_block_no_pandas_array(block_maker):
# https://github.com/pandas-dev/pandas/pull/24866
- arr = pd.arrays.PandasArray(np.array([1, 2]))
+ arr = pd.arrays.NumpyExtensionArray(np.array([1, 2]))
- # PandasArray, no dtype
+ # NumpyExtensionArray, no dtype
result = block_maker(arr, BlockPlacement(slice(len(arr))), ndim=arr.ndim)
assert result.dtype.kind in ["i", "u"]
if block_maker is make_block:
- # new_block requires caller to unwrap PandasArray
+ # new_block requires caller to unwrap NumpyExtensionArray
assert result.is_extension is False
- # PandasArray, PandasDtype
+ # NumpyExtensionArray, NumpyEADtype
result = block_maker(arr, slice(len(arr)), dtype=arr.dtype, ndim=arr.ndim)
assert result.dtype.kind in ["i", "u"]
assert result.is_extension is False
# new_block no longer taked dtype keyword
- # ndarray, PandasDtype
+ # ndarray, NumpyEADtype
result = block_maker(
arr.to_numpy(), slice(len(arr)), dtype=arr.dtype, ndim=arr.ndim
)
diff --git a/pandas/tests/series/methods/test_astype.py b/pandas/tests/series/methods/test_astype.py
index d72c8599dfe5e..524bf1b310d38 100644
--- a/pandas/tests/series/methods/test_astype.py
+++ b/pandas/tests/series/methods/test_astype.py
@@ -147,8 +147,8 @@ def test_astype_float_to_period(self):
def test_astype_no_pandas_dtype(self):
# https://github.com/pandas-dev/pandas/pull/24866
ser = Series([1, 2], dtype="int64")
- # Don't have PandasDtype in the public API, so we use `.array.dtype`,
- # which is a PandasDtype.
+ # Don't have NumpyEADtype in the public API, so we use `.array.dtype`,
+ # which is a NumpyEADtype.
result = ser.astype(ser.array.dtype)
tm.assert_series_equal(result, ser)
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index c7536273862c0..9540d7a014409 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -1918,7 +1918,7 @@ def test_constructor_with_pandas_dtype(self):
# going through 2D->1D path
vals = [(1,), (2,), (3,)]
ser = Series(vals)
- dtype = ser.array.dtype # PandasDtype
+ dtype = ser.array.dtype # NumpyEADtype
ser2 = Series(vals, dtype=dtype)
tm.assert_series_equal(ser, ser2)
- [x] closes #53694
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
I'm open to suggestions on names. | https://api.github.com/repos/pandas-dev/pandas/pulls/54101 | 2023-07-12T23:12:34Z | 2023-07-18T20:32:53Z | 2023-07-18T20:32:53Z | 2023-07-18T20:34:06Z |
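Downstream code that has to run on pandas versions from both sides of this rename commonly resolves whichever name is available. This is a hedged compatibility sketch, not part of the PR itself; the alias name is the reader's choice:

```python
import numpy as np
import pandas as pd

# PandasArray was renamed to NumpyExtensionArray in pandas 2.1 (GH 53694).
# Pick whichever name the installed pandas provides.
NumpyExtensionArray = getattr(
    pd.arrays, "NumpyExtensionArray", getattr(pd.arrays, "PandasArray", None)
)

# The class is a thin ExtensionArray wrapper around a 1-D ndarray.
arr = NumpyExtensionArray(np.array([1, 2, 3]))
print(type(arr).__name__, list(arr))
```

On pandas ≥ 2.1 this prints the new class name; on older versions it falls back to `PandasArray` with identical behavior.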
DOC: point out the limitation of precision while doing serialization | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 42cd74a0ca781..cfa0320c6fdd7 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2413,7 +2413,8 @@ def to_json(
the default is 'epoch'.
double_precision : int, default 10
The number of decimal places to use when encoding
- floating point values.
+ floating point values. The possible maximal value is 15.
+ Passing double_precision greater than 15 will raise a ValueError.
force_ascii : bool, default True
Force encoded string to be ASCII.
date_unit : str, default 'ms' (milliseconds)
| - [x] closes #38437
Updated the docstring for `to_json`, pointing out that the maximum possible value for `double_precision` is 15.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54100 | 2023-07-12T21:50:24Z | 2023-07-18T16:00:20Z | 2023-07-18T16:00:20Z | 2023-07-18T16:00:21Z |
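The documented limit can be observed directly — a minimal sketch (the exact error wording comes from the underlying JSON encoder and may differ between pandas versions):

```python
import pandas as pd

df = pd.DataFrame({"x": [0.123456789012345]})

# values up to the documented maximum of 15 are accepted
ok = df.to_json(double_precision=15)
print(ok)

# anything greater raises ValueError
try:
    df.to_json(double_precision=16)
    raised = False
except ValueError as e:
    raised = True
    print(f"ValueError: {e}")
```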
BUG: groupby.resample(kind="period") raising AttributeError | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 7450fc6fdc1da..dd99d5031e724 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -511,11 +511,11 @@ Groupby/resample/rolling
- Bug in :meth:`GroupBy.groups` with a datetime key in conjunction with another key produced incorrect number of group keys (:issue:`51158`)
- Bug in :meth:`GroupBy.quantile` may implicitly sort the result index with ``sort=False`` (:issue:`53009`)
- Bug in :meth:`GroupBy.var` failing to raise ``TypeError`` when called with datetime64, timedelta64 or :class:`PeriodDtype` values (:issue:`52128`, :issue:`53045`)
+- Bug in :meth:`DataFrameGroupby.resample` with ``kind="period"`` raising ``AttributeError`` (:issue:`24103`)
- Bug in :meth:`Resampler.ohlc` with empty object returning a :class:`Series` instead of empty :class:`DataFrame` (:issue:`42902`)
- Bug in :meth:`SeriesGroupBy.nth` and :meth:`DataFrameGroupBy.nth` after performing column selection when using ``dropna="any"`` or ``dropna="all"`` would not subset columns (:issue:`53518`)
- Bug in :meth:`SeriesGroupBy.nth` and :meth:`DataFrameGroupBy.nth` raised after performing column selection when using ``dropna="any"`` or ``dropna="all"`` resulted in rows being dropped (:issue:`53518`)
- Bug in :meth:`SeriesGroupBy.sum` and :meth:`DataFrameGroupby.sum` summing ``np.inf + np.inf`` and ``(-np.inf) + (-np.inf)`` to ``np.nan`` (:issue:`53606`)
--
Reshaping
^^^^^^^^^
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 6a4e33229690b..e4cebc01cccdd 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -1797,7 +1797,13 @@ def _wrap_result(self, result):
# we may have a different kind that we were asked originally
# convert if needed
if self.kind == "period" and not isinstance(result.index, PeriodIndex):
- result.index = result.index.to_period(self.freq)
+ if isinstance(result.index, MultiIndex):
+ # GH 24103 - e.g. groupby resample
+ if not isinstance(result.index.levels[-1], PeriodIndex):
+ new_level = result.index.levels[-1].to_period(self.freq)
+ result.index = result.index.set_levels(new_level, level=-1)
+ else:
+ result.index = result.index.to_period(self.freq)
return result
diff --git a/pandas/tests/resample/test_resampler_grouper.py b/pandas/tests/resample/test_resampler_grouper.py
index df14a5bc374c6..23b4f4bcf01d1 100644
--- a/pandas/tests/resample/test_resampler_grouper.py
+++ b/pandas/tests/resample/test_resampler_grouper.py
@@ -662,3 +662,29 @@ def test_groupby_resample_on_index_with_list_of_keys_missing_column():
)
with pytest.raises(KeyError, match="Columns not found"):
df.groupby("group").resample("2D")[["val_not_in_dataframe"]].mean()
+
+
+@pytest.mark.parametrize("kind", ["datetime", "period"])
+def test_groupby_resample_kind(kind):
+ # GH 24103
+ df = DataFrame(
+ {
+ "datetime": pd.to_datetime(
+ ["20181101 1100", "20181101 1200", "20181102 1300", "20181102 1400"]
+ ),
+ "group": ["A", "B", "A", "B"],
+ "value": [1, 2, 3, 4],
+ }
+ )
+ df = df.set_index("datetime")
+ result = df.groupby("group")["value"].resample("D", kind=kind).last()
+
+ dt_level = pd.DatetimeIndex(["2018-11-01", "2018-11-02"])
+ if kind == "period":
+ dt_level = dt_level.to_period(freq="D")
+ expected_index = pd.MultiIndex.from_product(
+ [["A", "B"], dt_level],
+ names=["group", "datetime"],
+ )
+ expected = Series([1, 3, 2, 4], index=expected_index, name="value")
+ tm.assert_series_equal(result, expected)
| - [x] closes #24103
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.1.0.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54099 | 2023-07-12T21:42:39Z | 2023-07-12T23:15:22Z | 2023-07-12T23:15:22Z | 2023-07-25T00:15:55Z |
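A minimal sketch of the grouped-resample shape this PR exercises. The `kind="period"` keyword itself is omitted here (it is deprecated in later pandas versions); the point of the fix is that with `kind="period"` the last level of the resulting MultiIndex becomes a `PeriodIndex` instead of raising `AttributeError`:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "datetime": pd.to_datetime(
            ["20181101 1100", "20181101 1200", "20181102 1300", "20181102 1400"]
        ),
        "group": ["A", "B", "A", "B"],
        "value": [1, 2, 3, 4],
    }
).set_index("datetime")

# groupby + resample yields a MultiIndex of (group key, resampled bin)
result = df.groupby("group")["value"].resample("D").last()
print(result)
```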
DEV: Improves Error msg when constructing a Timedelta | diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index 28aeb854638b6..ffa9a67542e21 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -1843,7 +1843,10 @@ class Timedelta(_Timedelta):
NPY_DATETIMEUNIT.NPY_FR_W,
NPY_DATETIMEUNIT.NPY_FR_GENERIC]):
err = npy_unit_to_abbrev(reso)
- raise ValueError(f" cannot construct a Timedelta from a unit {err}")
+ raise ValueError(
+ f"Unit {err} is not supported. "
+ "Only unambiguous timedelta values durations are supported. "
+ "Allowed units are 'W', 'D', 'h', 'm', 's', 'ms', 'us', 'ns'")
new_reso = get_supported_reso(reso)
if reso != NPY_DATETIMEUNIT.NPY_FR_GENERIC:
diff --git a/pandas/tests/tslibs/test_timedeltas.py b/pandas/tests/tslibs/test_timedeltas.py
index 2308aa27b60ab..4784a6d0d600d 100644
--- a/pandas/tests/tslibs/test_timedeltas.py
+++ b/pandas/tests/tslibs/test_timedeltas.py
@@ -76,7 +76,10 @@ def test_delta_to_nanoseconds_td64_MY_raises():
def test_unsupported_td64_unit_raises(unit):
# GH 52806
with pytest.raises(
- ValueError, match=f"cannot construct a Timedelta from a unit {unit}"
+ ValueError,
+ match=f"Unit {unit} is not supported. "
+ "Only unambiguous timedelta values durations are supported. "
+ "Allowed units are 'W', 'D', 'h', 'm', 's', 'ms', 'us', 'ns'",
):
Timedelta(np.timedelta64(1, unit))
| - [x] closes #54059
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
| https://api.github.com/repos/pandas-dev/pandas/pulls/54098 | 2023-07-12T21:10:35Z | 2023-07-17T17:40:56Z | 2023-07-17T17:40:56Z | 2023-07-17T17:41:28Z |
BUG: Timestamp unit with time and not date | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 7450fc6fdc1da..57b2bf910e2e5 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -393,6 +393,7 @@ Datetimelike
- Bug in :meth:`Timestamp.round` with values close to the implementation bounds returning incorrect results instead of raising ``OutOfBoundsDatetime`` (:issue:`51494`)
- Bug in :meth:`arrays.DatetimeArray.map` and :meth:`DatetimeIndex.map`, where the supplied callable operated array-wise instead of element-wise (:issue:`51977`)
- Bug in constructing a :class:`Series` or :class:`DataFrame` from a datetime or timedelta scalar always inferring nanosecond resolution instead of inferring from the input (:issue:`52212`)
+- Bug in constructing a :class:`Timestamp` from a string representing a time without a date inferring an incorrect unit (:issue:`54097`)
- Bug in parsing datetime strings with weekday but no day e.g. "2023 Sept Thu" incorrectly raising ``AttributeError`` instead of ``ValueError`` (:issue:`52659`)
Timedelta
diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 9173b7e8b1449..3643c840a50a6 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -42,10 +42,7 @@ cnp.import_array()
from decimal import InvalidOperation
-from dateutil.parser import (
- DEFAULTPARSER,
- parse as du_parse,
-)
+from dateutil.parser import DEFAULTPARSER
from dateutil.tz import (
tzlocal as _dateutil_tzlocal,
tzoffset,
@@ -309,9 +306,12 @@ cdef datetime parse_datetime_string(
raise ValueError(f'Given date string "{date_string}" not likely a datetime')
if _does_string_look_like_time(date_string):
+ # time without date e.g. "01:01:01.111"
# use current datetime as default, not pass _DEFAULT_DATETIME
- dt = du_parse(date_string, dayfirst=dayfirst,
- yearfirst=yearfirst)
+ default = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
+ dt = dateutil_parse(date_string, default=default,
+ dayfirst=dayfirst, yearfirst=yearfirst,
+ ignoretz=False, out_bestunit=out_bestunit)
return dt
dt = _parse_delimited_date(date_string, dayfirst, out_bestunit)
diff --git a/pandas/tests/scalar/timestamp/test_constructors.py b/pandas/tests/scalar/timestamp/test_constructors.py
index 5fca577ff28d1..198b6feea5f4e 100644
--- a/pandas/tests/scalar/timestamp/test_constructors.py
+++ b/pandas/tests/scalar/timestamp/test_constructors.py
@@ -25,6 +25,11 @@
class TestTimestampConstructors:
+ def test_construct_from_time_unit(self):
+ # GH#54097 only passing a time component, no date
+ ts = Timestamp("01:01:01.111")
+ assert ts.unit == "ms"
+
def test_weekday_but_no_day_raises(self):
# GH#52659
msg = "Parsing datetimes with weekday but no day information is not supported"
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54097 | 2023-07-12T21:05:06Z | 2023-07-13T13:12:41Z | 2023-07-13T13:12:41Z | 2023-07-13T13:52:32Z |
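A sketch of the parsing path being fixed: a time-only string gets today's date filled in for the date part, and with this change the inferred resolution follows the string (`ts.unit == "ms"` for a millisecond-precision input — the `unit` attribute only exists on pandas 2.x, so it is not asserted here):

```python
import pandas as pd

# A time-only string gets today's date filled in for the date component
ts = pd.Timestamp("01:01:01.111")
print(ts.hour, ts.minute, ts.second, ts.microsecond)
```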
CI: Use better pip check to uninstall existing pandas | diff --git a/.github/actions/build_pandas/action.yml b/.github/actions/build_pandas/action.yml
index 2d6b0aada4abd..a65718ba045a0 100644
--- a/.github/actions/build_pandas/action.yml
+++ b/.github/actions/build_pandas/action.yml
@@ -16,8 +16,8 @@ runs:
- name: Uninstall existing Pandas installation
run: |
- if pip list | grep -q ^pandas; then
- pip uninstall -y pandas || true
+ if pip show pandas 1>/dev/null; then
+ pip uninstall -y pandas
fi
shell: bash -el {0}
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/54096 | 2023-07-12T20:24:50Z | 2023-07-13T16:08:11Z | 2023-07-13T16:08:11Z | 2023-07-13T16:08:15Z |
CLN: Replace confusing brackets with backticks | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 5783842e9ddef..e0340f99b92e1 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -470,12 +470,12 @@ def isin(comps: ListLike, values: ListLike) -> npt.NDArray[np.bool_]:
if not is_list_like(comps):
raise TypeError(
"only list-like objects are allowed to be passed "
- f"to isin(), you passed a [{type(comps).__name__}]"
+ f"to isin(), you passed a `{type(comps).__name__}`"
)
if not is_list_like(values):
raise TypeError(
"only list-like objects are allowed to be passed "
- f"to isin(), you passed a [{type(values).__name__}]"
+ f"to isin(), you passed a `{type(values).__name__}`"
)
if not isinstance(values, (ABCIndex, ABCSeries, ABCExtensionArray, np.ndarray)):
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 8898379689cfd..46e2b64cb60c6 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2581,7 +2581,7 @@ def isin(self, values) -> npt.NDArray[np.bool_]:
values_type = type(values).__name__
raise TypeError(
"only list-like objects are allowed to be passed "
- f"to isin(), you passed a [{values_type}]"
+ f"to isin(), you passed a `{values_type}`"
)
values = sanitize_array(values, None, None)
null_mask = np.asarray(isna(values))
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 35960c707d3bd..e6e1363603e09 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -158,7 +158,7 @@ def test_simple_cmp_ops(self, cmp_op, lhs, rhs, engine, parser):
[
r"only list-like( or dict-like)? objects are allowed to be "
r"passed to (DataFrame\.)?isin\(\), you passed a "
- r"(\[|')bool(\]|')",
+ r"(`|')bool(`|')",
"argument of type 'bool' is not iterable",
]
)
@@ -203,7 +203,7 @@ def test_compound_invert_op(self, op, lhs, rhs, request, engine, parser):
[
r"only list-like( or dict-like)? objects are allowed to be "
r"passed to (DataFrame\.)?isin\(\), you passed a "
- r"(\[|')float(\]|')",
+ r"(`|')float(`|')",
"argument of type 'float' is not iterable",
]
)
diff --git a/pandas/tests/series/methods/test_isin.py b/pandas/tests/series/methods/test_isin.py
index 3e4857b7abf38..4ea2fbf51afc9 100644
--- a/pandas/tests/series/methods/test_isin.py
+++ b/pandas/tests/series/methods/test_isin.py
@@ -34,7 +34,7 @@ def test_isin_with_string_scalar(self):
s = Series(["A", "B", "C", "a", "B", "B", "A", "C"])
msg = (
r"only list-like objects are allowed to be passed to isin\(\), "
- r"you passed a \[str\]"
+ r"you passed a `str`"
)
with pytest.raises(TypeError, match=msg):
s.isin("a")
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 8c26bbd209a6a..34a0bee877664 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -888,7 +888,7 @@ class TestIsin:
def test_invalid(self):
msg = (
r"only list-like objects are allowed to be passed to isin\(\), "
- r"you passed a \[int\]"
+ r"you passed a `int`"
)
with pytest.raises(TypeError, match=msg):
algos.isin(1, 1)
| There are some errors which read like:
> only list-like objects are allowed to be passed to isin(), you
> passed a [str]
(note how the type `str` is enclosed in square brackets)
This is potentially confusing, since some languages use notation like `[TYPE]` to denote a list of TYPE objects. So the user could read the error message saying they can't use `[str]`, but interpret `[str]` to mean "a list of str objects".
This commit removes the possibility of confusion by using backticks instead of square brackets. | https://api.github.com/repos/pandas-dev/pandas/pulls/54091 | 2023-07-11T20:45:07Z | 2023-07-12T16:30:04Z | 2023-07-12T16:30:04Z | 2023-07-12T16:31:16Z |
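The error in question is easy to trigger — a sketch (after this change, the type name in the message is wrapped in backticks rather than square brackets):

```python
import pandas as pd

s = pd.Series(["A", "B", "C"])

# a bare scalar is rejected — wrap it in a list instead
try:
    s.isin("A")
    raised = False
except TypeError as e:
    raised = True
    print(f"TypeError: {e}")

matches = s.isin(["A"])
print(matches.tolist())  # [True, False, False]
```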
DOC: Update README.md to use URL in documentation link. | diff --git a/README.md b/README.md
index 1bff2941f86ca..a7f38b0fde5e8 100644
--- a/README.md
+++ b/README.md
@@ -156,7 +156,7 @@ See the full instructions for [installing from source](https://pandas.pydata.org
[BSD 3](LICENSE)
## Documentation
-The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable
+The official documentation is hosted on [PyData.org](https://pandas.pydata.org/pandas-docs/stable/).
## Background
Work on ``pandas`` started at [AQR](https://www.aqr.com/) (a quantitative hedge fund) in 2008 and
| - [x] closes #54089 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54090 | 2023-07-11T20:21:28Z | 2023-07-11T21:16:12Z | 2023-07-11T21:16:12Z | 2023-07-11T21:16:18Z |
BUG: merge cross with Series | diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 0e4a64d7e6c5f..a904f4d9fbe13 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -145,10 +145,12 @@ def merge(
indicator: str | bool = False,
validate: str | None = None,
) -> DataFrame:
+ left_df = _validate_operand(left)
+ right_df = _validate_operand(right)
if how == "cross":
return _cross_merge(
- left,
- right,
+ left_df,
+ right_df,
on=on,
left_on=left_on,
right_on=right_on,
@@ -162,8 +164,8 @@ def merge(
)
else:
op = _MergeOperation(
- left,
- right,
+ left_df,
+ right_df,
how=how,
on=on,
left_on=left_on,
@@ -179,8 +181,8 @@ def merge(
def _cross_merge(
- left: DataFrame | Series,
- right: DataFrame | Series,
+ left: DataFrame,
+ right: DataFrame,
on: IndexLabel | None = None,
left_on: IndexLabel | None = None,
right_on: IndexLabel | None = None,
diff --git a/pandas/tests/reshape/merge/test_merge_cross.py b/pandas/tests/reshape/merge/test_merge_cross.py
index 7e14b515836cf..14f9036e43fce 100644
--- a/pandas/tests/reshape/merge/test_merge_cross.py
+++ b/pandas/tests/reshape/merge/test_merge_cross.py
@@ -1,6 +1,9 @@
import pytest
-from pandas import DataFrame
+from pandas import (
+ DataFrame,
+ Series,
+)
import pandas._testing as tm
from pandas.core.reshape.merge import (
MergeError,
@@ -96,3 +99,13 @@ def test_join_cross_error_reporting():
)
with pytest.raises(MergeError, match=msg):
left.join(right, how="cross", on="a")
+
+
+def test_merge_cross_series():
+ # GH#54055
+ ls = Series([1, 2, 3, 4], index=[1, 2, 3, 4], name="left")
+ rs = Series([3, 4, 5, 6], index=[3, 4, 5, 6], name="right")
+ res = merge(ls, rs, how="cross")
+
+ expected = merge(ls.to_frame(), rs.to_frame(), how="cross")
+ tm.assert_frame_equal(res, expected)
| - [x] closes #54055 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54087 | 2023-07-11T19:18:46Z | 2023-07-11T21:17:59Z | 2023-07-11T21:17:59Z | 2023-07-11T21:36:17Z |
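With this fix, passing `Series` objects straight to `merge(..., how="cross")` works; the equivalent frame form below shows the same cartesian-product semantics and also runs on versions that predate the fix:

```python
import pandas as pd

ls = pd.Series([1, 2], name="left")
rs = pd.Series([3, 4], name="right")

# cartesian product of the two inputs: every left row paired with every right row
result = pd.merge(ls.to_frame(), rs.to_frame(), how="cross")
print(result)
```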
DOC: Supress setups less in user guide | diff --git a/doc/source/user_guide/advanced.rst b/doc/source/user_guide/advanced.rst
index d76c7e2bf3b03..41b0c98e339da 100644
--- a/doc/source/user_guide/advanced.rst
+++ b/doc/source/user_guide/advanced.rst
@@ -470,11 +470,6 @@ Compare the above with the result using ``drop_level=True`` (the default value).
df.xs("one", level="second", axis=1, drop_level=True)
-.. ipython:: python
- :suppress:
-
- df = df.T
-
.. _advanced.advanced_reindex:
Advanced reindexing and alignment
diff --git a/doc/source/user_guide/basics.rst b/doc/source/user_guide/basics.rst
index 389a2d23c466d..06e52d8713409 100644
--- a/doc/source/user_guide/basics.rst
+++ b/doc/source/user_guide/basics.rst
@@ -220,11 +220,6 @@ either match on the *index* or *columns* via the **axis** keyword:
df.sub(column, axis="index")
df.sub(column, axis=0)
-.. ipython:: python
- :suppress:
-
- df_orig = df
-
Furthermore you can align a level of a MultiIndexed DataFrame with a Series.
.. ipython:: python
@@ -272,13 +267,9 @@ case the result will be NaN (you can later replace NaN with some other value
using ``fillna`` if you wish).
.. ipython:: python
- :suppress:
df2 = df.copy()
df2["three"]["a"] = 1.0
-
-.. ipython:: python
-
df
df2
df + df2
@@ -936,7 +927,6 @@ Another useful feature is the ability to pass Series methods to carry out some
Series operation on each column or row:
.. ipython:: python
- :suppress:
tsdf = pd.DataFrame(
np.random.randn(10, 3),
@@ -944,9 +934,6 @@ Series operation on each column or row:
index=pd.date_range("1/1/2000", periods=10),
)
tsdf.iloc[3:7] = np.nan
-
-.. ipython:: python
-
tsdf
tsdf.apply(pd.Series.interpolate)
@@ -1170,13 +1157,9 @@ another array or value), the methods :meth:`~DataFrame.map` on DataFrame
and analogously :meth:`~Series.map` on Series accept any Python function taking
a single value and returning a single value. For example:
-.. ipython:: python
- :suppress:
-
- df4 = df_orig.copy()
-
.. ipython:: python
+ df4 = df.copy()
df4
def f(x):
@@ -1280,14 +1263,9 @@ is a common enough operation that the :meth:`~DataFrame.reindex_like` method is
available to make this simpler:
.. ipython:: python
- :suppress:
df2 = df.reindex(["a", "b", "c"], columns=["one", "two"])
df3 = df2 - df2.mean()
-
-
-.. ipython:: python
-
df2
df3
df.reindex_like(df2)
diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index 7ddce18d8a259..482e3fe91ca09 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -271,7 +271,6 @@ the length of the ``groups`` dict, so it is largely just a convenience:
``GroupBy`` will tab complete column names (and other attributes):
.. ipython:: python
- :suppress:
n = 10
weight = np.random.normal(166, 20, size=n)
@@ -281,9 +280,6 @@ the length of the ``groups`` dict, so it is largely just a convenience:
df = pd.DataFrame(
{"height": height, "weight": weight, "gender": gender}, index=time
)
-
-.. ipython:: python
-
df
gb = df.groupby("gender")
@@ -334,19 +330,14 @@ number:
Grouping with multiple levels is supported.
.. ipython:: python
- :suppress:
arrays = [
["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
["doo", "doo", "bee", "bee", "bop", "bop", "bop", "bop"],
["one", "two", "one", "two", "one", "two", "one", "two"],
]
- tuples = list(zip(*arrays))
- index = pd.MultiIndex.from_tuples(tuples, names=["first", "second", "third"])
+ index = pd.MultiIndex.from_arrays(arrays, names=["first", "second", "third"])
s = pd.Series(np.random.randn(8), index=index)
-
-.. ipython:: python
-
s
s.groupby(level=["first", "second"]).sum()
@@ -963,7 +954,6 @@ match the shape of the input array.
Another common data transform is to replace missing data with the group mean.
.. ipython:: python
- :suppress:
cols = ["A", "B", "C"]
values = np.random.randn(1000, 3)
@@ -971,9 +961,6 @@ Another common data transform is to replace missing data with the group mean.
values[np.random.randint(0, 1000, 50), 1] = np.nan
values[np.random.randint(0, 1000, 200), 2] = np.nan
data_df = pd.DataFrame(values, columns=cols)
-
-.. ipython:: python
-
data_df
countries = np.array(["US", "UK", "GR", "JP"])
diff --git a/doc/source/user_guide/indexing.rst b/doc/source/user_guide/indexing.rst
index 77eee8e58a5e8..e785376ab10a4 100644
--- a/doc/source/user_guide/indexing.rst
+++ b/doc/source/user_guide/indexing.rst
@@ -1029,14 +1029,10 @@ input data shape. ``where`` is used under the hood as the implementation.
The code below is equivalent to ``df.where(df < 0)``.
.. ipython:: python
- :suppress:
dates = pd.date_range('1/1/2000', periods=8)
df = pd.DataFrame(np.random.randn(8, 4),
index=dates, columns=['A', 'B', 'C', 'D'])
-
-.. ipython:: python
-
df[df < 0]
In addition, ``where`` takes an optional ``other`` argument for replacement of
@@ -1431,7 +1427,6 @@ This plot was created using a ``DataFrame`` with 3 columns each containing
floating point values generated using ``numpy.random.randn()``.
.. ipython:: python
- :suppress:
df = pd.DataFrame(np.random.randn(8, 4),
index=dates, columns=['A', 'B', 'C', 'D'])
@@ -1694,15 +1689,11 @@ DataFrame has a :meth:`~DataFrame.set_index` method which takes a column name
To create a new, re-indexed DataFrame:
.. ipython:: python
- :suppress:
data = pd.DataFrame({'a': ['bar', 'bar', 'foo', 'foo'],
'b': ['one', 'two', 'one', 'two'],
'c': ['z', 'y', 'x', 'w'],
'd': [1., 2., 3, 4]})
-
-.. ipython:: python
-
data
indexed1 = data.set_index('c')
indexed1
@@ -1812,11 +1803,6 @@ But it turns out that assigning to the product of chained indexing has
inherently unpredictable results. To see this, think about how the Python
interpreter executes this code:
-.. ipython:: python
- :suppress:
-
- value = None
-
.. code-block:: python
dfmi.loc[:, ('one', 'second')] = value
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 4d4b9e086e9e5..daf4c7b54331b 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -704,7 +704,6 @@ Comments
Sometimes comments or meta data may be included in a file:
.. ipython:: python
- :suppress:
data = (
"ID,level,category\n"
@@ -712,12 +711,9 @@ Sometimes comments or meta data may be included in a file:
"Patient2,23000,y # wouldn't take his medicine\n"
"Patient3,1234018,z # awesome"
)
-
with open("tmp.csv", "w") as fh:
fh.write(data)
-.. ipython:: python
-
print(open("tmp.csv").read())
By default, the parser includes the comments in the output:
diff --git a/doc/source/user_guide/missing_data.rst b/doc/source/user_guide/missing_data.rst
index 443fdd4f59e3f..ac7e383d6d7ff 100644
--- a/doc/source/user_guide/missing_data.rst
+++ b/doc/source/user_guide/missing_data.rst
@@ -142,14 +142,10 @@ Missing values propagate naturally through arithmetic operations between pandas
objects.
.. ipython:: python
- :suppress:
df = df2.loc[:, ["one", "two", "three"]]
a = df2.loc[df2.index[:5], ["one", "two"]].ffill()
b = df2.loc[df2.index[:5], ["one", "two", "three"]]
-
-.. ipython:: python
-
a
b
a + b
@@ -247,12 +243,8 @@ If we only want consecutive gaps filled up to a certain number of data points,
we can use the ``limit`` keyword:
.. ipython:: python
- :suppress:
df.iloc[2:4, :] = np.nan
-
-.. ipython:: python
-
df
df.ffill(limit=1)
@@ -308,13 +300,9 @@ You may wish to simply exclude labels from a data set which refer to missing
data. To do this, use :meth:`~DataFrame.dropna`:
.. ipython:: python
- :suppress:
df["two"] = df["two"].fillna(0)
df["three"] = df["three"].fillna(0)
-
-.. ipython:: python
-
df
df.dropna(axis=0)
df.dropna(axis=1)
@@ -333,7 +321,6 @@ Both Series and DataFrame objects have :meth:`~DataFrame.interpolate`
that, by default, performs linear interpolation at missing data points.
.. ipython:: python
- :suppress:
np.random.seed(123456)
idx = pd.date_range("1/1/2000", periods=100, freq="BM")
@@ -343,8 +330,6 @@ that, by default, performs linear interpolation at missing data points.
ts[60:80] = np.nan
ts = ts.cumsum()
-.. ipython:: python
-
ts
ts.count()
@savefig series_before_interpolate.png
@@ -361,12 +346,8 @@ that, by default, performs linear interpolation at missing data points.
Index aware interpolation is available via the ``method`` keyword:
.. ipython:: python
- :suppress:
ts2 = ts.iloc[[0, 1, 30, 60, 99]]
-
-.. ipython:: python
-
ts2
ts2.interpolate()
ts2.interpolate(method="time")
@@ -374,13 +355,10 @@ Index aware interpolation is available via the ``method`` keyword:
For a floating-point index, use ``method='values'``:
.. ipython:: python
- :suppress:
idx = [0.0, 1.0, 10.0]
ser = pd.Series([0.0, np.nan, 10.0], idx)
-.. ipython:: python
-
ser
ser.interpolate()
ser.interpolate(method="values")
diff --git a/doc/source/user_guide/visualization.rst b/doc/source/user_guide/visualization.rst
index 67799edf96ce2..9081d13ef2cf1 100644
--- a/doc/source/user_guide/visualization.rst
+++ b/doc/source/user_guide/visualization.rst
@@ -42,12 +42,9 @@ The ``plot`` method on Series and DataFrame is just a simple wrapper around
:meth:`plt.plot() <matplotlib.axes.Axes.plot>`:
.. ipython:: python
- :suppress:
np.random.seed(123456)
-.. ipython:: python
-
ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
ts = ts.cumsum()
@@ -1468,7 +1465,6 @@ otherwise you will see a warning.
Another option is passing an ``ax`` argument to :meth:`Series.plot` to plot on a particular axis:
.. ipython:: python
- :suppress:
np.random.seed(123456)
ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
@@ -1583,12 +1579,8 @@ Plotting tables
Plotting with matplotlib table is now supported in :meth:`DataFrame.plot` and :meth:`Series.plot` with a ``table`` keyword. The ``table`` keyword can accept ``bool``, :class:`DataFrame` or :class:`Series`. The simple way to draw a table is to specify ``table=True``. Data will be transposed to meet matplotlib's default layout.
.. ipython:: python
- :suppress:
np.random.seed(123456)
-
-.. ipython:: python
-
fig, ax = plt.subplots(1, 1, figsize=(7, 6.5))
df = pd.DataFrame(np.random.rand(5, 3), columns=["a", "b", "c"])
ax.xaxis.tick_top() # Display x-axis ticks on top.
@@ -1663,12 +1655,8 @@ colormaps will produce lines that are not easily visible.
To use the cubehelix colormap, we can pass ``colormap='cubehelix'``.
.. ipython:: python
- :suppress:
np.random.seed(123456)
-
-.. ipython:: python
-
df = pd.DataFrame(np.random.randn(1000, 10), index=ts.index)
df = df.cumsum()
@@ -1701,12 +1689,8 @@ Alternatively, we can pass the colormap itself:
Colormaps can also be used other plot types, like bar charts:
.. ipython:: python
- :suppress:
np.random.seed(123456)
-
-.. ipython:: python
-
dd = pd.DataFrame(np.random.randn(10, 10)).map(abs)
dd = dd.cumsum()
@@ -1764,12 +1748,8 @@ level of refinement you would get when plotting via pandas, it can be faster
when plotting a large number of points.
.. ipython:: python
- :suppress:
np.random.seed(123456)
-
-.. ipython:: python
-
price = pd.Series(
np.random.randn(150).cumsum(),
index=pd.date_range("2000-1-1", periods=150, freq="B"),
| xref https://github.com/pandas-dev/pandas/issues/28038
I think it would be valuable to show the setup for some examples in the user guide so they can more easily be copy pasted | https://api.github.com/repos/pandas-dev/pandas/pulls/54086 | 2023-07-11T19:16:34Z | 2023-07-17T18:14:00Z | 2023-07-17T18:14:00Z | 2023-07-17T18:14:03Z |
TYP: update mypy and small pyi fixes from ruff | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 8997dfe32dcb2..366db4337b0e1 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -21,7 +21,7 @@ repos:
hooks:
- id: black
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.275
+ rev: v0.0.277
hooks:
- id: ruff
args: [--exit-non-zero-on-fix]
@@ -130,7 +130,7 @@ repos:
types: [python]
stages: [manual]
additional_dependencies: &pyright_dependencies
- - pyright@1.1.292
+ - pyright@1.1.296
- id: pyright_reportGeneralTypeIssues
# note: assumes python env is setup and activated
name: pyright reportGeneralTypeIssues
diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 7450fc6fdc1da..9473735525bf8 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -171,7 +171,7 @@ If installed, we now require:
+=================+=================+==========+=========+
| numpy | 1.21.6 | X | X |
+-----------------+-----------------+----------+---------+
-| mypy (dev) | 1.2 | | X |
+| mypy (dev) | 1.4.1 | | X |
+-----------------+-----------------+----------+---------+
| beautifulsoup4 | 4.11.1 | | X |
+-----------------+-----------------+----------+---------+
diff --git a/environment.yml b/environment.yml
index 8e3c3a26ffc0f..e85e55e76775b 100644
--- a/environment.yml
+++ b/environment.yml
@@ -76,7 +76,7 @@ dependencies:
# code checks
- flake8=6.0.0 # run in subprocess over docstring examples
- - mypy=1.2 # pre-commit uses locally installed mypy
+ - mypy=1.4.1 # pre-commit uses locally installed mypy
- tokenize-rt # scripts/check_for_inconsistent_pandas_namespace.py
- pre-commit>=2.15.0
diff --git a/pandas/_libs/lib.pyi b/pandas/_libs/lib.pyi
index 4a2c7a874238a..ee190ad8db2d9 100644
--- a/pandas/_libs/lib.pyi
+++ b/pandas/_libs/lib.pyi
@@ -30,7 +30,7 @@ from enum import Enum
class _NoDefault(Enum):
no_default = ...
-no_default: Final = _NoDefault.no_default # noqa: PYI015
+no_default: Final = _NoDefault.no_default
NoDefault = Literal[_NoDefault.no_default]
i8max: int
diff --git a/pandas/_libs/sas.pyi b/pandas/_libs/sas.pyi
index 73068693aa2c6..5d65e2b56b591 100644
--- a/pandas/_libs/sas.pyi
+++ b/pandas/_libs/sas.pyi
@@ -1,7 +1,4 @@
-from typing import TYPE_CHECKING
-
-if TYPE_CHECKING:
- from pandas.io.sas.sas7bdat import SAS7BDATReader
+from pandas.io.sas.sas7bdat import SAS7BDATReader
class Parser:
def __init__(self, parser: SAS7BDATReader) -> None: ...
diff --git a/pandas/_typing.py b/pandas/_typing.py
index ef53c117b7b45..6a61b37ff4a94 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -84,14 +84,14 @@
NumpySorter = Optional[npt._ArrayLikeInt_co] # type: ignore[name-defined]
if sys.version_info >= (3, 10):
- from typing import TypeGuard
+ from typing import TypeGuard # pyright: ignore[reportUnusedImport]
else:
- from typing_extensions import TypeGuard # pyright: reportUnusedImport = false
+ from typing_extensions import TypeGuard # pyright: ignore[reportUnusedImport]
if sys.version_info >= (3, 11):
from typing import Self
else:
- from typing_extensions import Self # pyright: reportUnusedImport = false
+ from typing_extensions import Self # pyright: ignore[reportUnusedImport]
else:
npt: Any = None
Self: Any = None
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index 5536a28157b63..de99f828d604f 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -46,6 +46,8 @@
from pandas.core.dtypes.inference import is_list_like
if TYPE_CHECKING:
+ from re import Pattern
+
from pandas._typing import (
ArrayLike,
DtypeObj,
@@ -69,7 +71,7 @@
@overload
-def isna(obj: Scalar) -> bool:
+def isna(obj: Scalar | Pattern) -> bool:
...
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 265be87be40f1..f17a633259816 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -18,7 +18,6 @@
Any,
Callable,
Literal,
- Union,
cast,
overload,
)
@@ -3459,7 +3458,7 @@ def sort_values(
self,
*,
axis: Axis = ...,
- ascending: bool | int | Sequence[bool] | Sequence[int] = ...,
+ ascending: bool | Sequence[bool] = ...,
inplace: Literal[False] = ...,
kind: SortKind = ...,
na_position: NaPosition = ...,
@@ -3473,7 +3472,7 @@ def sort_values(
self,
*,
axis: Axis = ...,
- ascending: bool | int | Sequence[bool] | Sequence[int] = ...,
+ ascending: bool | Sequence[bool] = ...,
inplace: Literal[True],
kind: SortKind = ...,
na_position: NaPosition = ...,
@@ -3482,11 +3481,25 @@ def sort_values(
) -> None:
...
+ @overload
+ def sort_values(
+ self,
+ *,
+ axis: Axis = ...,
+ ascending: bool | Sequence[bool] = ...,
+ inplace: bool = ...,
+ kind: SortKind = ...,
+ na_position: NaPosition = ...,
+ ignore_index: bool = ...,
+ key: ValueKeyFunc = ...,
+ ) -> Series | None:
+ ...
+
def sort_values(
self,
*,
axis: Axis = 0,
- ascending: bool | int | Sequence[bool] | Sequence[int] = True,
+ ascending: bool | Sequence[bool] = True,
inplace: bool = False,
kind: SortKind = "quicksort",
na_position: NaPosition = "last",
@@ -3647,7 +3660,7 @@ def sort_values(
)
if is_list_like(ascending):
- ascending = cast(Sequence[Union[bool, int]], ascending)
+ ascending = cast(Sequence[bool], ascending)
if len(ascending) != 1:
raise ValueError(
f"Length of ascending ({len(ascending)}) must be 1 for Series"
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 7576b2d49614f..0d00d8b2fb693 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -53,7 +53,7 @@ moto
flask
asv>=0.5.1
flake8==6.0.0
-mypy==1.2
+mypy==1.4.1
tokenize-rt
pre-commit>=2.15.0
gitpython
diff --git a/typings/numba.pyi b/typings/numba.pyi
index 0d9184af19a0f..36cccb894049b 100644
--- a/typings/numba.pyi
+++ b/typings/numba.pyi
@@ -1,16 +1,14 @@
# pyright: reportIncompleteStub = false
from typing import (
- TYPE_CHECKING,
Any,
Callable,
Literal,
overload,
)
-if TYPE_CHECKING:
- import numba
+import numba
- from pandas._typing import F
+from pandas._typing import F
def __getattr__(name: str) -> Any: ... # incomplete
@overload
| There is a much newer version of pyright but I haven't yet managed to get it to pass. | https://api.github.com/repos/pandas-dev/pandas/pulls/54085 | 2023-07-11T19:14:44Z | 2023-07-13T16:08:58Z | 2023-07-13T16:08:58Z | 2023-08-09T15:08:33Z |
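The `# pyright: ignore[reportUnusedImport]` changes in `pandas/_typing.py` above touch a common version-gated import pattern. A standalone sketch of the same idea (the `typing_extensions` fallback is simulated with a stub here so the snippet runs anywhere, which pandas itself does not do):

```python
import sys

# Prefer the stdlib name on new interpreters; fall back to a backport
# (or, here, a harmless stub) on old ones so the module still imports.
if sys.version_info >= (3, 10):
    from typing import TypeGuard
else:
    try:
        from typing_extensions import TypeGuard  # type: ignore[assignment]
    except ImportError:
        TypeGuard = None  # stub: good enough when not type checking

print(TypeGuard is not None or sys.version_info < (3, 10))
```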
DOC: Fixing EX01 - Added examples | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 756096a7fe345..fd256f2ff7db0 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -158,14 +158,11 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
pandas.api.extensions.ExtensionArray.shape \
pandas.api.extensions.ExtensionArray.tolist \
pandas.DataFrame.columns \
- pandas.DataFrame.backfill \
pandas.DataFrame.ffill \
pandas.DataFrame.pad \
pandas.DataFrame.swapaxes \
- pandas.DataFrame.attrs \
pandas.DataFrame.plot \
pandas.DataFrame.to_gbq \
- pandas.DataFrame.style \
pandas.DataFrame.__dataframe__
RET=$(($RET + $?)) ; echo $MSG "DONE"
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c469c99940f7e..feae3bb517c22 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1297,6 +1297,14 @@ def style(self) -> Styler:
--------
io.formats.style.Styler : Helps style a DataFrame or Series according to the
data with HTML and CSS.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame({'A': [1, 2, 3]})
+ >>> df.style # doctest: +SKIP
+
+ Please see
+ `Table Visualization <../../user_guide/style.ipynb>`_ for more examples.
"""
from pandas.io.formats.style import Styler
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7735e55fdd042..42cd74a0ca781 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -367,10 +367,19 @@ def attrs(self) -> dict[Hashable, Any]:
Examples
--------
+ For Series:
+
>>> ser = pd.Series([1, 2, 3])
>>> ser.attrs = {"A": [10, 20, 30]}
>>> ser.attrs
{'A': [10, 20, 30]}
+
+ For DataFrame:
+
+ >>> df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
+ >>> df.attrs = {"A": [10, 20, 30]}
+ >>> df.attrs
+ {'A': [10, 20, 30]}
"""
if self._attrs is None:
self._attrs = {}
@@ -7500,6 +7509,10 @@ def backfill(
-------
{klass} or None
Object with missing values filled or None if ``inplace=True``.
+
+ Examples
+ --------
+ Please see examples for :meth:`DataFrame.bfill`.
"""
warnings.warn(
"DataFrame.backfill/Series.backfill is deprecated. Use "
| - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Towards https://github.com/pandas-dev/pandas/issues/37875
| https://api.github.com/repos/pandas-dev/pandas/pulls/54084 | 2023-07-11T17:57:07Z | 2023-07-11T19:48:04Z | 2023-07-11T19:48:04Z | 2023-07-14T08:49:52Z |
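The `attrs` examples added in this PR can be exercised directly; a minimal sketch (the metadata key is made up for illustration):

```python
import pandas as pd

# `attrs` is a plain dict of arbitrary per-object metadata.
df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
df.attrs = {"source": "example"}
print(df.attrs)
```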
TST/DOC: clarify warning message for inplace methods with CoW | diff --git a/pandas/errors/__init__.py b/pandas/errors/__init__.py
index 0c5b8caeaba6e..be6a9ef488be9 100644
--- a/pandas/errors/__init__.py
+++ b/pandas/errors/__init__.py
@@ -395,12 +395,13 @@ class ChainedAssignmentError(Warning):
_chained_assignment_method_msg = (
"A value is trying to be set on a copy of a DataFrame or Series "
- "through chained assignment.\n"
- "When using the Copy-on-Write mode, such chained assignment never works "
+ "through chained assignment using an inplace method.\n"
+ "When using the Copy-on-Write mode, such inplace method never works "
"to update the original DataFrame or Series, because the intermediate "
"object on which we are setting values always behaves as a copy.\n\n"
- "Try using 'df.method({col: value}, inplace=True)' instead, to perform "
- "the operation inplace.\n\n"
+ "For example, when doing 'df[col].method(value, inplace=True)', try "
+ "using 'df.method({col: value}, inplace=True)' instead, to perform "
+ "the operation inplace on the original object.\n\n"
)
diff --git a/pandas/tests/frame/methods/test_fillna.py b/pandas/tests/frame/methods/test_fillna.py
index 40fe7d2ce9af5..a5f6f58e66392 100644
--- a/pandas/tests/frame/methods/test_fillna.py
+++ b/pandas/tests/frame/methods/test_fillna.py
@@ -1,7 +1,6 @@
import numpy as np
import pytest
-from pandas.errors import ChainedAssignmentError
import pandas.util._test_decorators as td
from pandas import (
@@ -51,7 +50,7 @@ def test_fillna_on_column_view(self, using_copy_on_write):
df = DataFrame(arr, copy=False)
if using_copy_on_write:
- with tm.assert_produces_warning(ChainedAssignmentError):
+ with tm.raises_chained_assignment_error():
df[0].fillna(-1, inplace=True)
assert np.isnan(arr[:, 0]).all()
else:
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index 335901c457240..00df0530fe70f 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -7,10 +7,7 @@
import numpy as np
import pytest
-from pandas.errors import (
- ChainedAssignmentError,
- PerformanceWarning,
-)
+from pandas.errors import PerformanceWarning
import pandas.util._test_decorators as td
import pandas as pd
@@ -414,7 +411,7 @@ def test_update_inplace_sets_valid_block_values(using_copy_on_write):
# inplace update of a single column
if using_copy_on_write:
- with tm.assert_produces_warning(ChainedAssignmentError):
+ with tm.raises_chained_assignment_error():
df["a"].fillna(1, inplace=True)
else:
df["a"].fillna(1, inplace=True)
diff --git a/pandas/tests/indexing/multiindex/test_chaining_and_caching.py b/pandas/tests/indexing/multiindex/test_chaining_and_caching.py
index d27a2cde9417e..fdbbdbdd45169 100644
--- a/pandas/tests/indexing/multiindex/test_chaining_and_caching.py
+++ b/pandas/tests/indexing/multiindex/test_chaining_and_caching.py
@@ -1,10 +1,7 @@
import numpy as np
import pytest
-from pandas.errors import (
- ChainedAssignmentError,
- SettingWithCopyError,
-)
+from pandas.errors import SettingWithCopyError
import pandas.util._test_decorators as td
from pandas import (
@@ -33,7 +30,7 @@ def test_detect_chained_assignment(using_copy_on_write):
zed = DataFrame(events, index=["a", "b"], columns=multiind)
if using_copy_on_write:
- with tm.assert_produces_warning(ChainedAssignmentError):
+ with tm.raises_chained_assignment_error():
zed["eyes"]["right"].fillna(value=555, inplace=True)
else:
msg = "A value is trying to be set on a copy of a slice from a DataFrame"
| Small follow-up on https://github.com/pandas-dev/pandas/pull/53779 | https://api.github.com/repos/pandas-dev/pandas/pulls/54081 | 2023-07-11T12:14:55Z | 2023-07-11T15:31:32Z | 2023-07-11T15:31:32Z | 2023-10-14T22:20:52Z |
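The pattern the reworded warning recommends can be sketched as follows (under a recent pandas, where the dict form of `fillna` operates on the parent object rather than on a `df[col]` intermediate):

```python
import pandas as pd

df = pd.DataFrame({"a": [1.0, None, 3.0]})
# Recommended form from the warning: call the inplace method on the
# parent object with a {column: value} mapping, not on df["a"].
df.fillna({"a": 0.0}, inplace=True)
print(df["a"].tolist())
```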
DOC: to_datetime param format has no effect for df | diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index ea418a2c16d06..454c6e1be5b5f 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -706,7 +706,8 @@ def to_datetime(
arg : int, float, str, datetime, list, tuple, 1-d array, Series, DataFrame/dict-like
The object to convert to a datetime. If a :class:`DataFrame` is provided, the
method expects minimally the following columns: :const:`"year"`,
- :const:`"month"`, :const:`"day"`.
+ :const:`"month"`, :const:`"day"`. The column "year"
+ must be specified in 4-digit format.
errors : {'ignore', 'raise', 'coerce'}, default 'raise'
- If :const:`'raise'`, then invalid parsing will raise an exception.
- If :const:`'coerce'`, then invalid parsing will be set as :const:`NaT`.
@@ -765,6 +766,11 @@ def to_datetime(
time string (not necessarily in exactly the same format);
- "mixed", to infer the format for each element individually. This is risky,
and you should probably use it along with `dayfirst`.
+
+ .. note::
+
+ If a :class:`DataFrame` is passed, then `format` has no effect.
+
exact : bool, default True
Control how `format` is used:
| - [x] closes #54029
Updated docstring for `to_datetime`. Pointed out that the parameter `format` has no effect if a `DataFrame` is passed. Emphasized that in this case the column `"year"` must be specified in 4-digit format. | https://api.github.com/repos/pandas-dev/pandas/pulls/54079 | 2023-07-11T10:33:12Z | 2023-07-11T13:33:58Z | 2023-07-11T13:33:58Z | 2023-07-11T13:33:58Z |
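The DataFrame code path this docstring change documents can be sketched like this; since the component columns are already numeric, a `format=` string would have nothing to parse:

```python
import pandas as pd

# Assembling datetimes from component columns; `format` is irrelevant
# on this code path because no string parsing happens.
df = pd.DataFrame({"year": [2015, 2016], "month": [2, 3], "day": [4, 5]})
out = pd.to_datetime(df)
print(out.tolist())
```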
Warn: add space | diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 8a28b30aecc03..6ab2f958b8730 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -1284,7 +1284,7 @@ def curried(x):
if len(mapped) and isinstance(mapped[0], ABCSeries):
warnings.warn(
- "Returning a DataFrame from Series.apply when the supplied function"
+ "Returning a DataFrame from Series.apply when the supplied function "
"returns a Series is deprecated and will be removed in a future "
"version.",
FutureWarning,
| - [X] closes #54056
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54075 | 2023-07-11T07:44:51Z | 2023-07-11T08:54:05Z | 2023-07-11T08:54:05Z | 2023-07-11T09:38:54Z |
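The bug fixed above is a classic implicit-string-concatenation pitfall; a stdlib-only sketch:

```python
# Adjacent string literals are concatenated verbatim, so a missing
# trailing space glues the last and first words together.
broken = (
    "...when the supplied function"
    "returns a Series is deprecated..."
)
fixed = (
    "...when the supplied function "
    "returns a Series is deprecated..."
)
print("functionreturns" in broken, "function returns" in fixed)
```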
BUG: checking for value type when parquet is partitioned | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 0f669beaa036f..255c707a04a94 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -533,13 +533,13 @@ Sparse
ExtensionArray
^^^^^^^^^^^^^^
+- Bug in :class:`ArrowStringArray` constructor raises ``ValueError`` with dictionary types of strings (:issue:`54074`)
- Bug in :class:`DataFrame` constructor not copying :class:`Series` with extension dtype when given in dict (:issue:`53744`)
- Bug in :class:`~arrays.ArrowExtensionArray` converting pandas non-nanosecond temporal objects from non-zero values to zero values (:issue:`53171`)
- Bug in :meth:`Series.quantile` for pyarrow temporal types raising ArrowInvalid (:issue:`52678`)
- Bug in :meth:`Series.rank` returning wrong order for small values with ``Float64`` dtype (:issue:`52471`)
- Bug in :meth:`~arrays.ArrowExtensionArray.__iter__` and :meth:`~arrays.ArrowExtensionArray.__getitem__` returning python datetime and timedelta objects for non-nano dtypes (:issue:`53326`)
- Bug where the ``__from_arrow__`` method of masked ExtensionDtypes(e.g. :class:`Float64Dtype`, :class:`BooleanDtype`) would not accept pyarrow arrays of type ``pyarrow.null()`` (:issue:`52223`)
--
Styler
^^^^^^
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index 12f4b5486b6b9..4a70fcf6b5a93 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -118,7 +118,10 @@ def __init__(self, values) -> None:
super().__init__(values)
self._dtype = StringDtype(storage="pyarrow")
- if not pa.types.is_string(self._pa_array.type):
+ if not pa.types.is_string(self._pa_array.type) and not (
+ pa.types.is_dictionary(self._pa_array.type)
+ and pa.types.is_string(self._pa_array.type.value_type)
+ ):
raise ValueError(
"ArrowStringArray requires a PyArrow (chunked) array of string type"
)
diff --git a/pandas/tests/arrays/string_/test_string_arrow.py b/pandas/tests/arrays/string_/test_string_arrow.py
index 45098e12ccb38..0f899f4c8e876 100644
--- a/pandas/tests/arrays/string_/test_string_arrow.py
+++ b/pandas/tests/arrays/string_/test_string_arrow.py
@@ -69,6 +69,33 @@ def test_constructor_not_string_type_raises(array, chunked):
ArrowStringArray(arr)
+@pytest.mark.parametrize("chunked", [True, False])
+def test_constructor_not_string_type_value_dictionary_raises(chunked):
+ pa = pytest.importorskip("pyarrow")
+
+ arr = pa.array([1, 2, 3], pa.dictionary(pa.int32(), pa.int32()))
+ if chunked:
+ arr = pa.chunked_array(arr)
+
+ msg = re.escape(
+ "ArrowStringArray requires a PyArrow (chunked) array of string type"
+ )
+ with pytest.raises(ValueError, match=msg):
+ ArrowStringArray(arr)
+
+
+@pytest.mark.parametrize("chunked", [True, False])
+def test_constructor_valid_string_type_value_dictionary(chunked):
+ pa = pytest.importorskip("pyarrow")
+
+ arr = pa.array(["1", "2", "3"], pa.dictionary(pa.int32(), pa.utf8()))
+ if chunked:
+ arr = pa.chunked_array(arr)
+
+ arr = ArrowStringArray(arr)
+ assert pa.types.is_string(arr._pa_array.type.value_type)
+
+
@skip_if_no_pyarrow
def test_from_sequence_wrong_dtype_raises():
with pd.option_context("string_storage", "python"):
| - [ ] closes #53951
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest doc/source/whatsnew/vX.X.X.rst file if fixing a bug or adding a new feature.
Instead of requiring the pyarrow array's type to be string, it can also be a dictionary whose value type is string when the parquet file is partitioned. Added checks for both cases.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54074 | 2023-07-11T04:22:10Z | 2023-07-13T18:23:37Z | 2023-07-13T18:23:37Z | 2023-07-14T00:23:47Z |
TST: fix test broken on main | diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 63ce7ab2b85c8..635416f0cb1d6 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -390,10 +390,8 @@ def f3(x):
depr_msg = "The behavior of array concatenation with empty entries is deprecated"
# correct result
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- result1 = df.groupby("a").apply(f1)
- with tm.assert_produces_warning(FutureWarning, match=depr_msg):
- result2 = df2.groupby("a").apply(f1)
+ result1 = df.groupby("a").apply(f1)
+ result2 = df2.groupby("a").apply(f1)
tm.assert_frame_equal(result1, result2)
# should fail (not the same number of levels)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54073 | 2023-07-10T23:51:29Z | 2023-07-11T01:49:40Z | 2023-07-11T01:49:40Z | 2023-07-11T03:34:26Z |
Revert "BUG: DataFrame.stack sometimes sorting the resulting index" | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index e154ca2cd3884..5bde92ddc6dde 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -113,6 +113,7 @@ Other enhancements
- Let :meth:`DataFrame.to_feather` accept a non-default :class:`Index` and non-string column names (:issue:`51787`)
- Performance improvement in :func:`read_csv` (:issue:`52632`) with ``engine="c"``
- :meth:`Categorical.from_codes` has gotten a ``validate`` parameter (:issue:`50975`)
+- :meth:`DataFrame.stack` gained the ``sort`` keyword to dictate whether the resulting :class:`MultiIndex` levels are sorted (:issue:`15105`)
- :meth:`DataFrame.unstack` gained the ``sort`` keyword to dictate whether the resulting :class:`MultiIndex` levels are sorted (:issue:`15105`)
- :meth:`DataFrameGroupby.agg` and :meth:`DataFrameGroupby.transform` now support grouping by multiple keys when the index is not a :class:`MultiIndex` for ``engine="numba"`` (:issue:`53486`)
- :meth:`Series.explode` now supports pyarrow-backed list types (:issue:`53602`)
@@ -526,8 +527,7 @@ Reshaping
- Bug in :meth:`DataFrame.idxmin` and :meth:`DataFrame.idxmax`, where the axis dtype would be lost for empty frames (:issue:`53265`)
- Bug in :meth:`DataFrame.merge` not merging correctly when having ``MultiIndex`` with single level (:issue:`52331`)
- Bug in :meth:`DataFrame.stack` losing extension dtypes when columns is a :class:`MultiIndex` and frame contains mixed dtypes (:issue:`45740`)
-- Bug in :meth:`DataFrame.stack` sorting columns lexicographically in rare cases (:issue:`53786`)
-- Bug in :meth:`DataFrame.stack` sorting index lexicographically in rare cases (:issue:`53824`)
+- Bug in :meth:`DataFrame.stack` sorting columns lexicographically (:issue:`53786`)
- Bug in :meth:`DataFrame.transpose` inferring dtype for object column (:issue:`51546`)
- Bug in :meth:`Series.combine_first` converting ``int64`` dtype to ``float64`` and losing precision on very large integers (:issue:`51764`)
- Bug when joining empty :class:`DataFrame` objects, where the joined index would be a :class:`RangeIndex` instead of the joined index type (:issue:`52777`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 5084c7ed6ba97..743763001fef1 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -9012,7 +9012,7 @@ def pivot_table(
sort=sort,
)
- def stack(self, level: IndexLabel = -1, dropna: bool = True):
+ def stack(self, level: IndexLabel = -1, dropna: bool = True, sort: bool = True):
"""
Stack the prescribed level(s) from columns to index.
@@ -9038,6 +9038,8 @@ def stack(self, level: IndexLabel = -1, dropna: bool = True):
axis can create combinations of index and column values
that are missing from the original dataframe. See Examples
section.
+ sort : bool, default True
+ Whether to sort the levels of the resulting MultiIndex.
Returns
-------
@@ -9137,15 +9139,15 @@ def stack(self, level: IndexLabel = -1, dropna: bool = True):
>>> df_multi_level_cols2.stack(0)
kg m
- cat weight 1.0 NaN
- height NaN 2.0
- dog weight 3.0 NaN
- height NaN 4.0
+ cat height NaN 2.0
+ weight 1.0 NaN
+ dog height NaN 4.0
+ weight 3.0 NaN
>>> df_multi_level_cols2.stack([0, 1])
- cat weight kg 1.0
- height m 2.0
- dog weight kg 3.0
- height m 4.0
+ cat height m 2.0
+ weight kg 1.0
+ dog height m 4.0
+ weight kg 3.0
dtype: float64
**Dropping missing values**
@@ -9181,9 +9183,9 @@ def stack(self, level: IndexLabel = -1, dropna: bool = True):
)
if isinstance(level, (tuple, list)):
- result = stack_multiple(self, level, dropna=dropna)
+ result = stack_multiple(self, level, dropna=dropna, sort=sort)
else:
- result = stack(self, level, dropna=dropna)
+ result = stack(self, level, dropna=dropna, sort=sort)
return result.__finalize__(self, method="stack")
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index b0c74745511c4..5deaa41e2f63c 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -1,5 +1,6 @@
from __future__ import annotations
+import itertools
from typing import (
TYPE_CHECKING,
cast,
@@ -498,7 +499,7 @@ def unstack(obj: Series | DataFrame, level, fill_value=None, sort: bool = True):
if isinstance(obj.index, MultiIndex):
return _unstack_frame(obj, level, fill_value=fill_value, sort=sort)
else:
- return obj.T.stack(dropna=False)
+ return obj.T.stack(dropna=False, sort=sort)
elif not isinstance(obj.index, MultiIndex):
# GH 36113
# Give nicer error messages when unstack a Series whose
@@ -571,7 +572,7 @@ def _unstack_extension_series(
return result
-def stack(frame: DataFrame, level=-1, dropna: bool = True):
+def stack(frame: DataFrame, level=-1, dropna: bool = True, sort: bool = True):
"""
Convert DataFrame to Series with multi-level Index. Columns become the
second level of the resulting hierarchical index
@@ -593,7 +594,9 @@ def factorize(index):
level_num = frame.columns._get_level_number(level)
if isinstance(frame.columns, MultiIndex):
- return _stack_multi_columns(frame, level_num=level_num, dropna=dropna)
+ return _stack_multi_columns(
+ frame, level_num=level_num, dropna=dropna, sort=sort
+ )
elif isinstance(frame.index, MultiIndex):
new_levels = list(frame.index.levels)
new_codes = [lab.repeat(K) for lab in frame.index.codes]
@@ -646,13 +649,13 @@ def factorize(index):
return frame._constructor_sliced(new_values, index=new_index)
-def stack_multiple(frame: DataFrame, level, dropna: bool = True):
+def stack_multiple(frame: DataFrame, level, dropna: bool = True, sort: bool = True):
# If all passed levels match up to column names, no
# ambiguity about what to do
if all(lev in frame.columns.names for lev in level):
result = frame
for lev in level:
- result = stack(result, lev, dropna=dropna)
+ result = stack(result, lev, dropna=dropna, sort=sort)
# Otherwise, level numbers may change as each successive level is stacked
elif all(isinstance(lev, int) for lev in level):
@@ -665,7 +668,7 @@ def stack_multiple(frame: DataFrame, level, dropna: bool = True):
while level:
lev = level.pop(0)
- result = stack(result, lev, dropna=dropna)
+ result = stack(result, lev, dropna=dropna, sort=sort)
# Decrement all level numbers greater than current, as these
# have now shifted down by one
level = [v if v <= lev else v - 1 for v in level]
@@ -691,14 +694,7 @@ def _stack_multi_column_index(columns: MultiIndex) -> MultiIndex:
# Remove duplicate tuples in the MultiIndex.
tuples = zip(*levs)
- seen = set()
- # mypy doesn't like our trickery to get `set.add` to work in a comprehension
- # error: "add" of "set" does not return a value
- unique_tuples = (
- key
- for key in tuples
- if not (key in seen or seen.add(key)) # type: ignore[func-returns-value]
- )
+ unique_tuples = (key for key, _ in itertools.groupby(tuples))
new_levs = zip(*unique_tuples)
# The dtype of each level must be explicitly set to avoid inferring the wrong type.
@@ -714,7 +710,7 @@ def _stack_multi_column_index(columns: MultiIndex) -> MultiIndex:
def _stack_multi_columns(
- frame: DataFrame, level_num: int = -1, dropna: bool = True
+ frame: DataFrame, level_num: int = -1, dropna: bool = True, sort: bool = True
) -> DataFrame:
def _convert_level_number(level_num: int, columns: Index):
"""
@@ -744,12 +740,23 @@ def _convert_level_number(level_num: int, columns: Index):
roll_columns = roll_columns.swaplevel(lev1, lev2)
this.columns = mi_cols = roll_columns
+ if not mi_cols._is_lexsorted() and sort:
+ # Workaround the edge case where 0 is one of the column names,
+ # which interferes with trying to sort based on the first
+ # level
+ level_to_sort = _convert_level_number(0, mi_cols)
+ this = this.sort_index(level=level_to_sort, axis=1)
+ mi_cols = this.columns
+
+ mi_cols = cast(MultiIndex, mi_cols)
new_columns = _stack_multi_column_index(mi_cols)
# time to ravel the values
new_data = {}
level_vals = mi_cols.levels[-1]
level_codes = unique(mi_cols.codes[-1])
+ if sort:
+ level_codes = np.sort(level_codes)
level_vals_nan = level_vals.insert(len(level_vals), None)
level_vals_used = np.take(level_vals_nan, level_codes)
@@ -757,9 +764,7 @@ def _convert_level_number(level_num: int, columns: Index):
drop_cols = []
for key in new_columns:
try:
- with warnings.catch_warnings():
- warnings.simplefilter("ignore", PerformanceWarning)
- loc = this.columns.get_loc(key)
+ loc = this.columns.get_loc(key)
except KeyError:
drop_cols.append(key)
continue
@@ -769,12 +774,9 @@ def _convert_level_number(level_num: int, columns: Index):
# but if unsorted can get a boolean
# indexer
if not isinstance(loc, slice):
- slice_len = loc.sum()
+ slice_len = len(loc)
else:
slice_len = loc.stop - loc.start
- if loc.step is not None:
- # Integer division using ceiling instead of floor
- slice_len = -(slice_len // -loc.step)
if slice_len != levsize:
chunk = this.loc[:, this.columns[loc]]
diff --git a/pandas/tests/frame/test_stack_unstack.py b/pandas/tests/frame/test_stack_unstack.py
index ffdcb06ee2847..a48728a778877 100644
--- a/pandas/tests/frame/test_stack_unstack.py
+++ b/pandas/tests/frame/test_stack_unstack.py
@@ -1099,7 +1099,7 @@ def test_stack_preserve_categorical_dtype(self, ordered, labels):
"labels,data",
[
(list("xyz"), [10, 11, 12, 13, 14, 15]),
- (list("zyx"), [10, 11, 12, 13, 14, 15]),
+ (list("zyx"), [14, 15, 12, 13, 10, 11]),
],
)
def test_stack_multi_preserve_categorical_dtype(self, ordered, labels, data):
@@ -1107,10 +1107,10 @@ def test_stack_multi_preserve_categorical_dtype(self, ordered, labels, data):
cidx = pd.CategoricalIndex(labels, categories=sorted(labels), ordered=ordered)
cidx2 = pd.CategoricalIndex(["u", "v"], ordered=ordered)
midx = MultiIndex.from_product([cidx, cidx2])
- df = DataFrame([data], columns=midx)
+ df = DataFrame([sorted(data)], columns=midx)
result = df.stack([0, 1])
- s_cidx = pd.CategoricalIndex(labels, ordered=ordered)
+ s_cidx = pd.CategoricalIndex(sorted(labels), ordered=ordered)
expected = Series(data, index=MultiIndex.from_product([[0], s_cidx, cidx2]))
tm.assert_series_equal(result, expected)
@@ -1400,8 +1400,8 @@ def test_unstack_non_slice_like_blocks(using_array_manager):
tm.assert_frame_equal(res, expected)
-def test_stack_nosort():
- # GH 15105, GH 53825
+def test_stack_sort_false():
+ # GH 15105
data = [[1, 2, 3.0, 4.0], [2, 3, 4.0, 5.0], [3, 4, np.nan, np.nan]]
df = DataFrame(
data,
@@ -1409,7 +1409,7 @@ def test_stack_nosort():
levels=[["B", "A"], ["x", "y"]], codes=[[0, 0, 1, 1], [0, 1, 0, 1]]
),
)
- result = df.stack(level=0)
+ result = df.stack(level=0, sort=False)
expected = DataFrame(
{"x": [1.0, 3.0, 2.0, 4.0, 3.0], "y": [2.0, 4.0, 3.0, 5.0, 4.0]},
index=MultiIndex.from_arrays([[0, 0, 1, 1, 2], ["B", "A", "B", "A", "B"]]),
@@ -1421,15 +1421,15 @@ def test_stack_nosort():
data,
columns=MultiIndex.from_arrays([["B", "B", "A", "A"], ["x", "y", "x", "y"]]),
)
- result = df.stack(level=0)
+ result = df.stack(level=0, sort=False)
tm.assert_frame_equal(result, expected)
-def test_stack_nosort_multi_level():
- # GH 15105, GH 53825
+def test_stack_sort_false_multi_level():
+ # GH 15105
idx = MultiIndex.from_tuples([("weight", "kg"), ("height", "m")])
df = DataFrame([[1.0, 2.0], [3.0, 4.0]], index=["cat", "dog"], columns=idx)
- result = df.stack([0, 1])
+ result = df.stack([0, 1], sort=False)
expected_index = MultiIndex.from_tuples(
[
("cat", "weight", "kg"),
@@ -1999,12 +1999,13 @@ def __init__(self, *args, **kwargs) -> None:
),
)
@pytest.mark.parametrize("stack_lev", range(2))
- def test_stack_order_with_unsorted_levels(self, levels, stack_lev):
+ @pytest.mark.parametrize("sort", [True, False])
+ def test_stack_order_with_unsorted_levels(self, levels, stack_lev, sort):
# GH#16323
# deep check for 1-row case
columns = MultiIndex(levels=levels, codes=[[0, 0, 1, 1], [0, 1, 0, 1]])
df = DataFrame(columns=columns, data=[range(4)])
- df_stacked = df.stack(stack_lev)
+ df_stacked = df.stack(stack_lev, sort=sort)
for row in df.index:
for col in df.columns:
expected = df.loc[row, col]
@@ -2036,7 +2037,7 @@ def test_stack_order_with_unsorted_levels_multi_row_2(self):
stack_lev = 1
columns = MultiIndex(levels=levels, codes=[[0, 0, 1, 1], [0, 1, 0, 1]])
df = DataFrame(columns=columns, data=[range(4)], index=[1, 0, 2, 3])
- result = df.stack(stack_lev)
+ result = df.stack(stack_lev, sort=True)
expected_index = MultiIndex(
levels=[[0, 1, 2, 3], [0, 1]],
codes=[[1, 1, 0, 0, 2, 2, 3, 3], [1, 0, 1, 0, 1, 0, 1, 0]],
| Closes #53969
Reverts pandas-dev/pandas#53825
| https://api.github.com/repos/pandas-dev/pandas/pulls/54068 | 2023-07-10T20:48:57Z | 2023-07-13T20:19:52Z | 2023-07-13T20:19:52Z | 2023-07-13T20:19:57Z |
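One detail removed by this revert is the float-free ceiling division used for stepped slice lengths, `-(slice_len // -loc.step)`; the trick itself is easy to verify in isolation:

```python
import math

def ceil_div(n: int, step: int) -> int:
    # "Upside-down" floor division: ceil(n / step) without floats.
    return -(n // -step)

checks = [(7, 2), (8, 2), (1, 3), (9, 4)]
print(all(ceil_div(n, s) == math.ceil(n / s) for n, s in checks))
```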
CI: Fix weekly cache cleanup job | diff --git a/.github/workflows/cache-cleanup-weekly.yml b/.github/workflows/cache-cleanup-weekly.yml
index 225503f2894f8..6da31f7354457 100644
--- a/.github/workflows/cache-cleanup-weekly.yml
+++ b/.github/workflows/cache-cleanup-weekly.yml
@@ -7,6 +7,9 @@ on:
jobs:
cleanup:
runs-on: ubuntu-latest
+ if: github.repository_owner == 'pandas-dev'
+ permissions:
+ actions: write
steps:
- name: Clean Cache
run: |
| It appears this job didn't fully clear all the GHA caches. I think setting the correct permissions should do the trick | https://api.github.com/repos/pandas-dev/pandas/pulls/54067 | 2023-07-10T16:50:21Z | 2023-07-11T16:46:36Z | 2023-07-11T16:46:36Z | 2023-07-11T16:46:39Z |
DOC: constant check in series | diff --git a/doc/source/user_guide/cookbook.rst b/doc/source/user_guide/cookbook.rst
index fd4f7cd1b83fe..041061f32db3f 100644
--- a/doc/source/user_guide/cookbook.rst
+++ b/doc/source/user_guide/cookbook.rst
@@ -1488,3 +1488,31 @@ of the data values:
{"height": [60, 70], "weight": [100, 140, 180], "sex": ["Male", "Female"]}
)
df
+
+Constant series
+---------------
+
+To assess if a series has a constant value, we can check if ``series.nunique() <= 1``.
+However, a more performant approach, which does not count all unique values first, is:
+
+.. ipython:: python
+
+ v = s.to_numpy()
+ is_constant = v.shape[0] == 0 or (v[0] == v).all()
+
+This approach assumes that the series does not contain missing values.
+If we want to ignore NA values, we can simply drop them first:
+
+.. ipython:: python
+
+ v = s.dropna().to_numpy()
+ is_constant = v.shape[0] == 0 or (v[0] == v).all()
+
+If missing values are considered distinct from any other value, then one could use:
+
+.. ipython:: python
+
+ v = s.to_numpy()
+ is_constant = v.shape[0] == 0 or (v[0] == v).all() or not pd.notna(v).any()
+
+(Note that this example does not disambiguate between ``np.nan``, ``pd.NA`` and ``None``)
|
- [X] closes #54033
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54064 | 2023-07-10T11:26:51Z | 2023-07-12T15:40:24Z | 2023-07-12T15:40:24Z | 2023-07-12T15:41:05Z |
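The cookbook recipe in this PR hinges on comparing every element to the first one. A minimal plain-Python sketch of the same logic (the function name is illustrative, not pandas API; missing values are not handled here):

```python
def is_constant(values) -> bool:
    # A sequence is constant when it is empty or every element
    # equals the first one -- no need to count all unique values.
    vals = list(values)
    return len(vals) == 0 or all(v == vals[0] for v in vals)
```

With NumPy, `(v[0] == v).all()` vectorizes the same comparison, which is why the recipe converts the Series to an array first.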
DEPR: deprecate strings T, S, L, U, and N in offsets frequencies, resolution abbreviations, _attrname_to_abbrevs | diff --git a/asv_bench/benchmarks/arithmetic.py b/asv_bench/benchmarks/arithmetic.py
index 4fd9740f184c8..49543c166d047 100644
--- a/asv_bench/benchmarks/arithmetic.py
+++ b/asv_bench/benchmarks/arithmetic.py
@@ -262,7 +262,7 @@ class Timeseries:
def setup(self, tz):
N = 10**6
halfway = (N // 2) - 1
- self.s = Series(date_range("20010101", periods=N, freq="T", tz=tz))
+ self.s = Series(date_range("20010101", periods=N, freq="min", tz=tz))
self.ts = self.s[halfway]
self.s2 = Series(date_range("20010101", periods=N, freq="s", tz=tz))
@@ -460,7 +460,7 @@ class OffsetArrayArithmetic:
def setup(self, offset):
N = 10000
- rng = date_range(start="1/1/2000", periods=N, freq="T")
+ rng = date_range(start="1/1/2000", periods=N, freq="min")
self.rng = rng
self.ser = Series(rng)
@@ -479,7 +479,7 @@ class ApplyIndex:
def setup(self, offset):
N = 10000
- rng = date_range(start="1/1/2000", periods=N, freq="T")
+ rng = date_range(start="1/1/2000", periods=N, freq="min")
self.rng = rng
def time_apply_index(self, offset):
diff --git a/asv_bench/benchmarks/eval.py b/asv_bench/benchmarks/eval.py
index 8a3d224c59a09..656d16a910a9f 100644
--- a/asv_bench/benchmarks/eval.py
+++ b/asv_bench/benchmarks/eval.py
@@ -44,7 +44,7 @@ class Query:
def setup(self):
N = 10**6
halfway = (N // 2) - 1
- index = pd.date_range("20010101", periods=N, freq="T")
+ index = pd.date_range("20010101", periods=N, freq="min")
s = pd.Series(index)
self.ts = s.iloc[halfway]
self.df = pd.DataFrame({"a": np.random.randn(N), "dates": index}, index=index)
diff --git a/asv_bench/benchmarks/gil.py b/asv_bench/benchmarks/gil.py
index 4d5c31d2dddf8..4993ffd2c47d0 100644
--- a/asv_bench/benchmarks/gil.py
+++ b/asv_bench/benchmarks/gil.py
@@ -178,7 +178,7 @@ def time_kth_smallest(self):
class ParallelDatetimeFields:
def setup(self):
N = 10**6
- self.dti = date_range("1900-01-01", periods=N, freq="T")
+ self.dti = date_range("1900-01-01", periods=N, freq="min")
self.period = self.dti.to_period("D")
def time_datetime_field_year(self):
diff --git a/asv_bench/benchmarks/index_cached_properties.py b/asv_bench/benchmarks/index_cached_properties.py
index b3d8de39a858a..d21bbe15c4cc8 100644
--- a/asv_bench/benchmarks/index_cached_properties.py
+++ b/asv_bench/benchmarks/index_cached_properties.py
@@ -25,14 +25,14 @@ def setup(self, index_type):
N = 10**5
if index_type == "MultiIndex":
self.idx = pd.MultiIndex.from_product(
- [pd.date_range("1/1/2000", freq="T", periods=N // 2), ["a", "b"]]
+ [pd.date_range("1/1/2000", freq="min", periods=N // 2), ["a", "b"]]
)
elif index_type == "DatetimeIndex":
- self.idx = pd.date_range("1/1/2000", freq="T", periods=N)
+ self.idx = pd.date_range("1/1/2000", freq="min", periods=N)
elif index_type == "Int64Index":
self.idx = pd.Index(range(N), dtype="int64")
elif index_type == "PeriodIndex":
- self.idx = pd.period_range("1/1/2000", freq="T", periods=N)
+ self.idx = pd.period_range("1/1/2000", freq="min", periods=N)
elif index_type == "RangeIndex":
self.idx = pd.RangeIndex(start=0, stop=N)
elif index_type == "IntervalIndex":
diff --git a/asv_bench/benchmarks/index_object.py b/asv_bench/benchmarks/index_object.py
index bdc8a6a7aa1df..2d8014570466e 100644
--- a/asv_bench/benchmarks/index_object.py
+++ b/asv_bench/benchmarks/index_object.py
@@ -25,7 +25,7 @@ class SetOperations:
def setup(self, index_structure, dtype, method):
N = 10**5
- dates_left = date_range("1/1/2000", periods=N, freq="T")
+ dates_left = date_range("1/1/2000", periods=N, freq="min")
fmt = "%Y-%m-%d %H:%M:%S"
date_str_left = Index(dates_left.strftime(fmt))
int_left = Index(np.arange(N))
diff --git a/asv_bench/benchmarks/io/json.py b/asv_bench/benchmarks/io/json.py
index 9eaffddd8b87f..bebf6ee993aba 100644
--- a/asv_bench/benchmarks/io/json.py
+++ b/asv_bench/benchmarks/io/json.py
@@ -290,7 +290,7 @@ def time_float_longint_str_lines(self):
class ToJSONMem:
def setup_cache(self):
df = DataFrame([[1]])
- df2 = DataFrame(range(8), date_range("1/1/2000", periods=8, freq="T"))
+ df2 = DataFrame(range(8), date_range("1/1/2000", periods=8, freq="min"))
frames = {"int": df, "float": df.astype(float), "datetime": df2}
return frames
diff --git a/asv_bench/benchmarks/join_merge.py b/asv_bench/benchmarks/join_merge.py
index 4f325335829af..54bcdb0fa2843 100644
--- a/asv_bench/benchmarks/join_merge.py
+++ b/asv_bench/benchmarks/join_merge.py
@@ -212,7 +212,7 @@ class JoinNonUnique:
# outer join of non-unique
# GH 6329
def setup(self):
- date_index = date_range("01-Jan-2013", "23-Jan-2013", freq="T")
+ date_index = date_range("01-Jan-2013", "23-Jan-2013", freq="min")
daily_dates = date_index.to_period("D").to_timestamp("S", "S")
self.fracofday = date_index.values - daily_dates.values
self.fracofday = self.fracofday.astype("timedelta64[ns]")
@@ -338,7 +338,7 @@ class MergeDatetime:
def setup(self, units, tz):
unit_left, unit_right = units
N = 10_000
- keys = Series(date_range("2012-01-01", freq="T", periods=N, tz=tz))
+ keys = Series(date_range("2012-01-01", freq="min", periods=N, tz=tz))
self.left = DataFrame(
{
"key": keys.sample(N * 10, replace=True).dt.as_unit(unit_left),
diff --git a/asv_bench/benchmarks/sparse.py b/asv_bench/benchmarks/sparse.py
index c8a9a9e6e9176..22a5511e4c678 100644
--- a/asv_bench/benchmarks/sparse.py
+++ b/asv_bench/benchmarks/sparse.py
@@ -22,7 +22,7 @@ class SparseSeriesToFrame:
def setup(self):
K = 50
N = 50001
- rng = date_range("1/1/2000", periods=N, freq="T")
+ rng = date_range("1/1/2000", periods=N, freq="min")
self.series = {}
for i in range(1, K):
data = np.random.randn(N)[:-i]
diff --git a/asv_bench/benchmarks/timeseries.py b/asv_bench/benchmarks/timeseries.py
index 1253fefde2d5f..8c78a9c1723df 100644
--- a/asv_bench/benchmarks/timeseries.py
+++ b/asv_bench/benchmarks/timeseries.py
@@ -116,7 +116,7 @@ def time_infer_freq(self, freq):
class TimeDatetimeConverter:
def setup(self):
N = 100000
- self.rng = date_range(start="1/1/2000", periods=N, freq="T")
+ self.rng = date_range(start="1/1/2000", periods=N, freq="min")
def time_convert(self):
DatetimeConverter.convert(self.rng, None, None)
@@ -129,9 +129,9 @@ class Iteration:
def setup(self, time_index):
N = 10**6
if time_index is timedelta_range:
- self.idx = time_index(start=0, freq="T", periods=N)
+ self.idx = time_index(start=0, freq="min", periods=N)
else:
- self.idx = time_index(start="20140101", freq="T", periods=N)
+ self.idx = time_index(start="20140101", freq="min", periods=N)
self.exit = 10000
def time_iter(self, time_index):
@@ -149,7 +149,7 @@ class ResampleDataFrame:
param_names = ["method"]
def setup(self, method):
- rng = date_range(start="20130101", periods=100000, freq="50L")
+ rng = date_range(start="20130101", periods=100000, freq="50ms")
df = DataFrame(np.random.randn(100000, 2), index=rng)
self.resample = getattr(df.resample("1s"), method)
@@ -163,8 +163,8 @@ class ResampleSeries:
def setup(self, index, freq, method):
indexes = {
- "period": period_range(start="1/1/2000", end="1/1/2001", freq="T"),
- "datetime": date_range(start="1/1/2000", end="1/1/2001", freq="T"),
+ "period": period_range(start="1/1/2000", end="1/1/2001", freq="min"),
+ "datetime": date_range(start="1/1/2000", end="1/1/2001", freq="min"),
}
idx = indexes[index]
ts = Series(np.random.randn(len(idx)), index=idx)
@@ -178,7 +178,7 @@ class ResampleDatetetime64:
# GH 7754
def setup(self):
rng3 = date_range(
- start="2000-01-01 00:00:00", end="2000-01-01 10:00:00", freq="555000U"
+ start="2000-01-01 00:00:00", end="2000-01-01 10:00:00", freq="555000us"
)
self.dt_ts = Series(5, rng3, dtype="datetime64[ns]")
@@ -270,7 +270,7 @@ class DatetimeAccessor:
def setup(self, tz):
N = 100000
- self.series = Series(date_range(start="1/1/2000", periods=N, freq="T", tz=tz))
+ self.series = Series(date_range(start="1/1/2000", periods=N, freq="min", tz=tz))
def time_dt_accessor(self, tz):
self.series.dt
diff --git a/asv_bench/benchmarks/tslibs/timestamp.py b/asv_bench/benchmarks/tslibs/timestamp.py
index d7706a39dfae5..082220ee0dff2 100644
--- a/asv_bench/benchmarks/tslibs/timestamp.py
+++ b/asv_bench/benchmarks/tslibs/timestamp.py
@@ -136,10 +136,10 @@ def time_to_julian_date(self, tz):
self.ts.to_julian_date()
def time_floor(self, tz):
- self.ts.floor("5T")
+ self.ts.floor("5min")
def time_ceil(self, tz):
- self.ts.ceil("5T")
+ self.ts.ceil("5min")
class TimestampAcrossDst:
diff --git a/doc/source/user_guide/10min.rst b/doc/source/user_guide/10min.rst
index 1a891dca839e3..5def84b91705c 100644
--- a/doc/source/user_guide/10min.rst
+++ b/doc/source/user_guide/10min.rst
@@ -610,7 +610,7 @@ financial applications. See the :ref:`Time Series section <timeseries>`.
.. ipython:: python
- rng = pd.date_range("1/1/2012", periods=100, freq="S")
+ rng = pd.date_range("1/1/2012", periods=100, freq="s")
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
ts.resample("5Min").sum()
diff --git a/doc/source/user_guide/scale.rst b/doc/source/user_guide/scale.rst
index bc49c7f958cb7..b262de5d71439 100644
--- a/doc/source/user_guide/scale.rst
+++ b/doc/source/user_guide/scale.rst
@@ -40,7 +40,7 @@ Suppose our raw dataset on disk has many columns.
return df
timeseries = [
- make_timeseries(freq="1T", seed=i).rename(columns=lambda x: f"{x}_{i}")
+ make_timeseries(freq="1min", seed=i).rename(columns=lambda x: f"{x}_{i}")
for i in range(10)
]
ts_wide = pd.concat(timeseries, axis=1)
@@ -87,7 +87,7 @@ can store larger datasets in memory.
.. ipython:: python
:okwarning:
- ts = make_timeseries(freq="30S", seed=0)
+ ts = make_timeseries(freq="30s", seed=0)
ts.to_parquet("timeseries.parquet")
ts = pd.read_parquet("timeseries.parquet")
ts
@@ -173,7 +173,7 @@ files. Each file in the directory represents a different year of the entire data
pathlib.Path("data/timeseries").mkdir(exist_ok=True)
for i, (start, end) in enumerate(zip(starts, ends)):
- ts = make_timeseries(start=start, end=end, freq="1T", seed=i)
+ ts = make_timeseries(start=start, end=end, freq="1min", seed=i)
ts.to_parquet(f"data/timeseries/ts-{i:0>2d}.parquet")
diff --git a/doc/source/user_guide/timedeltas.rst b/doc/source/user_guide/timedeltas.rst
index a6eb96f91a4bf..cd567f8442671 100644
--- a/doc/source/user_guide/timedeltas.rst
+++ b/doc/source/user_guide/timedeltas.rst
@@ -390,7 +390,7 @@ The ``freq`` parameter can passed a variety of :ref:`frequency aliases <timeseri
.. ipython:: python
- pd.timedelta_range(start="1 days", end="2 days", freq="30T")
+ pd.timedelta_range(start="1 days", end="2 days", freq="30min")
pd.timedelta_range(start="1 days", periods=5, freq="2D5H")
diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst
index bc6a3926188f1..2e9d683cae417 100644
--- a/doc/source/user_guide/timeseries.rst
+++ b/doc/source/user_guide/timeseries.rst
@@ -603,7 +603,7 @@ would include matching times on an included date:
dft = pd.DataFrame(
np.random.randn(100000, 1),
columns=["A"],
- index=pd.date_range("20130101", periods=100000, freq="T"),
+ index=pd.date_range("20130101", periods=100000, freq="min"),
)
dft
dft.loc["2013"]
@@ -905,11 +905,11 @@ into ``freq`` keyword arguments. The available date offsets and associated frequ
:class:`~pandas.tseries.offsets.CustomBusinessHour`, ``'CBH'``, "custom business hour"
:class:`~pandas.tseries.offsets.Day`, ``'D'``, "one absolute day"
:class:`~pandas.tseries.offsets.Hour`, ``'H'``, "one hour"
- :class:`~pandas.tseries.offsets.Minute`, ``'T'`` or ``'min'``,"one minute"
- :class:`~pandas.tseries.offsets.Second`, ``'S'``, "one second"
- :class:`~pandas.tseries.offsets.Milli`, ``'L'`` or ``'ms'``, "one millisecond"
- :class:`~pandas.tseries.offsets.Micro`, ``'U'`` or ``'us'``, "one microsecond"
- :class:`~pandas.tseries.offsets.Nano`, ``'N'``, "one nanosecond"
+ :class:`~pandas.tseries.offsets.Minute`, ``'min'``,"one minute"
+ :class:`~pandas.tseries.offsets.Second`, ``'s'``, "one second"
+ :class:`~pandas.tseries.offsets.Milli`, ``'ms'``, "one millisecond"
+ :class:`~pandas.tseries.offsets.Micro`, ``'us'``, "one microsecond"
+ :class:`~pandas.tseries.offsets.Nano`, ``'ns'``, "one nanosecond"
``DateOffsets`` additionally have :meth:`rollforward` and :meth:`rollback`
methods for moving a date forward or backward respectively to a valid offset
@@ -1264,11 +1264,16 @@ frequencies. We will refer to these aliases as *offset aliases*.
"BAS, BYS", "business year start frequency"
"BH", "business hour frequency"
"H", "hourly frequency"
- "T, min", "minutely frequency"
- "S", "secondly frequency"
- "L, ms", "milliseconds"
- "U, us", "microseconds"
- "N", "nanoseconds"
+ "min", "minutely frequency"
+ "s", "secondly frequency"
+ "ms", "milliseconds"
+ "us", "microseconds"
+ "ns", "nanoseconds"
+
+.. deprecated:: 2.2.0
+
+ Aliases ``T``, ``S``, ``L``, ``U``, and ``N`` are deprecated in favour of the aliases
+ ``min``, ``s``, ``ms``, ``us``, and ``ns``.
.. note::
@@ -1318,11 +1323,16 @@ frequencies. We will refer to these aliases as *period aliases*.
"Q", "quarterly frequency"
"A, Y", "yearly frequency"
"H", "hourly frequency"
- "T, min", "minutely frequency"
- "S", "secondly frequency"
- "L, ms", "milliseconds"
- "U, us", "microseconds"
- "N", "nanoseconds"
+ "min", "minutely frequency"
+ "s", "secondly frequency"
+ "ms", "milliseconds"
+ "us", "microseconds"
+ "ns", "nanoseconds"
+
+.. deprecated:: 2.2.0
+
+ Aliases ``T``, ``S``, ``L``, ``U``, and ``N`` are deprecated in favour of the aliases
+ ``min``, ``s``, ``ms``, ``us``, and ``ns``.
Combining aliases
@@ -1343,7 +1353,7 @@ You can combine together day and intraday offsets:
pd.date_range(start, periods=10, freq="2h20min")
- pd.date_range(start, periods=10, freq="1D10U")
+ pd.date_range(start, periods=10, freq="1D10us")
Anchored offsets
~~~~~~~~~~~~~~~~
@@ -1635,7 +1645,7 @@ Basics
.. ipython:: python
- rng = pd.date_range("1/1/2012", periods=100, freq="S")
+ rng = pd.date_range("1/1/2012", periods=100, freq="s")
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
@@ -1725,11 +1735,11 @@ For upsampling, you can specify a way to upsample and the ``limit`` parameter to
# from secondly to every 250 milliseconds
- ts[:2].resample("250L").asfreq()
+ ts[:2].resample("250ms").asfreq()
- ts[:2].resample("250L").ffill()
+ ts[:2].resample("250ms").ffill()
- ts[:2].resample("250L").ffill(limit=2)
+ ts[:2].resample("250ms").ffill(limit=2)
Sparse resampling
~~~~~~~~~~~~~~~~~
@@ -1752,7 +1762,7 @@ If we want to resample to the full range of the series:
.. ipython:: python
- ts.resample("3T").sum()
+ ts.resample("3min").sum()
We can instead only resample those groups where we have points as follows:
@@ -1766,7 +1776,7 @@ We can instead only resample those groups where we have points as follows:
freq = to_offset(freq)
return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)
- ts.groupby(partial(round, freq="3T")).sum()
+ ts.groupby(partial(round, freq="3min")).sum()
.. _timeseries.aggregate:
@@ -1783,10 +1793,10 @@ Resampling a ``DataFrame``, the default will be to act on all columns with the s
df = pd.DataFrame(
np.random.randn(1000, 3),
- index=pd.date_range("1/1/2012", freq="S", periods=1000),
+ index=pd.date_range("1/1/2012", freq="s", periods=1000),
columns=["A", "B", "C"],
)
- r = df.resample("3T")
+ r = df.resample("3min")
r.mean()
We can select a specific column or columns using standard getitem.
@@ -2155,7 +2165,7 @@ Passing a string representing a lower frequency than ``PeriodIndex`` returns par
dfp = pd.DataFrame(
np.random.randn(600, 1),
columns=["A"],
- index=pd.period_range("2013-01-01 9:00", periods=600, freq="T"),
+ index=pd.period_range("2013-01-01 9:00", periods=600, freq="min"),
)
dfp
dfp.loc["2013-01-01 10H"]
diff --git a/doc/source/whatsnew/v0.13.0.rst b/doc/source/whatsnew/v0.13.0.rst
index c60a821968c0c..49a779be82e75 100644
--- a/doc/source/whatsnew/v0.13.0.rst
+++ b/doc/source/whatsnew/v0.13.0.rst
@@ -642,9 +642,16 @@ Enhancements
Period conversions in the range of seconds and below were reworked and extended
up to nanoseconds. Periods in the nanosecond range are now available.
- .. ipython:: python
+ .. code-block:: python
- pd.date_range('2013-01-01', periods=5, freq='5N')
+ In [79]: pd.date_range('2013-01-01', periods=5, freq='5N')
+ Out[79]:
+ DatetimeIndex([ '2013-01-01 00:00:00',
+ '2013-01-01 00:00:00.000000005',
+ '2013-01-01 00:00:00.000000010',
+ '2013-01-01 00:00:00.000000015',
+ '2013-01-01 00:00:00.000000020'],
+ dtype='datetime64[ns]', freq='5N')
or with frequency as offset
diff --git a/doc/source/whatsnew/v0.15.0.rst b/doc/source/whatsnew/v0.15.0.rst
index 6b962cbb49c74..dffb4c7b9ff9e 100644
--- a/doc/source/whatsnew/v0.15.0.rst
+++ b/doc/source/whatsnew/v0.15.0.rst
@@ -185,7 +185,29 @@ Constructing a ``TimedeltaIndex`` with a regular range
.. ipython:: python
pd.timedelta_range('1 days', periods=5, freq='D')
- pd.timedelta_range(start='1 days', end='2 days', freq='30T')
+
+.. code-block:: python
+
+ In [20]: pd.timedelta_range(start='1 days', end='2 days', freq='30T')
+ Out[20]:
+ TimedeltaIndex(['1 days 00:00:00', '1 days 00:30:00', '1 days 01:00:00',
+ '1 days 01:30:00', '1 days 02:00:00', '1 days 02:30:00',
+ '1 days 03:00:00', '1 days 03:30:00', '1 days 04:00:00',
+ '1 days 04:30:00', '1 days 05:00:00', '1 days 05:30:00',
+ '1 days 06:00:00', '1 days 06:30:00', '1 days 07:00:00',
+ '1 days 07:30:00', '1 days 08:00:00', '1 days 08:30:00',
+ '1 days 09:00:00', '1 days 09:30:00', '1 days 10:00:00',
+ '1 days 10:30:00', '1 days 11:00:00', '1 days 11:30:00',
+ '1 days 12:00:00', '1 days 12:30:00', '1 days 13:00:00',
+ '1 days 13:30:00', '1 days 14:00:00', '1 days 14:30:00',
+ '1 days 15:00:00', '1 days 15:30:00', '1 days 16:00:00',
+ '1 days 16:30:00', '1 days 17:00:00', '1 days 17:30:00',
+ '1 days 18:00:00', '1 days 18:30:00', '1 days 19:00:00',
+ '1 days 19:30:00', '1 days 20:00:00', '1 days 20:30:00',
+ '1 days 21:00:00', '1 days 21:30:00', '1 days 22:00:00',
+ '1 days 22:30:00', '1 days 23:00:00', '1 days 23:30:00',
+ '2 days 00:00:00'],
+ dtype='timedelta64[ns]', freq='30T')
You can now use a ``TimedeltaIndex`` as the index of a pandas object
diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index e8e2b8d0ef908..391316acd43f7 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -92,6 +92,7 @@ Other API changes
Deprecations
~~~~~~~~~~~~
+- Changed :meth:`Timedelta.resolution_string` to return ``min``, ``s``, ``ms``, ``us``, and ``ns`` instead of ``T``, ``S``, ``L``, ``U``, and ``N``, for compatibility with respective deprecations in frequency aliases (:issue:`52536`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_clipboard`. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_csv` except ``path_or_buf``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_dict`. (:issue:`54229`)
@@ -104,6 +105,9 @@ Deprecations
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path``. (:issue:`54229`)
- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_string` except ``buf``. (:issue:`54229`)
- Deprecated not passing a tuple to :class:`DataFrameGroupBy.get_group` or :class:`SeriesGroupBy.get_group` when grouping by a length-1 list-like (:issue:`25971`)
+- Deprecated strings ``S``, ``U``, and ``N`` denoting units in :func:`to_timedelta` (:issue:`52536`)
+- Deprecated strings ``T``, ``S``, ``L``, ``U``, and ``N`` denoting frequencies in :class:`Minute`, :class:`Second`, :class:`Milli`, :class:`Micro`, :class:`Nano` (:issue:`52536`)
+- Deprecated strings ``T``, ``S``, ``L``, ``U``, and ``N`` denoting units in :class:`Timedelta` (:issue:`52536`)
- Deprecated the extension test classes ``BaseNoReduceTests``, ``BaseBooleanReduceTests``, and ``BaseNumericReduceTests``, use ``BaseReduceTests`` instead (:issue:`54663`)
-
diff --git a/pandas/_libs/tslibs/dtypes.pxd b/pandas/_libs/tslibs/dtypes.pxd
index a8dd88c763c14..37c149343e40f 100644
--- a/pandas/_libs/tslibs/dtypes.pxd
+++ b/pandas/_libs/tslibs/dtypes.pxd
@@ -11,6 +11,7 @@ cpdef int64_t periods_per_second(NPY_DATETIMEUNIT reso) except? -1
cpdef NPY_DATETIMEUNIT get_supported_reso(NPY_DATETIMEUNIT reso)
cpdef bint is_supported_unit(NPY_DATETIMEUNIT reso)
+cdef dict c_DEPR_ABBREVS
cdef dict attrname_to_abbrevs
cdef dict npy_unit_to_attrname
cdef dict attrname_to_npy_unit
diff --git a/pandas/_libs/tslibs/dtypes.pyi b/pandas/_libs/tslibs/dtypes.pyi
index bea3e18273318..9974d2e648398 100644
--- a/pandas/_libs/tslibs/dtypes.pyi
+++ b/pandas/_libs/tslibs/dtypes.pyi
@@ -1,9 +1,12 @@
from enum import Enum
+from pandas._libs.tslibs.timedeltas import UnitChoices
+
# These are not public API, but are exposed in the .pyi file because they
# are imported in tests.
_attrname_to_abbrevs: dict[str, str]
_period_code_map: dict[str, int]
+DEPR_ABBREVS: dict[str, UnitChoices]
def periods_per_day(reso: int) -> int: ...
def periods_per_second(reso: int) -> int: ...
diff --git a/pandas/_libs/tslibs/dtypes.pyx b/pandas/_libs/tslibs/dtypes.pyx
index 19f4c83e6cecf..bafde9f3b237b 100644
--- a/pandas/_libs/tslibs/dtypes.pyx
+++ b/pandas/_libs/tslibs/dtypes.pyx
@@ -1,6 +1,9 @@
# period frequency constants corresponding to scikits timeseries
# originals
from enum import Enum
+import warnings
+
+from pandas.util._exceptions import find_stack_level
from pandas._libs.tslibs.np_datetime cimport (
NPY_DATETIMEUNIT,
@@ -141,11 +144,11 @@ _period_code_map = {
"B": PeriodDtypeCode.B, # Business days
"D": PeriodDtypeCode.D, # Daily
"H": PeriodDtypeCode.H, # Hourly
- "T": PeriodDtypeCode.T, # Minutely
- "S": PeriodDtypeCode.S, # Secondly
- "L": PeriodDtypeCode.L, # Millisecondly
- "U": PeriodDtypeCode.U, # Microsecondly
- "N": PeriodDtypeCode.N, # Nanosecondly
+ "min": PeriodDtypeCode.T, # Minutely
+ "s": PeriodDtypeCode.S, # Secondly
+ "ms": PeriodDtypeCode.L, # Millisecondly
+ "us": PeriodDtypeCode.U, # Microsecondly
+ "ns": PeriodDtypeCode.N, # Nanosecondly
}
_reverse_period_code_map = {
@@ -174,15 +177,29 @@ _attrname_to_abbrevs = {
"month": "M",
"day": "D",
"hour": "H",
- "minute": "T",
- "second": "S",
- "millisecond": "L",
- "microsecond": "U",
- "nanosecond": "N",
+ "minute": "min",
+ "second": "s",
+ "millisecond": "ms",
+ "microsecond": "us",
+ "nanosecond": "ns",
}
cdef dict attrname_to_abbrevs = _attrname_to_abbrevs
cdef dict _abbrev_to_attrnames = {v: k for k, v in attrname_to_abbrevs.items()}
+# Map deprecated resolution abbreviations to correct resolution abbreviations
+DEPR_ABBREVS: dict[str, str] = {
+ "T": "min",
+ "t": "min",
+ "S": "s",
+ "L": "ms",
+ "l": "ms",
+ "U": "us",
+ "u": "us",
+ "N": "ns",
+ "n": "ns",
+}
+cdef dict c_DEPR_ABBREVS = DEPR_ABBREVS
+
class FreqGroup(Enum):
# Mirrors c_FreqGroup in the .pxd file
@@ -273,6 +290,15 @@ class Resolution(Enum):
True
"""
try:
+ if freq in DEPR_ABBREVS:
+ warnings.warn(
+ f"\'{freq}\' is deprecated and will be removed in a future "
+ f"version. Please use \'{DEPR_ABBREVS.get(freq)}\' "
+ f"instead of \'{freq}\'.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ freq = DEPR_ABBREVS[freq]
attr_name = _abbrev_to_attrnames[freq]
except KeyError:
# For quarterly and yearly resolutions, we need to chop off
@@ -283,6 +309,15 @@ class Resolution(Enum):
if split_freq[1] not in _month_names:
# i.e. we want e.g. "Q-DEC", not "Q-INVALID"
raise
+ if split_freq[0] in DEPR_ABBREVS:
+ warnings.warn(
+ f"\'{split_freq[0]}\' is deprecated and will be removed in a "
+ f"future version. Please use \'{DEPR_ABBREVS.get(split_freq[0])}\' "
+ f"instead of \'{split_freq[0]}\'.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ split_freq[0] = DEPR_ABBREVS[split_freq[0]]
attr_name = _abbrev_to_attrnames[split_freq[0]]
return cls.from_attrname(attr_name)
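The fallback above handles anchored aliases such as `"Q-DEC"`: the string is split on `"-"` and only the base is used for the attribute lookup. A hypothetical standalone sketch (the mapping is a small illustrative subset of `_abbrev_to_attrnames`, and the real code additionally validates the month suffix):

```python
# Illustrative subset of the abbreviation-to-attribute-name mapping.
_ABBREV_TO_ATTR = {"Q": "quarter", "A": "year", "min": "minute", "s": "second"}

def attrname_from_freqstr(freq: str) -> str:
    """Resolve a frequency alias, falling back to the base of anchored ones."""
    try:
        return _ABBREV_TO_ATTR[freq]
    except KeyError:
        # e.g. "Q-DEC" -> base "Q"; re-raise if there is no anchor suffix.
        base, sep, suffix = freq.partition("-")
        if not sep or not suffix:
            raise
        return _ABBREV_TO_ATTR[base]
```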
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index 7d75fa3114d2b..bb497f2e17b93 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -1003,23 +1003,23 @@ timedelta}, default 'raise'
>>> ts.round(freq='H') # hour
Timestamp('2020-03-14 16:00:00')
- >>> ts.round(freq='T') # minute
+ >>> ts.round(freq='min') # minute
Timestamp('2020-03-14 15:33:00')
- >>> ts.round(freq='S') # seconds
+ >>> ts.round(freq='s') # seconds
Timestamp('2020-03-14 15:32:52')
- >>> ts.round(freq='L') # milliseconds
+ >>> ts.round(freq='ms') # milliseconds
Timestamp('2020-03-14 15:32:52.193000')
- ``freq`` can also be a multiple of a single unit, like '5T' (i.e. 5 minutes):
+ ``freq`` can also be a multiple of a single unit, like '5min' (i.e. 5 minutes):
- >>> ts.round(freq='5T')
+ >>> ts.round(freq='5min')
Timestamp('2020-03-14 15:35:00')
- or a combination of multiple units, like '1H30T' (i.e. 1 hour and 30 minutes):
+ or a combination of multiple units, like '1H30min' (i.e. 1 hour and 30 minutes):
- >>> ts.round(freq='1H30T')
+ >>> ts.round(freq='1H30min')
Timestamp('2020-03-14 15:00:00')
Analogous for ``pd.NaT``:
@@ -1092,23 +1092,23 @@ timedelta}, default 'raise'
>>> ts.floor(freq='H') # hour
Timestamp('2020-03-14 15:00:00')
- >>> ts.floor(freq='T') # minute
+ >>> ts.floor(freq='min') # minute
Timestamp('2020-03-14 15:32:00')
- >>> ts.floor(freq='S') # seconds
+ >>> ts.floor(freq='s') # seconds
Timestamp('2020-03-14 15:32:52')
- >>> ts.floor(freq='N') # nanoseconds
+ >>> ts.floor(freq='ns') # nanoseconds
Timestamp('2020-03-14 15:32:52.192548651')
- ``freq`` can also be a multiple of a single unit, like '5T' (i.e. 5 minutes):
+ ``freq`` can also be a multiple of a single unit, like '5min' (i.e. 5 minutes):
- >>> ts.floor(freq='5T')
+ >>> ts.floor(freq='5min')
Timestamp('2020-03-14 15:30:00')
- or a combination of multiple units, like '1H30T' (i.e. 1 hour and 30 minutes):
+ or a combination of multiple units, like '1H30min' (i.e. 1 hour and 30 minutes):
- >>> ts.floor(freq='1H30T')
+ >>> ts.floor(freq='1H30min')
Timestamp('2020-03-14 15:00:00')
Analogous for ``pd.NaT``:
@@ -1181,23 +1181,23 @@ timedelta}, default 'raise'
>>> ts.ceil(freq='H') # hour
Timestamp('2020-03-14 16:00:00')
- >>> ts.ceil(freq='T') # minute
+ >>> ts.ceil(freq='min') # minute
Timestamp('2020-03-14 15:33:00')
- >>> ts.ceil(freq='S') # seconds
+ >>> ts.ceil(freq='s') # seconds
Timestamp('2020-03-14 15:32:53')
- >>> ts.ceil(freq='U') # microseconds
+ >>> ts.ceil(freq='us') # microseconds
Timestamp('2020-03-14 15:32:52.192549')
- ``freq`` can also be a multiple of a single unit, like '5T' (i.e. 5 minutes):
+ ``freq`` can also be a multiple of a single unit, like '5min' (i.e. 5 minutes):
- >>> ts.ceil(freq='5T')
+ >>> ts.ceil(freq='5min')
Timestamp('2020-03-14 15:35:00')
- or a combination of multiple units, like '1H30T' (i.e. 1 hour and 30 minutes):
+ or a combination of multiple units, like '1H30min' (i.e. 1 hour and 30 minutes):
- >>> ts.ceil(freq='1H30T')
+ >>> ts.ceil(freq='1H30min')
Timestamp('2020-03-14 16:30:00')
Analogous for ``pd.NaT``:
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index ac08f57844b9a..7c1187820ea13 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -1,5 +1,8 @@
import re
import time
+import warnings
+
+from pandas.util._exceptions import find_stack_level
cimport cython
from cpython.datetime cimport (
@@ -50,7 +53,10 @@ from pandas._libs.tslibs.ccalendar cimport (
get_lastbday,
)
from pandas._libs.tslibs.conversion cimport localize_pydatetime
-from pandas._libs.tslibs.dtypes cimport periods_per_day
+from pandas._libs.tslibs.dtypes cimport (
+ c_DEPR_ABBREVS,
+ periods_per_day,
+)
from pandas._libs.tslibs.nattype cimport (
NPY_NAT,
c_NaT as NaT,
@@ -621,10 +627,10 @@ cdef class BaseOffset:
'2BH'
>>> pd.offsets.Nano().freqstr
- 'N'
+ 'ns'
>>> pd.offsets.Nano(-3).freqstr
- '-3N'
+ '-3ns'
"""
try:
code = self.rule_code
@@ -1191,7 +1197,7 @@ cdef class Minute(Tick):
Timestamp('2022-12-09 14:50:00')
"""
_nanos_inc = 60 * 1_000_000_000
- _prefix = "T"
+ _prefix = "min"
_period_dtype_code = PeriodDtypeCode.T
_creso = NPY_DATETIMEUNIT.NPY_FR_m
@@ -1227,28 +1233,28 @@ cdef class Second(Tick):
Timestamp('2022-12-09 14:59:50')
"""
_nanos_inc = 1_000_000_000
- _prefix = "S"
+ _prefix = "s"
_period_dtype_code = PeriodDtypeCode.S
_creso = NPY_DATETIMEUNIT.NPY_FR_s
cdef class Milli(Tick):
_nanos_inc = 1_000_000
- _prefix = "L"
+ _prefix = "ms"
_period_dtype_code = PeriodDtypeCode.L
_creso = NPY_DATETIMEUNIT.NPY_FR_ms
cdef class Micro(Tick):
_nanos_inc = 1000
- _prefix = "U"
+ _prefix = "us"
_period_dtype_code = PeriodDtypeCode.U
_creso = NPY_DATETIMEUNIT.NPY_FR_us
cdef class Nano(Tick):
_nanos_inc = 1
- _prefix = "N"
+ _prefix = "ns"
_period_dtype_code = PeriodDtypeCode.N
_creso = NPY_DATETIMEUNIT.NPY_FR_ns
@@ -4431,16 +4437,16 @@ prefix_mapping = {
CustomBusinessHour, # 'CBH'
MonthEnd, # 'M'
MonthBegin, # 'MS'
- Nano, # 'N'
+ Nano, # 'ns'
SemiMonthEnd, # 'SM'
SemiMonthBegin, # 'SMS'
Week, # 'W'
- Second, # 'S'
- Minute, # 'T'
- Micro, # 'U'
+ Second, # 's'
+ Minute, # 'min'
+ Micro, # 'us'
QuarterEnd, # 'Q'
QuarterBegin, # 'QS'
- Milli, # 'L'
+ Milli, # 'ms'
Hour, # 'H'
Day, # 'D'
WeekOfMonth, # 'WOM'
@@ -4467,14 +4473,14 @@ _lite_rule_alias = {
"BAS": "BAS-JAN", # BYearBegin(month=1),
"BYS": "BAS-JAN",
- "Min": "T",
- "min": "T",
- "ms": "L",
- "us": "U",
- "ns": "N",
+ "Min": "min",
+ "min": "min",
+ "ms": "ms",
+ "us": "us",
+ "ns": "ns",
}
-_dont_uppercase = {"MS", "ms"}
+_dont_uppercase = {"MS", "ms", "s"}
INVALID_FREQ_ERR_MSG = "Invalid frequency: {0}"
@@ -4591,7 +4597,16 @@ cpdef to_offset(freq):
if not stride:
stride = 1
- if prefix in {"D", "H", "T", "S", "L", "U", "N"}:
+ if prefix in c_DEPR_ABBREVS:
+ warnings.warn(
+ f"\'{prefix}\' is deprecated and will be removed in a "
+ f"future version. Please use \'{c_DEPR_ABBREVS.get(prefix)}\' "
+ f"instead of \'{prefix}\'.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ prefix = c_DEPR_ABBREVS[prefix]
+ if prefix in {"D", "H", "min", "s", "ms", "us", "ns"}:
# For these prefixes, we have something like "3H" or
# "2.5T", so we can construct a Timedelta with the
# matching unit and get our offset from delta_to_tick
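The alias-normalization shim added to `to_offset` above can be sketched standalone (the function name and the trimmed mapping are illustrative, not pandas API; the real code also maps lowercase variants and uses `find_stack_level` for the warning's `stacklevel`):

```python
import warnings

# Deprecated single-letter frequency aliases and their replacements,
# mirroring a subset of DEPR_ABBREVS.
_DEPR = {"T": "min", "S": "s", "L": "ms", "U": "us", "N": "ns"}

def normalize_alias(prefix: str) -> str:
    """Return the modern alias, emitting a FutureWarning for deprecated ones."""
    if prefix in _DEPR:
        warnings.warn(
            f"'{prefix}' is deprecated and will be removed in a future "
            f"version. Please use '{_DEPR[prefix]}' instead of '{prefix}'.",
            FutureWarning,
        )
        return _DEPR[prefix]
    return prefix
```

Parsing then proceeds with the normalized prefix, so `"5T"` behaves like `"5min"` while still warning the caller.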
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index ffa9a67542e21..2d9fe93c397cb 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -1,6 +1,8 @@
import collections
import warnings
+from pandas.util._exceptions import find_stack_level
+
cimport cython
from cpython.object cimport (
Py_EQ,
@@ -41,6 +43,7 @@ from pandas._libs.tslibs.conversion cimport (
precision_from_unit,
)
from pandas._libs.tslibs.dtypes cimport (
+ c_DEPR_ABBREVS,
get_supported_reso,
is_supported_unit,
npy_unit_to_abbrev,
@@ -124,7 +127,6 @@ cdef dict timedelta_abbrevs = {
"minute": "m",
"min": "m",
"minutes": "m",
- "t": "m",
"s": "s",
"seconds": "s",
"sec": "s",
@@ -134,20 +136,17 @@ cdef dict timedelta_abbrevs = {
"millisecond": "ms",
"milli": "ms",
"millis": "ms",
- "l": "ms",
"us": "us",
"microseconds": "us",
"microsecond": "us",
"µs": "us",
"micro": "us",
"micros": "us",
- "u": "us",
"ns": "ns",
"nanoseconds": "ns",
"nano": "ns",
"nanos": "ns",
"nanosecond": "ns",
- "n": "ns",
}
_no_input = object()
@@ -725,6 +724,15 @@ cpdef inline str parse_timedelta_unit(str unit):
return "ns"
elif unit == "M":
return unit
+ elif unit in c_DEPR_ABBREVS:
+ warnings.warn(
+ f"\'{unit}\' is deprecated and will be removed in a "
+ f"future version. Please use \'{c_DEPR_ABBREVS.get(unit)}\' "
+ f"instead of \'{unit}\'.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ unit = c_DEPR_ABBREVS[unit]
try:
return timedelta_abbrevs[unit.lower()]
except KeyError:
@@ -901,7 +909,7 @@ cdef int64_t parse_iso_format_string(str ts) except? -1:
elif c == ".":
# append any seconds
if len(number):
- r = timedelta_from_spec(number, "0", "S")
+ r = timedelta_from_spec(number, "0", "s")
result += timedelta_as_neg(r, neg)
unit, number = [], []
have_dot = 1
@@ -918,7 +926,7 @@ cdef int64_t parse_iso_format_string(str ts) except? -1:
r = timedelta_from_spec(number, "0", dec_unit)
result += timedelta_as_neg(r, neg)
else: # seconds
- r = timedelta_from_spec(number, "0", "S")
+ r = timedelta_from_spec(number, "0", "s")
result += timedelta_as_neg(r, neg)
else:
raise ValueError(err_msg)
@@ -1435,11 +1443,11 @@ cdef class _Timedelta(timedelta):
* Days: 'D'
* Hours: 'H'
- * Minutes: 'T'
- * Seconds: 'S'
- * Milliseconds: 'L'
- * Microseconds: 'U'
- * Nanoseconds: 'N'
+ * Minutes: 'min'
+ * Seconds: 's'
+ * Milliseconds: 'ms'
+ * Microseconds: 'us'
+ * Nanoseconds: 'ns'
Returns
-------
@@ -1450,31 +1458,31 @@ cdef class _Timedelta(timedelta):
--------
>>> td = pd.Timedelta('1 days 2 min 3 us 42 ns')
>>> td.resolution_string
- 'N'
+ 'ns'
>>> td = pd.Timedelta('1 days 2 min 3 us')
>>> td.resolution_string
- 'U'
+ 'us'
>>> td = pd.Timedelta('2 min 3 s')
>>> td.resolution_string
- 'S'
+ 's'
>>> td = pd.Timedelta(36, unit='us')
>>> td.resolution_string
- 'U'
+ 'us'
"""
self._ensure_components()
if self._ns:
- return "N"
+ return "ns"
elif self._us:
- return "U"
+ return "us"
elif self._ms:
- return "L"
+ return "ms"
elif self._s:
- return "S"
+ return "s"
elif self._m:
- return "T"
+ return "min"
elif self._h:
return "H"
else:
@@ -1706,15 +1714,20 @@ class Timedelta(_Timedelta):
Possible values:
- * 'W', 'D', 'T', 'S', 'L', 'U', or 'N'
- * 'days' or 'day'
+ * 'W', or 'D'
+ * 'days', or 'day'
* 'hours', 'hour', 'hr', or 'h'
* 'minutes', 'minute', 'min', or 'm'
- * 'seconds', 'second', or 'sec'
- * 'milliseconds', 'millisecond', 'millis', or 'milli'
- * 'microseconds', 'microsecond', 'micros', or 'micro'
+ * 'seconds', 'second', 'sec', or 's'
+ * 'milliseconds', 'millisecond', 'millis', 'milli', or 'ms'
+ * 'microseconds', 'microsecond', 'micros', 'micro', or 'us'
* 'nanoseconds', 'nanosecond', 'nanos', 'nano', or 'ns'.
+ .. deprecated:: 2.2.0
+
+ Values ``T``, ``S``, ``L``, ``U``, and ``N`` are deprecated in favour of
+ the values ``min``, ``s``, ``ms``, ``us``, and ``ns``.
+
**kwargs
Available kwargs: {days, seconds, microseconds,
milliseconds, minutes, hours, weeks}.
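The unit list in the `Timedelta` docstring above can be exercised directly; a small sketch (pandas assumed installed) showing that the long spellings and the short aliases construct identical values:

```python
import pandas as pd

# 1500 milliseconds and 1.5 seconds are the same duration,
# whichever unit spelling is used.
a = pd.Timedelta(1500, unit="ms")
b = pd.Timedelta(1.5, unit="s")
assert a == b
print(a)  # 0 days 00:00:01.500000
```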
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 844fc8f0ed187..944a2b0e97382 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -1997,23 +1997,23 @@ timedelta}, default 'raise'
>>> ts.round(freq='H') # hour
Timestamp('2020-03-14 16:00:00')
- >>> ts.round(freq='T') # minute
+ >>> ts.round(freq='min') # minute
Timestamp('2020-03-14 15:33:00')
- >>> ts.round(freq='S') # seconds
+ >>> ts.round(freq='s') # seconds
Timestamp('2020-03-14 15:32:52')
- >>> ts.round(freq='L') # milliseconds
+ >>> ts.round(freq='ms') # milliseconds
Timestamp('2020-03-14 15:32:52.193000')
- ``freq`` can also be a multiple of a single unit, like '5T' (i.e. 5 minutes):
+ ``freq`` can also be a multiple of a single unit, like '5min' (i.e. 5 minutes):
- >>> ts.round(freq='5T')
+ >>> ts.round(freq='5min')
Timestamp('2020-03-14 15:35:00')
- or a combination of multiple units, like '1H30T' (i.e. 1 hour and 30 minutes):
+ or a combination of multiple units, like '1H30min' (i.e. 1 hour and 30 minutes):
- >>> ts.round(freq='1H30T')
+ >>> ts.round(freq='1H30min')
Timestamp('2020-03-14 15:00:00')
Analogous for ``pd.NaT``:
@@ -2088,23 +2088,23 @@ timedelta}, default 'raise'
>>> ts.floor(freq='H') # hour
Timestamp('2020-03-14 15:00:00')
- >>> ts.floor(freq='T') # minute
+ >>> ts.floor(freq='min') # minute
Timestamp('2020-03-14 15:32:00')
- >>> ts.floor(freq='S') # seconds
+ >>> ts.floor(freq='s') # seconds
Timestamp('2020-03-14 15:32:52')
- >>> ts.floor(freq='N') # nanoseconds
+ >>> ts.floor(freq='ns') # nanoseconds
Timestamp('2020-03-14 15:32:52.192548651')
- ``freq`` can also be a multiple of a single unit, like '5T' (i.e. 5 minutes):
+ ``freq`` can also be a multiple of a single unit, like '5min' (i.e. 5 minutes):
- >>> ts.floor(freq='5T')
+ >>> ts.floor(freq='5min')
Timestamp('2020-03-14 15:30:00')
- or a combination of multiple units, like '1H30T' (i.e. 1 hour and 30 minutes):
+ or a combination of multiple units, like '1H30min' (i.e. 1 hour and 30 minutes):
- >>> ts.floor(freq='1H30T')
+ >>> ts.floor(freq='1H30min')
Timestamp('2020-03-14 15:00:00')
Analogous for ``pd.NaT``:
@@ -2177,23 +2177,23 @@ timedelta}, default 'raise'
>>> ts.ceil(freq='H') # hour
Timestamp('2020-03-14 16:00:00')
- >>> ts.ceil(freq='T') # minute
+ >>> ts.ceil(freq='min') # minute
Timestamp('2020-03-14 15:33:00')
- >>> ts.ceil(freq='S') # seconds
+ >>> ts.ceil(freq='s') # seconds
Timestamp('2020-03-14 15:32:53')
- >>> ts.ceil(freq='U') # microseconds
+ >>> ts.ceil(freq='us') # microseconds
Timestamp('2020-03-14 15:32:52.192549')
- ``freq`` can also be a multiple of a single unit, like '5T' (i.e. 5 minutes):
+ ``freq`` can also be a multiple of a single unit, like '5min' (i.e. 5 minutes):
- >>> ts.ceil(freq='5T')
+ >>> ts.ceil(freq='5min')
Timestamp('2020-03-14 15:35:00')
- or a combination of multiple units, like '1H30T' (i.e. 1 hour and 30 minutes):
+ or a combination of multiple units, like '1H30min' (i.e. 1 hour and 30 minutes):
- >>> ts.ceil(freq='1H30T')
+ >>> ts.ceil(freq='1H30min')
Timestamp('2020-03-14 16:30:00')
Analogous for ``pd.NaT``:
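The `Timestamp.round`/`floor`/`ceil` docstring updates above only swap alias spellings; the behaviour is unchanged. A quick sketch (pandas assumed installed):

```python
import pandas as pd

ts = pd.Timestamp("2020-03-14 15:32:52.192548651")
# 'min' replaces the deprecated 'T' for minute frequency.
print(ts.round(freq="min"))   # 2020-03-14 15:33:00
print(ts.floor(freq="min"))   # 2020-03-14 15:32:00
```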
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 48ff769f6c737..b653f3f4e1eae 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -2462,11 +2462,11 @@ def _round_temporally(
"W": "week",
"D": "day",
"H": "hour",
- "T": "minute",
- "S": "second",
- "L": "millisecond",
- "U": "microsecond",
- "N": "nanosecond",
+ "min": "minute",
+ "s": "second",
+ "ms": "millisecond",
+ "us": "microsecond",
+ "ns": "nanosecond",
}
unit = pa_supported_unit.get(offset._prefix, None)
if unit is None:
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index c3b8d1c0e79e8..dd46e9ebc547f 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -1813,7 +1813,7 @@ def strftime(self, date_format: str) -> npt.NDArray[np.object_]:
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
- dtype='datetime64[ns]', freq='T')
+ dtype='datetime64[ns]', freq='min')
"""
_round_example = """>>> rng.round('H')
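A sketch of the `date_range` example from the updated docstring (pandas assumed installed; `freqstr` is deliberately not asserted because its spelling differs across pandas versions):

```python
import pandas as pd

rng = pd.date_range("2018-01-01 11:59:00", periods=3, freq="min")
# Three timestamps, one minute apart.
assert len(rng) == 3
assert rng[1] - rng[0] == pd.Timedelta(minutes=1)
```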
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 8ad51e4a90027..dd2d7c0060392 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -1589,7 +1589,7 @@ def isocalendar(self) -> DataFrame:
Examples
--------
>>> datetime_series = pd.Series(
- ... pd.date_range("2000-01-01", periods=3, freq="T")
+ ... pd.date_range("2000-01-01", periods=3, freq="min")
... )
>>> datetime_series
0 2000-01-01 00:00:00
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index a81609e1bb618..b7b81b8271106 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -854,7 +854,7 @@ def to_pytimedelta(self) -> npt.NDArray[np.object_]:
--------
For Series:
- >>> ser = pd.Series(pd.to_timedelta([1, 2, 3], unit='S'))
+ >>> ser = pd.Series(pd.to_timedelta([1, 2, 3], unit='s'))
>>> ser
0 0 days 00:00:01
1 0 days 00:00:02
@@ -868,7 +868,7 @@ def to_pytimedelta(self) -> npt.NDArray[np.object_]:
For TimedeltaIndex:
- >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit='S')
+ >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit='s')
>>> tdelta_idx
TimedeltaIndex(['0 days 00:00:01', '0 days 00:00:02', '0 days 00:00:03'],
dtype='timedelta64[ns]', freq=None)
@@ -888,7 +888,7 @@ def to_pytimedelta(self) -> npt.NDArray[np.object_]:
--------
For Series:
- >>> ser = pd.Series(pd.to_timedelta([1, 2, 3], unit='U'))
+ >>> ser = pd.Series(pd.to_timedelta([1, 2, 3], unit='us'))
>>> ser
0 0 days 00:00:00.000001
1 0 days 00:00:00.000002
@@ -902,7 +902,7 @@ def to_pytimedelta(self) -> npt.NDArray[np.object_]:
For TimedeltaIndex:
- >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit='U')
+ >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit='us')
>>> tdelta_idx
TimedeltaIndex(['0 days 00:00:00.000001', '0 days 00:00:00.000002',
'0 days 00:00:00.000003'],
@@ -923,7 +923,7 @@ def to_pytimedelta(self) -> npt.NDArray[np.object_]:
--------
For Series:
- >>> ser = pd.Series(pd.to_timedelta([1, 2, 3], unit='N'))
+ >>> ser = pd.Series(pd.to_timedelta([1, 2, 3], unit='ns'))
>>> ser
0 0 days 00:00:00.000000001
1 0 days 00:00:00.000000002
@@ -937,7 +937,7 @@ def to_pytimedelta(self) -> npt.NDArray[np.object_]:
For TimedeltaIndex:
- >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit='N')
+ >>> tdelta_idx = pd.to_timedelta([1, 2, 3], unit='ns')
>>> tdelta_idx
TimedeltaIndex(['0 days 00:00:00.000000001', '0 days 00:00:00.000000002',
'0 days 00:00:00.000000003'],
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 59939057d4b37..3b9cb6741910c 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -61,6 +61,8 @@
is_list_like,
)
+from pandas.util import capitalize_first_letter
+
if not pa_version_under7p0:
import pyarrow as pa
@@ -1071,7 +1073,7 @@ def na_value(self) -> NaTType:
def __eq__(self, other: object) -> bool:
if isinstance(other, str):
- return other in [self.name, self.name.title()]
+ return other in [self.name, capitalize_first_letter(self.name)]
return super().__eq__(other)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a763d46e11939..baca4e0bc7b6b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8791,7 +8791,7 @@ def asfreq(
--------
Start by creating a series with 4 one minute timestamps.
- >>> index = pd.date_range('1/1/2000', periods=4, freq='T')
+ >>> index = pd.date_range('1/1/2000', periods=4, freq='min')
>>> series = pd.Series([0.0, None, 2.0, 3.0], index=index)
>>> df = pd.DataFrame({{'s': series}})
>>> df
@@ -8803,7 +8803,7 @@ def asfreq(
Upsample the series into 30 second bins.
- >>> df.asfreq(freq='30S')
+ >>> df.asfreq(freq='30s')
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
@@ -8815,7 +8815,7 @@ def asfreq(
Upsample again, providing a ``fill value``.
- >>> df.asfreq(freq='30S', fill_value=9.0)
+ >>> df.asfreq(freq='30s', fill_value=9.0)
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 9.0
@@ -8827,7 +8827,7 @@ def asfreq(
Upsample again, providing a ``method``.
- >>> df.asfreq(freq='30S', method='bfill')
+ >>> df.asfreq(freq='30s', method='bfill')
s
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
@@ -9103,7 +9103,7 @@ def resample(
--------
Start by creating a series with 9 one minute timestamps.
- >>> index = pd.date_range('1/1/2000', periods=9, freq='T')
+ >>> index = pd.date_range('1/1/2000', periods=9, freq='min')
>>> series = pd.Series(range(9), index=index)
>>> series
2000-01-01 00:00:00 0
@@ -9115,16 +9115,16 @@ def resample(
2000-01-01 00:06:00 6
2000-01-01 00:07:00 7
2000-01-01 00:08:00 8
- Freq: T, dtype: int64
+ Freq: min, dtype: int64
Downsample the series into 3 minute bins and sum the values
of the timestamps falling into a bin.
- >>> series.resample('3T').sum()
+ >>> series.resample('3min').sum()
2000-01-01 00:00:00 3
2000-01-01 00:03:00 12
2000-01-01 00:06:00 21
- Freq: 3T, dtype: int64
+ Freq: 3min, dtype: int64
Downsample the series into 3 minute bins as above, but label each
bin using the right edge instead of the left. Please note that the
@@ -9136,64 +9136,64 @@ def resample(
To include this value close the right side of the bin interval as
illustrated in the example below this one.
- >>> series.resample('3T', label='right').sum()
+ >>> series.resample('3min', label='right').sum()
2000-01-01 00:03:00 3
2000-01-01 00:06:00 12
2000-01-01 00:09:00 21
- Freq: 3T, dtype: int64
+ Freq: 3min, dtype: int64
Downsample the series into 3 minute bins as above, but close the right
side of the bin interval.
- >>> series.resample('3T', label='right', closed='right').sum()
+ >>> series.resample('3min', label='right', closed='right').sum()
2000-01-01 00:00:00 0
2000-01-01 00:03:00 6
2000-01-01 00:06:00 15
2000-01-01 00:09:00 15
- Freq: 3T, dtype: int64
+ Freq: 3min, dtype: int64
Upsample the series into 30 second bins.
- >>> series.resample('30S').asfreq()[0:5] # Select first 5 rows
+ >>> series.resample('30s').asfreq()[0:5] # Select first 5 rows
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 1.0
2000-01-01 00:01:30 NaN
2000-01-01 00:02:00 2.0
- Freq: 30S, dtype: float64
+ Freq: 30s, dtype: float64
Upsample the series into 30 second bins and fill the ``NaN``
values using the ``ffill`` method.
- >>> series.resample('30S').ffill()[0:5]
+ >>> series.resample('30s').ffill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 0
2000-01-01 00:01:00 1
2000-01-01 00:01:30 1
2000-01-01 00:02:00 2
- Freq: 30S, dtype: int64
+ Freq: 30s, dtype: int64
Upsample the series into 30 second bins and fill the
``NaN`` values using the ``bfill`` method.
- >>> series.resample('30S').bfill()[0:5]
+ >>> series.resample('30s').bfill()[0:5]
2000-01-01 00:00:00 0
2000-01-01 00:00:30 1
2000-01-01 00:01:00 1
2000-01-01 00:01:30 2
2000-01-01 00:02:00 2
- Freq: 30S, dtype: int64
+ Freq: 30s, dtype: int64
Pass a custom function via ``apply``
>>> def custom_resampler(arraylike):
... return np.sum(arraylike) + 5
...
- >>> series.resample('3T').apply(custom_resampler)
+ >>> series.resample('3min').apply(custom_resampler)
2000-01-01 00:00:00 8
2000-01-01 00:03:00 17
2000-01-01 00:06:00 26
- Freq: 3T, dtype: int64
+ Freq: 3min, dtype: int64
For a Series with a PeriodIndex, the keyword `convention` can be
used to control whether to use the start or end of `rule`.
@@ -9313,7 +9313,7 @@ def resample(
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
- Freq: 7T, dtype: int64
+ Freq: 7min, dtype: int64
>>> ts.resample('17min').sum()
2000-10-01 23:14:00 0
@@ -9321,7 +9321,7 @@ def resample(
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
- Freq: 17T, dtype: int64
+ Freq: 17min, dtype: int64
>>> ts.resample('17min', origin='epoch').sum()
2000-10-01 23:18:00 0
@@ -9329,7 +9329,7 @@ def resample(
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
- Freq: 17T, dtype: int64
+ Freq: 17min, dtype: int64
>>> ts.resample('17W', origin='2000-01-01').sum()
2000-01-02 0
@@ -9346,14 +9346,14 @@ def resample(
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
- Freq: 17T, dtype: int64
+ Freq: 17min, dtype: int64
>>> ts.resample('17min', offset='23h30min').sum()
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
- Freq: 17T, dtype: int64
+ Freq: 17min, dtype: int64
If you want to take the largest Timestamp as the end of the bins:
@@ -9362,7 +9362,7 @@ def resample(
2000-10-01 23:52:00 18
2000-10-02 00:09:00 27
2000-10-02 00:26:00 63
- Freq: 17T, dtype: int64
+ Freq: 17min, dtype: int64
In contrast with the `start_day`, you can use `end_day` to take the ceiling
midnight of the largest Timestamp as the end of the bins and drop the bins
@@ -9373,7 +9373,7 @@ def resample(
2000-10-01 23:55:00 15
2000-10-02 00:12:00 45
2000-10-02 00:29:00 45
- Freq: 17T, dtype: int64
+ Freq: 17min, dtype: int64
"""
from pandas.core.resample import get_resampler
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 49b47545d6297..68cca0f66a9f0 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -3550,7 +3550,7 @@ def resample(self, rule, *args, **kwargs):
Examples
--------
- >>> idx = pd.date_range('1/1/2000', periods=4, freq='T')
+ >>> idx = pd.date_range('1/1/2000', periods=4, freq='min')
>>> df = pd.DataFrame(data=4 * [range(2)],
... index=idx,
... columns=['a', 'b'])
@@ -3565,7 +3565,7 @@ def resample(self, rule, *args, **kwargs):
Downsample the DataFrame into 3 minute bins and sum the values of
the timestamps falling into a bin.
- >>> df.groupby('a').resample('3T').sum()
+ >>> df.groupby('a').resample('3min').sum()
a b
a
0 2000-01-01 00:00:00 0 2
@@ -3574,7 +3574,7 @@ def resample(self, rule, *args, **kwargs):
Upsample the series into 30 second bins.
- >>> df.groupby('a').resample('30S').sum()
+ >>> df.groupby('a').resample('30s').sum()
a b
a
0 2000-01-01 00:00:00 0 1
@@ -3597,7 +3597,7 @@ def resample(self, rule, *args, **kwargs):
Downsample the series into 3 minute bins as above, but close the right
side of the bin interval.
- >>> df.groupby('a').resample('3T', closed='right').sum()
+ >>> df.groupby('a').resample('3min', closed='right').sum()
a b
a
0 1999-12-31 23:57:00 0 1
@@ -3608,7 +3608,7 @@ def resample(self, rule, *args, **kwargs):
the bin interval, but label each bin using the right edge instead of
the left.
- >>> df.groupby('a').resample('3T', closed='right', label='right').sum()
+ >>> df.groupby('a').resample('3min', closed='right', label='right').sum()
a b
a
0 2000-01-01 00:00:00 0 1
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index 95cb114c1472a..add6c3ac4ec20 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -189,7 +189,7 @@ class Grouper:
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
- Freq: 7T, dtype: int64
+ Freq: 7min, dtype: int64
>>> ts.groupby(pd.Grouper(freq='17min')).sum()
2000-10-01 23:14:00 0
@@ -197,7 +197,7 @@ class Grouper:
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
- Freq: 17T, dtype: int64
+ Freq: 17min, dtype: int64
>>> ts.groupby(pd.Grouper(freq='17min', origin='epoch')).sum()
2000-10-01 23:18:00 0
@@ -205,7 +205,7 @@ class Grouper:
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
- Freq: 17T, dtype: int64
+ Freq: 17min, dtype: int64
>>> ts.groupby(pd.Grouper(freq='17W', origin='2000-01-01')).sum()
2000-01-02 0
@@ -222,14 +222,14 @@ class Grouper:
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
- Freq: 17T, dtype: int64
+ Freq: 17min, dtype: int64
>>> ts.groupby(pd.Grouper(freq='17min', offset='23h30min')).sum()
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
- Freq: 17T, dtype: int64
+ Freq: 17min, dtype: int64
To replace the use of the deprecated `base` argument, you can now use `offset`,
in this example it is equivalent to have `base=2`:
@@ -240,7 +240,7 @@ class Grouper:
2000-10-01 23:50:00 36
2000-10-02 00:07:00 39
2000-10-02 00:24:00 24
- Freq: 17T, dtype: int64
+ Freq: 17min, dtype: int64
"""
sort: bool
diff --git a/pandas/core/indexes/accessors.py b/pandas/core/indexes/accessors.py
index d972983532e3c..5134c506b8c61 100644
--- a/pandas/core/indexes/accessors.py
+++ b/pandas/core/indexes/accessors.py
@@ -414,7 +414,7 @@ class TimedeltaProperties(Properties):
Examples
--------
>>> seconds_series = pd.Series(
- ... pd.timedelta_range(start="1 second", periods=3, freq="S")
+ ... pd.timedelta_range(start="1 second", periods=3, freq="s")
... )
>>> seconds_series
0 0 days 00:00:01
@@ -528,7 +528,7 @@ class PeriodProperties(Properties):
1 2000-01-01 00:00:01
2 2000-01-01 00:00:02
3 2000-01-01 00:00:03
- dtype: period[S]
+ dtype: period[s]
>>> seconds_series.dt.second
0 0
1 1
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 9b8d1c870091d..e45cff0b0679f 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -296,7 +296,7 @@ def pipe(
2013-01-01 00:00:02 3
2013-01-01 00:00:03 4
2013-01-01 00:00:04 5
- Freq: S, dtype: int64
+ Freq: s, dtype: int64
>>> r = s.resample('2s')
@@ -304,7 +304,7 @@ def pipe(
2013-01-01 00:00:00 3
2013-01-01 00:00:02 7
2013-01-01 00:00:04 5
- Freq: 2S, dtype: int64
+ Freq: 2s, dtype: int64
>>> r.agg(['sum', 'mean', 'max'])
sum mean max
@@ -605,7 +605,7 @@ def nearest(self, limit: int | None = None):
2018-01-01 00:30:00 2
2018-01-01 00:45:00 2
2018-01-01 01:00:00 2
- Freq: 15T, dtype: int64
+ Freq: 15min, dtype: int64
Limit the number of upsampled values imputed by the nearest:
@@ -615,7 +615,7 @@ def nearest(self, limit: int | None = None):
2018-01-01 00:30:00 NaN
2018-01-01 00:45:00 2.0
2018-01-01 01:00:00 2.0
- Freq: 15T, dtype: float64
+ Freq: 15min, dtype: float64
"""
return self._upsample("nearest", limit=limit)
@@ -674,7 +674,7 @@ def bfill(self, limit: int | None = None):
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
- Freq: 30T, dtype: int64
+ Freq: 30min, dtype: int64
>>> s.resample('15min').bfill(limit=2)
2018-01-01 00:00:00 1.0
@@ -686,7 +686,7 @@ def bfill(self, limit: int | None = None):
2018-01-01 01:30:00 3.0
2018-01-01 01:45:00 3.0
2018-01-01 02:00:00 3.0
- Freq: 15T, dtype: float64
+ Freq: 15min, dtype: float64
Resampling a DataFrame that has missing values:
@@ -787,7 +787,7 @@ def fillna(self, method, limit: int | None = None):
2018-01-01 01:00:00 2.0
2018-01-01 01:30:00 NaN
2018-01-01 02:00:00 3.0
- Freq: 30T, dtype: float64
+ Freq: 30min, dtype: float64
>>> s.resample('30min').fillna("backfill")
2018-01-01 00:00:00 1
@@ -795,7 +795,7 @@ def fillna(self, method, limit: int | None = None):
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
- Freq: 30T, dtype: int64
+ Freq: 30min, dtype: int64
>>> s.resample('15min').fillna("backfill", limit=2)
2018-01-01 00:00:00 1.0
@@ -807,7 +807,7 @@ def fillna(self, method, limit: int | None = None):
2018-01-01 01:30:00 3.0
2018-01-01 01:45:00 3.0
2018-01-01 02:00:00 3.0
- Freq: 15T, dtype: float64
+ Freq: 15min, dtype: float64
>>> s.resample('30min').fillna("pad")
2018-01-01 00:00:00 1
@@ -815,7 +815,7 @@ def fillna(self, method, limit: int | None = None):
2018-01-01 01:00:00 2
2018-01-01 01:30:00 2
2018-01-01 02:00:00 3
- Freq: 30T, dtype: int64
+ Freq: 30min, dtype: int64
>>> s.resample('30min').fillna("nearest")
2018-01-01 00:00:00 1
@@ -823,7 +823,7 @@ def fillna(self, method, limit: int | None = None):
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
- Freq: 30T, dtype: int64
+ Freq: 30min, dtype: int64
Missing values present before the upsampling are not affected.
@@ -841,7 +841,7 @@ def fillna(self, method, limit: int | None = None):
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 02:00:00 3.0
- Freq: 30T, dtype: float64
+ Freq: 30min, dtype: float64
>>> sm.resample('30min').fillna('pad')
2018-01-01 00:00:00 1.0
@@ -849,7 +849,7 @@ def fillna(self, method, limit: int | None = None):
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 NaN
2018-01-01 02:00:00 3.0
- Freq: 30T, dtype: float64
+ Freq: 30min, dtype: float64
>>> sm.resample('30min').fillna('nearest')
2018-01-01 00:00:00 1.0
@@ -857,7 +857,7 @@ def fillna(self, method, limit: int | None = None):
2018-01-01 01:00:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 02:00:00 3.0
- Freq: 30T, dtype: float64
+ Freq: 30min, dtype: float64
DataFrame resampling is done column-wise. All the same options are
available.
@@ -1018,7 +1018,7 @@ def interpolate(
2023-03-01 07:00:00 1
2023-03-01 07:00:02 2
2023-03-01 07:00:04 3
- Freq: 2S, dtype: int64
+ Freq: 2s, dtype: int64
Downsample the dataframe to 2Hz by providing the period time of 500ms.
@@ -1032,7 +1032,7 @@ def interpolate(
2023-03-01 07:00:03.000 1.0
2023-03-01 07:00:03.500 2.0
2023-03-01 07:00:04.000 3.0
- Freq: 500L, dtype: float64
+ Freq: 500ms, dtype: float64
Internal reindexing with ``asfreq()`` prior to interpolation leads to
an interpolated timeseries on the basis of the reindexed timestamps (anchors).
@@ -1051,7 +1051,7 @@ def interpolate(
2023-03-01 07:00:03.200 2.6
2023-03-01 07:00:03.600 2.8
2023-03-01 07:00:04.000 3.0
- Freq: 400L, dtype: float64
+ Freq: 400ms, dtype: float64
Note that the series erroneously increases between two anchors
``07:00:00`` and ``07:00:02``.
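The resample docstrings above can be reproduced with the new aliases; a minimal sketch (pandas assumed installed):

```python
import pandas as pd

idx = pd.date_range("2000-01-01", periods=9, freq="min")
series = pd.Series(range(9), index=idx)

# Downsample into 3-minute bins and sum each bin: 0+1+2, 3+4+5, 6+7+8.
out = series.resample("3min").sum()
print(out.tolist())  # [3, 12, 21]
```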
diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py
index 3f2f832c08dc6..a9abc0714baa3 100644
--- a/pandas/core/tools/timedeltas.py
+++ b/pandas/core/tools/timedeltas.py
@@ -7,7 +7,6 @@
TYPE_CHECKING,
overload,
)
-import warnings
import numpy as np
@@ -20,7 +19,6 @@
Timedelta,
parse_timedelta_unit,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import is_list_like
from pandas.core.dtypes.dtypes import ArrowDtype
@@ -115,15 +113,17 @@ def to_timedelta(
* 'D' / 'days' / 'day'
* 'hours' / 'hour' / 'hr' / 'h'
* 'm' / 'minute' / 'min' / 'minutes' / 'T'
- * 'S' / 'seconds' / 'sec' / 'second'
+ * 's' / 'seconds' / 'sec' / 'second' / 'S'
* 'ms' / 'milliseconds' / 'millisecond' / 'milli' / 'millis' / 'L'
* 'us' / 'microseconds' / 'microsecond' / 'micro' / 'micros' / 'U'
* 'ns' / 'nanoseconds' / 'nano' / 'nanos' / 'nanosecond' / 'N'
Must not be specified when `arg` contains strings and ``errors="raise"``.
- .. deprecated:: 2.1.0
- Units 'T' and 'L' are deprecated and will be removed in a future version.
+ .. deprecated:: 2.2.0
+ Units 'T', 'S', 'L', 'U' and 'N' are deprecated and will be removed
+ in a future version. Please use 'min', 's', 'ms', 'us', and 'ns' instead of
+ 'T', 'S', 'L', 'U' and 'N'.
errors : {'ignore', 'raise', 'coerce'}, default 'raise'
- If 'raise', then invalid parsing will raise an exception.
@@ -176,13 +176,6 @@ def to_timedelta(
TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'],
dtype='timedelta64[ns]', freq=None)
"""
- if unit in {"T", "t", "L", "l"}:
- warnings.warn(
- f"Unit '{unit}' is deprecated and will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
if unit is not None:
unit = parse_timedelta_unit(unit)
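With the deprecation handling moved into `parse_timedelta_unit`, the public `to_timedelta` behaviour with the lowercase units looks like this sketch (pandas assumed installed):

```python
import pandas as pd

# 's' replaces the deprecated 'S' for seconds.
tdi = pd.to_timedelta([1, 2, 3], unit="s")
print(tdi[2])  # 0 days 00:00:03
assert tdi[0].total_seconds() == 1.0
```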
diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py
index cd7823ba15e44..33aeaa6d81406 100644
--- a/pandas/plotting/_matplotlib/converter.py
+++ b/pandas/plotting/_matplotlib/converter.py
@@ -412,7 +412,7 @@ def __call__(self):
)
interval = self._get_interval()
- freq = f"{interval}L"
+ freq = f"{interval}ms"
tz = self.tz.tzname(None)
st = dmin.replace(tzinfo=None)
ed = dmin.replace(tzinfo=None)
diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py
index 7a079ae7795e6..2a34566d2f9f5 100644
--- a/pandas/tests/arithmetic/test_period.py
+++ b/pandas/tests/arithmetic/test_period.py
@@ -698,7 +698,7 @@ def test_sub_n_gt_1_offsets(self, offset, kwd_name, n):
# datetime-like arrays
pd.date_range("2016-01-01", periods=3, freq="H"),
pd.date_range("2016-01-01", periods=3, tz="Europe/Brussels"),
- pd.date_range("2016-01-01", periods=3, freq="S")._data,
+ pd.date_range("2016-01-01", periods=3, freq="s")._data,
pd.date_range("2016-01-01", periods=3, tz="Asia/Tokyo")._data,
# Miscellaneous invalid types
3.14,
@@ -794,13 +794,13 @@ def test_parr_sub_td64array(self, box_with_array, tdi_freq, pi_freq):
if pi_freq == "H":
result = pi - td64obj
- expected = (pi.to_timestamp("S") - tdi).to_period(pi_freq)
+ expected = (pi.to_timestamp("s") - tdi).to_period(pi_freq)
expected = tm.box_expected(expected, xbox)
tm.assert_equal(result, expected)
# Subtract from scalar
result = pi[0] - td64obj
- expected = (pi[0].to_timestamp("S") - tdi).to_period(pi_freq)
+ expected = (pi[0].to_timestamp("s") - tdi).to_period(pi_freq)
expected = tm.box_expected(expected, box)
tm.assert_equal(result, expected)
@@ -1048,7 +1048,7 @@ def test_parr_add_timedeltalike_minute_gt1(self, three_days, box_with_array):
with pytest.raises(TypeError, match=msg):
other - rng
- @pytest.mark.parametrize("freqstr", ["5ns", "5us", "5ms", "5s", "5T", "5h", "5d"])
+ @pytest.mark.parametrize("freqstr", ["5ns", "5us", "5ms", "5s", "5min", "5h", "5d"])
def test_parr_add_timedeltalike_tick_gt1(self, three_days, freqstr, box_with_array):
# GH#23031 adding a time-delta-like offset to a PeriodArray that has
# tick-like frequency with n != 1
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 9eee2e0bea687..ee1b26054ea5e 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -622,13 +622,13 @@ def test_round(self, arr1d):
# GH#24064
dti = self.index_cls(arr1d)
- result = dti.round(freq="2T")
+ result = dti.round(freq="2min")
expected = dti - pd.Timedelta(minutes=1)
expected = expected._with_freq(None)
tm.assert_index_equal(result, expected)
dta = dti._data
- result = dta.round(freq="2T")
+ result = dta.round(freq="2min")
expected = expected._data._with_freq(None)
tm.assert_datetime_array_equal(result, expected)
diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index fe1be2d8b6a0a..76b974330cbf1 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -1541,10 +1541,10 @@ def test_chained_where_mask(using_copy_on_write, func):
def test_asfreq_noop(using_copy_on_write):
df = DataFrame(
{"a": [0.0, None, 2.0, 3.0]},
- index=date_range("1/1/2000", periods=4, freq="T"),
+ index=date_range("1/1/2000", periods=4, freq="min"),
)
df_orig = df.copy()
- df2 = df.asfreq(freq="T")
+ df2 = df.asfreq(freq="min")
if using_copy_on_write:
assert np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 471e456146178..da69d58066e22 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -94,10 +94,10 @@ def test_categorical_dtype(self):
[
"period[D]",
"period[3M]",
- "period[U]",
+ "period[us]",
"Period[D]",
"Period[3M]",
- "Period[U]",
+ "Period[us]",
],
)
def test_period_dtype(self, dtype):
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py
index f57093c29b733..6562074eee634 100644
--- a/pandas/tests/dtypes/test_dtypes.py
+++ b/pandas/tests/dtypes/test_dtypes.py
@@ -467,8 +467,8 @@ def test_identity(self):
assert PeriodDtype("period[3D]") == PeriodDtype("period[3D]")
assert PeriodDtype("period[3D]") is not PeriodDtype("period[3D]")
- assert PeriodDtype("period[1S1U]") == PeriodDtype("period[1000001U]")
- assert PeriodDtype("period[1S1U]") is not PeriodDtype("period[1000001U]")
+ assert PeriodDtype("period[1s1us]") == PeriodDtype("period[1000001us]")
+ assert PeriodDtype("period[1s1us]") is not PeriodDtype("period[1000001us]")
def test_compat(self, dtype):
assert not is_datetime64_ns_dtype(dtype)
@@ -505,15 +505,15 @@ def test_is_dtype(self, dtype):
assert PeriodDtype.is_dtype("period[D]")
assert PeriodDtype.is_dtype("period[3D]")
assert PeriodDtype.is_dtype(PeriodDtype("3D"))
- assert PeriodDtype.is_dtype("period[U]")
- assert PeriodDtype.is_dtype("period[S]")
- assert PeriodDtype.is_dtype(PeriodDtype("U"))
- assert PeriodDtype.is_dtype(PeriodDtype("S"))
+ assert PeriodDtype.is_dtype("period[us]")
+ assert PeriodDtype.is_dtype("period[s]")
+ assert PeriodDtype.is_dtype(PeriodDtype("us"))
+ assert PeriodDtype.is_dtype(PeriodDtype("s"))
assert not PeriodDtype.is_dtype("D")
assert not PeriodDtype.is_dtype("3D")
assert not PeriodDtype.is_dtype("U")
- assert not PeriodDtype.is_dtype("S")
+ assert not PeriodDtype.is_dtype("s")
assert not PeriodDtype.is_dtype("foo")
assert not PeriodDtype.is_dtype(np.object_)
assert not PeriodDtype.is_dtype(np.int64)
@@ -728,7 +728,7 @@ def test_is_dtype(self, dtype):
assert not IntervalDtype.is_dtype("D")
assert not IntervalDtype.is_dtype("3D")
- assert not IntervalDtype.is_dtype("U")
+ assert not IntervalDtype.is_dtype("us")
assert not IntervalDtype.is_dtype("S")
assert not IntervalDtype.is_dtype("foo")
assert not IntervalDtype.is_dtype("IntervalA")
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index f9c420607812c..432c86ce3afd5 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -2497,7 +2497,7 @@ def test_dt_roundlike_unsupported_freq(method):
@pytest.mark.xfail(
pa_version_under7p0, reason="Methods not supported for pyarrow < 7.0"
)
-@pytest.mark.parametrize("freq", ["D", "H", "T", "S", "L", "U", "N"])
+@pytest.mark.parametrize("freq", ["D", "H", "min", "s", "ms", "us", "ns"])
@pytest.mark.parametrize("method", ["ceil", "floor", "round"])
def test_dt_ceil_year_floor(freq, method):
ser = pd.Series(
diff --git a/pandas/tests/frame/methods/test_asfreq.py b/pandas/tests/frame/methods/test_asfreq.py
index 2c5137db94c16..72652df2e1e58 100644
--- a/pandas/tests/frame/methods/test_asfreq.py
+++ b/pandas/tests/frame/methods/test_asfreq.py
@@ -76,7 +76,7 @@ def test_tz_aware_asfreq_smoke(self, tz, frame_or_series):
)
# it works!
- obj.asfreq("T")
+ obj.asfreq("min")
def test_asfreq_normalize(self, frame_or_series):
rng = date_range("1/1/2000 09:30", periods=20)
@@ -170,7 +170,7 @@ def test_asfreq_fillvalue(self):
# test for fill value during upsampling, related to issue 3715
# setup
- rng = date_range("1/1/2016", periods=10, freq="2S")
+ rng = date_range("1/1/2016", periods=10, freq="2s")
# Explicit cast to 'float' to avoid implicit cast when setting None
ts = Series(np.arange(len(rng)), index=rng, dtype="float")
df = DataFrame({"one": ts})
@@ -178,13 +178,13 @@ def test_asfreq_fillvalue(self):
# insert pre-existing missing value
df.loc["2016-01-01 00:00:08", "one"] = None
- actual_df = df.asfreq(freq="1S", fill_value=9.0)
- expected_df = df.asfreq(freq="1S").fillna(9.0)
+ actual_df = df.asfreq(freq="1s", fill_value=9.0)
+ expected_df = df.asfreq(freq="1s").fillna(9.0)
expected_df.loc["2016-01-01 00:00:08", "one"] = None
tm.assert_frame_equal(expected_df, actual_df)
- expected_series = ts.asfreq(freq="1S").fillna(9.0)
- actual_series = ts.asfreq(freq="1S", fill_value=9.0)
+ expected_series = ts.asfreq(freq="1s").fillna(9.0)
+ actual_series = ts.asfreq(freq="1s", fill_value=9.0)
tm.assert_series_equal(expected_series, actual_series)
def test_asfreq_with_date_object_index(self, frame_or_series):
diff --git a/pandas/tests/frame/methods/test_equals.py b/pandas/tests/frame/methods/test_equals.py
index 4028a26dfdc65..6fcf670f96ef0 100644
--- a/pandas/tests/frame/methods/test_equals.py
+++ b/pandas/tests/frame/methods/test_equals.py
@@ -35,7 +35,7 @@ def test_equals(self):
np.random.default_rng(2).random(10), index=index, columns=["floats"]
)
df1["text"] = "the sky is so blue. we could use more chocolate.".split()
- df1["start"] = date_range("2000-1-1", periods=10, freq="T")
+ df1["start"] = date_range("2000-1-1", periods=10, freq="min")
df1["end"] = date_range("2000-1-1", periods=10, freq="D")
df1["diff"] = df1["end"] - df1["start"]
# Explicitly cast to object, to avoid implicit cast when setting np.nan
@@ -66,7 +66,7 @@ def test_equals(self):
assert not df1.equals(different)
# DatetimeIndex
- index = date_range("2000-1-1", periods=10, freq="T")
+ index = date_range("2000-1-1", periods=10, freq="min")
df1 = df1.set_index(index)
df2 = df1.copy()
assert df1.equals(df2)
diff --git a/pandas/tests/frame/methods/test_join.py b/pandas/tests/frame/methods/test_join.py
index 98f3926968ad0..2d4ac1d4a4444 100644
--- a/pandas/tests/frame/methods/test_join.py
+++ b/pandas/tests/frame/methods/test_join.py
@@ -553,13 +553,13 @@ def test_frame_join_tzaware(self):
test1 = DataFrame(
np.zeros((6, 3)),
index=date_range(
- "2012-11-15 00:00:00", periods=6, freq="100L", tz="US/Central"
+ "2012-11-15 00:00:00", periods=6, freq="100ms", tz="US/Central"
),
)
test2 = DataFrame(
np.zeros((3, 3)),
index=date_range(
- "2012-11-15 00:00:00", periods=3, freq="250L", tz="US/Central"
+ "2012-11-15 00:00:00", periods=3, freq="250ms", tz="US/Central"
),
columns=range(3, 6),
)
diff --git a/pandas/tests/frame/methods/test_shift.py b/pandas/tests/frame/methods/test_shift.py
index 808f0cff2485c..9b2190dd763f8 100644
--- a/pandas/tests/frame/methods/test_shift.py
+++ b/pandas/tests/frame/methods/test_shift.py
@@ -75,8 +75,8 @@ def test_shift_mismatched_freq(self, frame_or_series):
index=date_range("1/1/2000", periods=5, freq="H"),
)
- result = ts.shift(1, freq="5T")
- exp_index = ts.index.shift(1, freq="5T")
+ result = ts.shift(1, freq="5min")
+ exp_index = ts.index.shift(1, freq="5min")
tm.assert_index_equal(result.index, exp_index)
# GH#1063, multiple of same base
diff --git a/pandas/tests/frame/methods/test_to_records.py b/pandas/tests/frame/methods/test_to_records.py
index 8853d718270f4..e18f236d40804 100644
--- a/pandas/tests/frame/methods/test_to_records.py
+++ b/pandas/tests/frame/methods/test_to_records.py
@@ -512,7 +512,7 @@ def keys(self):
@pytest.mark.parametrize("tz", ["UTC", "GMT", "US/Eastern"])
def test_to_records_datetimeindex_with_tz(self, tz):
# GH#13937
- dr = date_range("2016-01-01", periods=10, freq="S", tz=tz)
+ dr = date_range("2016-01-01", periods=10, freq="s", tz=tz)
df = DataFrame({"datetime": dr}, index=dr)
diff --git a/pandas/tests/frame/methods/test_to_timestamp.py b/pandas/tests/frame/methods/test_to_timestamp.py
index 2f73e3d58b516..e72b576fca833 100644
--- a/pandas/tests/frame/methods/test_to_timestamp.py
+++ b/pandas/tests/frame/methods/test_to_timestamp.py
@@ -99,7 +99,7 @@ def test_to_timestamp_columns(self):
tm.assert_index_equal(result.columns, exp_index)
delta = timedelta(hours=23, minutes=59)
- result = df.to_timestamp("T", "end", axis=1)
+ result = df.to_timestamp("min", "end", axis=1)
exp_index = _get_with_delta(delta)
exp_index = exp_index + Timedelta(1, "m") - Timedelta(1, "ns")
tm.assert_index_equal(result.columns, exp_index)
@@ -110,8 +110,8 @@ def test_to_timestamp_columns(self):
exp_index = exp_index + Timedelta(1, "s") - Timedelta(1, "ns")
tm.assert_index_equal(result.columns, exp_index)
- result1 = df.to_timestamp("5t", axis=1)
- result2 = df.to_timestamp("t", axis=1)
+ result1 = df.to_timestamp("5min", axis=1)
+ result2 = df.to_timestamp("min", axis=1)
expected = date_range("2001-01-01", "2009-01-01", freq="AS")
assert isinstance(result1.columns, DatetimeIndex)
assert isinstance(result2.columns, DatetimeIndex)
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 63cddb7f192e6..009a053dd64d6 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -2961,7 +2961,9 @@ def test_frame_datetime64_mixed_index_ctor_1681(self):
def test_frame_timeseries_column(self):
# GH19157
- dr = date_range(start="20130101T10:00:00", periods=3, freq="T", tz="US/Eastern")
+ dr = date_range(
+ start="20130101T10:00:00", periods=3, freq="min", tz="US/Eastern"
+ )
result = DataFrame(dr, columns=["timestamps"])
expected = DataFrame(
{
diff --git a/pandas/tests/generic/test_frame.py b/pandas/tests/generic/test_frame.py
index 620d5055f5d3b..fc7aa9e7b2c46 100644
--- a/pandas/tests/generic/test_frame.py
+++ b/pandas/tests/generic/test_frame.py
@@ -87,7 +87,7 @@ def test_metadata_propagation_indiv_resample(self):
np.random.default_rng(2).standard_normal((1000, 2)),
index=date_range("20130101", periods=1000, freq="s"),
)
- result = df.resample("1T")
+ result = df.resample("1min")
tm.assert_metadata_equivalent(df, result)
def test_metadata_propagation_indiv(self, monkeypatch):
diff --git a/pandas/tests/generic/test_series.py b/pandas/tests/generic/test_series.py
index 4ea205ac13c47..3648961eb3808 100644
--- a/pandas/tests/generic/test_series.py
+++ b/pandas/tests/generic/test_series.py
@@ -111,13 +111,13 @@ def test_metadata_propagation_indiv_resample(self):
index=date_range("20130101", periods=1000, freq="s"),
name="foo",
)
- result = ts.resample("1T").mean()
+ result = ts.resample("1min").mean()
tm.assert_metadata_equivalent(ts, result)
- result = ts.resample("1T").min()
+ result = ts.resample("1min").min()
tm.assert_metadata_equivalent(ts, result)
- result = ts.resample("1T").apply(lambda x: x.sum())
+ result = ts.resample("1min").apply(lambda x: x.sum())
tm.assert_metadata_equivalent(ts, result)
def test_metadata_propagation_indiv(self, monkeypatch):
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index cdfa80c8c7cb5..c01ca4922a84b 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -360,16 +360,16 @@ def test_agg_multiple_functions_same_name():
# GH 30880
df = DataFrame(
np.random.default_rng(2).standard_normal((1000, 3)),
- index=pd.date_range("1/1/2012", freq="S", periods=1000),
+ index=pd.date_range("1/1/2012", freq="s", periods=1000),
columns=["A", "B", "C"],
)
- result = df.resample("3T").agg(
+ result = df.resample("3min").agg(
{"A": [partial(np.quantile, q=0.9999), partial(np.quantile, q=0.1111)]}
)
- expected_index = pd.date_range("1/1/2012", freq="3T", periods=6)
+ expected_index = pd.date_range("1/1/2012", freq="3min", periods=6)
expected_columns = MultiIndex.from_tuples([("A", "quantile"), ("A", "quantile")])
expected_values = np.array(
- [df.resample("3T").A.quantile(q=q).values for q in [0.9999, 0.1111]]
+ [df.resample("3min").A.quantile(q=q).values for q in [0.9999, 0.1111]]
).T
expected = DataFrame(
expected_values, columns=expected_columns, index=expected_index
@@ -382,13 +382,13 @@ def test_agg_multiple_functions_same_name_with_ohlc_present():
# ohlc expands dimensions, so different test to the above is required.
df = DataFrame(
np.random.default_rng(2).standard_normal((1000, 3)),
- index=pd.date_range("1/1/2012", freq="S", periods=1000, name="dti"),
+ index=pd.date_range("1/1/2012", freq="s", periods=1000, name="dti"),
columns=Index(["A", "B", "C"], name="alpha"),
)
- result = df.resample("3T").agg(
+ result = df.resample("3min").agg(
{"A": ["ohlc", partial(np.quantile, q=0.9999), partial(np.quantile, q=0.1111)]}
)
- expected_index = pd.date_range("1/1/2012", freq="3T", periods=6, name="dti")
+ expected_index = pd.date_range("1/1/2012", freq="3min", periods=6, name="dti")
expected_columns = MultiIndex.from_tuples(
[
("A", "ohlc", "open"),
@@ -401,9 +401,11 @@ def test_agg_multiple_functions_same_name_with_ohlc_present():
names=["alpha", None, None],
)
non_ohlc_expected_values = np.array(
- [df.resample("3T").A.quantile(q=q).values for q in [0.9999, 0.1111]]
+ [df.resample("3min").A.quantile(q=q).values for q in [0.9999, 0.1111]]
).T
- expected_values = np.hstack([df.resample("3T").A.ohlc(), non_ohlc_expected_values])
+ expected_values = np.hstack(
+ [df.resample("3min").A.ohlc(), non_ohlc_expected_values]
+ )
expected = DataFrame(
expected_values, columns=expected_columns, index=expected_index
)
diff --git a/pandas/tests/groupby/aggregate/test_cython.py b/pandas/tests/groupby/aggregate/test_cython.py
index f917f567e1ce3..865fda0ab54a2 100644
--- a/pandas/tests/groupby/aggregate/test_cython.py
+++ b/pandas/tests/groupby/aggregate/test_cython.py
@@ -118,7 +118,7 @@ def test_cython_agg_nothing_to_agg_with_dates():
{
"a": np.random.default_rng(2).integers(0, 5, 50),
"b": ["foo", "bar"] * 25,
- "dates": pd.date_range("now", periods=50, freq="T"),
+ "dates": pd.date_range("now", periods=50, freq="min"),
}
)
msg = "Cannot use numeric_only=True with SeriesGroupBy.mean and non-numeric dtypes"
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 06b473e40ad11..f2d21c10f7a15 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -1150,7 +1150,7 @@ def test_groupby_multiindex_categorical_datetime():
{
"key1": Categorical(list("abcbabcba")),
"key2": Categorical(
- list(pd.date_range("2018-06-01 00", freq="1T", periods=3)) * 3
+ list(pd.date_range("2018-06-01 00", freq="1min", periods=3)) * 3
),
"values": np.arange(9),
}
@@ -1160,7 +1160,7 @@ def test_groupby_multiindex_categorical_datetime():
idx = MultiIndex.from_product(
[
Categorical(["a", "b", "c"]),
- Categorical(pd.date_range("2018-06-01 00", freq="1T", periods=3)),
+ Categorical(pd.date_range("2018-06-01 00", freq="1min", periods=3)),
],
names=["key1", "key2"],
)
diff --git a/pandas/tests/groupby/test_counting.py b/pandas/tests/groupby/test_counting.py
index 6c27344ce3110..5022e9629f155 100644
--- a/pandas/tests/groupby/test_counting.py
+++ b/pandas/tests/groupby/test_counting.py
@@ -265,7 +265,7 @@ def test_groupby_timedelta_cython_count():
def test_count():
n = 1 << 15
- dr = date_range("2015-08-30", periods=n // 10, freq="T")
+ dr = date_range("2015-08-30", periods=n // 10, freq="min")
df = DataFrame(
{
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 772ce90b1e611..c0ac94c09e1ea 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -3141,13 +3141,13 @@ def test_groupby_with_Time_Grouper():
expected_output = DataFrame(
{
- "time2": date_range("2016-08-31 22:08:00", periods=13, freq="1T"),
+ "time2": date_range("2016-08-31 22:08:00", periods=13, freq="1min"),
"quant": [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
"quant2": [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
}
)
- df = test_data.groupby(Grouper(key="time2", freq="1T")).count().reset_index()
+ df = test_data.groupby(Grouper(key="time2", freq="1min")).count().reset_index()
tm.assert_frame_equal(df, expected_output)
diff --git a/pandas/tests/groupby/test_quantile.py b/pandas/tests/groupby/test_quantile.py
index 165d72bf3e878..efe7b171d630d 100644
--- a/pandas/tests/groupby/test_quantile.py
+++ b/pandas/tests/groupby/test_quantile.py
@@ -420,7 +420,7 @@ def test_timestamp_groupby_quantile():
df = DataFrame(
{
"timestamp": pd.date_range(
- start="2020-04-19 00:00:00", freq="1T", periods=100, tz="UTC"
+ start="2020-04-19 00:00:00", freq="1min", periods=100, tz="UTC"
).floor("1H"),
"category": list(range(1, 101)),
"value": list(range(101, 201)),
diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py
index 527e7c6081970..c9fe011f7063b 100644
--- a/pandas/tests/groupby/test_timegrouper.py
+++ b/pandas/tests/groupby/test_timegrouper.py
@@ -753,7 +753,7 @@ def test_timezone_info(self):
def test_datetime_count(self):
df = DataFrame(
- {"a": [1, 2, 3] * 2, "dates": date_range("now", periods=6, freq="T")}
+ {"a": [1, 2, 3] * 2, "dates": date_range("now", periods=6, freq="min")}
)
result = df.groupby("a").dates.count()
expected = Series([2, 2, 2], index=Index([1, 2, 3], name="a"), name="dates")
diff --git a/pandas/tests/indexes/conftest.py b/pandas/tests/indexes/conftest.py
index 458a37c994091..fe397e2c7c88e 100644
--- a/pandas/tests/indexes/conftest.py
+++ b/pandas/tests/indexes/conftest.py
@@ -25,7 +25,7 @@ def sort(request):
return request.param
-@pytest.fixture(params=["D", "3D", "-3D", "H", "2H", "-2H", "T", "2T", "S", "-3S"])
+@pytest.fixture(params=["D", "3D", "-3D", "H", "2H", "-2H", "min", "2min", "s", "-3s"])
def freq_sample(request):
"""
Valid values for 'freq' parameter used to create date_range and
diff --git a/pandas/tests/indexes/datetimelike_/test_drop_duplicates.py b/pandas/tests/indexes/datetimelike_/test_drop_duplicates.py
index e5da06cb005f6..c38e24232f181 100644
--- a/pandas/tests/indexes/datetimelike_/test_drop_duplicates.py
+++ b/pandas/tests/indexes/datetimelike_/test_drop_duplicates.py
@@ -68,7 +68,7 @@ def test_drop_duplicates(self, keep, expected, index, idx):
class TestDropDuplicatesPeriodIndex(DropDuplicates):
- @pytest.fixture(params=["D", "3D", "H", "2H", "T", "2T", "S", "3S"])
+ @pytest.fixture(params=["D", "3D", "H", "2H", "min", "2min", "s", "3s"])
def freq(self, request):
return request.param
diff --git a/pandas/tests/indexes/datetimes/methods/test_shift.py b/pandas/tests/indexes/datetimes/methods/test_shift.py
index 65bdfc9053e5e..e8661fafc3bb7 100644
--- a/pandas/tests/indexes/datetimes/methods/test_shift.py
+++ b/pandas/tests/indexes/datetimes/methods/test_shift.py
@@ -96,7 +96,7 @@ def test_dti_shift_localized(self, tzstr):
dr = date_range("2011/1/1", "2012/1/1", freq="W-FRI")
dr_tz = dr.tz_localize(tzstr)
- result = dr_tz.shift(1, "10T")
+ result = dr_tz.shift(1, "10min")
assert result.tz == dr_tz.tz
def test_dti_shift_across_dst(self):
diff --git a/pandas/tests/indexes/datetimes/methods/test_to_period.py b/pandas/tests/indexes/datetimes/methods/test_to_period.py
index 14de6c5907d03..2d762710168ab 100644
--- a/pandas/tests/indexes/datetimes/methods/test_to_period.py
+++ b/pandas/tests/indexes/datetimes/methods/test_to_period.py
@@ -119,10 +119,10 @@ def test_to_period_millisecond(self):
with tm.assert_produces_warning(UserWarning):
# warning that timezone info will be lost
- period = index.to_period(freq="L")
+ period = index.to_period(freq="ms")
assert 2 == len(period)
- assert period[0] == Period("2007-01-01 10:11:12.123Z", "L")
- assert period[1] == Period("2007-01-01 10:11:13.789Z", "L")
+ assert period[0] == Period("2007-01-01 10:11:12.123Z", "ms")
+ assert period[1] == Period("2007-01-01 10:11:13.789Z", "ms")
def test_to_period_microsecond(self):
index = DatetimeIndex(
@@ -134,10 +134,10 @@ def test_to_period_microsecond(self):
with tm.assert_produces_warning(UserWarning):
# warning that timezone info will be lost
- period = index.to_period(freq="U")
+ period = index.to_period(freq="us")
assert 2 == len(period)
- assert period[0] == Period("2007-01-01 10:11:12.123456Z", "U")
- assert period[1] == Period("2007-01-01 10:11:13.789123Z", "U")
+ assert period[0] == Period("2007-01-01 10:11:12.123456Z", "us")
+ assert period[1] == Period("2007-01-01 10:11:13.789123Z", "us")
@pytest.mark.parametrize(
"tz",
diff --git a/pandas/tests/indexes/datetimes/test_constructors.py b/pandas/tests/indexes/datetimes/test_constructors.py
index 733c14f33567a..c8f5c5d561339 100644
--- a/pandas/tests/indexes/datetimes/test_constructors.py
+++ b/pandas/tests/indexes/datetimes/test_constructors.py
@@ -1036,7 +1036,7 @@ def test_constructor_int64_nocopy(self):
assert (index.asi8[50:100] != -1).all()
@pytest.mark.parametrize(
- "freq", ["M", "Q", "A", "D", "B", "BH", "T", "S", "L", "U", "H", "N", "C"]
+ "freq", ["M", "Q", "A", "D", "B", "BH", "min", "s", "ms", "us", "H", "ns", "C"]
)
def test_from_freq_recreate_from_data(self, freq):
org = date_range(start="2001/02/01 09:00", freq=freq, periods=1)
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index 2e2e33e2fb366..d5500a19f2bb6 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -123,7 +123,7 @@ def test_date_range_timestamp_equiv_preserve_frequency(self):
class TestDateRanges:
- @pytest.mark.parametrize("freq", ["N", "U", "L", "T", "S", "H", "D"])
+ @pytest.mark.parametrize("freq", ["ns", "us", "ms", "min", "s", "H", "D"])
def test_date_range_edges(self, freq):
# GH#13672
td = Timedelta(f"1{freq}")
@@ -761,13 +761,13 @@ def test_freq_divides_end_in_nanos(self):
expected_1 = DatetimeIndex(
["2005-01-12 10:00:00", "2005-01-12 15:45:00"],
dtype="datetime64[ns]",
- freq="345T",
+ freq="345min",
tz=None,
)
expected_2 = DatetimeIndex(
["2005-01-13 10:00:00", "2005-01-13 15:45:00"],
dtype="datetime64[ns]",
- freq="345T",
+ freq="345min",
tz=None,
)
tm.assert_index_equal(result_1, expected_1)
@@ -836,6 +836,25 @@ def test_freq_dateoffset_with_relateivedelta_nanos(self):
)
tm.assert_index_equal(result, expected)
+ @pytest.mark.parametrize(
+ "freq,freq_depr",
+ [
+ ("min", "T"),
+ ("s", "S"),
+ ("ms", "L"),
+ ("us", "U"),
+ ("ns", "N"),
+ ],
+ )
+ def test_frequencies_T_S_L_U_N_deprecated(self, freq, freq_depr):
+ # GH#52536
+ msg = f"'{freq_depr}' is deprecated and will be removed in a future version."
+
+ expected = date_range("1/1/2000", periods=4, freq=freq)
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ result = date_range("1/1/2000", periods=4, freq=freq_depr)
+ tm.assert_index_equal(result, expected)
+
class TestDateRangeTZ:
"""Tests for date_range with timezones"""
diff --git a/pandas/tests/indexes/datetimes/test_datetime.py b/pandas/tests/indexes/datetimes/test_datetime.py
index aa4954ff0ba85..f44cbbf560584 100644
--- a/pandas/tests/indexes/datetimes/test_datetime.py
+++ b/pandas/tests/indexes/datetimes/test_datetime.py
@@ -56,10 +56,10 @@ def test_time_overflow_for_32bit_machines(self):
# overflow.
periods = np.int_(1000)
- idx1 = date_range(start="2000", periods=periods, freq="S")
+ idx1 = date_range(start="2000", periods=periods, freq="s")
assert len(idx1) == periods
- idx2 = date_range(end="2000", periods=periods, freq="S")
+ idx2 = date_range(end="2000", periods=periods, freq="s")
assert len(idx2) == periods
def test_nat(self):
@@ -148,8 +148,8 @@ def test_groupby_function_tuple_1677(self):
assert isinstance(result.index[0], tuple)
def assert_index_parameters(self, index):
- assert index.freq == "40960N"
- assert index.inferred_freq == "40960N"
+ assert index.freq == "40960ns"
+ assert index.inferred_freq == "40960ns"
def test_ns_index(self):
nsamples = 400
diff --git a/pandas/tests/indexes/datetimes/test_formats.py b/pandas/tests/indexes/datetimes/test_formats.py
index cb3e0179bf46c..502cb0407bfcd 100644
--- a/pandas/tests/indexes/datetimes/test_formats.py
+++ b/pandas/tests/indexes/datetimes/test_formats.py
@@ -73,17 +73,17 @@ def test_dti_repr_short(self):
[
(
["2012-01-01 00:00:00"],
- "60T",
+ "60min",
(
"DatetimeIndex(['2012-01-01 00:00:00'], "
- "dtype='datetime64[ns]', freq='60T')"
+ "dtype='datetime64[ns]', freq='60min')"
),
),
(
["2012-01-01 00:00:00", "2012-01-01 01:00:00"],
- "60T",
+ "60min",
"DatetimeIndex(['2012-01-01 00:00:00', '2012-01-01 01:00:00'], "
- "dtype='datetime64[ns]', freq='60T')",
+ "dtype='datetime64[ns]', freq='60min')",
),
(
["2012-01-01"],
diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py
index 8e1b41095e056..bfecc5d064b11 100644
--- a/pandas/tests/indexes/datetimes/test_indexing.py
+++ b/pandas/tests/indexes/datetimes/test_indexing.py
@@ -426,7 +426,7 @@ def test_get_loc_time_obj2(self):
step = 24 * 3600
for n in ns:
- idx = date_range("2014-11-26", periods=n, freq="S")
+ idx = date_range("2014-11-26", periods=n, freq="s")
ts = pd.Series(np.random.default_rng(2).standard_normal(n), index=idx)
locs = np.arange(start, n, step, dtype=np.intp)
diff --git a/pandas/tests/indexes/datetimes/test_npfuncs.py b/pandas/tests/indexes/datetimes/test_npfuncs.py
index 301466c0da41c..6c3e44c2a5db1 100644
--- a/pandas/tests/indexes/datetimes/test_npfuncs.py
+++ b/pandas/tests/indexes/datetimes/test_npfuncs.py
@@ -7,7 +7,7 @@
class TestSplit:
def test_split_non_utc(self):
# GH#14042
- indices = date_range("2016-01-01 00:00:00+0200", freq="S", periods=10)
+ indices = date_range("2016-01-01 00:00:00+0200", freq="s", periods=10)
result = np.split(indices, indices_or_sections=[])[0]
expected = indices._with_freq(None)
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/datetimes/test_ops.py b/pandas/tests/indexes/datetimes/test_ops.py
index d6ef4198fad2e..c782121505642 100644
--- a/pandas/tests/indexes/datetimes/test_ops.py
+++ b/pandas/tests/indexes/datetimes/test_ops.py
@@ -25,10 +25,10 @@ class TestDatetimeIndexOps:
("M", "day"),
("D", "day"),
("H", "hour"),
- ("T", "minute"),
- ("S", "second"),
- ("L", "millisecond"),
- ("U", "microsecond"),
+ ("min", "minute"),
+ ("s", "second"),
+ ("ms", "millisecond"),
+ ("us", "microsecond"),
],
)
def test_resolution(self, request, tz_naive_fixture, freq, expected):
diff --git a/pandas/tests/indexes/datetimes/test_partial_slicing.py b/pandas/tests/indexes/datetimes/test_partial_slicing.py
index 7978e596e6ee5..0ecccd7da3a5c 100644
--- a/pandas/tests/indexes/datetimes/test_partial_slicing.py
+++ b/pandas/tests/indexes/datetimes/test_partial_slicing.py
@@ -204,7 +204,7 @@ def test_partial_slice_daily(self):
s["2004-12-31 00"]
def test_partial_slice_hourly(self):
- rng = date_range(freq="T", start=datetime(2005, 1, 1, 20, 0, 0), periods=500)
+ rng = date_range(freq="min", start=datetime(2005, 1, 1, 20, 0, 0), periods=500)
s = Series(np.arange(len(rng)), index=rng)
result = s["2005-1-1"]
@@ -218,7 +218,7 @@ def test_partial_slice_hourly(self):
s["2004-12-31 00:15"]
def test_partial_slice_minutely(self):
- rng = date_range(freq="S", start=datetime(2005, 1, 1, 23, 59, 0), periods=500)
+ rng = date_range(freq="s", start=datetime(2005, 1, 1, 23, 59, 0), periods=500)
s = Series(np.arange(len(rng)), index=rng)
result = s["2005-1-1 23:59"]
@@ -336,7 +336,7 @@ def test_partial_slicing_with_multiindex(self):
"TICKER": ["ABC", "MNP", "XYZ", "XYZ"],
"val": [1, 2, 3, 4],
},
- index=date_range("2013-06-19 09:30:00", periods=4, freq="5T"),
+ index=date_range("2013-06-19 09:30:00", periods=4, freq="5min"),
)
df_multi = df.set_index(["ACCOUNT", "TICKER"], append=True)
diff --git a/pandas/tests/indexes/datetimes/test_scalar_compat.py b/pandas/tests/indexes/datetimes/test_scalar_compat.py
index f07a9dce5f6ae..1c5b6adf0b527 100644
--- a/pandas/tests/indexes/datetimes/test_scalar_compat.py
+++ b/pandas/tests/indexes/datetimes/test_scalar_compat.py
@@ -175,7 +175,7 @@ def test_no_rounding_occurs(self, tz_naive_fixture):
]
)
- tm.assert_index_equal(rng.round(freq="2T"), expected_rng)
+ tm.assert_index_equal(rng.round(freq="2min"), expected_rng)
@pytest.mark.parametrize(
"test_input, rounder, freq, expected",
@@ -196,8 +196,8 @@ def test_no_rounding_occurs(self, tz_naive_fixture):
),
(["1823-01-01 00:00:01"], "floor", "1s", ["1823-01-01 00:00:01"]),
(["1823-01-01 00:00:01"], "ceil", "1s", ["1823-01-01 00:00:01"]),
- (["2018-01-01 00:15:00"], "ceil", "15T", ["2018-01-01 00:15:00"]),
- (["2018-01-01 00:15:00"], "floor", "15T", ["2018-01-01 00:15:00"]),
+ (["2018-01-01 00:15:00"], "ceil", "15min", ["2018-01-01 00:15:00"]),
+ (["2018-01-01 00:15:00"], "floor", "15min", ["2018-01-01 00:15:00"]),
(["1823-01-01 03:00:00"], "ceil", "3H", ["1823-01-01 03:00:00"]),
(["1823-01-01 03:00:00"], "floor", "3H", ["1823-01-01 03:00:00"]),
(
@@ -333,14 +333,14 @@ def test_hour(self):
tm.assert_index_equal(r1, r2)
def test_minute(self):
- dr = date_range(start=Timestamp("2000-02-27"), periods=5, freq="T")
+ dr = date_range(start=Timestamp("2000-02-27"), periods=5, freq="min")
r1 = pd.Index([x.to_julian_date() for x in dr])
r2 = dr.to_julian_date()
assert isinstance(r2, pd.Index) and r2.dtype == np.float64
tm.assert_index_equal(r1, r2)
def test_second(self):
- dr = date_range(start=Timestamp("2000-02-27"), periods=5, freq="S")
+ dr = date_range(start=Timestamp("2000-02-27"), periods=5, freq="s")
r1 = pd.Index([x.to_julian_date() for x in dr])
r2 = dr.to_julian_date()
assert isinstance(r2, pd.Index) and r2.dtype == np.float64
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index adf7acfa59e0c..2e7b38abf4212 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -269,7 +269,7 @@ def test_intersection(self, tz, sort):
# parametrize over both anchored and non-anchored freqs, as they
# have different code paths
- @pytest.mark.parametrize("freq", ["T", "B"])
+ @pytest.mark.parametrize("freq", ["min", "B"])
def test_intersection_empty(self, tz_aware_fixture, freq):
# empty same freq GH2129
tz = tz_aware_fixture
@@ -283,7 +283,7 @@ def test_intersection_empty(self, tz_aware_fixture, freq):
assert result.freq == rng.freq
# no overlap GH#33604
- check_freq = freq != "T" # We don't preserve freq on non-anchored offsets
+ check_freq = freq != "min" # We don't preserve freq on non-anchored offsets
result = rng[:3].intersection(rng[-3:])
tm.assert_index_equal(result, rng[:0])
if check_freq:
diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py
index 09b06ecd5630d..e9cc69aea0ed5 100644
--- a/pandas/tests/indexes/datetimes/test_timezones.py
+++ b/pandas/tests/indexes/datetimes/test_timezones.py
@@ -192,7 +192,7 @@ def test_dti_tz_convert_hour_overflow_dst_timestamps(self, tz):
expected = Index([9, 9, 9], dtype=np.int32)
tm.assert_index_equal(ut.hour, expected)
- @pytest.mark.parametrize("freq, n", [("H", 1), ("T", 60), ("S", 3600)])
+ @pytest.mark.parametrize("freq, n", [("H", 1), ("min", 60), ("s", 3600)])
def test_dti_tz_convert_trans_pos_plus_1__bug(self, freq, n):
# Regression test for tslib.tz_convert(vals, tz1, tz2).
# See https://github.com/pandas-dev/pandas/issues/4496 for details.
@@ -204,7 +204,7 @@ def test_dti_tz_convert_trans_pos_plus_1__bug(self, freq, n):
tm.assert_index_equal(idx.hour, Index(expected, dtype=np.int32))
def test_dti_tz_convert_dst(self):
- for freq, n in [("H", 1), ("T", 60), ("S", 3600)]:
+ for freq, n in [("H", 1), ("min", 60), ("s", 3600)]:
# Start DST
idx = date_range(
"2014-03-08 23:00", "2014-03-09 09:00", freq=freq, tz="UTC"
@@ -281,8 +281,8 @@ def test_tz_convert_roundtrip(self, tz_aware_fixture):
idx3 = date_range(start="2014-01-01", end="2014-03-01", freq="H", tz="UTC")
exp3 = date_range(start="2014-01-01", end="2014-03-01", freq="H")
- idx4 = date_range(start="2014-08-01", end="2014-10-31", freq="T", tz="UTC")
- exp4 = date_range(start="2014-08-01", end="2014-10-31", freq="T")
+ idx4 = date_range(start="2014-08-01", end="2014-10-31", freq="min", tz="UTC")
+ exp4 = date_range(start="2014-08-01", end="2014-10-31", freq="min")
for idx, expected in [(idx1, exp1), (idx2, exp2), (idx3, exp3), (idx4, exp4)]:
converted = idx.tz_convert(tz)
@@ -440,11 +440,11 @@ def test_dti_tz_localize_pass_dates_to_utc(self, tzstr):
@pytest.mark.parametrize("prefix", ["", "dateutil/"])
def test_dti_tz_localize(self, prefix):
tzstr = prefix + "US/Eastern"
- dti = date_range(start="1/1/2005", end="1/1/2005 0:00:30.256", freq="L")
+ dti = date_range(start="1/1/2005", end="1/1/2005 0:00:30.256", freq="ms")
dti2 = dti.tz_localize(tzstr)
dti_utc = date_range(
- start="1/1/2005 05:00", end="1/1/2005 5:00:30.256", freq="L", tz="utc"
+ start="1/1/2005 05:00", end="1/1/2005 5:00:30.256", freq="ms", tz="utc"
)
tm.assert_numpy_array_equal(dti2.values, dti_utc.values)
@@ -452,11 +452,11 @@ def test_dti_tz_localize(self, prefix):
dti3 = dti2.tz_convert(prefix + "US/Pacific")
tm.assert_numpy_array_equal(dti3.values, dti_utc.values)
- dti = date_range(start="11/6/2011 1:59", end="11/6/2011 2:00", freq="L")
+ dti = date_range(start="11/6/2011 1:59", end="11/6/2011 2:00", freq="ms")
with pytest.raises(pytz.AmbiguousTimeError, match="Cannot infer dst time"):
dti.tz_localize(tzstr)
- dti = date_range(start="3/13/2011 1:59", end="3/13/2011 2:00", freq="L")
+ dti = date_range(start="3/13/2011 1:59", end="3/13/2011 2:00", freq="ms")
with pytest.raises(pytz.NonExistentTimeError, match="2011-03-13 02:00:00"):
dti.tz_localize(tzstr)
@@ -474,14 +474,14 @@ def test_dti_tz_localize_utc_conversion(self, tz):
# 1) check for DST ambiguities
# 2) convert to UTC
- rng = date_range("3/10/2012", "3/11/2012", freq="30T")
+ rng = date_range("3/10/2012", "3/11/2012", freq="30min")
converted = rng.tz_localize(tz)
expected_naive = rng + pd.offsets.Hour(5)
tm.assert_numpy_array_equal(converted.asi8, expected_naive.asi8)
# DST ambiguity, this should fail
- rng = date_range("3/11/2012", "3/12/2012", freq="30T")
+ rng = date_range("3/11/2012", "3/12/2012", freq="30min")
# Is this really how it should fail??
with pytest.raises(pytz.NonExistentTimeError, match="2012-03-11 02:00:00"):
rng.tz_localize(tz)
@@ -490,7 +490,7 @@ def test_dti_tz_localize_roundtrip(self, tz_aware_fixture):
# note: this tz tests that a tz-naive index can be localized
# and de-localized successfully, when there are no DST transitions
# in the range.
- idx = date_range(start="2014-06-01", end="2014-08-30", freq="15T")
+ idx = date_range(start="2014-06-01", end="2014-08-30", freq="15min")
tz = tz_aware_fixture
localized = idx.tz_localize(tz)
# can't localize a tz-aware object
@@ -879,7 +879,7 @@ def test_dti_tz_conversion_freq(self, tz_naive_fixture):
# GH25241
t3 = DatetimeIndex(["2019-01-01 10:00"], freq="H")
assert t3.tz_localize(tz=tz_naive_fixture).freq == t3.freq
- t4 = DatetimeIndex(["2019-01-02 12:00"], tz="UTC", freq="T")
+ t4 = DatetimeIndex(["2019-01-02 12:00"], tz="UTC", freq="min")
assert t4.tz_convert(tz="UTC").freq == t4.freq
def test_drop_dst_boundary(self):
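The hunks above mechanically replace deprecated single-letter frequency aliases with their long-form equivalents. A minimal sketch of the mapping this diff applies, assuming a pandas version where the long-form aliases are accepted (they have been for some time; the single-letter forms are what is being deprecated):

```python
import pandas as pd

# Deprecated alias -> replacement used throughout this diff:
#   "T" -> "min", "S" -> "s", "L" -> "ms", "U" -> "us", "N" -> "ns"
idx = pd.date_range("2005-01-01", periods=4, freq="min")

# The new alias produces the same spacing the old "T" alias did:
# consecutive timestamps are exactly one minute apart.
step = idx[1] - idx[0]
```

The same substitution is safe in `period_range`, `timedelta_range`, and `DatetimeIndex` constructors, which is why the diff can apply it wholesale across the test suite.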
diff --git a/pandas/tests/indexes/period/methods/test_asfreq.py b/pandas/tests/indexes/period/methods/test_asfreq.py
index 4f5cfbade4d84..89ea4fb6472d0 100644
--- a/pandas/tests/indexes/period/methods/test_asfreq.py
+++ b/pandas/tests/indexes/period/methods/test_asfreq.py
@@ -16,57 +16,57 @@ def test_asfreq(self):
pi4 = period_range(freq="D", start="1/1/2001", end="1/1/2001")
pi5 = period_range(freq="H", start="1/1/2001", end="1/1/2001 00:00")
pi6 = period_range(freq="Min", start="1/1/2001", end="1/1/2001 00:00")
- pi7 = period_range(freq="S", start="1/1/2001", end="1/1/2001 00:00:00")
+ pi7 = period_range(freq="s", start="1/1/2001", end="1/1/2001 00:00:00")
- assert pi1.asfreq("Q", "S") == pi2
+ assert pi1.asfreq("Q", "s") == pi2
assert pi1.asfreq("Q", "s") == pi2
assert pi1.asfreq("M", "start") == pi3
assert pi1.asfreq("D", "StarT") == pi4
assert pi1.asfreq("H", "beGIN") == pi5
- assert pi1.asfreq("Min", "S") == pi6
- assert pi1.asfreq("S", "S") == pi7
-
- assert pi2.asfreq("A", "S") == pi1
- assert pi2.asfreq("M", "S") == pi3
- assert pi2.asfreq("D", "S") == pi4
- assert pi2.asfreq("H", "S") == pi5
- assert pi2.asfreq("Min", "S") == pi6
- assert pi2.asfreq("S", "S") == pi7
-
- assert pi3.asfreq("A", "S") == pi1
- assert pi3.asfreq("Q", "S") == pi2
- assert pi3.asfreq("D", "S") == pi4
- assert pi3.asfreq("H", "S") == pi5
- assert pi3.asfreq("Min", "S") == pi6
- assert pi3.asfreq("S", "S") == pi7
-
- assert pi4.asfreq("A", "S") == pi1
- assert pi4.asfreq("Q", "S") == pi2
- assert pi4.asfreq("M", "S") == pi3
- assert pi4.asfreq("H", "S") == pi5
- assert pi4.asfreq("Min", "S") == pi6
- assert pi4.asfreq("S", "S") == pi7
-
- assert pi5.asfreq("A", "S") == pi1
- assert pi5.asfreq("Q", "S") == pi2
- assert pi5.asfreq("M", "S") == pi3
- assert pi5.asfreq("D", "S") == pi4
- assert pi5.asfreq("Min", "S") == pi6
- assert pi5.asfreq("S", "S") == pi7
-
- assert pi6.asfreq("A", "S") == pi1
- assert pi6.asfreq("Q", "S") == pi2
- assert pi6.asfreq("M", "S") == pi3
- assert pi6.asfreq("D", "S") == pi4
- assert pi6.asfreq("H", "S") == pi5
- assert pi6.asfreq("S", "S") == pi7
-
- assert pi7.asfreq("A", "S") == pi1
- assert pi7.asfreq("Q", "S") == pi2
- assert pi7.asfreq("M", "S") == pi3
- assert pi7.asfreq("D", "S") == pi4
- assert pi7.asfreq("H", "S") == pi5
- assert pi7.asfreq("Min", "S") == pi6
+ assert pi1.asfreq("Min", "s") == pi6
+ assert pi1.asfreq("s", "s") == pi7
+
+ assert pi2.asfreq("A", "s") == pi1
+ assert pi2.asfreq("M", "s") == pi3
+ assert pi2.asfreq("D", "s") == pi4
+ assert pi2.asfreq("H", "s") == pi5
+ assert pi2.asfreq("Min", "s") == pi6
+ assert pi2.asfreq("s", "s") == pi7
+
+ assert pi3.asfreq("A", "s") == pi1
+ assert pi3.asfreq("Q", "s") == pi2
+ assert pi3.asfreq("D", "s") == pi4
+ assert pi3.asfreq("H", "s") == pi5
+ assert pi3.asfreq("Min", "s") == pi6
+ assert pi3.asfreq("s", "s") == pi7
+
+ assert pi4.asfreq("A", "s") == pi1
+ assert pi4.asfreq("Q", "s") == pi2
+ assert pi4.asfreq("M", "s") == pi3
+ assert pi4.asfreq("H", "s") == pi5
+ assert pi4.asfreq("Min", "s") == pi6
+ assert pi4.asfreq("s", "s") == pi7
+
+ assert pi5.asfreq("A", "s") == pi1
+ assert pi5.asfreq("Q", "s") == pi2
+ assert pi5.asfreq("M", "s") == pi3
+ assert pi5.asfreq("D", "s") == pi4
+ assert pi5.asfreq("Min", "s") == pi6
+ assert pi5.asfreq("s", "s") == pi7
+
+ assert pi6.asfreq("A", "s") == pi1
+ assert pi6.asfreq("Q", "s") == pi2
+ assert pi6.asfreq("M", "s") == pi3
+ assert pi6.asfreq("D", "s") == pi4
+ assert pi6.asfreq("H", "s") == pi5
+ assert pi6.asfreq("s", "s") == pi7
+
+ assert pi7.asfreq("A", "s") == pi1
+ assert pi7.asfreq("Q", "s") == pi2
+ assert pi7.asfreq("M", "s") == pi3
+ assert pi7.asfreq("D", "s") == pi4
+ assert pi7.asfreq("H", "s") == pi5
+ assert pi7.asfreq("Min", "s") == pi6
msg = "How must be one of S or E"
with pytest.raises(ValueError, match=msg):
diff --git a/pandas/tests/indexes/period/test_constructors.py b/pandas/tests/indexes/period/test_constructors.py
index 7d4d681659ab6..a3d5dc63ad7c8 100644
--- a/pandas/tests/indexes/period/test_constructors.py
+++ b/pandas/tests/indexes/period/test_constructors.py
@@ -110,16 +110,18 @@ def test_constructor_U(self):
def test_constructor_nano(self):
idx = period_range(
- start=Period(ordinal=1, freq="N"), end=Period(ordinal=4, freq="N"), freq="N"
+ start=Period(ordinal=1, freq="ns"),
+ end=Period(ordinal=4, freq="ns"),
+ freq="ns",
)
exp = PeriodIndex(
[
- Period(ordinal=1, freq="N"),
- Period(ordinal=2, freq="N"),
- Period(ordinal=3, freq="N"),
- Period(ordinal=4, freq="N"),
+ Period(ordinal=1, freq="ns"),
+ Period(ordinal=2, freq="ns"),
+ Period(ordinal=3, freq="ns"),
+ Period(ordinal=4, freq="ns"),
],
- freq="N",
+ freq="ns",
)
tm.assert_index_equal(idx, exp)
@@ -144,7 +146,7 @@ def test_constructor_corner(self):
def test_constructor_with_without_freq(self):
# GH53687
- start = Period("2002-01-01 00:00", freq="30T")
+ start = Period("2002-01-01 00:00", freq="30min")
exp = period_range(start=start, periods=5, freq=start.freq)
result = period_range(start=start, periods=5)
tm.assert_index_equal(exp, result)
@@ -413,7 +415,7 @@ def test_constructor_freq_mult(self):
with pytest.raises(ValueError, match=msg):
period_range("2011-01", periods=3, freq="0M")
- @pytest.mark.parametrize("freq", ["A", "M", "D", "T", "S"])
+ @pytest.mark.parametrize("freq", ["A", "M", "D", "min", "s"])
@pytest.mark.parametrize("mult", [1, 2, 3, 4, 5])
def test_constructor_freq_mult_dti_compat(self, mult, freq):
freqstr = str(mult) + freq
@@ -456,7 +458,7 @@ def test_constructor(self):
pi = period_range(freq="Min", start="1/1/2001", end="1/1/2001 23:59")
assert len(pi) == 24 * 60
- pi = period_range(freq="S", start="1/1/2001", end="1/1/2001 23:59:59")
+ pi = period_range(freq="s", start="1/1/2001", end="1/1/2001 23:59:59")
assert len(pi) == 24 * 60 * 60
with tm.assert_produces_warning(FutureWarning, match=msg):
@@ -506,7 +508,7 @@ def test_constructor(self):
Period("2006-12-31", ("w", 1))
@pytest.mark.parametrize(
- "freq", ["M", "Q", "A", "D", "B", "T", "S", "L", "U", "N", "H"]
+ "freq", ["M", "Q", "A", "D", "B", "min", "s", "ms", "us", "ns", "H"]
)
@pytest.mark.filterwarnings(
r"ignore:Period with BDay freq is deprecated:FutureWarning"
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index c0c6f3c977ceb..109a4a41e2841 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -174,8 +174,8 @@ def test_getitem_list_periods(self):
@pytest.mark.arm_slow
def test_getitem_seconds(self):
# GH#6716
- didx = date_range(start="2013/01/01 09:00:00", freq="S", periods=4000)
- pidx = period_range(start="2013/01/01 09:00:00", freq="S", periods=4000)
+ didx = date_range(start="2013/01/01 09:00:00", freq="s", periods=4000)
+ pidx = period_range(start="2013/01/01 09:00:00", freq="s", periods=4000)
for idx in [didx, pidx]:
# getitem against index should raise ValueError
@@ -579,7 +579,7 @@ def test_where_invalid_dtypes(self):
result = pi.where(mask, tdi)
tm.assert_index_equal(result, expected)
- dti = i2.to_timestamp("S")
+ dti = i2.to_timestamp("s")
expected = pd.Index([dti[0], dti[1]] + tail, dtype=object)
assert expected[0] is NaT
result = pi.where(mask, dti)
diff --git a/pandas/tests/indexes/period/test_partial_slicing.py b/pandas/tests/indexes/period/test_partial_slicing.py
index e52866abbe234..3a272f53091b5 100644
--- a/pandas/tests/indexes/period/test_partial_slicing.py
+++ b/pandas/tests/indexes/period/test_partial_slicing.py
@@ -84,7 +84,7 @@ def test_range_slice_day(self, make_range):
@pytest.mark.parametrize("make_range", [date_range, period_range])
def test_range_slice_seconds(self, make_range):
# GH#6716
- idx = make_range(start="2013/01/01 09:00:00", freq="S", periods=4000)
+ idx = make_range(start="2013/01/01 09:00:00", freq="s", periods=4000)
msg = "slice indices must be integers or None or have an __index__ method"
# slices against index should raise IndexError
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 6d8ae1793d5ec..7191175f16f73 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -164,7 +164,7 @@ def test_period_index_length(self):
period_range(freq="H", start="12/31/2001", end="1/1/2002 23:00"),
period_range(freq="Min", start="12/31/2001", end="1/1/2002 00:20"),
period_range(
- freq="S", start="12/31/2001 00:00:00", end="12/31/2001 00:05:00"
+ freq="s", start="12/31/2001 00:00:00", end="12/31/2001 00:05:00"
),
period_range(end=Period("2006-12-31", "W"), periods=10),
],
diff --git a/pandas/tests/indexes/period/test_resolution.py b/pandas/tests/indexes/period/test_resolution.py
index 7ecbde75cfa47..6c876b4f9366f 100644
--- a/pandas/tests/indexes/period/test_resolution.py
+++ b/pandas/tests/indexes/period/test_resolution.py
@@ -12,10 +12,10 @@ class TestResolution:
("M", "month"),
("D", "day"),
("H", "hour"),
- ("T", "minute"),
- ("S", "second"),
- ("L", "millisecond"),
- ("U", "microsecond"),
+ ("min", "minute"),
+ ("s", "second"),
+ ("ms", "millisecond"),
+ ("us", "microsecond"),
],
)
def test_resolution(self, freq, expected):
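As the `test_resolution` parametrization above shows, the resolution reported by a `PeriodIndex` is derived from its frequency alias, and the long-form aliases map to the same resolution names the deprecated single-letter ones did. A small sketch of that behavior, as exercised by the test:

```python
import pandas as pd

# "min" yields the same resolution the deprecated "T" alias did.
pi = pd.period_range("2000-01-01", periods=3, freq="min")
res = pi.resolution  # e.g. "minute"
```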
diff --git a/pandas/tests/indexes/period/test_setops.py b/pandas/tests/indexes/period/test_setops.py
index af89d712b5565..dd05210e417b0 100644
--- a/pandas/tests/indexes/period/test_setops.py
+++ b/pandas/tests/indexes/period/test_setops.py
@@ -62,10 +62,10 @@ def test_union(self, sort):
)
rng5 = PeriodIndex(
- ["2000-01-01 09:01", "2000-01-01 09:03", "2000-01-01 09:05"], freq="T"
+ ["2000-01-01 09:01", "2000-01-01 09:03", "2000-01-01 09:05"], freq="min"
)
other5 = PeriodIndex(
- ["2000-01-01 09:01", "2000-01-01 09:05", "2000-01-01 09:08"], freq="T"
+ ["2000-01-01 09:01", "2000-01-01 09:05", "2000-01-01 09:08"], freq="min"
)
expected5 = PeriodIndex(
[
@@ -74,7 +74,7 @@ def test_union(self, sort):
"2000-01-01 09:05",
"2000-01-01 09:08",
],
- freq="T",
+ freq="min",
)
rng6 = period_range("2000-01-01", freq="M", periods=7)
@@ -240,7 +240,7 @@ def test_intersection_cases(self, sort):
assert result.freq == "D"
# empty same freq
- rng = date_range("6/1/2000", "6/15/2000", freq="T")
+ rng = date_range("6/1/2000", "6/15/2000", freq="min")
result = rng[0:0].intersection(rng)
assert len(result) == 0
@@ -274,10 +274,10 @@ def test_difference(self, sort):
expected4 = rng4
rng5 = PeriodIndex(
- ["2000-01-01 09:03", "2000-01-01 09:01", "2000-01-01 09:05"], freq="T"
+ ["2000-01-01 09:03", "2000-01-01 09:01", "2000-01-01 09:05"], freq="min"
)
- other5 = PeriodIndex(["2000-01-01 09:01", "2000-01-01 09:05"], freq="T")
- expected5 = PeriodIndex(["2000-01-01 09:03"], freq="T")
+ other5 = PeriodIndex(["2000-01-01 09:01", "2000-01-01 09:05"], freq="min")
+ expected5 = PeriodIndex(["2000-01-01 09:03"], freq="min")
period_rng = [
"2000-02-01",
diff --git a/pandas/tests/indexes/period/test_tools.py b/pandas/tests/indexes/period/test_tools.py
index 13509bd58b4b8..18668fd357fd8 100644
--- a/pandas/tests/indexes/period/test_tools.py
+++ b/pandas/tests/indexes/period/test_tools.py
@@ -21,11 +21,11 @@ class TestPeriodRepresentation:
("D", "1970-01-01"),
("B", "1970-01-01"),
("H", "1970-01-01"),
- ("T", "1970-01-01"),
- ("S", "1970-01-01"),
- ("L", "1970-01-01"),
- ("U", "1970-01-01"),
- ("N", "1970-01-01"),
+ ("min", "1970-01-01"),
+ ("s", "1970-01-01"),
+ ("ms", "1970-01-01"),
+ ("us", "1970-01-01"),
+ ("ns", "1970-01-01"),
("M", "1970-01"),
("A", 1970),
],
diff --git a/pandas/tests/indexes/timedeltas/methods/test_shift.py b/pandas/tests/indexes/timedeltas/methods/test_shift.py
index f49af73f9f499..e33b8de3e6594 100644
--- a/pandas/tests/indexes/timedeltas/methods/test_shift.py
+++ b/pandas/tests/indexes/timedeltas/methods/test_shift.py
@@ -29,11 +29,11 @@ def test_tdi_shift_hours(self):
def test_tdi_shift_minutes(self):
# GH#9903
idx = TimedeltaIndex(["5 hours", "6 hours", "9 hours"], name="xxx")
- tm.assert_index_equal(idx.shift(0, freq="T"), idx)
+ tm.assert_index_equal(idx.shift(0, freq="min"), idx)
exp = TimedeltaIndex(["05:03:00", "06:03:00", "9:03:00"], name="xxx")
- tm.assert_index_equal(idx.shift(3, freq="T"), exp)
+ tm.assert_index_equal(idx.shift(3, freq="min"), exp)
exp = TimedeltaIndex(["04:57:00", "05:57:00", "8:57:00"], name="xxx")
- tm.assert_index_equal(idx.shift(-3, freq="T"), exp)
+ tm.assert_index_equal(idx.shift(-3, freq="min"), exp)
def test_tdi_shift_int(self):
# GH#8083
diff --git a/pandas/tests/indexes/timedeltas/test_scalar_compat.py b/pandas/tests/indexes/timedeltas/test_scalar_compat.py
index 9f470b40d1f58..22e50a974d9e1 100644
--- a/pandas/tests/indexes/timedeltas/test_scalar_compat.py
+++ b/pandas/tests/indexes/timedeltas/test_scalar_compat.py
@@ -104,23 +104,23 @@ def test_round(self):
# note that negative times round DOWN! so don't give whole numbers
for freq, s1, s2 in [
- ("N", t1, t2),
- ("U", t1, t2),
+ ("ns", t1, t2),
+ ("us", t1, t2),
(
- "L",
+ "ms",
t1a,
TimedeltaIndex(
["-1 days +00:00:00", "-2 days +23:58:58", "-2 days +23:57:56"]
),
),
(
- "S",
+ "s",
t1a,
TimedeltaIndex(
["-1 days +00:00:00", "-2 days +23:58:58", "-2 days +23:57:56"]
),
),
- ("12T", t1c, TimedeltaIndex(["-1 days", "-1 days", "-1 days"])),
+ ("12min", t1c, TimedeltaIndex(["-1 days", "-1 days", "-1 days"])),
("H", t1c, TimedeltaIndex(["-1 days", "-1 days", "-1 days"])),
("d", t1c, TimedeltaIndex([-1, -1, -1], unit="D")),
]:
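The rounding test above feeds the long-form aliases straight into `TimedeltaIndex.round`. A minimal sketch of that call with the new `"min"` alias, using made-up values rather than the test's fixtures:

```python
import pandas as pd

# Round timedeltas to the nearest minute using the long-form alias
# that replaces the deprecated "T".
tdi = pd.TimedeltaIndex(["1 min 40 s", "2 min 20 s"])
rounded = tdi.round("min")
```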
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta_range.py b/pandas/tests/indexes/timedeltas/test_timedelta_range.py
index 72bdc6da47d94..d0593b3230959 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta_range.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta_range.py
@@ -47,12 +47,19 @@ def test_timedelta_range(self):
[
("T", "minute"),
("t", "minute"),
+ ("S", "second"),
("L", "millisecond"),
("l", "millisecond"),
+ ("U", "microsecond"),
+ ("u", "microsecond"),
+ ("N", "nanosecond"),
+ ("n", "nanosecond"),
],
)
- def test_timedelta_units_T_L_deprecated(self, depr_unit, unit):
- depr_msg = f"Unit '{depr_unit}' is deprecated."
+ def test_timedelta_units_T_S_L_U_N_deprecated(self, depr_unit, unit):
+ depr_msg = (
+ f"'{depr_unit}' is deprecated and will be removed in a future version."
+ )
expected = to_timedelta(np.arange(5), unit=unit)
with tm.assert_produces_warning(FutureWarning, match=depr_msg):
@@ -60,7 +67,7 @@ def test_timedelta_units_T_L_deprecated(self, depr_unit, unit):
tm.assert_index_equal(result, expected)
@pytest.mark.parametrize(
- "periods, freq", [(3, "2D"), (5, "D"), (6, "19H12T"), (7, "16H"), (9, "12H")]
+ "periods, freq", [(3, "2D"), (5, "D"), (6, "19H12min"), (7, "16H"), (9, "12H")]
)
def test_linspace_behavior(self, periods, freq):
# GH 20976
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index b7108896f01ed..597bc2975268e 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -1353,7 +1353,7 @@ def test_period_can_hold_element(self, element):
with tm.assert_produces_warning(FutureWarning):
self.check_series_setitem(elem, pi, False)
- dti = pi.to_timestamp("S")[:-1]
+ dti = pi.to_timestamp("s")[:-1]
elem = element(dti)
with tm.assert_produces_warning(FutureWarning):
self.check_series_setitem(elem, pi, False)
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 8341dda1597bb..a2c6429f342da 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -3291,7 +3291,7 @@ def test_dates_display(self):
assert result[1].strip() == "NaT"
assert result[4].strip() == "2013-01-01 09:00:00.000004"
- x = Series(date_range("20130101 09:00:00", periods=5, freq="N"))
+ x = Series(date_range("20130101 09:00:00", periods=5, freq="ns"))
x.iloc[1] = np.nan
result = fmt.Datetime64Formatter(x).get_result()
assert result[0].strip() == "2013-01-01 09:00:00.000000000"
@@ -3342,7 +3342,7 @@ def test_period_format_and_strftime_default(self):
assert per.strftime(None)[1] is np.nan # ...except for NaTs
# Same test with nanoseconds freq
- per = pd.period_range("2003-01-01 12:01:01.123456789", periods=2, freq="n")
+ per = pd.period_range("2003-01-01 12:01:01.123456789", periods=2, freq="ns")
formatted = per.format()
assert (formatted == per.strftime(None)).all()
assert formatted[0] == "2003-01-01 12:01:01.123456789"
@@ -3352,19 +3352,19 @@ def test_period_custom(self):
# GH#46252 custom formatting directives %l (ms) and %u (us)
# 3 digits
- per = pd.period_range("2003-01-01 12:01:01.123", periods=2, freq="l")
+ per = pd.period_range("2003-01-01 12:01:01.123", periods=2, freq="ms")
formatted = per.format(date_format="%y %I:%M:%S (ms=%l us=%u ns=%n)")
assert formatted[0] == "03 12:01:01 (ms=123 us=123000 ns=123000000)"
assert formatted[1] == "03 12:01:01 (ms=124 us=124000 ns=124000000)"
# 6 digits
- per = pd.period_range("2003-01-01 12:01:01.123456", periods=2, freq="u")
+ per = pd.period_range("2003-01-01 12:01:01.123456", periods=2, freq="us")
formatted = per.format(date_format="%y %I:%M:%S (ms=%l us=%u ns=%n)")
assert formatted[0] == "03 12:01:01 (ms=123 us=123456 ns=123456000)"
assert formatted[1] == "03 12:01:01 (ms=123 us=123457 ns=123457000)"
# 9 digits
- per = pd.period_range("2003-01-01 12:01:01.123456789", periods=2, freq="n")
+ per = pd.period_range("2003-01-01 12:01:01.123456789", periods=2, freq="ns")
formatted = per.format(date_format="%y %I:%M:%S (ms=%l us=%u ns=%n)")
assert formatted[0] == "03 12:01:01 (ms=123 us=123456 ns=123456789)"
assert formatted[1] == "03 12:01:01 (ms=123 us=123456 ns=123456790)"
diff --git a/pandas/tests/io/generate_legacy_storage_files.py b/pandas/tests/io/generate_legacy_storage_files.py
index 974a2174cb03b..9643cf3258e64 100644
--- a/pandas/tests/io/generate_legacy_storage_files.py
+++ b/pandas/tests/io/generate_legacy_storage_files.py
@@ -142,7 +142,7 @@ def create_data():
"period": period_range("2013-01-01", freq="M", periods=10),
"float": Index(np.arange(10, dtype=np.float64)),
"uint": Index(np.arange(10, dtype=np.uint64)),
- "timedelta": timedelta_range("00:00:00", freq="30T", periods=10),
+ "timedelta": timedelta_range("00:00:00", freq="30min", periods=10),
}
index["range"] = RangeIndex(10)
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index 0ee53572877a5..fc2edc7559a48 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -32,7 +32,7 @@ def df_schema():
"A": [1, 2, 3, 4],
"B": ["a", "b", "c", "c"],
"C": pd.date_range("2016-01-01", freq="d", periods=4),
- "D": pd.timedelta_range("1H", periods=4, freq="T"),
+ "D": pd.timedelta_range("1H", periods=4, freq="min"),
},
index=pd.Index(range(4), name="idx"),
)
@@ -45,7 +45,7 @@ def df_table():
"A": [1, 2, 3, 4],
"B": ["a", "b", "c", "c"],
"C": pd.date_range("2016-01-01", freq="d", periods=4),
- "D": pd.timedelta_range("1H", periods=4, freq="T"),
+ "D": pd.timedelta_range("1H", periods=4, freq="min"),
"E": pd.Series(pd.Categorical(["a", "b", "c", "c"])),
"F": pd.Series(pd.Categorical(["a", "b", "c", "c"], ordered=True)),
"G": [1.0, 2.0, 3, 4.0],
@@ -695,7 +695,7 @@ def test_read_json_table_orient(self, index_nm, vals, recwarn):
@pytest.mark.parametrize("index_nm", [None, "idx", "index"])
@pytest.mark.parametrize(
"vals",
- [{"timedeltas": pd.timedelta_range("1H", periods=4, freq="T")}],
+ [{"timedeltas": pd.timedelta_range("1H", periods=4, freq="min")}],
)
def test_read_json_table_orient_raises(self, index_nm, vals, recwarn):
df = DataFrame(vals, index=pd.Index(range(4), name=index_nm))
diff --git a/pandas/tests/io/pytables/test_select.py b/pandas/tests/io/pytables/test_select.py
index 2f2349fe1168b..e387b1b607f63 100644
--- a/pandas/tests/io/pytables/test_select.py
+++ b/pandas/tests/io/pytables/test_select.py
@@ -64,7 +64,7 @@ def test_select_with_dups(setup_path):
df = DataFrame(
np.random.default_rng(2).standard_normal((10, 4)), columns=["A", "A", "B", "B"]
)
- df.index = date_range("20130101 9:30", periods=10, freq="T")
+ df.index = date_range("20130101 9:30", periods=10, freq="min")
with ensure_clean_store(setup_path) as store:
store.append("df", df)
@@ -95,7 +95,7 @@ def test_select_with_dups(setup_path):
],
axis=1,
)
- df.index = date_range("20130101 9:30", periods=10, freq="T")
+ df.index = date_range("20130101 9:30", periods=10, freq="min")
with ensure_clean_store(setup_path) as store:
store.append("df", df)
@@ -397,7 +397,7 @@ def test_select_iterator_complete_8014(setup_path):
# no iterator
with ensure_clean_store(setup_path) as store:
- expected = tm.makeTimeDataFrame(100064, "S")
+ expected = tm.makeTimeDataFrame(100064, "s")
_maybe_remove(store, "df")
store.append("df", expected)
@@ -428,7 +428,7 @@ def test_select_iterator_complete_8014(setup_path):
# with iterator, full range
with ensure_clean_store(setup_path) as store:
- expected = tm.makeTimeDataFrame(100064, "S")
+ expected = tm.makeTimeDataFrame(100064, "s")
_maybe_remove(store, "df")
store.append("df", expected)
@@ -466,7 +466,7 @@ def test_select_iterator_non_complete_8014(setup_path):
# with iterator, non complete range
with ensure_clean_store(setup_path) as store:
- expected = tm.makeTimeDataFrame(100064, "S")
+ expected = tm.makeTimeDataFrame(100064, "s")
_maybe_remove(store, "df")
store.append("df", expected)
@@ -496,7 +496,7 @@ def test_select_iterator_non_complete_8014(setup_path):
# with iterator, empty where
with ensure_clean_store(setup_path) as store:
- expected = tm.makeTimeDataFrame(100064, "S")
+ expected = tm.makeTimeDataFrame(100064, "s")
_maybe_remove(store, "df")
store.append("df", expected)
@@ -516,7 +516,7 @@ def test_select_iterator_many_empty_frames(setup_path):
# with iterator, range limited to the first chunk
with ensure_clean_store(setup_path) as store:
- expected = tm.makeTimeDataFrame(100000, "S")
+ expected = tm.makeTimeDataFrame(100000, "s")
_maybe_remove(store, "df")
store.append("df", expected)
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 9182e4c4e7674..1ab8aa086ff6e 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -979,7 +979,7 @@ def test_timestamp_nanoseconds(self, pa):
ver = "2.6"
else:
ver = "2.0"
- df = pd.DataFrame({"a": pd.date_range("2017-01-01", freq="1n", periods=10)})
+ df = pd.DataFrame({"a": pd.date_range("2017-01-01", freq="1ns", periods=10)})
check_round_trip(df, pa, write_kwargs={"version": ver})
def test_timezone_aware_index(self, request, pa, timezone_aware_date_list):
diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py
index 56d7900e2907d..0108079f1110f 100644
--- a/pandas/tests/plotting/test_converter.py
+++ b/pandas/tests/plotting/test_converter.py
@@ -255,7 +255,7 @@ def test_time_formatter(self, time, format_expected):
result = converter.TimeFormatter(None)(time)
assert result == format_expected
- @pytest.mark.parametrize("freq", ("B", "L", "S"))
+ @pytest.mark.parametrize("freq", ("B", "ms", "s"))
def test_dateindex_conversion(self, freq, dtc):
rtol = 10**-9
dateindex = tm.makeDateIndex(k=10, freq=freq)
diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index b3ae25ac9168f..c2626c2aa30c7 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -86,7 +86,7 @@ def test_frame_inferred(self):
def test_frame_inferred_n_gt_1(self):
# N > 1
- idx = date_range("2008-1-1 00:15:00", freq="15T", periods=10)
+ idx = date_range("2008-1-1 00:15:00", freq="15min", periods=10)
idx = DatetimeIndex(idx.values, freq=None)
df = DataFrame(
np.random.default_rng(2).standard_normal((len(idx), 3)), index=idx
@@ -116,7 +116,7 @@ def test_nonnumeric_exclude_error(self):
with pytest.raises(TypeError, match=msg):
df["A"].plot()
- @pytest.mark.parametrize("freq", ["S", "T", "H", "D", "W", "M", "Q", "A"])
+ @pytest.mark.parametrize("freq", ["s", "min", "H", "D", "W", "M", "Q", "A"])
def test_tsplot_period(self, freq):
idx = period_range("12/31/1999", freq=freq, periods=100)
ser = Series(np.random.default_rng(2).standard_normal(len(idx)), idx)
@@ -124,7 +124,7 @@ def test_tsplot_period(self, freq):
_check_plot_works(ser.plot, ax=ax)
@pytest.mark.parametrize(
- "freq", ["S", "T", "H", "D", "W", "M", "Q-DEC", "A", "1B30Min"]
+ "freq", ["s", "min", "H", "D", "W", "M", "Q-DEC", "A", "1B30Min"]
)
def test_tsplot_datetime(self, freq):
idx = date_range("12/31/1999", freq=freq, periods=100)
@@ -186,14 +186,14 @@ def check_format_of_first_point(ax, expected_string):
daily.plot(ax=ax)
check_format_of_first_point(ax, "t = 2014-01-01 y = 1.000000")
- @pytest.mark.parametrize("freq", ["S", "T", "H", "D", "W", "M", "Q", "A"])
+ @pytest.mark.parametrize("freq", ["s", "min", "H", "D", "W", "M", "Q", "A"])
def test_line_plot_period_series(self, freq):
idx = period_range("12/31/1999", freq=freq, periods=100)
ser = Series(np.random.default_rng(2).standard_normal(len(idx)), idx)
_check_plot_works(ser.plot, ser.index.freq)
@pytest.mark.parametrize(
- "frqncy", ["1S", "3S", "5T", "7H", "4D", "8W", "11M", "3A"]
+ "frqncy", ["1s", "3s", "5min", "7H", "4D", "8W", "11M", "3A"]
)
def test_line_plot_period_mlt_series(self, frqncy):
# test period index line plot for series with multiples (`mlt`) of the
@@ -203,14 +203,14 @@ def test_line_plot_period_mlt_series(self, frqncy):
_check_plot_works(s.plot, s.index.freq.rule_code)
@pytest.mark.parametrize(
- "freq", ["S", "T", "H", "D", "W", "M", "Q-DEC", "A", "1B30Min"]
+ "freq", ["s", "min", "H", "D", "W", "M", "Q-DEC", "A", "1B30Min"]
)
def test_line_plot_datetime_series(self, freq):
idx = date_range("12/31/1999", freq=freq, periods=100)
ser = Series(np.random.default_rng(2).standard_normal(len(idx)), idx)
_check_plot_works(ser.plot, ser.index.freq.rule_code)
- @pytest.mark.parametrize("freq", ["S", "T", "H", "D", "W", "M", "Q", "A"])
+ @pytest.mark.parametrize("freq", ["s", "min", "H", "D", "W", "M", "Q", "A"])
def test_line_plot_period_frame(self, freq):
idx = date_range("12/31/1999", freq=freq, periods=100)
df = DataFrame(
@@ -221,7 +221,7 @@ def test_line_plot_period_frame(self, freq):
_check_plot_works(df.plot, df.index.freq)
@pytest.mark.parametrize(
- "frqncy", ["1S", "3S", "5T", "7H", "4D", "8W", "11M", "3A"]
+ "frqncy", ["1s", "3s", "5min", "7H", "4D", "8W", "11M", "3A"]
)
def test_line_plot_period_mlt_frame(self, frqncy):
# test period index line plot for DataFrames with multiples (`mlt`)
@@ -238,7 +238,7 @@ def test_line_plot_period_mlt_frame(self, frqncy):
@pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning")
@pytest.mark.parametrize(
- "freq", ["S", "T", "H", "D", "W", "M", "Q-DEC", "A", "1B30Min"]
+ "freq", ["s", "min", "H", "D", "W", "M", "Q-DEC", "A", "1B30Min"]
)
def test_line_plot_datetime_frame(self, freq):
idx = date_range("12/31/1999", freq=freq, periods=100)
@@ -251,7 +251,7 @@ def test_line_plot_datetime_frame(self, freq):
_check_plot_works(df.plot, freq)
@pytest.mark.parametrize(
- "freq", ["S", "T", "H", "D", "W", "M", "Q-DEC", "A", "1B30Min"]
+ "freq", ["s", "min", "H", "D", "W", "M", "Q-DEC", "A", "1B30Min"]
)
def test_line_plot_inferred_freq(self, freq):
idx = date_range("12/31/1999", freq=freq, periods=100)
@@ -288,7 +288,7 @@ def test_plot_multiple_inferred_freq(self):
def test_uhf(self):
import pandas.plotting._matplotlib.converter as conv
- idx = date_range("2012-6-22 21:59:51.960928", freq="L", periods=500)
+ idx = date_range("2012-6-22 21:59:51.960928", freq="ms", periods=500)
df = DataFrame(
np.random.default_rng(2).standard_normal((len(idx), 2)), index=idx
)
@@ -306,7 +306,7 @@ def test_uhf(self):
assert xp == rs
def test_irreg_hf(self):
- idx = date_range("2012-6-22 21:59:51", freq="S", periods=10)
+ idx = date_range("2012-6-22 21:59:51", freq="s", periods=10)
df = DataFrame(
np.random.default_rng(2).standard_normal((len(idx), 2)), index=idx
)
@@ -320,7 +320,7 @@ def test_irreg_hf(self):
assert (np.fabs(diffs[1:] - [sec, sec * 2, sec]) < 1e-8).all()
def test_irreg_hf_object(self):
- idx = date_range("2012-6-22 21:59:51", freq="S", periods=10)
+ idx = date_range("2012-6-22 21:59:51", freq="s", periods=10)
df2 = DataFrame(
np.random.default_rng(2).standard_normal((len(idx), 2)), index=idx
)
@@ -815,7 +815,7 @@ def test_mixed_freq_alignment(self):
ts_data = np.random.default_rng(2).standard_normal(12)
ts = Series(ts_data, index=ts_ind)
- ts2 = ts.asfreq("T").interpolate()
+ ts2 = ts.asfreq("min").interpolate()
_, ax = mpl.pyplot.subplots()
ax = ts.plot(ax=ax)
@@ -838,7 +838,7 @@ def test_mixed_freq_lf_first(self):
mpl.pyplot.close(ax.get_figure())
def test_mixed_freq_lf_first_hourly(self):
- idxh = date_range("1/1/1999", periods=240, freq="T")
+ idxh = date_range("1/1/1999", periods=240, freq="min")
idxl = date_range("1/1/1999", periods=4, freq="H")
high = Series(np.random.default_rng(2).standard_normal(len(idxh)), idxh)
low = Series(np.random.default_rng(2).standard_normal(len(idxl)), idxl)
@@ -846,7 +846,7 @@ def test_mixed_freq_lf_first_hourly(self):
low.plot(ax=ax)
high.plot(ax=ax)
for line in ax.get_lines():
- assert PeriodIndex(data=line.get_xdata()).freq == "T"
+ assert PeriodIndex(data=line.get_xdata()).freq == "min"
@pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning")
def test_mixed_freq_irreg_period(self):
@@ -1058,8 +1058,8 @@ def test_from_resampling_area_line_mixed_high_to_low(self, kind1, kind2):
def test_mixed_freq_second_millisecond(self):
# GH 7772, GH 7760
- idxh = date_range("2014-07-01 09:00", freq="S", periods=50)
- idxl = date_range("2014-07-01 09:00", freq="100L", periods=500)
+ idxh = date_range("2014-07-01 09:00", freq="s", periods=50)
+ idxl = date_range("2014-07-01 09:00", freq="100ms", periods=500)
high = Series(np.random.default_rng(2).standard_normal(len(idxh)), idxh)
low = Series(np.random.default_rng(2).standard_normal(len(idxl)), idxl)
# high to low
@@ -1068,12 +1068,12 @@ def test_mixed_freq_second_millisecond(self):
low.plot(ax=ax)
assert len(ax.get_lines()) == 2
for line in ax.get_lines():
- assert PeriodIndex(data=line.get_xdata()).freq == "L"
+ assert PeriodIndex(data=line.get_xdata()).freq == "ms"
def test_mixed_freq_second_millisecond_low_to_high(self):
# GH 7772, GH 7760
- idxh = date_range("2014-07-01 09:00", freq="S", periods=50)
- idxl = date_range("2014-07-01 09:00", freq="100L", periods=500)
+ idxh = date_range("2014-07-01 09:00", freq="s", periods=50)
+ idxl = date_range("2014-07-01 09:00", freq="100ms", periods=500)
high = Series(np.random.default_rng(2).standard_normal(len(idxh)), idxh)
low = Series(np.random.default_rng(2).standard_normal(len(idxl)), idxl)
# low to high
@@ -1082,7 +1082,7 @@ def test_mixed_freq_second_millisecond_low_to_high(self):
high.plot(ax=ax)
assert len(ax.get_lines()) == 2
for line in ax.get_lines():
- assert PeriodIndex(data=line.get_xdata()).freq == "L"
+ assert PeriodIndex(data=line.get_xdata()).freq == "ms"
def test_irreg_dtypes(self):
# date
diff --git a/pandas/tests/reductions/test_stat_reductions.py b/pandas/tests/reductions/test_stat_reductions.py
index 55d78c516b6f3..3cea39fa75ece 100644
--- a/pandas/tests/reductions/test_stat_reductions.py
+++ b/pandas/tests/reductions/test_stat_reductions.py
@@ -41,7 +41,7 @@ def test_dt64_mean(self, tz_naive_fixture, box):
assert obj.mean(skipna=False) is pd.NaT
@pytest.mark.parametrize("box", [Series, pd.Index, PeriodArray])
- @pytest.mark.parametrize("freq", ["S", "H", "D", "W", "B"])
+ @pytest.mark.parametrize("freq", ["s", "H", "D", "W", "B"])
def test_period_mean(self, box, freq):
# GH#24757
dti = pd.date_range("2001-01-01", periods=11)
diff --git a/pandas/tests/resample/test_base.py b/pandas/tests/resample/test_base.py
index 7a76a21a6c579..c39268f3b9477 100644
--- a/pandas/tests/resample/test_base.py
+++ b/pandas/tests/resample/test_base.py
@@ -83,8 +83,8 @@ def test_asfreq_fill_value(series, create_index):
def test_resample_interpolate(frame):
# GH#12925
df = frame
- result = df.resample("1T").asfreq().interpolate()
- expected = df.resample("1T").interpolate()
+ result = df.resample("1min").asfreq().interpolate()
+ expected = df.resample("1min").interpolate()
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index dbda751e82113..66ecb93385a87 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -82,7 +82,7 @@ def test_custom_grouper(index, unit):
arr = [1] + [5] * 2592
idx = dti[0:-1:5]
idx = idx.append(dti[-1:])
- idx = DatetimeIndex(idx, freq="5T").as_unit(unit)
+ idx = DatetimeIndex(idx, freq="5min").as_unit(unit)
expect = Series(arr, index=idx)
# GH2763 - return input dtype if we can
@@ -140,21 +140,21 @@ def test_resample_integerarray(unit):
# GH 25580, resample on IntegerArray
ts = Series(
range(9),
- index=date_range("1/1/2000", periods=9, freq="T").as_unit(unit),
+ index=date_range("1/1/2000", periods=9, freq="min").as_unit(unit),
dtype="Int64",
)
- result = ts.resample("3T").sum()
+ result = ts.resample("3min").sum()
expected = Series(
[3, 12, 21],
- index=date_range("1/1/2000", periods=3, freq="3T").as_unit(unit),
+ index=date_range("1/1/2000", periods=3, freq="3min").as_unit(unit),
dtype="Int64",
)
tm.assert_series_equal(result, expected)
- result = ts.resample("3T").mean()
+ result = ts.resample("3min").mean()
expected = Series(
[1, 4, 7],
- index=date_range("1/1/2000", periods=3, freq="3T").as_unit(unit),
+ index=date_range("1/1/2000", periods=3, freq="3min").as_unit(unit),
dtype="Float64",
)
tm.assert_series_equal(result, expected)
@@ -493,7 +493,7 @@ def test_resample_how_method(unit):
),
)
expected.index = expected.index.as_unit(unit)
- tm.assert_series_equal(s.resample("10S").mean(), expected)
+ tm.assert_series_equal(s.resample("10s").mean(), expected)
def test_resample_extra_index_point(unit):
@@ -508,16 +508,16 @@ def test_resample_extra_index_point(unit):
def test_upsample_with_limit(unit):
- rng = date_range("1/1/2000", periods=3, freq="5t").as_unit(unit)
+ rng = date_range("1/1/2000", periods=3, freq="5min").as_unit(unit)
ts = Series(np.random.default_rng(2).standard_normal(len(rng)), rng)
- result = ts.resample("t").ffill(limit=2)
+ result = ts.resample("min").ffill(limit=2)
expected = ts.reindex(result.index, method="ffill", limit=2)
tm.assert_series_equal(result, expected)
-@pytest.mark.parametrize("freq", ["5D", "10H", "5Min", "10S"])
-@pytest.mark.parametrize("rule", ["Y", "3M", "15D", "30H", "15Min", "30S"])
+@pytest.mark.parametrize("freq", ["5D", "10H", "5Min", "10s"])
+@pytest.mark.parametrize("rule", ["Y", "3M", "15D", "30H", "15Min", "30s"])
def test_nearest_upsample_with_limit(tz_aware_fixture, freq, rule, unit):
# GH 33939
rng = date_range("1/1/2000", periods=3, freq=freq, tz=tz_aware_fixture).as_unit(
@@ -560,10 +560,10 @@ def test_resample_ohlc_result(unit):
index = index.union(date_range("4-15-2000", "5-15-2000", freq="h").as_unit(unit))
s = Series(range(len(index)), index=index)
- a = s.loc[:"4-15-2000"].resample("30T").ohlc()
+ a = s.loc[:"4-15-2000"].resample("30min").ohlc()
assert isinstance(a, DataFrame)
- b = s.loc[:"4-14-2000"].resample("30T").ohlc()
+ b = s.loc[:"4-14-2000"].resample("30min").ohlc()
assert isinstance(b, DataFrame)
@@ -744,7 +744,7 @@ def test_resample_axis1(unit):
tm.assert_frame_equal(result, expected)
-@pytest.mark.parametrize("freq", ["t", "5t", "15t", "30t", "4h", "12h"])
+@pytest.mark.parametrize("freq", ["min", "5min", "15min", "30min", "4h", "12h"])
def test_resample_anchored_ticks(freq, unit):
# If a fixed delta (5 minute, 4 hour) evenly divides a day, we should
# "anchor" the origin at midnight so we get regular intervals rather
@@ -1030,7 +1030,7 @@ def _create_series(values, timestamps, freq="D"):
def test_resample_daily_anchored(unit):
- rng = date_range("1/1/2000 0:00:00", periods=10000, freq="T").as_unit(unit)
+ rng = date_range("1/1/2000 0:00:00", periods=10000, freq="min").as_unit(unit)
ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng)
ts[:2] = np.nan # so results are the same
@@ -1140,12 +1140,12 @@ def test_nanosecond_resample_error():
# Resampling using pd.tseries.offsets.Nano as period
start = 1443707890427
exp_start = 1443707890400
- indx = date_range(start=pd.to_datetime(start), periods=10, freq="100n")
+ indx = date_range(start=pd.to_datetime(start), periods=10, freq="100ns")
ts = Series(range(len(indx)), index=indx)
r = ts.resample(pd.tseries.offsets.Nano(100))
result = r.agg("mean")
- exp_indx = date_range(start=pd.to_datetime(exp_start), periods=10, freq="100n")
+ exp_indx = date_range(start=pd.to_datetime(exp_start), periods=10, freq="100ns")
exp = Series(range(len(exp_indx)), index=exp_indx, dtype=float)
tm.assert_series_equal(result, exp)
@@ -1214,25 +1214,25 @@ def test_resample_anchored_multiday(label, sec):
#
# See: https://github.com/pandas-dev/pandas/issues/8683
- index1 = date_range("2014-10-14 23:06:23.206", periods=3, freq="400L")
- index2 = date_range("2014-10-15 23:00:00", periods=2, freq="2200L")
+ index1 = date_range("2014-10-14 23:06:23.206", periods=3, freq="400ms")
+ index2 = date_range("2014-10-15 23:00:00", periods=2, freq="2200ms")
index = index1.union(index2)
s = Series(np.random.default_rng(2).standard_normal(5), index=index)
# Ensure left closing works
- result = s.resample("2200L", label=label).mean()
+ result = s.resample("2200ms", label=label).mean()
assert result.index[-1] == Timestamp(f"2014-10-15 23:00:{sec}00")
def test_corner_cases(unit):
# miscellaneous test coverage
- rng = date_range("1/1/2000", periods=12, freq="t").as_unit(unit)
+ rng = date_range("1/1/2000", periods=12, freq="min").as_unit(unit)
ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng)
- result = ts.resample("5t", closed="right", label="left").mean()
- ex_index = date_range("1999-12-31 23:55", periods=4, freq="5t").as_unit(unit)
+ result = ts.resample("5min", closed="right", label="left").mean()
+ ex_index = date_range("1999-12-31 23:55", periods=4, freq="5min").as_unit(unit)
tm.assert_index_equal(result.index, ex_index)
@@ -1302,12 +1302,12 @@ def test_resample_median_bug_1688(dtype):
dtype=dtype,
)
- result = df.resample("T").apply(lambda x: x.mean())
- exp = df.asfreq("T")
+ result = df.resample("min").apply(lambda x: x.mean())
+ exp = df.asfreq("min")
tm.assert_frame_equal(result, exp)
- result = df.resample("T").median()
- exp = df.asfreq("T")
+ result = df.resample("min").median()
+ exp = df.asfreq("min")
tm.assert_frame_equal(result, exp)
@@ -1354,12 +1354,12 @@ def test_resample_consistency(unit):
# GH 6418
# resample with bfill / limit / reindex consistency
- i30 = date_range("2002-02-02", periods=4, freq="30T").as_unit(unit)
+ i30 = date_range("2002-02-02", periods=4, freq="30min").as_unit(unit)
s = Series(np.arange(4.0), index=i30)
s.iloc[2] = np.nan
# Upsample by factor 3 with reindex() and resample() methods:
- i10 = date_range(i30[0], i30[-1], freq="10T").as_unit(unit)
+ i10 = date_range(i30[0], i30[-1], freq="10min").as_unit(unit)
s10 = s.reindex(index=i10, method="bfill")
s10_2 = s.reindex(index=i10, method="bfill", limit=2)
@@ -1493,11 +1493,13 @@ def test_resample_group_info(n, k, unit):
# use a fixed seed to always have the same uniques
prng = np.random.default_rng(2)
- dr = date_range(start="2015-08-27", periods=n // 10, freq="T").as_unit(unit)
+ dr = date_range(start="2015-08-27", periods=n // 10, freq="min").as_unit(unit)
ts = Series(prng.integers(0, n // k, n).astype("int64"), index=prng.choice(dr, n))
- left = ts.resample("30T").nunique()
- ix = date_range(start=ts.index.min(), end=ts.index.max(), freq="30T").as_unit(unit)
+ left = ts.resample("30min").nunique()
+ ix = date_range(start=ts.index.min(), end=ts.index.max(), freq="30min").as_unit(
+ unit
+ )
vals = ts.values
bins = np.searchsorted(ix.values, ts.index, side="right")
@@ -1516,14 +1518,16 @@ def test_resample_group_info(n, k, unit):
def test_resample_size(unit):
n = 10000
- dr = date_range("2015-09-19", periods=n, freq="T").as_unit(unit)
+ dr = date_range("2015-09-19", periods=n, freq="min").as_unit(unit)
ts = Series(
np.random.default_rng(2).standard_normal(n),
index=np.random.default_rng(2).choice(dr, n),
)
- left = ts.resample("7T").size()
- ix = date_range(start=left.index.min(), end=ts.index.max(), freq="7T").as_unit(unit)
+ left = ts.resample("7min").size()
+ ix = date_range(start=left.index.min(), end=ts.index.max(), freq="7min").as_unit(
+ unit
+ )
bins = np.searchsorted(ix.values, ts.index.values, side="right")
val = np.bincount(bins, minlength=len(ix) + 1)[1:].astype("int64", copy=False)
@@ -1828,13 +1832,13 @@ def f(data, add_arg):
@pytest.mark.parametrize(
"n1, freq1, n2, freq2",
[
- (30, "S", 0.5, "Min"),
- (60, "S", 1, "Min"),
- (3600, "S", 1, "H"),
+ (30, "s", 0.5, "Min"),
+ (60, "s", 1, "Min"),
+ (3600, "s", 1, "H"),
(60, "Min", 1, "H"),
- (21600, "S", 0.25, "D"),
- (86400, "S", 1, "D"),
- (43200, "S", 0.5, "D"),
+ (21600, "s", 0.25, "D"),
+ (86400, "s", 1, "D"),
+ (43200, "s", 0.5, "D"),
(1440, "Min", 1, "D"),
(12, "H", 0.5, "D"),
(24, "H", 1, "D"),
diff --git a/pandas/tests/resample/test_period_index.py b/pandas/tests/resample/test_period_index.py
index 7559a85de7a6b..bc0c1984cf2f3 100644
--- a/pandas/tests/resample/test_period_index.py
+++ b/pandas/tests/resample/test_period_index.py
@@ -219,13 +219,13 @@ def test_resample_basic(self):
)
s[10:30] = np.nan
index = PeriodIndex(
- [Period("2013-01-01 00:00", "T"), Period("2013-01-01 00:01", "T")],
+ [Period("2013-01-01 00:00", "min"), Period("2013-01-01 00:01", "min")],
name="idx",
)
expected = Series([34.5, 79.5], index=index)
- result = s.to_period().resample("T", kind="period").mean()
+ result = s.to_period().resample("min", kind="period").mean()
tm.assert_series_equal(result, expected)
- result2 = s.resample("T", kind="period").mean()
+ result2 = s.resample("min", kind="period").mean()
tm.assert_series_equal(result2, expected)
@pytest.mark.parametrize(
@@ -325,11 +325,11 @@ def test_with_local_timezone_dateutil(self):
def test_resample_nonexistent_time_bin_edge(self):
# GH 19375
- index = date_range("2017-03-12", "2017-03-12 1:45:00", freq="15T")
+ index = date_range("2017-03-12", "2017-03-12 1:45:00", freq="15min")
s = Series(np.zeros(len(index)), index=index)
expected = s.tz_localize("US/Pacific")
- expected.index = pd.DatetimeIndex(expected.index, freq="900S")
- result = expected.resample("900S").mean()
+ expected.index = pd.DatetimeIndex(expected.index, freq="900s")
+ result = expected.resample("900s").mean()
tm.assert_series_equal(result, expected)
# GH 23742
@@ -350,10 +350,13 @@ def test_resample_nonexistent_time_bin_edge(self):
def test_resample_ambiguous_time_bin_edge(self):
# GH 10117
idx = date_range(
- "2014-10-25 22:00:00", "2014-10-26 00:30:00", freq="30T", tz="Europe/London"
+ "2014-10-25 22:00:00",
+ "2014-10-26 00:30:00",
+ freq="30min",
+ tz="Europe/London",
)
expected = Series(np.zeros(len(idx)), index=idx)
- result = expected.resample("30T").mean()
+ result = expected.resample("30min").mean()
tm.assert_series_equal(result, expected)
def test_fill_method_and_how_upsample(self):
@@ -438,7 +441,7 @@ def test_cant_fill_missing_dups(self):
@pytest.mark.parametrize("freq", ["5min"])
@pytest.mark.parametrize("kind", ["period", None, "timestamp"])
def test_resample_5minute(self, freq, kind):
- rng = period_range("1/1/2000", "1/5/2000", freq="T")
+ rng = period_range("1/1/2000", "1/5/2000", freq="min")
ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng)
expected = ts.to_timestamp().resample(freq).mean()
if kind != "timestamp":
@@ -505,7 +508,7 @@ def test_resample_tz_localized(self):
# #2245
idx = date_range(
- "2001-09-20 15:59", "2001-09-20 16:00", freq="T", tz="Australia/Sydney"
+ "2001-09-20 15:59", "2001-09-20 16:00", freq="min", tz="Australia/Sydney"
)
s = Series([1, 2], index=idx)
@@ -653,7 +656,7 @@ def test_default_right_closed_label(self, from_freq, to_freq):
@pytest.mark.parametrize(
"from_freq, to_freq",
- [("D", "MS"), ("Q", "AS"), ("M", "QS"), ("H", "D"), ("T", "H")],
+ [("D", "MS"), ("Q", "AS"), ("M", "QS"), ("H", "D"), ("min", "H")],
)
def test_default_left_closed_label(self, from_freq, to_freq):
idx = date_range(start="8/15/2012", periods=100, freq=from_freq)
@@ -800,7 +803,7 @@ def test_upsampling_ohlc(self, freq, period_mult, kind):
)
def test_resample_with_nat(self, periods, values, freq, expected_values):
# GH 13224
- index = PeriodIndex(periods, freq="S")
+ index = PeriodIndex(periods, freq="s")
frame = DataFrame(values, index=index)
expected_index = period_range(
@@ -812,7 +815,7 @@ def test_resample_with_nat(self, periods, values, freq, expected_values):
def test_resample_with_only_nat(self):
# GH 13224
- pi = PeriodIndex([pd.NaT] * 3, freq="S")
+ pi = PeriodIndex([pd.NaT] * 3, freq="s")
frame = DataFrame([2, 3, 5], index=pi, columns=["a"])
expected_index = PeriodIndex(data=[], freq=pi.freq)
expected = DataFrame(index=expected_index, columns=["a"], dtype="float64")
@@ -893,3 +896,22 @@ def test_sum_min_count(self):
[3.0, np.nan], index=PeriodIndex(["2018Q1", "2018Q2"], freq="Q-DEC")
)
tm.assert_series_equal(result, expected)
+
+ def test_resample_t_l_deprecated(self):
+ # GH 52536
+ msg_t = "'T' is deprecated and will be removed in a future version."
+ msg_l = "'L' is deprecated and will be removed in a future version."
+
+ with tm.assert_produces_warning(FutureWarning, match=msg_l):
+ rng_l = period_range(
+ "2020-01-01 00:00:00 00:00", "2020-01-01 00:00:00 00:01", freq="L"
+ )
+ ser = Series(np.arange(len(rng_l)), index=rng_l)
+
+ rng = period_range(
+ "2020-01-01 00:00:00 00:00", "2020-01-01 00:00:00 00:01", freq="min"
+ )
+ expected = Series([29999.5, 60000.0], index=rng)
+ with tm.assert_produces_warning(FutureWarning, match=msg_t):
+ result = ser.resample("T").mean()
+ tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/resample/test_resample_api.py b/pandas/tests/resample/test_resample_api.py
index 1cfcf555355b5..1b20a7b99d1d7 100644
--- a/pandas/tests/resample/test_resample_api.py
+++ b/pandas/tests/resample/test_resample_api.py
@@ -171,7 +171,7 @@ def test_attribute_access(test_frame):
def test_api_compat_before_use(attr):
# make sure that we are setting the binner
# on these attributes
- rng = date_range("1/1/2012", periods=100, freq="S")
+ rng = date_range("1/1/2012", periods=100, freq="s")
ts = Series(np.arange(len(rng)), index=rng)
rs = ts.resample("30s")
@@ -201,7 +201,7 @@ def tests_raises_on_nuisance(test_frame):
def test_downsample_but_actually_upsampling():
# this is reindex / asfreq
- rng = date_range("1/1/2012", periods=100, freq="S")
+ rng = date_range("1/1/2012", periods=100, freq="s")
ts = Series(np.arange(len(rng), dtype="int64"), index=rng)
result = ts.resample("20s").asfreq()
expected = Series(
@@ -216,7 +216,7 @@ def test_combined_up_downsampling_of_irregular():
# ts2.resample('2s').mean().ffill()
# preserve these semantics
- rng = date_range("1/1/2012", periods=100, freq="S")
+ rng = date_range("1/1/2012", periods=100, freq="s")
ts = Series(np.arange(len(rng)), index=rng)
ts2 = ts.iloc[[0, 1, 2, 3, 5, 7, 11, 15, 16, 25, 30]]
@@ -260,7 +260,7 @@ def test_combined_up_downsampling_of_irregular():
"2012-01-01 00:00:30",
],
dtype="datetime64[ns]",
- freq="2S",
+ freq="2s",
),
)
tm.assert_series_equal(result, expected)
@@ -294,7 +294,7 @@ def test_transform_frame(on):
def test_fillna():
# need to upsample here
- rng = date_range("1/1/2012", periods=10, freq="2S")
+ rng = date_range("1/1/2012", periods=10, freq="2s")
ts = Series(np.arange(len(rng), dtype="int64"), index=rng)
r = ts.resample("s")
@@ -344,11 +344,11 @@ def test_agg_consistency():
# similar aggregations with and w/o selection list
df = DataFrame(
np.random.default_rng(2).standard_normal((1000, 3)),
- index=date_range("1/1/2012", freq="S", periods=1000),
+ index=date_range("1/1/2012", freq="s", periods=1000),
columns=["A", "B", "C"],
)
- r = df.resample("3T")
+ r = df.resample("3min")
msg = r"Column\(s\) \['r1', 'r2'\] do not exist"
with pytest.raises(KeyError, match=msg):
@@ -359,11 +359,11 @@ def test_agg_consistency_int_str_column_mix():
# GH#39025
df = DataFrame(
np.random.default_rng(2).standard_normal((1000, 2)),
- index=date_range("1/1/2012", freq="S", periods=1000),
+ index=date_range("1/1/2012", freq="s", periods=1000),
columns=[1, "a"],
)
- r = df.resample("3T")
+ r = df.resample("3min")
msg = r"Column\(s\) \[2, 'b'\] do not exist"
with pytest.raises(KeyError, match=msg):
@@ -650,7 +650,7 @@ def test_try_aggregate_non_existing_column():
# Error as we don't have 'z' column
msg = r"Column\(s\) \['z'\] do not exist"
with pytest.raises(KeyError, match=msg):
- df.resample("30T").agg({"x": ["mean"], "y": ["median"], "z": ["sum"]})
+ df.resample("30min").agg({"x": ["mean"], "y": ["median"], "z": ["sum"]})
def test_agg_list_like_func_with_args():
diff --git a/pandas/tests/resample/test_resampler_grouper.py b/pandas/tests/resample/test_resampler_grouper.py
index 62b0bc2012af1..6f4f1154907dc 100644
--- a/pandas/tests/resample/test_resampler_grouper.py
+++ b/pandas/tests/resample/test_resampler_grouper.py
@@ -184,7 +184,7 @@ def test_groupby_with_origin():
def test_nearest():
# GH 17496
# Resample nearest
- index = date_range("1/1/2000", periods=3, freq="T")
+ index = date_range("1/1/2000", periods=3, freq="min")
result = Series(range(3), index=index).resample("20s").nearest()
expected = Series(
@@ -200,7 +200,7 @@ def test_nearest():
"2000-01-01 00:02:00",
],
dtype="datetime64[ns]",
- freq="20S",
+ freq="20s",
),
)
tm.assert_series_equal(result, expected)
@@ -321,7 +321,7 @@ def weighted_quantile(series, weights, q):
cutoff = cumsum.iloc[-1] * q
return series[cumsum >= cutoff].iloc[0]
- times = date_range("2017-6-23 18:00", periods=8, freq="15T", tz="UTC")
+ times = date_range("2017-6-23 18:00", periods=8, freq="15min", tz="UTC")
data = Series([1.0, 1, 1, 1, 1, 2, 2, 0], index=times)
weights = Series([160.0, 91, 65, 43, 24, 10, 1, 0], index=times)
diff --git a/pandas/tests/resample/test_time_grouper.py b/pandas/tests/resample/test_time_grouper.py
index 8c06f1e8a1e38..d7fdbc4fe5f08 100644
--- a/pandas/tests/resample/test_time_grouper.py
+++ b/pandas/tests/resample/test_time_grouper.py
@@ -305,10 +305,10 @@ def test_repr():
)
def test_upsample_sum(method, method_args, expected_values):
s = Series(1, index=date_range("2017", periods=2, freq="H"))
- resampled = s.resample("30T")
+ resampled = s.resample("30min")
index = pd.DatetimeIndex(
["2017-01-01T00:00:00", "2017-01-01T00:30:00", "2017-01-01T01:00:00"],
- freq="30T",
+ freq="30min",
)
result = methodcaller(method, **method_args)(resampled)
expected = Series(expected_values, index=index)
diff --git a/pandas/tests/resample/test_timedelta.py b/pandas/tests/resample/test_timedelta.py
index a119a911e5fbe..79b13673e70c6 100644
--- a/pandas/tests/resample/test_timedelta.py
+++ b/pandas/tests/resample/test_timedelta.py
@@ -14,10 +14,10 @@
def test_asfreq_bug():
df = DataFrame(data=[1, 3], index=[timedelta(), timedelta(minutes=3)])
- result = df.resample("1T").asfreq()
+ result = df.resample("1min").asfreq()
expected = DataFrame(
data=[1, np.nan, np.nan, 3],
- index=timedelta_range("0 day", periods=4, freq="1T"),
+ index=timedelta_range("0 day", periods=4, freq="1min"),
)
tm.assert_frame_equal(result, expected)
@@ -28,19 +28,19 @@ def test_resample_with_nat():
result = DataFrame({"value": [2, 3, 5]}, index).resample("1s").mean()
expected = DataFrame(
{"value": [2.5, np.nan, 5.0]},
- index=timedelta_range("0 day", periods=3, freq="1S"),
+ index=timedelta_range("0 day", periods=3, freq="1s"),
)
tm.assert_frame_equal(result, expected)
def test_resample_as_freq_with_subperiod():
# GH 13022
- index = timedelta_range("00:00:00", "00:10:00", freq="5T")
+ index = timedelta_range("00:00:00", "00:10:00", freq="5min")
df = DataFrame(data={"value": [1, 5, 10]}, index=index)
- result = df.resample("2T").asfreq()
+ result = df.resample("2min").asfreq()
expected_data = {"value": [1, np.nan, np.nan, np.nan, np.nan, 10]}
expected = DataFrame(
- data=expected_data, index=timedelta_range("00:00:00", "00:10:00", freq="2T")
+ data=expected_data, index=timedelta_range("00:00:00", "00:10:00", freq="2min")
)
tm.assert_frame_equal(result, expected)
@@ -71,9 +71,9 @@ def test_resample_single_period_timedelta():
def test_resample_timedelta_idempotency():
# GH 12072
- index = timedelta_range("0", periods=9, freq="10L")
+ index = timedelta_range("0", periods=9, freq="10ms")
series = Series(range(9), index=index)
- result = series.resample("10L").mean()
+ result = series.resample("10ms").mean()
expected = series.astype(float)
tm.assert_series_equal(result, expected)
@@ -128,13 +128,13 @@ def test_resample_timedelta_values():
@pytest.mark.parametrize(
"start, end, freq, resample_freq",
[
- ("8H", "21h59min50s", "10S", "3H"), # GH 30353 example
+ ("8H", "21h59min50s", "10s", "3H"), # GH 30353 example
("3H", "22H", "1H", "5H"),
("527D", "5006D", "3D", "10D"),
("1D", "10D", "1D", "2D"), # GH 13022 example
# tests that worked before GH 33498:
- ("8H", "21h59min50s", "10S", "2H"),
- ("0H", "21h59min50s", "10S", "3H"),
+ ("8H", "21h59min50s", "10s", "2H"),
+ ("0H", "21h59min50s", "10s", "3H"),
("10D", "85D", "D", "2D"),
],
)
@@ -155,7 +155,7 @@ def test_resample_with_timedelta_yields_no_empty_groups(duplicates):
# GH 10603
df = DataFrame(
np.random.default_rng(2).normal(size=(10000, 4)),
- index=timedelta_range(start="0s", periods=10000, freq="3906250n"),
+ index=timedelta_range(start="0s", periods=10000, freq="3906250ns"),
)
if duplicates:
# case with non-unique columns
@@ -196,11 +196,11 @@ def test_resample_closed_right():
# GH#45414
idx = pd.Index([pd.Timedelta(seconds=120 + i * 30) for i in range(10)])
ser = Series(range(10), index=idx)
- result = ser.resample("T", closed="right", label="right").sum()
+ result = ser.resample("min", closed="right", label="right").sum()
expected = Series(
[0, 3, 7, 11, 15, 9],
index=pd.TimedeltaIndex(
- [pd.Timedelta(seconds=120 + i * 60) for i in range(6)], freq="T"
+ [pd.Timedelta(seconds=120 + i * 60) for i in range(6)], freq="min"
),
)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/reshape/concat/test_datetimes.py b/pandas/tests/reshape/concat/test_datetimes.py
index a06fc5eede55c..2f50a19189987 100644
--- a/pandas/tests/reshape/concat/test_datetimes.py
+++ b/pandas/tests/reshape/concat/test_datetimes.py
@@ -112,7 +112,7 @@ def test_concat_datetime_timezone(self):
def test_concat_datetimeindex_freq(self):
# GH 3232
# Monotonic index result
- dr = date_range("01-Jan-2013", periods=100, freq="50L", tz="UTC")
+ dr = date_range("01-Jan-2013", periods=100, freq="50ms", tz="UTC")
data = list(range(100))
expected = DataFrame(data, index=dr)
result = concat([expected[:50], expected[50:]])
diff --git a/pandas/tests/scalar/interval/test_interval.py b/pandas/tests/scalar/interval/test_interval.py
index 192aaacbac2b5..a02dbf0a0413f 100644
--- a/pandas/tests/scalar/interval/test_interval.py
+++ b/pandas/tests/scalar/interval/test_interval.py
@@ -81,7 +81,7 @@ def test_hash(self, interval):
(Timedelta("0 days"), Timedelta("5 days"), Timedelta("5 days")),
(Timedelta("10 days"), Timedelta("10 days"), Timedelta("0 days")),
(Timedelta("1H10min"), Timedelta("5H5min"), Timedelta("3H55min")),
- (Timedelta("5S"), Timedelta("1H"), Timedelta("59min55S")),
+ (Timedelta("5s"), Timedelta("1H"), Timedelta("59min55s")),
],
)
def test_length(self, left, right, expected):
diff --git a/pandas/tests/scalar/period/test_asfreq.py b/pandas/tests/scalar/period/test_asfreq.py
index f6c1675766210..00285148a3c90 100644
--- a/pandas/tests/scalar/period/test_asfreq.py
+++ b/pandas/tests/scalar/period/test_asfreq.py
@@ -50,13 +50,13 @@ def test_to_timestamp_out_of_bounds(self):
def test_asfreq_corner(self):
val = Period(freq="A", year=2007)
- result1 = val.asfreq("5t")
- result2 = val.asfreq("t")
- expected = Period("2007-12-31 23:59", freq="t")
+ result1 = val.asfreq("5min")
+ result2 = val.asfreq("min")
+ expected = Period("2007-12-31 23:59", freq="min")
assert result1.ordinal == expected.ordinal
- assert result1.freqstr == "5T"
+ assert result1.freqstr == "5min"
assert result2.ordinal == expected.ordinal
- assert result2.freqstr == "T"
+ assert result2.freqstr == "min"
def test_conv_annual(self):
# frequency conversion tests: from Annual Frequency
@@ -87,10 +87,10 @@ def test_conv_annual(self):
freq="Min", year=2007, month=12, day=31, hour=23, minute=59
)
ival_A_to_S_start = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=0, second=0
+ freq="s", year=2007, month=1, day=1, hour=0, minute=0, second=0
)
ival_A_to_S_end = Period(
- freq="S", year=2007, month=12, day=31, hour=23, minute=59, second=59
+ freq="s", year=2007, month=12, day=31, hour=23, minute=59, second=59
)
ival_AJAN_to_D_end = Period(freq="D", year=2007, month=1, day=31)
@@ -100,33 +100,37 @@ def test_conv_annual(self):
ival_ANOV_to_D_end = Period(freq="D", year=2007, month=11, day=30)
ival_ANOV_to_D_start = Period(freq="D", year=2006, month=12, day=1)
- assert ival_A.asfreq("Q", "S") == ival_A_to_Q_start
+ assert ival_A.asfreq("Q", "s") == ival_A_to_Q_start
assert ival_A.asfreq("Q", "e") == ival_A_to_Q_end
assert ival_A.asfreq("M", "s") == ival_A_to_M_start
assert ival_A.asfreq("M", "E") == ival_A_to_M_end
- assert ival_A.asfreq("W", "S") == ival_A_to_W_start
+ assert ival_A.asfreq("W", "s") == ival_A_to_W_start
assert ival_A.asfreq("W", "E") == ival_A_to_W_end
with tm.assert_produces_warning(FutureWarning, match=bday_msg):
- assert ival_A.asfreq("B", "S") == ival_A_to_B_start
+ assert ival_A.asfreq("B", "s") == ival_A_to_B_start
assert ival_A.asfreq("B", "E") == ival_A_to_B_end
- assert ival_A.asfreq("D", "S") == ival_A_to_D_start
+ assert ival_A.asfreq("D", "s") == ival_A_to_D_start
assert ival_A.asfreq("D", "E") == ival_A_to_D_end
- assert ival_A.asfreq("H", "S") == ival_A_to_H_start
+ assert ival_A.asfreq("H", "s") == ival_A_to_H_start
assert ival_A.asfreq("H", "E") == ival_A_to_H_end
- assert ival_A.asfreq("min", "S") == ival_A_to_T_start
+ assert ival_A.asfreq("min", "s") == ival_A_to_T_start
assert ival_A.asfreq("min", "E") == ival_A_to_T_end
- assert ival_A.asfreq("T", "S") == ival_A_to_T_start
- assert ival_A.asfreq("T", "E") == ival_A_to_T_end
- assert ival_A.asfreq("S", "S") == ival_A_to_S_start
- assert ival_A.asfreq("S", "E") == ival_A_to_S_end
-
- assert ival_AJAN.asfreq("D", "S") == ival_AJAN_to_D_start
+ msg = "'T' is deprecated and will be removed in a future version."
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ assert ival_A.asfreq("T", "s") == ival_A_to_T_start
+ assert ival_A.asfreq("T", "E") == ival_A_to_T_end
+ msg = "'S' is deprecated and will be removed in a future version."
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ assert ival_A.asfreq("S", "S") == ival_A_to_S_start
+ assert ival_A.asfreq("S", "E") == ival_A_to_S_end
+
+ assert ival_AJAN.asfreq("D", "s") == ival_AJAN_to_D_start
assert ival_AJAN.asfreq("D", "E") == ival_AJAN_to_D_end
- assert ival_AJUN.asfreq("D", "S") == ival_AJUN_to_D_start
+ assert ival_AJUN.asfreq("D", "s") == ival_AJUN_to_D_start
assert ival_AJUN.asfreq("D", "E") == ival_AJUN_to_D_end
- assert ival_ANOV.asfreq("D", "S") == ival_ANOV_to_D_start
+ assert ival_ANOV.asfreq("D", "s") == ival_ANOV_to_D_start
assert ival_ANOV.asfreq("D", "E") == ival_ANOV_to_D_end
assert ival_A.asfreq("A") == ival_A
@@ -159,10 +163,10 @@ def test_conv_quarterly(self):
freq="Min", year=2007, month=3, day=31, hour=23, minute=59
)
ival_Q_to_S_start = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=0, second=0
+ freq="s", year=2007, month=1, day=1, hour=0, minute=0, second=0
)
ival_Q_to_S_end = Period(
- freq="S", year=2007, month=3, day=31, hour=23, minute=59, second=59
+ freq="s", year=2007, month=3, day=31, hour=23, minute=59, second=59
)
ival_QEJAN_to_D_start = Period(freq="D", year=2006, month=2, day=1)
@@ -174,25 +178,25 @@ def test_conv_quarterly(self):
assert ival_Q.asfreq("A") == ival_Q_to_A
assert ival_Q_end_of_year.asfreq("A") == ival_Q_to_A
- assert ival_Q.asfreq("M", "S") == ival_Q_to_M_start
+ assert ival_Q.asfreq("M", "s") == ival_Q_to_M_start
assert ival_Q.asfreq("M", "E") == ival_Q_to_M_end
- assert ival_Q.asfreq("W", "S") == ival_Q_to_W_start
+ assert ival_Q.asfreq("W", "s") == ival_Q_to_W_start
assert ival_Q.asfreq("W", "E") == ival_Q_to_W_end
with tm.assert_produces_warning(FutureWarning, match=bday_msg):
- assert ival_Q.asfreq("B", "S") == ival_Q_to_B_start
+ assert ival_Q.asfreq("B", "s") == ival_Q_to_B_start
assert ival_Q.asfreq("B", "E") == ival_Q_to_B_end
- assert ival_Q.asfreq("D", "S") == ival_Q_to_D_start
+ assert ival_Q.asfreq("D", "s") == ival_Q_to_D_start
assert ival_Q.asfreq("D", "E") == ival_Q_to_D_end
- assert ival_Q.asfreq("H", "S") == ival_Q_to_H_start
+ assert ival_Q.asfreq("H", "s") == ival_Q_to_H_start
assert ival_Q.asfreq("H", "E") == ival_Q_to_H_end
- assert ival_Q.asfreq("Min", "S") == ival_Q_to_T_start
+ assert ival_Q.asfreq("Min", "s") == ival_Q_to_T_start
assert ival_Q.asfreq("Min", "E") == ival_Q_to_T_end
- assert ival_Q.asfreq("S", "S") == ival_Q_to_S_start
- assert ival_Q.asfreq("S", "E") == ival_Q_to_S_end
+ assert ival_Q.asfreq("s", "s") == ival_Q_to_S_start
+ assert ival_Q.asfreq("s", "E") == ival_Q_to_S_end
- assert ival_QEJAN.asfreq("D", "S") == ival_QEJAN_to_D_start
+ assert ival_QEJAN.asfreq("D", "s") == ival_QEJAN_to_D_start
assert ival_QEJAN.asfreq("D", "E") == ival_QEJAN_to_D_end
- assert ival_QEJUN.asfreq("D", "S") == ival_QEJUN_to_D_start
+ assert ival_QEJUN.asfreq("D", "s") == ival_QEJUN_to_D_start
assert ival_QEJUN.asfreq("D", "E") == ival_QEJUN_to_D_end
assert ival_Q.asfreq("Q") == ival_Q
@@ -221,10 +225,10 @@ def test_conv_monthly(self):
freq="Min", year=2007, month=1, day=31, hour=23, minute=59
)
ival_M_to_S_start = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=0, second=0
+ freq="s", year=2007, month=1, day=1, hour=0, minute=0, second=0
)
ival_M_to_S_end = Period(
- freq="S", year=2007, month=1, day=31, hour=23, minute=59, second=59
+ freq="s", year=2007, month=1, day=31, hour=23, minute=59, second=59
)
assert ival_M.asfreq("A") == ival_M_to_A
@@ -232,19 +236,19 @@ def test_conv_monthly(self):
assert ival_M.asfreq("Q") == ival_M_to_Q
assert ival_M_end_of_quarter.asfreq("Q") == ival_M_to_Q
- assert ival_M.asfreq("W", "S") == ival_M_to_W_start
+ assert ival_M.asfreq("W", "s") == ival_M_to_W_start
assert ival_M.asfreq("W", "E") == ival_M_to_W_end
with tm.assert_produces_warning(FutureWarning, match=bday_msg):
- assert ival_M.asfreq("B", "S") == ival_M_to_B_start
+ assert ival_M.asfreq("B", "s") == ival_M_to_B_start
assert ival_M.asfreq("B", "E") == ival_M_to_B_end
- assert ival_M.asfreq("D", "S") == ival_M_to_D_start
+ assert ival_M.asfreq("D", "s") == ival_M_to_D_start
assert ival_M.asfreq("D", "E") == ival_M_to_D_end
- assert ival_M.asfreq("H", "S") == ival_M_to_H_start
+ assert ival_M.asfreq("H", "s") == ival_M_to_H_start
assert ival_M.asfreq("H", "E") == ival_M_to_H_end
- assert ival_M.asfreq("Min", "S") == ival_M_to_T_start
+ assert ival_M.asfreq("Min", "s") == ival_M_to_T_start
assert ival_M.asfreq("Min", "E") == ival_M_to_T_end
- assert ival_M.asfreq("S", "S") == ival_M_to_S_start
- assert ival_M.asfreq("S", "E") == ival_M_to_S_end
+ assert ival_M.asfreq("s", "s") == ival_M_to_S_start
+ assert ival_M.asfreq("s", "E") == ival_M_to_S_end
assert ival_M.asfreq("M") == ival_M
@@ -311,10 +315,10 @@ def test_conv_weekly(self):
freq="Min", year=2007, month=1, day=7, hour=23, minute=59
)
ival_W_to_S_start = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=0, second=0
+ freq="s", year=2007, month=1, day=1, hour=0, minute=0, second=0
)
ival_W_to_S_end = Period(
- freq="S", year=2007, month=1, day=7, hour=23, minute=59, second=59
+ freq="s", year=2007, month=1, day=7, hour=23, minute=59, second=59
)
assert ival_W.asfreq("A") == ival_W_to_A
@@ -327,33 +331,33 @@ def test_conv_weekly(self):
assert ival_W_end_of_month.asfreq("M") == ival_W_to_M_end_of_month
with tm.assert_produces_warning(FutureWarning, match=bday_msg):
- assert ival_W.asfreq("B", "S") == ival_W_to_B_start
+ assert ival_W.asfreq("B", "s") == ival_W_to_B_start
assert ival_W.asfreq("B", "E") == ival_W_to_B_end
- assert ival_W.asfreq("D", "S") == ival_W_to_D_start
+ assert ival_W.asfreq("D", "s") == ival_W_to_D_start
assert ival_W.asfreq("D", "E") == ival_W_to_D_end
- assert ival_WSUN.asfreq("D", "S") == ival_WSUN_to_D_start
+ assert ival_WSUN.asfreq("D", "s") == ival_WSUN_to_D_start
assert ival_WSUN.asfreq("D", "E") == ival_WSUN_to_D_end
- assert ival_WSAT.asfreq("D", "S") == ival_WSAT_to_D_start
+ assert ival_WSAT.asfreq("D", "s") == ival_WSAT_to_D_start
assert ival_WSAT.asfreq("D", "E") == ival_WSAT_to_D_end
- assert ival_WFRI.asfreq("D", "S") == ival_WFRI_to_D_start
+ assert ival_WFRI.asfreq("D", "s") == ival_WFRI_to_D_start
assert ival_WFRI.asfreq("D", "E") == ival_WFRI_to_D_end
- assert ival_WTHU.asfreq("D", "S") == ival_WTHU_to_D_start
+ assert ival_WTHU.asfreq("D", "s") == ival_WTHU_to_D_start
assert ival_WTHU.asfreq("D", "E") == ival_WTHU_to_D_end
- assert ival_WWED.asfreq("D", "S") == ival_WWED_to_D_start
+ assert ival_WWED.asfreq("D", "s") == ival_WWED_to_D_start
assert ival_WWED.asfreq("D", "E") == ival_WWED_to_D_end
- assert ival_WTUE.asfreq("D", "S") == ival_WTUE_to_D_start
+ assert ival_WTUE.asfreq("D", "s") == ival_WTUE_to_D_start
assert ival_WTUE.asfreq("D", "E") == ival_WTUE_to_D_end
- assert ival_WMON.asfreq("D", "S") == ival_WMON_to_D_start
+ assert ival_WMON.asfreq("D", "s") == ival_WMON_to_D_start
assert ival_WMON.asfreq("D", "E") == ival_WMON_to_D_end
- assert ival_W.asfreq("H", "S") == ival_W_to_H_start
+ assert ival_W.asfreq("H", "s") == ival_W_to_H_start
assert ival_W.asfreq("H", "E") == ival_W_to_H_end
- assert ival_W.asfreq("Min", "S") == ival_W_to_T_start
+ assert ival_W.asfreq("Min", "s") == ival_W_to_T_start
assert ival_W.asfreq("Min", "E") == ival_W_to_T_end
- assert ival_W.asfreq("S", "S") == ival_W_to_S_start
- assert ival_W.asfreq("S", "E") == ival_W_to_S_end
+ assert ival_W.asfreq("s", "s") == ival_W_to_S_start
+ assert ival_W.asfreq("s", "E") == ival_W_to_S_end
assert ival_W.asfreq("W") == ival_W
@@ -404,10 +408,10 @@ def test_conv_business(self):
freq="Min", year=2007, month=1, day=1, hour=23, minute=59
)
ival_B_to_S_start = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=0, second=0
+ freq="s", year=2007, month=1, day=1, hour=0, minute=0, second=0
)
ival_B_to_S_end = Period(
- freq="S", year=2007, month=1, day=1, hour=23, minute=59, second=59
+ freq="s", year=2007, month=1, day=1, hour=23, minute=59, second=59
)
assert ival_B.asfreq("A") == ival_B_to_A
@@ -421,12 +425,12 @@ def test_conv_business(self):
assert ival_B.asfreq("D") == ival_B_to_D
- assert ival_B.asfreq("H", "S") == ival_B_to_H_start
+ assert ival_B.asfreq("H", "s") == ival_B_to_H_start
assert ival_B.asfreq("H", "E") == ival_B_to_H_end
- assert ival_B.asfreq("Min", "S") == ival_B_to_T_start
+ assert ival_B.asfreq("Min", "s") == ival_B_to_T_start
assert ival_B.asfreq("Min", "E") == ival_B_to_T_end
- assert ival_B.asfreq("S", "S") == ival_B_to_S_start
- assert ival_B.asfreq("S", "E") == ival_B_to_S_end
+ assert ival_B.asfreq("s", "s") == ival_B_to_S_start
+ assert ival_B.asfreq("s", "E") == ival_B_to_S_end
with tm.assert_produces_warning(FutureWarning, match=bday_msg):
assert ival_B.asfreq("B") == ival_B
@@ -470,10 +474,10 @@ def test_conv_daily(self):
freq="Min", year=2007, month=1, day=1, hour=23, minute=59
)
ival_D_to_S_start = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=0, second=0
+ freq="s", year=2007, month=1, day=1, hour=0, minute=0, second=0
)
ival_D_to_S_end = Period(
- freq="S", year=2007, month=1, day=1, hour=23, minute=59, second=59
+ freq="s", year=2007, month=1, day=1, hour=23, minute=59, second=59
)
assert ival_D.asfreq("A") == ival_D_to_A
@@ -494,17 +498,17 @@ def test_conv_daily(self):
with tm.assert_produces_warning(FutureWarning, match=bday_msg):
assert ival_D_friday.asfreq("B") == ival_B_friday
- assert ival_D_saturday.asfreq("B", "S") == ival_B_friday
+ assert ival_D_saturday.asfreq("B", "s") == ival_B_friday
assert ival_D_saturday.asfreq("B", "E") == ival_B_monday
- assert ival_D_sunday.asfreq("B", "S") == ival_B_friday
+ assert ival_D_sunday.asfreq("B", "s") == ival_B_friday
assert ival_D_sunday.asfreq("B", "E") == ival_B_monday
- assert ival_D.asfreq("H", "S") == ival_D_to_H_start
+ assert ival_D.asfreq("H", "s") == ival_D_to_H_start
assert ival_D.asfreq("H", "E") == ival_D_to_H_end
- assert ival_D.asfreq("Min", "S") == ival_D_to_T_start
+ assert ival_D.asfreq("Min", "s") == ival_D_to_T_start
assert ival_D.asfreq("Min", "E") == ival_D_to_T_end
- assert ival_D.asfreq("S", "S") == ival_D_to_S_start
- assert ival_D.asfreq("S", "E") == ival_D_to_S_end
+ assert ival_D.asfreq("s", "s") == ival_D_to_S_start
+ assert ival_D.asfreq("s", "E") == ival_D_to_S_end
assert ival_D.asfreq("D") == ival_D
@@ -534,10 +538,10 @@ def test_conv_hourly(self):
freq="Min", year=2007, month=1, day=1, hour=0, minute=59
)
ival_H_to_S_start = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=0, second=0
+ freq="s", year=2007, month=1, day=1, hour=0, minute=0, second=0
)
ival_H_to_S_end = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=59, second=59
+ freq="s", year=2007, month=1, day=1, hour=0, minute=59, second=59
)
assert ival_H.asfreq("A") == ival_H_to_A
@@ -554,10 +558,10 @@ def test_conv_hourly(self):
assert ival_H.asfreq("B") == ival_H_to_B
assert ival_H_end_of_bus.asfreq("B") == ival_H_to_B
- assert ival_H.asfreq("Min", "S") == ival_H_to_T_start
+ assert ival_H.asfreq("Min", "s") == ival_H_to_T_start
assert ival_H.asfreq("Min", "E") == ival_H_to_T_end
- assert ival_H.asfreq("S", "S") == ival_H_to_S_start
- assert ival_H.asfreq("S", "E") == ival_H_to_S_end
+ assert ival_H.asfreq("s", "s") == ival_H_to_S_start
+ assert ival_H.asfreq("s", "E") == ival_H_to_S_end
assert ival_H.asfreq("H") == ival_H
@@ -597,10 +601,10 @@ def test_conv_minutely(self):
ival_T_to_H = Period(freq="H", year=2007, month=1, day=1, hour=0)
ival_T_to_S_start = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=0, second=0
+ freq="s", year=2007, month=1, day=1, hour=0, minute=0, second=0
)
ival_T_to_S_end = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=0, second=59
+ freq="s", year=2007, month=1, day=1, hour=0, minute=0, second=59
)
assert ival_T.asfreq("A") == ival_T_to_A
@@ -619,38 +623,38 @@ def test_conv_minutely(self):
assert ival_T.asfreq("H") == ival_T_to_H
assert ival_T_end_of_hour.asfreq("H") == ival_T_to_H
- assert ival_T.asfreq("S", "S") == ival_T_to_S_start
- assert ival_T.asfreq("S", "E") == ival_T_to_S_end
+ assert ival_T.asfreq("s", "s") == ival_T_to_S_start
+ assert ival_T.asfreq("s", "E") == ival_T_to_S_end
assert ival_T.asfreq("Min") == ival_T
def test_conv_secondly(self):
# frequency conversion tests: from Secondly Frequency"
- ival_S = Period(freq="S", year=2007, month=1, day=1, hour=0, minute=0, second=0)
+ ival_S = Period(freq="s", year=2007, month=1, day=1, hour=0, minute=0, second=0)
ival_S_end_of_year = Period(
- freq="S", year=2007, month=12, day=31, hour=23, minute=59, second=59
+ freq="s", year=2007, month=12, day=31, hour=23, minute=59, second=59
)
ival_S_end_of_quarter = Period(
- freq="S", year=2007, month=3, day=31, hour=23, minute=59, second=59
+ freq="s", year=2007, month=3, day=31, hour=23, minute=59, second=59
)
ival_S_end_of_month = Period(
- freq="S", year=2007, month=1, day=31, hour=23, minute=59, second=59
+ freq="s", year=2007, month=1, day=31, hour=23, minute=59, second=59
)
ival_S_end_of_week = Period(
- freq="S", year=2007, month=1, day=7, hour=23, minute=59, second=59
+ freq="s", year=2007, month=1, day=7, hour=23, minute=59, second=59
)
ival_S_end_of_day = Period(
- freq="S", year=2007, month=1, day=1, hour=23, minute=59, second=59
+ freq="s", year=2007, month=1, day=1, hour=23, minute=59, second=59
)
ival_S_end_of_bus = Period(
- freq="S", year=2007, month=1, day=1, hour=23, minute=59, second=59
+ freq="s", year=2007, month=1, day=1, hour=23, minute=59, second=59
)
ival_S_end_of_hour = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=59, second=59
+ freq="s", year=2007, month=1, day=1, hour=0, minute=59, second=59
)
ival_S_end_of_minute = Period(
- freq="S", year=2007, month=1, day=1, hour=0, minute=0, second=59
+ freq="s", year=2007, month=1, day=1, hour=0, minute=0, second=59
)
ival_S_to_A = Period(freq="A", year=2007)
@@ -681,12 +685,12 @@ def test_conv_secondly(self):
assert ival_S.asfreq("Min") == ival_S_to_T
assert ival_S_end_of_minute.asfreq("Min") == ival_S_to_T
- assert ival_S.asfreq("S") == ival_S
+ assert ival_S.asfreq("s") == ival_S
def test_conv_microsecond(self):
# GH#31475 Avoid floating point errors dropping the start_time to
# before the beginning of the Period
- per = Period("2020-01-30 15:57:27.576166", freq="U")
+ per = Period("2020-01-30 15:57:27.576166", freq="us")
assert per.ordinal == 1580399847576166
start = per.start_time
@@ -733,7 +737,7 @@ def test_asfreq_mult(self):
assert result.freq == expected.freq
# ordinal will not change
for freq in ["A", offsets.YearEnd()]:
- result = p.asfreq(freq, how="S")
+ result = p.asfreq(freq, how="s")
expected = Period("2007", freq="A")
assert result == expected
@@ -749,7 +753,7 @@ def test_asfreq_mult(self):
assert result.ordinal == expected.ordinal
assert result.freq == expected.freq
for freq in ["2M", offsets.MonthEnd(2)]:
- result = p.asfreq(freq, how="S")
+ result = p.asfreq(freq, how="s")
expected = Period("2007-01", freq="2M")
assert result == expected
@@ -765,7 +769,7 @@ def test_asfreq_mult(self):
assert result.ordinal == expected.ordinal
assert result.freq == expected.freq
for freq in ["2M", offsets.MonthEnd(2)]:
- result = p.asfreq(freq, how="S")
+ result = p.asfreq(freq, how="s")
expected = Period("2007-01", freq="2M")
assert result == expected
diff --git a/pandas/tests/scalar/period/test_period.py b/pandas/tests/scalar/period/test_period.py
index b1fb657bb2051..a152da15fe71f 100644
--- a/pandas/tests/scalar/period/test_period.py
+++ b/pandas/tests/scalar/period/test_period.py
@@ -102,17 +102,17 @@ def test_construction(self):
assert i1 == i3
i1 = Period("2007-01-01 09:00:00.001")
- expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1000), freq="L")
+ expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1000), freq="ms")
assert i1 == expected
- expected = Period("2007-01-01 09:00:00.001", freq="L")
+ expected = Period("2007-01-01 09:00:00.001", freq="ms")
assert i1 == expected
i1 = Period("2007-01-01 09:00:00.00101")
- expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1010), freq="U")
+ expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1010), freq="us")
assert i1 == expected
- expected = Period("2007-01-01 09:00:00.00101", freq="U")
+ expected = Period("2007-01-01 09:00:00.00101", freq="us")
assert i1 == expected
msg = "Must supply freq for ordinal value"
@@ -282,17 +282,17 @@ def test_period_constructor_offsets(self):
assert i1 == i5
i1 = Period("2007-01-01 09:00:00.001")
- expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1000), freq="L")
+ expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1000), freq="ms")
assert i1 == expected
- expected = Period("2007-01-01 09:00:00.001", freq="L")
+ expected = Period("2007-01-01 09:00:00.001", freq="ms")
assert i1 == expected
i1 = Period("2007-01-01 09:00:00.00101")
- expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1010), freq="U")
+ expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1010), freq="us")
assert i1 == expected
- expected = Period("2007-01-01 09:00:00.00101", freq="U")
+ expected = Period("2007-01-01 09:00:00.00101", freq="us")
assert i1 == expected
def test_invalid_arguments(self):
@@ -346,21 +346,21 @@ def test_constructor_infer_freq(self):
assert p.freq == "H"
p = Period("2007-01-01 07:10")
- assert p.freq == "T"
+ assert p.freq == "min"
p = Period("2007-01-01 07:10:15")
- assert p.freq == "S"
+ assert p.freq == "s"
p = Period("2007-01-01 07:10:15.123")
- assert p.freq == "L"
+ assert p.freq == "ms"
# We see that there are 6 digits after the decimal, so get microsecond
# even though they are all zeros.
p = Period("2007-01-01 07:10:15.123000")
- assert p.freq == "U"
+ assert p.freq == "us"
p = Period("2007-01-01 07:10:15.123400")
- assert p.freq == "U"
+ assert p.freq == "us"
def test_multiples(self):
result1 = Period("1989", freq="2A")
@@ -638,7 +638,7 @@ def test_to_timestamp(self):
assert end_ts == p.to_timestamp("D", how=a)
assert end_ts == p.to_timestamp("3D", how=a)
- from_lst = ["A", "Q", "M", "W", "B", "D", "H", "Min", "S"]
+ from_lst = ["A", "Q", "M", "W", "B", "D", "H", "Min", "s"]
def _ex(p):
if p.freq == "B":
@@ -664,10 +664,10 @@ def _ex(p):
result = p.to_timestamp("3H", how="end")
assert result == expected
- result = p.to_timestamp("T", how="end")
+ result = p.to_timestamp("min", how="end")
expected = Timestamp(1986, 1, 1) - Timedelta(1, "ns")
assert result == expected
- result = p.to_timestamp("2T", how="end")
+ result = p.to_timestamp("2min", how="end")
assert result == expected
result = p.to_timestamp(how="end")
@@ -677,13 +677,13 @@ def _ex(p):
expected = datetime(1985, 1, 1)
result = p.to_timestamp("H", how="start")
assert result == expected
- result = p.to_timestamp("T", how="start")
+ result = p.to_timestamp("min", how="start")
assert result == expected
- result = p.to_timestamp("S", how="start")
+ result = p.to_timestamp("s", how="start")
assert result == expected
result = p.to_timestamp("3H", how="start")
assert result == expected
- result = p.to_timestamp("5S", how="start")
+ result = p.to_timestamp("5s", how="start")
assert result == expected
def test_to_timestamp_business_end(self):
@@ -724,16 +724,16 @@ def test_to_timestamp_microsecond(self, ts, expected, freq):
("2000-12-15", None, "2000-12-15", "D"),
(
"2000-12-15 13:45:26.123456789",
- "N",
+ "ns",
"2000-12-15 13:45:26.123456789",
- "N",
+ "ns",
),
- ("2000-12-15 13:45:26.123456789", "U", "2000-12-15 13:45:26.123456", "U"),
- ("2000-12-15 13:45:26.123456", None, "2000-12-15 13:45:26.123456", "U"),
- ("2000-12-15 13:45:26.123456789", "L", "2000-12-15 13:45:26.123", "L"),
- ("2000-12-15 13:45:26.123", None, "2000-12-15 13:45:26.123", "L"),
- ("2000-12-15 13:45:26", "S", "2000-12-15 13:45:26", "S"),
- ("2000-12-15 13:45:26", "T", "2000-12-15 13:45", "T"),
+ ("2000-12-15 13:45:26.123456789", "us", "2000-12-15 13:45:26.123456", "us"),
+ ("2000-12-15 13:45:26.123456", None, "2000-12-15 13:45:26.123456", "us"),
+ ("2000-12-15 13:45:26.123456789", "ms", "2000-12-15 13:45:26.123", "ms"),
+ ("2000-12-15 13:45:26.123", None, "2000-12-15 13:45:26.123", "ms"),
+ ("2000-12-15 13:45:26", "s", "2000-12-15 13:45:26", "s"),
+ ("2000-12-15 13:45:26", "min", "2000-12-15 13:45", "min"),
("2000-12-15 13:45:26", "H", "2000-12-15 13:00", "H"),
("2000-12-15", "Y", "2000", "A-DEC"),
("2000-12-15", "Q", "2000Q4", "Q-DEC"),
@@ -757,7 +757,7 @@ def test_repr_nat(self):
def test_strftime(self):
# GH#3363
- p = Period("2000-1-1 12:34:12", freq="S")
+ p = Period("2000-1-1 12:34:12", freq="s")
res = p.strftime("%Y-%m-%d %H:%M:%S")
assert res == "2000-01-01 12:34:12"
assert isinstance(res, str)
@@ -801,7 +801,7 @@ def test_quarterly_negative_ordinals(self):
def test_freq_str(self):
i1 = Period("1982", freq="Min")
assert i1.freq == offsets.Minute()
- assert i1.freqstr == "T"
+ assert i1.freqstr == "min"
@pytest.mark.filterwarnings(
"ignore:Period with BDay freq is deprecated:FutureWarning"
@@ -812,11 +812,11 @@ def test_period_deprecated_freq(self):
"B": ["BUS", "BUSINESS", "BUSINESSLY", "WEEKDAY", "bus"],
"D": ["DAY", "DLY", "DAILY", "Day", "Dly", "Daily"],
"H": ["HR", "HOUR", "HRLY", "HOURLY", "hr", "Hour", "HRly"],
- "T": ["minute", "MINUTE", "MINUTELY", "minutely"],
- "S": ["sec", "SEC", "SECOND", "SECONDLY", "second"],
- "L": ["MILLISECOND", "MILLISECONDLY", "millisecond"],
- "U": ["MICROSECOND", "MICROSECONDLY", "microsecond"],
- "N": ["NANOSECOND", "NANOSECONDLY", "nanosecond"],
+ "min": ["minute", "MINUTE", "MINUTELY", "minutely"],
+ "s": ["sec", "SEC", "SECOND", "SECONDLY", "second"],
+ "ms": ["MILLISECOND", "MILLISECONDLY", "millisecond"],
+ "us": ["MICROSECOND", "MICROSECONDLY", "microsecond"],
+ "ns": ["NANOSECOND", "NANOSECONDLY", "nanosecond"],
}
msg = INVALID_FREQ_ERR_MSG
@@ -858,13 +858,13 @@ def test_outer_bounds_start_and_end_time(self, bound, offset, period_property):
def test_inner_bounds_start_and_end_time(self, bound, offset, period_property):
# GH #13346
period = TestPeriodProperties._period_constructor(bound, -offset)
- expected = period.to_timestamp().round(freq="S")
- assert getattr(period, period_property).round(freq="S") == expected
- expected = (bound - offset * Timedelta(1, unit="S")).floor("S")
- assert getattr(period, period_property).floor("S") == expected
+ expected = period.to_timestamp().round(freq="s")
+ assert getattr(period, period_property).round(freq="s") == expected
+ expected = (bound - offset * Timedelta(1, unit="s")).floor("s")
+ assert getattr(period, period_property).floor("s") == expected
def test_start_time(self):
- freq_lst = ["A", "Q", "M", "D", "H", "T", "S"]
+ freq_lst = ["A", "Q", "M", "D", "H", "min", "s"]
xp = datetime(2012, 1, 1)
for f in freq_lst:
p = Period("2012", freq=f)
@@ -1592,7 +1592,7 @@ def test_small_year_parsing():
def test_negone_ordinals():
- freqs = ["A", "M", "Q", "D", "H", "T", "S"]
+ freqs = ["A", "M", "Q", "D", "H", "min", "s"]
period = Period(ordinal=-1, freq="D")
for freq in freqs:
diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py
index 701cfdf157d26..f1d8acf47b29a 100644
--- a/pandas/tests/scalar/timedelta/test_timedelta.py
+++ b/pandas/tests/scalar/timedelta/test_timedelta.py
@@ -511,7 +511,6 @@ def test_nat_converters(self):
"seconds",
"sec",
"second",
- "S",
"Seconds",
"Sec",
"Second",
@@ -576,28 +575,35 @@ def test_unit_parser(self, unit, np_unit, wrapper):
dtype="m8[ns]",
)
# TODO(2.0): the desired output dtype may have non-nano resolution
- result = to_timedelta(wrapper(range(5)), unit=unit)
- tm.assert_index_equal(result, expected)
- result = TimedeltaIndex(wrapper(range(5)), unit=unit)
- tm.assert_index_equal(result, expected)
-
- str_repr = [f"{x}{unit}" for x in np.arange(5)]
- result = to_timedelta(wrapper(str_repr))
- tm.assert_index_equal(result, expected)
- result = to_timedelta(wrapper(str_repr))
- tm.assert_index_equal(result, expected)
-
- # scalar
- expected = Timedelta(np.timedelta64(2, np_unit).astype("timedelta64[ns]"))
- result = to_timedelta(2, unit=unit)
- assert result == expected
- result = Timedelta(2, unit=unit)
- assert result == expected
-        result = to_timedelta(f"2{unit}")
-        assert result == expected
-        result = Timedelta(f"2{unit}")
-        assert result == expected
+        msg = f"'{unit}' is deprecated and will be removed in a future version."
+ if (unit, np_unit) in (("u", "us"), ("U", "us"), ("n", "ns"), ("N", "ns")):
+ warn = FutureWarning
+ else:
+ warn = None
+ with tm.assert_produces_warning(warn, match=msg):
+ result = to_timedelta(wrapper(range(5)), unit=unit)
+ tm.assert_index_equal(result, expected)
+ result = TimedeltaIndex(wrapper(range(5)), unit=unit)
+ tm.assert_index_equal(result, expected)
+
+ str_repr = [f"{x}{unit}" for x in np.arange(5)]
+ result = to_timedelta(wrapper(str_repr))
+ tm.assert_index_equal(result, expected)
+ result = to_timedelta(wrapper(str_repr))
+ tm.assert_index_equal(result, expected)
+
+ # scalar
+ expected = Timedelta(np.timedelta64(2, np_unit).astype("timedelta64[ns]"))
+ result = to_timedelta(2, unit=unit)
+ assert result == expected
+ result = Timedelta(2, unit=unit)
+ assert result == expected
+
+ result = to_timedelta(f"2{unit}")
+ assert result == expected
+ result = Timedelta(f"2{unit}")
+ assert result == expected
@pytest.mark.parametrize("unit", ["Y", "y", "M"])
def test_unit_m_y_raises(self, unit):
@@ -647,25 +653,25 @@ def test_to_numpy_alias(self):
[
# This first case has s1, s2 being the same as t1,t2 below
(
- "N",
+ "ns",
Timedelta("1 days 02:34:56.789123456"),
Timedelta("-1 days 02:34:56.789123456"),
),
(
- "U",
+ "us",
Timedelta("1 days 02:34:56.789123000"),
Timedelta("-1 days 02:34:56.789123000"),
),
(
- "L",
+ "ms",
Timedelta("1 days 02:34:56.789000000"),
Timedelta("-1 days 02:34:56.789000000"),
),
- ("S", Timedelta("1 days 02:34:57"), Timedelta("-1 days 02:34:57")),
- ("2S", Timedelta("1 days 02:34:56"), Timedelta("-1 days 02:34:56")),
- ("5S", Timedelta("1 days 02:34:55"), Timedelta("-1 days 02:34:55")),
- ("T", Timedelta("1 days 02:35:00"), Timedelta("-1 days 02:35:00")),
- ("12T", Timedelta("1 days 02:36:00"), Timedelta("-1 days 02:36:00")),
+ ("s", Timedelta("1 days 02:34:57"), Timedelta("-1 days 02:34:57")),
+ ("2s", Timedelta("1 days 02:34:56"), Timedelta("-1 days 02:34:56")),
+ ("5s", Timedelta("1 days 02:34:55"), Timedelta("-1 days 02:34:55")),
+ ("min", Timedelta("1 days 02:35:00"), Timedelta("-1 days 02:35:00")),
+ ("12min", Timedelta("1 days 02:36:00"), Timedelta("-1 days 02:36:00")),
("H", Timedelta("1 days 03:00:00"), Timedelta("-1 days 03:00:00")),
("d", Timedelta("1 days"), Timedelta("-1 days")),
],
@@ -976,21 +982,21 @@ def test_implementation_limits(self):
def test_total_seconds_precision(self):
# GH 19458
- assert Timedelta("30S").total_seconds() == 30.0
+ assert Timedelta("30s").total_seconds() == 30.0
assert Timedelta("0").total_seconds() == 0.0
- assert Timedelta("-2S").total_seconds() == -2.0
- assert Timedelta("5.324S").total_seconds() == 5.324
- assert (Timedelta("30S").total_seconds() - 30.0) < 1e-20
- assert (30.0 - Timedelta("30S").total_seconds()) < 1e-20
+ assert Timedelta("-2s").total_seconds() == -2.0
+ assert Timedelta("5.324s").total_seconds() == 5.324
+ assert (Timedelta("30s").total_seconds() - 30.0) < 1e-20
+ assert (30.0 - Timedelta("30s").total_seconds()) < 1e-20
def test_resolution_string(self):
assert Timedelta(days=1).resolution_string == "D"
assert Timedelta(days=1, hours=6).resolution_string == "H"
- assert Timedelta(days=1, minutes=6).resolution_string == "T"
- assert Timedelta(days=1, seconds=6).resolution_string == "S"
- assert Timedelta(days=1, milliseconds=6).resolution_string == "L"
- assert Timedelta(days=1, microseconds=6).resolution_string == "U"
- assert Timedelta(days=1, nanoseconds=6).resolution_string == "N"
+ assert Timedelta(days=1, minutes=6).resolution_string == "min"
+ assert Timedelta(days=1, seconds=6).resolution_string == "s"
+ assert Timedelta(days=1, milliseconds=6).resolution_string == "ms"
+ assert Timedelta(days=1, microseconds=6).resolution_string == "us"
+ assert Timedelta(days=1, nanoseconds=6).resolution_string == "ns"
def test_resolution_deprecated(self):
# GH#21344
@@ -1007,8 +1013,8 @@ def test_resolution_deprecated(self):
@pytest.mark.parametrize(
"value, expected",
[
- (Timedelta("10S"), True),
- (Timedelta("-10S"), True),
+ (Timedelta("10s"), True),
+ (Timedelta("-10s"), True),
(Timedelta(10, unit="ns"), True),
(Timedelta(0, unit="ns"), False),
(Timedelta(-10, unit="ns"), True),
@@ -1032,3 +1038,23 @@ def test_timedelta_attribute_precision():
result += td.nanoseconds
expected = td._value
assert result == expected
+
+
+@pytest.mark.parametrize(
+ "unit,unit_depr",
+ [
+ ("min", "T"),
+ ("s", "S"),
+ ("ms", "L"),
+ ("ns", "N"),
+ ("us", "U"),
+ ],
+)
+def test_units_t_l_u_n_deprecated(unit, unit_depr):
+ # GH 52536
+ msg = f"'{unit_depr}' is deprecated and will be removed in a future version."
+
+ expected = Timedelta(1, unit=unit)
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ result = Timedelta(1, unit=unit_depr)
+ tm.assert_equal(result, expected)
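The `Timedelta` test changes above rely on the lowercase unit spellings parsing identically in both string and keyword form. A quick sketch using only the new aliases (which recent pandas versions accept without warning):

```python
import pandas as pd

# Lowercase unit aliases behave the same in strings and keyword arguments.
assert pd.Timedelta("90s") == pd.Timedelta(seconds=90)
assert pd.Timedelta("12min") == pd.Timedelta(minutes=12)
assert pd.Timedelta("1500us") == pd.Timedelta(microseconds=1500)
assert pd.Timedelta("30s").total_seconds() == 30.0
```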
diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py b/pandas/tests/scalar/timestamp/test_unary_ops.py
index 0a43db87674af..e501bd93bc1c6 100644
--- a/pandas/tests/scalar/timestamp/test_unary_ops.py
+++ b/pandas/tests/scalar/timestamp/test_unary_ops.py
@@ -46,7 +46,7 @@ def test_round_division_by_zero_raises(self):
("20130104 12:00:00", "D", "20130105"),
("2000-01-05 05:09:15.13", "D", "2000-01-05 00:00:00"),
("2000-01-05 05:09:15.13", "H", "2000-01-05 05:00:00"),
- ("2000-01-05 05:09:15.13", "S", "2000-01-05 05:09:15"),
+ ("2000-01-05 05:09:15.13", "s", "2000-01-05 05:09:15"),
],
)
def test_round_frequencies(self, timestamp, freq, expected):
@@ -137,10 +137,10 @@ def test_ceil_floor_edge(self, test_input, rounder, freq, expected):
"test_input, freq, expected",
[
("2018-01-01 00:02:06", "2s", "2018-01-01 00:02:06"),
- ("2018-01-01 00:02:00", "2T", "2018-01-01 00:02:00"),
- ("2018-01-01 00:04:00", "4T", "2018-01-01 00:04:00"),
- ("2018-01-01 00:15:00", "15T", "2018-01-01 00:15:00"),
- ("2018-01-01 00:20:00", "20T", "2018-01-01 00:20:00"),
+ ("2018-01-01 00:02:00", "2min", "2018-01-01 00:02:00"),
+ ("2018-01-01 00:04:00", "4min", "2018-01-01 00:04:00"),
+ ("2018-01-01 00:15:00", "15min", "2018-01-01 00:15:00"),
+ ("2018-01-01 00:20:00", "20min", "2018-01-01 00:20:00"),
("2018-01-01 03:00:00", "3H", "2018-01-01 03:00:00"),
],
)
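The `Timestamp` rounding cases above translate directly to the lowercase aliases. A small sketch of the same rounding behavior (values taken from the test's own fixtures):

```python
import pandas as pd

# Rounding a Timestamp with the lowercase "s" and "min" aliases.
ts = pd.Timestamp("2000-01-05 05:09:15.13")
assert ts.round("s") == pd.Timestamp("2000-01-05 05:09:15")
assert ts.round("min") == pd.Timestamp("2000-01-05 05:09:00")
```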
diff --git a/pandas/tests/series/accessors/test_dt_accessor.py b/pandas/tests/series/accessors/test_dt_accessor.py
index dd810a31c25af..c7f9e7da4f398 100644
--- a/pandas/tests/series/accessors/test_dt_accessor.py
+++ b/pandas/tests/series/accessors/test_dt_accessor.py
@@ -253,7 +253,7 @@ def test_dt_accessor_limited_display_api(self):
tm.assert_almost_equal(results, sorted(set(ok_for_dt + ok_for_dt_methods)))
# tzaware
- ser = Series(date_range("2015-01-01", "2016-01-01", freq="T"), name="xxx")
+ ser = Series(date_range("2015-01-01", "2016-01-01", freq="min"), name="xxx")
ser = ser.dt.tz_localize("UTC").dt.tz_convert("America/Chicago")
results = get_dir(ser)
tm.assert_almost_equal(results, sorted(set(ok_for_dt + ok_for_dt_methods)))
@@ -270,11 +270,11 @@ def test_dt_accessor_limited_display_api(self):
def test_dt_accessor_ambiguous_freq_conversions(self):
# GH#11295
# ambiguous time error on the conversions
- ser = Series(date_range("2015-01-01", "2016-01-01", freq="T"), name="xxx")
+ ser = Series(date_range("2015-01-01", "2016-01-01", freq="min"), name="xxx")
ser = ser.dt.tz_localize("UTC").dt.tz_convert("America/Chicago")
exp_values = date_range(
- "2015-01-01", "2016-01-01", freq="T", tz="UTC"
+ "2015-01-01", "2016-01-01", freq="min", tz="UTC"
).tz_convert("America/Chicago")
# freq not preserved by tz_localize above
exp_values = exp_values._with_freq(None)
@@ -385,7 +385,7 @@ def test_dt_round_tz_nonexistent(self, method, ts_str, freq):
with pytest.raises(pytz.NonExistentTimeError, match="2018-03-11 02:00:00"):
getattr(ser.dt, method)(freq, nonexistent="raise")
- @pytest.mark.parametrize("freq", ["ns", "U", "1000U"])
+ @pytest.mark.parametrize("freq", ["ns", "us", "1000us"])
def test_dt_round_nonnano_higher_resolution_no_op(self, freq):
# GH 52761
ser = Series(
@@ -611,7 +611,7 @@ def test_strftime_period_hours(self):
tm.assert_series_equal(result, expected)
def test_strftime_period_minutes(self):
- ser = Series(period_range("20130101", periods=4, freq="L"))
+ ser = Series(period_range("20130101", periods=4, freq="ms"))
result = ser.dt.strftime("%Y/%m/%d %H:%M:%S.%l")
expected = Series(
[
@@ -784,8 +784,8 @@ class TestSeriesPeriodValuesDtAccessor:
Period("2016-01-01 00:01:00", freq="M"),
],
[
- Period("2016-01-01 00:00:00", freq="S"),
- Period("2016-01-01 00:00:01", freq="S"),
+ Period("2016-01-01 00:00:00", freq="s"),
+ Period("2016-01-01 00:00:01", freq="s"),
],
],
)
diff --git a/pandas/tests/series/indexing/test_datetime.py b/pandas/tests/series/indexing/test_datetime.py
index 8daaf087085c6..a388f0f80fa94 100644
--- a/pandas/tests/series/indexing/test_datetime.py
+++ b/pandas/tests/series/indexing/test_datetime.py
@@ -355,7 +355,7 @@ def test_indexing_over_size_cutoff_period_index(monkeypatch):
monkeypatch.setattr(libindex, "_SIZE_CUTOFF", 1000)
n = 1100
- idx = period_range("1/1/2000", freq="T", periods=n)
+ idx = period_range("1/1/2000", freq="min", periods=n)
assert idx._engine.over_size_threshold
s = Series(np.random.default_rng(2).standard_normal(len(idx)), index=idx)
@@ -455,7 +455,7 @@ def test_getitem_str_month_with_datetimeindex():
expected = ts["2013-05"]
tm.assert_series_equal(expected, ts)
- idx = date_range(start="2013-05-31 00:00", end="2013-05-31 23:59", freq="S")
+ idx = date_range(start="2013-05-31 00:00", end="2013-05-31 23:59", freq="s")
ts = Series(range(len(idx)), index=idx)
expected = ts["2013-05"]
tm.assert_series_equal(expected, ts)
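The indexing and accessor tests above build their fixtures with `date_range(..., freq="min")` / `freq="s"`. A minimal sketch of that construction plus frequency inference (the inferred spelling depends on the installed pandas version, so the assertion below hedges on both):

```python
import pandas as pd

idx = pd.date_range("2015-01-01", periods=3, freq="min")
assert idx[-1] == pd.Timestamp("2015-01-01 00:02:00")
# infer_freq reports "T" on older pandas and "min" on newer versions
assert pd.infer_freq(idx) in ("min", "T")
```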
diff --git a/pandas/tests/tseries/frequencies/test_freq_code.py b/pandas/tests/tseries/frequencies/test_freq_code.py
index e961fdc295c96..f25477afa2626 100644
--- a/pandas/tests/tseries/frequencies/test_freq_code.py
+++ b/pandas/tests/tseries/frequencies/test_freq_code.py
@@ -8,10 +8,12 @@
)
from pandas._libs.tslibs.dtypes import _attrname_to_abbrevs
+import pandas._testing as tm
+
@pytest.mark.parametrize(
"freqstr,exp_freqstr",
- [("D", "D"), ("W", "D"), ("M", "D"), ("S", "S"), ("T", "S"), ("H", "S")],
+ [("D", "D"), ("W", "D"), ("M", "D"), ("s", "s"), ("min", "s"), ("H", "s")],
)
def test_get_to_timestamp_base(freqstr, exp_freqstr):
off = to_offset(freqstr)
@@ -30,18 +32,18 @@ def test_get_to_timestamp_base(freqstr, exp_freqstr):
("M", "month"),
("D", "day"),
("H", "hour"),
- ("T", "minute"),
- ("S", "second"),
- ("L", "millisecond"),
- ("U", "microsecond"),
- ("N", "nanosecond"),
+ ("min", "minute"),
+ ("s", "second"),
+ ("ms", "millisecond"),
+ ("us", "microsecond"),
+ ("ns", "nanosecond"),
],
)
def test_get_attrname_from_abbrev(freqstr, expected):
assert Resolution.get_reso_from_freqstr(freqstr).attrname == expected
-@pytest.mark.parametrize("freq", ["D", "H", "T", "S", "L", "U", "N"])
+@pytest.mark.parametrize("freq", ["D", "H", "min", "s", "ms", "us", "ns"])
def test_get_freq_roundtrip2(freq):
obj = Resolution.get_reso_from_freqstr(freq)
result = _attrname_to_abbrevs[obj.attrname]
@@ -51,12 +53,12 @@ def test_get_freq_roundtrip2(freq):
@pytest.mark.parametrize(
"args,expected",
[
- ((1.5, "T"), (90, "S")),
- ((62.4, "T"), (3744, "S")),
- ((1.04, "H"), (3744, "S")),
+ ((1.5, "min"), (90, "s")),
+ ((62.4, "min"), (3744, "s")),
+ ((1.04, "H"), (3744, "s")),
((1, "D"), (1, "D")),
- ((0.342931, "H"), (1234551600, "U")),
- ((1.2345, "D"), (106660800, "L")),
+ ((0.342931, "H"), (1234551600, "us")),
+ ((1.2345, "D"), (106660800, "ms")),
],
)
def test_resolution_bumping(args, expected):
@@ -69,7 +71,7 @@ def test_resolution_bumping(args, expected):
@pytest.mark.parametrize(
"args",
[
- (0.5, "N"),
+ (0.5, "ns"),
# Too much precision in the input can prevent.
(0.3429324798798269273987982, "H"),
],
@@ -95,3 +97,12 @@ def test_compatibility(freqstr, expected):
ts_np = np.datetime64("2021-01-01T08:00:00.00")
do = to_offset(freqstr)
assert ts_np + do == np.datetime64(expected)
+
+
+@pytest.mark.parametrize("freq", ["T", "S", "L", "N", "U"])
+def test_units_t_l_deprecated_from__attrname_to_abbrevs(freq):
+ # GH 52536
+ msg = f"'{freq}' is deprecated and will be removed in a future version."
+
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ Resolution.get_reso_from_freqstr(freq)
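`test_resolution_bumping` above checks that fractional frequency strings are resolved into an integer multiple of a finer unit (e.g. `(1.5, "min")` becomes `(90, "s")`). The same behavior can be sketched through the public `to_offset` helper, using only the new lowercase aliases:

```python
from pandas.tseries.frequencies import to_offset

# A fractional frequency string is bumped to a finer resolution:
# 1.5 minutes resolves to the same offset as 90 seconds.
assert to_offset("1.5min") == to_offset("90s")
```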
diff --git a/pandas/tests/tseries/frequencies/test_inference.py b/pandas/tests/tseries/frequencies/test_inference.py
index d811b2cf12c19..ab7bda2fa5792 100644
--- a/pandas/tests/tseries/frequencies/test_inference.py
+++ b/pandas/tests/tseries/frequencies/test_inference.py
@@ -39,11 +39,11 @@
params=[
(timedelta(1), "D"),
(timedelta(hours=1), "H"),
- (timedelta(minutes=1), "T"),
- (timedelta(seconds=1), "S"),
- (np.timedelta64(1, "ns"), "N"),
- (timedelta(microseconds=1), "U"),
- (timedelta(microseconds=1000), "L"),
+ (timedelta(minutes=1), "min"),
+ (timedelta(seconds=1), "s"),
+ (np.timedelta64(1, "ns"), "ns"),
+ (timedelta(microseconds=1), "us"),
+ (timedelta(microseconds=1000), "ms"),
]
)
def base_delta_code_pair(request):
@@ -254,7 +254,8 @@ def test_infer_freq_tz_series(tz_naive_fixture):
],
)
@pytest.mark.parametrize(
- "freq", ["H", "3H", "10T", "3601S", "3600001L", "3600000001U", "3600000000001N"]
+ "freq",
+ ["H", "3H", "10min", "3601s", "3600001ms", "3600000001us", "3600000000001ns"],
)
def test_infer_freq_tz_transition(tz_naive_fixture, date_pair, freq):
# see gh-8772
@@ -437,7 +438,7 @@ def test_series_inconvertible_string():
frequencies.infer_freq(Series(["foo", "bar"]))
-@pytest.mark.parametrize("freq", [None, "L"])
+@pytest.mark.parametrize("freq", [None, "ms"])
def test_series_period_index(freq):
# see gh-6407
#
@@ -449,7 +450,7 @@ def test_series_period_index(freq):
frequencies.infer_freq(s)
-@pytest.mark.parametrize("freq", ["M", "L", "S"])
+@pytest.mark.parametrize("freq", ["M", "ms", "s"])
def test_series_datetime_index(freq):
s = Series(date_range("20130101", periods=10, freq=freq))
inferred = frequencies.infer_freq(s)
@@ -530,12 +531,12 @@ def test_infer_freq_non_nano():
arr = np.arange(10).astype(np.int64).view("M8[s]")
dta = DatetimeArray._simple_new(arr, dtype=arr.dtype)
res = frequencies.infer_freq(dta)
- assert res == "S"
+ assert res == "s"
arr2 = arr.view("m8[ms]")
tda = TimedeltaArray._simple_new(arr2, dtype=arr2.dtype)
res2 = frequencies.infer_freq(tda)
- assert res2 == "L"
+ assert res2 == "ms"
def test_infer_freq_non_nano_tzaware(tz_aware_fixture):
diff --git a/pandas/tests/tseries/offsets/test_business_month.py b/pandas/tests/tseries/offsets/test_business_month.py
index 9f7fb990d238a..a14451e60aa89 100644
--- a/pandas/tests/tseries/offsets/test_business_month.py
+++ b/pandas/tests/tseries/offsets/test_business_month.py
@@ -31,7 +31,7 @@
)
def test_apply_index(cls, n):
offset = cls(n=n)
- rng = pd.date_range(start="1/1/2000", periods=100000, freq="T")
+ rng = pd.date_range(start="1/1/2000", periods=100000, freq="min")
ser = pd.Series(rng)
res = rng + offset
diff --git a/pandas/tests/tseries/offsets/test_index.py b/pandas/tests/tseries/offsets/test_index.py
index ad3478b319898..7a62944556d11 100644
--- a/pandas/tests/tseries/offsets/test_index.py
+++ b/pandas/tests/tseries/offsets/test_index.py
@@ -44,7 +44,7 @@
)
def test_apply_index(cls, n):
offset = cls(n=n)
- rng = date_range(start="1/1/2000", periods=100000, freq="T")
+ rng = date_range(start="1/1/2000", periods=100000, freq="min")
ser = Series(rng)
res = rng + offset
diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py
index 5139331bebaf7..29215101a84e0 100644
--- a/pandas/tests/tseries/offsets/test_offsets.py
+++ b/pandas/tests/tseries/offsets/test_offsets.py
@@ -811,7 +811,7 @@ def test_alias_equality(self):
assert k == v.copy()
def test_rule_code(self):
- lst = ["M", "MS", "BM", "BMS", "D", "B", "H", "T", "S", "L", "U"]
+ lst = ["M", "MS", "BM", "BMS", "D", "B", "H", "min", "s", "ms", "us"]
for k in lst:
assert k == _get_offset(k).rule_code
# should be cached - this is kind of an internals test...
diff --git a/pandas/tests/tslibs/test_period_asfreq.py b/pandas/tests/tslibs/test_period_asfreq.py
index 7c9047b3e7c60..49cb1af9406fe 100644
--- a/pandas/tests/tslibs/test_period_asfreq.py
+++ b/pandas/tests/tslibs/test_period_asfreq.py
@@ -25,26 +25,26 @@ def get_freq_code(freqstr: str) -> int:
"freq1,freq2,expected",
[
("D", "H", 24),
- ("D", "T", 1440),
- ("D", "S", 86400),
- ("D", "L", 86400000),
- ("D", "U", 86400000000),
- ("D", "N", 86400000000000),
- ("H", "T", 60),
- ("H", "S", 3600),
- ("H", "L", 3600000),
- ("H", "U", 3600000000),
- ("H", "N", 3600000000000),
- ("T", "S", 60),
- ("T", "L", 60000),
- ("T", "U", 60000000),
- ("T", "N", 60000000000),
- ("S", "L", 1000),
- ("S", "U", 1000000),
- ("S", "N", 1000000000),
- ("L", "U", 1000),
- ("L", "N", 1000000),
- ("U", "N", 1000),
+ ("D", "min", 1440),
+ ("D", "s", 86400),
+ ("D", "ms", 86400000),
+ ("D", "us", 86400000000),
+ ("D", "ns", 86400000000000),
+ ("H", "min", 60),
+ ("H", "s", 3600),
+ ("H", "ms", 3600000),
+ ("H", "us", 3600000000),
+ ("H", "ns", 3600000000000),
+ ("min", "s", 60),
+ ("min", "ms", 60000),
+ ("min", "us", 60000000),
+ ("min", "ns", 60000000000),
+ ("s", "ms", 1000),
+ ("s", "us", 1000000),
+ ("s", "ns", 1000000000),
+ ("ms", "us", 1000),
+ ("ms", "ns", 1000000),
+ ("us", "ns", 1000),
],
)
def test_intra_day_conversion_factors(freq1, freq2, expected):
diff --git a/pandas/tests/tslibs/test_to_offset.py b/pandas/tests/tslibs/test_to_offset.py
index 27ddbb82f49a9..bc3e06646b235 100644
--- a/pandas/tests/tslibs/test_to_offset.py
+++ b/pandas/tests/tslibs/test_to_offset.py
@@ -20,12 +20,12 @@
("2h 60min", offsets.Hour(3)),
("2h 20.5min", offsets.Second(8430)),
("1.5min", offsets.Second(90)),
- ("0.5S", offsets.Milli(500)),
- ("15l500u", offsets.Micro(15500)),
- ("10s75L", offsets.Milli(10075)),
+ ("0.5s", offsets.Milli(500)),
+ ("15ms500us", offsets.Micro(15500)),
+ ("10s75ms", offsets.Milli(10075)),
("1s0.25ms", offsets.Micro(1000250)),
- ("1s0.25L", offsets.Micro(1000250)),
- ("2800N", offsets.Nano(2800)),
+ ("1s0.25ms", offsets.Micro(1000250)),
+ ("2800ns", offsets.Nano(2800)),
("2SM", offsets.SemiMonthEnd(2)),
("2SM-16", offsets.SemiMonthEnd(2, day_of_month=16)),
("2SMS-14", offsets.SemiMonthBegin(2, day_of_month=14)),
@@ -38,7 +38,7 @@ def test_to_offset(freq_input, expected):
@pytest.mark.parametrize(
- "freqstr,expected", [("-1S", -1), ("-2SM", -2), ("-1SMS", -1), ("-5min10s", -310)]
+ "freqstr,expected", [("-1s", -1), ("-2SM", -2), ("-1SMS", -1), ("-5min10s", -310)]
)
def test_to_offset_negative(freqstr, expected):
result = to_offset(freqstr)
@@ -49,12 +49,12 @@ def test_to_offset_negative(freqstr, expected):
"freqstr",
[
"2h20m",
- "U1",
- "-U",
- "3U1",
- "-2-3U",
+ "us1",
+ "-us",
+ "3us1",
+ "-2-3us",
"-2D:3H",
- "1.5.0S",
+ "1.5.0s",
"2SMS-15-15",
"2SMS-15D",
"100foo",
@@ -119,7 +119,7 @@ def test_to_offset_whitespace(freqstr, expected):
@pytest.mark.parametrize(
- "freqstr,expected", [("00H 00T 01S", 1), ("-00H 03T 14S", -194)]
+ "freqstr,expected", [("00H 00min 01s", 1), ("-00H 03min 14s", -194)]
)
def test_to_offset_leading_zero(freqstr, expected):
result = to_offset(freqstr)
diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py
index 70b7534b296f3..f4d903dc19fb7 100644
--- a/pandas/tests/window/test_rolling.py
+++ b/pandas/tests/window/test_rolling.py
@@ -920,7 +920,7 @@ def test_rolling_numerical_accuracy_kahan_mean(add):
result = (
df.resample("1s").ffill().rolling("3s", closed="left", min_periods=3).mean()
)
- dates = date_range("19700101 09:00:00", periods=7, freq="S")
+ dates = date_range("19700101 09:00:00", periods=7, freq="s")
expected = DataFrame(
{
"A": [
@@ -1065,11 +1065,13 @@ def test_rolling_on_df_transposed():
("index", "window"),
[
(
- period_range(start="2020-01-01 08:00", end="2020-01-01 08:08", freq="T"),
- "2T",
+ period_range(start="2020-01-01 08:00", end="2020-01-01 08:08", freq="min"),
+ "2min",
),
(
- period_range(start="2020-01-01 08:00", end="2020-01-01 12:00", freq="30T"),
+ period_range(
+ start="2020-01-01 08:00", end="2020-01-01 12:00", freq="30min"
+ ),
"1h",
),
],
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index caa34a067ac69..af88bd7b2a6d6 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -68,11 +68,11 @@
"MS": "M",
"D": "D",
"B": "B",
- "T": "T",
- "S": "S",
- "L": "L",
- "U": "U",
- "N": "N",
+ "min": "min",
+ "s": "s",
+ "ms": "ms",
+ "us": "us",
+ "ns": "ns",
"H": "H",
"Q": "Q",
"A": "A",
@@ -271,19 +271,19 @@ def get_freq(self) -> str | None:
return _maybe_add_count("H", delta / pph)
elif _is_multiple(delta, ppm):
# Minutes
- return _maybe_add_count("T", delta / ppm)
+ return _maybe_add_count("min", delta / ppm)
elif _is_multiple(delta, pps):
# Seconds
- return _maybe_add_count("S", delta / pps)
+ return _maybe_add_count("s", delta / pps)
elif _is_multiple(delta, (pps // 1000)):
# Milliseconds
- return _maybe_add_count("L", delta / (pps // 1000))
+ return _maybe_add_count("ms", delta / (pps // 1000))
elif _is_multiple(delta, (pps // 1_000_000)):
# Microseconds
- return _maybe_add_count("U", delta / (pps // 1_000_000))
+ return _maybe_add_count("us", delta / (pps // 1_000_000))
else:
# Nanoseconds
- return _maybe_add_count("N", delta)
+ return _maybe_add_count("ns", delta)
@cache_readonly
def day_deltas(self) -> list[int]:
@@ -472,7 +472,6 @@ def is_subperiod(source, target) -> bool:
-------
bool
"""
-
if target is None or source is None:
return False
source = _maybe_coerce_freq(source)
@@ -483,31 +482,31 @@ def is_subperiod(source, target) -> bool:
return _quarter_months_conform(
get_rule_month(source), get_rule_month(target)
)
- return source in {"D", "C", "B", "M", "H", "T", "S", "L", "U", "N"}
+ return source in {"D", "C", "B", "M", "H", "min", "s", "ms", "us", "ns"}
elif _is_quarterly(target):
- return source in {"D", "C", "B", "M", "H", "T", "S", "L", "U", "N"}
+ return source in {"D", "C", "B", "M", "H", "min", "s", "ms", "us", "ns"}
elif _is_monthly(target):
- return source in {"D", "C", "B", "H", "T", "S", "L", "U", "N"}
+ return source in {"D", "C", "B", "H", "min", "s", "ms", "us", "ns"}
elif _is_weekly(target):
- return source in {target, "D", "C", "B", "H", "T", "S", "L", "U", "N"}
+ return source in {target, "D", "C", "B", "H", "min", "s", "ms", "us", "ns"}
elif target == "B":
- return source in {"B", "H", "T", "S", "L", "U", "N"}
+ return source in {"B", "H", "min", "s", "ms", "us", "ns"}
elif target == "C":
- return source in {"C", "H", "T", "S", "L", "U", "N"}
+ return source in {"C", "H", "min", "s", "ms", "us", "ns"}
elif target == "D":
- return source in {"D", "H", "T", "S", "L", "U", "N"}
+ return source in {"D", "H", "min", "s", "ms", "us", "ns"}
elif target == "H":
- return source in {"H", "T", "S", "L", "U", "N"}
- elif target == "T":
- return source in {"T", "S", "L", "U", "N"}
- elif target == "S":
- return source in {"S", "L", "U", "N"}
- elif target == "L":
- return source in {"L", "U", "N"}
- elif target == "U":
- return source in {"U", "N"}
- elif target == "N":
- return source in {"N"}
+ return source in {"H", "min", "s", "ms", "us", "ns"}
+ elif target == "min":
+ return source in {"min", "s", "ms", "us", "ns"}
+ elif target == "s":
+ return source in {"s", "ms", "us", "ns"}
+ elif target == "ms":
+ return source in {"ms", "us", "ns"}
+ elif target == "us":
+ return source in {"us", "ns"}
+ elif target == "ns":
+ return source in {"ns"}
else:
return False
@@ -541,31 +540,31 @@ def is_superperiod(source, target) -> bool:
smonth = get_rule_month(source)
tmonth = get_rule_month(target)
return _quarter_months_conform(smonth, tmonth)
- return target in {"D", "C", "B", "M", "H", "T", "S", "L", "U", "N"}
+ return target in {"D", "C", "B", "M", "H", "min", "s", "ms", "us", "ns"}
elif _is_quarterly(source):
- return target in {"D", "C", "B", "M", "H", "T", "S", "L", "U", "N"}
+ return target in {"D", "C", "B", "M", "H", "min", "s", "ms", "us", "ns"}
elif _is_monthly(source):
- return target in {"D", "C", "B", "H", "T", "S", "L", "U", "N"}
+ return target in {"D", "C", "B", "H", "min", "s", "ms", "us", "ns"}
elif _is_weekly(source):
- return target in {source, "D", "C", "B", "H", "T", "S", "L", "U", "N"}
+ return target in {source, "D", "C", "B", "H", "min", "s", "ms", "us", "ns"}
elif source == "B":
- return target in {"D", "C", "B", "H", "T", "S", "L", "U", "N"}
+ return target in {"D", "C", "B", "H", "min", "s", "ms", "us", "ns"}
elif source == "C":
- return target in {"D", "C", "B", "H", "T", "S", "L", "U", "N"}
+ return target in {"D", "C", "B", "H", "min", "s", "ms", "us", "ns"}
elif source == "D":
- return target in {"D", "C", "B", "H", "T", "S", "L", "U", "N"}
+ return target in {"D", "C", "B", "H", "min", "s", "ms", "us", "ns"}
elif source == "H":
- return target in {"H", "T", "S", "L", "U", "N"}
- elif source == "T":
- return target in {"T", "S", "L", "U", "N"}
- elif source == "S":
- return target in {"S", "L", "U", "N"}
- elif source == "L":
- return target in {"L", "U", "N"}
- elif source == "U":
- return target in {"U", "N"}
- elif source == "N":
- return target in {"N"}
+ return target in {"H", "min", "s", "ms", "us", "ns"}
+ elif source == "min":
+ return target in {"min", "s", "ms", "us", "ns"}
+ elif source == "s":
+ return target in {"s", "ms", "us", "ns"}
+ elif source == "ms":
+ return target in {"ms", "us", "ns"}
+ elif source == "us":
+ return target in {"us", "ns"}
+ elif source == "ns":
+ return target in {"ns"}
else:
return False
@@ -586,7 +585,10 @@ def _maybe_coerce_freq(code) -> str:
assert code is not None
if isinstance(code, DateOffset):
code = code.rule_code
- return code.upper()
+ if code in {"min", "s", "ms", "us", "ns"}:
+ return code
+ else:
+ return code.upper()
def _quarter_months_conform(source: str, target: str) -> bool:
diff --git a/pandas/util/__init__.py b/pandas/util/__init__.py
index 8fe928ed6c5cf..82b3aa56c653c 100644
--- a/pandas/util/__init__.py
+++ b/pandas/util/__init__.py
@@ -23,3 +23,7 @@ def __getattr__(key: str):
return cache_readonly
raise AttributeError(f"module 'pandas.util' has no attribute '{key}'")
+
+
+def capitalize_first_letter(s):
+ return s[:1].upper() + s[1:]
| xref #52536
Deprecated codes `'T'` and `'L'` in `_attrname_to_abbrevs`/`_abbrev_to_attrnames`, and added a test for the `FutureWarning`.
EDIT:
- Deprecated Timedelta units `'T', 'L', 'U', 'N'` in favour of `'min', 'ms', 'us', 'ns'`.
- Deprecated aliases `'T', 'L', 'U', 'N'` for time series frequencies in favour of `'min', 'ms', 'us', 'ns'`.
- Deprecated resolutions `'T', 'L', 'U', 'N'` for Timedelta.resolution_string in favour of `'min', 'ms', 'us', 'ns'`. | https://api.github.com/repos/pandas-dev/pandas/pulls/54061 | 2023-07-10T08:40:33Z | 2023-08-29T15:10:53Z | 2023-08-29T15:10:53Z | 2023-08-29T15:10:53Z |
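The alias migration spelled out in the diff above can be summarized as a small lookup table. This is an illustrative sketch only — `modernize_freq` is not a pandas API; pandas itself emits a `FutureWarning` for the old single-letter codes rather than silently translating them:

```python
# Deprecated single-letter frequency codes and their replacements,
# taken from the T/S/L/U/N -> min/s/ms/us/ns changes in the diff.
DEPRECATED_FREQ_ALIASES = {
    "T": "min",  # minute
    "S": "s",    # second
    "L": "ms",   # millisecond
    "U": "us",   # microsecond
    "N": "ns",   # nanosecond
}


def modernize_freq(freqstr: str) -> str:
    """Return the non-deprecated spelling of a frequency alias (hypothetical helper)."""
    return DEPRECATED_FREQ_ALIASES.get(freqstr, freqstr)


print(modernize_freq("T"))  # min
print(modernize_freq("D"))  # D (not deprecated, unchanged)
```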
DOC: Add security policy | diff --git a/doc/source/development/policies.rst b/doc/source/development/policies.rst
index d079cc59b0ca5..8465820452353 100644
--- a/doc/source/development/policies.rst
+++ b/doc/source/development/policies.rst
@@ -51,5 +51,9 @@ Python support
pandas mirrors the `NumPy guidelines for Python support <https://numpy.org/neps/nep-0029-deprecation_policy.html#implementation>`__.
+Security policy
+~~~~~~~~~~~~~~~
+
+To report a security vulnerability to pandas, please go to https://github.com/pandas-dev/pandas/security/policy and see the instructions there.
.. _SemVer: https://semver.org
| - [ ] closes #8545 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54060 | 2023-07-10T07:16:41Z | 2023-07-11T01:50:33Z | 2023-07-11T01:50:33Z | 2023-07-11T01:54:21Z |
REF: share block methods | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 4480a1a0c6746..f34a9f7590a5e 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -156,10 +156,34 @@ class Block(PandasObject):
__slots__ = ()
is_numeric = False
- is_object = False
- is_extension = False
- _can_consolidate = True
- _validate_ndim = True
+
+ @final
+ @cache_readonly
+ def _validate_ndim(self) -> bool:
+ """
+ We validate dimension for blocks that can hold 2D values, which for now
+ means numpy dtypes or DatetimeTZDtype.
+ """
+ dtype = self.dtype
+ return not isinstance(dtype, ExtensionDtype) or isinstance(
+ dtype, DatetimeTZDtype
+ )
+
+ @final
+ @cache_readonly
+ def is_object(self) -> bool:
+ return self.values.dtype == _dtype_obj
+
+ @final
+ @cache_readonly
+ def is_extension(self) -> bool:
+ return not lib.is_np_dtype(self.values.dtype)
+
+ @final
+ @cache_readonly
+ def _can_consolidate(self) -> bool:
+ # We _could_ consolidate for DatetimeTZDtype but don't for now.
+ return not self.is_extension
@final
@cache_readonly
@@ -1905,10 +1929,6 @@ class ExtensionBlock(libinternals.Block, EABackedBlock):
ExtensionArrays are limited to 1-D.
"""
- _can_consolidate = False
- _validate_ndim = False
- is_extension = True
-
values: ExtensionArray
def fillna(
@@ -2172,10 +2192,6 @@ def is_numeric(self) -> bool: # type: ignore[override]
return kind in "fciub"
- @cache_readonly
- def is_object(self) -> bool: # type: ignore[override]
- return self.values.dtype.kind == "O"
-
class NumericBlock(NumpyBlock):
# this Block type is kept for backwards-compatibility
@@ -2196,12 +2212,6 @@ class NDArrayBackedExtensionBlock(libinternals.NDArrayBackedBlock, EABackedBlock
values: NDArrayBackedExtensionArray
- # error: Signature of "is_extension" incompatible with supertype "Block"
- @cache_readonly
- def is_extension(self) -> bool: # type: ignore[override]
- # i.e. datetime64tz, PeriodDtype
- return not isinstance(self.dtype, np.dtype)
-
@property
def is_view(self) -> bool:
"""return a boolean if I am possibly a view"""
@@ -2239,9 +2249,6 @@ class DatetimeTZBlock(DatetimeLikeBlock):
values: DatetimeArray
__slots__ = ()
- is_extension = True
- _validate_ndim = True
- _can_consolidate = False
# Don't use values_for_json from DatetimeLikeBlock since it is
# an invalid optimization here(drop the tz)
diff --git a/pandas/tests/extension/test_external_block.py b/pandas/tests/extension/test_external_block.py
deleted file mode 100644
index 1b5b46c6a01bb..0000000000000
--- a/pandas/tests/extension/test_external_block.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import numpy as np
-import pytest
-
-from pandas._libs.internals import BlockPlacement
-import pandas.util._test_decorators as td
-
-import pandas as pd
-from pandas.core.internals import BlockManager
-from pandas.core.internals.blocks import ExtensionBlock
-
-pytestmark = td.skip_array_manager_invalid_test
-
-
-class CustomBlock(ExtensionBlock):
- _holder = np.ndarray
-
- # Cannot override final attribute "_can_hold_na"
- @property # type: ignore[misc]
- def _can_hold_na(self) -> bool:
- return False
-
-
-@pytest.fixture
-def df():
- df1 = pd.DataFrame({"a": [1, 2, 3]})
- blocks = df1._mgr.blocks
- values = np.arange(3, dtype="int64")
- bp = BlockPlacement(slice(1, 2))
- custom_block = CustomBlock(values, placement=bp, ndim=2)
- blocks = blocks + (custom_block,)
- block_manager = BlockManager(blocks, [pd.Index(["a", "b"]), df1.index])
- return pd.DataFrame(block_manager)
-
-
-def test_concat_axis1(df):
- # GH17954
- df2 = pd.DataFrame({"c": [0.1, 0.2, 0.3]})
- res = pd.concat([df, df2], axis=1)
- assert isinstance(res._mgr.blocks[1], CustomBlock)
| 1) Move towards having fewer Block subclasses
2) Hopefully this makes it a little clearer why the different cases behave the way they do. | https://api.github.com/repos/pandas-dev/pandas/pulls/54058 | 2023-07-09T20:18:29Z | 2023-07-10T22:21:56Z | 2023-07-10T22:21:56Z | 2023-07-10T22:27:58Z |
Backport PR #54051 on branch 2.0.x (DOC: Add whatsnew for 2.0.4) | diff --git a/doc/source/whatsnew/index.rst b/doc/source/whatsnew/index.rst
index 3905c320be023..018f341ad4ee4 100644
--- a/doc/source/whatsnew/index.rst
+++ b/doc/source/whatsnew/index.rst
@@ -16,6 +16,7 @@ Version 2.0
.. toctree::
:maxdepth: 2
+ v2.0.4
v2.0.3
v2.0.2
v2.0.1
diff --git a/doc/source/whatsnew/v2.0.3.rst b/doc/source/whatsnew/v2.0.3.rst
index 3e2dc1b92c779..26e34e0c823ce 100644
--- a/doc/source/whatsnew/v2.0.3.rst
+++ b/doc/source/whatsnew/v2.0.3.rst
@@ -43,4 +43,4 @@ Other
Contributors
~~~~~~~~~~~~
-.. contributors:: v2.0.2..v2.0.3|HEAD
+.. contributors:: v2.0.2..v2.0.3
diff --git a/doc/source/whatsnew/v2.0.4.rst b/doc/source/whatsnew/v2.0.4.rst
new file mode 100644
index 0000000000000..36d798ea68c30
--- /dev/null
+++ b/doc/source/whatsnew/v2.0.4.rst
@@ -0,0 +1,41 @@
+.. _whatsnew_204:
+
+What's new in 2.0.4 (August ??, 2023)
+---------------------------------------
+
+These are the changes in pandas 2.0.4. See :ref:`release` for a full changelog
+including other versions of pandas.
+
+{{ header }}
+
+.. ---------------------------------------------------------------------------
+.. _whatsnew_204.regressions:
+
+Fixed regressions
+~~~~~~~~~~~~~~~~~
+-
+-
+
+.. ---------------------------------------------------------------------------
+.. _whatsnew_204.bug_fixes:
+
+Bug fixes
+~~~~~~~~~
+-
+-
+
+.. ---------------------------------------------------------------------------
+.. _whatsnew_204.other:
+
+Other
+~~~~~
+-
+-
+
+.. ---------------------------------------------------------------------------
+.. _whatsnew_204.contributors:
+
+Contributors
+~~~~~~~~~~~~
+
+.. contributors:: v2.0.3..v2.0.4|HEAD
| Backport PR #54051: DOC: Add whatsnew for 2.0.4 | https://api.github.com/repos/pandas-dev/pandas/pulls/54053 | 2023-07-09T03:11:01Z | 2023-07-09T05:23:41Z | 2023-07-09T05:23:41Z | 2023-07-09T05:23:41Z |
BLD: Shrink sdist/wheel sizes | diff --git a/.gitattributes b/.gitattributes
index 736fa09d070fe..19c6fd2fd1d47 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -14,3 +14,71 @@
*.xls binary
*.xlsx binary
pandas/_version.py export-subst
+
+
+*.bz2 export-ignore
+*.csv export-ignore
+*.data export-ignore
+*.dta export-ignore
+*.feather export-ignore
+*.tar export-ignore
+*.gz export-ignore
+*.h5 export-ignore
+*.html export-ignore
+*.json export-ignore
+*.jsonl export-ignore
+*.kml export-ignore
+*.msgpack export-ignore
+*.pdf export-ignore
+*.parquet export-ignore
+*.pickle export-ignore
+*.pkl export-ignore
+*.png export-ignore
+*.pptx export-ignore
+*.ods export-ignore
+*.odt export-ignore
+*.orc export-ignore
+*.sas7bdat export-ignore
+*.sav export-ignore
+*.so export-ignore
+*.txt export-ignore
+*.xls export-ignore
+*.xlsb export-ignore
+*.xlsm export-ignore
+*.xlsx export-ignore
+*.xpt export-ignore
+*.cpt export-ignore
+*.xml export-ignore
+*.xsl export-ignore
+*.xz export-ignore
+*.zip export-ignore
+*.zst export-ignore
+*~ export-ignore
+.DS_Store export-ignore
+.git* export-ignore
+
+*.py[ocd] export-ignore
+*.pxi export-ignore
+
+# Ignoring stuff from the top level
+.circleci export-ignore
+.github export-ignore
+asv_bench export-ignore
+ci export-ignore
+doc export-ignore
+gitpod export-ignore
+MANIFEST.in export-ignore
+scripts export-ignore
+typings export-ignore
+web export-ignore
+CITATION.cff export-ignore
+codecov.yml export-ignore
+Dockerfile export-ignore
+environment.yml export-ignore
+setup.py export-ignore
+
+
+# GH 39321
+# csv_dir_path fixture checks the existence of the directory
+# exclude the whole directory to avoid running related tests in sdist
+pandas/tests/io/parser/data export-ignore
diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml
index 759cacb299550..77ab152ce712e 100644
--- a/.github/workflows/wheels.yml
+++ b/.github/workflows/wheels.yml
@@ -104,6 +104,8 @@ jobs:
with:
fetch-depth: 0
+ # We need to build wheels from the sdist since the sdist
+ # removes unnecessary files from the release
- name: Download sdist
uses: actions/download-artifact@v3
with:
@@ -115,8 +117,8 @@ jobs:
# TODO: Build wheels from sdist again
# There's some sort of weird race condition?
# within Github that makes the sdist be missing files
- #with:
- # package-dir: ./dist/${{ needs.build_sdist.outputs.sdist_file }}
+ with:
+ package-dir: ./dist/${{ needs.build_sdist.outputs.sdist_file }}
env:
CIBW_PRERELEASE_PYTHONS: True
CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }}
@@ -144,7 +146,7 @@ jobs:
$TST_CMD = @"
python -m pip install pytz six numpy python-dateutil tzdata>=2022.1 hypothesis>=6.46.1 pytest>=7.3.2 pytest-xdist>=2.2.0 pytest-asyncio>=0.17;
python -m pip install --find-links=pandas\wheelhouse --no-index pandas;
- python -c `'import pandas as pd; pd.test()`';
+ python -c `'import pandas as pd; pd.test(extra_args=[\"`\"--no-strict-data-files`\"\", \"`\"-m not clipboard and not single_cpu and not slow and not network and not db`\"\"])`';
"@
docker pull python:${{ matrix.python[1] }}-windowsservercore
docker run --env PANDAS_CI='1' -v ${PWD}:C:\pandas python:${{ matrix.python[1] }}-windowsservercore powershell -Command $TST_CMD
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 45fe755568d76..5449a7917b032 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -102,9 +102,9 @@
def pytest_addoption(parser) -> None:
parser.addoption(
- "--strict-data-files",
- action="store_true",
- help="Fail if a test is skipped for missing data file.",
+ "--no-strict-data-files",
+ action="store_false",
+ help="Don't fail if a test is skipped for missing data file.",
)
@@ -1172,9 +1172,9 @@ def all_numeric_accumulations(request):
@pytest.fixture
def strict_data_files(pytestconfig):
"""
- Returns the configuration for the test setting `--strict-data-files`.
+ Returns the configuration for the test setting `--no-strict-data-files`.
"""
- return pytestconfig.getoption("--strict-data-files")
+ return pytestconfig.getoption("--no-strict-data-files")
@pytest.fixture
@@ -1204,7 +1204,7 @@ def datapath(strict_data_files: str) -> Callable[..., str]:
Raises
------
ValueError
- If the path doesn't exist and the --strict-data-files option is set.
+ If the path doesn't exist and the --no-strict-data-files option is not set.
"""
BASE_PATH = os.path.join(os.path.dirname(__file__), "tests")
@@ -1213,7 +1213,7 @@ def deco(*args):
if not os.path.exists(path):
if strict_data_files:
raise ValueError(
- f"Could not find file {path} and --strict-data-files is set."
+ f"Could not find file {path} and --no-strict-data-files is not set."
)
pytest.skip(f"Could not find {path}.")
return path
diff --git a/pandas/tests/io/xml/conftest.py b/pandas/tests/io/xml/conftest.py
index 510e22fb32e77..c88616eb78029 100644
--- a/pandas/tests/io/xml/conftest.py
+++ b/pandas/tests/io/xml/conftest.py
@@ -2,35 +2,35 @@
@pytest.fixture
-def xml_data_path(tests_io_data_path):
+def xml_data_path(tests_io_data_path, datapath):
return tests_io_data_path / "xml"
@pytest.fixture
-def xml_books(xml_data_path):
- return xml_data_path / "books.xml"
+def xml_books(xml_data_path, datapath):
+ return datapath(xml_data_path / "books.xml")
@pytest.fixture
-def xml_doc_ch_utf(xml_data_path):
- return xml_data_path / "doc_ch_utf.xml"
+def xml_doc_ch_utf(xml_data_path, datapath):
+ return datapath(xml_data_path / "doc_ch_utf.xml")
@pytest.fixture
-def xml_baby_names(xml_data_path):
- return xml_data_path / "baby_names.xml"
+def xml_baby_names(xml_data_path, datapath):
+ return datapath(xml_data_path / "baby_names.xml")
@pytest.fixture
-def kml_cta_rail_lines(xml_data_path):
- return xml_data_path / "cta_rail_lines.kml"
+def kml_cta_rail_lines(xml_data_path, datapath):
+ return datapath(xml_data_path / "cta_rail_lines.kml")
@pytest.fixture
-def xsl_flatten_doc(xml_data_path):
- return xml_data_path / "flatten_doc.xsl"
+def xsl_flatten_doc(xml_data_path, datapath):
+ return datapath(xml_data_path / "flatten_doc.xsl")
@pytest.fixture
-def xsl_row_field_output(xml_data_path):
- return xml_data_path / "row_field_output.xsl"
+def xsl_row_field_output(xml_data_path, datapath):
+ return datapath(xml_data_path / "row_field_output.xsl")
diff --git a/pyproject.toml b/pyproject.toml
index f5e903d0b5c8c..6e82b200bb1c7 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -153,8 +153,8 @@ environment = {LDFLAGS="-Wl,--strip-all"}
test-requires = "hypothesis>=6.46.1 pytest>=7.3.2 pytest-xdist>=2.2.0 pytest-asyncio>=0.17"
test-command = """
PANDAS_CI='1' python -c 'import pandas as pd; \
- pd.test(extra_args=["-m not clipboard and not single_cpu and not slow and not network and not db", "-n 2"]); \
- pd.test(extra_args=["-m not clipboard and single_cpu and not slow and not network and not db"]);' \
+ pd.test(extra_args=["-m not clipboard and not single_cpu and not slow and not network and not db", "-n 2", "--no-strict-data-files"]); \
+ pd.test(extra_args=["-m not clipboard and single_cpu and not slow and not network and not db", "--no-strict-data-files"]);' \
"""
[tool.cibuildwheel.macos]
@@ -471,7 +471,7 @@ disable = [
[tool.pytest.ini_options]
# sync minversion with pyproject.toml & install.rst
minversion = "7.3.2"
-addopts = "--strict-data-files --strict-markers --strict-config --capture=no --durations=30 --junitxml=test-data.xml"
+addopts = "--strict-markers --strict-config --capture=no --durations=30 --junitxml=test-data.xml"
empty_parameter_set_mark = "fail_at_collect"
xfail_strict = true
testpaths = "pandas"
| - [ ] closes #53224 (Replace xxxx with the GitHub issue number)
- [ ] closes #50302 (this should also fix that)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54052 | 2023-07-09T03:09:26Z | 2023-07-31T18:11:43Z | 2023-07-31T18:11:43Z | 2023-07-31T18:13:16Z |
DOC: Add whatsnew for 2.0.4 | diff --git a/doc/source/whatsnew/index.rst b/doc/source/whatsnew/index.rst
index fc225d0f44497..f22fd2a79b50e 100644
--- a/doc/source/whatsnew/index.rst
+++ b/doc/source/whatsnew/index.rst
@@ -24,6 +24,7 @@ Version 2.0
.. toctree::
:maxdepth: 2
+ v2.0.4
v2.0.3
v2.0.2
v2.0.1
diff --git a/doc/source/whatsnew/v2.0.3.rst b/doc/source/whatsnew/v2.0.3.rst
index 3e2dc1b92c779..26e34e0c823ce 100644
--- a/doc/source/whatsnew/v2.0.3.rst
+++ b/doc/source/whatsnew/v2.0.3.rst
@@ -43,4 +43,4 @@ Other
Contributors
~~~~~~~~~~~~
-.. contributors:: v2.0.2..v2.0.3|HEAD
+.. contributors:: v2.0.2..v2.0.3
diff --git a/doc/source/whatsnew/v2.0.4.rst b/doc/source/whatsnew/v2.0.4.rst
new file mode 100644
index 0000000000000..36d798ea68c30
--- /dev/null
+++ b/doc/source/whatsnew/v2.0.4.rst
@@ -0,0 +1,41 @@
+.. _whatsnew_204:
+
+What's new in 2.0.4 (August ??, 2023)
+---------------------------------------
+
+These are the changes in pandas 2.0.4. See :ref:`release` for a full changelog
+including other versions of pandas.
+
+{{ header }}
+
+.. ---------------------------------------------------------------------------
+.. _whatsnew_204.regressions:
+
+Fixed regressions
+~~~~~~~~~~~~~~~~~
+-
+-
+
+.. ---------------------------------------------------------------------------
+.. _whatsnew_204.bug_fixes:
+
+Bug fixes
+~~~~~~~~~
+-
+-
+
+.. ---------------------------------------------------------------------------
+.. _whatsnew_204.other:
+
+Other
+~~~~~
+-
+-
+
+.. ---------------------------------------------------------------------------
+.. _whatsnew_204.contributors:
+
+Contributors
+~~~~~~~~~~~~
+
+.. contributors:: v2.0.3..v2.0.4|HEAD
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54051 | 2023-07-08T22:40:55Z | 2023-07-09T03:09:59Z | 2023-07-09T03:09:59Z | 2023-07-09T03:10:00Z |
CI: Use PYTEST_WORKERS=0 instead of 1 for single cpu runtime | diff --git a/.github/workflows/unit-tests.yml b/.github/workflows/unit-tests.yml
index 600986d3297a9..91c30c4c0b333 100644
--- a/.github/workflows/unit-tests.yml
+++ b/.github/workflows/unit-tests.yml
@@ -173,7 +173,7 @@ jobs:
uses: ./.github/actions/run-tests
env:
PATTERN: 'single_cpu'
- PYTEST_WORKERS: 1
+ PYTEST_WORKERS: 0
if: ${{ matrix.pattern == '' && (always() && steps.build.outcome == 'success')}}
macos-windows:
@@ -193,8 +193,8 @@ jobs:
PANDAS_CI: 1
PYTEST_TARGET: pandas
PATTERN: "not slow and not db and not network and not single_cpu"
- # GH 47443: PYTEST_WORKERS > 1 crashes Windows builds with memory related errors
- PYTEST_WORKERS: ${{ matrix.os == 'macos-latest' && 'auto' || '1' }}
+ # GH 47443: PYTEST_WORKERS > 0 crashes Windows builds with memory related errors
+ PYTEST_WORKERS: ${{ matrix.os == 'macos-latest' && 'auto' || '0' }}
steps:
- name: Checkout
| It appears that `-n 1` with pytest-xdist still spawns a worker process and runs the tests on that single worker, while `-n 0` avoids the extra work of spawning a worker entirely and still runs the tests in a "single cpu" runtime | https://api.github.com/repos/pandas-dev/pandas/pulls/54049 | 2023-07-07T22:24:31Z | 2023-07-10T18:10:05Z | 2023-07-10T18:10:05Z | 2023-07-10T18:10:09Z
STY: Consolidate & add pre-commit checks | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index f8e7dfe71115d..9932d73bbcbdc 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -46,14 +46,22 @@ repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
+ - id: check-ast
+ - id: check-case-conflict
+ - id: check-toml
+ - id: check-xml
+ - id: check-yaml
+ exclude: ^ci/meta.yaml$
- id: debug-statements
- id: end-of-file-fixer
exclude: \.txt$
- stages: [commit, merge-commit, push, prepare-commit-msg, commit-msg,
- post-checkout, post-commit, post-merge, post-rewrite]
+ - id: mixed-line-ending
+ args: [--fix=auto]
+ exclude: ^pandas/tests/io/parser/data/utf16_ex.txt$
+ - id: fix-byte-order-marker
+ - id: fix-encoding-pragma
+ args: [--remove]
- id: trailing-whitespace
- stages: [commit, merge-commit, push, prepare-commit-msg, commit-msg,
- post-checkout, post-commit, post-merge, post-rewrite]
- repo: https://github.com/cpplint/cpplint
rev: 1.6.1
hooks:
@@ -98,6 +106,8 @@ repos:
- repo: https://github.com/pre-commit/pygrep-hooks
rev: v1.10.0
hooks:
+ - id: python-check-blanket-noqa
+ - id: python-check-blanket-type-ignore
- id: rst-backticks
- id: rst-directive-colons
types: [text] # overwrite types: [rst]
@@ -154,31 +164,17 @@ repos:
exclude: ^pandas/core/interchange/
language: python
types: [python]
- - id: no-os-remove
- name: Check code for instances of os.remove
- entry: os\.remove
- language: pygrep
- types: [python]
- files: ^pandas/tests/
- exclude: |
- (?x)^
- pandas/tests/io/pytables/test_store\.py$
- id: unwanted-patterns
name: Unwanted patterns
language: pygrep
entry: |
(?x)
- # outdated annotation syntax, missing error codes
+ # outdated annotation syntax
\#\ type:\ (?!ignore)
- |\#\ type:\s?ignore(?!\[)
# foo._class__ instead of type(foo)
|\.__class__
- # np.bool/np.object instead of np.bool_/np.object_
- |np\.bool[^_8`]
- |np\.object[^_8`]
-
# Numpy
|from\ numpy\ import\ random
|from\ numpy\.random\ import
@@ -197,16 +193,8 @@ repos:
# builtin filter function
|(?<!def)[\(\s]filter\(
-
- # exec
- |[^a-zA-Z0-9_]exec\(
types_or: [python, cython, rst]
exclude: ^doc/source/development/code_style\.rst # contains examples of patterns to avoid
- - id: cython-casting
- name: Check Cython casting is `<type>obj`, not `<type> obj`
- language: pygrep
- entry: '[a-zA-Z0-9*]> '
- files: (\.pyx|\.pxi.in)$
- id: incorrect-backticks
name: Check for backticks incorrectly rendering because of missing spaces
language: pygrep
@@ -219,19 +207,6 @@ repos:
entry: 'np\.random\.seed'
files: ^asv_bench/benchmarks
exclude: ^asv_bench/benchmarks/pandas_vb_common\.py
- - id: np-testing-array-equal
- name: Check for usage of numpy testing or array_equal
- language: pygrep
- entry: '(numpy|np)(\.testing|\.array_equal)'
- files: ^pandas/tests/
- types: [python]
- - id: invalid-ea-testing
- name: Check for invalid EA testing
- language: pygrep
- entry: 'tm\.assert_(series|frame)_equal'
- files: ^pandas/tests/extension/base
- types: [python]
- exclude: ^pandas/tests/extension/base/base\.py
- id: unwanted-patterns-in-tests
name: Unwanted patterns in tests
language: pygrep
@@ -265,6 +240,9 @@ repos:
# pytest.warns (use tm.assert_produces_warning instead)
|pytest\.warns
+
+ # os.remove
+ |os\.remove
files: ^pandas/tests/
types_or: [python, cython, rst]
- id: unwanted-patterns-in-ea-tests
diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py
index 82330e1d63c9a..17b7eef011a0a 100644
--- a/pandas/tests/io/pytables/test_store.py
+++ b/pandas/tests/io/pytables/test_store.py
@@ -1,6 +1,6 @@
+import contextlib
import datetime as dt
import hashlib
-import os
import tempfile
import time
from warnings import (
@@ -27,7 +27,6 @@
from pandas.tests.io.pytables.common import (
_maybe_remove,
ensure_clean_store,
- safe_close,
)
from pandas.io.pytables import (
@@ -832,53 +831,34 @@ def reader(path):
tm.assert_frame_equal(df, result)
-def test_copy():
- with catch_warnings(record=True):
-
- def do_copy(f, new_f=None, keys=None, propindexes=True, **kwargs):
- if new_f is None:
- fd, new_f = tempfile.mkstemp()
-
- try:
- store = HDFStore(f, "r")
- tstore = store.copy(new_f, keys=keys, propindexes=propindexes, **kwargs)
+@pytest.mark.parametrize("propindexes", [True, False])
+def test_copy(propindexes):
+ df = tm.makeDataFrame()
- # check keys
- if keys is None:
+ with tm.ensure_clean() as path:
+ with HDFStore(path) as st:
+ st.append("df", df, data_columns=["A"])
+ with tempfile.NamedTemporaryFile() as new_f:
+ with HDFStore(path) as store:
+ with contextlib.closing(
+ store.copy(new_f.name, keys=None, propindexes=propindexes)
+ ) as tstore:
+ # check keys
keys = store.keys()
- assert set(keys) == set(tstore.keys())
-
- # check indices & nrows
- for k in tstore.keys():
- if tstore.get_storer(k).is_table:
- new_t = tstore.get_storer(k)
- orig_t = store.get_storer(k)
-
- assert orig_t.nrows == new_t.nrows
-
- # check propindixes
- if propindexes:
- for a in orig_t.axes:
- if a.is_indexed:
- assert new_t[a.name].is_indexed
-
- finally:
- safe_close(store)
- safe_close(tstore)
- try:
- os.close(fd)
- except (OSError, ValueError):
- pass
- os.remove(new_f)
-
- # new table
- df = tm.makeDataFrame()
-
- with tm.ensure_clean() as path:
- with HDFStore(path) as st:
- st.append("df", df, data_columns=["A"])
- do_copy(f=path)
- do_copy(f=path, propindexes=False)
+ assert set(keys) == set(tstore.keys())
+ # check indices & nrows
+ for k in tstore.keys():
+ if tstore.get_storer(k).is_table:
+ new_t = tstore.get_storer(k)
+ orig_t = store.get_storer(k)
+
+ assert orig_t.nrows == new_t.nrows
+
+ # check propindixes
+ if propindexes:
+ for a in orig_t.axes:
+ if a.is_indexed:
+ assert new_t[a.name].is_indexed
def test_duplicate_column_name(tmp_path, setup_path):
diff --git a/pyproject.toml b/pyproject.toml
index a2ae269c26667..4dfdb1aa927f5 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -239,6 +239,8 @@ select = [
"PGH",
# Ruff-specific rules
"RUF",
+ # flake8-bandit: exec-builtin
+ "S102"
]
ignore = [
| * Adds some easy/useful `pre-commit-hooks` and `pygrep-hooks`
* Consolidates & de-duplicates some redundant pre-commit `pygrep` checks | https://api.github.com/repos/pandas-dev/pandas/pulls/54047 | 2023-07-07T22:02:49Z | 2023-07-10T22:16:43Z | 2023-07-10T22:16:43Z | 2023-07-10T22:16:46Z |
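The rewritten `test_copy` in the diff above relies on `contextlib.closing` to guarantee that an object exposing a `.close()` method (but not the context-manager protocol) gets closed even when an assertion fails; a minimal sketch of the pattern with a toy class:

```python
import contextlib


class Store:
    """Toy stand-in for an HDFStore-like object with close() but no __exit__."""

    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True


store = Store()
with contextlib.closing(store):
    pass  # use the store; closing() calls store.close() on exit, even on error
print(store.closed)  # True
```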
BLD: Update Gitpod to use docker installation flow and pip/meson for setup | diff --git a/.gitpod.yml b/.gitpod.yml
index 0a5b5648994ae..9222639136a17 100644
--- a/.gitpod.yml
+++ b/.gitpod.yml
@@ -3,19 +3,20 @@
# https://www.gitpod.io/docs/config-start-tasks/#configuring-the-terminal
# -------------------------------------------------------------------------
-# assuming we use dockerhub: name of the docker user, docker image, tag, e.g. https://hub.docker.com/r/pandas/pandas-gitpod/tags
-image: pandas/pandas-gitpod:latest
+# images for gitpod pandas are in https://hub.docker.com/r/pandas/pandas-gitpod/tags
+# we're using the Dockerfile in the base of the repo
+image:
+ file: Dockerfile
tasks:
- name: Prepare development environment
init: |
mkdir -p .vscode
cp gitpod/settings.json .vscode/settings.json
- conda activate pandas-dev
- git pull --unshallow # need to force this else the prebuild fails
git fetch --tags
- python setup.py build_ext --inplace -j 4
- echo "🛠 Completed rebuilding Pandas!! 🛠 "
+ python -m pip install -ve . --no-build-isolation --config-settings editable-verbose=true
pre-commit install
+ command: |
+ python -m pip install -ve . --no-build-isolation --config-settings editable-verbose=true
echo "✨ Pre-build complete! You can close this terminal ✨ "
# --------------------------------------------------------
@@ -37,7 +38,7 @@ vscode:
# avoid adding too many. they each open a pop-up window
# --------------------------------------------------------
-# using prebuilds for the container
+# Using prebuilds for the container
# With this configuration the prebuild will happen on push to main
github:
prebuilds:
diff --git a/gitpod/Dockerfile b/gitpod/Dockerfile
index a706824912174..dd4ddf64d16b4 100644
--- a/gitpod/Dockerfile
+++ b/gitpod/Dockerfile
@@ -27,7 +27,7 @@
# OS/ARCH: linux/amd64
FROM gitpod/workspace-base:latest
-ARG MAMBAFORGE_VERSION="22.9.0-1"
+ARG MAMBAFORGE_VERSION="23.1.0-3"
ARG CONDA_ENV=pandas-dev
ARG PANDAS_HOME="/home/pandas"
diff --git a/gitpod/settings.json b/gitpod/settings.json
index 6251c55878541..2c2c3b551e1d1 100644
--- a/gitpod/settings.json
+++ b/gitpod/settings.json
@@ -1,6 +1,5 @@
{
- "restructuredtext.updateOnTextChanged": "true",
- "restructuredtext.updateDelay": 300,
+ "esbonio.server.pythonPath": "/usr/local/bin/python",
"restructuredtext.linter.disabledLinters": ["doc8","rst-lint", "rstcheck"],
- "python.defaultInterpreterPath": "/home/gitpod/mambaforge3/envs/pandas-dev/bin/python"
+ "python.defaultInterpreterPath": "/usr/local/bin/python"
}
| - [x] closes #53685
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature | it is not a feature/bug
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. | no new functions
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature. | not a feature/bug
The docker images on Dockerhub that are used with Gitpod are about six months out of date, which is causing the build to fail in Gitpod (because the docker images are using python 3.8.16).
This is the original repo in gitpod with the issue (using the latest commit):
https://gitpod.io#https://github.com/pandas-dev/pandas/commit/457690995ccbfc5b8eee80a0818d62070d078bcf
```
(pandas-dev) gitpod > /workspace/pandas $ python -i
Python 3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:01:55)
[GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/workspace/pandas/pandas/__init__.py", line 46, in <module>
from pandas.core.api import (
File "/workspace/pandas/pandas/core/api.py", line 47, in <module>
from pandas.core.groupby import (
File "/workspace/pandas/pandas/core/groupby/__init__.py", line 1, in <module>
from pandas.core.groupby.generic import (
File "/workspace/pandas/pandas/core/groupby/generic.py", line 70, in <module>
from pandas.core.frame import DataFrame
File "/workspace/pandas/pandas/core/frame.py", line 137, in <module>
from pandas.core.generic import (
File "/workspace/pandas/pandas/core/generic.py", line 191, in <module>
from pandas.core.window import (
File "/workspace/pandas/pandas/core/window/__init__.py", line 1, in <module>
from pandas.core.window.ewm import (
File "/workspace/pandas/pandas/core/window/ewm.py", line 41, in <module>
from pandas.core.window.numba_ import (
File "/workspace/pandas/pandas/core/window/numba_.py", line 20, in <module>
@functools.cache
AttributeError: module 'functools' has no attribute 'cache'
```
I've made a couple of changes to fix this and the other errors related to the Gitpod build. This is what it looks like with those changes:
https://gitpod.io#https://github.com/pandas-dev/pandas/pull/54046
```
gitpod@theuerc-pandas-5ztda316yrs:/workspace/pandas$ python -i
Python 3.10.8 (main, Dec 6 2022, 14:13:21) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas
+ /usr/local/bin/ninja
[1/1] Generating write_version_file with a custom command
>>> pandas.DataFrame({'test': 'testing'}, index=[0])
test
0 testing
```
**The Bigger Changes**
I'm following the updated [development environment creation](https://pandas.pydata.org/docs/dev/development/contributing_environment.html) instructions for these changes, but with the docker option instead of the mamba option (as mamba requires version pinning and causes other issues that can make it hard to maintain).
- Update setup to use pip/meson.
- Add a duplicate line in `command` to resolve a small issue with pip when prebuilding with pip/meson. Strangely this prebuild issue is not present on all of the branches I was testing on.
- Build the Gitpod image from the Dockerfile in the base of the repo instead of pulling the image from Dockerhub, so that it will always stay current with the rest of the repo.
- Gitpod will build and install all of the dependencies inside the image, and then reuse the image after that. If there are any changes to the Dockerfile, it will rebuild the image automatically.
  - This removes the need to manually update the Docker image on Dockerhub every 3-6 months to get Gitpod working again. The image rebuilds itself whenever Gitpod detects a difference in the Dockerfile. [source](https://www.gitpod.io/blog/docker-in-gitpod)
<img width="865" alt="Screen Shot 2023-07-07 at 3 18 33 PM" src="https://github.com/pandas-dev/pandas/assets/60138157/f119b1c3-72b1-46fc-97d0-d037c16935d5">
**The Smaller Changes**
- Update `gitpod/Dockerfile` to use the latest version of conda (though the mamba flow wouldn't be used at all anymore).
- Remove intermediary echo statements
- Remove legacy plugin settings that were causing errors.
**Next Steps and Other Considerations**
I tried to make only the minimal changes needed to get everything working again, but it seems like the `gitpod/` folder could be removed entirely. Only `.gitpod.yml`, `Dockerfile`, and `gitpod/settings.json` are needed for Gitpod (and `gitpod/settings.json` could be moved up to `settings.json`).
`gitpod/Dockerfile`, `gitpod/gitpod.Dockerfile`, and `gitpod/workspace_config` are customizations for the Gitpod workspace, but they have to be updated continually to keep working; otherwise they start causing errors after a few months (as they are doing in the repo right now).
Enabling prebuilds for branches/forks/pull-requests would be a nice future improvement. It would allow pull requests to be opened and run instantly in a web browser, and prebuilds save about 3 minutes each time Gitpod boots up. I wasn't sure whether there would be costs associated with it, so I didn't enable autoprebuilds for those options in the .gitpod.yml file. Right now prebuilds have to be done manually since they are only enabled for `main`. | https://api.github.com/repos/pandas-dev/pandas/pulls/54046 | 2023-07-07T20:48:40Z | 2023-07-11T18:09:54Z | 2023-07-11T18:09:54Z | 2023-07-11T18:23:56Z
STY: pyupgrade to 3.9 | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index c9cd7528bcd2f..f8e7dfe71115d 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -20,7 +20,7 @@ repos:
rev: 23.3.0
hooks:
- id: black
-- repo: https://github.com/charliermarsh/ruff-pre-commit
+- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.0.270
hooks:
- id: ruff
@@ -94,7 +94,7 @@ repos:
rev: v3.4.0
hooks:
- id: pyupgrade
- args: [--py38-plus]
+ args: [--py39-plus]
- repo: https://github.com/pre-commit/pygrep-hooks
rev: v1.10.0
hooks:
@@ -179,9 +179,6 @@ repos:
|np\.bool[^_8`]
|np\.object[^_8`]
- # imports from collections.abc instead of `from collections import abc`
- |from\ collections\.abc\ import
-
# Numpy
|from\ numpy\ import\ random
|from\ numpy\.random\ import
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 31893bdf929d8..71bc05f6fd6e1 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -586,14 +586,7 @@ class AccessorCallableDocumenter(AccessorLevelDocumenter, MethodDocumenter):
priority = 0.5
def format_name(self):
- if sys.version_info < (3, 9):
- # NOTE pyupgrade will remove this when we run it with --py39-plus
- # so don't remove the unnecessary `else` statement below
- from pandas.util._str_methods import removesuffix
-
- return removesuffix(MethodDocumenter.format_name(self), ".__call__")
- else:
- return MethodDocumenter.format_name(self).removesuffix(".__call__")
+ return MethodDocumenter.format_name(self).removesuffix(".__call__")
class PandasAutosummary(Autosummary):
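The shim deleted in the hunk above was only needed because `str.removesuffix` arrived in Python 3.9 (PEP 616); on 3.9+ the builtin replaces it directly:

```python
# str.removesuffix / str.removeprefix are builtins from Python 3.9 (PEP 616).
name = "Series.str.__call__"
print(name.removesuffix(".__call__"))  # Series.str

# Unlike str.rstrip, removesuffix strips the suffix at most once, as a unit:
print("ababa".removesuffix("ba"))  # aba
```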
diff --git a/pandas/_config/config.py b/pandas/_config/config.py
index 9caeaadc1ad84..3ad35b3f9068b 100644
--- a/pandas/_config/config.py
+++ b/pandas/_config/config.py
@@ -56,11 +56,10 @@
)
import re
from typing import (
+ TYPE_CHECKING,
Any,
Callable,
- Generator,
Generic,
- Iterable,
NamedTuple,
cast,
)
@@ -72,6 +71,12 @@
)
from pandas.util._exceptions import find_stack_level
+if TYPE_CHECKING:
+ from collections.abc import (
+ Generator,
+ Iterable,
+ )
+
class DeprecatedOption(NamedTuple):
key: str
diff --git a/pandas/_config/localization.py b/pandas/_config/localization.py
index 4e9a0142af3a4..5c1a0ff139533 100644
--- a/pandas/_config/localization.py
+++ b/pandas/_config/localization.py
@@ -10,10 +10,13 @@
import platform
import re
import subprocess
-from typing import Generator
+from typing import TYPE_CHECKING
from pandas._config.config import options
+if TYPE_CHECKING:
+ from collections.abc import Generator
+
@contextmanager
def set_locale(
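The recurring pattern in this diff — moving `Generator`/`Iterable` imports from `typing` to `collections.abc` under a `TYPE_CHECKING` guard — works because `from __future__ import annotations` keeps annotations unevaluated at runtime; in isolation:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only evaluated by static type checkers (mypy, pyright);
    # never imported at runtime, so it adds no import cost.
    from collections.abc import Iterable


def total(values: Iterable[int]) -> int:
    # The annotation stays a plain string at runtime thanks to
    # `from __future__ import annotations`, so the name need not exist.
    return sum(values)


print(total([1, 2, 3]))  # 6
```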
diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index ddb3acb7397e6..d1a729343e062 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -1,6 +1,7 @@
from __future__ import annotations
import collections
+from collections import Counter
from datetime import datetime
from decimal import Decimal
import operator
@@ -12,8 +13,6 @@
TYPE_CHECKING,
Callable,
ContextManager,
- Counter,
- Iterable,
cast,
)
@@ -109,6 +108,8 @@
from pandas.core.construction import extract_array
if TYPE_CHECKING:
+ from collections.abc import Iterable
+
from pandas._typing import (
Dtype,
Frequency,
diff --git a/pandas/_testing/_warnings.py b/pandas/_testing/_warnings.py
index 201aa81183301..11cf60ef36a9c 100644
--- a/pandas/_testing/_warnings.py
+++ b/pandas/_testing/_warnings.py
@@ -7,14 +7,18 @@
import re
import sys
from typing import (
- Generator,
+ TYPE_CHECKING,
Literal,
- Sequence,
- Type,
cast,
)
import warnings
+if TYPE_CHECKING:
+ from collections.abc import (
+ Generator,
+ Sequence,
+ )
+
@contextmanager
def assert_produces_warning(
@@ -91,7 +95,7 @@ class for all warnings. To raise multiple types of exceptions,
yield w
finally:
if expected_warning:
- expected_warning = cast(Type[Warning], expected_warning)
+ expected_warning = cast(type[Warning], expected_warning)
_assert_caught_expected_warning(
caught_warnings=w,
expected_warning=expected_warning,
@@ -195,7 +199,7 @@ def _is_unexpected_warning(
"""Check if the actual warning issued is unexpected."""
if actual_warning and not expected_warning:
return True
- expected_warning = cast(Type[Warning], expected_warning)
+ expected_warning = cast(type[Warning], expected_warning)
return bool(not issubclass(actual_warning.category, expected_warning))
diff --git a/pandas/_testing/contexts.py b/pandas/_testing/contexts.py
index f939bd42add93..ac1a6a2450444 100644
--- a/pandas/_testing/contexts.py
+++ b/pandas/_testing/contexts.py
@@ -8,7 +8,6 @@
IO,
TYPE_CHECKING,
Any,
- Generator,
)
import uuid
@@ -20,6 +19,8 @@
from pandas.io.common import get_handle
if TYPE_CHECKING:
+ from collections.abc import Generator
+
from pandas._typing import (
BaseBuffer,
CompressionOptions,
diff --git a/pandas/_typing.py b/pandas/_typing.py
index ffe9e6b319dfd..ef53c117b7b45 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -1,5 +1,11 @@
from __future__ import annotations
+from collections.abc import (
+ Hashable,
+ Iterator,
+ Mapping,
+ Sequence,
+)
from datetime import (
datetime,
timedelta,
@@ -11,16 +17,9 @@
TYPE_CHECKING,
Any,
Callable,
- Dict,
- Hashable,
- Iterator,
- List,
Literal,
- Mapping,
Optional,
Protocol,
- Sequence,
- Tuple,
Type as type_t,
TypeVar,
Union,
@@ -111,7 +110,7 @@
# Cannot use `Sequence` because a string is a sequence, and we don't want to
# accept that. Could refine if https://github.com/python/typing/issues/256 is
# resolved to differentiate between Sequence[str] and str
-ListLike = Union[AnyArrayLike, List, range]
+ListLike = Union[AnyArrayLike, list, range]
# scalars
@@ -146,10 +145,10 @@
Axis = Union[AxisInt, Literal["index", "columns", "rows"]]
IndexLabel = Union[Hashable, Sequence[Hashable]]
Level = Hashable
-Shape = Tuple[int, ...]
-Suffixes = Tuple[Optional[str], Optional[str]]
+Shape = tuple[int, ...]
+Suffixes = tuple[Optional[str], Optional[str]]
Ordered = Optional[bool]
-JSONSerializable = Optional[Union[PythonScalar, List, Dict]]
+JSONSerializable = Optional[Union[PythonScalar, list, dict]]
Frequency = Union[str, "BaseOffset"]
Axes = ListLike
@@ -166,15 +165,15 @@
Dtype = Union["ExtensionDtype", NpDtype]
AstypeArg = Union["ExtensionDtype", "npt.DTypeLike"]
# DtypeArg specifies all allowable dtypes in a functions its dtype argument
-DtypeArg = Union[Dtype, Dict[Hashable, Dtype]]
+DtypeArg = Union[Dtype, dict[Hashable, Dtype]]
DtypeObj = Union[np.dtype, "ExtensionDtype"]
# converters
-ConvertersArg = Dict[Hashable, Callable[[Dtype], Dtype]]
+ConvertersArg = dict[Hashable, Callable[[Dtype], Dtype]]
# parse_dates
ParseDatesArg = Union[
- bool, List[Hashable], List[List[Hashable]], Dict[Hashable, List[Hashable]]
+ bool, list[Hashable], list[list[Hashable]], dict[Hashable, list[Hashable]]
]
# For functions like rename that convert one label to another
@@ -195,10 +194,10 @@
# types of `func` kwarg for DataFrame.aggregate and Series.aggregate
AggFuncTypeBase = Union[Callable, str]
-AggFuncTypeDict = Dict[Hashable, Union[AggFuncTypeBase, List[AggFuncTypeBase]]]
+AggFuncTypeDict = dict[Hashable, Union[AggFuncTypeBase, list[AggFuncTypeBase]]]
AggFuncType = Union[
AggFuncTypeBase,
- List[AggFuncTypeBase],
+ list[AggFuncTypeBase],
AggFuncTypeDict,
]
AggObjType = Union[
@@ -286,18 +285,18 @@ def closed(self) -> bool:
FilePath = Union[str, "PathLike[str]"]
# for arbitrary kwargs passed during reading/writing files
-StorageOptions = Optional[Dict[str, Any]]
+StorageOptions = Optional[dict[str, Any]]
# compression keywords and compression
-CompressionDict = Dict[str, Any]
+CompressionDict = dict[str, Any]
CompressionOptions = Optional[
Union[Literal["infer", "gzip", "bz2", "zip", "xz", "zstd", "tar"], CompressionDict]
]
# types in DataFrameFormatter
FormattersType = Union[
- List[Callable], Tuple[Callable, ...], Mapping[Union[str, int], Callable]
+ list[Callable], tuple[Callable, ...], Mapping[Union[str, int], Callable]
]
ColspaceType = Mapping[Hashable, Union[str, int]]
FloatFormatType = Union[str, Callable, "EngFormatter"]
@@ -347,9 +346,9 @@ def closed(self) -> bool:
# https://bugs.python.org/issue41810
# Using List[int] here rather than Sequence[int] to disallow tuples.
ScalarIndexer = Union[int, np.integer]
-SequenceIndexer = Union[slice, List[int], np.ndarray]
+SequenceIndexer = Union[slice, list[int], np.ndarray]
PositionalIndexer = Union[ScalarIndexer, SequenceIndexer]
-PositionalIndexerTuple = Tuple[PositionalIndexer, PositionalIndexer]
+PositionalIndexerTuple = tuple[PositionalIndexer, PositionalIndexer]
PositionalIndexer2D = Union[PositionalIndexer, PositionalIndexerTuple]
if TYPE_CHECKING:
TakeIndexer = Union[Sequence[int], Sequence[np.integer], npt.NDArray[np.integer]]
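The alias rewrites in `_typing.py` above depend on PEP 585 (Python 3.9+), which made the builtin containers subscriptable at runtime, so `typing.Tuple`/`typing.Dict`/`typing.List` become unnecessary; a small sketch with simplified aliases:

```python
from typing import Optional, Union

# PEP 585: builtin containers are subscriptable at runtime on 3.9+,
# so aliases like Shape/Suffixes need no typing.Tuple.
Shape = tuple[int, ...]
Suffixes = tuple[Optional[str], Optional[str]]
JSONSerializable = Optional[Union[str, int, float, bool, list, dict]]

shape: Shape = (3, 4)
print(Shape)  # tuple[int, ...]
```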
diff --git a/pandas/_version.py b/pandas/_version.py
index 8c655648377c7..5d610b5e1ea7e 100644
--- a/pandas/_version.py
+++ b/pandas/_version.py
@@ -16,10 +16,7 @@
import re
import subprocess
import sys
-from typing import (
- Callable,
- Dict,
-)
+from typing import Callable
def get_keywords():
@@ -57,8 +54,8 @@ class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
-LONG_VERSION_PY: Dict[str, str] = {}
-HANDLERS: Dict[str, Dict[str, Callable]] = {}
+LONG_VERSION_PY: dict[str, str] = {}
+HANDLERS: dict[str, dict[str, Callable]] = {}
def register_vcs_handler(vcs, method): # decorator
diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index c15dd7b37be93..8282ec25c1d58 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -7,7 +7,7 @@
import copy
import io
import pickle as pkl
-from typing import Generator
+from typing import TYPE_CHECKING
import numpy as np
@@ -22,6 +22,9 @@
)
from pandas.core.internals import BlockManager
+if TYPE_CHECKING:
+ from collections.abc import Generator
+
def load_reduce(self):
stack = self.stack
diff --git a/pandas/conftest.py b/pandas/conftest.py
index b2f1377a9fb32..d7e8fbeb9336b 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -32,9 +32,8 @@
import os
from pathlib import Path
from typing import (
+ TYPE_CHECKING,
Callable,
- Hashable,
- Iterator,
)
from dateutil.tz import (
@@ -73,6 +72,12 @@
MultiIndex,
)
+if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterator,
+ )
+
try:
import pyarrow as pa
except ImportError:
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index fc322312a9195..8a28b30aecc03 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -9,13 +9,7 @@
Any,
Callable,
DefaultDict,
- Dict,
- Hashable,
- Iterable,
- Iterator,
- List,
Literal,
- Sequence,
cast,
)
import warnings
@@ -59,6 +53,13 @@
from pandas.core.construction import ensure_wrapped_if_datetimelike
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterable,
+ Iterator,
+ Sequence,
+ )
+
from pandas import (
DataFrame,
Index,
@@ -69,7 +70,7 @@
from pandas.core.window.rolling import BaseWindow
-ResType = Dict[int, Any]
+ResType = dict[int, Any]
def frame_apply(
@@ -213,7 +214,7 @@ def transform(self) -> DataFrame | Series:
return obj.T.transform(func, 0, *args, **kwargs).T
if is_list_like(func) and not is_dict_like(func):
- func = cast(List[AggFuncTypeBase], func)
+ func = cast(list[AggFuncTypeBase], func)
# Convert func equivalent dict
if is_series:
func = {com.get_callable_name(v) or v: v for v in func}
@@ -335,7 +336,7 @@ def compute_list_like(
Data for result. When aggregating with a Series, this can contain any
Python objects.
"""
- func = cast(List[AggFuncTypeBase], self.func)
+ func = cast(list[AggFuncTypeBase], self.func)
obj = self.obj
results = []
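Likewise, `typing.cast` accepts the builtin generic directly on 3.9+ (`cast(list[AggFuncTypeBase], ...)` rather than `cast(List[...], ...)`); `cast` is a pure type-checker hint and returns its argument unchanged:

```python
from typing import cast


def first_item(obj: object) -> int:
    # cast() performs no runtime check or conversion; it only tells the
    # type checker to treat `obj` as list[int] (builtin generic, 3.9+).
    values = cast(list[int], obj)
    return values[0]


print(first_item([10, 20]))  # 10
```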
diff --git a/pandas/core/array_algos/replace.py b/pandas/core/array_algos/replace.py
index 85d1f7ccf2e88..5f377276be480 100644
--- a/pandas/core/array_algos/replace.py
+++ b/pandas/core/array_algos/replace.py
@@ -5,10 +5,10 @@
import operator
import re
+from re import Pattern
from typing import (
TYPE_CHECKING,
Any,
- Pattern,
)
import numpy as np
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index f586de3d2bdee..f987bab7a2b87 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -5,7 +5,6 @@
TYPE_CHECKING,
Any,
Literal,
- Sequence,
cast,
overload,
)
@@ -58,6 +57,8 @@
from pandas.core.sorting import nargminmax
if TYPE_CHECKING:
+ from collections.abc import Sequence
+
from pandas._typing import (
NumpySorter,
NumpyValueArrayLike,
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 284044dfadfef..e2630fd61072b 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -1,16 +1,13 @@
from __future__ import annotations
-import functools
import operator
import re
-import sys
import textwrap
from typing import (
TYPE_CHECKING,
Any,
Callable,
Literal,
- Sequence,
cast,
)
import unicodedata
@@ -127,6 +124,8 @@ def floordiv_compat(
}
if TYPE_CHECKING:
+ from collections.abc import Sequence
+
from pandas._typing import (
ArrayLike,
AxisInt,
@@ -2189,14 +2188,7 @@ def _str_removeprefix(self, prefix: str):
# removed = pc.utf8_slice_codeunits(self._pa_array, len(prefix))
# result = pc.if_else(starts_with, removed, self._pa_array)
# return type(self)(result)
- if sys.version_info < (3, 9):
- # NOTE pyupgrade will remove this when we run it with --py39-plus
- # so don't remove the unnecessary `else` statement below
- from pandas.util._str_methods import removeprefix
-
- predicate = functools.partial(removeprefix, prefix=prefix)
- else:
- predicate = lambda val: val.removeprefix(prefix)
+ predicate = lambda val: val.removeprefix(prefix)
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
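The `_str_removeprefix` change above drops the same kind of Python 3.8 fallback for `str.removeprefix` (also PEP 616, available from 3.9), keeping only the lambda shown in the diff:

```python
prefix = "arrow_"
# On Python 3.9+ the builtin handles the prefix strip; no shim needed.
predicate = lambda val: val.removeprefix(prefix)
print([predicate(v) for v in ["arrow_one", "two"]])  # ['one', 'two']
```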
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 64f917a419391..c82f2ac018f93 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -14,9 +14,7 @@
Any,
Callable,
ClassVar,
- Iterator,
Literal,
- Sequence,
cast,
overload,
)
@@ -75,6 +73,11 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Iterator,
+ Sequence,
+ )
+
from pandas._typing import (
ArrayLike,
AstypeArg,
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index fdf4434d89f4c..6c61ce7a3e99b 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -6,10 +6,7 @@
from shutil import get_terminal_size
from typing import (
TYPE_CHECKING,
- Hashable,
- Iterator,
Literal,
- Sequence,
cast,
overload,
)
@@ -93,6 +90,12 @@
from pandas.io.formats import console
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterator,
+ Sequence,
+ )
+
from pandas._typing import (
ArrayLike,
AstypeArg,
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 40cd59340f942..3274b822f3bd7 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -10,9 +10,7 @@
TYPE_CHECKING,
Any,
Callable,
- Iterator,
Literal,
- Sequence,
Union,
cast,
final,
@@ -144,6 +142,11 @@
from pandas.tseries import frequencies
if TYPE_CHECKING:
+ from collections.abc import (
+ Iterator,
+ Sequence,
+ )
+
from pandas import Index
from pandas.core.arrays import (
DatetimeArray,
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 3427ba025118a..8ad51e4a90027 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -2,13 +2,11 @@
from datetime import (
datetime,
- time,
timedelta,
tzinfo,
)
from typing import (
TYPE_CHECKING,
- Iterator,
cast,
)
import warnings
@@ -72,6 +70,8 @@
)
if TYPE_CHECKING:
+ from collections.abc import Iterator
+
from pandas._typing import (
DateTimeErrorChoices,
IntervalClosedType,
@@ -84,8 +84,6 @@
from pandas import DataFrame
from pandas.core.arrays import PeriodArray
-_midnight = time(0, 0)
-
def tz_to_dtype(
tz: tzinfo | None, unit: str = "ns"
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 446c0957db343..7c0263660ef55 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -8,9 +8,7 @@
import textwrap
from typing import (
TYPE_CHECKING,
- Iterator,
Literal,
- Sequence,
Union,
overload,
)
@@ -100,6 +98,11 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Iterator,
+ Sequence,
+ )
+
from pandas import (
Index,
Series,
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 15c485cbb1499..d24aa95cdd6c5 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -3,9 +3,7 @@
from typing import (
TYPE_CHECKING,
Any,
- Iterator,
Literal,
- Sequence,
overload,
)
import warnings
@@ -83,6 +81,10 @@
from pandas.core.ops import invalid_comparison
if TYPE_CHECKING:
+ from collections.abc import (
+ Iterator,
+ Sequence,
+ )
from pandas import Series
from pandas.core.arrays import BooleanArray
from pandas._typing import (
diff --git a/pandas/core/arrays/numeric.py b/pandas/core/arrays/numeric.py
index d6adb0315a2e3..0e86c1efba17a 100644
--- a/pandas/core/arrays/numeric.py
+++ b/pandas/core/arrays/numeric.py
@@ -5,7 +5,6 @@
TYPE_CHECKING,
Any,
Callable,
- Mapping,
)
import numpy as np
@@ -29,6 +28,8 @@
)
if TYPE_CHECKING:
+ from collections.abc import Mapping
+
import pyarrow
from pandas._typing import (
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index c9c2d258a9a16..c22e74876f58f 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -7,7 +7,6 @@
Any,
Callable,
Literal,
- Sequence,
TypeVar,
cast,
overload,
@@ -74,6 +73,8 @@
import pandas.core.common as com
if TYPE_CHECKING:
+ from collections.abc import Sequence
+
from pandas._typing import (
AnyArrayLike,
Dtype,
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index aba6811c5eeb7..6b48a9181a5a8 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -11,7 +11,6 @@
Any,
Callable,
Literal,
- Sequence,
cast,
overload,
)
@@ -86,6 +85,7 @@
# See https://github.com/python/typing/issues/684
if TYPE_CHECKING:
+ from collections.abc import Sequence
from enum import Enum
class ellipsis(Enum):
diff --git a/pandas/core/arrays/sparse/scipy_sparse.py b/pandas/core/arrays/sparse/scipy_sparse.py
index 7f6a9c589c486..71b71a9779da5 100644
--- a/pandas/core/arrays/sparse/scipy_sparse.py
+++ b/pandas/core/arrays/sparse/scipy_sparse.py
@@ -5,10 +5,7 @@
"""
from __future__ import annotations
-from typing import (
- TYPE_CHECKING,
- Iterable,
-)
+from typing import TYPE_CHECKING
from pandas._libs import lib
@@ -19,6 +16,8 @@
from pandas.core.series import Series
if TYPE_CHECKING:
+ from collections.abc import Iterable
+
import numpy as np
import scipy.sparse
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index debac4dd5243e..a81609e1bb618 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -4,7 +4,6 @@
import operator
from typing import (
TYPE_CHECKING,
- Iterator,
cast,
)
import warnings
@@ -67,6 +66,8 @@
from pandas.core.ops.common import unpack_zerodim_and_defer
if TYPE_CHECKING:
+ from collections.abc import Iterator
+
from pandas._typing import (
AxisInt,
DateTimeErrorChoices,
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 3710a644c7826..3629a6f9526af 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -9,8 +9,6 @@
TYPE_CHECKING,
Any,
Generic,
- Hashable,
- Iterator,
Literal,
cast,
final,
@@ -69,6 +67,11 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterator,
+ )
+
from pandas._typing import (
DropKeep,
NumpySorter,
diff --git a/pandas/core/common.py b/pandas/core/common.py
index c1d78f7c19c98..11c6d8ea1a821 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -10,6 +10,13 @@
abc,
defaultdict,
)
+from collections.abc import (
+ Collection,
+ Generator,
+ Hashable,
+ Iterable,
+ Sequence,
+)
import contextlib
from functools import partial
import inspect
@@ -17,11 +24,6 @@
TYPE_CHECKING,
Any,
Callable,
- Collection,
- Generator,
- Hashable,
- Iterable,
- Sequence,
cast,
overload,
)
diff --git a/pandas/core/computation/align.py b/pandas/core/computation/align.py
index a8ca08c58c261..85d412d044ba8 100644
--- a/pandas/core/computation/align.py
+++ b/pandas/core/computation/align.py
@@ -10,7 +10,6 @@
from typing import (
TYPE_CHECKING,
Callable,
- Sequence,
)
import warnings
@@ -29,6 +28,8 @@
from pandas.core.computation.common import result_type_many
if TYPE_CHECKING:
+ from collections.abc import Sequence
+
from pandas._typing import F
from pandas.core.generic import NDFrame
diff --git a/pandas/core/computation/ops.py b/pandas/core/computation/ops.py
index dd7f9c3f76049..b14187b0cc3a5 100644
--- a/pandas/core/computation/ops.py
+++ b/pandas/core/computation/ops.py
@@ -8,9 +8,8 @@
from functools import partial
import operator
from typing import (
+ TYPE_CHECKING,
Callable,
- Iterable,
- Iterator,
Literal,
)
@@ -35,6 +34,12 @@
pprint_thing_encoded,
)
+if TYPE_CHECKING:
+ from collections.abc import (
+ Iterable,
+ Iterator,
+ )
+
REDUCTIONS = ("sum", "prod", "min", "max")
_unary_math_ops = (
diff --git a/pandas/core/computation/parsing.py b/pandas/core/computation/parsing.py
index 4020ec7b5e9eb..4cfa0f2baffd5 100644
--- a/pandas/core/computation/parsing.py
+++ b/pandas/core/computation/parsing.py
@@ -7,10 +7,13 @@
from keyword import iskeyword
import token
import tokenize
-from typing import (
- Hashable,
- Iterator,
-)
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterator,
+ )
# A token value Python's tokenizer probably will never use.
BACKTICK_QUOTED_STRING = 100
diff --git a/pandas/core/computation/scope.py b/pandas/core/computation/scope.py
index 0b9ba84cae511..7e553ca448218 100644
--- a/pandas/core/computation/scope.py
+++ b/pandas/core/computation/scope.py
@@ -3,6 +3,7 @@
"""
from __future__ import annotations
+from collections import ChainMap
import datetime
import inspect
from io import StringIO
@@ -10,10 +11,7 @@
import pprint
import struct
import sys
-from typing import (
- ChainMap,
- TypeVar,
-)
+from typing import TypeVar
import numpy as np
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index 34f8ef500be86..014c99c87ad00 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -6,10 +6,10 @@
"""
from __future__ import annotations
+from collections.abc import Sequence
from typing import (
TYPE_CHECKING,
Optional,
- Sequence,
Union,
cast,
overload,
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index a73362cadb93a..22c2aa374263d 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -10,8 +10,6 @@
TYPE_CHECKING,
Any,
Literal,
- Sequence,
- Sized,
TypeVar,
cast,
overload,
@@ -82,6 +80,11 @@
from pandas.io._util import _arrow_dtype_mapping
if TYPE_CHECKING:
+ from collections.abc import (
+ Sequence,
+ Sized,
+ )
+
from pandas._typing import (
ArrayLike,
Dtype,
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index cba7c44a219bf..c733d4578ca04 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -5,7 +5,6 @@
from typing import (
TYPE_CHECKING,
- Sequence,
cast,
)
@@ -26,6 +25,8 @@
)
if TYPE_CHECKING:
+ from collections.abc import Sequence
+
from pandas._typing import (
ArrayLike,
AxisInt,
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index ea4d10c06efe3..074cff5b6564a 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -14,7 +14,6 @@
from typing import (
TYPE_CHECKING,
Any,
- MutableMapping,
cast,
)
import warnings
@@ -65,6 +64,7 @@
import pyarrow as pa
if TYPE_CHECKING:
+ from collections.abc import MutableMapping
from datetime import tzinfo
import pyarrow as pa # noqa: F811, TCH004
diff --git a/pandas/core/dtypes/inference.py b/pandas/core/dtypes/inference.py
index af4f0a1c0aa05..9c04e57be36fc 100644
--- a/pandas/core/dtypes/inference.py
+++ b/pandas/core/dtypes/inference.py
@@ -5,17 +5,16 @@
from collections import abc
from numbers import Number
import re
-from typing import (
- TYPE_CHECKING,
- Hashable,
- Pattern,
-)
+from re import Pattern
+from typing import TYPE_CHECKING
import numpy as np
from pandas._libs import lib
if TYPE_CHECKING:
+ from collections.abc import Hashable
+
from pandas._typing import TypeGuard
is_bool = lib.is_bool
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 6fdd6cb2a639e..c0f4dbb4aeb2d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -12,6 +12,13 @@
import collections
from collections import abc
+from collections.abc import (
+ Hashable,
+ Iterable,
+ Iterator,
+ Mapping,
+ Sequence,
+)
import functools
from io import StringIO
import itertools
@@ -22,12 +29,7 @@
TYPE_CHECKING,
Any,
Callable,
- Hashable,
- Iterable,
- Iterator,
Literal,
- Mapping,
- Sequence,
cast,
overload,
)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 8f213a1b7a1e2..010bd0cd79de9 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -15,13 +15,8 @@
Any,
Callable,
ClassVar,
- Hashable,
- Iterator,
Literal,
- Mapping,
NoReturn,
- Sequence,
- Type,
cast,
final,
overload,
@@ -202,6 +197,13 @@
from pandas.io.formats.printing import pprint_thing
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterator,
+ Mapping,
+ Sequence,
+ )
+
from pandas._libs.tslibs import BaseOffset
from pandas import (
@@ -6866,7 +6868,7 @@ def convert_dtypes(
]
if len(results) > 0:
result = concat(results, axis=1, copy=False, keys=self.columns)
- cons = cast(Type["DataFrame"], self._constructor)
+ cons = cast(type["DataFrame"], self._constructor)
result = cons(result)
result = result.__finalize__(self, method="convert_dtypes")
# https://github.com/python/mypy/issues/8354
diff --git a/pandas/core/groupby/base.py b/pandas/core/groupby/base.py
index 0f6d39be7d32f..a443597347283 100644
--- a/pandas/core/groupby/base.py
+++ b/pandas/core/groupby/base.py
@@ -4,7 +4,10 @@
from __future__ import annotations
import dataclasses
-from typing import Hashable
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from collections.abc import Hashable
@dataclasses.dataclass(order=True, frozen=True)
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 3bedcb935b6ba..8691866bac752 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -14,11 +14,8 @@
TYPE_CHECKING,
Any,
Callable,
- Hashable,
Literal,
- Mapping,
NamedTuple,
- Sequence,
TypeVar,
Union,
cast,
@@ -92,6 +89,12 @@
from pandas.plotting import boxplot_frame_groupby
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Mapping,
+ Sequence,
+ )
+
from pandas._typing import (
ArrayLike,
Axis,
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index ff9c1cf757f37..85ec8c1b86374 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -8,6 +8,12 @@ class providing the base-class of operations.
"""
from __future__ import annotations
+from collections.abc import (
+ Hashable,
+ Iterator,
+ Mapping,
+ Sequence,
+)
import datetime
from functools import (
partial,
@@ -18,12 +24,7 @@ class providing the base-class of operations.
from typing import (
TYPE_CHECKING,
Callable,
- Hashable,
- Iterator,
- List,
Literal,
- Mapping,
- Sequence,
TypeVar,
Union,
cast,
@@ -685,9 +686,9 @@ def f(self):
_KeysArgType = Union[
Hashable,
- List[Hashable],
+ list[Hashable],
Callable[[Hashable], Hashable],
- List[Callable[[Hashable], Hashable]],
+ list[Callable[[Hashable], Hashable]],
Mapping[Hashable, Hashable],
]
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index 316b896da126f..764b74f81e7ef 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -6,8 +6,6 @@
from typing import (
TYPE_CHECKING,
- Hashable,
- Iterator,
final,
)
import warnings
@@ -46,6 +44,11 @@
from pandas.io.formats.printing import pprint_thing
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterator,
+ )
+
from pandas._typing import (
ArrayLike,
Axis,
diff --git a/pandas/core/groupby/indexing.py b/pandas/core/groupby/indexing.py
index 61e88565f8e33..a3c5ab8edc94e 100644
--- a/pandas/core/groupby/indexing.py
+++ b/pandas/core/groupby/indexing.py
@@ -1,8 +1,8 @@
from __future__ import annotations
+from collections.abc import Iterable
from typing import (
TYPE_CHECKING,
- Iterable,
Literal,
cast,
)
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index f0e4484f69f8d..3c4a22d009406 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -13,9 +13,6 @@
TYPE_CHECKING,
Callable,
Generic,
- Hashable,
- Iterator,
- Sequence,
final,
)
@@ -71,6 +68,12 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterator,
+ Sequence,
+ )
+
from pandas.core.generic import NDFrame
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 5f19f6d06a194..73559e80cbcc6 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -9,11 +9,8 @@
Any,
Callable,
ClassVar,
- Hashable,
- Iterable,
Literal,
NoReturn,
- Sequence,
cast,
final,
overload,
@@ -189,6 +186,12 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterable,
+ Sequence,
+ )
+
from pandas import (
CategoricalIndex,
DataFrame,
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 9bd0bc98dc733..648a3ad5b3bd7 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -3,7 +3,6 @@
from typing import (
TYPE_CHECKING,
Any,
- Hashable,
Literal,
cast,
)
@@ -42,6 +41,8 @@
from pandas.io.formats.printing import pprint_thing
if TYPE_CHECKING:
+ from collections.abc import Hashable
+
from pandas._typing import (
Dtype,
DtypeObj,
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index c4be04c469fae..1617b7c750c3c 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -11,7 +11,6 @@
TYPE_CHECKING,
Any,
Callable,
- Sequence,
cast,
final,
)
@@ -65,6 +64,7 @@
from pandas.core.tools.timedeltas import to_timedelta
if TYPE_CHECKING:
+ from collections.abc import Sequence
from datetime import datetime
from pandas._typing import (
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 1500bcef5d4d9..62b4dbded50ab 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -2,10 +2,7 @@
import datetime as dt
import operator
-from typing import (
- TYPE_CHECKING,
- Hashable,
-)
+from typing import TYPE_CHECKING
import warnings
import numpy as np
@@ -50,6 +47,8 @@
from pandas.core.tools.times import to_time
if TYPE_CHECKING:
+ from collections.abc import Hashable
+
from pandas._typing import (
Dtype,
DtypeObj,
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 50838f8c65881..13e907741d233 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -9,7 +9,6 @@
from typing import (
TYPE_CHECKING,
Any,
- Hashable,
Literal,
)
@@ -88,6 +87,8 @@
)
if TYPE_CHECKING:
+ from collections.abc import Hashable
+
from pandas._typing import (
Dtype,
DtypeObj,
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 824ffa5cf4c67..d060ad24da2da 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1,19 +1,19 @@
from __future__ import annotations
+from collections.abc import (
+ Collection,
+ Generator,
+ Hashable,
+ Iterable,
+ Sequence,
+)
from functools import wraps
from sys import getsizeof
from typing import (
TYPE_CHECKING,
Any,
Callable,
- Collection,
- Generator,
- Hashable,
- Iterable,
- List,
Literal,
- Sequence,
- Tuple,
cast,
)
import warnings
@@ -587,7 +587,7 @@ def from_tuples(
raise TypeError("Input must be a list / sequence of tuple-likes.")
if is_iterator(tuples):
tuples = list(tuples)
- tuples = cast(Collection[Tuple[Hashable, ...]], tuples)
+ tuples = cast(Collection[tuple[Hashable, ...]], tuples)
# handling the empty tuple cases
if len(tuples) and all(isinstance(e, tuple) and not e for e in tuples):
@@ -617,7 +617,7 @@ def from_tuples(
arrays = list(lib.to_object_array_tuples(tuples).T)
else:
arrs = zip(*tuples)
- arrays = cast(List[Sequence[Hashable]], arrs)
+ arrays = cast(list[Sequence[Hashable]], arrs)
return cls.from_arrays(arrays, sortorder=sortorder, names=names)
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index fd7cab0344e42..bb39cdbdce262 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -4,10 +4,7 @@
datetime,
timedelta,
)
-from typing import (
- TYPE_CHECKING,
- Hashable,
-)
+from typing import TYPE_CHECKING
import numpy as np
@@ -46,6 +43,8 @@
from pandas.core.indexes.extension import inherit_names
if TYPE_CHECKING:
+ from collections.abc import Hashable
+
from pandas._typing import (
Dtype,
DtypeObj,
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index 2b0dc53a736ea..fe66ec9c6792b 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -1,5 +1,9 @@
from __future__ import annotations
+from collections.abc import (
+ Hashable,
+ Iterator,
+)
from datetime import timedelta
import operator
from sys import getsizeof
@@ -7,9 +11,6 @@
TYPE_CHECKING,
Any,
Callable,
- Hashable,
- Iterator,
- List,
cast,
)
@@ -910,7 +911,7 @@ def _concat(self, indexes: list[Index], name: Hashable) -> Index:
elif len(indexes) == 1:
return indexes[0]
- rng_indexes = cast(List[RangeIndex], indexes)
+ rng_indexes = cast(list[RangeIndex], indexes)
start = step = next_ = None
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 4a2803f638c73..9fb83b3d55df9 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -4,8 +4,6 @@
import sys
from typing import (
TYPE_CHECKING,
- Hashable,
- Sequence,
cast,
final,
)
@@ -75,6 +73,11 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Sequence,
+ )
+
from pandas._typing import (
Axis,
AxisInt,
diff --git a/pandas/core/interchange/dataframe_protocol.py b/pandas/core/interchange/dataframe_protocol.py
index d36bda120e33d..95e7b6a26f93a 100644
--- a/pandas/core/interchange/dataframe_protocol.py
+++ b/pandas/core/interchange/dataframe_protocol.py
@@ -10,12 +10,17 @@
)
import enum
from typing import (
+ TYPE_CHECKING,
Any,
- Iterable,
- Sequence,
TypedDict,
)
+if TYPE_CHECKING:
+ from collections.abc import (
+ Iterable,
+ Sequence,
+ )
+
class DlpackDeviceType(enum.IntEnum):
"""Integer enum for device type codes matching DLPack."""
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index 431de70a25392..5591253618f5f 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -7,7 +7,6 @@
from typing import (
TYPE_CHECKING,
Callable,
- Hashable,
Literal,
)
@@ -90,6 +89,8 @@
from pandas.core.internals.managers import make_na_array
if TYPE_CHECKING:
+ from collections.abc import Hashable
+
from pandas._typing import (
ArrayLike,
AxisInt,
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 4480a1a0c6746..fee8314796dfe 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -6,9 +6,7 @@
TYPE_CHECKING,
Any,
Callable,
- Iterable,
Literal,
- Sequence,
cast,
final,
)
@@ -117,6 +115,11 @@
from pandas.core.indexers import check_setitem_lengths
if TYPE_CHECKING:
+ from collections.abc import (
+ Iterable,
+ Sequence,
+ )
+
from pandas.core.api import Index
from pandas.core.arrays._mixins import NDArrayBackedExtensionArray
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index 8d12fb91887ac..294ef96a9aab9 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -2,7 +2,6 @@
from typing import (
TYPE_CHECKING,
- Sequence,
cast,
)
import warnings
@@ -51,6 +50,8 @@
)
if TYPE_CHECKING:
+ from collections.abc import Sequence
+
from pandas._typing import (
ArrayLike,
AxisInt,
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 7dcb33780dd08..ee85bc5a87834 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -8,8 +8,6 @@
from typing import (
TYPE_CHECKING,
Any,
- Hashable,
- Sequence,
)
import numpy as np
@@ -77,6 +75,11 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Sequence,
+ )
+
from pandas._typing import (
ArrayLike,
DtypeObj,
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 0d24f742b97c7..b25eba15dfc06 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1,12 +1,14 @@
from __future__ import annotations
+from collections.abc import (
+ Hashable,
+ Sequence,
+)
import itertools
from typing import (
TYPE_CHECKING,
Callable,
- Hashable,
Literal,
- Sequence,
cast,
)
import warnings
diff --git a/pandas/core/internals/ops.py b/pandas/core/internals/ops.py
index 8434ed05571b7..cf9466c0bdf0b 100644
--- a/pandas/core/internals/ops.py
+++ b/pandas/core/internals/ops.py
@@ -2,13 +2,14 @@
from typing import (
TYPE_CHECKING,
- Iterator,
NamedTuple,
)
from pandas.core.dtypes.common import is_1d_only_ea_dtype
if TYPE_CHECKING:
+ from collections.abc import Iterator
+
from pandas._libs.internals import BlockPlacement
from pandas._typing import ArrayLike
diff --git a/pandas/core/methods/describe.py b/pandas/core/methods/describe.py
index 71693f5b9195c..7b2c71ac1ca3c 100644
--- a/pandas/core/methods/describe.py
+++ b/pandas/core/methods/describe.py
@@ -12,8 +12,6 @@
from typing import (
TYPE_CHECKING,
Callable,
- Hashable,
- Sequence,
cast,
)
@@ -43,6 +41,11 @@
from pandas.io.formats.format import format_percentiles
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Sequence,
+ )
+
from pandas import (
DataFrame,
Series,
diff --git a/pandas/core/methods/selectn.py b/pandas/core/methods/selectn.py
index 56dd9fa7658bc..894791cb46371 100644
--- a/pandas/core/methods/selectn.py
+++ b/pandas/core/methods/selectn.py
@@ -4,10 +4,12 @@
from __future__ import annotations
-from typing import (
- TYPE_CHECKING,
+from collections.abc import (
Hashable,
Sequence,
+)
+from typing import (
+ TYPE_CHECKING,
cast,
final,
)
diff --git a/pandas/core/ops/common.py b/pandas/core/ops/common.py
index f8f53310e773b..559977bacf881 100644
--- a/pandas/core/ops/common.py
+++ b/pandas/core/ops/common.py
@@ -4,7 +4,6 @@
from __future__ import annotations
from functools import wraps
-import sys
from typing import (
TYPE_CHECKING,
Callable,
@@ -57,15 +56,7 @@ def _unpack_zerodim_and_defer(method, name: str):
-------
method
"""
- if sys.version_info < (3, 9):
- from pandas.util._str_methods import (
- removeprefix,
- removesuffix,
- )
-
- stripped_name = removesuffix(removeprefix(name, "__"), "__")
- else:
- stripped_name = name.removeprefix("__").removesuffix("__")
+ stripped_name = name.removeprefix("__").removesuffix("__")
is_cmp = stripped_name in {"eq", "ne", "lt", "le", "gt", "ge"}
@wraps(method)
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index c0a6587d527e1..c48ad224d3645 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -5,7 +5,6 @@
from typing import (
TYPE_CHECKING,
Callable,
- Hashable,
Literal,
cast,
final,
@@ -86,6 +85,8 @@
)
if TYPE_CHECKING:
+ from collections.abc import Hashable
+
from pandas._typing import (
AnyArrayLike,
Axis,
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 81e3f575f6cb3..f6aedcbd67cba 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -7,10 +7,7 @@
from typing import (
TYPE_CHECKING,
Callable,
- Hashable,
- Iterable,
Literal,
- Mapping,
cast,
overload,
)
@@ -51,6 +48,12 @@
from pandas.core.internals import concatenate_managers
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterable,
+ Mapping,
+ )
+
from pandas._typing import (
Axis,
AxisInt,
diff --git a/pandas/core/reshape/encoding.py b/pandas/core/reshape/encoding.py
index 58209c357b65d..8c464c2229515 100644
--- a/pandas/core/reshape/encoding.py
+++ b/pandas/core/reshape/encoding.py
@@ -1,12 +1,12 @@
from __future__ import annotations
from collections import defaultdict
-import itertools
-from typing import (
- TYPE_CHECKING,
+from collections.abc import (
Hashable,
Iterable,
)
+import itertools
+from typing import TYPE_CHECKING
import numpy as np
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index 8e61754002cdb..cd0dc313a7fb3 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -1,10 +1,7 @@
from __future__ import annotations
import re
-from typing import (
- TYPE_CHECKING,
- Hashable,
-)
+from typing import TYPE_CHECKING
import numpy as np
@@ -27,6 +24,8 @@
from pandas.core.tools.numeric import to_numeric
if TYPE_CHECKING:
+ from collections.abc import Hashable
+
from pandas._typing import AnyArrayLike
from pandas import DataFrame
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index cfaa5a1fdad64..a4ef3013af249 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -3,14 +3,16 @@
"""
from __future__ import annotations
+from collections.abc import (
+ Hashable,
+ Sequence,
+)
import datetime
from functools import partial
import string
from typing import (
TYPE_CHECKING,
- Hashable,
Literal,
- Sequence,
cast,
final,
)
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 2852ca8cf576a..099bfde7af1d3 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -1,10 +1,12 @@
from __future__ import annotations
+from collections.abc import (
+ Hashable,
+ Sequence,
+)
from typing import (
TYPE_CHECKING,
Callable,
- Hashable,
- Sequence,
cast,
)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 2fc926d7e43d1..5da054a956603 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3,6 +3,12 @@
"""
from __future__ import annotations
+from collections.abc import (
+ Hashable,
+ Iterable,
+ Mapping,
+ Sequence,
+)
import operator
import sys
from textwrap import dedent
@@ -11,11 +17,7 @@
TYPE_CHECKING,
Any,
Callable,
- Hashable,
- Iterable,
Literal,
- Mapping,
- Sequence,
Union,
cast,
overload,
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 0dc230f760437..bc6e29c3c7fbf 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -6,9 +6,6 @@
TYPE_CHECKING,
Callable,
DefaultDict,
- Hashable,
- Iterable,
- Sequence,
cast,
)
@@ -34,6 +31,12 @@
from pandas.core.construction import extract_array
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterable,
+ Sequence,
+ )
+
from pandas._typing import (
ArrayLike,
AxisInt,
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 24fe7e6bfc0c1..abd0dceb6e35d 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -6,7 +6,6 @@
from typing import (
TYPE_CHECKING,
Callable,
- Hashable,
Literal,
cast,
)
@@ -49,6 +48,8 @@
from pandas.core.construction import extract_array
if TYPE_CHECKING:
+ from collections.abc import Hashable
+
from pandas import (
DataFrame,
Index,
diff --git a/pandas/core/strings/base.py b/pandas/core/strings/base.py
index 44b311ad96387..96b0352666b41 100644
--- a/pandas/core/strings/base.py
+++ b/pandas/core/strings/base.py
@@ -5,12 +5,12 @@
TYPE_CHECKING,
Callable,
Literal,
- Sequence,
)
import numpy as np
if TYPE_CHECKING:
+ from collections.abc import Sequence
import re
from pandas._typing import Scalar
diff --git a/pandas/core/strings/object_array.py b/pandas/core/strings/object_array.py
index 87cc6e71b8672..6993ae3235943 100644
--- a/pandas/core/strings/object_array.py
+++ b/pandas/core/strings/object_array.py
@@ -2,13 +2,11 @@
import functools
import re
-import sys
import textwrap
from typing import (
TYPE_CHECKING,
Callable,
Literal,
- Sequence,
cast,
)
import unicodedata
@@ -24,6 +22,8 @@
from pandas.core.strings.base import BaseStringArrayMethods
if TYPE_CHECKING:
+ from collections.abc import Sequence
+
from pandas._typing import (
NpDtype,
Scalar,
@@ -469,14 +469,7 @@ def removeprefix(text: str) -> str:
return self._str_map(removeprefix)
def _str_removesuffix(self, suffix: str) -> Series:
- if sys.version_info < (3, 9):
- # NOTE pyupgrade will remove this when we run it with --py39-plus
- # so don't remove the unnecessary `else` statement below
- from pandas.util._str_methods import removesuffix
-
- return self._str_map(functools.partial(removesuffix, suffix=suffix))
- else:
- return self._str_map(lambda x: x.removesuffix(suffix))
+ return self._str_map(lambda x: x.removesuffix(suffix))
def _str_extract(self, pat: str, flags: int = 0, expand: bool = True):
regex = re.compile(pat, flags=flags)
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 13a434812db3b..6cde744d5704f 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -7,9 +7,6 @@
from typing import (
TYPE_CHECKING,
Callable,
- Hashable,
- List,
- Tuple,
TypedDict,
Union,
cast,
@@ -82,6 +79,8 @@
from pandas.core.indexes.datetimes import DatetimeIndex
if TYPE_CHECKING:
+ from collections.abc import Hashable
+
from pandas._libs.tslibs.nattype import NaTType
from pandas._libs.tslibs.timedeltas import UnitChoices
@@ -93,13 +92,13 @@
# ---------------------------------------------------------------------
# types used in annotations
-ArrayConvertible = Union[List, Tuple, AnyArrayLike]
+ArrayConvertible = Union[list, tuple, AnyArrayLike]
Scalar = Union[float, str]
DatetimeScalar = Union[Scalar, datetime]
DatetimeScalarOrArrayConvertible = Union[DatetimeScalar, ArrayConvertible]
-DatetimeDictArg = Union[List[Scalar], Tuple[Scalar, ...], AnyArrayLike]
+DatetimeDictArg = Union[list[Scalar], tuple[Scalar, ...], AnyArrayLike]
class YearMonthDayDict(TypedDict, total=True):
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index 5fd962da0ec48..676490693b121 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -4,12 +4,7 @@
from __future__ import annotations
import itertools
-from typing import (
- TYPE_CHECKING,
- Hashable,
- Iterable,
- Iterator,
-)
+from typing import TYPE_CHECKING
import numpy as np
@@ -26,6 +21,12 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterable,
+ Iterator,
+ )
+
from pandas._typing import (
ArrayLike,
npt,
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 5fd9930da4463..d26c405a34e87 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -13,9 +13,6 @@
TYPE_CHECKING,
Any,
Callable,
- Hashable,
- Iterator,
- Sized,
cast,
)
@@ -93,6 +90,12 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterator,
+ Sized,
+ )
+
from pandas._typing import (
ArrayLike,
Axis,
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 6199491be71a5..0d0191b22c023 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -7,6 +7,11 @@
)
import codecs
from collections import defaultdict
+from collections.abc import (
+ Hashable,
+ Mapping,
+ Sequence,
+)
import dataclasses
import functools
import gzip
@@ -29,10 +34,7 @@
AnyStr,
DefaultDict,
Generic,
- Hashable,
Literal,
- Mapping,
- Sequence,
TypeVar,
cast,
overload,
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 8383449ff21f1..10eea5e139387 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -1,6 +1,12 @@
from __future__ import annotations
import abc
+from collections.abc import (
+ Hashable,
+ Iterable,
+ Mapping,
+ Sequence,
+)
import datetime
from functools import partial
from io import BytesIO
@@ -11,12 +17,7 @@
TYPE_CHECKING,
Any,
Callable,
- Hashable,
- Iterable,
- List,
Literal,
- Mapping,
- Sequence,
Union,
cast,
overload,
@@ -763,7 +764,7 @@ def parse(
sheets = [sheet_name]
# handle same-type duplicates.
- sheets = cast(Union[List[int], List[str]], list(dict.fromkeys(sheets).keys()))
+ sheets = cast(Union[list[int], list[str]], list(dict.fromkeys(sheets).keys()))
output = {}
diff --git a/pandas/io/excel/_odswriter.py b/pandas/io/excel/_odswriter.py
index f5aaf08530591..0333794d714aa 100644
--- a/pandas/io/excel/_odswriter.py
+++ b/pandas/io/excel/_odswriter.py
@@ -6,7 +6,6 @@
TYPE_CHECKING,
Any,
DefaultDict,
- Tuple,
cast,
)
@@ -118,7 +117,7 @@ def _write_cells(
self.book.spreadsheet.addElement(wks)
if validate_freeze_panes(freeze_panes):
- freeze_panes = cast(Tuple[int, int], freeze_panes)
+ freeze_panes = cast(tuple[int, int], freeze_panes)
self._create_freeze_panes(sheet_name, freeze_panes)
for _ in range(startrow):
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
index 195d3a3a8b263..f94b82a0677ed 100644
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -4,7 +4,6 @@
from typing import (
TYPE_CHECKING,
Any,
- Tuple,
cast,
)
@@ -478,7 +477,7 @@ def _write_cells(
wks.title = sheet_name
if validate_freeze_panes(freeze_panes):
- freeze_panes = cast(Tuple[int, int], freeze_panes)
+ freeze_panes = cast(tuple[int, int], freeze_panes)
wks.freeze_panes = wks.cell(
row=freeze_panes[0] + 1, column=freeze_panes[1] + 1
)
diff --git a/pandas/io/excel/_util.py b/pandas/io/excel/_util.py
index 72c64c5ec8939..f7a1fcb8052e3 100644
--- a/pandas/io/excel/_util.py
+++ b/pandas/io/excel/_util.py
@@ -1,14 +1,16 @@
from __future__ import annotations
+from collections.abc import (
+ Hashable,
+ Iterable,
+ MutableMapping,
+ Sequence,
+)
from typing import (
TYPE_CHECKING,
Any,
Callable,
- Hashable,
- Iterable,
Literal,
- MutableMapping,
- Sequence,
TypeVar,
overload,
)
diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
index 28df235084cf5..633c6f0f43889 100644
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -1,11 +1,7 @@
""" feather-format compat """
from __future__ import annotations
-from typing import (
- TYPE_CHECKING,
- Hashable,
- Sequence,
-)
+from typing import TYPE_CHECKING
from pandas._libs import lib
from pandas.compat._optional import import_optional_dependency
@@ -19,6 +15,11 @@
from pandas.io.common import get_handle
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Sequence,
+ )
+
from pandas._typing import (
DtypeBackend,
FilePath,
diff --git a/pandas/io/formats/css.py b/pandas/io/formats/css.py
index 98a8697740266..ccce60c00a9e0 100644
--- a/pandas/io/formats/css.py
+++ b/pandas/io/formats/css.py
@@ -5,16 +5,21 @@
import re
from typing import (
+ TYPE_CHECKING,
Callable,
- Generator,
- Iterable,
- Iterator,
)
import warnings
from pandas.errors import CSSWarning
from pandas.util._exceptions import find_stack_level
+if TYPE_CHECKING:
+ from collections.abc import (
+ Generator,
+ Iterable,
+ Iterator,
+ )
+
def _side_expander(prop_fmt: str) -> Callable:
"""
diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py
index 3b759010d1abb..ce6f9344bb8b2 100644
--- a/pandas/io/formats/csvs.py
+++ b/pandas/io/formats/csvs.py
@@ -4,14 +4,16 @@
from __future__ import annotations
+from collections.abc import (
+ Hashable,
+ Iterator,
+ Sequence,
+)
import csv as csvlib
import os
from typing import (
TYPE_CHECKING,
Any,
- Hashable,
- Iterator,
- Sequence,
cast,
)
diff --git a/pandas/io/formats/excel.py b/pandas/io/formats/excel.py
index 0dbb6529cd384..f3eb4f78fa74e 100644
--- a/pandas/io/formats/excel.py
+++ b/pandas/io/formats/excel.py
@@ -3,6 +3,12 @@
"""
from __future__ import annotations
+from collections.abc import (
+ Hashable,
+ Iterable,
+ Mapping,
+ Sequence,
+)
import functools
import itertools
import re
@@ -10,10 +16,6 @@
TYPE_CHECKING,
Any,
Callable,
- Hashable,
- Iterable,
- Mapping,
- Sequence,
cast,
)
import warnings
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 8caa9d0cbd3a5..3fa99027540a9 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -4,6 +4,13 @@
"""
from __future__ import annotations
+from collections.abc import (
+ Generator,
+ Hashable,
+ Iterable,
+ Mapping,
+ Sequence,
+)
from contextlib import contextmanager
from csv import (
QUOTE_NONE,
@@ -21,12 +28,6 @@
Any,
Callable,
Final,
- Generator,
- Hashable,
- Iterable,
- List,
- Mapping,
- Sequence,
cast,
)
from unicodedata import east_asian_width
@@ -856,7 +857,7 @@ def _get_strcols_without_index(self) -> list[list[str]]:
if is_list_like(self.header):
# cast here since can't be bool if is_list_like
- self.header = cast(List[str], self.header)
+ self.header = cast(list[str], self.header)
if len(self.header) != len(self.columns):
raise ValueError(
f"Writing {len(self.columns)} cols "
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index 0ab02a81d4880..32a0cab1fbc41 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -5,11 +5,9 @@
from textwrap import dedent
from typing import (
+ TYPE_CHECKING,
Any,
Final,
- Hashable,
- Iterable,
- Mapping,
cast,
)
@@ -29,6 +27,13 @@
)
from pandas.io.formats.printing import pprint_thing
+if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterable,
+ Mapping,
+ )
+
class HTMLFormatter:
"""
diff --git a/pandas/io/formats/info.py b/pandas/io/formats/info.py
index 260620e145105..d20c2a62c61e2 100644
--- a/pandas/io/formats/info.py
+++ b/pandas/io/formats/info.py
@@ -6,13 +6,7 @@
)
import sys
from textwrap import dedent
-from typing import (
- TYPE_CHECKING,
- Iterable,
- Iterator,
- Mapping,
- Sequence,
-)
+from typing import TYPE_CHECKING
from pandas._config import get_option
@@ -20,6 +14,13 @@
from pandas.io.formats.printing import pprint_thing
if TYPE_CHECKING:
+ from collections.abc import (
+ Iterable,
+ Iterator,
+ Mapping,
+ Sequence,
+ )
+
from pandas._typing import (
Dtype,
WriteBuffer,
diff --git a/pandas/io/formats/printing.py b/pandas/io/formats/printing.py
index 828f3cf3735e9..7d9a3037c46f6 100644
--- a/pandas/io/formats/printing.py
+++ b/pandas/io/formats/printing.py
@@ -3,14 +3,15 @@
"""
from __future__ import annotations
+from collections.abc import (
+ Iterable,
+ Mapping,
+ Sequence,
+)
import sys
from typing import (
Any,
Callable,
- Dict,
- Iterable,
- Mapping,
- Sequence,
TypeVar,
Union,
)
@@ -498,7 +499,7 @@ def _justify(
return head_tuples, tail_tuples
-class PrettyDict(Dict[_KT, _VT]):
+class PrettyDict(dict[_KT, _VT]):
"""Dict extension to support abbreviated __repr__"""
def __repr__(self) -> str:
diff --git a/pandas/io/formats/string.py b/pandas/io/formats/string.py
index a4ec058467fb7..769f9dee1c31a 100644
--- a/pandas/io/formats/string.py
+++ b/pandas/io/formats/string.py
@@ -4,16 +4,15 @@
from __future__ import annotations
from shutil import get_terminal_size
-from typing import (
- TYPE_CHECKING,
- Iterable,
-)
+from typing import TYPE_CHECKING
import numpy as np
from pandas.io.formats.printing import pprint_thing
if TYPE_CHECKING:
+ from collections.abc import Iterable
+
from pandas.io.formats.format import DataFrameFormatter
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index c599bcbfd4170..426eda26588c7 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -11,9 +11,6 @@
TYPE_CHECKING,
Any,
Callable,
- Generator,
- Hashable,
- Sequence,
overload,
)
import warnings
@@ -60,6 +57,12 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Generator,
+ Hashable,
+ Sequence,
+ )
+
from matplotlib.colors import Colormap
from pandas._typing import (
diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py
index 7f2c237c8b296..07437fd231790 100644
--- a/pandas/io/formats/style_render.py
+++ b/pandas/io/formats/style_render.py
@@ -1,6 +1,7 @@
from __future__ import annotations
from collections import defaultdict
+from collections.abc import Sequence
from functools import partial
import re
from typing import (
@@ -8,11 +9,7 @@
Any,
Callable,
DefaultDict,
- Dict,
- List,
Optional,
- Sequence,
- Tuple,
TypedDict,
Union,
)
@@ -52,9 +49,9 @@
from markupsafe import escape as escape_html # markupsafe is jinja2 dependency
BaseFormatter = Union[str, Callable]
-ExtFormatter = Union[BaseFormatter, Dict[Any, Optional[BaseFormatter]]]
-CSSPair = Tuple[str, Union[str, float]]
-CSSList = List[CSSPair]
+ExtFormatter = Union[BaseFormatter, dict[Any, Optional[BaseFormatter]]]
+CSSPair = tuple[str, Union[str, float]]
+CSSList = list[CSSPair]
CSSProperties = Union[str, CSSList]
@@ -63,7 +60,7 @@ class CSSDict(TypedDict):
props: CSSProperties
-CSSStyles = List[CSSDict]
+CSSStyles = list[CSSDict]
Subset = Union[slice, Sequence, Index]
diff --git a/pandas/io/html.py b/pandas/io/html.py
index 606bdc5a326a2..6de0eb4d995e9 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -9,12 +9,10 @@
from collections import abc
import numbers
import re
+from re import Pattern
from typing import (
TYPE_CHECKING,
- Iterable,
Literal,
- Pattern,
- Sequence,
cast,
)
import warnings
@@ -49,6 +47,11 @@
from pandas.io.parsers import TextParser
if TYPE_CHECKING:
+ from collections.abc import (
+ Iterable,
+ Sequence,
+ )
+
from pandas._typing import (
BaseBuffer,
DtypeBackend,
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index eaeaedfdddfcb..7a2b327df447d 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -12,9 +12,7 @@
Any,
Callable,
Generic,
- Hashable,
Literal,
- Mapping,
TypeVar,
overload,
)
@@ -69,6 +67,10 @@
from pandas.io.parsers.readers import validate_integer
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Mapping,
+ )
from types import TracebackType
from pandas._typing import (
diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
index 459b4035627cc..b1e2210f9d894 100644
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -7,12 +7,10 @@
defaultdict,
)
import copy
-import sys
from typing import (
TYPE_CHECKING,
Any,
DefaultDict,
- Iterable,
)
import numpy as np
@@ -23,6 +21,8 @@
from pandas import DataFrame
if TYPE_CHECKING:
+ from collections.abc import Iterable
+
from pandas._typing import (
IgnoreRaise,
Scalar,
@@ -151,12 +151,7 @@ def _normalise_json(
new_key = f"{key_string}{separator}{key}"
if not key_string:
- if sys.version_info < (3, 9):
- from pandas.util._str_methods import removeprefix
-
- new_key = removeprefix(new_key, separator)
- else:
- new_key = new_key.removeprefix(separator)
+ new_key = new_key.removeprefix(separator)
_normalise_json(
data=value,
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 6cded2e21a43c..0a90deedf7ad2 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -10,12 +10,6 @@
TYPE_CHECKING,
Any,
Callable,
- Hashable,
- Iterable,
- List,
- Mapping,
- Sequence,
- Tuple,
cast,
final,
overload,
@@ -88,6 +82,13 @@
from pandas.io.common import is_potential_multi_index
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterable,
+ Mapping,
+ Sequence,
+ )
+
from pandas._typing import (
ArrayLike,
DtypeArg,
@@ -353,7 +354,7 @@ def _maybe_make_multi_index_columns(
) -> Sequence[Hashable] | MultiIndex:
# possibly create a column mi here
if is_potential_multi_index(columns):
- list_columns = cast(List[Tuple], columns)
+ list_columns = cast(list[tuple], columns)
return MultiIndex.from_tuples(list_columns, names=col_names)
return columns
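The hunks above all apply one recurring pattern: abstract collection types (`Hashable`, `Sequence`, `Mapping`, …) are moved from `typing` to `collections.abc`, and when they are only used in annotations the import is placed under `if TYPE_CHECKING:`, which is safe because `from __future__ import annotations` makes all annotations lazy. A minimal sketch of that pattern (`first_label` is a hypothetical helper, not part of pandas):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Imported only for the type checker; not executed at runtime.
    from collections.abc import (
        Hashable,
        Sequence,
    )


def first_label(columns: Sequence[Hashable]) -> Hashable:
    # The annotation references Sequence/Hashable as strings, so no
    # runtime import of collections.abc is needed.
    return columns[0]
```

The runtime cost of the import disappears while static checkers (mypy, pyright) still resolve the names.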
diff --git a/pandas/io/parsers/c_parser_wrapper.py b/pandas/io/parsers/c_parser_wrapper.py
index cb3629ed0af4e..0cd788c5e5739 100644
--- a/pandas/io/parsers/c_parser_wrapper.py
+++ b/pandas/io/parsers/c_parser_wrapper.py
@@ -1,12 +1,7 @@
from __future__ import annotations
from collections import defaultdict
-from typing import (
- TYPE_CHECKING,
- Hashable,
- Mapping,
- Sequence,
-)
+from typing import TYPE_CHECKING
import warnings
import numpy as np
@@ -39,6 +34,12 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Mapping,
+ Sequence,
+ )
+
from pandas._typing import (
ArrayLike,
DtypeArg,
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index 36d5ef7111685..f2c9be66c0905 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -4,6 +4,12 @@
abc,
defaultdict,
)
+from collections.abc import (
+ Hashable,
+ Iterator,
+ Mapping,
+ Sequence,
+)
import csv
from io import StringIO
import re
@@ -12,12 +18,7 @@
IO,
TYPE_CHECKING,
DefaultDict,
- Hashable,
- Iterator,
- List,
Literal,
- Mapping,
- Sequence,
cast,
)
@@ -207,7 +208,7 @@ class MyDialect(csv.Dialect):
self.pos += 1
line = f.readline()
lines = self._check_comments([[line]])[0]
- lines_str = cast(List[str], lines)
+ lines_str = cast(list[str], lines)
# since `line` was a string, lines will be a list containing
# only a single string
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index 0e4f85bfe3d63..e3c4fa3bbab96 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -14,11 +14,8 @@
TYPE_CHECKING,
Any,
Callable,
- Hashable,
Literal,
- Mapping,
NamedTuple,
- Sequence,
TypedDict,
overload,
)
@@ -66,6 +63,11 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Mapping,
+ Sequence,
+ )
from types import TracebackType
from pandas._typing import (
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index a67ff5274e0f7..e50a1f6e56d51 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -19,10 +19,7 @@
Any,
Callable,
Final,
- Hashable,
- Iterator,
Literal,
- Sequence,
cast,
overload,
)
@@ -105,6 +102,11 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterator,
+ Sequence,
+ )
from types import TracebackType
from tables import (
diff --git a/pandas/io/sas/sasreader.py b/pandas/io/sas/sasreader.py
index 2a395f790a5b5..4b31c4b3828ce 100644
--- a/pandas/io/sas/sasreader.py
+++ b/pandas/io/sas/sasreader.py
@@ -5,7 +5,6 @@
from typing import (
TYPE_CHECKING,
- Hashable,
Protocol,
overload,
)
@@ -17,6 +16,7 @@
from pandas.io.common import stringify_path
if TYPE_CHECKING:
+ from collections.abc import Hashable
from types import TracebackType
from pandas._typing import (
diff --git a/pandas/io/spss.py b/pandas/io/spss.py
index 876eb83890836..980a3e0aabe60 100644
--- a/pandas/io/spss.py
+++ b/pandas/io/spss.py
@@ -1,9 +1,6 @@
from __future__ import annotations
-from typing import (
- TYPE_CHECKING,
- Sequence,
-)
+from typing import TYPE_CHECKING
from pandas._libs import lib
from pandas.compat._optional import import_optional_dependency
@@ -14,6 +11,7 @@
from pandas.io.common import stringify_path
if TYPE_CHECKING:
+ from collections.abc import Sequence
from pathlib import Path
from pandas._typing import DtypeBackend
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 719479754340b..4018a70fd0450 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -24,9 +24,7 @@
TYPE_CHECKING,
Any,
Callable,
- Iterator,
Literal,
- Mapping,
cast,
overload,
)
@@ -62,6 +60,11 @@
from pandas.core.tools.datetimes import to_datetime
if TYPE_CHECKING:
+ from collections.abc import (
+ Iterator,
+ Mapping,
+ )
+
from sqlalchemy import Table
from sqlalchemy.sql.expression import (
Select,
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index d62830ffe3ea1..88d24fadd327b 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -27,8 +27,6 @@
AnyStr,
Callable,
Final,
- Hashable,
- Sequence,
cast,
)
import warnings
@@ -75,6 +73,10 @@
from pandas.io.common import get_handle
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Sequence,
+ )
from types import TracebackType
from typing import Literal
diff --git a/pandas/io/xml.py b/pandas/io/xml.py
index 62bbb410dacc1..b950a56633ae1 100644
--- a/pandas/io/xml.py
+++ b/pandas/io/xml.py
@@ -10,7 +10,6 @@
TYPE_CHECKING,
Any,
Callable,
- Sequence,
)
from pandas._libs import lib
@@ -37,6 +36,7 @@
from pandas.io.parsers import TextParser
if TYPE_CHECKING:
+ from collections.abc import Sequence
from xml.etree.ElementTree import Element
from lxml import etree
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index 24b8816109677..fcea410971c91 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -4,9 +4,7 @@
from typing import (
TYPE_CHECKING,
Callable,
- Hashable,
Literal,
- Sequence,
)
from pandas._config import get_option
@@ -28,6 +26,10 @@
from pandas.core.base import PandasObject
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Sequence,
+ )
import types
from matplotlib.axes import Axes
diff --git a/pandas/plotting/_matplotlib/boxplot.py b/pandas/plotting/_matplotlib/boxplot.py
index 93e8146c947b2..83cb8a6ab67dd 100644
--- a/pandas/plotting/_matplotlib/boxplot.py
+++ b/pandas/plotting/_matplotlib/boxplot.py
@@ -2,7 +2,6 @@
from typing import (
TYPE_CHECKING,
- Collection,
Literal,
NamedTuple,
)
@@ -33,6 +32,8 @@
)
if TYPE_CHECKING:
+ from collections.abc import Collection
+
from matplotlib.axes import Axes
from matplotlib.lines import Line2D
diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py
index 9a4806ae51920..cd7823ba15e44 100644
--- a/pandas/plotting/_matplotlib/converter.py
+++ b/pandas/plotting/_matplotlib/converter.py
@@ -12,7 +12,6 @@
TYPE_CHECKING,
Any,
Final,
- Generator,
cast,
)
@@ -57,6 +56,8 @@
import pandas.core.tools.datetimes as tools
if TYPE_CHECKING:
+ from collections.abc import Generator
+
from pandas._libs.tslibs.offsets import BaseOffset
# constants
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index f11d663f39cea..b2a2e1c59a175 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -4,13 +4,15 @@
ABC,
abstractmethod,
)
-from typing import (
- TYPE_CHECKING,
+from collections.abc import (
Hashable,
Iterable,
- Literal,
Sequence,
)
+from typing import (
+ TYPE_CHECKING,
+ Literal,
+)
import warnings
import matplotlib as mpl
diff --git a/pandas/plotting/_matplotlib/groupby.py b/pandas/plotting/_matplotlib/groupby.py
index 94533d55d31df..00c3c4114d377 100644
--- a/pandas/plotting/_matplotlib/groupby.py
+++ b/pandas/plotting/_matplotlib/groupby.py
@@ -16,15 +16,12 @@
from pandas.plotting._matplotlib.misc import unpack_single_str_list
if TYPE_CHECKING:
- from pandas._typing import (
- Dict,
- IndexLabel,
- )
+ from pandas._typing import IndexLabel
def create_iter_data_given_by(
data: DataFrame, kind: str = "hist"
-) -> Dict[str, DataFrame | Series]:
+) -> dict[str, DataFrame | Series]:
"""
Create data for iteration given `by` is assigned or not, and it is only
used in both hist and boxplot.
diff --git a/pandas/plotting/_matplotlib/misc.py b/pandas/plotting/_matplotlib/misc.py
index 7db9acdc68d51..1f9212587e05e 100644
--- a/pandas/plotting/_matplotlib/misc.py
+++ b/pandas/plotting/_matplotlib/misc.py
@@ -1,10 +1,7 @@
from __future__ import annotations
import random
-from typing import (
- TYPE_CHECKING,
- Hashable,
-)
+from typing import TYPE_CHECKING
from matplotlib import patches
import matplotlib.lines as mlines
@@ -22,6 +19,8 @@
)
if TYPE_CHECKING:
+ from collections.abc import Hashable
+
from matplotlib.axes import Axes
from matplotlib.figure import Figure
diff --git a/pandas/plotting/_matplotlib/style.py b/pandas/plotting/_matplotlib/style.py
index 839da35a8ae83..a5f34e9434cb7 100644
--- a/pandas/plotting/_matplotlib/style.py
+++ b/pandas/plotting/_matplotlib/style.py
@@ -1,10 +1,12 @@
from __future__ import annotations
+from collections.abc import (
+ Collection,
+ Iterator,
+)
import itertools
from typing import (
TYPE_CHECKING,
- Collection,
- Iterator,
cast,
)
import warnings
diff --git a/pandas/plotting/_matplotlib/tools.py b/pandas/plotting/_matplotlib/tools.py
index 414a20cde62b6..8c0e401f991a6 100644
--- a/pandas/plotting/_matplotlib/tools.py
+++ b/pandas/plotting/_matplotlib/tools.py
@@ -2,11 +2,7 @@
from __future__ import annotations
from math import ceil
-from typing import (
- TYPE_CHECKING,
- Iterable,
- Sequence,
-)
+from typing import TYPE_CHECKING
import warnings
from matplotlib import ticker
@@ -23,6 +19,11 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Iterable,
+ Sequence,
+ )
+
from matplotlib.axes import Axes
from matplotlib.axis import Axis
from matplotlib.figure import Figure
diff --git a/pandas/plotting/_misc.py b/pandas/plotting/_misc.py
index f8f9a584f0563..4c12a7ad01537 100644
--- a/pandas/plotting/_misc.py
+++ b/pandas/plotting/_misc.py
@@ -4,13 +4,16 @@
from typing import (
TYPE_CHECKING,
Any,
- Generator,
- Mapping,
)
from pandas.plotting._core import _get_plot_backend
if TYPE_CHECKING:
+ from collections.abc import (
+ Generator,
+ Mapping,
+ )
+
from matplotlib.axes import Axes
from matplotlib.colors import Colormap
from matplotlib.figure import Figure
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index bbce40727c669..cadc3a46e0ba4 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -5,6 +5,7 @@
"""
import collections
from collections import namedtuple
+from collections.abc import Iterator
from datetime import (
date,
datetime,
@@ -20,7 +21,6 @@
import sys
from typing import (
Generic,
- Iterator,
TypeVar,
)
diff --git a/pandas/tests/extension/date/array.py b/pandas/tests/extension/date/array.py
index 20373e323e2de..8c7b17cd8e3b8 100644
--- a/pandas/tests/extension/date/array.py
+++ b/pandas/tests/extension/date/array.py
@@ -4,7 +4,6 @@
from typing import (
TYPE_CHECKING,
Any,
- Sequence,
cast,
)
@@ -19,6 +18,8 @@
from pandas.api.types import pandas_dtype
if TYPE_CHECKING:
+ from collections.abc import Sequence
+
from pandas._typing import (
Dtype,
PositionalIndexer,
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index 9ce60ae5d435c..8495ffbbbe70d 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -25,7 +25,6 @@
from typing import (
TYPE_CHECKING,
Any,
- Mapping,
)
import numpy as np
@@ -45,6 +44,8 @@
from pandas.core.indexers import unpack_tuple_and_ellipses
if TYPE_CHECKING:
+ from collections.abc import Mapping
+
from pandas._typing import type_t
diff --git a/pandas/tests/frame/constructors/test_from_records.py b/pandas/tests/frame/constructors/test_from_records.py
index 9f44e85789cbd..bd8f1ae5adf5b 100644
--- a/pandas/tests/frame/constructors/test_from_records.py
+++ b/pandas/tests/frame/constructors/test_from_records.py
@@ -1,6 +1,6 @@
+from collections.abc import Iterator
from datetime import datetime
from decimal import Decimal
-from typing import Iterator
import numpy as np
import pytest
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 3fbc6558b5c15..8a4d1624fcb30 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -5,6 +5,7 @@
defaultdict,
namedtuple,
)
+from collections.abc import Iterator
from dataclasses import make_dataclass
from datetime import (
date,
@@ -14,7 +15,6 @@
import functools
import random
import re
-from typing import Iterator
import warnings
import numpy as np
diff --git a/pandas/tests/io/json/test_readlines.py b/pandas/tests/io/json/test_readlines.py
index 54f4980b1e4e3..c2f915e33df8a 100644
--- a/pandas/tests/io/json/test_readlines.py
+++ b/pandas/tests/io/json/test_readlines.py
@@ -1,6 +1,6 @@
+from collections.abc import Iterator
from io import StringIO
from pathlib import Path
-from typing import Iterator
import numpy as np
import pytest
diff --git a/pandas/tests/io/parser/test_python_parser_only.py b/pandas/tests/io/parser/test_python_parser_only.py
index b22953fedd6af..959b988e208c1 100644
--- a/pandas/tests/io/parser/test_python_parser_only.py
+++ b/pandas/tests/io/parser/test_python_parser_only.py
@@ -12,7 +12,7 @@
StringIO,
TextIOWrapper,
)
-from typing import Iterator
+from typing import TYPE_CHECKING
import numpy as np
import pytest
@@ -29,6 +29,9 @@
)
import pandas._testing as tm
+if TYPE_CHECKING:
+ from collections.abc import Iterator
+
def test_default_separator(python_parser_only):
# see gh-17333
diff --git a/pandas/tests/io/pytables/common.py b/pandas/tests/io/pytables/common.py
index ef90d97a5a98c..62582b212eb38 100644
--- a/pandas/tests/io/pytables/common.py
+++ b/pandas/tests/io/pytables/common.py
@@ -1,7 +1,7 @@
+from collections.abc import Generator
from contextlib import contextmanager
import pathlib
import tempfile
-from typing import Generator
import pytest
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index 488ffd77739f3..e884d9bea26fe 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -1,3 +1,4 @@
+from collections.abc import Iterator
from functools import partial
from io import (
BytesIO,
@@ -7,7 +8,6 @@
from pathlib import Path
import re
import threading
-from typing import Iterator
from urllib.error import URLError
import numpy as np
diff --git a/pandas/tests/libs/test_hashtable.py b/pandas/tests/libs/test_hashtable.py
index ab4dd58e18ce8..41ea557924b25 100644
--- a/pandas/tests/libs/test_hashtable.py
+++ b/pandas/tests/libs/test_hashtable.py
@@ -1,8 +1,8 @@
+from collections.abc import Generator
from contextlib import contextmanager
import re
import struct
import tracemalloc
-from typing import Generator
import numpy as np
import pytest
diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index 9dce817c23169..e51dd06881c4f 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -4,10 +4,7 @@
from __future__ import annotations
-from typing import (
- TYPE_CHECKING,
- Sequence,
-)
+from typing import TYPE_CHECKING
import numpy as np
@@ -18,6 +15,8 @@
import pandas._testing as tm
if TYPE_CHECKING:
+ from collections.abc import Sequence
+
from matplotlib.axes import Axes
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index 3c6f75bdcfc46..86a3017753844 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -1,7 +1,6 @@
from datetime import datetime
from functools import partial
from io import StringIO
-from typing import List
import numpy as np
import pytest
@@ -1335,7 +1334,7 @@ def test_resample_consistency(unit):
tm.assert_series_equal(s10_2, rl)
-dates1: List[DatetimeNaTType] = [
+dates1: list[DatetimeNaTType] = [
datetime(2014, 10, 1),
datetime(2014, 9, 3),
datetime(2014, 11, 5),
@@ -1344,7 +1343,7 @@ def test_resample_consistency(unit):
datetime(2014, 7, 15),
]
-dates2: List[DatetimeNaTType] = (
+dates2: list[DatetimeNaTType] = (
dates1[:2] + [pd.NaT] + dates1[2:4] + [pd.NaT] + dates1[4:]
)
dates3 = [pd.NaT] + dates1 + [pd.NaT]
diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py
index dc14e6e74302e..6ab1407b9ff82 100644
--- a/pandas/tests/reshape/concat/test_concat.py
+++ b/pandas/tests/reshape/concat/test_concat.py
@@ -2,9 +2,9 @@
abc,
deque,
)
+from collections.abc import Iterator
from datetime import datetime
from decimal import Decimal
-from typing import Iterator
from warnings import (
catch_warnings,
simplefilter,
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index ceb283ca9e9e7..c7536273862c0 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -1,9 +1,9 @@
from collections import OrderedDict
+from collections.abc import Iterator
from datetime import (
datetime,
timedelta,
)
-from typing import Iterator
from dateutil.tz import tzoffset
import numpy as np
diff --git a/pandas/tests/test_register_accessor.py b/pandas/tests/test_register_accessor.py
index a76e4b09b7f4d..5b200711f4b36 100644
--- a/pandas/tests/test_register_accessor.py
+++ b/pandas/tests/test_register_accessor.py
@@ -1,5 +1,5 @@
+from collections.abc import Generator
import contextlib
-from typing import Generator
import pytest
diff --git a/pandas/tests/util/test_str_methods.py b/pandas/tests/util/test_str_methods.py
deleted file mode 100644
index c07730f589824..0000000000000
--- a/pandas/tests/util/test_str_methods.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import sys
-
-import pytest
-
-if sys.version_info < (3, 9):
- from pandas.util._str_methods import (
- removeprefix,
- removesuffix,
- )
-
- @pytest.mark.parametrize(
- "string, prefix, expected",
- (
- ("wildcat", "wild", "cat"),
- ("blackbird", "black", "bird"),
- ("housefly", "house", "fly"),
- ("ladybug", "lady", "bug"),
- ("rattlesnake", "rattle", "snake"),
- ("baboon", "badger", "baboon"),
- ("quetzal", "elk", "quetzal"),
- ),
- )
- def test_remove_prefix(string, prefix, expected):
- result = removeprefix(string, prefix)
- assert result == expected
-
- @pytest.mark.parametrize(
- "string, suffix, expected",
- (
- ("wildcat", "cat", "wild"),
- ("blackbird", "bird", "black"),
- ("housefly", "fly", "house"),
- ("ladybug", "bug", "lady"),
- ("rattlesnake", "snake", "rattle"),
- ("seahorse", "horse", "sea"),
- ("baboon", "badger", "baboon"),
- ("quetzal", "elk", "quetzal"),
- ),
- )
- def test_remove_suffix(string, suffix, expected):
- result = removesuffix(string, suffix)
- assert result == expected
-
-else:
- # NOTE: remove this file when pyupgrade --py39-plus removes
- # the above block
- pass
diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py
index cba7010a5bc39..4c2122c3fdff1 100644
--- a/pandas/util/_decorators.py
+++ b/pandas/util/_decorators.py
@@ -4,9 +4,9 @@
import inspect
from textwrap import dedent
from typing import (
+ TYPE_CHECKING,
Any,
Callable,
- Mapping,
cast,
)
import warnings
@@ -18,6 +18,9 @@
)
from pandas.util._exceptions import find_stack_level
+if TYPE_CHECKING:
+ from collections.abc import Mapping
+
def deprecate(
name: str,
diff --git a/pandas/util/_doctools.py b/pandas/util/_doctools.py
index 9e3ab80d1d40a..12619abf4baaf 100644
--- a/pandas/util/_doctools.py
+++ b/pandas/util/_doctools.py
@@ -1,11 +1,14 @@
from __future__ import annotations
-from typing import Iterable
+from typing import TYPE_CHECKING
import numpy as np
import pandas as pd
+if TYPE_CHECKING:
+ from collections.abc import Iterable
+
class TablePlotter:
"""
diff --git a/pandas/util/_exceptions.py b/pandas/util/_exceptions.py
index 1eefd06a133fb..573f76a63459b 100644
--- a/pandas/util/_exceptions.py
+++ b/pandas/util/_exceptions.py
@@ -4,9 +4,12 @@
import inspect
import os
import re
-from typing import Generator
+from typing import TYPE_CHECKING
import warnings
+if TYPE_CHECKING:
+ from collections.abc import Generator
+
@contextlib.contextmanager
def rewrite_exception(old_name: str, new_name: str) -> Generator[None, None, None]:
diff --git a/pandas/util/_str_methods.py b/pandas/util/_str_methods.py
deleted file mode 100644
index 8f7aef80bc108..0000000000000
--- a/pandas/util/_str_methods.py
+++ /dev/null
@@ -1,28 +0,0 @@
-"""
-Python3.9 introduces removesuffix and remove prefix.
-
-They're reimplemented here for use in Python3.8.
-
-NOTE: when pyupgrade --py39-plus removes nearly everything in this file,
-this file and the associated tests should be removed.
-"""
-from __future__ import annotations
-
-import sys
-
-if sys.version_info < (3, 9):
-
- def removesuffix(string: str, suffix: str) -> str:
- if string.endswith(suffix):
- return string[: -len(suffix)]
- return string
-
- def removeprefix(string: str, prefix: str) -> str:
- if string.startswith(prefix):
- return string[len(prefix) :]
- return string
-
-else:
- # NOTE: remove this file when pyupgrade --py39-plus removes
- # the above block
- pass
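The deleted `pandas/util/_str_methods.py` (and its test file above) backported `str.removeprefix`/`str.removesuffix` for Python 3.8. With 3.9 as the minimum supported version, the built-in string methods cover the same cases, as a quick sketch using the deleted tests' own data shows:

```python
# str.removeprefix / str.removesuffix are built in since Python 3.9,
# replacing the deleted backport in pandas.util._str_methods.
assert "wildcat".removeprefix("wild") == "cat"
assert "seahorse".removesuffix("horse") == "sea"
# A non-matching prefix/suffix leaves the string unchanged:
assert "baboon".removeprefix("badger") == "baboon"
```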
diff --git a/pandas/util/_validators.py b/pandas/util/_validators.py
index 517b88b0cd938..310d08e84433c 100644
--- a/pandas/util/_validators.py
+++ b/pandas/util/_validators.py
@@ -4,9 +4,11 @@
"""
from __future__ import annotations
-from typing import (
+from collections.abc import (
Iterable,
Sequence,
+)
+from typing import (
TypeVar,
overload,
)
diff --git a/pandas/util/version/__init__.py b/pandas/util/version/__init__.py
index 5d4937451c49c..c72598eb50410 100644
--- a/pandas/util/version/__init__.py
+++ b/pandas/util/version/__init__.py
@@ -9,11 +9,11 @@
from __future__ import annotations
import collections
+from collections.abc import Iterator
import itertools
import re
from typing import (
Callable,
- Iterator,
SupportsInt,
Tuple,
Union,
@@ -88,23 +88,23 @@ def __neg__(self: object) -> InfinityType:
InfiniteTypes = Union[InfinityType, NegativeInfinityType]
-PrePostDevType = Union[InfiniteTypes, Tuple[str, int]]
+PrePostDevType = Union[InfiniteTypes, tuple[str, int]]
SubLocalType = Union[InfiniteTypes, int, str]
LocalType = Union[
NegativeInfinityType,
- Tuple[
+ tuple[
Union[
SubLocalType,
- Tuple[SubLocalType, str],
- Tuple[NegativeInfinityType, SubLocalType],
+ tuple[SubLocalType, str],
+ tuple[NegativeInfinityType, SubLocalType],
],
...,
],
]
-CmpKey = Tuple[
- int, Tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType
+CmpKey = tuple[
+ int, tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType
]
-LegacyCmpKey = Tuple[int, Tuple[str, ...]]
+LegacyCmpKey = tuple[int, tuple[str, ...]]
VersionComparisonMethod = Callable[
[Union[CmpKey, LegacyCmpKey], Union[CmpKey, LegacyCmpKey]], bool
]
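The `Tuple` → `tuple` rewrites above follow PEP 585, under which builtin containers are subscriptable in annotations and type aliases on Python 3.9+. A hypothetical alias in the same style (not one of the actual pandas version aliases):

```python
from __future__ import annotations

# PEP 585: builtin generics replace typing.Tuple in type aliases,
# so no import from typing is needed for the container types.
CmpPair = tuple[int, tuple[int, ...]]


def compare_keys(a: CmpPair, b: CmpPair) -> bool:
    # Tuples compare lexicographically, element by element.
    return a < b
```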
diff --git a/scripts/check_for_inconsistent_pandas_namespace.py b/scripts/check_for_inconsistent_pandas_namespace.py
index 3c21821e794a9..52eca6f6d93ac 100644
--- a/scripts/check_for_inconsistent_pandas_namespace.py
+++ b/scripts/check_for_inconsistent_pandas_namespace.py
@@ -22,13 +22,14 @@
import argparse
import ast
+from collections.abc import (
+ MutableMapping,
+ Sequence,
+)
import sys
from typing import (
- MutableMapping,
NamedTuple,
Optional,
- Sequence,
- Set,
)
ERROR_MESSAGE = (
@@ -46,7 +47,7 @@ class OffsetWithNamespace(NamedTuple):
class Visitor(ast.NodeVisitor):
def __init__(self) -> None:
self.pandas_namespace: MutableMapping[OffsetWithNamespace, str] = {}
- self.imported_from_pandas: Set[str] = set()
+ self.imported_from_pandas: set[str] = set()
def visit_Attribute(self, node: ast.Attribute) -> None:
if isinstance(node.value, ast.Name) and node.value.id in {"pandas", "pd"}:
diff --git a/scripts/check_test_naming.py b/scripts/check_test_naming.py
index 158cf46f264c2..f9190643b3246 100644
--- a/scripts/check_test_naming.py
+++ b/scripts/check_test_naming.py
@@ -15,10 +15,13 @@ class or function definition. Though hopefully that shouldn't be necessary.
import os
from pathlib import Path
import sys
-from typing import (
- Iterator,
- Sequence,
-)
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from collections.abc import (
+ Iterator,
+ Sequence,
+ )
PRAGMA = "# not a test"
diff --git a/scripts/no_bool_in_generic.py b/scripts/no_bool_in_generic.py
index 1427974249702..e57ac30f7084b 100644
--- a/scripts/no_bool_in_generic.py
+++ b/scripts/no_bool_in_generic.py
@@ -15,7 +15,10 @@
import argparse
import ast
import collections
-from typing import Sequence
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from collections.abc import Sequence
def visit(tree: ast.Module) -> dict[int, list[int]]:
diff --git a/scripts/pandas_errors_documented.py b/scripts/pandas_errors_documented.py
index 116a63b33eaf0..b68da137717de 100644
--- a/scripts/pandas_errors_documented.py
+++ b/scripts/pandas_errors_documented.py
@@ -12,7 +12,10 @@
import ast
import pathlib
import sys
-from typing import Sequence
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from collections.abc import Sequence
API_PATH = pathlib.Path("doc/source/reference/testing.rst").resolve()
diff --git a/scripts/run_autotyping.py b/scripts/run_autotyping.py
index 4c0a3a9cf985f..8729085f0dd4b 100644
--- a/scripts/run_autotyping.py
+++ b/scripts/run_autotyping.py
@@ -9,7 +9,10 @@
import argparse
import subprocess
import sys
-from typing import Sequence
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from collections.abc import Sequence
def main(argv: Sequence[str] | None = None) -> None:
diff --git a/scripts/sort_whatsnew_note.py b/scripts/sort_whatsnew_note.py
index 531ea57244b23..428ffca83ea26 100644
--- a/scripts/sort_whatsnew_note.py
+++ b/scripts/sort_whatsnew_note.py
@@ -28,7 +28,11 @@
import argparse
import re
import sys
-from typing import Sequence
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from collections.abc import Sequence
+
# Check line starts with `-` and ends with e.g. `(:issue:`12345`)`,
# possibly with a trailing full stop.
diff --git a/scripts/use_io_common_urlopen.py b/scripts/use_io_common_urlopen.py
index 11d8378fce574..ade97f53cd827 100644
--- a/scripts/use_io_common_urlopen.py
+++ b/scripts/use_io_common_urlopen.py
@@ -14,7 +14,11 @@
import argparse
import ast
import sys
-from typing import Sequence
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from collections.abc import Sequence
+
ERROR_MESSAGE = (
"{path}:{lineno}:{col_offset}: "
diff --git a/scripts/use_pd_array_in_core.py b/scripts/use_pd_array_in_core.py
index 61ba070e52f1b..c9e14dece44e4 100644
--- a/scripts/use_pd_array_in_core.py
+++ b/scripts/use_pd_array_in_core.py
@@ -14,7 +14,11 @@
import argparse
import ast
import sys
-from typing import Sequence
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from collections.abc import Sequence
+
ERROR_MESSAGE = (
"{path}:{lineno}:{col_offset}: "
diff --git a/scripts/validate_exception_location.py b/scripts/validate_exception_location.py
index 5f77e4c78db82..fb0dc47252717 100644
--- a/scripts/validate_exception_location.py
+++ b/scripts/validate_exception_location.py
@@ -24,7 +24,10 @@
import ast
import pathlib
import sys
-from typing import Sequence
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from collections.abc import Sequence
API_PATH = pathlib.Path("doc/source/reference/testing.rst").resolve()
ERROR_MESSAGE = (
diff --git a/scripts/validate_rst_title_capitalization.py b/scripts/validate_rst_title_capitalization.py
index 0f4c11eb30b07..44318cd797163 100755
--- a/scripts/validate_rst_title_capitalization.py
+++ b/scripts/validate_rst_title_capitalization.py
@@ -16,7 +16,11 @@
import argparse
import re
import sys
-from typing import Iterable
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from collections.abc import Iterable
+
CAPITALIZATION_EXCEPTIONS = {
"pandas",
diff --git a/scripts/validate_unwanted_patterns.py b/scripts/validate_unwanted_patterns.py
index 466419bf5093e..8fed7513f5d4e 100755
--- a/scripts/validate_unwanted_patterns.py
+++ b/scripts/validate_unwanted_patterns.py
@@ -12,19 +12,16 @@
import argparse
import ast
+from collections.abc import Iterable
import sys
import token
import tokenize
from typing import (
IO,
Callable,
- Iterable,
- List,
- Set,
- Tuple,
)
-PRIVATE_IMPORTS_TO_IGNORE: Set[str] = {
+PRIVATE_IMPORTS_TO_IGNORE: set[str] = {
"_extension_array_shared_docs",
"_index_shared_docs",
"_interval_shared_docs",
@@ -88,7 +85,7 @@ def _get_literal_string_prefix_len(token_string: str) -> int:
return 0
-def bare_pytest_raises(file_obj: IO[str]) -> Iterable[Tuple[int, str]]:
+def bare_pytest_raises(file_obj: IO[str]) -> Iterable[tuple[int, str]]:
"""
Test Case for bare pytest raises.
@@ -150,7 +147,7 @@ def bare_pytest_raises(file_obj: IO[str]) -> Iterable[Tuple[int, str]]:
PRIVATE_FUNCTIONS_ALLOWED = {"sys._getframe"} # no known alternative
-def private_function_across_module(file_obj: IO[str]) -> Iterable[Tuple[int, str]]:
+def private_function_across_module(file_obj: IO[str]) -> Iterable[tuple[int, str]]:
"""
Checking that a private function is not used across modules.
Parameters
@@ -167,7 +164,7 @@ def private_function_across_module(file_obj: IO[str]) -> Iterable[Tuple[int, str
contents = file_obj.read()
tree = ast.parse(contents)
- imported_modules: Set[str] = set()
+ imported_modules: set[str] = set()
for node in ast.walk(tree):
if isinstance(node, (ast.Import, ast.ImportFrom)):
@@ -199,7 +196,7 @@ def private_function_across_module(file_obj: IO[str]) -> Iterable[Tuple[int, str
yield (node.lineno, f"Private function '{module_name}.{function_name}'")
-def private_import_across_module(file_obj: IO[str]) -> Iterable[Tuple[int, str]]:
+def private_import_across_module(file_obj: IO[str]) -> Iterable[tuple[int, str]]:
"""
Checking that a private function is not imported across modules.
Parameters
@@ -231,7 +228,7 @@ def private_import_across_module(file_obj: IO[str]) -> Iterable[Tuple[int, str]]
def strings_with_wrong_placed_whitespace(
file_obj: IO[str],
-) -> Iterable[Tuple[int, str]]:
+) -> Iterable[tuple[int, str]]:
"""
Test case for leading spaces in concatenated strings.
@@ -328,7 +325,7 @@ def has_wrong_whitespace(first_line: str, second_line: str) -> bool:
return True
return False
- tokens: List = list(tokenize.generate_tokens(file_obj.readline))
+ tokens: list = list(tokenize.generate_tokens(file_obj.readline))
for first_token, second_token, third_token in zip(tokens, tokens[1:], tokens[2:]):
# Checking if we are in a block of concatenated strings
@@ -354,7 +351,7 @@ def has_wrong_whitespace(first_line: str, second_line: str) -> bool:
)
-def nodefault_used_not_only_for_typing(file_obj: IO[str]) -> Iterable[Tuple[int, str]]:
+def nodefault_used_not_only_for_typing(file_obj: IO[str]) -> Iterable[tuple[int, str]]:
"""Test case where pandas._libs.lib.NoDefault is not used for typing.
Parameters
@@ -372,7 +369,7 @@ def nodefault_used_not_only_for_typing(file_obj: IO[str]) -> Iterable[Tuple[int,
contents = file_obj.read()
tree = ast.parse(contents)
in_annotation = False
- nodes: List[tuple[bool, ast.AST]] = [(in_annotation, tree)]
+ nodes: list[tuple[bool, ast.AST]] = [(in_annotation, tree)]
while nodes:
in_annotation, node = nodes.pop()
@@ -401,7 +398,7 @@ def nodefault_used_not_only_for_typing(file_obj: IO[str]) -> Iterable[Tuple[int,
def main(
- function: Callable[[IO[str]], Iterable[Tuple[int, str]]],
+ function: Callable[[IO[str]], Iterable[tuple[int, str]]],
source_path: str,
output_format: str,
) -> bool:
@@ -447,7 +444,7 @@ def main(
if __name__ == "__main__":
- available_validation_types: List[str] = [
+ available_validation_types: list[str] = [
"bare_pytest_raises",
"private_function_across_module",
"private_import_across_module",
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/54044 | 2023-07-07T18:57:36Z | 2023-07-10T20:09:54Z | 2023-07-10T20:09:54Z | 2023-07-10T20:11:31Z |
⬆️ UPGRADE: Autoupdate pre-commit config | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index f8e7dfe71115d..53962a9426cc2 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -21,7 +21,7 @@ repos:
hooks:
- id: black
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.0.270
+ rev: v0.0.275
hooks:
- id: ruff
args: [--exit-non-zero-on-fix]
@@ -33,7 +33,7 @@ repos:
pass_filenames: true
require_serial: false
- repo: https://github.com/codespell-project/codespell
- rev: v2.2.4
+ rev: v2.2.5
hooks:
- id: codespell
types_or: [python, rst, markdown, cython, c]
@@ -91,7 +91,7 @@ repos:
hooks:
- id: isort
- repo: https://github.com/asottile/pyupgrade
- rev: v3.4.0
+ rev: v3.7.0
hooks:
- id: pyupgrade
args: [--py39-plus]
diff --git a/pandas/_libs/algos.pyi b/pandas/_libs/algos.pyi
index cbbe418c8ab48..caf5425dfc7b4 100644
--- a/pandas/_libs/algos.pyi
+++ b/pandas/_libs/algos.pyi
@@ -5,10 +5,6 @@ import numpy as np
from pandas._typing import npt
class Infinity:
- """
- Provide a positive Infinity comparison method for ranking.
- """
-
def __eq__(self, other) -> bool: ...
def __ne__(self, other) -> bool: ...
def __lt__(self, other) -> bool: ...
@@ -17,10 +13,6 @@ class Infinity:
def __ge__(self, other) -> bool: ...
class NegInfinity:
- """
- Provide a negative Infinity comparison method for ranking.
- """
-
def __eq__(self, other) -> bool: ...
def __ne__(self, other) -> bool: ...
def __lt__(self, other) -> bool: ...
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 9d0e2145567bf..5783842e9ddef 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -1322,7 +1322,7 @@ def searchsorted(
arr: ArrayLike,
value: NumpyValueArrayLike | ExtensionArray,
side: Literal["left", "right"] = "left",
- sorter: NumpySorter = None,
+ sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
"""
Find indices where elements should be inserted to maintain order.
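The `sorter: NumpySorter = None` → `sorter: NumpySorter | None = None` changes throughout these diffs drop the implicit-Optional style that PEP 484 deprecated and that strict type checkers (e.g. mypy with `no_implicit_optional`) reject. The idea in miniature — the function and parameter names here are hypothetical stand-ins, not the pandas signatures:

```python
from __future__ import annotations

import bisect


def find_insertion_point(
    values: list[int],
    target: int,
    sorter: list[int] | None = None,  # explicit "| None", not implicit Optional
) -> int:
    # A tiny searchsorted-style helper: "sorter" optionally supplies
    # the index order that sorts "values" before the binary search.
    if sorter is not None:
        values = [values[i] for i in sorter]
    return bisect.bisect_left(values, target)


assert find_insertion_point([1, 3, 5], 4) == 2
assert find_insertion_point([5, 1, 3], 4, sorter=[1, 2, 0]) == 2
```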
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index f987bab7a2b87..190b74a675711 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -233,7 +233,7 @@ def searchsorted(
self,
value: NumpyValueArrayLike | ExtensionArray,
side: Literal["left", "right"] = "left",
- sorter: NumpySorter = None,
+ sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
npvalue = self._validate_setitem_value(value)
return self._ndarray.searchsorted(npvalue, side=side, sorter=sorter)
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index e2630fd61072b..6ad07eb04753a 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -1056,7 +1056,7 @@ def searchsorted(
self,
value: NumpyValueArrayLike | ExtensionArray,
side: Literal["left", "right"] = "left",
- sorter: NumpySorter = None,
+ sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
if self._hasna:
raise ValueError(
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index c82f2ac018f93..5695b4117f372 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -926,7 +926,7 @@ def searchsorted(
self,
value: NumpyValueArrayLike | ExtensionArray,
side: Literal["left", "right"] = "left",
- sorter: NumpySorter = None,
+ sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
"""
Find indices where elements should be inserted to maintain order.
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index d24aa95cdd6c5..53afba19222da 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -922,7 +922,7 @@ def searchsorted(
self,
value: NumpyValueArrayLike | ExtensionArray,
side: Literal["left", "right"] = "left",
- sorter: NumpySorter = None,
+ sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
if self._hasna:
raise ValueError(
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 0942d5eab679e..a8bbe607caba5 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -781,7 +781,7 @@ def searchsorted(
self,
value: NumpyValueArrayLike | ExtensionArray,
side: Literal["left", "right"] = "left",
- sorter: NumpySorter = None,
+ sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
npvalue = self._validate_setitem_value(value).view("M8[ns]")
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 6b48a9181a5a8..796758ead0c62 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -1134,7 +1134,7 @@ def searchsorted(
self,
v: ArrayLike | object,
side: Literal["left", "right"] = "left",
- sorter: NumpySorter = None,
+ sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
msg = "searchsorted requires high memory usage."
warnings.warn(msg, PerformanceWarning, stacklevel=find_stack_level())
diff --git a/pandas/core/arrays/string_.py b/pandas/core/arrays/string_.py
index a6579879cab96..2640cbd7f6ba1 100644
--- a/pandas/core/arrays/string_.py
+++ b/pandas/core/arrays/string_.py
@@ -513,7 +513,7 @@ def searchsorted(
self,
value: NumpyValueArrayLike | ExtensionArray,
side: Literal["left", "right"] = "left",
- sorter: NumpySorter = None,
+ sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
if self._hasna:
raise ValueError(
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 3629a6f9526af..057b381bbdb58 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1307,7 +1307,7 @@ def searchsorted(
self,
value: NumpyValueArrayLike | ExtensionArray,
side: Literal["left", "right"] = "left",
- sorter: NumpySorter = None,
+ sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
if isinstance(value, ABCDataFrame):
msg = (
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 074cff5b6564a..04e2b00744156 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -219,7 +219,7 @@ def _from_fastpath(
@classmethod
def _from_categorical_dtype(
- cls, dtype: CategoricalDtype, categories=None, ordered: Ordered = None
+ cls, dtype: CategoricalDtype, categories=None, ordered: Ordered | None = None
) -> CategoricalDtype:
if categories is ordered is None:
return dtype
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c0f4dbb4aeb2d..5084c7ed6ba97 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2563,7 +2563,7 @@ def to_stata(
version: int | None = 114,
convert_strl: Sequence[Hashable] | None = None,
compression: CompressionOptions = "infer",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
value_labels: dict[Hashable, dict[float, str]] | None = None,
) -> None:
"""
@@ -2758,7 +2758,7 @@ def to_markdown(
buf: FilePath | WriteBuffer[str] | None = None,
mode: str = "wt",
index: bool = True,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
**kwargs,
) -> str | None:
if "showindex" in kwargs:
@@ -2810,7 +2810,7 @@ def to_parquet(
compression: str | None = "snappy",
index: bool | None = None,
partition_cols: list[str] | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
**kwargs,
) -> bytes | None:
"""
@@ -3179,7 +3179,7 @@ def to_xml(
parser: XMLParsers | None = "lxml",
stylesheet: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
compression: CompressionOptions = "infer",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> str | None:
"""
Render a DataFrame to an XML document.
@@ -5080,12 +5080,12 @@ def drop(
def drop(
self,
- labels: IndexLabel = None,
+ labels: IndexLabel | None = None,
*,
axis: Axis = 0,
- index: IndexLabel = None,
- columns: IndexLabel = None,
- level: Level = None,
+ index: IndexLabel | None = None,
+ columns: IndexLabel | None = None,
+ level: Level | None = None,
inplace: bool = False,
errors: IgnoreRaise = "raise",
) -> DataFrame | None:
@@ -5290,7 +5290,7 @@ def rename(
axis: Axis | None = None,
copy: bool | None = None,
inplace: bool = False,
- level: Level = None,
+ level: Level | None = None,
errors: IgnoreRaise = "ignore",
) -> DataFrame | None:
"""
@@ -5805,7 +5805,7 @@ def reset_index(
col_level: Hashable = ...,
col_fill: Hashable = ...,
allow_duplicates: bool | lib.NoDefault = ...,
- names: Hashable | Sequence[Hashable] = None,
+ names: Hashable | Sequence[Hashable] | None = None,
) -> DataFrame:
...
@@ -5819,7 +5819,7 @@ def reset_index(
col_level: Hashable = ...,
col_fill: Hashable = ...,
allow_duplicates: bool | lib.NoDefault = ...,
- names: Hashable | Sequence[Hashable] = None,
+ names: Hashable | Sequence[Hashable] | None = None,
) -> None:
...
@@ -5833,20 +5833,20 @@ def reset_index(
col_level: Hashable = ...,
col_fill: Hashable = ...,
allow_duplicates: bool | lib.NoDefault = ...,
- names: Hashable | Sequence[Hashable] = None,
+ names: Hashable | Sequence[Hashable] | None = None,
) -> DataFrame | None:
...
def reset_index(
self,
- level: IndexLabel = None,
+ level: IndexLabel | None = None,
*,
drop: bool = False,
inplace: bool = False,
col_level: Hashable = 0,
col_fill: Hashable = "",
allow_duplicates: bool | lib.NoDefault = lib.no_default,
- names: Hashable | Sequence[Hashable] = None,
+ names: Hashable | Sequence[Hashable] | None = None,
) -> DataFrame | None:
"""
Reset the index, or a level of it.
@@ -6142,7 +6142,7 @@ def dropna(
axis: Axis = 0,
how: AnyAll | lib.NoDefault = lib.no_default,
thresh: int | lib.NoDefault = lib.no_default,
- subset: IndexLabel = None,
+ subset: IndexLabel | None = None,
inplace: bool = False,
ignore_index: bool = False,
) -> DataFrame | None:
@@ -6579,7 +6579,7 @@ def sort_values(
kind: SortKind = "quicksort",
na_position: str = "last",
ignore_index: bool = False,
- key: ValueKeyFunc = None,
+ key: ValueKeyFunc | None = None,
) -> DataFrame | None:
"""
Sort by the values along either axis.
@@ -6858,14 +6858,14 @@ def sort_index(
self,
*,
axis: Axis = 0,
- level: IndexLabel = None,
+ level: IndexLabel | None = None,
ascending: bool | Sequence[bool] = True,
inplace: bool = False,
kind: SortKind = "quicksort",
na_position: NaPosition = "last",
sort_remaining: bool = True,
ignore_index: bool = False,
- key: IndexKeyFunc = None,
+ key: IndexKeyFunc | None = None,
) -> DataFrame | None:
"""
Sort object by labels (along an axis).
@@ -7633,7 +7633,11 @@ def _should_reindex_frame_op(self, right, op, axis: int, fill_value, level) -> b
return False
def _align_for_op(
- self, other, axis: AxisInt, flex: bool | None = False, level: Level = None
+ self,
+ other,
+ axis: AxisInt,
+ flex: bool | None = False,
+ level: Level | None = None,
):
"""
Convert rhs to meet lhs dims if input is list, tuple or np.ndarray.
@@ -9387,7 +9391,7 @@ def melt(
value_vars=None,
var_name=None,
value_name: Hashable = "value",
- col_level: Level = None,
+ col_level: Level | None = None,
ignore_index: bool = True,
) -> DataFrame:
return melt(
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index ec6788a3dc8c5..b3efe02d201d5 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2191,14 +2191,14 @@ def to_excel(
columns: Sequence[Hashable] | None = None,
header: Sequence[Hashable] | bool_t = True,
index: bool_t = True,
- index_label: IndexLabel = None,
+ index_label: IndexLabel | None = None,
startrow: int = 0,
startcol: int = 0,
engine: Literal["openpyxl", "xlsxwriter"] | None = None,
merge_cells: bool_t = True,
inf_rep: str = "inf",
freeze_panes: tuple[int, int] | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
engine_kwargs: dict[str, Any] | None = None,
) -> None:
"""
@@ -2358,7 +2358,7 @@ def to_json(
compression: CompressionOptions = "infer",
index: bool_t | None = None,
indent: int | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
mode: Literal["a", "w"] = "w",
) -> str | None:
"""
@@ -2787,7 +2787,7 @@ def to_sql(
schema: str | None = None,
if_exists: Literal["fail", "replace", "append"] = "fail",
index: bool_t = True,
- index_label: IndexLabel = None,
+ index_label: IndexLabel | None = None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
method: Literal["multi"] | Callable | None = None,
@@ -3010,7 +3010,7 @@ def to_pickle(
path: FilePath | WriteBuffer[bytes],
compression: CompressionOptions = "infer",
protocol: int = pickle.HIGHEST_PROTOCOL,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> None:
"""
Pickle (serialize) object to file.
@@ -3726,7 +3726,7 @@ def to_csv(
escapechar: str | None = None,
decimal: str = ".",
errors: OpenFileErrors = "strict",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> str | None:
r"""
Write object to a comma-separated values (csv) file.
@@ -4078,7 +4078,7 @@ def xs(
self,
key: IndexLabel,
axis: Axis = 0,
- level: IndexLabel = None,
+ level: IndexLabel | None = None,
drop_level: bool_t = True,
) -> Self:
"""
@@ -4661,11 +4661,11 @@ def drop(
def drop(
self,
- labels: IndexLabel = None,
+ labels: IndexLabel | None = None,
*,
axis: Axis = 0,
- index: IndexLabel = None,
- columns: IndexLabel = None,
+ index: IndexLabel | None = None,
+ columns: IndexLabel | None = None,
level: Level | None = None,
inplace: bool_t = False,
errors: IgnoreRaise = "raise",
@@ -5001,7 +5001,7 @@ def sort_values(
kind: SortKind = "quicksort",
na_position: NaPosition = "last",
ignore_index: bool_t = False,
- key: ValueKeyFunc = None,
+ key: ValueKeyFunc | None = None,
) -> Self | None:
"""
Sort by the values along either axis.
@@ -5196,14 +5196,14 @@ def sort_index(
self,
*,
axis: Axis = 0,
- level: IndexLabel = None,
+ level: IndexLabel | None = None,
ascending: bool_t | Sequence[bool_t] = True,
inplace: bool_t = False,
kind: SortKind = "quicksort",
na_position: NaPosition = "last",
sort_remaining: bool_t = True,
ignore_index: bool_t = False,
- key: IndexKeyFunc = None,
+ key: IndexKeyFunc | None = None,
) -> Self | None:
inplace = validate_bool_kwarg(inplace, "inplace")
axis = self._get_axis_number(axis)
@@ -6976,7 +6976,7 @@ def fillna(
)
def fillna(
self,
- value: Hashable | Mapping | Series | DataFrame = None,
+ value: Hashable | Mapping | Series | DataFrame | None = None,
*,
method: FillnaOptions | None = None,
axis: Axis | None = None,
@@ -8611,7 +8611,7 @@ def asfreq(
method: FillnaOptions | None = None,
how: Literal["start", "end"] | None = None,
normalize: bool_t = False,
- fill_value: Hashable = None,
+ fill_value: Hashable | None = None,
) -> Self:
"""
Convert time series to specified frequency.
@@ -8881,8 +8881,8 @@ def resample(
label: Literal["right", "left"] | None = None,
convention: Literal["start", "end", "s", "e"] = "start",
kind: Literal["timestamp", "period"] | None = None,
- on: Level = None,
- level: Level = None,
+ on: Level | None = None,
+ level: Level | None = None,
origin: str | TimestampConvertibleTypes = "start_day",
offset: TimedeltaConvertibleTypes | None = None,
group_keys: bool_t = False,
@@ -9693,9 +9693,9 @@ def align(
other: NDFrameT,
join: AlignJoin = "outer",
axis: Axis | None = None,
- level: Level = None,
+ level: Level | None = None,
copy: bool_t | None = None,
- fill_value: Hashable = None,
+ fill_value: Hashable | None = None,
method: FillnaOptions | None | lib.NoDefault = lib.no_default,
limit: int | None | lib.NoDefault = lib.no_default,
fill_axis: Axis | lib.NoDefault = lib.no_default,
@@ -10296,7 +10296,7 @@ def where(
*,
inplace: bool_t = False,
axis: Axis | None = None,
- level: Level = None,
+ level: Level | None = None,
) -> Self | None:
"""
Replace values where the condition is {cond_rev}.
@@ -10490,7 +10490,7 @@ def mask(
*,
inplace: bool_t = False,
axis: Axis | None = None,
- level: Level = None,
+ level: Level | None = None,
) -> Self | None:
inplace = validate_bool_kwarg(inplace, "inplace")
cond = common.apply_if_callable(cond, self)
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 8691866bac752..2a1bd381abfdd 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -2416,7 +2416,7 @@ def value_counts(
def fillna(
self,
- value: Hashable | Mapping | Series | DataFrame = None,
+ value: Hashable | Mapping | Series | DataFrame | None = None,
method: FillnaOptions | None = None,
axis: Axis | None | lib.NoDefault = lib.no_default,
inplace: bool = False,
@@ -2791,7 +2791,7 @@ def cov(
@doc(DataFrame.hist.__doc__)
def hist(
self,
- column: IndexLabel = None,
+ column: IndexLabel | None = None,
by=None,
grid: bool = True,
xlabelsize: int | None = None,
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 73559e80cbcc6..4b6b59898c199 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -645,7 +645,9 @@ def _dtype_to_subclass(cls, dtype: DtypeObj):
# See each method's docstring.
@classmethod
- def _simple_new(cls, values: ArrayLike, name: Hashable = None, refs=None) -> Self:
+ def _simple_new(
+ cls, values: ArrayLike, name: Hashable | None = None, refs=None
+ ) -> Self:
"""
We require that we have a dtype compat for the values. If we are passed
a non-dtype compat, then coerce using the constructor.
@@ -1524,7 +1526,7 @@ def to_flat_index(self) -> Self:
return self
@final
- def to_series(self, index=None, name: Hashable = None) -> Series:
+ def to_series(self, index=None, name: Hashable | None = None) -> Series:
"""
Create a Series with both index and values equal to the index keys.
@@ -4544,7 +4546,7 @@ def join(
other: Index,
*,
how: JoinHow = "left",
- level: Level = None,
+ level: Level | None = None,
return_indexers: bool = False,
sort: bool = False,
) -> Index | tuple[Index, npt.NDArray[np.intp] | None, npt.NDArray[np.intp] | None]:
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 648a3ad5b3bd7..e189d9216d5e3 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -209,7 +209,7 @@ def __new__(
ordered=None,
dtype: Dtype | None = None,
copy: bool = False,
- name: Hashable = None,
+ name: Hashable | None = None,
) -> CategoricalIndex:
name = maybe_extract_name(name, data, cls)
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 62b4dbded50ab..5465442b17fc3 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -327,7 +327,7 @@ def __new__(
yearfirst: bool = False,
dtype: Dtype | None = None,
copy: bool = False,
- name: Hashable = None,
+ name: Hashable | None = None,
) -> Self:
if closed is not lib.no_default:
# GH#52628
@@ -808,7 +808,7 @@ def date_range(
freq=None,
tz=None,
normalize: bool = False,
- name: Hashable = None,
+ name: Hashable | None = None,
inclusive: IntervalClosedType = "both",
*,
unit: str | None = None,
@@ -1009,7 +1009,7 @@ def bdate_range(
freq: Frequency = "B",
tz=None,
normalize: bool = True,
- name: Hashable = None,
+ name: Hashable | None = None,
weekmask=None,
holidays=None,
inclusive: IntervalClosedType = "both",
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 13e907741d233..1f6c275934070 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -223,7 +223,7 @@ def __new__(
closed: IntervalClosedType | None = None,
dtype: Dtype | None = None,
copy: bool = False,
- name: Hashable = None,
+ name: Hashable | None = None,
verify_integrity: bool = True,
) -> IntervalIndex:
name = maybe_extract_name(name, data, cls)
@@ -264,7 +264,7 @@ def from_breaks(
cls,
breaks,
closed: IntervalClosedType | None = "right",
- name: Hashable = None,
+ name: Hashable | None = None,
copy: bool = False,
dtype: Dtype | None = None,
) -> IntervalIndex:
@@ -300,7 +300,7 @@ def from_arrays(
left,
right,
closed: IntervalClosedType = "right",
- name: Hashable = None,
+ name: Hashable | None = None,
copy: bool = False,
dtype: Dtype | None = None,
) -> IntervalIndex:
@@ -335,7 +335,7 @@ def from_tuples(
cls,
data,
closed: IntervalClosedType = "right",
- name: Hashable = None,
+ name: Hashable | None = None,
copy: bool = False,
dtype: Dtype | None = None,
) -> IntervalIndex:
@@ -984,7 +984,7 @@ def interval_range(
end=None,
periods=None,
freq=None,
- name: Hashable = None,
+ name: Hashable | None = None,
closed: IntervalClosedType = "right",
) -> IntervalIndex:
"""
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 8df184396687b..ea9bf16a4e951 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -546,7 +546,7 @@ def from_tuples(
cls,
tuples: Iterable[tuple[Hashable, ...]],
sortorder: int | None = None,
- names: Sequence[Hashable] | Hashable = None,
+ names: Sequence[Hashable] | Hashable | None = None,
) -> MultiIndex:
"""
Convert list of tuples to MultiIndex.
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index bb39cdbdce262..a8bd220d14613 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -212,7 +212,7 @@ def __new__(
freq=None,
dtype: Dtype | None = None,
copy: bool = False,
- name: Hashable = None,
+ name: Hashable | None = None,
**fields,
) -> Self:
valid_field_set = {
@@ -466,7 +466,11 @@ def shift(self, periods: int = 1, freq=None) -> Self:
def period_range(
- start=None, end=None, periods: int | None = None, freq=None, name: Hashable = None
+ start=None,
+ end=None,
+ periods: int | None = None,
+ freq=None,
+ name: Hashable | None = None,
) -> PeriodIndex:
"""
Return a fixed frequency PeriodIndex.
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index fe66ec9c6792b..ca415d2089ecf 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -137,7 +137,7 @@ def __new__(
step=None,
dtype: Dtype | None = None,
copy: bool = False,
- name: Hashable = None,
+ name: Hashable | None = None,
) -> RangeIndex:
cls._validate_dtype(dtype)
name = maybe_extract_name(name, start, cls)
@@ -196,7 +196,7 @@ def from_range(cls, data: range, name=None, dtype: Dtype | None = None) -> Self:
# "Union[ExtensionArray, ndarray[Any, Any]]" [override]
@classmethod
def _simple_new( # type: ignore[override]
- cls, values: range, name: Hashable = None
+ cls, values: range, name: Hashable | None = None
) -> Self:
result = object.__new__(cls)
@@ -486,7 +486,7 @@ def _view(self) -> Self:
return result
@doc(Index.copy)
- def copy(self, name: Hashable = None, deep: bool = False) -> Self:
+ def copy(self, name: Hashable | None = None, deep: bool = False) -> Self:
name = self._validate_names(name=name, deep=deep)[0]
new_index = self._rename(name=name)
return new_index
@@ -837,7 +837,9 @@ def _difference(self, other, sort=None):
return new_index
- def symmetric_difference(self, other, result_name: Hashable = None, sort=None):
+ def symmetric_difference(
+ self, other, result_name: Hashable | None = None, sort=None
+ ):
if not isinstance(other, RangeIndex) or sort is not None:
return super().symmetric_difference(other, result_name, sort)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 5da054a956603..4dd7f84e894a3 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1504,7 +1504,7 @@ def reset_index(
def reset_index(
self,
- level: IndexLabel = None,
+ level: IndexLabel | None = None,
*,
drop: bool = False,
name: Level = lib.no_default,
@@ -1816,7 +1816,7 @@ def to_markdown(
buf: IO[str] | None = None,
mode: str = "wt",
index: bool = True,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
**kwargs,
) -> str | None:
"""
@@ -2093,7 +2093,7 @@ def groupby(
self,
by=None,
axis: Axis = 0,
- level: IndexLabel = None,
+ level: IndexLabel | None = None,
as_index: bool = True,
sort: bool = True,
group_keys: bool = True,
@@ -3097,7 +3097,7 @@ def searchsorted( # type: ignore[override]
self,
value: NumpyValueArrayLike | ExtensionArray,
side: Literal["left", "right"] = "left",
- sorter: NumpySorter = None,
+ sorter: NumpySorter | None = None,
) -> npt.NDArray[np.intp] | np.intp:
return base.IndexOpsMixin.searchsorted(self, value, side=side, sorter=sorter)
@@ -3207,7 +3207,7 @@ def combine(
self,
other: Series | Hashable,
func: Callable[[Hashable, Hashable], Hashable],
- fill_value: Hashable = None,
+ fill_value: Hashable | None = None,
) -> Series:
"""
Combine the Series with a Series or scalar according to `func`.
@@ -3482,7 +3482,7 @@ def sort_values(
kind: SortKind = "quicksort",
na_position: NaPosition = "last",
ignore_index: bool = False,
- key: ValueKeyFunc = None,
+ key: ValueKeyFunc | None = None,
) -> Series | None:
"""
Sort by the values.
@@ -3726,14 +3726,14 @@ def sort_index(
self,
*,
axis: Axis = 0,
- level: IndexLabel = None,
+ level: IndexLabel | None = None,
ascending: bool | Sequence[bool] = True,
inplace: bool = False,
kind: SortKind = "quicksort",
na_position: NaPosition = "last",
sort_remaining: bool = True,
ignore_index: bool = False,
- key: IndexKeyFunc = None,
+ key: IndexKeyFunc | None = None,
) -> Series | None:
"""
Sort Series by index labels.
@@ -4322,7 +4322,10 @@ def explode(self, ignore_index: bool = False) -> Series:
return self._constructor(values, index=index, name=self.name, copy=False)
def unstack(
- self, level: IndexLabel = -1, fill_value: Hashable = None, sort: bool = True
+ self,
+ level: IndexLabel = -1,
+ fill_value: Hashable | None = None,
+ sort: bool = True,
) -> DataFrame:
"""
Unstack, also known as pivot, Series with MultiIndex to produce DataFrame.
@@ -4968,11 +4971,11 @@ def drop(
def drop(
self,
- labels: IndexLabel = None,
+ labels: IndexLabel | None = None,
*,
axis: Axis = 0,
- index: IndexLabel = None,
- columns: IndexLabel = None,
+ index: IndexLabel | None = None,
+ columns: IndexLabel | None = None,
level: Level | None = None,
inplace: bool = False,
errors: IgnoreRaise = "raise",
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 6cde744d5704f..ea418a2c16d06 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -260,7 +260,7 @@ def _maybe_cache(
def _box_as_indexlike(
- dt_array: ArrayLike, utc: bool = False, name: Hashable = None
+ dt_array: ArrayLike, utc: bool = False, name: Hashable | None = None
) -> Index:
"""
Properly boxes the ndarray of datetimes to DatetimeIndex
@@ -352,7 +352,7 @@ def _return_parsed_timezone_results(
def _convert_listlike_datetimes(
arg,
format: str | None,
- name: Hashable = None,
+ name: Hashable | None = None,
utc: bool = False,
unit: str | None = None,
errors: DateTimeErrorChoices = "raise",
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 0d0191b22c023..35e93d287f31a 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -293,9 +293,9 @@ def is_fsspec_url(url: FilePath | BaseBuffer) -> bool:
def _get_filepath_or_buffer(
filepath_or_buffer: FilePath | BaseBuffer,
encoding: str = "utf-8",
- compression: CompressionOptions = None,
+ compression: CompressionOptions | None = None,
mode: str = "r",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> IOArgs:
"""
If the filepath_or_buffer is a url, translate and return the buffer.
@@ -655,11 +655,11 @@ def get_handle(
mode: str,
*,
encoding: str | None = None,
- compression: CompressionOptions = None,
+ compression: CompressionOptions | None = None,
memory_map: bool = False,
is_text: bool = True,
errors: str | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> IOHandles[str] | IOHandles[bytes]:
"""
Get file handle for given path/buffer and mode.
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 10eea5e139387..d3860ce4f77ca 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -485,7 +485,7 @@ def read_excel(
decimal: str = ".",
comment: str | None = None,
skipfooter: int = 0,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
engine_kwargs: dict | None = None,
) -> DataFrame | dict[IntStrT, DataFrame]:
@@ -545,7 +545,7 @@ class BaseExcelReader(metaclass=abc.ABCMeta):
def __init__(
self,
filepath_or_buffer,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
engine_kwargs: dict | None = None,
) -> None:
if engine_kwargs is None:
@@ -1127,7 +1127,7 @@ def __new__(
date_format: str | None = None,
datetime_format: str | None = None,
mode: str = "w",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
if_sheet_exists: Literal["error", "new", "replace", "overlay"] | None = None,
engine_kwargs: dict | None = None,
) -> ExcelWriter:
@@ -1216,7 +1216,7 @@ def __init__(
date_format: str | None = None,
datetime_format: str | None = None,
mode: str = "w",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
if_sheet_exists: str | None = None,
engine_kwargs: dict[str, Any] | None = None,
) -> None:
@@ -1372,7 +1372,7 @@ def close(self) -> None:
@doc(storage_options=_shared_docs["storage_options"])
def inspect_excel_format(
content_or_path: FilePath | ReadBuffer[bytes],
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> str | None:
"""
Inspect the path or content of an excel file and get its format.
@@ -1505,7 +1505,7 @@ def __init__(
self,
path_or_buffer,
engine: str | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
engine_kwargs: dict | None = None,
) -> None:
if engine_kwargs is None:
diff --git a/pandas/io/excel/_odfreader.py b/pandas/io/excel/_odfreader.py
index c46424d5b26da..16fb870e4700a 100644
--- a/pandas/io/excel/_odfreader.py
+++ b/pandas/io/excel/_odfreader.py
@@ -30,7 +30,7 @@ class ODFReader(BaseExcelReader):
def __init__(
self,
filepath_or_buffer: FilePath | ReadBuffer[bytes],
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
engine_kwargs: dict | None = None,
) -> None:
"""
diff --git a/pandas/io/excel/_odswriter.py b/pandas/io/excel/_odswriter.py
index 0333794d714aa..2d5c61a4139f6 100644
--- a/pandas/io/excel/_odswriter.py
+++ b/pandas/io/excel/_odswriter.py
@@ -38,7 +38,7 @@ def __init__(
date_format: str | None = None,
datetime_format=None,
mode: str = "w",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
if_sheet_exists: str | None = None,
engine_kwargs: dict[str, Any] | None = None,
**kwargs,
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
index f94b82a0677ed..8ca2c098cd426 100644
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -47,7 +47,7 @@ def __init__(
date_format: str | None = None,
datetime_format: str | None = None,
mode: str = "w",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
if_sheet_exists: str | None = None,
engine_kwargs: dict[str, Any] | None = None,
**kwargs,
@@ -534,7 +534,7 @@ class OpenpyxlReader(BaseExcelReader):
def __init__(
self,
filepath_or_buffer: FilePath | ReadBuffer[bytes],
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
engine_kwargs: dict | None = None,
) -> None:
"""
diff --git a/pandas/io/excel/_pyxlsb.py b/pandas/io/excel/_pyxlsb.py
index a1234b0e74c3e..86805a0463c47 100644
--- a/pandas/io/excel/_pyxlsb.py
+++ b/pandas/io/excel/_pyxlsb.py
@@ -24,7 +24,7 @@ class PyxlsbReader(BaseExcelReader):
def __init__(
self,
filepath_or_buffer: FilePath | ReadBuffer[bytes],
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
engine_kwargs: dict | None = None,
) -> None:
"""
diff --git a/pandas/io/excel/_xlrd.py b/pandas/io/excel/_xlrd.py
index d131567cf70f7..cb0ff975af9bb 100644
--- a/pandas/io/excel/_xlrd.py
+++ b/pandas/io/excel/_xlrd.py
@@ -24,7 +24,7 @@ class XlrdReader(BaseExcelReader):
def __init__(
self,
filepath_or_buffer,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
engine_kwargs: dict | None = None,
) -> None:
"""
diff --git a/pandas/io/excel/_xlsxwriter.py b/pandas/io/excel/_xlsxwriter.py
index d7262c2f62d94..fb0d452c69ca0 100644
--- a/pandas/io/excel/_xlsxwriter.py
+++ b/pandas/io/excel/_xlsxwriter.py
@@ -188,7 +188,7 @@ def __init__(
date_format: str | None = None,
datetime_format: str | None = None,
mode: str = "w",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
if_sheet_exists: str | None = None,
engine_kwargs: dict[str, Any] | None = None,
**kwargs,
diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
index 633c6f0f43889..8d297b4aa4edc 100644
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -33,7 +33,7 @@
def to_feather(
df: DataFrame,
path: FilePath | WriteBuffer[bytes],
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
**kwargs,
) -> None:
"""
@@ -68,7 +68,7 @@ def read_feather(
path: FilePath | ReadBuffer[bytes],
columns: Sequence[Hashable] | None = None,
use_threads: bool = True,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
):
"""
diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py
index ce6f9344bb8b2..39abb0bf127d9 100644
--- a/pandas/io/formats/csvs.py
+++ b/pandas/io/formats/csvs.py
@@ -71,7 +71,7 @@ def __init__(
date_format: str | None = None,
doublequote: bool = True,
escapechar: str | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> None:
self.fmt = formatter
diff --git a/pandas/io/formats/excel.py b/pandas/io/formats/excel.py
index f3eb4f78fa74e..9970d465ced9d 100644
--- a/pandas/io/formats/excel.py
+++ b/pandas/io/formats/excel.py
@@ -899,7 +899,7 @@ def write(
startcol: int = 0,
freeze_panes: tuple[int, int] | None = None,
engine: str | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
engine_kwargs: dict | None = None,
) -> None:
"""
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 3fa99027540a9..a7a6f481ebdde 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1117,7 +1117,7 @@ def to_csv(
doublequote: bool = True,
escapechar: str | None = None,
errors: str = "strict",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> str | None:
"""
Render dataframe as comma-separated file.
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 426eda26588c7..9062721c73285 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -523,7 +523,7 @@ def to_excel(
inf_rep: str = "inf",
verbose: bool = True,
freeze_panes: tuple[int, int] | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> None:
from pandas.io.formats.excel import ExcelFormatter
diff --git a/pandas/io/formats/xml.py b/pandas/io/formats/xml.py
index 7927e27cc9284..5725975bb6278 100644
--- a/pandas/io/formats/xml.py
+++ b/pandas/io/formats/xml.py
@@ -117,7 +117,7 @@ def __init__(
pretty_print: bool | None = True,
stylesheet: FilePath | ReadBuffer[str] | ReadBuffer[bytes] | None = None,
compression: CompressionOptions = "infer",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> None:
self.frame = frame
self.path_or_buffer = path_or_buffer
@@ -252,10 +252,18 @@ def other_namespaces(self) -> dict:
nmsp_dict: dict[str, str] = {}
if self.namespaces and self.prefix is None:
- nmsp_dict = {"xmlns": n for p, n in self.namespaces.items() if p != ""}
+ nmsp_dict = {
+ "xmlns": n # noqa: RUF011
+ for p, n in self.namespaces.items()
+ if p != ""
+ }
if self.namespaces and self.prefix:
- nmsp_dict = {"xmlns": n for p, n in self.namespaces.items() if p == ""}
+ nmsp_dict = {
+ "xmlns": n # noqa: RUF011
+ for p, n in self.namespaces.items()
+ if p == ""
+ }
return nmsp_dict
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 7a2b327df447d..472c3b4f9aff9 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -145,7 +145,7 @@ def to_json(
compression: CompressionOptions = "infer",
index: bool | None = None,
indent: int = 0,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
mode: Literal["a", "w"] = "w",
) -> str | None:
if orient in ["records", "values"] and index is True:
@@ -518,7 +518,7 @@ def read_json(
chunksize: int | None = None,
compression: CompressionOptions = "infer",
nrows: int | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
engine: JSONEngine = "ujson",
) -> DataFrame | Series | JsonReader:
@@ -828,7 +828,7 @@ def __init__(
chunksize: int | None,
compression: CompressionOptions,
nrows: int | None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
encoding_errors: str | None = "strict",
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
engine: JSONEngine = "ujson",
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index e8670757e1669..dd8d2ceaa7c3d 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -80,7 +80,7 @@ def get_engine(engine: str) -> BaseImpl:
def _get_path_or_handle(
path: FilePath | ReadBuffer[bytes] | WriteBuffer[bytes],
fs: Any,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
mode: str = "rb",
is_dir: bool = False,
) -> tuple[
@@ -171,7 +171,7 @@ def write(
path: FilePath | WriteBuffer[bytes],
compression: str | None = "snappy",
index: bool | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
partition_cols: list[str] | None = None,
filesystem=None,
**kwargs,
@@ -230,7 +230,7 @@ def read(
columns=None,
use_nullable_dtypes: bool = False,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
filesystem=None,
**kwargs,
) -> DataFrame:
@@ -285,7 +285,7 @@ def write(
compression: Literal["snappy", "gzip", "brotli"] | None = "snappy",
index=None,
partition_cols=None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
filesystem=None,
**kwargs,
) -> None:
@@ -335,7 +335,7 @@ def read(
self,
path,
columns=None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
filesystem=None,
**kwargs,
) -> DataFrame:
@@ -388,7 +388,7 @@ def to_parquet(
engine: str = "auto",
compression: str | None = "snappy",
index: bool | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
partition_cols: list[str] | None = None,
filesystem: Any = None,
**kwargs,
@@ -483,7 +483,7 @@ def read_parquet(
path: FilePath | ReadBuffer[bytes],
engine: str = "auto",
columns: list[str] | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
use_nullable_dtypes: bool | lib.NoDefault = lib.no_default,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
filesystem: Any = None,
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index e3c4fa3bbab96..90e2675da5703 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -913,7 +913,7 @@ def read_csv(
low_memory: bool = _c_parser_defaults["low_memory"],
memory_map: bool = False,
float_precision: Literal["high", "legacy"] | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
) -> DataFrame | TextFileReader:
if infer_datetime_format is not lib.no_default:
@@ -1246,7 +1246,7 @@ def read_table(
low_memory: bool = _c_parser_defaults["low_memory"],
memory_map: bool = False,
float_precision: str | None = None,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
) -> DataFrame | TextFileReader:
if infer_datetime_format is not lib.no_default:
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index ba837e2f57243..04e0cbc3d289d 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -34,7 +34,7 @@ def to_pickle(
filepath_or_buffer: FilePath | WriteBuffer[bytes],
compression: CompressionOptions = "infer",
protocol: int = pickle.HIGHEST_PROTOCOL,
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> None:
"""
Pickle (serialize) object to file.
@@ -115,7 +115,7 @@ def to_pickle(
def read_pickle(
filepath_or_buffer: FilePath | ReadPickleBuffer,
compression: CompressionOptions = "infer",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
):
"""
Load pickled pandas object (or any object) from file.
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 4018a70fd0450..7946780b24da9 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -688,7 +688,7 @@ def to_sql(
schema: str | None = None,
if_exists: Literal["fail", "replace", "append"] = "fail",
index: bool = True,
- index_label: IndexLabel = None,
+ index_label: IndexLabel | None = None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
method: Literal["multi"] | Callable | None = None,
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 88d24fadd327b..633f67cac4643 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -1135,7 +1135,7 @@ def __init__(
order_categoricals: bool = True,
chunksize: int | None = None,
compression: CompressionOptions = "infer",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> None:
super().__init__()
self._col_sizes: list[int] = []
@@ -2082,7 +2082,7 @@ def read_stata(
chunksize: int | None = None,
iterator: bool = False,
compression: CompressionOptions = "infer",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
) -> DataFrame | StataReader:
reader = StataReader(
filepath_or_buffer,
@@ -2342,7 +2342,7 @@ def __init__(
data_label: str | None = None,
variable_labels: dict[Hashable, str] | None = None,
compression: CompressionOptions = "infer",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
*,
value_labels: dict[Hashable, dict[float, str]] | None = None,
) -> None:
@@ -3269,7 +3269,7 @@ def __init__(
variable_labels: dict[Hashable, str] | None = None,
convert_strl: Sequence[Hashable] | None = None,
compression: CompressionOptions = "infer",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
*,
value_labels: dict[Hashable, dict[float, str]] | None = None,
) -> None:
@@ -3661,7 +3661,7 @@ def __init__(
convert_strl: Sequence[Hashable] | None = None,
version: int | None = None,
compression: CompressionOptions = "infer",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
*,
value_labels: dict[Hashable, dict[float, str]] | None = None,
) -> None:
diff --git a/pandas/io/xml.py b/pandas/io/xml.py
index b950a56633ae1..bb165c4724022 100644
--- a/pandas/io/xml.py
+++ b/pandas/io/xml.py
@@ -878,7 +878,7 @@ def read_xml(
stylesheet: FilePath | ReadBuffer[bytes] | ReadBuffer[str] | None = None,
iterparse: dict[str, list[str]] | None = None,
compression: CompressionOptions = "infer",
- storage_options: StorageOptions = None,
+ storage_options: StorageOptions | None = None,
dtype_backend: DtypeBackend | lib.NoDefault = lib.no_default,
) -> DataFrame:
r"""
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index fcea410971c91..97492c4d5843e 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -130,7 +130,7 @@ def hist_series(
def hist_frame(
data: DataFrame,
- column: IndexLabel = None,
+ column: IndexLabel | None = None,
by=None,
grid: bool = True,
xlabelsize: int | None = None,
@@ -1061,7 +1061,9 @@ def __call__(self, *args, **kwargs):
)
@Substitution(kind="line")
@Appender(_bar_or_line_doc)
- def line(self, x: Hashable = None, y: Hashable = None, **kwargs) -> PlotAccessor:
+ def line(
+ self, x: Hashable | None = None, y: Hashable | None = None, **kwargs
+ ) -> PlotAccessor:
"""
Plot Series or DataFrame as lines.
@@ -1149,7 +1151,7 @@ def line(self, x: Hashable = None, y: Hashable = None, **kwargs) -> PlotAccessor
@Substitution(kind="bar")
@Appender(_bar_or_line_doc)
def bar( # pylint: disable=disallowed-name
- self, x: Hashable = None, y: Hashable = None, **kwargs
+ self, x: Hashable | None = None, y: Hashable | None = None, **kwargs
) -> PlotAccessor:
"""
Vertical bar plot.
@@ -1236,7 +1238,9 @@ def bar( # pylint: disable=disallowed-name
)
@Substitution(kind="bar")
@Appender(_bar_or_line_doc)
- def barh(self, x: Hashable = None, y: Hashable = None, **kwargs) -> PlotAccessor:
+ def barh(
+ self, x: Hashable | None = None, y: Hashable | None = None, **kwargs
+ ) -> PlotAccessor:
"""
Make a horizontal bar plot.
@@ -1248,7 +1252,7 @@ def barh(self, x: Hashable = None, y: Hashable = None, **kwargs) -> PlotAccessor
"""
return self(kind="barh", x=x, y=y, **kwargs)
- def box(self, by: IndexLabel = None, **kwargs) -> PlotAccessor:
+ def box(self, by: IndexLabel | None = None, **kwargs) -> PlotAccessor:
r"""
Make a box plot of the DataFrame columns.
@@ -1315,7 +1319,9 @@ def box(self, by: IndexLabel = None, **kwargs) -> PlotAccessor:
"""
return self(kind="box", by=by, **kwargs)
- def hist(self, by: IndexLabel = None, bins: int = 10, **kwargs) -> PlotAccessor:
+ def hist(
+ self, by: IndexLabel | None = None, bins: int = 10, **kwargs
+ ) -> PlotAccessor:
"""
Draw one histogram of the DataFrame's columns.
@@ -1493,7 +1499,11 @@ def kde(
density = kde
def area(
- self, x: Hashable = None, y: Hashable = None, stacked: bool = True, **kwargs
+ self,
+ x: Hashable | None = None,
+ y: Hashable | None = None,
+ stacked: bool = True,
+ **kwargs,
) -> PlotAccessor:
"""
Draw a stacked area plot.
@@ -1626,8 +1636,8 @@ def scatter(
self,
x: Hashable,
y: Hashable,
- s: Hashable | Sequence[Hashable] = None,
- c: Hashable | Sequence[Hashable] = None,
+ s: Hashable | Sequence[Hashable] | None = None,
+ c: Hashable | Sequence[Hashable] | None = None,
**kwargs,
) -> PlotAccessor:
"""
@@ -1716,7 +1726,7 @@ def hexbin(
self,
x: Hashable,
y: Hashable,
- C: Hashable = None,
+ C: Hashable | None = None,
reduce_C_function: Callable | None = None,
gridsize: int | tuple[int, int] | None = None,
**kwargs,
diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py
index c84ee3114b71f..cfa336907e71a 100644
--- a/pandas/tests/groupby/transform/test_transform.py
+++ b/pandas/tests/groupby/transform/test_transform.py
@@ -875,16 +875,16 @@ def test_transform_with_non_scalar_group():
cols = MultiIndex.from_tuples(
[
("syn", "A"),
- ("mis", "A"),
+ ("foo", "A"),
("non", "A"),
("syn", "C"),
- ("mis", "C"),
+ ("foo", "C"),
("non", "C"),
("syn", "T"),
- ("mis", "T"),
+ ("foo", "T"),
("non", "T"),
("syn", "G"),
- ("mis", "G"),
+ ("foo", "G"),
("non", "G"),
]
)
diff --git a/pandas/tests/indexes/multi/test_constructors.py b/pandas/tests/indexes/multi/test_constructors.py
index cabc2bfd61db6..91ec1b2475cde 100644
--- a/pandas/tests/indexes/multi/test_constructors.py
+++ b/pandas/tests/indexes/multi/test_constructors.py
@@ -59,7 +59,7 @@ def test_constructor_nonhashable_names():
codes=[[0, 0, 1, 1], [0, 1, 0, 1]],
names=("foo", "bar"),
)
- renamed = [["foor"], ["barr"]]
+ renamed = [["fooo"], ["barr"]]
with pytest.raises(TypeError, match=msg):
mi.rename(names=renamed)
diff --git a/pandas/tests/io/parser/test_read_fwf.py b/pandas/tests/io/parser/test_read_fwf.py
index 7f622295472e4..47794c09bf541 100644
--- a/pandas/tests/io/parser/test_read_fwf.py
+++ b/pandas/tests/io/parser/test_read_fwf.py
@@ -695,10 +695,10 @@ def test_binary_mode():
GH 18035.
"""
- data = """aas aas aas
+ data = """aaa aaa aaa
bba bab b a"""
df_reference = DataFrame(
- [["bba", "bab", "b a"]], columns=["aas", "aas.1", "aas.2"], index=[0]
+ [["bba", "bab", "b a"]], columns=["aaa", "aaa.1", "aaa.2"], index=[0]
)
with tm.ensure_clean() as path:
Path(path).write_text(data)
diff --git a/pandas/tests/io/test_fsspec.py b/pandas/tests/io/test_fsspec.py
index a1cde9f2f7e61..fe5818620b9a9 100644
--- a/pandas/tests/io/test_fsspec.py
+++ b/pandas/tests/io/test_fsspec.py
@@ -247,18 +247,18 @@ def test_not_present_exception():
@td.skip_if_no("pyarrow")
def test_feather_options(fsspectest):
df = DataFrame({"a": [0]})
- df.to_feather("testmem://afile", storage_options={"test": "feather_write"})
+ df.to_feather("testmem://mockfile", storage_options={"test": "feather_write"})
assert fsspectest.test[0] == "feather_write"
- out = read_feather("testmem://afile", storage_options={"test": "feather_read"})
+ out = read_feather("testmem://mockfile", storage_options={"test": "feather_read"})
assert fsspectest.test[0] == "feather_read"
tm.assert_frame_equal(df, out)
def test_pickle_options(fsspectest):
df = DataFrame({"a": [0]})
- df.to_pickle("testmem://afile", storage_options={"test": "pickle_write"})
+ df.to_pickle("testmem://mockfile", storage_options={"test": "pickle_write"})
assert fsspectest.test[0] == "pickle_write"
- out = read_pickle("testmem://afile", storage_options={"test": "pickle_read"})
+ out = read_pickle("testmem://mockfile", storage_options={"test": "pickle_read"})
assert fsspectest.test[0] == "pickle_read"
tm.assert_frame_equal(df, out)
@@ -266,13 +266,13 @@ def test_pickle_options(fsspectest):
def test_json_options(fsspectest, compression):
df = DataFrame({"a": [0]})
df.to_json(
- "testmem://afile",
+ "testmem://mockfile",
compression=compression,
storage_options={"test": "json_write"},
)
assert fsspectest.test[0] == "json_write"
out = read_json(
- "testmem://afile",
+ "testmem://mockfile",
compression=compression,
storage_options={"test": "json_read"},
)
@@ -283,10 +283,10 @@ def test_json_options(fsspectest, compression):
def test_stata_options(fsspectest):
df = DataFrame({"a": [0]})
df.to_stata(
- "testmem://afile", storage_options={"test": "stata_write"}, write_index=False
+ "testmem://mockfile", storage_options={"test": "stata_write"}, write_index=False
)
assert fsspectest.test[0] == "stata_write"
- out = read_stata("testmem://afile", storage_options={"test": "stata_read"})
+ out = read_stata("testmem://mockfile", storage_options={"test": "stata_read"})
assert fsspectest.test[0] == "stata_read"
tm.assert_frame_equal(df, out.astype("int64"))
@@ -294,9 +294,9 @@ def test_stata_options(fsspectest):
@td.skip_if_no("tabulate")
def test_markdown_options(fsspectest):
df = DataFrame({"a": [0]})
- df.to_markdown("testmem://afile", storage_options={"test": "md_write"})
+ df.to_markdown("testmem://mockfile", storage_options={"test": "md_write"})
assert fsspectest.test[0] == "md_write"
- assert fsspectest.cat("testmem://afile")
+ assert fsspectest.cat("testmem://mockfile")
@td.skip_if_no("pyarrow")
diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index 60506aa2fbd0a..71ff029ed2201 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -459,7 +459,7 @@ def mock_urlopen_read(*args, **kwargs):
@td.skip_if_no("fsspec")
def test_pickle_fsspec_roundtrip():
with tm.ensure_clean():
- mockurl = "memory://afile"
+ mockurl = "memory://mockfile"
df = tm.makeDataFrame()
df.to_pickle(mockurl)
result = pd.read_pickle(mockurl)
diff --git a/pyproject.toml b/pyproject.toml
index a2ae269c26667..64bc5e55ebd64 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -308,6 +308,8 @@ ignore = [
"B904",
# Magic number
"PLR2004",
+ # comparison-with-itself
+ "PLR0124",
# Consider `elif` instead of `else` then `if` to remove indentation level
"PLR5501",
# ambiguous-unicode-character-string
@@ -321,7 +323,9 @@ ignore = [
# pairwise-over-zipped (>=PY310 only)
"RUF007",
# explicit-f-string-type-conversion
- "RUF010"
+ "RUF010",
+ # mutable-class-default
+ "RUF012"
]
exclude = [
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/54043 | 2023-07-07T18:17:33Z | 2023-07-10T22:12:18Z | 2023-07-10T22:12:18Z | 2023-07-11T00:02:11Z |
DOC: Fix typos in groupby user guide | diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index 6f4008853f161..7ddce18d8a259 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -878,7 +878,7 @@ will be broadcast across the group.
grouped.transform("sum")
In addition to string aliases, the :meth:`~.DataFrameGroupBy.transform` method can
-also except User-Defined functions (UDFs). The UDF must:
+also accept User-Defined Functions (UDFs). The UDF must:
* Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
@@ -1363,7 +1363,7 @@ implementation headache).
Grouping with ordered factors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Categorical variables represented as instance of pandas's ``Categorical`` class
+Categorical variables represented as instances of pandas's ``Categorical`` class
can be used as group keys. If so, the order of the levels will be preserved:
.. ipython:: python
@@ -1496,7 +1496,7 @@ You can also select multiple rows from each group by specifying multiple nth val
# get the first, 4th, and last date index for each month
df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
-You may also use a slices or lists of slices.
+You may also use slices or lists of slices.
.. ipython:: python
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
-----
Very trivial changes! The capitalisation of UDF matches that used elsewhere in the docs. | https://api.github.com/repos/pandas-dev/pandas/pulls/54041 | 2023-07-07T16:25:41Z | 2023-07-07T17:08:23Z | 2023-07-07T17:08:23Z | 2023-07-07T17:08:32Z |
DOC: Fixing EX01 - Added examples | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index fd256f2ff7db0..79445ec7b936d 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -112,8 +112,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
pandas.DatetimeIndex.snap \
pandas.api.indexers.BaseIndexer \
pandas.api.indexers.VariableOffsetWindowIndexer \
- pandas.io.formats.style.Styler \
- pandas.io.formats.style.Styler.from_custom_template \
pandas.io.formats.style.Styler.set_caption \
pandas.io.formats.style.Styler.set_sticky \
pandas.io.formats.style.Styler.set_uuid \
@@ -157,8 +155,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
pandas.api.extensions.ExtensionArray.ndim \
pandas.api.extensions.ExtensionArray.shape \
pandas.api.extensions.ExtensionArray.tolist \
- pandas.DataFrame.columns \
- pandas.DataFrame.ffill \
pandas.DataFrame.pad \
pandas.DataFrame.swapaxes \
pandas.DataFrame.plot \
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index feae3bb517c22..3a2ad225ae495 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -11849,7 +11849,24 @@ def isin_(x):
updated with the new labels, and the output shows the modified DataFrame.
""",
)
- columns = properties.AxisProperty(axis=0, doc="The column labels of the DataFrame.")
+ columns = properties.AxisProperty(
+ axis=0,
+ doc=dedent(
+ """
+ The column labels of the DataFrame.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
+ >>> df
+ A B
+ 0 1 3
+ 1 2 4
+ >>> df.columns
+ Index(['A', 'B'], dtype='object')
+ """
+ ),
+ )
# ----------------------------------------------------------------------
# Add plotting methods to DataFrame
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 9062721c73285..278527fa8fc85 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -241,6 +241,16 @@ class Styler(StylerRenderer):
Any, or all, or these classes can be renamed by using the ``css_class_names``
argument in ``Styler.set_table_classes``, giving a value such as
*{"row": "MY_ROW_CLASS", "col_trim": "", "row_trim": ""}*.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame([[1.0, 2.0, 3.0], [4, 5, 6]], index=['a', 'b'],
+ ... columns=['A', 'B', 'C'])
+ >>> pd.io.formats.style.Styler(df, precision=2,
+ ... caption="My table") # doctest: +SKIP
+
+ Please see:
+ `Table Visualization <../../user_guide/style.ipynb>`_ for more examples.
"""
def __init__(
@@ -3492,6 +3502,19 @@ def from_custom_template(
MyStyler : subclass of Styler
Has the correct ``env``,``template_html``, ``template_html_table`` and
``template_html_style`` class attributes set.
+
+ Examples
+ --------
+ >>> from pandas.io.formats.style import Styler # doctest: +SKIP
+ >>> from IPython.display import HTML # doctest: +SKIP
+ >>> df = pd.DataFrame({"A": [1, 2]}) # doctest: +SKIP
+ >>> path = "path/to/template" # doctest: +SKIP
+ >>> file = "template.tpl" # doctest: +SKIP
+ >>> EasyStyler = Styler.from_custom_template(path, file) # doctest: +SKIP
+ >>> HTML(EasyStyler(df).to_html(table_title="Another Title")) # doctest: +SKIP
+
+ Please see:
+ `Table Visualization <../../user_guide/style.ipynb>`_ for more examples.
"""
loader = jinja2.ChoiceLoader([jinja2.FileSystemLoader(searchpath), cls.loader])
| - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Towards https://github.com/pandas-dev/pandas/issues/37875
We need to preview here, as HTML documents for `Styler` can't be rendered locally.
WEB: Add Noa Tamir to maintainers list | diff --git a/web/pandas/config.yml b/web/pandas/config.yml
index 4f17822c35582..27e5ea25c1bad 100644
--- a/web/pandas/config.yml
+++ b/web/pandas/config.yml
@@ -93,6 +93,7 @@ maintainers:
- lithomas1
- mzeitlin11
- lukemanley
+ - noatamir
inactive:
- lodagro
- jseabold
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/54038 | 2023-07-07T16:11:09Z | 2023-07-11T16:20:47Z | 2023-07-11T16:20:47Z | 2023-07-11T16:20:47Z |
BUG: Fix failing hash function for certain non-ns resolution `Timedelta`s | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index dc306471dbd3f..97069e32a66e2 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -393,6 +393,7 @@ Timedelta
- :meth:`TimedeltaIndex.map` with ``na_action="ignore"`` now works as expected (:issue:`51644`)
- Bug in :class:`TimedeltaIndex` division or multiplication leading to ``.freq`` of "0 Days" instead of ``None`` (:issue:`51575`)
- Bug in :class:`Timedelta` with Numpy timedelta64 objects not properly raising ``ValueError`` (:issue:`52806`)
+- Bug in :meth:`Timedelta.__hash__`, raising an ``OutOfBoundsTimedelta`` on certain large values of second resolution (:issue:`54037`)
- Bug in :meth:`Timedelta.round` with values close to the implementation bounds returning incorrect results instead of raising ``OutOfBoundsTimedelta`` (:issue:`51494`)
- Bug in :meth:`arrays.TimedeltaArray.map` and :meth:`TimedeltaIndex.map`, where the supplied callable operated array-wise instead of element-wise (:issue:`51977`)
-
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index 0981c966c4cd4..28aeb854638b6 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -1156,7 +1156,7 @@ cdef class _Timedelta(timedelta):
# resolution.
try:
obj = (<_Timedelta>self)._as_creso(<NPY_DATETIMEUNIT>(self._creso + 1))
- except OverflowError:
+ except OutOfBoundsTimedelta:
# Doesn't fit, so we're off the hook
return hash(self._value)
else:
diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py
index 722a68a1dce71..a83c6a8596575 100644
--- a/pandas/tests/scalar/timedelta/test_timedelta.py
+++ b/pandas/tests/scalar/timedelta/test_timedelta.py
@@ -305,6 +305,12 @@ def test_resolution(self, td):
assert result == expected
assert result._creso == expected._creso
+ def test_hash(self) -> None:
+ # GH#54037
+ second_resolution_max = Timedelta(0).as_unit("s").max
+
+ assert hash(second_resolution_max)
+
def test_timedelta_class_min_max_resolution():
# when accessed on the class (as opposed to an instance), we default
| I noticed that hashing certain large `pd.Timedelta`s resulted in an `OutOfBoundsTimedelta` exception, which shouldn't happen.
In the hash implementation there seems to be an attempt at catching this error, as the offending line is wrapped in a `try` clause, but the `except` clause catches an `OverflowError` instead. The call that raises `OutOfBoundsTimedelta` does so when it itself catches an `OverflowError`, so my interpretation is that the intention was for the hash implementation to catch the `OutOfBoundsTimedelta`. My changes reflect this. I've also added a test reproducing the issue.
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54037 | 2023-07-07T14:08:53Z | 2023-07-07T19:01:05Z | 2023-07-07T19:01:05Z | 2023-07-08T11:08:14Z |
TST: Add test for Timedelta hash invariance | diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py
index a83c6a8596575..701cfdf157d26 100644
--- a/pandas/tests/scalar/timedelta/test_timedelta.py
+++ b/pandas/tests/scalar/timedelta/test_timedelta.py
@@ -1,5 +1,6 @@
""" test the scalar Timedelta """
from datetime import timedelta
+import sys
from hypothesis import (
given,
@@ -918,6 +919,30 @@ def test_timedelta_hash_equality(self):
ns_td = Timedelta(1, "ns")
assert hash(ns_td) != hash(ns_td.to_pytimedelta())
+ @pytest.mark.xfail(
+ reason="pd.Timedelta violates the Python hash invariant (GH#44504).",
+ raises=AssertionError,
+ )
+ @given(
+ st.integers(
+ min_value=(-sys.maxsize - 1) // 500,
+ max_value=sys.maxsize // 500,
+ )
+ )
+ def test_hash_equality_invariance(self, half_microseconds: int) -> None:
+ # GH#44504
+
+ nanoseconds = half_microseconds * 500
+
+ pandas_timedelta = Timedelta(nanoseconds)
+ numpy_timedelta = np.timedelta64(nanoseconds)
+
+ # See: https://docs.python.org/3/glossary.html#term-hashable
+ # Hashable objects which compare equal must have the same hash value.
+ assert pandas_timedelta != numpy_timedelta or hash(pandas_timedelta) == hash(
+ numpy_timedelta
+ )
+
def test_implementation_limits(self):
min_td = Timedelta(Timedelta.min)
max_td = Timedelta(Timedelta.max)
| Adding an xfailing test to reproduce and document #44504.
Ideally the test could cover an even wider set of comparisons, but I think this is a good start.
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
| https://api.github.com/repos/pandas-dev/pandas/pulls/54035 | 2023-07-07T11:56:41Z | 2023-07-12T16:24:08Z | 2023-07-12T16:24:08Z | 2023-07-12T16:24:15Z |
DOC: add backticks to docstrings | diff --git a/pandas/io/xml.py b/pandas/io/xml.py
index 2aec361d46b99..62bbb410dacc1 100644
--- a/pandas/io/xml.py
+++ b/pandas/io/xml.py
@@ -1,5 +1,5 @@
"""
-:mod:`pandas.io.xml` is a module for reading XML.
+:mod:``pandas.io.xml`` is a module for reading XML.
"""
from __future__ import annotations
@@ -66,26 +66,26 @@ class _XMLFrameParser:
Parameters
----------
- path_or_buffer : a valid JSON str, path object or file-like object
+ path_or_buffer : a valid JSON ``str``, path object or file-like object
Any valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, and file.
xpath : str or regex
The XPath expression to parse required set of nodes for
- migration to `Data Frame`. `etree` supports limited XPath.
+ migration to :class:`~pandas.DataFrame`. `etree` supports limited XPath.
namespaces : dict
- The namespaces defined in XML document (`xmlns:namespace='URI')
+ The namespaces defined in XML document (``xmlns:namespace='URI'``)
as dicts with key being namespace and value the URI.
elems_only : bool
- Parse only the child elements at the specified `xpath`.
+ Parse only the child elements at the specified ``xpath``.
attrs_only : bool
- Parse only the attributes at the specified `xpath`.
+ Parse only the attributes at the specified ``xpath``.
names : list
- Column names for Data Frame of parsed XML data.
+ Column names for :class:`~pandas.DataFrame`of parsed XML data.
dtype : dict
Data type for data or columns. E.g. {{'a': np.float64,
| - [ ] Issue DOC: Inconsistent use of code-style formatting (backticks) in docstrings #53674
- [ ] Issue originally referenced only read_csv and read_excel, but is open to updating this formatting in all docstrings for consistency
This PR doesn't address all the backticks needed in pandas/io/xml.py, but I want to get the ball rolling to see if I'm headed in the right direction.
BUG: fixes #53935 Categorical order lost after call to remove_categories | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index e154ca2cd3884..0c71c82788c9f 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -375,6 +375,7 @@ Bug fixes
Categorical
^^^^^^^^^^^
+- Bug in :meth:`CategoricalIndex.remove_categories` where ordered categories would not be maintained (:issue:`53935`).
- Bug in :meth:`Series.astype` with ``dtype="category"`` for nullable arrays with read-only null value masks (:issue:`53658`)
- Bug in :meth:`Series.map` , where the value of the ``na_action`` parameter was not used if the series held a :class:`Categorical` (:issue:`22527`).
-
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 6c61ce7a3e99b..8898379689cfd 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1369,7 +1369,11 @@ def remove_categories(self, removals) -> Self:
removals = [removals]
removals = Index(removals).unique().dropna()
- new_categories = self.dtype.categories.difference(removals)
+ new_categories = (
+ self.dtype.categories.difference(removals, sort=False)
+ if self.dtype.ordered is True
+ else self.dtype.categories.difference(removals)
+ )
not_included = removals.difference(self.dtype.categories)
if len(not_included) != 0:
diff --git a/pandas/tests/indexes/categorical/test_category.py b/pandas/tests/indexes/categorical/test_category.py
index 232d966e39a01..873d06db58fab 100644
--- a/pandas/tests/indexes/categorical/test_category.py
+++ b/pandas/tests/indexes/categorical/test_category.py
@@ -373,3 +373,18 @@ def test_method_delegation(self):
msg = "cannot use inplace with CategoricalIndex"
with pytest.raises(ValueError, match=msg):
ci.set_categories(list("cab"), inplace=True)
+
+ def test_remove_maintains_order(self):
+ ci = CategoricalIndex(list("abcdda"), categories=list("abcd"))
+ result = ci.reorder_categories(["d", "c", "b", "a"], ordered=True)
+ tm.assert_index_equal(
+ result,
+ CategoricalIndex(list("abcdda"), categories=list("dcba"), ordered=True),
+ )
+ result = result.remove_categories(["c"])
+ tm.assert_index_equal(
+ result,
+ CategoricalIndex(
+ ["a", "b", np.nan, "d", "d", "a"], categories=list("dba"), ordered=True
+ ),
+ )
| - [x] closes https://github.com/pandas-dev/pandas/issues/53935
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest doc/source/whatsnew/vX.X.X.rst file if fixing a bug or adding a new feature.
I simply put an if/else in the `remove_categories` method. If `ordered=True`, it will pass `sort=False` into the `difference` method.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54027 | 2023-07-06T21:55:41Z | 2023-07-11T17:32:50Z | 2023-07-11T17:32:50Z | 2023-07-11T21:47:23Z |
TST: Mark subprocess tests as single_cpu | diff --git a/pandas/tests/io/test_compression.py b/pandas/tests/io/test_compression.py
index 9bdfbad347481..af83ec4a55fa5 100644
--- a/pandas/tests/io/test_compression.py
+++ b/pandas/tests/io/test_compression.py
@@ -202,6 +202,7 @@ def test_gzip_reproducibility_file_object():
assert output == buffer.getvalue()
+@pytest.mark.single_cpu
def test_with_missing_lzma():
"""Tests if import pandas works when lzma is not present."""
# https://github.com/pandas-dev/pandas/issues/27575
@@ -215,6 +216,7 @@ def test_with_missing_lzma():
subprocess.check_output([sys.executable, "-c", code], stderr=subprocess.PIPE)
+@pytest.mark.single_cpu
def test_with_missing_lzma_runtime():
"""Tests if RuntimeError is hit when calling lzma without
having the module available.
diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py
index 6caeb3a5d7445..cadd4c4589964 100644
--- a/pandas/tests/plotting/test_converter.py
+++ b/pandas/tests/plotting/test_converter.py
@@ -43,6 +43,7 @@
dates = pytest.importorskip("matplotlib.dates")
+@pytest.mark.single_cpu
def test_registry_mpl_resets():
# Check that Matplotlib converters are properly reset (see issue #27481)
code = (
@@ -63,6 +64,7 @@ def test_timtetonum_accepts_unicode():
class TestRegistration:
+ @pytest.mark.single_cpu
def test_dont_register_by_default(self):
# Run in subprocess to ensure a clean state
code = (
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index 4860ee235c03d..fa7750397369b 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -233,6 +233,7 @@ def test_temp_setattr(with_exception):
assert ser.name == "first"
+@pytest.mark.single_cpu
def test_str_size():
# GH#21758
a = "a"
diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py
index 7354e313e24f4..09594588be81c 100644
--- a/pandas/tests/test_downstream.py
+++ b/pandas/tests/test_downstream.py
@@ -119,11 +119,13 @@ def test_xarray_cftimeindex_nearest():
assert result == expected
+@pytest.mark.single_cpu
def test_oo_optimizable():
# GH 21071
subprocess.check_call([sys.executable, "-OO", "-c", "import pandas"])
+@pytest.mark.single_cpu
def test_oo_optimized_datetime_index_unpickle():
# GH 42866
subprocess.check_call(
@@ -200,6 +202,7 @@ def test_yaml_dump(df):
tm.assert_frame_equal(df, loaded2)
+@pytest.mark.single_cpu
def test_missing_required_dependency():
# GH 23868
# To ensure proper isolation, we pass these flags
| Spawning a new subprocess can be a little intensive, so marking these as single cpu for our CI | https://api.github.com/repos/pandas-dev/pandas/pulls/54026 | 2023-07-06T21:30:41Z | 2023-07-07T17:07:41Z | 2023-07-07T17:07:41Z | 2023-07-07T17:07:45Z |
BUG: fixes Arrow Dataframes/Series producing a Numpy object result | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 137f2e5c12211..bcc8b1877128f 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -466,6 +466,7 @@ Numeric
- Bug in :meth:`Series.mean`, :meth:`DataFrame.mean` with object-dtype values containing strings that can be converted to numbers (e.g. "2") returning incorrect numeric results; these now raise ``TypeError`` (:issue:`36703`, :issue:`44008`)
- Bug in :meth:`DataFrame.corrwith` raising ``NotImplementedError`` for pyarrow-backed dtypes (:issue:`52314`)
- Bug in :meth:`DataFrame.size` and :meth:`Series.size` returning 64-bit integer instead of int (:issue:`52897`)
+- Bug in :meth:`DateFrame.dot` returning ``object`` dtype for :class:`ArrowDtype` data (:issue:`53979`)
- Bug in :meth:`Series.any`, :meth:`Series.all`, :meth:`DataFrame.any`, and :meth:`DataFrame.all` had the default value of ``bool_only`` set to ``None`` instead of ``False``; this change should have no impact on users (:issue:`53258`)
- Bug in :meth:`Series.corr` and :meth:`Series.cov` raising ``AttributeError`` for masked dtypes (:issue:`51422`)
- Bug in :meth:`Series.median` and :meth:`DataFrame.median` with object-dtype values containing strings that can be converted to numbers (e.g. "2") returning incorrect numeric results; these now raise ``TypeError`` (:issue:`34671`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index dc32842947a1a..9f63315c63a27 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1620,15 +1620,18 @@ def dot(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series:
)
if isinstance(other, DataFrame):
+ common_type = find_common_type(list(self.dtypes) + list(other.dtypes))
return self._constructor(
np.dot(lvals, rvals),
index=left.index,
columns=other.columns,
copy=False,
+ dtype=common_type,
)
elif isinstance(other, Series):
+ common_type = find_common_type(list(self.dtypes) + [other.dtypes])
return self._constructor_sliced(
- np.dot(lvals, rvals), index=left.index, copy=False
+ np.dot(lvals, rvals), index=left.index, copy=False, dtype=common_type
)
elif isinstance(rvals, (np.ndarray, Index)):
result = np.dot(lvals, rvals)
diff --git a/pandas/tests/frame/methods/test_dot.py b/pandas/tests/frame/methods/test_dot.py
index 555e5f0e26eaf..3509b587cb373 100644
--- a/pandas/tests/frame/methods/test_dot.py
+++ b/pandas/tests/frame/methods/test_dot.py
@@ -129,3 +129,19 @@ def reduced_dim_assert(cls, result, expected):
"""
tm.assert_series_equal(result, expected, check_names=False)
assert result.name is None
+
+
+@pytest.mark.parametrize(
+ "dtype,exp_dtype",
+ [("Float32", "Float64"), ("Int16", "Int32"), ("float[pyarrow]", "double[pyarrow]")],
+)
+def test_arrow_dtype(dtype, exp_dtype):
+ pytest.importorskip("pyarrow")
+
+ cols = ["a", "b"]
+ df_a = DataFrame([[1, 2], [3, 4], [5, 6]], columns=cols, dtype="int32")
+ df_b = DataFrame([[1, 0], [0, 1]], index=cols, dtype=dtype)
+ result = df_a.dot(df_b)
+ expected = DataFrame([[1, 2], [3, 4], [5, 6]], dtype=exp_dtype)
+
+ tm.assert_frame_equal(result, expected)
| - [x] closes #53979
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
## Notes:
The dtype of `.dot()`'s returned object is now the result of `dtypes.cast.find_common_type()`.
It's worth noting the new object's dtype is of a higher precision, but this seems to line up with what's expected: https://arrow.apache.org/docs/python/pandas.html#pandas-arrow-conversion
| https://api.github.com/repos/pandas-dev/pandas/pulls/54025 | 2023-07-06T20:40:28Z | 2023-07-18T20:24:33Z | 2023-07-18T20:24:33Z | 2023-07-18T20:39:11Z |
CoW: Add ChainedAssignmentError for update | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 1119117c411d3..c9ee055b4a6c7 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -33,6 +33,7 @@ Copy-on-Write improvements
operating inplace like this will never work, since the selection behaves
as a temporary copy. This holds true for:
+ - DataFrame.update / Series.update
- DataFrame.fillna / Series.fillna
.. _whatsnew_210.enhancements.enhancement2:
diff --git a/pandas/compat/_constants.py b/pandas/compat/_constants.py
index 1d7fe23b3d2ea..7ef427604ee06 100644
--- a/pandas/compat/_constants.py
+++ b/pandas/compat/_constants.py
@@ -17,7 +17,7 @@
PY311 = sys.version_info >= (3, 11)
PYPY = platform.python_implementation() == "PyPy"
ISMUSL = "musl" in (sysconfig.get_config_var("HOST_GNU_TYPE") or "")
-
+REF_COUNT = 2 if PY311 else 3
__all__ = [
"IS64",
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index ae43a44d68f1c..3ce30d2ace73f 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -49,6 +49,7 @@
from pandas._libs.hashtable import duplicated
from pandas._libs.lib import is_range_indexer
from pandas.compat import PYPY
+from pandas.compat._constants import REF_COUNT
from pandas.compat._optional import import_optional_dependency
from pandas.compat.numpy import (
function as nv,
@@ -57,6 +58,7 @@
from pandas.errors import (
ChainedAssignmentError,
InvalidIndexError,
+ _chained_assignment_method_msg,
_chained_assignment_msg,
)
from pandas.util._decorators import (
@@ -8501,6 +8503,14 @@ def update(
1 2 500
2 3 6
"""
+ if not PYPY and using_copy_on_write():
+ if sys.getrefcount(self) <= REF_COUNT:
+ warnings.warn(
+ _chained_assignment_method_msg,
+ ChainedAssignmentError,
+ stacklevel=2,
+ )
+
from pandas.core.computation import expressions
# TODO: Support other joins
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 68e5fbd696ab9..bcb25061eeec8 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -90,10 +90,8 @@
WriteExcelBuffer,
npt,
)
-from pandas.compat import (
- PY311,
- PYPY,
-)
+from pandas.compat import PYPY
+from pandas.compat._constants import REF_COUNT
from pandas.compat._optional import import_optional_dependency
from pandas.compat.numpy import function as nv
from pandas.errors import (
@@ -7092,8 +7090,7 @@ def fillna(
inplace = validate_bool_kwarg(inplace, "inplace")
if inplace:
if not PYPY and using_copy_on_write():
- refcount = 2 if PY311 else 3
- if sys.getrefcount(self) <= refcount:
+ if sys.getrefcount(self) <= REF_COUNT:
warnings.warn(
_chained_assignment_method_msg,
ChainedAssignmentError,
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 164b1a61b006c..43782261d3c58 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -37,10 +37,12 @@
)
from pandas._libs.lib import is_range_indexer
from pandas.compat import PYPY
+from pandas.compat._constants import REF_COUNT
from pandas.compat.numpy import function as nv
from pandas.errors import (
ChainedAssignmentError,
InvalidIndexError,
+ _chained_assignment_method_msg,
_chained_assignment_msg,
)
from pandas.util._decorators import (
@@ -3435,6 +3437,13 @@ def update(self, other: Series | Sequence | Mapping) -> None:
2 3
dtype: int64
"""
+ if not PYPY and using_copy_on_write():
+ if sys.getrefcount(self) <= REF_COUNT:
+ warnings.warn(
+ _chained_assignment_method_msg,
+ ChainedAssignmentError,
+ stacklevel=2,
+ )
if not isinstance(other, Series):
other = Series(other)
diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index 9e7ae9942ea90..e9952e5f4d977 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -1731,6 +1731,20 @@ def test_update_series(using_copy_on_write):
tm.assert_series_equal(view, expected)
+def test_update_chained_assignment(using_copy_on_write):
+ df = DataFrame({"a": [1, 2, 3]})
+ ser2 = Series([100.0], index=[1])
+ df_orig = df.copy()
+ if using_copy_on_write:
+ with tm.raises_chained_assignment_error():
+ df["a"].update(ser2)
+ tm.assert_frame_equal(df, df_orig)
+
+ with tm.raises_chained_assignment_error():
+ df[["a"]].update(ser2.to_frame())
+ tm.assert_frame_equal(df, df_orig)
+
+
def test_inplace_arithmetic_series():
ser = Series([1, 2, 3])
data = get_array(ser)
diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
index 83cae8d148feb..dfc8afbdf3acb 100644
--- a/pandas/tests/series/indexing/test_indexing.py
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -284,11 +284,13 @@ def test_underlying_data_conversion(using_copy_on_write):
df["val"] = 0
df_original = df.copy()
df
- df["val"].update(s)
if using_copy_on_write:
+ with tm.raises_chained_assignment_error():
+ df["val"].update(s)
expected = df_original
else:
+ df["val"].update(s)
expected = DataFrame(
{"a": [1, 2, 3], "b": [1, 2, 3], "c": [1, 2, 3], "val": [0, 1, 0]}
)
diff --git a/pandas/tests/series/methods/test_update.py b/pandas/tests/series/methods/test_update.py
index af7e629f74227..5bf134fbeeb86 100644
--- a/pandas/tests/series/methods/test_update.py
+++ b/pandas/tests/series/methods/test_update.py
@@ -29,10 +29,12 @@ def test_update(self, using_copy_on_write):
df["c"] = df["c"].astype(object)
df_orig = df.copy()
- df["c"].update(Series(["foo"], index=[0]))
if using_copy_on_write:
+ with tm.raises_chained_assignment_error():
+ df["c"].update(Series(["foo"], index=[0]))
expected = df_orig
else:
+ df["c"].update(Series(["foo"], index=[0]))
expected = DataFrame(
[[1, np.nan, "foo"], [3, 2.0, np.nan]], columns=["a", "b", "c"]
)
| xref https://github.com/pandas-dev/pandas/issues/48998
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54024 | 2023-07-06T19:48:07Z | 2023-07-11T11:49:03Z | 2023-07-11T11:49:03Z | 2023-07-28T08:32:24Z |
CoW: Add ChainedAssignmentError for replace with inplace=True | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 44e091e12bfa6..cf7285f94b218 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -75,6 +75,7 @@ Copy-on-Write improvements
- DataFrame.update / Series.update
- DataFrame.fillna / Series.fillna
+ - DataFrame.replace / Series.replace
.. _whatsnew_210.enhancements.enhancement2:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 9084395871675..7fe3a1a9c90a4 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7623,6 +7623,15 @@ def replace(
)
inplace = validate_bool_kwarg(inplace, "inplace")
+ if inplace:
+ if not PYPY and using_copy_on_write():
+ if sys.getrefcount(self) <= REF_COUNT:
+ warnings.warn(
+ _chained_assignment_method_msg,
+ ChainedAssignmentError,
+ stacklevel=2,
+ )
+
if not is_bool(regex) and to_replace is not None:
raise ValueError("'to_replace' must be 'None' if 'regex' is not a bool")
diff --git a/pandas/tests/arrays/categorical/test_replace.py b/pandas/tests/arrays/categorical/test_replace.py
index d38f0b8719de0..0611d04d36d10 100644
--- a/pandas/tests/arrays/categorical/test_replace.py
+++ b/pandas/tests/arrays/categorical/test_replace.py
@@ -68,7 +68,8 @@ def test_replace_categorical(to_replace, value, result, expected_error_msg):
# ensure non-inplace call does not affect original
tm.assert_categorical_equal(cat, expected)
- pd.Series(cat, copy=False).replace(to_replace, value, inplace=True)
+ ser = pd.Series(cat, copy=False)
+ ser.replace(to_replace, value, inplace=True)
tm.assert_categorical_equal(cat, expected)
diff --git a/pandas/tests/copy_view/test_replace.py b/pandas/tests/copy_view/test_replace.py
index dfb1caa4b2ffd..4e80f8ed6d42a 100644
--- a/pandas/tests/copy_view/test_replace.py
+++ b/pandas/tests/copy_view/test_replace.py
@@ -373,3 +373,16 @@ def test_replace_columnwise_no_op(using_copy_on_write):
assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
df2.iloc[0, 0] = 100
tm.assert_frame_equal(df, df_orig)
+
+
+def test_replace_chained_assignment(using_copy_on_write):
+ df = DataFrame({"a": [1, np.nan, 2], "b": 1})
+ df_orig = df.copy()
+ if using_copy_on_write:
+ with tm.raises_chained_assignment_error():
+ df["a"].replace(1, 100, inplace=True)
+ tm.assert_frame_equal(df, df_orig)
+
+ with tm.raises_chained_assignment_error():
+ df[["a"]].replace(1, 100, inplace=True)
+ tm.assert_frame_equal(df, df_orig)
| xref https://github.com/pandas-dev/pandas/issues/48998
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54023 | 2023-07-06T19:37:32Z | 2023-07-17T10:12:02Z | 2023-07-17T10:12:02Z | 2023-07-17T11:56:39Z |
BUG: Fix mamba can't create environment issue | diff --git a/environment.yml b/environment.yml
index 8fd97e6fcc0e1..6178fe896760f 100644
--- a/environment.yml
+++ b/environment.yml
@@ -17,7 +17,6 @@ dependencies:
- pytest-cov
- pytest-xdist>=2.2.0
- pytest-asyncio>=0.17.0
- - pytest-localserver>=0.7.1
- coverage
# required dependencies
diff --git a/requirements-dev.txt b/requirements-dev.txt
index b1d8ce1cf2143..38a2ce7f66aa3 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -10,7 +10,6 @@ pytest>=7.3.2
pytest-cov
pytest-xdist>=2.2.0
pytest-asyncio>=0.17.0
-pytest-localserver>=0.7.1
coverage
python-dateutil
numpy
| - [ ] closes #53978
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This is the solution @mroeschke suggested to fix the broken environment on ARM Mac.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54017 | 2023-07-06T02:23:03Z | 2023-07-06T19:48:35Z | 2023-07-06T19:48:35Z | 2023-07-06T23:20:24Z |
BUG: Parameter col_space of to_html method not working with multi-level columns | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 137f2e5c12211..24def10266ab7 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -523,6 +523,7 @@ I/O
- Bug in :func:`read_html`, tail texts were removed together with elements containing ``display:none`` style (:issue:`51629`)
- Bug in :func:`read_sql` when reading multiple timezone aware columns with the same column name (:issue:`44421`)
- Bug in :func:`read_xml` stripping whitespace in string data (:issue:`53811`)
+- Bug in :meth:`DataFrame.to_html` where ``colspace`` was incorrectly applied in case of multi index columns (:issue:`53885`)
- Bug when writing and reading empty Stata dta files where dtype information was lost (:issue:`46240`)
- Bug where ``bz2`` was treated as a hard requirement (:issue:`53857`)
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index 32a0cab1fbc41..151bde4e1c4c2 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -73,10 +73,16 @@ def __init__(
self.table_id = table_id
self.render_links = render_links
- self.col_space = {
- column: f"{value}px" if isinstance(value, int) else value
- for column, value in self.fmt.col_space.items()
- }
+ self.col_space = {}
+ is_multi_index = isinstance(self.columns, MultiIndex)
+ for column, value in self.fmt.col_space.items():
+ col_space_value = f"{value}px" if isinstance(value, int) else value
+ self.col_space[column] = col_space_value
+ # GH 53885: Handling case where column is index
+ # Flatten the data in the multi index and add in the map
+ if is_multi_index and isinstance(column, tuple):
+ for column_index in column:
+ self.col_space[str(column_index)] = col_space_value
def to_string(self) -> str:
lines = self.render()
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 9c128756339ab..bea0eebef2cf6 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -896,3 +896,59 @@ def test_to_html_float_format_object_col(datapath):
result = df.to_html(float_format=lambda x: f"{x:,.0f}")
expected = expected_html(datapath, "gh40024_expected_output")
assert result == expected
+
+
+def test_to_html_multiindex_col_with_colspace():
+ # GH#53885
+ df = DataFrame([[1, 2]])
+ df.columns = MultiIndex.from_tuples([(1, 1), (2, 1)])
+ result = df.to_html(col_space=100)
+ expected = (
+ '<table border="1" class="dataframe">\n'
+ " <thead>\n"
+ " <tr>\n"
+ ' <th style="min-width: 100px;"></th>\n'
+ ' <th style="min-width: 100px;">1</th>\n'
+ ' <th style="min-width: 100px;">2</th>\n'
+ " </tr>\n"
+ " <tr>\n"
+ ' <th style="min-width: 100px;"></th>\n'
+ ' <th style="min-width: 100px;">1</th>\n'
+ ' <th style="min-width: 100px;">1</th>\n'
+ " </tr>\n"
+ " </thead>\n"
+ " <tbody>\n"
+ " <tr>\n"
+ " <th>0</th>\n"
+ " <td>1</td>\n"
+ " <td>2</td>\n"
+ " </tr>\n"
+ " </tbody>\n"
+ "</table>"
+ )
+ assert result == expected
+
+
+def test_to_html_tuple_col_with_colspace():
+ # GH#53885
+ df = DataFrame({("a", "b"): [1], "b": [2]})
+ result = df.to_html(col_space=100)
+ expected = (
+ '<table border="1" class="dataframe">\n'
+ " <thead>\n"
+ ' <tr style="text-align: right;">\n'
+ ' <th style="min-width: 100px;"></th>\n'
+ ' <th style="min-width: 100px;">(a, b)</th>\n'
+ ' <th style="min-width: 100px;">b</th>\n'
+ " </tr>\n"
+ " </thead>\n"
+ " <tbody>\n"
+ " <tr>\n"
+ " <th>0</th>\n"
+ " <td>1</td>\n"
+ " <td>2</td>\n"
+ " </tr>\n"
+ " </tbody>\n"
+ "</table>"
+ )
+ assert result == expected
| - [X] closes #53885
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54015 | 2023-07-05T22:02:36Z | 2023-07-18T23:01:35Z | 2023-07-18T23:01:35Z | 2023-07-18T23:01:54Z |
DEPR: passing mixed offsets with utc=False into to_datetime | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 3dba120c0c64b..bb51124f10e54 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -931,6 +931,8 @@ Parsing a CSV with mixed timezones
pandas cannot natively represent a column or index with mixed timezones. If your CSV
file contains columns with a mixture of timezones, the default result will be
an object-dtype column with strings, even with ``parse_dates``.
+To parse the mixed-timezone values as a datetime column, read in as ``object`` dtype and
+then call :func:`to_datetime` with ``utc=True``.
.. ipython:: python
@@ -939,14 +941,6 @@ an object-dtype column with strings, even with ``parse_dates``.
a
2000-01-01T00:00:00+05:00
2000-01-01T00:00:00+06:00"""
- df = pd.read_csv(StringIO(content), parse_dates=["a"])
- df["a"]
-
-To parse the mixed-timezone values as a datetime column, read in as ``object`` dtype and
-then call :func:`to_datetime` with ``utc=True``.
-
-.. ipython:: python
-
df = pd.read_csv(StringIO(content))
df["a"] = pd.to_datetime(df["a"], utc=True)
df["a"]
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 7f5025e6ce60b..73a523b14f9f7 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -632,13 +632,19 @@ Parsing datetime strings with the same UTC offset will preserve the UTC offset i
Parsing datetime strings with different UTC offsets will now create an Index of
``datetime.datetime`` objects with different UTC offsets
-.. ipython:: python
+.. code-block:: ipython
+
+ In [59]: idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
+ "2015-11-18 16:30:00+06:30"])
+
+ In[60]: idx
+ Out[60]: Index([2015-11-18 15:30:00+05:30, 2015-11-18 16:30:00+06:30], dtype='object')
+
+ In[61]: idx[0]
+ Out[61]: Timestamp('2015-11-18 15:30:00+0530', tz='UTC+05:30')
- idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
- "2015-11-18 16:30:00+06:30"])
- idx
- idx[0]
- idx[1]
+ In[62]: idx[1]
+ Out[62]: Timestamp('2015-11-18 16:30:00+0630', tz='UTC+06:30')
Passing ``utc=True`` will mimic the previous behavior but will correctly indicate
that the dates have been converted to UTC
@@ -673,15 +679,22 @@ Parsing mixed-timezones with :func:`read_csv`
*New behavior*
-.. ipython:: python
+.. code-block:: ipython
+
+ In[64]: import io
+
+ In[65]: content = """\
+ ...: a
+ ...: 2000-01-01T00:00:00+05:00
+ ...: 2000-01-01T00:00:00+06:00"""
+
+ In[66]: df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
- import io
- content = """\
- a
- 2000-01-01T00:00:00+05:00
- 2000-01-01T00:00:00+06:00"""
- df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
- df.a
+ In[67]: df.a
+ Out[67]:
+ 0 2000-01-01 00:00:00+05:00
+ 1 2000-01-01 00:00:00+06:00
+ Name: a, Length: 2, dtype: object
As can be seen, the ``dtype`` is object; each value in the column is a string.
To convert the strings to an array of datetimes, the ``date_parser`` argument
diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index f31ab02725394..dd566eaab1e75 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -208,7 +208,14 @@ For example:
tz_strs = ["2010-01-01 12:00:00 +0100", "2010-01-01 12:00:00 -0100",
"2010-01-01 12:00:00 +0300", "2010-01-01 12:00:00 +0400"]
pd.to_datetime(tz_strs, format='%Y-%m-%d %H:%M:%S %z', utc=True)
- pd.to_datetime(tz_strs, format='%Y-%m-%d %H:%M:%S %z')
+
+.. code-block:: ipython
+
+ In[37]: pd.to_datetime(tz_strs, format='%Y-%m-%d %H:%M:%S %z')
+ Out[37]:
+ Index([2010-01-01 12:00:00+01:00, 2010-01-01 12:00:00-01:00,
+ 2010-01-01 12:00:00+03:00, 2010-01-01 12:00:00+04:00],
+ dtype='object')
.. _whatsnew_110.grouper_resample_origin:
diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index d0dae450735a3..91efcfd590c01 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -295,8 +295,53 @@ Other API changes
.. ---------------------------------------------------------------------------
.. _whatsnew_210.deprecations:
-Deprecations
-~~~~~~~~~~~~
+Deprecate parsing datetimes with mixed time zones
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Parsing datetimes with mixed time zones is deprecated and shows a warning unless user passes ``utc=True`` to :func:`to_datetime` (:issue:`50887`)
+
+*Previous behavior*:
+
+.. code-block:: ipython
+
+ In [7]: data = ["2020-01-01 00:00:00+06:00", "2020-01-01 00:00:00+01:00"]
+
+ In [8]: pd.to_datetime(data, utc=False)
+ Out[8]:
+ Index([2020-01-01 00:00:00+06:00, 2020-01-01 00:00:00+01:00], dtype='object')
+
+*New behavior*:
+
+.. code-block:: ipython
+
+ In [9]: pd.to_datetime(data, utc=False)
+ FutureWarning:
+ In a future version of pandas, parsing datetimes with mixed time zones will raise
+ a warning unless `utc=True`. Please specify `utc=True` to opt in to the new behaviour
+ and silence this warning. To create a `Series` with mixed offsets and `object` dtype,
+ please use `apply` and `datetime.datetime.strptime`.
+ Index([2020-01-01 00:00:00+06:00, 2020-01-01 00:00:00+01:00], dtype='object')
+
+In order to silence this warning and avoid an error in a future version of pandas,
+please specify ``utc=True``:
+
+.. ipython:: python
+
+ data = ["2020-01-01 00:00:00+06:00", "2020-01-01 00:00:00+01:00"]
+ pd.to_datetime(data, utc=True)
+
+To create a ``Series`` with mixed offsets and ``object`` dtype, please use ``apply``
+and ``datetime.datetime.strptime``:
+
+.. ipython:: python
+
+ import datetime as dt
+
+ data = ["2020-01-01 00:00:00+06:00", "2020-01-01 00:00:00+01:00"]
+ pd.Series(data).apply(lambda x: dt.datetime.strptime(x, '%Y-%m-%d %H:%M:%S%z'))
+
+Other Deprecations
+~~~~~~~~~~~~~~~~~~
- Deprecated 'broadcast_axis' keyword in :meth:`Series.align` and :meth:`DataFrame.align`, upcast before calling ``align`` with ``left = DataFrame({col: left for col in right.columns}, index=right.index)`` (:issue:`51856`)
- Deprecated 'downcast' keyword in :meth:`Index.fillna` (:issue:`53956`)
- Deprecated 'fill_method' and 'limit' keywords in :meth:`DataFrame.pct_change`, :meth:`Series.pct_change`, :meth:`DataFrameGroupBy.pct_change`, and :meth:`SeriesGroupBy.pct_change`, explicitly call ``ffill`` or ``bfill`` before calling ``pct_change`` instead (:issue:`53491`)
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index 106f203a16855..20a18cf56779f 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -620,6 +620,7 @@ cdef _array_to_datetime_object(
# 1) NaT or NaT-like values
# 2) datetime strings, which we return as datetime.datetime
# 3) special strings - "now" & "today"
+ unique_timezones = set()
for i in range(n):
# Analogous to: val = values[i]
val = <object>(<PyObject**>cnp.PyArray_MultiIter_DATA(mi, 1))[0]
@@ -649,6 +650,7 @@ cdef _array_to_datetime_object(
tzinfo=tsobj.tzinfo,
fold=tsobj.fold,
)
+ unique_timezones.add(tsobj.tzinfo)
except (ValueError, OverflowError) as ex:
ex.args = (f"{ex}, at position {i}", )
@@ -666,6 +668,16 @@ cdef _array_to_datetime_object(
cnp.PyArray_MultiIter_NEXT(mi)
+ if len(unique_timezones) > 1:
+ warnings.warn(
+ "In a future version of pandas, parsing datetimes with mixed time "
+ "zones will raise a warning unless `utc=True`. "
+ "Please specify `utc=True` to opt in to the new behaviour "
+ "and silence this warning. To create a `Series` with mixed offsets and "
+ "`object` dtype, please use `apply` and `datetime.datetime.strptime`",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
return oresult_nd, None
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 801968bd59f4e..95faea468fb5d 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -340,6 +340,7 @@ def _return_parsed_timezone_results(
tz_result : Index-like of parsed dates with timezone
"""
tz_results = np.empty(len(result), dtype=object)
+ non_na_timezones = set()
for zone in unique(timezones):
mask = timezones == zone
dta = DatetimeArray(result[mask]).tz_localize(zone)
@@ -348,8 +349,20 @@ def _return_parsed_timezone_results(
dta = dta.tz_localize("utc")
else:
dta = dta.tz_convert("utc")
+ else:
+ if not dta.isna().all():
+ non_na_timezones.add(zone)
tz_results[mask] = dta
-
+ if len(non_na_timezones) > 1:
+ warnings.warn(
+ "In a future version of pandas, parsing datetimes with mixed time "
+ "zones will raise a warning unless `utc=True`. Please specify `utc=True` "
+ "to opt in to the new behaviour and silence this warning. "
+ "To create a `Series` with mixed offsets and `object` dtype, "
+ "please use `apply` and `datetime.datetime.strptime`",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
return Index(tz_results, name=name)
@@ -772,6 +785,14 @@ def to_datetime(
offsets (typically, daylight savings), see :ref:`Examples
<to_datetime_tz_examples>` section for details.
+ .. warning::
+
+ In a future version of pandas, parsing datetimes with mixed time
+ zones will raise a warning unless `utc=True`.
+ Please specify `utc=True` to opt in to the new behaviour
+ and silence this warning. To create a `Series` with mixed offsets and
+ `object` dtype, please use `apply` and `datetime.datetime.strptime`.
+
See also: pandas general documentation about `timezone conversion and
localization
<https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
@@ -993,11 +1014,19 @@ def to_datetime(
- However, timezone-aware inputs *with mixed time offsets* (for example
issued from a timezone with daylight savings, such as Europe/Paris)
- are **not successfully converted** to a :class:`DatetimeIndex`. Instead a
- simple :class:`Index` containing :class:`datetime.datetime` objects is
- returned:
-
- >>> pd.to_datetime(['2020-10-25 02:00 +0200', '2020-10-25 04:00 +0100'])
+ are **not successfully converted** to a :class:`DatetimeIndex`.
+ Parsing datetimes with mixed time zones will show a warning unless
+ `utc=True`. If you specify `utc=False` the warning below will be shown
+ and a simple :class:`Index` containing :class:`datetime.datetime`
+ objects will be returned:
+
+ >>> pd.to_datetime(['2020-10-25 02:00 +0200',
+ ... '2020-10-25 04:00 +0100']) # doctest: +SKIP
+ FutureWarning: In a future version of pandas, parsing datetimes with mixed
+ time zones will raise a warning unless `utc=True`. Please specify `utc=True`
+ to opt in to the new behaviour and silence this warning. To create a `Series`
+ with mixed offsets and `object` dtype, please use `apply` and
+ `datetime.datetime.strptime`.
Index([2020-10-25 02:00:00+02:00, 2020-10-25 04:00:00+01:00],
dtype='object')
@@ -1005,7 +1034,13 @@ def to_datetime(
a simple :class:`Index` containing :class:`datetime.datetime` objects:
>>> from datetime import datetime
- >>> pd.to_datetime(["2020-01-01 01:00:00-01:00", datetime(2020, 1, 1, 3, 0)])
+ >>> pd.to_datetime(["2020-01-01 01:00:00-01:00",
+ ... datetime(2020, 1, 1, 3, 0)]) # doctest: +SKIP
+ FutureWarning: In a future version of pandas, parsing datetimes with mixed
+ time zones will raise a warning unless `utc=True`. Please specify `utc=True`
+ to opt in to the new behaviour and silence this warning. To create a `Series`
+ with mixed offsets and `object` dtype, please use `apply` and
+ `datetime.datetime.strptime`.
Index([2020-01-01 01:00:00-01:00, 2020-01-01 03:00:00], dtype='object')
|
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index fb45622dac3af..833f4986b6da6 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -1312,7 +1312,14 @@ def _try_convert_to_date(self, data):
date_units = (self.date_unit,) if self.date_unit else self._STAMP_UNITS
for date_unit in date_units:
try:
- new_data = to_datetime(new_data, errors="raise", unit=date_unit)
+ with warnings.catch_warnings():
+ warnings.filterwarnings(
+ "ignore",
+ ".*parsing datetimes with mixed time "
+ "zones will raise a warning",
+ category=FutureWarning,
+ )
+ new_data = to_datetime(new_data, errors="raise", unit=date_unit)
except (ValueError, OverflowError, TypeError):
continue
return new_data, True
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 0a90deedf7ad2..60996a7d42187 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -1144,14 +1144,20 @@ def converter(*date_cols, col: Hashable):
date_format.get(col) if isinstance(date_format, dict) else date_format
)
- result = tools.to_datetime(
- ensure_object(strs),
- format=date_fmt,
- utc=False,
- dayfirst=dayfirst,
- errors="ignore",
- cache=cache_dates,
- )
+ with warnings.catch_warnings():
+ warnings.filterwarnings(
+ "ignore",
+ ".*parsing datetimes with mixed time zones will raise a warning",
+ category=FutureWarning,
+ )
+ result = tools.to_datetime(
+ ensure_object(strs),
+ format=date_fmt,
+ utc=False,
+ dayfirst=dayfirst,
+ errors="ignore",
+ cache=cache_dates,
+ )
if isinstance(result, DatetimeIndex):
arr = result.to_numpy()
arr.flags.writeable = True
@@ -1159,22 +1165,38 @@ def converter(*date_cols, col: Hashable):
return result._values
else:
try:
- result = tools.to_datetime(
- date_parser(*(unpack_if_single_element(arg) for arg in date_cols)),
- errors="ignore",
- cache=cache_dates,
- )
+ with warnings.catch_warnings():
+ warnings.filterwarnings(
+ "ignore",
+ ".*parsing datetimes with mixed time zones "
+ "will raise a warning",
+ category=FutureWarning,
+ )
+ result = tools.to_datetime(
+ date_parser(
+ *(unpack_if_single_element(arg) for arg in date_cols)
+ ),
+ errors="ignore",
+ cache=cache_dates,
+ )
if isinstance(result, datetime.datetime):
raise Exception("scalar parser")
return result
except Exception:
- return tools.to_datetime(
- parsing.try_parse_dates(
- parsing.concat_date_cols(date_cols),
- parser=date_parser,
- ),
- errors="ignore",
- )
+ with warnings.catch_warnings():
+ warnings.filterwarnings(
+ "ignore",
+ ".*parsing datetimes with mixed time zones "
+ "will raise a warning",
+ category=FutureWarning,
+ )
+ return tools.to_datetime(
+ parsing.try_parse_dates(
+ parsing.concat_date_cols(date_cols),
+ parser=date_parser,
+ ),
+ errors="ignore",
+ )
return converter
diff --git a/pandas/tests/indexes/datetimes/test_constructors.py b/pandas/tests/indexes/datetimes/test_constructors.py
index 6d18a292061b9..733c14f33567a 100644
--- a/pandas/tests/indexes/datetimes/test_constructors.py
+++ b/pandas/tests/indexes/datetimes/test_constructors.py
@@ -300,8 +300,10 @@ def test_construction_index_with_mixed_timezones(self):
assert not isinstance(result, DatetimeIndex)
msg = "DatetimeIndex has mixed timezones"
+ msg_depr = "parsing datetimes with mixed time zones will raise a warning"
with pytest.raises(TypeError, match=msg):
- DatetimeIndex(["2013-11-02 22:00-05:00", "2013-11-03 22:00-06:00"])
+ with tm.assert_produces_warning(FutureWarning, match=msg_depr):
+ DatetimeIndex(["2013-11-02 22:00-05:00", "2013-11-03 22:00-06:00"])
# length = 1
result = Index([Timestamp("2011-01-01")], name="idx")
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 5ea0ca1fddbd3..e5dfae169453f 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -446,19 +446,6 @@ def test_to_datetime_format_weeks(self, value, fmt, expected, cache):
["2010-01-01 12:00:00 UTC"] * 2,
[Timestamp("2010-01-01 12:00:00", tz="UTC")] * 2,
],
- [
- "%Y-%m-%d %H:%M:%S %Z",
- [
- "2010-01-01 12:00:00 UTC",
- "2010-01-01 12:00:00 GMT",
- "2010-01-01 12:00:00 US/Pacific",
- ],
- [
- Timestamp("2010-01-01 12:00:00", tz="UTC"),
- Timestamp("2010-01-01 12:00:00", tz="GMT"),
- Timestamp("2010-01-01 12:00:00", tz="US/Pacific"),
- ],
- ],
[
"%Y-%m-%d %H:%M:%S%z",
["2010-01-01 12:00:00+0100"] * 2,
@@ -479,18 +466,6 @@ def test_to_datetime_format_weeks(self, value, fmt, expected, cache):
]
* 2,
],
- [
- "%Y-%m-%d %H:%M:%S %z",
- ["2010-01-01 12:00:00 +0100", "2010-01-01 12:00:00 -0100"],
- [
- Timestamp(
- "2010-01-01 12:00:00", tzinfo=timezone(timedelta(minutes=60))
- ),
- Timestamp(
- "2010-01-01 12:00:00", tzinfo=timezone(timedelta(minutes=-60))
- ),
- ],
- ],
[
"%Y-%m-%d %H:%M:%S %z",
["2010-01-01 12:00:00 Z", "2010-01-01 12:00:00 Z"],
@@ -509,6 +484,46 @@ def test_to_datetime_parse_tzname_or_tzoffset(self, fmt, dates, expected_dates):
expected = Index(expected_dates)
tm.assert_equal(result, expected)
+ @pytest.mark.parametrize(
+ "fmt,dates,expected_dates",
+ [
+ [
+ "%Y-%m-%d %H:%M:%S %Z",
+ [
+ "2010-01-01 12:00:00 UTC",
+ "2010-01-01 12:00:00 GMT",
+ "2010-01-01 12:00:00 US/Pacific",
+ ],
+ [
+ Timestamp("2010-01-01 12:00:00", tz="UTC"),
+ Timestamp("2010-01-01 12:00:00", tz="GMT"),
+ Timestamp("2010-01-01 12:00:00", tz="US/Pacific"),
+ ],
+ ],
+ [
+ "%Y-%m-%d %H:%M:%S %z",
+ ["2010-01-01 12:00:00 +0100", "2010-01-01 12:00:00 -0100"],
+ [
+ Timestamp(
+ "2010-01-01 12:00:00", tzinfo=timezone(timedelta(minutes=60))
+ ),
+ Timestamp(
+ "2010-01-01 12:00:00", tzinfo=timezone(timedelta(minutes=-60))
+ ),
+ ],
+ ],
+ ],
+ )
+ def test_to_datetime_parse_tzname_or_tzoffset_utc_false_deprecated(
+ self, fmt, dates, expected_dates
+ ):
+ # GH 13486, 50887
+ msg = "parsing datetimes with mixed time zones will raise a warning"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ result = to_datetime(dates, format=fmt)
+ expected = Index(expected_dates)
+ tm.assert_equal(result, expected)
+
def test_to_datetime_parse_tzname_or_tzoffset_different_tz_to_utc(self):
# GH 32792
dates = [
@@ -632,17 +647,6 @@ def test_to_datetime_mixed_date_and_string(self, format):
),
id="all tz-aware, mixed offsets, with utc",
),
- pytest.param(
- False,
- ["2000-01-01 01:00:00", "2000-01-01 02:00:00+00:00"],
- Index(
- [
- Timestamp("2000-01-01 01:00:00"),
- Timestamp("2000-01-01 02:00:00+0000", tz="UTC"),
- ],
- ),
- id="tz-aware string, naive pydatetime, without utc",
- ),
pytest.param(
True,
["2000-01-01 01:00:00", "2000-01-01 02:00:00+00:00"],
@@ -671,20 +675,41 @@ def test_to_datetime_mixed_datetime_and_string_with_format(
tm.assert_index_equal(result, expected)
@pytest.mark.parametrize(
- "fmt, utc, expected",
+ "fmt",
+ ["%Y-%d-%m %H:%M:%S%z", "%Y-%m-%d %H:%M:%S%z"],
+ ids=["non-ISO8601 format", "ISO8601 format"],
+ )
+ @pytest.mark.parametrize(
+ "constructor",
+ [Timestamp, lambda x: Timestamp(x).to_pydatetime()],
+ )
+ def test_to_datetime_mixed_datetime_and_string_with_format_mixed_offsets_utc_false(
+ self, fmt, constructor
+ ):
+ # https://github.com/pandas-dev/pandas/issues/49298
+ # https://github.com/pandas-dev/pandas/issues/50254
+ # note: ISO8601 formats go down a fastpath, so we need to check both
+ # a ISO8601 format and a non-ISO8601 one
+ args = ["2000-01-01 01:00:00", "2000-01-01 02:00:00+00:00"]
+ ts1 = constructor(args[0])
+ ts2 = args[1]
+ msg = "parsing datetimes with mixed time zones will raise a warning"
+
+ expected = Index(
+ [
+ Timestamp("2000-01-01 01:00:00"),
+ Timestamp("2000-01-01 02:00:00+0000", tz="UTC"),
+ ],
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ result = to_datetime([ts1, ts2], format=fmt, utc=False)
+ tm.assert_index_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "fmt, expected",
[
pytest.param(
"%Y-%m-%d %H:%M:%S%z",
- True,
- DatetimeIndex(
- ["2000-01-01 08:00:00+00:00", "2000-01-02 00:00:00+00:00", "NaT"],
- dtype="datetime64[ns, UTC]",
- ),
- id="ISO8601, UTC",
- ),
- pytest.param(
- "%Y-%m-%d %H:%M:%S%z",
- False,
Index(
[
Timestamp("2000-01-01 09:00:00+0100", tz="UTC+01:00"),
@@ -696,16 +721,6 @@ def test_to_datetime_mixed_datetime_and_string_with_format(
),
pytest.param(
"%Y-%d-%m %H:%M:%S%z",
- True,
- DatetimeIndex(
- ["2000-01-01 08:00:00+00:00", "2000-02-01 00:00:00+00:00", "NaT"],
- dtype="datetime64[ns, UTC]",
- ),
- id="non-ISO8601, UTC",
- ),
- pytest.param(
- "%Y-%d-%m %H:%M:%S%z",
- False,
Index(
[
Timestamp("2000-01-01 09:00:00+0100", tz="UTC+01:00"),
@@ -717,12 +732,45 @@ def test_to_datetime_mixed_datetime_and_string_with_format(
),
],
)
- def test_to_datetime_mixed_offsets_with_none(self, fmt, utc, expected):
+ def test_to_datetime_mixed_offsets_with_none_tz(self, fmt, expected):
+ # https://github.com/pandas-dev/pandas/issues/50071
+ msg = "parsing datetimes with mixed time zones will raise a warning"
+
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ result = to_datetime(
+ ["2000-01-01 09:00:00+01:00", "2000-01-02 02:00:00+02:00", None],
+ format=fmt,
+ utc=False,
+ )
+ tm.assert_index_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "fmt, expected",
+ [
+ pytest.param(
+ "%Y-%m-%d %H:%M:%S%z",
+ DatetimeIndex(
+ ["2000-01-01 08:00:00+00:00", "2000-01-02 00:00:00+00:00", "NaT"],
+ dtype="datetime64[ns, UTC]",
+ ),
+ id="ISO8601, UTC",
+ ),
+ pytest.param(
+ "%Y-%d-%m %H:%M:%S%z",
+ DatetimeIndex(
+ ["2000-01-01 08:00:00+00:00", "2000-02-01 00:00:00+00:00", "NaT"],
+ dtype="datetime64[ns, UTC]",
+ ),
+ id="non-ISO8601, UTC",
+ ),
+ ],
+ )
+ def test_to_datetime_mixed_offsets_with_none(self, fmt, expected):
# https://github.com/pandas-dev/pandas/issues/50071
result = to_datetime(
["2000-01-01 09:00:00+01:00", "2000-01-02 02:00:00+02:00", None],
format=fmt,
- utc=utc,
+ utc=True,
)
tm.assert_index_equal(result, expected)
@@ -1188,7 +1236,9 @@ def test_to_datetime_different_offsets(self, cache):
ts_string_2 = "March 1, 2018 12:00:00+0500"
arr = [ts_string_1] * 5 + [ts_string_2] * 5
expected = Index([parse(x) for x in arr])
- result = to_datetime(arr, cache=cache)
+ msg = "parsing datetimes with mixed time zones will raise a warning"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ result = to_datetime(arr, cache=cache)
tm.assert_index_equal(result, expected)
def test_to_datetime_tz_pytz(self, cache):
@@ -1554,7 +1604,9 @@ def test_to_datetime_coerce(self):
"March 1, 2018 12:00:00+0500",
"20100240",
]
- result = to_datetime(ts_strings, errors="coerce")
+ msg = "parsing datetimes with mixed time zones will raise a warning"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ result = to_datetime(ts_strings, errors="coerce")
expected = Index(
[
datetime(2018, 3, 1, 12, 0, tzinfo=tzoffset(None, 14400)),
@@ -1635,9 +1687,11 @@ def test_iso_8601_strings_with_same_offset(self):
tm.assert_index_equal(result, expected)
def test_iso_8601_strings_with_different_offsets(self):
- # GH 17697, 11736
+ # GH 17697, 11736, 50887
ts_strings = ["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30", NaT]
- result = to_datetime(ts_strings)
+ msg = "parsing datetimes with mixed time zones will raise a warning"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ result = to_datetime(ts_strings)
expected = np.array(
[
datetime(2015, 11, 18, 15, 30, tzinfo=tzoffset(None, 19800)),
@@ -1675,7 +1729,9 @@ def test_mixed_offsets_with_native_datetime_raises(self):
now = Timestamp("now")
today = Timestamp("today")
- mixed = to_datetime(ser)
+ msg = "parsing datetimes with mixed time zones will raise a warning"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ mixed = to_datetime(ser)
expected = Series(
[
"NaT",
@@ -1741,6 +1797,23 @@ def test_to_datetime_fixed_offset(self):
result = to_datetime(dates)
assert result.tz == fixed_off
+ @pytest.mark.parametrize(
+ "date",
+ [
+ ["2020-10-26 00:00:00+06:00", "2020-10-26 00:00:00+01:00"],
+ ["2020-10-26 00:00:00+06:00", Timestamp("2018-01-01", tz="US/Pacific")],
+ [
+ "2020-10-26 00:00:00+06:00",
+ datetime(2020, 1, 1, 18, tzinfo=pytz.timezone("Australia/Melbourne")),
+ ],
+ ],
+ )
+ def test_to_datetime_mixed_offsets_with_utc_false_deprecated(self, date):
+ # GH 50887
+ msg = "parsing datetimes with mixed time zones will raise a warning"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ to_datetime(date, utc=False)
+
class TestToDatetimeUnit:
@pytest.mark.parametrize("unit", ["Y", "M"])
@@ -3613,3 +3686,20 @@ def test_from_numeric_arrow_dtype(any_numeric_ea_dtype):
result = to_datetime(ser)
expected = Series([1, 2], dtype="datetime64[ns]")
tm.assert_series_equal(result, expected)
+
+
+def test_to_datetime_with_empty_str_utc_false_format_mixed():
+ # GH 50887
+ result = to_datetime(["2020-01-01 00:00+00:00", ""], format="mixed")
+ expected = Index([Timestamp("2020-01-01 00:00+00:00"), "NaT"], dtype=object)
+ tm.assert_index_equal(result, expected)
+
+
+def test_to_datetime_with_empty_str_utc_false_offsets_and_format_mixed():
+ # GH 50887
+ msg = "parsing datetimes with mixed time zones will raise a warning"
+
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ to_datetime(
+ ["2020-01-01 00:00+00:00", "2020-01-01 00:00+02:00", ""], format="mixed"
+ )
diff --git a/pandas/tests/tslibs/test_array_to_datetime.py b/pandas/tests/tslibs/test_array_to_datetime.py
index ba188c3182f57..435fe5f4b90d8 100644
--- a/pandas/tests/tslibs/test_array_to_datetime.py
+++ b/pandas/tests/tslibs/test_array_to_datetime.py
@@ -85,7 +85,9 @@ def test_parsing_different_timezone_offsets():
data = ["2015-11-18 15:30:00+05:30", "2015-11-18 15:30:00+06:30"]
data = np.array(data, dtype=object)
- result, result_tz = tslib.array_to_datetime(data)
+ msg = "parsing datetimes with mixed time zones will raise a warning"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ result, result_tz = tslib.array_to_datetime(data)
expected = np.array(
[
datetime(2015, 11, 18, 15, 30, tzinfo=tzoffset(None, 19800)),
| xref #50887
Parsing datetimes with mixed time zones will raise a warning if `utc=False`. We advise users to pass `utc=True` to silence this warning. | https://api.github.com/repos/pandas-dev/pandas/pulls/54014 | 2023-07-05T21:55:09Z | 2023-07-27T19:03:22Z | 2023-07-27T19:03:22Z | 2023-11-27T07:39:04Z
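A stdlib-only sketch of why mixed offsets need `utc=True` (the data values are illustrative, taken from the whatsnew example above): each string carries its own fixed offset, so there is no single tz-aware dtype that fits them all until everything is converted to UTC, which is what `utc=True` does.

```python
from datetime import datetime, timezone

# Two timestamps with different UTC offsets, as in the deprecation note.
data = ["2020-01-01 00:00:00+06:00", "2020-01-01 00:00:00+01:00"]

# Parsing each string keeps its own fixed offset -- analogous to the
# object-dtype Index pandas returns when utc=False.
parsed = [datetime.strptime(s, "%Y-%m-%d %H:%M:%S%z") for s in data]
offsets = {d.utcoffset() for d in parsed}
assert len(offsets) == 2  # mixed offsets: no common timezone

# Converting everything to UTC -- analogous to utc=True -- yields a
# single uniform timezone, suitable for a proper datetime dtype.
as_utc = [d.astimezone(timezone.utc) for d in parsed]
assert {d.tzinfo for d in as_utc} == {timezone.utc}
```

The `apply`-with-`strptime` pattern recommended in the deprecation message is essentially the first half of this sketch applied elementwise to a Series.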
CLN: Remove unnecessary pa version check | diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 17120d0de5c5f..284044dfadfef 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -2026,8 +2026,6 @@ def _str_repeat(self, repeats: int | Sequence[int]):
raise NotImplementedError(
f"repeat is not implemented when repeats is {type(repeats).__name__}"
)
- elif pa_version_under7p0:
- raise NotImplementedError("repeat is not implemented for pyarrow < 7")
else:
return type(self)(pc.binary_repeat(self._pa_array, repeats))
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54012 | 2023-07-05T21:04:55Z | 2023-07-06T20:25:51Z | 2023-07-06T20:25:51Z | 2023-07-06T20:26:02Z |
API/CoW: Return copies for head and tail | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 44e091e12bfa6..8fa9c28c4bad2 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -65,6 +65,7 @@ Copy-on-Write improvements
- The :class:`DataFrame` constructor, when constructing a DataFrame from a dictionary
of Index objects and specifying ``copy=False``, will now use a lazy copy
of those Index objects for the columns of the DataFrame (:issue:`52947`)
+- :meth:`DataFrame.head` and :meth:`DataFrame.tail` will now return deep copies (:issue:`54011`)
- Add lazy copy mechanism to :meth:`DataFrame.eval` (:issue:`53746`)
- Trying to operate inplace on a temporary column selection
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 9084395871675..a61bc0e97cf1a 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -5786,6 +5786,8 @@ def head(self, n: int = 5) -> Self:
4 monkey
5 parrot
"""
+ if using_copy_on_write():
+ return self.iloc[:n].copy()
return self.iloc[:n]
@final
@@ -5861,6 +5863,10 @@ def tail(self, n: int = 5) -> Self:
7 whale
8 zebra
"""
+ if using_copy_on_write():
+ if n == 0:
+ return self.iloc[0:0].copy()
+ return self.iloc[-n:].copy()
if n == 0:
return self.iloc[0:0]
return self.iloc[-n:]
diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index e9952e5f4d977..bd5895bc5d970 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -895,16 +895,19 @@ def test_head_tail(method, using_copy_on_write):
df2._mgr._verify_integrity()
if using_copy_on_write:
- assert np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
- assert np.shares_memory(get_array(df2, "b"), get_array(df, "b"))
+ # We are explicitly deviating for CoW here to make an eager copy (avoids
+ # tracking references for very cheap ops)
+ assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+ assert not np.shares_memory(get_array(df2, "b"), get_array(df, "b"))
# modify df2 to trigger CoW for that block
df2.iloc[0, 0] = 0
- assert np.shares_memory(get_array(df2, "b"), get_array(df, "b"))
if using_copy_on_write:
+ assert not np.shares_memory(get_array(df2, "b"), get_array(df, "b"))
assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
else:
# without CoW enabled, head and tail return views. Mutating df2 also mutates df.
+ assert np.shares_memory(get_array(df2, "b"), get_array(df, "b"))
df2.iloc[0, 0] = 1
tm.assert_frame_equal(df, df_orig)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
this is something we talked about to avoid messing up references in Jupyter environments (xref https://github.com/pandas-dev/pandas/issues/48998) | https://api.github.com/repos/pandas-dev/pandas/pulls/54011 | 2023-07-05T20:55:30Z | 2023-07-17T13:58:05Z | 2023-07-17T13:58:05Z | 2023-07-17T13:58:09Z |
DOC: Added to 10mins guide | diff --git a/doc/source/user_guide/10min.rst b/doc/source/user_guide/10min.rst
index 7c98c99fecd5b..cb3c4ab3de658 100644
--- a/doc/source/user_guide/10min.rst
+++ b/doc/source/user_guide/10min.rst
@@ -16,6 +16,16 @@ Customarily, we import as follows:
import numpy as np
import pandas as pd
+Basic data structures in pandas
+-------------------------------
+
+Pandas provides two types of classes for handling data:
+
+1. :class:`Series`: a one-dimensional labeled array holding data of any type
+ such as integers, strings, Python objects etc.
+2. :class:`DataFrame`: a two-dimensional data structure that holds data like
+ a two-dimension array or a table with rows and columns.
+
Object creation
---------------
| - [ ] closes #47282
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
| https://api.github.com/repos/pandas-dev/pandas/pulls/54010 | 2023-07-05T18:13:39Z | 2023-07-06T20:31:31Z | 2023-07-06T20:31:31Z | 2023-07-06T20:31:55Z |
CLN: is_mixed_type, is_homogeneous_type | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index ae43a44d68f1c..f90b5c0eedbe8 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -961,13 +961,6 @@ def _is_homogeneous_type(self) -> bool:
-------
bool
- See Also
- --------
- Index._is_homogeneous_type : Whether the object has a single
- dtype.
- MultiIndex._is_homogeneous_type : Whether all the levels of a
- MultiIndex have the same dtype.
-
Examples
--------
>>> DataFrame({"A": [1, 2], "B": [3, 4]})._is_homogeneous_type
@@ -983,12 +976,8 @@ def _is_homogeneous_type(self) -> bool:
... "B": np.array([1, 2], dtype=np.int64)})._is_homogeneous_type
False
"""
- if isinstance(self._mgr, ArrayManager):
- return len({arr.dtype for arr in self._mgr.arrays}) == 1
- if self._mgr.any_extension_types:
- return len({block.dtype for block in self._mgr.blocks}) == 1
- else:
- return not self._is_mixed_type
+ # The "<" part of "<=" here is for empty DataFrame cases
+ return len({arr.dtype for arr in self._mgr.arrays}) <= 1
@property
def _can_fast_transpose(self) -> bool:
@@ -4958,7 +4947,7 @@ def _reindex_multi(
if row_indexer is not None and col_indexer is not None:
# Fastpath. By doing two 'take's at once we avoid making an
# unnecessary copy.
- # We only get here with `not self._is_mixed_type`, which (almost)
+ # We only get here with `self._can_fast_transpose`, which (almost)
# ensures that self.values is cheap. It may be worth making this
# condition more specific.
indexer = row_indexer, col_indexer
@@ -10849,17 +10838,7 @@ def count(self, axis: Axis = 0, numeric_only: bool = False):
if len(frame._get_axis(axis)) == 0:
result = self._constructor_sliced(0, index=frame._get_agg_axis(axis))
else:
- if frame._is_mixed_type or frame._mgr.any_extension_types:
- # the or any_extension_types is really only hit for single-
- # column frames with an extension array
- result = notna(frame).sum(axis=axis)
- else:
- # GH13407
- series_counts = notna(frame).sum(axis=axis)
- counts = series_counts._values
- result = self._constructor_sliced(
- counts, index=frame._get_agg_axis(axis), copy=False
- )
+ result = notna(frame).sum(axis=axis)
return result.astype("int64").__finalize__(self, method="count")
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f049e9d479b26..ec0f477a7d0ff 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -5536,12 +5536,9 @@ def _needs_reindex_multi(self, axes, method, level: Level | None) -> bool_t:
(common.count_not_none(*axes.values()) == self._AXIS_LEN)
and method is None
and level is None
- and not self._is_mixed_type
- and not (
- self.ndim == 2
- and len(self.dtypes) == 1
- and isinstance(self.dtypes.iloc[0], ExtensionDtype)
- )
+ # reindex_multi calls self.values, so we only want to go
+ # down that path when doing so is cheap.
+ and self._can_fast_transpose
)
def _reindex_multi(self, axes, copy, fill_value):
@@ -6266,9 +6263,11 @@ def _consolidate(self):
self
)
+ @final
@property
def _is_mixed_type(self) -> bool_t:
if self._mgr.is_single_block:
+ # Includes all Series cases
return False
if self._mgr.any_extension_types:
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index f402c9ced0e19..431de70a25392 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -347,10 +347,6 @@ def _convert(arr):
def to_native_types(self, **kwargs) -> Self:
return self.apply(to_native_types, **kwargs)
- @property
- def is_mixed_type(self) -> bool:
- return True
-
@property
def any_extension_types(self) -> bool:
"""Whether any of the blocks in this manager are extension blocks"""
diff --git a/pandas/core/series.py b/pandas/core/series.py
index e59a4cfc3fcc1..e1b76fe132a11 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1383,7 +1383,6 @@ def _maybe_update_cacher(
return
cacher = getattr(self, "_cacher", None)
if cacher is not None:
- assert self.ndim == 1
ref: DataFrame = cacher[1]()
# we are trying to reference a dead referent, hence
@@ -1407,10 +1406,6 @@ def _maybe_update_cacher(
# ----------------------------------------------------------------------
# Unsorted
- @property
- def _is_mixed_type(self) -> bool:
- return False
-
def repeat(self, repeats: int | Sequence[int], axis: None = None) -> Series:
"""
Repeat elements of a Series.
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54008 | 2023-07-05T16:47:44Z | 2023-07-06T20:33:54Z | 2023-07-06T20:33:54Z | 2023-07-06T20:34:53Z |
ENH: Use explicit methods instead of regex pattern in arrow strings | diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index fa56571d7b0b0..12f4b5486b6b9 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -307,28 +307,31 @@ def _str_contains(
return super()._str_contains(pat, case, flags, na, regex)
if regex:
- if case is False:
- fallback_performancewarning()
- return super()._str_contains(pat, case, flags, na, regex)
- else:
- result = pc.match_substring_regex(self._pa_array, pat)
+ result = pc.match_substring_regex(self._pa_array, pat, ignore_case=not case)
else:
- if case:
- result = pc.match_substring(self._pa_array, pat)
- else:
- result = pc.match_substring(pc.utf8_upper(self._pa_array), pat.upper())
+ result = pc.match_substring(self._pa_array, pat, ignore_case=not case)
result = BooleanDtype().__from_arrow__(result)
if not isna(na):
result[isna(result)] = bool(na)
return result
def _str_startswith(self, pat: str, na=None):
- pat = f"^{re.escape(pat)}"
- return self._str_contains(pat, na=na, regex=True)
+ result = pc.starts_with(self._pa_array, pattern=pat)
+ if not isna(na):
+ result = result.fill_null(na)
+ result = BooleanDtype().__from_arrow__(result)
+ if not isna(na):
+ result[isna(result)] = bool(na)
+ return result
def _str_endswith(self, pat: str, na=None):
- pat = f"{re.escape(pat)}$"
- return self._str_contains(pat, na=na, regex=True)
+ result = pc.ends_with(self._pa_array, pattern=pat)
+ if not isna(na):
+ result = result.fill_null(na)
+ result = BooleanDtype().__from_arrow__(result)
+ if not isna(na):
+ result[isna(result)] = bool(na)
+ return result
def _str_replace(
self,
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index 89718b1b35f12..c3cc8b3643ed2 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -53,10 +53,8 @@ def test_contains(any_string_dtype):
np.array(["Foo", "xYz", "fOOomMm__fOo", "MMM_"], dtype=object),
dtype=any_string_dtype,
)
- with tm.maybe_produces_warning(
- PerformanceWarning, any_string_dtype == "string[pyarrow]"
- ):
- result = values.str.contains("FOO|mmm", case=False)
+
+ result = values.str.contains("FOO|mmm", case=False)
expected = Series(np.array([True, False, True, True]), dtype=expected_dtype)
tm.assert_series_equal(result, expected)
@@ -172,10 +170,7 @@ def test_contains_moar(any_string_dtype):
)
tm.assert_series_equal(result, expected)
- with tm.maybe_produces_warning(
- PerformanceWarning, any_string_dtype == "string[pyarrow]"
- ):
- result = s.str.contains("a", case=False)
+ result = s.str.contains("a", case=False)
expected = Series(
[True, False, False, True, True, False, np.nan, True, False, True],
dtype=expected_dtype,
@@ -196,10 +191,7 @@ def test_contains_moar(any_string_dtype):
)
tm.assert_series_equal(result, expected)
- with tm.maybe_produces_warning(
- PerformanceWarning, any_string_dtype == "string[pyarrow]"
- ):
- result = s.str.contains("ba", case=False)
+ result = s.str.contains("ba", case=False)
expected = Series(
[False, False, False, True, True, False, np.nan, True, False, False],
dtype=expected_dtype,
@@ -723,10 +715,7 @@ def test_match_na_kwarg(any_string_dtype):
def test_match_case_kwarg(any_string_dtype):
values = Series(["ab", "AB", "abc", "ABC"], dtype=any_string_dtype)
- with tm.maybe_produces_warning(
- PerformanceWarning, any_string_dtype == "string[pyarrow]"
- ):
- result = values.str.match("ab", case=False)
+ result = values.str.match("ab", case=False)
expected_dtype = np.bool_ if any_string_dtype == "object" else "boolean"
expected = Series([True, True, True, True], dtype=expected_dtype)
tm.assert_series_equal(result, expected)
@@ -769,10 +758,7 @@ def test_fullmatch_case_kwarg(any_string_dtype):
expected = Series([True, True, False, False], dtype=expected_dtype)
- with tm.maybe_produces_warning(
- PerformanceWarning, any_string_dtype == "string[pyarrow]"
- ):
- result = ser.str.fullmatch("ab", case=False)
+ result = ser.str.fullmatch("ab", case=False)
tm.assert_series_equal(result, expected)
with tm.maybe_produces_warning(
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54006 | 2023-07-05T15:29:03Z | 2023-07-06T23:55:09Z | 2023-07-06T23:55:09Z | 2023-07-07T08:04:39Z |
DOC: Fixing EX01 - Added examples | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index bf0711dcc0581..756096a7fe345 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -110,12 +110,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
pandas_object \
pandas.api.interchange.from_dataframe \
pandas.DatetimeIndex.snap \
- pandas.core.window.ewm.ExponentialMovingWindow.mean \
- pandas.core.window.ewm.ExponentialMovingWindow.sum \
- pandas.core.window.ewm.ExponentialMovingWindow.std \
- pandas.core.window.ewm.ExponentialMovingWindow.var \
- pandas.core.window.ewm.ExponentialMovingWindow.corr \
- pandas.core.window.ewm.ExponentialMovingWindow.cov \
pandas.api.indexers.BaseIndexer \
pandas.api.indexers.VariableOffsetWindowIndexer \
pandas.io.formats.style.Styler \
diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py
index 42123fafd62aa..775f3cd428677 100644
--- a/pandas/core/window/ewm.py
+++ b/pandas/core/window/ewm.py
@@ -502,7 +502,19 @@ def aggregate(self, func, *args, **kwargs):
create_section_header("See Also"),
template_see_also,
create_section_header("Notes"),
- numba_notes.replace("\n", "", 1),
+ numba_notes,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([1, 2, 3, 4])
+ >>> ser.ewm(alpha=.2).mean()
+ 0 1.000000
+ 1 1.555556
+ 2 2.147541
+ 3 2.775068
+ dtype: float64
+ """
+ ),
window_method="ewm",
aggregation_description="(exponential weighted moment) mean",
agg_method="mean",
@@ -554,7 +566,19 @@ def mean(
create_section_header("See Also"),
template_see_also,
create_section_header("Notes"),
- numba_notes.replace("\n", "", 1),
+ numba_notes,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([1, 2, 3, 4])
+ >>> ser.ewm(alpha=.2).sum()
+ 0 1.000
+ 1 2.800
+ 2 5.240
+ 3 8.192
+ dtype: float64
+ """
+ ),
window_method="ewm",
aggregation_description="(exponential weighted moment) sum",
agg_method="sum",
@@ -602,16 +626,28 @@ def sum(
template_header,
create_section_header("Parameters"),
dedent(
- """
+ """\
bias : bool, default False
Use a standard estimation bias correction.
"""
- ).replace("\n", "", 1),
+ ),
kwargs_numeric_only,
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([1, 2, 3, 4])
+ >>> ser.ewm(alpha=.2).std()
+ 0 NaN
+ 1 0.707107
+ 2 0.995893
+ 3 1.277320
+ dtype: float64
+ """
+ ),
window_method="ewm",
aggregation_description="(exponential weighted moment) standard deviation",
agg_method="std",
@@ -632,16 +668,28 @@ def std(self, bias: bool = False, numeric_only: bool = False):
template_header,
create_section_header("Parameters"),
dedent(
- """
+ """\
bias : bool, default False
Use a standard estimation bias correction.
"""
- ).replace("\n", "", 1),
+ ),
kwargs_numeric_only,
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([1, 2, 3, 4])
+ >>> ser.ewm(alpha=.2).var()
+ 0 NaN
+ 1 0.500000
+ 2 0.991803
+ 3 1.631547
+ dtype: float64
+ """
+ ),
window_method="ewm",
aggregation_description="(exponential weighted moment) variance",
agg_method="var",
@@ -665,7 +713,7 @@ def var_func(values, begin, end, min_periods):
template_header,
create_section_header("Parameters"),
dedent(
- """
+ """\
other : Series or DataFrame , optional
If not supplied then will default to self and produce pairwise
output.
@@ -679,12 +727,25 @@ def var_func(values, begin, end, min_periods):
bias : bool, default False
Use a standard estimation bias correction.
"""
- ).replace("\n", "", 1),
+ ),
kwargs_numeric_only,
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser1 = pd.Series([1, 2, 3, 4])
+ >>> ser2 = pd.Series([10, 11, 13, 16])
+ >>> ser1.ewm(alpha=.2).cov(ser2)
+ 0 NaN
+ 1 0.500000
+ 2 1.524590
+ 3 3.408836
+ dtype: float64
+ """
+ ),
window_method="ewm",
aggregation_description="(exponential weighted moment) sample covariance",
agg_method="cov",
@@ -739,7 +800,7 @@ def cov_func(x, y):
template_header,
create_section_header("Parameters"),
dedent(
- """
+ """\
other : Series or DataFrame, optional
If not supplied then will default to self and produce pairwise
output.
@@ -751,12 +812,25 @@ def cov_func(x, y):
inputs. In the case of missing elements, only complete pairwise
observations will be used.
"""
- ).replace("\n", "", 1),
+ ),
kwargs_numeric_only,
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser1 = pd.Series([1, 2, 3, 4])
+ >>> ser2 = pd.Series([10, 11, 13, 16])
+ >>> ser1.ewm(alpha=.2).corr(ser2)
+ 0 NaN
+ 1 1.000000
+ 2 0.982821
+ 3 0.977802
+ dtype: float64
+ """
+ ),
window_method="ewm",
aggregation_description="(exponential weighted moment) sample correlation",
agg_method="corr",
| - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Towards https://github.com/pandas-dev/pandas/issues/37875
It wasn't possible to build the HTML files locally to preview the examples.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54004 | 2023-07-05T11:51:37Z | 2023-07-06T18:17:44Z | 2023-07-06T18:17:44Z | 2023-07-18T09:12:05Z |
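One of the docstring examples added by this PR can be run directly; the adjusted exponentially weighted mean with `alpha=0.2` reproduces the values shown in the new `Examples` section:

```python
import pandas as pd

ser = pd.Series([1, 2, 3, 4])
out = ser.ewm(alpha=0.2).mean()
# Expected (from the added docstring): 1.000000, 1.555556, 2.147541, 2.775068
print(out)
```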
Correct check when slicing non-monotonic datetime indexes | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 6390fbeed8548..8644caef21de2 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -368,6 +368,7 @@ Categorical
Datetimelike
^^^^^^^^^^^^
- :meth:`DatetimeIndex.map` with ``na_action="ignore"`` now works as expected. (:issue:`51644`)
+- :meth:`DatetimeIndex.slice_indexer` now raises ``KeyError`` for non-monotonic indexes if either of the slice bounds is not in the index, this behaviour was previously deprecated but inconsistently handled. (:issue:`53983`)
- Bug in :class:`DateOffset` which had inconsistent behavior when multiplying a :class:`DateOffset` object by a constant (:issue:`47953`)
- Bug in :func:`date_range` when ``freq`` was a :class:`DateOffset` with ``nanoseconds`` (:issue:`46877`)
- Bug in :meth:`DataFrame.to_sql` raising ``ValueError`` for pyarrow-backed date like dtypes (:issue:`53854`)
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 1500bcef5d4d9..95a6e4a9a26ae 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -666,18 +666,18 @@ def check_str_or_none(point) -> bool:
return Index.slice_indexer(self, start, end, step)
mask = np.array(True)
- raise_mask = np.array(True)
+ in_index = True
if start is not None:
start_casted = self._maybe_cast_slice_bound(start, "left")
mask = start_casted <= self
- raise_mask = start_casted == self
+ in_index &= (start_casted == self).any()
if end is not None:
end_casted = self._maybe_cast_slice_bound(end, "right")
mask = (self <= end_casted) & mask
- raise_mask = (end_casted == self) | raise_mask
+ in_index &= (end_casted == self).any()
- if not raise_mask.any():
+ if not in_index:
raise KeyError(
"Value based partial slicing on non-monotonic DatetimeIndexes "
"with non-existing keys is not allowed.",
diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py
index 5d1d4ba6f638a..bc6e8aed449f3 100644
--- a/pandas/tests/indexing/test_partial.py
+++ b/pandas/tests/indexing/test_partial.py
@@ -664,5 +664,14 @@ def test_slice_irregular_datetime_index_with_nan(self):
index = pd.to_datetime(["2012-01-01", "2012-01-02", "2012-01-03", None])
df = DataFrame(range(len(index)), index=index)
expected = DataFrame(range(len(index[:3])), index=index[:3])
- result = df["2012-01-01":"2012-01-04"]
+ with pytest.raises(KeyError, match="non-existing keys is not allowed"):
+ # Upper bound is not in index (which is unordered)
+ # GH53983
+ # GH37819
+ df["2012-01-01":"2012-01-04"]
+ # Need this precision for right bound since the right slice
+ # bound is "rounded" up to the largest timepoint smaller than
+ # the next "resolution"-step of the provided point.
+ # e.g. 2012-01-03 is rounded up to 2012-01-04 - 1ns
+ result = df["2012-01-01":"2012-01-03 00:00:00.000000000"]
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/series/indexing/test_datetime.py b/pandas/tests/series/indexing/test_datetime.py
index f47e344336a8b..072607c29fd4c 100644
--- a/pandas/tests/series/indexing/test_datetime.py
+++ b/pandas/tests/series/indexing/test_datetime.py
@@ -384,15 +384,19 @@ def compare(slobj):
expected.index = expected.index._with_freq(None)
tm.assert_series_equal(result, expected)
- compare(slice("2011-01-01", "2011-01-15"))
- with pytest.raises(KeyError, match="Value based partial slicing on non-monotonic"):
- compare(slice("2010-12-30", "2011-01-15"))
- compare(slice("2011-01-01", "2011-01-16"))
-
- # partial ranges
- compare(slice("2011-01-01", "2011-01-6"))
- compare(slice("2011-01-06", "2011-01-8"))
- compare(slice("2011-01-06", "2011-01-12"))
+ for key in [
+ slice("2011-01-01", "2011-01-15"),
+ slice("2010-12-30", "2011-01-15"),
+ slice("2011-01-01", "2011-01-16"),
+ # partial ranges
+ slice("2011-01-01", "2011-01-6"),
+ slice("2011-01-06", "2011-01-8"),
+ slice("2011-01-06", "2011-01-12"),
+ ]:
+ with pytest.raises(
+ KeyError, match="Value based partial slicing on non-monotonic"
+ ):
+ compare(key)
# single values
result = ts2["2011"].sort_index()
| The intention of #37819 was to deprecate (removed in #49607) the special case behaviour of non-monotonic datetime indexes, so that if either slice bound is not in the index, a KeyError is raised.
However, the check only fired correctly for the case where the lower bound was not in the index and either the upper bound was None or it was _also_ not in the index.
Correct the logic here and adapt the one test that exercises this behaviour.
Closes #53983.
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/54002 | 2023-07-05T10:16:25Z | 2023-07-19T15:41:34Z | 2023-07-19T15:41:34Z | 2023-07-19T15:42:15Z |
REF: implement EA.pad_or_backfill | diff --git a/doc/source/reference/extensions.rst b/doc/source/reference/extensions.rst
index bff5b2b70b518..eb2529716b55e 100644
--- a/doc/source/reference/extensions.rst
+++ b/doc/source/reference/extensions.rst
@@ -54,6 +54,7 @@ objects.
api.extensions.ExtensionArray.interpolate
api.extensions.ExtensionArray.isin
api.extensions.ExtensionArray.isna
+ api.extensions.ExtensionArray.pad_or_backfill
api.extensions.ExtensionArray.ravel
api.extensions.ExtensionArray.repeat
api.extensions.ExtensionArray.searchsorted
diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 0fdec3175f635..70cb0c0223eee 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -417,6 +417,7 @@ Other Deprecations
- Deprecated positional indexing on :class:`Series` with :meth:`Series.__getitem__` and :meth:`Series.__setitem__`, in a future version ``ser[item]`` will *always* interpret ``item`` as a label, not a position (:issue:`50617`)
- Deprecated replacing builtin and NumPy functions in ``.agg``, ``.apply``, and ``.transform``; use the corresponding string alias (e.g. ``"sum"`` for ``sum`` or ``np.sum``) instead (:issue:`53425`)
- Deprecated strings ``T``, ``t``, ``L`` and ``l`` denoting units in :func:`to_timedelta` (:issue:`52536`)
+- Deprecated the "method" and "limit" keywords in ``ExtensionArray.fillna``, implement and use ``pad_or_backfill`` instead (:issue:`53621`)
- Deprecated the "method" and "limit" keywords on :meth:`Series.fillna`, :meth:`DataFrame.fillna`, :meth:`SeriesGroupBy.fillna`, :meth:`DataFrameGroupBy.fillna`, and :meth:`Resampler.fillna`, use ``obj.bfill()`` or ``obj.ffill()`` instead (:issue:`53394`)
- Deprecated the ``method`` and ``limit`` keywords in :meth:`DataFrame.replace` and :meth:`Series.replace` (:issue:`33302`)
- Deprecated the use of non-supported datetime64 and timedelta64 resolutions with :func:`pandas.array`. Supported resolutions are: "s", "ms", "us", "ns" resolutions (:issue:`53058`)
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index 365f85a908099..359e0161e763c 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -18,6 +18,7 @@
AxisInt,
Dtype,
F,
+ FillnaOptions,
PositionalIndexer2D,
PositionalIndexerTuple,
ScalarIndexer,
@@ -295,6 +296,37 @@ def _fill_mask_inplace(
func = missing.get_fill_func(method, ndim=self.ndim)
func(self._ndarray.T, limit=limit, mask=mask.T)
+ def pad_or_backfill(
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
+ ) -> Self:
+ mask = self.isna()
+ if mask.any():
+ # (for now) when self.ndim == 2, we assume axis=0
+ func = missing.get_fill_func(method, ndim=self.ndim)
+
+ npvalues = self._ndarray.T
+ if copy:
+ npvalues = npvalues.copy()
+ func(npvalues, limit=limit, mask=mask.T)
+ npvalues = npvalues.T
+
+ if copy:
+ new_values = self._from_backing_data(npvalues)
+ else:
+ new_values = self
+
+ else:
+ if copy:
+ new_values = self.copy()
+ else:
+ new_values = self
+ return new_values
+
@doc(ExtensionArray.fillna)
def fillna(
self, value=None, method=None, limit: int | None = None, copy: bool = True
@@ -312,7 +344,6 @@ def fillna(
if mask.any():
if method is not None:
- # TODO: check value is None
# (for now) when self.ndim == 2, we assume axis=0
func = missing.get_fill_func(method, ndim=self.ndim)
npvalues = self._ndarray.T
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 7976f97cb49aa..88695f11fba59 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -899,6 +899,24 @@ def dropna(self) -> Self:
"""
return type(self)(pc.drop_null(self._pa_array))
+ def pad_or_backfill(
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
+ ) -> Self:
+ if not self._hasna:
+ # TODO(CoW): Not necessary anymore when CoW is the default
+ return self.copy()
+
+ # TODO(3.0): after EA.fillna 'method' deprecation is enforced, we can remove
+ # this method entirely.
+ return super().pad_or_backfill(
+ method=method, limit=limit, limit_area=limit_area, copy=copy
+ )
+
@doc(ExtensionArray.fillna)
def fillna(
self,
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index f69ddac4db6bf..325edba670fce 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -18,6 +18,7 @@
cast,
overload,
)
+import warnings
import numpy as np
@@ -33,6 +34,7 @@
Substitution,
cache_readonly,
)
+from pandas.util._exceptions import find_stack_level
from pandas.util._validators import (
validate_bool_kwarg,
validate_fillna_kwargs,
@@ -130,6 +132,7 @@ class ExtensionArray:
interpolate
isin
isna
+ pad_or_backfill
ravel
repeat
searchsorted
@@ -180,6 +183,7 @@ class ExtensionArray:
methods:
* fillna
+ * pad_or_backfill
* dropna
* unique
* factorize / _values_for_factorize
@@ -907,6 +911,93 @@ def interpolate(
f"{type(self).__name__} does not implement interpolate"
)
+ def pad_or_backfill(
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
+ ) -> Self:
+ """
+ Pad or backfill values, used by Series/DataFrame ffill and bfill.
+
+ Parameters
+ ----------
+ method : {'backfill', 'bfill', 'pad', 'ffill'}
+ Method to use for filling holes in reindexed Series:
+
+ * pad / ffill: propagate last valid observation forward to next valid.
+ * backfill / bfill: use NEXT valid observation to fill gap.
+
+ limit : int, default None
+ This is the maximum number of consecutive
+ NaN values to forward/backward fill. In other words, if there is
+ a gap with more than this number of consecutive NaNs, it will only
+ be partially filled. If method is not specified, this is the
+ maximum number of entries along the entire axis where NaNs will be
+ filled.
+
+ copy : bool, default True
+ Whether to make a copy of the data before filling. If False, then
+ the original should be modified and no new memory should be allocated.
+ For ExtensionArray subclasses that cannot do this, it is at the
+ author's discretion whether to ignore "copy=False" or to raise.
+ The base class implementation ignores the keyword if any NAs are
+ present.
+
+ Returns
+ -------
+ Same type as self
+
+ Examples
+ --------
+ >>> arr = pd.array([np.nan, np.nan, 2, 3, np.nan, np.nan])
+ >>> arr.pad_or_backfill(method="backfill", limit=1)
+ <IntegerArray>
+ [<NA>, 2, 2, 3, <NA>, <NA>]
+ Length: 6, dtype: Int64
+ """
+
+ # If a 3rd-party EA has implemented this functionality in fillna,
+ # we warn that they need to implement pad_or_backfill instead.
+ if (
+ type(self).fillna is not ExtensionArray.fillna
+ and type(self).pad_or_backfill is ExtensionArray.pad_or_backfill
+ ):
+ # Check for pad_or_backfill here allows us to call
+ # super().pad_or_backfill without getting this warning
+ warnings.warn(
+ "ExtensionArray.fillna 'method' keyword is deprecated. "
+ "In a future version. arr.pad_or_backfill will be called "
+ "instead. 3rd-party ExtensionArray authors need to implement "
+ "pad_or_backfill.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ return self.fillna(method=method, limit=limit)
+
+ mask = self.isna()
+
+ if mask.any():
+ # NB: the base class does not respect the "copy" keyword
+ meth = missing.clean_fill_method(method)
+
+ npmask = np.asarray(mask)
+ if meth == "pad":
+ indexer = libalgos.get_fill_indexer(npmask, limit=limit)
+ return self.take(indexer, allow_fill=True)
+ else:
+ # i.e. meth == "backfill"
+ indexer = libalgos.get_fill_indexer(npmask[::-1], limit=limit)[::-1]
+ return self[::-1].take(indexer, allow_fill=True)
+
+ else:
+ if not copy:
+ return self
+ new_values = self.copy()
+ return new_values
+
def fillna(
self,
value: object | ArrayLike | None = None,
@@ -921,7 +1012,7 @@ def fillna(
----------
value : scalar, array-like
If a scalar value is passed it is used to fill all missing values.
- Alternatively, an array-like 'value' can be given. It's expected
+ Alternatively, an array-like "value" can be given. It's expected
that the array-like have the same length as 'self'.
method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
Method to use for filling holes in reindexed Series:
@@ -929,6 +1020,8 @@ def fillna(
* pad / ffill: propagate last valid observation forward to next valid.
* backfill / bfill: use NEXT valid observation to fill gap.
+ .. deprecated:: 2.1.0
+
limit : int, default None
If method is specified, this is the maximum number of consecutive
NaN values to forward/backward fill. In other words, if there is
@@ -937,6 +1030,8 @@ def fillna(
maximum number of entries along the entire axis where NaNs will be
filled.
+ .. deprecated:: 2.1.0
+
copy : bool, default True
Whether to make a copy of the data before filling. If False, then
the original should be modified and no new memory should be allocated.
@@ -958,6 +1053,15 @@ def fillna(
[0, 0, 2, 3, 0, 0]
Length: 6, dtype: Int64
"""
+ if method is not None:
+ warnings.warn(
+ f"The 'method' keyword in {type(self).__name__}.fillna is "
+ "deprecated and will be removed in a future version. "
+ "Use pad_or_backfill instead.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+
value, method = validate_fillna_kwargs(value, method)
mask = self.isna()
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 01f631b54a1d7..3263dd73fe4dc 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -29,6 +29,7 @@
ArrayLike,
AxisInt,
Dtype,
+ FillnaOptions,
IntervalClosedType,
NpDtype,
PositionalIndexer,
@@ -889,6 +890,20 @@ def max(self, *, axis: AxisInt | None = None, skipna: bool = True) -> IntervalOr
indexer = obj.argsort()[-1]
return obj[indexer]
+ def pad_or_backfill( # pylint: disable=useless-parent-delegation
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
+ ) -> Self:
+ # TODO(3.0): after EA.fillna 'method' deprecation is enforced, we can remove
+ # this method entirely.
+ return super().pad_or_backfill(
+ method=method, limit=limit, limit_area=limit_area, copy=copy
+ )
+
def fillna(
self, value=None, method=None, limit: int | None = None, copy: bool = True
) -> Self:
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 1de3ae3b2428e..bec875f2bbfa1 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -24,6 +24,7 @@
AstypeArg,
AxisInt,
DtypeObj,
+ FillnaOptions,
NpDtype,
PositionalIndexer,
Scalar,
@@ -189,6 +190,36 @@ def __getitem__(self, item: PositionalIndexer) -> Self | Any:
return self._simple_new(self._data[item], newmask)
+ def pad_or_backfill(
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
+ ) -> Self:
+ mask = self._mask
+
+ if mask.any():
+ func = missing.get_fill_func(method, ndim=self.ndim)
+
+ npvalues = self._data.T
+ new_mask = mask.T
+ if copy:
+ npvalues = npvalues.copy()
+ new_mask = new_mask.copy()
+ func(npvalues, limit=limit, mask=new_mask)
+ if copy:
+ return self._simple_new(npvalues.T, new_mask.T)
+ else:
+ return self
+ else:
+ if copy:
+ new_values = self.copy()
+ else:
+ new_values = self
+ return new_values
+
@doc(ExtensionArray.fillna)
def fillna(
self, value=None, method=None, limit: int | None = None, copy: bool = True
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 6d01dfcf6d90b..79a9ffb5f8c0b 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -237,7 +237,7 @@ def pad_or_backfill(
self,
*,
method: FillnaOptions,
- limit: int | None,
+ limit: int | None = None,
limit_area: Literal["inside", "outside"] | None = None,
copy: bool = True,
) -> Self:
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index af6402b9964e5..4df4375c5d701 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -78,6 +78,7 @@
from pandas._typing import (
AnyArrayLike,
Dtype,
+ FillnaOptions,
NpDtype,
NumpySorter,
NumpyValueArrayLike,
@@ -790,6 +791,25 @@ def searchsorted(
m8arr = self._ndarray.view("M8[ns]")
return m8arr.searchsorted(npvalue, side=side, sorter=sorter)
+ def pad_or_backfill(
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
+ ) -> Self:
+ # view as dt64 so we get treated as timelike in core.missing,
+ # similar to dtl._period_dispatch
+ dta = self.view("M8[ns]")
+ result = dta.pad_or_backfill(
+ method=method, limit=limit, limit_area=limit_area, copy=copy
+ )
+ if copy:
+ return cast("Self", result.view(self.dtype))
+ else:
+ return self
+
def fillna(
self, value=None, method=None, limit: int | None = None, copy: bool = True
) -> Self:
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index d832a9f772f45..d32c98535d7cb 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -712,6 +712,20 @@ def isna(self):
mask[self.sp_index.indices] = isna(self.sp_values)
return type(self)(mask, fill_value=False, dtype=dtype)
+ def pad_or_backfill( # pylint: disable=useless-parent-delegation
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
+ ) -> Self:
+ # TODO(3.0): We can remove this method once deprecation for fillna method
+ # keyword is enforced.
+ return super().pad_or_backfill(
+ method=method, limit=limit, limit_area=limit_area, copy=copy
+ )
+
def fillna(
self,
value=None,
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 3e988068dbc12..49f4d333c77b2 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1917,28 +1917,10 @@ def pad_or_backfill(
if values.ndim == 2 and axis == 1:
# NDArrayBackedExtensionArray.fillna assumes axis=0
- new_values = values.T.fillna(method=method, limit=limit, copy=copy).T
+ new_values = values.T.pad_or_backfill(method=method, limit=limit).T
else:
- try:
- new_values = values.fillna(method=method, limit=limit, copy=copy)
- except TypeError:
- # 3rd party EA that has not implemented copy keyword yet
- refs = None
- new_values = values.fillna(method=method, limit=limit)
- # issue the warning *after* retrying, in case the TypeError
- # was caused by an invalid fill_value
- warnings.warn(
- # GH#53278
- "ExtensionArray.fillna added a 'copy' keyword in pandas "
- "2.1.0. In a future version, ExtensionArray subclasses will "
- "need to implement this keyword or an exception will be "
- "raised. In the interim, the keyword is ignored by "
- f"{type(self.values).__name__}.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
- return [self.make_block_same_class(new_values, refs=refs)]
+ new_values = values.pad_or_backfill(method=method, limit=limit)
+ return [self.make_block_same_class(new_values)]
class ExtensionBlock(libinternals.Block, EABackedBlock):
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 8f87749a4ed6e..711a1b5f2f26c 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -258,7 +258,7 @@ def test_fillna_method_doesnt_change_orig(self, method):
fill_value = arr[3] if method == "pad" else arr[5]
- result = arr.fillna(method=method)
+ result = arr.pad_or_backfill(method=method)
assert result[4] == fill_value
# check that the original was not changed
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index 1fe1d4efbefd7..8e38a8c741b8d 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -497,7 +497,7 @@ def test_fillna_preserves_tz(self, method):
dtype=DatetimeTZDtype(tz="US/Central"),
)
- result = arr.fillna(method=method)
+ result = arr.pad_or_backfill(method=method)
tm.assert_extension_array_equal(result, expected)
# assert that arr and dti were not modified in-place
@@ -510,12 +510,12 @@ def test_fillna_2d(self):
dta[0, 1] = pd.NaT
dta[1, 0] = pd.NaT
- res1 = dta.fillna(method="pad")
+ res1 = dta.pad_or_backfill(method="pad")
expected1 = dta.copy()
expected1[1, 0] = dta[0, 0]
tm.assert_extension_array_equal(res1, expected1)
- res2 = dta.fillna(method="backfill")
+ res2 = dta.pad_or_backfill(method="backfill")
     expected2 = dta.copy()
expected2[1, 0] = dta[2, 0]
@@ -529,10 +529,10 @@ def test_fillna_2d(self):
assert not dta2._ndarray.flags["C_CONTIGUOUS"]
tm.assert_extension_array_equal(dta, dta2)
- res3 = dta2.fillna(method="pad")
+ res3 = dta2.pad_or_backfill(method="pad")
tm.assert_extension_array_equal(res3, expected1)
- res4 = dta2.fillna(method="backfill")
+ res4 = dta2.pad_or_backfill(method="backfill")
tm.assert_extension_array_equal(res4, expected2)
# test the DataFrame method while we're here
diff --git a/pandas/tests/extension/base/dim2.py b/pandas/tests/extension/base/dim2.py
index b9706f87ab7d3..9dcce28f47e52 100644
--- a/pandas/tests/extension/base/dim2.py
+++ b/pandas/tests/extension/base/dim2.py
@@ -155,16 +155,14 @@ def test_concat_2d(self, data):
@pytest.mark.parametrize("method", ["backfill", "pad"])
def test_fillna_2d_method(self, data_missing, method):
+ # pad_or_backfill is always along axis=0
arr = data_missing.repeat(2).reshape(2, 2)
assert arr[0].isna().all()
assert not arr[1].isna().any()
- try:
- result = arr.pad_or_backfill(method=method, limit=None)
- except AttributeError:
- result = arr.fillna(method=method, limit=None)
+ result = arr.pad_or_backfill(method=method, limit=None)
- expected = data_missing.fillna(method=method).repeat(2).reshape(2, 2)
+ expected = data_missing.pad_or_backfill(method=method).repeat(2).reshape(2, 2)
self.assert_extension_array_equal(result, expected)
# Reverse so that backfill is not a no-op.
@@ -172,12 +170,11 @@ def test_fillna_2d_method(self, data_missing, method):
assert not arr2[0].isna().any()
assert arr2[1].isna().all()
- try:
- result2 = arr2.pad_or_backfill(method=method, limit=None)
- except AttributeError:
- result2 = arr2.fillna(method=method, limit=None)
+ result2 = arr2.pad_or_backfill(method=method, limit=None)
- expected2 = data_missing[::-1].fillna(method=method).repeat(2).reshape(2, 2)
+ expected2 = (
+ data_missing[::-1].pad_or_backfill(method=method).repeat(2).reshape(2, 2)
+ )
self.assert_extension_array_equal(result2, expected2)
@pytest.mark.parametrize("method", ["mean", "median", "var", "std", "sum", "prod"])
diff --git a/pandas/tests/extension/base/missing.py b/pandas/tests/extension/base/missing.py
index 43f37a020df3f..a839a9d327f95 100644
--- a/pandas/tests/extension/base/missing.py
+++ b/pandas/tests/extension/base/missing.py
@@ -95,7 +95,7 @@ def test_fillna_no_op_returns_copy(self, data):
assert result is not data
self.assert_extension_array_equal(result, data)
- result = data.fillna(method="backfill")
+ result = data.pad_or_backfill(method="backfill")
assert result is not data
self.assert_extension_array_equal(result, data)
diff --git a/pandas/tests/extension/decimal/array.py b/pandas/tests/extension/decimal/array.py
index 5762bc9ce485c..d24b70a884c45 100644
--- a/pandas/tests/extension/decimal/array.py
+++ b/pandas/tests/extension/decimal/array.py
@@ -281,6 +281,8 @@ def convert_values(param):
def value_counts(self, dropna: bool = True):
return value_counts(self.to_numpy(), dropna=dropna)
+ # We override fillna here to simulate a 3rd party EA that has done so. This
+ # lets us test the deprecation telling authors to implement pad_or_backfill
# Simulate a 3rd-party EA that has not yet updated to include a "copy"
# keyword in its fillna method.
# error: Signature of "fillna" incompatible with supertype "ExtensionArray"
diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index 4f0ff427dd900..6feac7fb9d9dc 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -105,7 +105,7 @@ def test_fillna_frame(self, data_missing):
super().test_fillna_frame(data_missing)
def test_fillna_limit_pad(self, data_missing):
- msg = "ExtensionArray.fillna added a 'copy' keyword"
+ msg = "ExtensionArray.fillna 'method' keyword is deprecated"
with tm.assert_produces_warning(
FutureWarning, match=msg, check_stacklevel=False
):
@@ -123,6 +123,13 @@ def test_fillna_limit_backfill(self, data_missing):
):
super().test_fillna_limit_backfill(data_missing)
+ def test_fillna_no_op_returns_copy(self, data):
+ msg = "ExtensionArray.fillna 'method' keyword is deprecated"
+ with tm.assert_produces_warning(
+ FutureWarning, match=msg, check_stacklevel=False
+ ):
+ super().test_fillna_no_op_returns_copy(data)
+
def test_fillna_series(self, data_missing):
msg = "ExtensionArray.fillna added a 'copy' keyword"
with tm.assert_produces_warning(
@@ -131,7 +138,7 @@ def test_fillna_series(self, data_missing):
super().test_fillna_series(data_missing)
def test_fillna_series_method(self, data_missing, fillna_method):
- msg = "ExtensionArray.fillna added a 'copy' keyword"
+ msg = "ExtensionArray.fillna 'method' keyword is deprecated"
with tm.assert_produces_warning(
FutureWarning, match=msg, check_stacklevel=False
):
- [x] closes #53621
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Sits on top of #53989.
The deprecation is a little bit awkward. If a 3rd-party EA has implemented fillna but not pad_or_backfill, we warn and call fillna. DecimalArray tests that deprecation.
The "method" paths for our existing EA.fillna methods could be ripped out as they are no longer used. I left them untouched for now.
In a follow-up, can de-duplicate Block.pad_or_backfill with EABlock.pad_or_backfill. | https://api.github.com/repos/pandas-dev/pandas/pulls/54001 | 2023-07-04T18:57:20Z | 2023-07-31T16:48:32Z | 2023-07-31T16:48:32Z | 2023-07-31T16:53:38Z |
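The base-class `pad_or_backfill` in the diff above builds a take-indexer from the NA mask via `libalgos.get_fill_indexer`. As an illustrative sketch (not the actual implementation), the fill semantics, including the `limit` on consecutive fills and backfill-as-reversed-pad, look roughly like:

```python
def pad_or_backfill(values, method, limit=None):
    """Forward/backward fill None gaps, honoring a consecutive-fill limit."""
    if method in ("backfill", "bfill"):
        # backfill is just pad on the reversed sequence
        return pad_or_backfill(values[::-1], "pad", limit)[::-1]
    out, last_valid, run = [], None, 0
    for v in values:
        if v is not None:
            last_valid, run = v, 0
            out.append(v)
        elif last_valid is not None and (limit is None or run < limit):
            run += 1
            out.append(last_valid)
        else:
            out.append(None)
    return out
```

This reproduces the docstring example above: backfilling `[NA, NA, 2, 3, NA, NA]` with `limit=1` gives `[NA, 2, 2, 3, NA, NA]`.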
CLN: tests.groupby.test_any_all | diff --git a/pandas/tests/groupby/conftest.py b/pandas/tests/groupby/conftest.py
index c5e30513f69de..b1b1d455d5027 100644
--- a/pandas/tests/groupby/conftest.py
+++ b/pandas/tests/groupby/conftest.py
@@ -24,6 +24,11 @@ def dropna(request):
return request.param
+@pytest.fixture(params=[True, False])
+def skipna(request):
+ return request.param
+
+
@pytest.fixture(params=[True, False])
def observed(request):
return request.param
diff --git a/pandas/tests/groupby/test_any_all.py b/pandas/tests/groupby/test_any_all.py
index 4e6631cb763fe..57a83335be849 100644
--- a/pandas/tests/groupby/test_any_all.py
+++ b/pandas/tests/groupby/test_any_all.py
@@ -14,7 +14,6 @@
@pytest.mark.parametrize("agg_func", ["any", "all"])
-@pytest.mark.parametrize("skipna", [True, False])
@pytest.mark.parametrize(
"vals",
[
@@ -33,7 +32,7 @@
[np.nan, np.nan, np.nan],
],
)
-def test_groupby_bool_aggs(agg_func, skipna, vals):
+def test_groupby_bool_aggs(skipna, agg_func, vals):
df = DataFrame({"key": ["a"] * 3 + ["b"] * 3, "val": vals * 2})
# Figure out expectation using Python builtin
@@ -43,9 +42,11 @@ def test_groupby_bool_aggs(agg_func, skipna, vals):
if skipna and all(isna(vals)) and agg_func == "any":
exp = False
- exp_df = DataFrame([exp] * 2, columns=["val"], index=Index(["a", "b"], name="key"))
+ expected = DataFrame(
+ [exp] * 2, columns=["val"], index=Index(["a", "b"], name="key")
+ )
result = getattr(df.groupby("key"), agg_func)(skipna=skipna)
- tm.assert_frame_equal(result, exp_df)
+ tm.assert_frame_equal(result, expected)
def test_any():
@@ -63,7 +64,7 @@ def test_any():
@pytest.mark.parametrize("bool_agg_func", ["any", "all"])
def test_bool_aggs_dup_column_labels(bool_agg_func):
- # 21668
+ # GH#21668
df = DataFrame([[True, True]], columns=["a", "a"])
grp_by = df.groupby([0])
result = getattr(grp_by, bool_agg_func)()
@@ -73,7 +74,6 @@ def test_bool_aggs_dup_column_labels(bool_agg_func):
@pytest.mark.parametrize("bool_agg_func", ["any", "all"])
-@pytest.mark.parametrize("skipna", [True, False])
@pytest.mark.parametrize(
"data",
[
@@ -141,7 +141,6 @@ def test_masked_mixed_types(dtype1, dtype2, exp_col1, exp_col2):
@pytest.mark.parametrize("bool_agg_func", ["any", "all"])
@pytest.mark.parametrize("dtype", ["Int64", "Float64", "boolean"])
-@pytest.mark.parametrize("skipna", [True, False])
def test_masked_bool_aggs_skipna(bool_agg_func, dtype, skipna, frame_or_series):
# GH#40585
obj = frame_or_series([pd.NA, 1], dtype=dtype)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
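For context on the test being cleaned up: `test_groupby_bool_aggs` figures out its expectation using Python builtins, with one special case noted in the diff (an all-NaN group under `any(skipna=True)` is `False`). A plain-Python sketch of that expectation logic, assuming float NaN is the only missing value:

```python
import math

def expected_bool_agg(vals, agg_func, skipna):
    """Expectation for groupby any/all, mirroring the builtin-based test logic."""
    agg = any if agg_func == "any" else all
    if skipna:
        non_na = [v for v in vals if not (isinstance(v, float) and math.isnan(v))]
        # any([]) is False and all([]) is True, so dropping NaNs already
        # handles the all-NaN special case for "any"
        return agg(non_na)
    return agg(vals)  # NaN is truthy, so it counts as True when not skipped
```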
| https://api.github.com/repos/pandas-dev/pandas/pulls/53998 | 2023-07-04T14:32:08Z | 2023-07-06T19:52:58Z | 2023-07-06T19:52:58Z | 2023-07-07T15:14:32Z |
DEPR: start with Deprecation instead of FutureWarning for NDFrame._data | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b806ddbaa89ba..f049e9d479b26 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -506,7 +506,7 @@ def _data(self):
warnings.warn(
f"{type(self).__name__}._data is deprecated and will be removed in "
"a future version. Use public APIs instead.",
- FutureWarning,
+ DeprecationWarning,
stacklevel=find_stack_level(),
)
return self._mgr
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index e0d9d6c281fd5..7cf1c56d9342e 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -379,6 +379,6 @@ def test_inspect_getmembers(self):
df = DataFrame()
msg = "DataFrame._data is deprecated"
with tm.assert_produces_warning(
- FutureWarning, match=msg, check_stacklevel=False
+ DeprecationWarning, match=msg, check_stacklevel=False
):
inspect.getmembers(df)
diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py
index acc1a8c2e1d05..6226f97c73f92 100644
--- a/pandas/tests/generic/test_generic.py
+++ b/pandas/tests/generic/test_generic.py
@@ -307,7 +307,7 @@ def test_copy_and_deepcopy(self, frame_or_series, shape, func):
def test_data_deprecated(self, frame_or_series):
obj = frame_or_series()
msg = "(Series|DataFrame)._data is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
mgr = obj._data
assert mgr is obj._mgr
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index e4e276af121f9..7d70206585be4 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -172,7 +172,7 @@ def test_inspect_getmembers(self):
ser = Series(dtype=object)
msg = "Series._data is deprecated"
with tm.assert_produces_warning(
- FutureWarning, match=msg, check_stacklevel=False
+ DeprecationWarning, match=msg, check_stacklevel=False
):
inspect.getmembers(ser)
| Follow-up on https://github.com/pandas-dev/pandas/pull/52003
Since this is something used by downstream libraries, and not directly by users, we can start with a DeprecationWarning, which is not visible to users of those libraries. | https://api.github.com/repos/pandas-dev/pandas/pulls/53994 | 2023-07-04T10:55:53Z | 2023-07-04T18:53:09Z | 2023-07-04T18:53:09Z | 2023-07-04T18:53:12Z |
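The practical difference: under CPython's default warning filters, `DeprecationWarning` is only displayed for code triggered directly in `__main__`, while `FutureWarning` is always shown, so end users of a downstream library won't see the former. A sketch of the deprecated-accessor pattern (hypothetical helper, not the pandas API):

```python
import warnings

def deprecated_internal(obj, for_end_users=False):
    """Warn on access to a private attribute; the category picks the audience."""
    warnings.warn(
        f"{type(obj).__name__}._data is deprecated and will be removed in "
        "a future version. Use public APIs instead.",
        FutureWarning if for_end_users else DeprecationWarning,
        stacklevel=2,
    )
    return getattr(obj, "_mgr", None)
```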
TST: add test to check dtype after replacing values in categorical Series inplace | diff --git a/pandas/tests/series/methods/test_replace.py b/pandas/tests/series/methods/test_replace.py
index d3cdae63d26f3..50b9714082054 100644
--- a/pandas/tests/series/methods/test_replace.py
+++ b/pandas/tests/series/methods/test_replace.py
@@ -387,6 +387,16 @@ def test_replace_categorical(self, categorical, numeric):
expected = expected.cat.add_categories(2)
tm.assert_series_equal(expected, result)
+ @pytest.mark.parametrize(
+ "data, data_exp", [(["a", "b", "c"], ["b", "b", "c"]), (["a"], ["b"])]
+ )
+ def test_replace_categorical_inplace(self, data, data_exp):
+ # GH 53358
+ result = pd.Series(data, dtype="category")
+ result.replace(to_replace="a", value="b", inplace=True)
+ expected = pd.Series(data_exp, dtype="category")
+ tm.assert_series_equal(result, expected)
+
def test_replace_categorical_single(self):
# GH 26988
dti = pd.date_range("2016-01-01", periods=3, tz="US/Pacific")
| - [x] closes #53358
- Added a test to check that the `dtype` is updated when we replace values in a categorical Series with `inplace=True`. | https://api.github.com/repos/pandas-dev/pandas/pulls/53993 | 2023-07-04T09:50:00Z | 2023-07-07T17:06:54Z | 2023-07-07T17:06:54Z | 2023-07-07T17:07:01Z |
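For context on what the test asserts: replacing one category with another existing category has to recode the values and drop the now-unused category. A rough codes/categories sketch of that semantics (illustrative, not the pandas internals):

```python
def replace_category(codes, categories, to_replace, value):
    """Replace one category label with another, recoding and pruning."""
    if to_replace not in categories:
        return codes, categories
    if value not in categories:
        # simple rename: labels change, codes stay put
        return codes, [value if c == to_replace else c for c in categories]
    old, new = categories.index(to_replace), categories.index(value)
    codes = [new if c == old else c for c in codes]
    # drop the unused category and shift the codes above it down
    categories = [c for c in categories if c != to_replace]
    codes = [c - 1 if c > old else c for c in codes]
    return codes, categories
```

Decoding `replace_category([0, 1, 2], ["a", "b", "c"], "a", "b")` gives `["b", "b", "c"]`, matching the test's expectation.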
BUG: ignoring sort in DTA.factorize | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 86849aa41e3e1..40cd59340f942 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -2211,7 +2211,15 @@ def factorize(
codes = codes[::-1]
uniques = uniques[::-1]
return codes, uniques
- # FIXME: shouldn't get here; we are ignoring sort
+
+ if sort:
+ # algorithms.factorize only passes sort=True here when freq is
+ # not None, so this should not be reached.
+ raise NotImplementedError(
+ f"The 'sort' keyword in {type(self).__name__}.factorize is "
+ "ignored unless arr.freq is not None. To factorize with sort, "
+ "call pd.factorize(obj, sort=True) instead."
+ )
return super().factorize(use_na_sentinel=use_na_sentinel)
@classmethod
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index 2acc7bdc0d902..1fe1d4efbefd7 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -745,3 +745,16 @@ def test_iter_zoneinfo_fold(self, tz):
right2 = dta.astype(object)[2]
assert str(left) == str(right2)
assert left.utcoffset() == right2.utcoffset()
+
+
+def test_factorize_sort_without_freq():
+ dta = DatetimeArray._from_sequence([0, 2, 1])
+
+ msg = r"call pd.factorize\(obj, sort=True\) instead"
+ with pytest.raises(NotImplementedError, match=msg):
+ dta.factorize(sort=True)
+
+ # Do TimedeltaArray while we're here
+ tda = dta - dta[0]
+ with pytest.raises(NotImplementedError, match=msg):
+ tda.factorize(sort=True)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
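For reference, the `sort=True` path that `pd.factorize` performs (and that the EA method now refuses to silently skip) amounts to sorting the uniques and remapping the codes. A minimal pure-Python sketch:

```python
def factorize(values, sort=False):
    """Return (codes, uniques); uniques in order of appearance unless sorted."""
    seen, uniques, codes = {}, [], []
    for v in values:
        if v not in seen:
            seen[v] = len(uniques)
            uniques.append(v)
        codes.append(seen[v])
    if sort:
        order = sorted(range(len(uniques)), key=uniques.__getitem__)
        remap = {old: new for new, old in enumerate(order)}
        codes = [remap[c] for c in codes]
        uniques = [uniques[i] for i in order]
    return codes, uniques
```

Either way, `uniques[code]` round-trips to the original values, as in the `[0, 2, 1]` case from the new test.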
| https://api.github.com/repos/pandas-dev/pandas/pulls/53992 | 2023-07-03T22:48:09Z | 2023-07-06T19:57:33Z | 2023-07-06T19:57:33Z | 2023-07-06T20:29:50Z |
CLN: remove unreachable, unnecessary axis kwd | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b806ddbaa89ba..ff20d60a40a32 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8035,10 +8035,8 @@ def interpolate(
)
else:
index = missing.get_interp_index(method, obj.index)
- axis = self._info_axis_number
new_data = obj._mgr.interpolate(
method=method,
- axis=axis,
index=index,
limit=limit,
limit_direction=limit_direction,
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 8923faf444953..067544636ccbf 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1396,7 +1396,6 @@ def interpolate(
self,
*,
method: InterpolateOptions,
- axis: AxisInt,
index: Index,
inplace: bool = False,
limit: int | None = None,
@@ -1427,27 +1426,12 @@ def interpolate(
return [self.copy(deep=False)]
return [self] if inplace else [self.copy()]
- if self.is_object and self.ndim == 2 and self.shape[0] != 1 and axis == 0:
- # split improves performance in ndarray.copy()
- return self.split_and_operate(
- type(self).interpolate,
- method=method,
- axis=axis,
- index=index,
- inplace=inplace,
- limit=limit,
- limit_direction=limit_direction,
- limit_area=limit_area,
- downcast=downcast,
- **kwargs,
- )
-
copy, refs = self._get_refs_and_copy(using_cow, inplace)
# Dispatch to the EA method.
new_values = self.array_values.interpolate(
method=method,
- axis=axis,
+ axis=self.ndim - 1,
index=index,
limit=limit,
limit_direction=limit_direction,
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/53991 | 2023-07-03T22:29:32Z | 2023-07-06T19:58:37Z | 2023-07-06T19:58:37Z | 2023-07-06T20:29:33Z |
BUG: missing fstring | diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 536ae7ee4673b..9173b7e8b1449 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -704,7 +704,7 @@ cdef datetime dateutil_parse(
# we get tzlocal, once the deprecation is enforced will get
# timezone.utc, not raise.
warnings.warn(
- "Parsing '{res.tzname}' as tzlocal (dependent on system timezone) "
+ f"Parsing '{res.tzname}' as tzlocal (dependent on system timezone) "
"is deprecated and will raise in a future version. Pass the 'tz' "
"keyword or call tz_localize after construction instead",
FutureWarning,
diff --git a/pandas/tests/tslibs/test_parsing.py b/pandas/tests/tslibs/test_parsing.py
index 8408c9df5962b..2c8a6827a3bf1 100644
--- a/pandas/tests/tslibs/test_parsing.py
+++ b/pandas/tests/tslibs/test_parsing.py
@@ -29,7 +29,10 @@
)
def test_parsing_tzlocal_deprecated():
# GH#50791
- msg = "Pass the 'tz' keyword or call tz_localize after construction instead"
+ msg = (
+ "Parsing 'EST' as tzlocal.*"
+ "Pass the 'tz' keyword or call tz_localize after construction instead"
+ )
dtstr = "Jan 15 2004 03:00 EST"
with tm.set_timezone("US/Eastern"):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
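The bug class here is easy to reproduce: without the `f` prefix, `{res.tzname}` is literal text rather than an interpolated attribute, so the warning printed the placeholder verbatim:

```python
class Res:
    tzname = "EST"

res = Res()

broken = "Parsing '{res.tzname}' as tzlocal"  # missing f-prefix: braces kept as-is
fixed = f"Parsing '{res.tzname}' as tzlocal"  # interpolates the attribute
```

The test update in the diff then matches on the interpolated `'EST'`, so a regression would fail the match.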
| https://api.github.com/repos/pandas-dev/pandas/pulls/53990 | 2023-07-03T20:55:27Z | 2023-07-06T19:56:44Z | 2023-07-06T19:56:44Z | 2023-07-06T20:30:16Z |
REF: swap axis before calling Manager.pad_or_backfill | diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 570e48344c961..5f02053a454ed 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -247,7 +247,7 @@ def pad_or_backfill(
meth = missing.clean_fill_method(method)
missing.pad_or_backfill_inplace(
- out_data,
+ out_data.T,
method=meth,
axis=0,
limit=limit,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b806ddbaa89ba..bc866424c8137 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6911,7 +6911,7 @@ def _pad_or_backfill(
new_mgr = self._mgr.pad_or_backfill(
method=method,
- axis=axis,
+ axis=self._get_block_manager_axis(axis),
limit=limit,
inplace=inplace,
downcast=downcast,
@@ -8027,7 +8027,7 @@ def interpolate(
new_data = obj._mgr.pad_or_backfill(
method=method,
- axis=axis,
+ axis=self._get_block_manager_axis(axis),
limit=limit,
limit_area=limit_area,
inplace=inplace,
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 1d572dbfd5386..d5235066d378b 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1893,8 +1893,8 @@ def pad_or_backfill(
using_cow: bool = False,
) -> list[Block]:
values = self.values
- if values.ndim == 2 and axis == 0:
- # NDArrayBackedExtensionArray.fillna assumes axis=1
+ if values.ndim == 2 and axis == 1:
+ # NDArrayBackedExtensionArray.fillna assumes axis=0
new_values = values.T.fillna(method=method, limit=limit).T
else:
new_values = values.fillna(method=method, limit=limit)
diff --git a/pandas/tests/extension/base/dim2.py b/pandas/tests/extension/base/dim2.py
index 85f01b1ee5d5e..6847c5c183267 100644
--- a/pandas/tests/extension/base/dim2.py
+++ b/pandas/tests/extension/base/dim2.py
@@ -159,11 +159,27 @@ def test_fillna_2d_method(self, data_missing, method):
assert arr[0].isna().all()
assert not arr[1].isna().any()
- result = arr.fillna(method=method)
+ try:
+ result = arr.pad_or_backfill(method=method, limit=None)
+ except AttributeError:
+ result = arr.fillna(method=method, limit=None)
expected = data_missing.fillna(method=method).repeat(2).reshape(2, 2)
self.assert_extension_array_equal(result, expected)
+ # Reverse so that backfill is not a no-op.
+ arr2 = arr[::-1]
+ assert not arr2[0].isna().any()
+ assert arr2[1].isna().all()
+
+ try:
+ result2 = arr2.pad_or_backfill(method=method, limit=None)
+ except AttributeError:
+ result2 = arr2.fillna(method=method, limit=None)
+
+ expected2 = data_missing[::-1].fillna(method=method).repeat(2).reshape(2, 2)
+ self.assert_extension_array_equal(result2, expected2)
+
@pytest.mark.parametrize("method", ["mean", "median", "var", "std", "sum", "prod"])
def test_reductions_2d_axis_none(self, data, method):
arr2d = data.reshape(1, -1)
| This was causing some major headaches in trying to implement #53621. An upcoming PR that implements that will get rid of the ugly try/except in the dim2 test. | https://api.github.com/repos/pandas-dev/pandas/pulls/53989 | 2023-07-03T20:01:21Z | 2023-07-06T20:53:32Z | 2023-07-06T20:53:32Z | 2023-07-06T20:58:28Z |
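The patch above leans on the identity that filling along one axis of a 2-D block is the same as transposing, filling along the other axis, and transposing back (hence `out_data.T` and `values.T.fillna(...).T` in the diff). A minimal sketch of that identity using the public API rather than the internal Manager:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1.0, np.nan], [np.nan, 2.0], [3.0, np.nan]])

# Forward-filling down each column (axis=0) equals transposing, filling
# along the other axis, and transposing back -- the identity the patch
# relies on when it transposes the data instead of flipping ``axis``.
direct = df.ffill(axis=0)
via_transpose = df.T.ffill(axis=1).T

assert direct.equals(via_transpose)
```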
DOC: Clean up CSV sniffing and chunking examples | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 0084e885db2b5..ec0e7d0636b07 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -1568,8 +1568,7 @@ class of the csv module. For this, you have to specify ``sep=None``.
.. ipython:: python
df = pd.DataFrame(np.random.randn(10, 4))
- df.to_csv("tmp.csv", sep="|")
- df.to_csv("tmp2.csv", sep=":")
+ df.to_csv("tmp2.csv", sep=":", index=False)
pd.read_csv("tmp2.csv", sep=None, engine="python")
.. ipython:: python
@@ -1597,8 +1596,8 @@ rather than reading the entire file into memory, such as the following:
.. ipython:: python
df = pd.DataFrame(np.random.randn(10, 4))
- df.to_csv("tmp.csv", sep="|")
- table = pd.read_csv("tmp.csv", sep="|")
+ df.to_csv("tmp.csv", index=False)
+ table = pd.read_csv("tmp.csv")
table
@@ -1607,8 +1606,8 @@ value will be an iterable object of type ``TextFileReader``:
.. ipython:: python
- with pd.read_csv("tmp.csv", sep="|", chunksize=4) as reader:
- reader
+ with pd.read_csv("tmp.csv", chunksize=4) as reader:
+ print(reader)
for chunk in reader:
print(chunk)
@@ -1620,8 +1619,8 @@ Specifying ``iterator=True`` will also return the ``TextFileReader`` object:
.. ipython:: python
- with pd.read_csv("tmp.csv", sep="|", iterator=True) as reader:
- reader.get_chunk(5)
+ with pd.read_csv("tmp.csv", iterator=True) as reader:
+ print(reader.get_chunk(5))
.. ipython:: python
:suppress:
| Sniffing:
- Had an unused "to_csv" call
Chunking:
- Used an unnecessary "sep"
- Had two no-op statements, which seem like they were meant to be printed
Both
- Put the index as a column unnecessarily
---
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). | https://api.github.com/repos/pandas-dev/pandas/pulls/53987 | 2023-07-03T18:18:31Z | 2023-07-06T20:03:50Z | 2023-07-06T20:03:50Z | 2023-07-07T02:03:48Z |
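The chunking examples this PR cleans up boil down to one behavior: passing `chunksize` makes `read_csv` return a context-managed `TextFileReader` that yields DataFrames instead of loading the whole file. A self-contained illustration (using an in-memory buffer instead of the docs' `tmp.csv`):

```python
from io import StringIO

import pandas as pd

csv_data = "a,b\n1,2\n3,4\n5,6\n7,8\n9,10\n"

# With chunksize, read_csv returns a TextFileReader that yields frames of
# at most ``chunksize`` rows rather than one DataFrame for the whole file.
with pd.read_csv(StringIO(csv_data), chunksize=2) as reader:
    chunks = list(reader)

# Concatenating the chunks reproduces the frame read in a single pass.
full = pd.read_csv(StringIO(csv_data))
assert len(chunks) == 3
assert pd.concat(chunks).equals(full)
```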
REF: Separate groupby, rolling, and window agg/apply list/dict-like | diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 6af4557897a0d..fc322312a9195 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -2,14 +2,12 @@
import abc
from collections import defaultdict
-from contextlib import nullcontext
from functools import partial
import inspect
from typing import (
TYPE_CHECKING,
Any,
Callable,
- ContextManager,
DefaultDict,
Dict,
Hashable,
@@ -57,7 +55,6 @@
ABCSeries,
)
-from pandas.core.base import SelectionMixin
import pandas.core.common as com
from pandas.core.construction import ensure_wrapped_if_datetimelike
@@ -144,6 +141,18 @@ def __init__(
def apply(self) -> DataFrame | Series:
pass
+ @abc.abstractmethod
+ def agg_or_apply_list_like(
+ self, op_name: Literal["agg", "apply"]
+ ) -> DataFrame | Series:
+ pass
+
+ @abc.abstractmethod
+ def agg_or_apply_dict_like(
+ self, op_name: Literal["agg", "apply"]
+ ) -> DataFrame | Series:
+ pass
+
def agg(self) -> DataFrame | Series | None:
"""
Provide an implementation for the aggregators.
@@ -300,80 +309,76 @@ def agg_list_like(self) -> DataFrame | Series:
"""
return self.agg_or_apply_list_like(op_name="agg")
- def agg_or_apply_list_like(
- self, op_name: Literal["agg", "apply"]
- ) -> DataFrame | Series:
- from pandas.core.groupby.generic import (
- DataFrameGroupBy,
- SeriesGroupBy,
- )
- from pandas.core.reshape.concat import concat
-
- obj = self.obj
- func = cast(List[AggFuncTypeBase], self.func)
- kwargs = self.kwargs
- if op_name == "apply":
- if isinstance(self, FrameApply):
- by_row = self.by_row
-
- elif isinstance(self, SeriesApply):
- by_row = "_compat" if self.by_row else False
- else:
- by_row = False
- kwargs = {**kwargs, "by_row": by_row}
+ def compute_list_like(
+ self,
+ op_name: Literal["agg", "apply"],
+ selected_obj: Series | DataFrame,
+ kwargs: dict[str, Any],
+ ) -> tuple[list[Hashable], list[Any]]:
+ """
+ Compute agg/apply results for list-like input.
- if getattr(obj, "axis", 0) == 1:
- raise NotImplementedError("axis other than 0 is not supported")
+ Parameters
+ ----------
+ op_name : {"agg", "apply"}
+ Operation being performed.
+ selected_obj : Series or DataFrame
+ Data to perform operation on.
+ kwargs : dict
+ Keyword arguments to pass to the functions.
- if not isinstance(obj, SelectionMixin):
- # i.e. obj is Series or DataFrame
- selected_obj = obj
- elif obj._selected_obj.ndim == 1:
- # For SeriesGroupBy this matches _obj_with_exclusions
- selected_obj = obj._selected_obj
- else:
- selected_obj = obj._obj_with_exclusions
+ Returns
+ -------
+ keys : list[hashable]
+ Index labels for result.
+ results : list
+ Data for result. When aggregating with a Series, this can contain any
+ Python objects.
+ """
+ func = cast(List[AggFuncTypeBase], self.func)
+ obj = self.obj
results = []
keys = []
- is_groupby = isinstance(obj, (DataFrameGroupBy, SeriesGroupBy))
+ # degenerate case
+ if selected_obj.ndim == 1:
+ for a in func:
+ colg = obj._gotitem(selected_obj.name, ndim=1, subset=selected_obj)
+ args = (
+ [self.axis, *self.args]
+ if include_axis(op_name, colg)
+ else self.args
+ )
+ new_res = getattr(colg, op_name)(a, *args, **kwargs)
+ results.append(new_res)
- context_manager: ContextManager
- if is_groupby:
- # When as_index=False, we combine all results using indices
- # and adjust index after
- context_manager = com.temp_setattr(obj, "as_index", True)
- else:
- context_manager = nullcontext()
+ # make sure we find a good name
+ name = com.get_callable_name(a) or a
+ keys.append(name)
- def include_axis(colg) -> bool:
- return isinstance(colg, ABCDataFrame) or (
- isinstance(colg, ABCSeries) and op_name == "agg"
- )
+ else:
+ indices = []
+ for index, col in enumerate(selected_obj):
+ colg = obj._gotitem(col, ndim=1, subset=selected_obj.iloc[:, index])
+ args = (
+ [self.axis, *self.args]
+ if include_axis(op_name, colg)
+ else self.args
+ )
+ new_res = getattr(colg, op_name)(func, *args, **kwargs)
+ results.append(new_res)
+ indices.append(index)
+ keys = selected_obj.columns.take(indices)
- with context_manager:
- # degenerate case
- if selected_obj.ndim == 1:
- for a in func:
- colg = obj._gotitem(selected_obj.name, ndim=1, subset=selected_obj)
- args = [self.axis, *self.args] if include_axis(colg) else self.args
- new_res = getattr(colg, op_name)(a, *args, **kwargs)
- results.append(new_res)
+ return keys, results
- # make sure we find a good name
- name = com.get_callable_name(a) or a
- keys.append(name)
+ def wrap_results_list_like(
+ self, keys: list[Hashable], results: list[Series | DataFrame]
+ ):
+ from pandas.core.reshape.concat import concat
- else:
- indices = []
- for index, col in enumerate(selected_obj):
- colg = obj._gotitem(col, ndim=1, subset=selected_obj.iloc[:, index])
- args = [self.axis, *self.args] if include_axis(colg) else self.args
- new_res = getattr(colg, op_name)(func, *args, **kwargs)
- results.append(new_res)
- indices.append(index)
- keys = selected_obj.columns.take(indices)
+ obj = self.obj
try:
return concat(results, keys=keys, axis=1, sort=False)
@@ -399,100 +404,93 @@ def agg_dict_like(self) -> DataFrame | Series:
"""
return self.agg_or_apply_dict_like(op_name="agg")
- def agg_or_apply_dict_like(
- self, op_name: Literal["agg", "apply"]
- ) -> DataFrame | Series:
- from pandas import Index
- from pandas.core.groupby.generic import (
- DataFrameGroupBy,
- SeriesGroupBy,
- )
- from pandas.core.reshape.concat import concat
-
- assert op_name in ["agg", "apply"]
+ def compute_dict_like(
+ self,
+ op_name: Literal["agg", "apply"],
+ selected_obj: Series | DataFrame,
+ selection: Hashable | Sequence[Hashable],
+ kwargs: dict[str, Any],
+ ) -> tuple[list[Hashable], list[Any]]:
+ """
+ Compute agg/apply results for dict-like input.
+
+ Parameters
+ ----------
+ op_name : {"agg", "apply"}
+ Operation being performed.
+ selected_obj : Series or DataFrame
+ Data to perform operation on.
+ selection : hashable or sequence of hashables
+ Used by GroupBy, Window, and Resample if selection is applied to the object.
+ kwargs : dict
+ Keyword arguments to pass to the functions.
+ Returns
+ -------
+ keys : list[hashable]
+ Index labels for result.
+ results : list
+ Data for result. When aggregating with a Series, this can contain any
+ Python object.
+ """
obj = self.obj
func = cast(AggFuncTypeDict, self.func)
- kwargs = {}
- if op_name == "apply":
- by_row = "_compat" if self.by_row else False
- kwargs.update({"by_row": by_row})
-
- if getattr(obj, "axis", 0) == 1:
- raise NotImplementedError("axis other than 0 is not supported")
-
- if not isinstance(obj, SelectionMixin):
- # i.e. obj is Series or DataFrame
- selected_obj = obj
- selection = None
- else:
- selected_obj = obj._selected_obj
- selection = obj._selection
-
func = self.normalize_dictlike_arg(op_name, selected_obj, func)
- is_groupby = isinstance(obj, (DataFrameGroupBy, SeriesGroupBy))
- context_manager: ContextManager
- if is_groupby:
- # When as_index=False, we combine all results using indices
- # and adjust index after
- context_manager = com.temp_setattr(obj, "as_index", True)
- else:
- context_manager = nullcontext()
-
is_non_unique_col = (
selected_obj.ndim == 2
and selected_obj.columns.nunique() < len(selected_obj.columns)
)
- # Numba Groupby engine/engine-kwargs passthrough
- if is_groupby:
- engine = self.kwargs.get("engine", None)
- engine_kwargs = self.kwargs.get("engine_kwargs", None)
- kwargs.update({"engine": engine, "engine_kwargs": engine_kwargs})
-
- with context_manager:
- if selected_obj.ndim == 1:
- # key only used for output
- colg = obj._gotitem(selection, ndim=1)
- result_data = [
- getattr(colg, op_name)(how, **kwargs) for _, how in func.items()
+ if selected_obj.ndim == 1:
+ # key only used for output
+ colg = obj._gotitem(selection, ndim=1)
+ results = [getattr(colg, op_name)(how, **kwargs) for _, how in func.items()]
+ keys = list(func.keys())
+ elif is_non_unique_col:
+ # key used for column selection and output
+ # GH#51099
+ results = []
+ keys = []
+ for key, how in func.items():
+ indices = selected_obj.columns.get_indexer_for([key])
+ labels = selected_obj.columns.take(indices)
+ label_to_indices = defaultdict(list)
+ for index, label in zip(indices, labels):
+ label_to_indices[label].append(index)
+
+ key_data = [
+ getattr(selected_obj._ixs(indice, axis=1), op_name)(how, **kwargs)
+ for label, indices in label_to_indices.items()
+ for indice in indices
]
- result_index = list(func.keys())
- elif is_non_unique_col:
- # key used for column selection and output
- # GH#51099
- result_data = []
- result_index = []
- for key, how in func.items():
- indices = selected_obj.columns.get_indexer_for([key])
- labels = selected_obj.columns.take(indices)
- label_to_indices = defaultdict(list)
- for index, label in zip(indices, labels):
- label_to_indices[label].append(index)
-
- key_data = [
- getattr(selected_obj._ixs(indice, axis=1), op_name)(
- how, **kwargs
- )
- for label, indices in label_to_indices.items()
- for indice in indices
- ]
-
- result_index += [key] * len(key_data)
- result_data += key_data
- else:
- # key used for column selection and output
- result_data = [
- getattr(obj._gotitem(key, ndim=1), op_name)(how, **kwargs)
- for key, how in func.items()
- ]
- result_index = list(func.keys())
+
+ keys += [key] * len(key_data)
+ results += key_data
+ else:
+ # key used for column selection and output
+ results = [
+ getattr(obj._gotitem(key, ndim=1), op_name)(how, **kwargs)
+ for key, how in func.items()
+ ]
+ keys = list(func.keys())
+
+ return keys, results
+
+ def wrap_results_dict_like(
+ self,
+ selected_obj: Series | DataFrame,
+ result_index: list[Hashable],
+ result_data: list,
+ ):
+ from pandas import Index
+ from pandas.core.reshape.concat import concat
+
+ obj = self.obj
# Avoid making two isinstance calls in all and any below
is_ndframe = [isinstance(r, ABCNDFrame) for r in result_data]
- # combine results
if all(is_ndframe):
results = dict(zip(result_index, result_data))
keys_to_use: Iterable[Hashable]
@@ -693,6 +691,50 @@ def index(self) -> Index:
def agg_axis(self) -> Index:
return self.obj._get_agg_axis(self.axis)
+ def agg_or_apply_list_like(
+ self, op_name: Literal["agg", "apply"]
+ ) -> DataFrame | Series:
+ obj = self.obj
+ kwargs = self.kwargs
+
+ if op_name == "apply":
+ if isinstance(self, FrameApply):
+ by_row = self.by_row
+
+ elif isinstance(self, SeriesApply):
+ by_row = "_compat" if self.by_row else False
+ else:
+ by_row = False
+ kwargs = {**kwargs, "by_row": by_row}
+
+ if getattr(obj, "axis", 0) == 1:
+ raise NotImplementedError("axis other than 0 is not supported")
+
+ keys, results = self.compute_list_like(op_name, obj, kwargs)
+ result = self.wrap_results_list_like(keys, results)
+ return result
+
+ def agg_or_apply_dict_like(
+ self, op_name: Literal["agg", "apply"]
+ ) -> DataFrame | Series:
+ assert op_name in ["agg", "apply"]
+ obj = self.obj
+
+ kwargs = {}
+ if op_name == "apply":
+ by_row = "_compat" if self.by_row else False
+ kwargs.update({"by_row": by_row})
+
+ if getattr(obj, "axis", 0) == 1:
+ raise NotImplementedError("axis other than 0 is not supported")
+
+ selection = None
+ result_index, result_data = self.compute_dict_like(
+ op_name, obj, selection, kwargs
+ )
+ result = self.wrap_results_dict_like(obj, result_index, result_data)
+ return result
+
class FrameApply(NDFrameApply):
obj: DataFrame
@@ -1258,6 +1300,8 @@ def curried(x):
class GroupByApply(Apply):
+ obj: GroupBy | Resampler | BaseWindow
+
def __init__(
self,
obj: GroupBy[NDFrameT],
@@ -1283,8 +1327,73 @@ def apply(self):
def transform(self):
raise NotImplementedError
+ def agg_or_apply_list_like(
+ self, op_name: Literal["agg", "apply"]
+ ) -> DataFrame | Series:
+ obj = self.obj
+ kwargs = self.kwargs
+ if op_name == "apply":
+ kwargs = {**kwargs, "by_row": False}
+
+ if getattr(obj, "axis", 0) == 1:
+ raise NotImplementedError("axis other than 0 is not supported")
-class ResamplerWindowApply(Apply):
+ if obj._selected_obj.ndim == 1:
+ # For SeriesGroupBy this matches _obj_with_exclusions
+ selected_obj = obj._selected_obj
+ else:
+ selected_obj = obj._obj_with_exclusions
+
+ # Only set as_index=True on groupby objects, not Window or Resample
+ # that inherit from this class.
+ with com.temp_setattr(
+ obj, "as_index", True, condition=hasattr(obj, "as_index")
+ ):
+ keys, results = self.compute_list_like(op_name, selected_obj, kwargs)
+ result = self.wrap_results_list_like(keys, results)
+ return result
+
+ def agg_or_apply_dict_like(
+ self, op_name: Literal["agg", "apply"]
+ ) -> DataFrame | Series:
+ from pandas.core.groupby.generic import (
+ DataFrameGroupBy,
+ SeriesGroupBy,
+ )
+
+ assert op_name in ["agg", "apply"]
+
+ obj = self.obj
+ kwargs = {}
+ if op_name == "apply":
+ by_row = "_compat" if self.by_row else False
+ kwargs.update({"by_row": by_row})
+
+ if getattr(obj, "axis", 0) == 1:
+ raise NotImplementedError("axis other than 0 is not supported")
+
+ selected_obj = obj._selected_obj
+ selection = obj._selection
+
+ is_groupby = isinstance(obj, (DataFrameGroupBy, SeriesGroupBy))
+
+ # Numba Groupby engine/engine-kwargs passthrough
+ if is_groupby:
+ engine = self.kwargs.get("engine", None)
+ engine_kwargs = self.kwargs.get("engine_kwargs", None)
+ kwargs.update({"engine": engine, "engine_kwargs": engine_kwargs})
+
+ with com.temp_setattr(
+ obj, "as_index", True, condition=hasattr(obj, "as_index")
+ ):
+ result_index, result_data = self.compute_dict_like(
+ op_name, selected_obj, selection, kwargs
+ )
+ result = self.wrap_results_dict_like(selected_obj, result_index, result_data)
+ return result
+
+
+class ResamplerWindowApply(GroupByApply):
axis: AxisInt = 0
obj: Resampler | BaseWindow
@@ -1296,7 +1405,7 @@ def __init__(
args,
kwargs,
) -> None:
- super().__init__(
+ super(GroupByApply, self).__init__(
obj,
func,
raw=False,
@@ -1699,6 +1808,12 @@ def validate_func_kwargs(
return columns, func
+def include_axis(op_name: Literal["agg", "apply"], colg: Series | DataFrame) -> bool:
+ return isinstance(colg, ABCDataFrame) or (
+ isinstance(colg, ABCSeries) and op_name == "agg"
+ )
+
+
def warn_alias_replacement(
obj: AggObjType,
func: Callable,
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 9db03ac3ae571..c1d78f7c19c98 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -524,23 +524,29 @@ def convert_to_list_like(
@contextlib.contextmanager
-def temp_setattr(obj, attr: str, value) -> Generator[None, None, None]:
+def temp_setattr(
+ obj, attr: str, value, condition: bool = True
+) -> Generator[None, None, None]:
"""Temporarily set attribute on an object.
Args:
obj: Object whose attribute will be modified.
attr: Attribute to modify.
value: Value to temporarily set attribute to.
+ condition: Whether to set the attribute. Provided in order to not have to
+ conditionally use this context manager.
Yields:
obj with modified attribute.
"""
- old_value = getattr(obj, attr)
- setattr(obj, attr, value)
+ if condition:
+ old_value = getattr(obj, attr)
+ setattr(obj, attr, value)
try:
yield obj
finally:
- setattr(obj, attr, old_value)
+ if condition:
+ setattr(obj, attr, old_value)
def require_length_match(data, index: Index) -> None:
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Should enable work on #53839. | https://api.github.com/repos/pandas-dev/pandas/pulls/53986 | 2023-07-03T17:30:22Z | 2023-07-09T08:58:55Z | 2023-07-09T08:58:55Z | 2023-07-09T21:59:06Z |
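The new `condition` flag added to `com.temp_setattr` in this diff exists so callers like `GroupByApply` can wrap Window/Resample objects (which have no `as_index`) in the same `with` block as groupby objects. A standalone sketch of the pattern — names mirror the diff, nothing here imports pandas internals:

```python
import contextlib

@contextlib.contextmanager
def temp_setattr(obj, attr, value, condition=True):
    # Only touch the attribute when ``condition`` holds, so callers do not
    # have to branch around the ``with`` statement.
    if condition:
        old_value = getattr(obj, attr)
        setattr(obj, attr, value)
    try:
        yield obj
    finally:
        if condition:
            setattr(obj, attr, old_value)

class Grouper:
    as_index = False

class Roller:  # stand-in for Window/Resample: no ``as_index`` attribute
    pass

g = Grouper()
with temp_setattr(g, "as_index", True, condition=hasattr(g, "as_index")):
    assert g.as_index is True
assert g.as_index is False  # restored on exit

r = Roller()
with temp_setattr(r, "as_index", True, condition=hasattr(r, "as_index")):
    assert not hasattr(r, "as_index")  # condition False: nothing was set
```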
DOC: Fixing EX01 - Added examples | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index a67dc66b26d34..2c43bd794b798 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -118,17 +118,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
pandas.core.window.rolling.Window.sum \
pandas.core.window.rolling.Window.var \
pandas.core.window.rolling.Window.std \
- pandas.core.window.expanding.Expanding.count \
- pandas.core.window.expanding.Expanding.sum \
- pandas.core.window.expanding.Expanding.mean \
- pandas.core.window.expanding.Expanding.median \
- pandas.core.window.expanding.Expanding.min \
- pandas.core.window.expanding.Expanding.max \
- pandas.core.window.expanding.Expanding.corr \
- pandas.core.window.expanding.Expanding.cov \
- pandas.core.window.expanding.Expanding.skew \
- pandas.core.window.expanding.Expanding.apply \
- pandas.core.window.expanding.Expanding.quantile \
pandas.core.window.ewm.ExponentialMovingWindow.mean \
pandas.core.window.ewm.ExponentialMovingWindow.sum \
pandas.core.window.ewm.ExponentialMovingWindow.std \
diff --git a/pandas/core/window/expanding.py b/pandas/core/window/expanding.py
index 19dd98851611f..ec4c23bfc5e49 100644
--- a/pandas/core/window/expanding.py
+++ b/pandas/core/window/expanding.py
@@ -183,7 +183,19 @@ def aggregate(self, func, *args, **kwargs):
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
+ >>> ser.expanding().count()
+ a 1.0
+ b 2.0
+ c 3.0
+ d 4.0
+ dtype: float64
+ """
+ ),
window_method="expanding",
aggregation_description="count of non NaN observations",
agg_method="count",
@@ -198,7 +210,19 @@ def count(self, numeric_only: bool = False):
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
+ >>> ser.expanding().apply(lambda s: s.max() - 2 * s.min())
+ a -1.0
+ b 0.0
+ c 1.0
+ d 2.0
+ dtype: float64
+ """
+ ),
window_method="expanding",
aggregation_description="custom aggregation function",
agg_method="apply",
@@ -231,7 +255,19 @@ def apply(
create_section_header("See Also"),
template_see_also,
create_section_header("Notes"),
- numba_notes[:-1],
+ numba_notes,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
+ >>> ser.expanding().sum()
+ a 1.0
+ b 3.0
+ c 6.0
+ d 10.0
+ dtype: float64
+ """
+ ),
window_method="expanding",
aggregation_description="sum",
agg_method="sum",
@@ -258,7 +294,19 @@ def sum(
create_section_header("See Also"),
template_see_also,
create_section_header("Notes"),
- numba_notes[:-1],
+ numba_notes,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([3, 2, 1, 4], index=['a', 'b', 'c', 'd'])
+ >>> ser.expanding().max()
+ a 3.0
+ b 3.0
+ c 3.0
+ d 4.0
+ dtype: float64
+ """
+ ),
window_method="expanding",
aggregation_description="maximum",
agg_method="max",
@@ -285,7 +333,19 @@ def max(
create_section_header("See Also"),
template_see_also,
create_section_header("Notes"),
- numba_notes[:-1],
+ numba_notes,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([2, 3, 4, 1], index=['a', 'b', 'c', 'd'])
+ >>> ser.expanding().min()
+ a 2.0
+ b 2.0
+ c 2.0
+ d 1.0
+ dtype: float64
+ """
+ ),
window_method="expanding",
aggregation_description="minimum",
agg_method="min",
@@ -312,7 +372,19 @@ def min(
create_section_header("See Also"),
template_see_also,
create_section_header("Notes"),
- numba_notes[:-1],
+ numba_notes,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
+ >>> ser.expanding().mean()
+ a 1.0
+ b 1.5
+ c 2.0
+ d 2.5
+ dtype: float64
+ """
+ ),
window_method="expanding",
aggregation_description="mean",
agg_method="mean",
@@ -339,7 +411,19 @@ def mean(
create_section_header("See Also"),
template_see_also,
create_section_header("Notes"),
- numba_notes[:-1],
+ numba_notes,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
+ >>> ser.expanding().median()
+ a 1.0
+ b 1.5
+ c 2.0
+ d 2.5
+ dtype: float64
+ """
+ ),
window_method="expanding",
aggregation_description="median",
agg_method="median",
@@ -523,7 +607,20 @@ def sem(self, ddof: int = 1, numeric_only: bool = False):
"scipy.stats.skew : Third moment of a probability density.\n",
template_see_also,
create_section_header("Notes"),
- "A minimum of three periods is required for the rolling calculation.\n",
+ "A minimum of three periods is required for the rolling calculation.\n\n",
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([-1, 0, 2, -1, 2], index=['a', 'b', 'c', 'd', 'e'])
+ >>> ser.expanding().skew()
+ a NaN
+ b NaN
+ c 0.935220
+ d 1.414214
+ e 0.315356
+ dtype: float64
+ """
+ ),
window_method="expanding",
aggregation_description="unbiased skewness",
agg_method="skew",
@@ -597,7 +694,21 @@ def kurt(self, numeric_only: bool = False):
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([1, 2, 3, 4, 5, 6], index=['a', 'b', 'c', 'd', 'e', 'f'])
+ >>> ser.expanding(min_periods=4).quantile(.25)
+ a NaN
+ b NaN
+ c NaN
+ d 1.75
+ e 2.00
+ f 2.25
+ dtype: float64
+ """
+ ),
window_method="expanding",
aggregation_description="quantile",
agg_method="quantile",
@@ -714,7 +825,20 @@ def rank(
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser1 = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
+ >>> ser2 = pd.Series([10, 11, 13, 16], index=['a', 'b', 'c', 'd'])
+ >>> ser1.expanding().cov(ser2)
+ a NaN
+ b 0.500000
+ c 1.500000
+ d 3.333333
+ dtype: float64
+ """
+ ),
window_method="expanding",
aggregation_description="sample covariance",
agg_method="cov",
@@ -782,9 +906,22 @@ def cov(
columns on the second level.
In the case of missing elements, only complete pairwise observations
- will be used.
+ will be used.\n
"""
- ).replace("\n", "", 1),
+ ),
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser1 = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
+ >>> ser2 = pd.Series([10, 11, 13, 16], index=['a', 'b', 'c', 'd'])
+ >>> ser1.expanding().corr(ser2)
+ a NaN
+ b 1.000000
+ c 0.981981
+ d 0.975900
+ dtype: float64
+ """
+ ),
window_method="expanding",
aggregation_description="correlation",
agg_method="corr",
| - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Towards https://github.com/pandas-dev/pandas/issues/37875
It's not possible to locally make the html for each method. | https://api.github.com/repos/pandas-dev/pandas/pulls/53985 | 2023-07-03T16:36:14Z | 2023-07-05T07:37:38Z | 2023-07-05T07:37:38Z | 2023-07-05T07:54:30Z |
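The expected outputs added in this PR follow directly from expanding-window semantics: each window runs from the first element up to and including the current one. A quick check of a few of the documented results:

```python
import pandas as pd

ser = pd.Series([1, 2, 3, 4], index=["a", "b", "c", "d"])

# Expanding windows grow from the start, so ``sum`` is a cumulative sum,
# ``count`` counts the observations seen so far, and ``mean`` averages them.
assert ser.expanding().sum().tolist() == [1.0, 3.0, 6.0, 10.0]
assert ser.expanding().count().tolist() == [1.0, 2.0, 3.0, 4.0]
assert ser.expanding().mean().tolist() == [1.0, 1.5, 2.0, 2.5]
```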
WEB/DOC: use Plausible for analytics (using Scientific Python server) | diff --git a/doc/source/conf.py b/doc/source/conf.py
index 31893bdf929d8..69a3a7187ecc2 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -240,7 +240,10 @@
"footer_start": ["pandas_footer", "sphinx-version"],
"github_url": "https://github.com/pandas-dev/pandas",
"twitter_url": "https://twitter.com/pandas_dev",
- "analytics": {"google_analytics_id": "G-5RE31C1RNW"},
+ "analytics": {
+ "plausible_analytics_domain": "pandas.pydata.org",
+ "plausible_analytics_url": "https://views.scientific-python.org/js/script.js",
+ },
"logo": {"image_dark": "https://pandas.pydata.org/static/img/pandas_white.svg"},
"navbar_end": ["version-switcher", "theme-switcher", "navbar-icon-links"],
"switcher": {
diff --git a/web/pandas/_templates/layout.html b/web/pandas/_templates/layout.html
index d9824f4641667..c8025aeef3791 100644
--- a/web/pandas/_templates/layout.html
+++ b/web/pandas/_templates/layout.html
@@ -1,13 +1,7 @@
<!DOCTYPE html>
<html>
<head>
- <script async="async" src="https://www.googletagmanager.com/gtag/js?id=G-5RE31C1RNW"></script>
- <script>
- window.dataLayer = window.dataLayer || [];
- function gtag(){ dataLayer.push(arguments); }
- gtag('js', new Date());
- gtag('config', 'G-5RE31C1RNW');
- </script>
+ <script defer data-domain="pandas.pydata.org" src="https://views.scientific-python.org/js/script.js"></script>
<title>pandas - Python Data Analysis Library</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
| This proposes to switch from our current usage of Google Analytics to Plausible.
(currently it fully switches, but we could also first add it alongside Google Analytics so we have both for some time)
Plausible is a privacy-friendly, open-source alternative to Google Analytics (https://plausible.io/).
The server we are using here is a self-hosted one run by the Scientific Python team (using one from plausible.io costs some money). But the disclaimer I got from @stefanv is that it's a small server without professional sysadmin support that is currently not automatically backed up (but we could set up a cron job to query the data and dump them somewhere). I think, given that this is also being used by other projects (numpy, scipy, scikit-learn, ...), that this is fine, and if problems come up, we will be able to look for solutions together.
The main advantage is that it's privacy-friendly (not collecting data for a big corporation). I think our current usage of Google Analytics is actually not correct (I assume we should have a pop-up to ask for permission).
The consequence of that is that Plausible cannot track "unique visitors" over time (only per day), for example per month (https://plausible.io/data-policy#how-we-count-unique-users-without-cookies). | https://api.github.com/repos/pandas-dev/pandas/pulls/53984 | 2023-07-03T15:54:30Z | 2023-08-15T19:18:05Z | 2023-08-15T19:18:05Z | 2024-01-22T16:51:26Z |
DOC: Fixing EX01 - Added examples | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index f9020a192e5b7..bf0711dcc0581 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -110,10 +110,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
pandas_object \
pandas.api.interchange.from_dataframe \
pandas.DatetimeIndex.snap \
- pandas.core.window.rolling.Window.mean \
- pandas.core.window.rolling.Window.sum \
- pandas.core.window.rolling.Window.var \
- pandas.core.window.rolling.Window.std \
pandas.core.window.ewm.ExponentialMovingWindow.mean \
pandas.core.window.ewm.ExponentialMovingWindow.sum \
pandas.core.window.ewm.ExponentialMovingWindow.std \
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 9778651814b23..5fd9930da4463 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -1270,7 +1270,32 @@ def aggregate(self, func, *args, **kwargs):
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([0, 1, 5, 2, 8])
+
+ To get an instance of :class:`~pandas.core.window.rolling.Window` we need
+ to pass the parameter `win_type`.
+
+ >>> type(ser.rolling(2, win_type='gaussian'))
+ <class 'pandas.core.window.rolling.Window'>
+
+ In order to use the `SciPy` Gaussian window we need to provide the parameters
+ `M` and `std`. The parameter `M` corresponds to 2 in our example.
+ We pass the second parameter `std` as a parameter of the following method
+ (`sum` in this case):
+
+ >>> ser.rolling(2, win_type='gaussian').sum(std=3)
+ 0 NaN
+ 1 0.986207
+ 2 5.917243
+ 3 6.903450
+ 4 9.862071
+ dtype: float64
+ """
+ ),
window_method="rolling",
aggregation_description="weighted window sum",
agg_method="sum",
@@ -1295,7 +1320,31 @@ def sum(self, numeric_only: bool = False, **kwargs):
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([0, 1, 5, 2, 8])
+
+ To get an instance of :class:`~pandas.core.window.rolling.Window` we need
+ to pass the parameter `win_type`.
+
+ >>> type(ser.rolling(2, win_type='gaussian'))
+ <class 'pandas.core.window.rolling.Window'>
+
+ In order to use the `SciPy` Gaussian window we need to provide the parameters
+ `M` and `std`. The parameter `M` corresponds to 2 in our example.
+ We pass the second parameter `std` as a parameter of the following method:
+
+ >>> ser.rolling(2, win_type='gaussian').mean(std=3)
+ 0 NaN
+ 1 0.5
+ 2 3.0
+ 3 3.5
+ 4 5.0
+ dtype: float64
+ """
+ ),
window_method="rolling",
aggregation_description="weighted window mean",
agg_method="mean",
@@ -1320,7 +1369,31 @@ def mean(self, numeric_only: bool = False, **kwargs):
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([0, 1, 5, 2, 8])
+
+ To get an instance of :class:`~pandas.core.window.rolling.Window` we need
+ to pass the parameter `win_type`.
+
+ >>> type(ser.rolling(2, win_type='gaussian'))
+ <class 'pandas.core.window.rolling.Window'>
+
+ In order to use the `SciPy` Gaussian window we need to provide the parameters
+ `M` and `std`. The parameter `M` corresponds to 2 in our example.
+ We pass the second parameter `std` as a parameter of the following method:
+
+ >>> ser.rolling(2, win_type='gaussian').var(std=3)
+ 0 NaN
+ 1 0.5
+ 2 8.0
+ 3 4.5
+ 4 18.0
+ dtype: float64
+ """
+ ),
window_method="rolling",
aggregation_description="weighted window variance",
agg_method="var",
@@ -1338,7 +1411,31 @@ def var(self, ddof: int = 1, numeric_only: bool = False, **kwargs):
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Examples"),
+ dedent(
+ """\
+ >>> ser = pd.Series([0, 1, 5, 2, 8])
+
+ To get an instance of :class:`~pandas.core.window.rolling.Window` we need
+ to pass the parameter `win_type`.
+
+ >>> type(ser.rolling(2, win_type='gaussian'))
+ <class 'pandas.core.window.rolling.Window'>
+
+ In order to use the `SciPy` Gaussian window we need to provide the parameters
+ `M` and `std`. The parameter `M` corresponds to 2 in our example.
+ We pass the second parameter `std` as a parameter of the following method:
+
+ >>> ser.rolling(2, win_type='gaussian').std(std=3)
+ 0 NaN
+ 1 0.707107
+ 2 2.828427
+ 3 2.121320
+ 4 4.242641
+ dtype: float64
+ """
+ ),
window_method="rolling",
aggregation_description="weighted window standard deviation",
agg_method="std",
| - [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Towards https://github.com/pandas-dev/pandas/issues/37875
I could not build the HTML for each method, so I'm not sure everything looks right.
I'll use `/preview` here. | https://api.github.com/repos/pandas-dev/pandas/pulls/53982 | 2023-07-03T14:48:29Z | 2023-07-05T08:07:14Z | 2023-07-05T08:07:14Z | 2023-07-05T08:08:26Z |
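The docstring examples above lean on SciPy's Gaussian window weights. As a sanity check, a minimal stand-alone sketch (assuming SciPy's definition `w[n] = exp(-0.5 * ((n - (M - 1) / 2) / std) ** 2)`, with hypothetical helper names) reproduces the same numbers without pandas or SciPy:

```python
import math

def gaussian_weights(M, std):
    # SciPy-style symmetric Gaussian window (assumed definition):
    # w[n] = exp(-0.5 * ((n - (M - 1) / 2) / std) ** 2)
    return [math.exp(-0.5 * ((n - (M - 1) / 2.0) / std) ** 2) for n in range(M)]

def weighted_rolling(values, M, std, kind="sum"):
    # Weighted rolling aggregation over trailing windows of length M.
    w = gaussian_weights(M, std)
    out = []
    for i in range(len(values)):
        if i + 1 < M:
            out.append(float("nan"))  # not enough observations yet
            continue
        window = values[i - M + 1 : i + 1]
        total = sum(wi * vi for wi, vi in zip(w, window))
        # "sum" is the plain weighted sum; "mean" normalizes by the weights.
        out.append(total if kind == "sum" else total / sum(w))
    return out

print([round(x, 6) for x in weighted_rolling([0, 1, 5, 2, 8], 2, 3)[1:]])
```

With `M=2` and `std=3` both weights equal `exp(-1/72) ≈ 0.986207`, which is why the rolling sum of the window `[0, 1]` comes out as `0.986207` in the docstring, while the rolling mean (weighted sum divided by the weight total) gives `0.5`.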
Test CoW for multiple Python versions | diff --git a/.github/workflows/unit-tests.yml b/.github/workflows/unit-tests.yml
index d608654b510d1..600986d3297a9 100644
--- a/.github/workflows/unit-tests.yml
+++ b/.github/workflows/unit-tests.yml
@@ -57,7 +57,15 @@ jobs:
# Also install zh_CN (its encoding is gb2312) but do not activate it.
# It will be temporarily activated during tests with locale.setlocale
extra_loc: "zh_CN"
- - name: "Copy-on-Write"
+ - name: "Copy-on-Write 3.9"
+ env_file: actions-39.yaml
+ pattern: "not slow and not network and not single_cpu"
+ pandas_copy_on_write: "1"
+ - name: "Copy-on-Write 3.10"
+ env_file: actions-310.yaml
+ pattern: "not slow and not network and not single_cpu"
+ pandas_copy_on_write: "1"
+ - name: "Copy-on-Write 3.11"
env_file: actions-311.yaml
pattern: "not slow and not network and not single_cpu"
pandas_copy_on_write: "1"
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/53981 | 2023-07-03T14:33:22Z | 2023-07-05T18:46:54Z | 2023-07-05T18:46:54Z | 2023-07-05T18:47:14Z |
Update ecosystem.md | diff --git a/web/pandas/community/ecosystem.md b/web/pandas/community/ecosystem.md
index 957a8d38b204c..fba50faac3e58 100644
--- a/web/pandas/community/ecosystem.md
+++ b/web/pandas/community/ecosystem.md
@@ -210,10 +210,11 @@ or may not be compatible with non-HTML Jupyter output formats.)
See [Options and Settings](https://pandas.pydata.org/docs/user_guide/options.html)
for pandas `display.` settings.
-### [quantopian/qgrid](https://github.com/quantopian/qgrid)
+### [modin-project/modin-spreadsheet](https://github.com/modin-project/modin-spreadsheet)
-qgrid is "an interactive grid for sorting and filtering DataFrames in
-IPython Notebook" built with SlickGrid.
+modin-spreadsheet is an interactive grid for sorting and filtering DataFrames in IPython Notebook.
+It is a fork of qgrid and is actively maintained by the modin project.
+modin-spreadsheet provides similar functionality to qgrid and allows for easy data exploration and manipulation in a tabular format.
### [Spyder](https://www.spyder-ide.org/)
| Hi @lithomas1 ,
This pull request has been raised to update the documentation file - ecosystem.md. As discussed in issue #53451, I have replaced the deprecated qgrid with the modin-spreadsheet in the ecosystem.md file. I am new to open source, kindly let me know if any modifications need to be made. | https://api.github.com/repos/pandas-dev/pandas/pulls/53980 | 2023-07-03T14:14:50Z | 2023-07-06T13:09:41Z | 2023-07-06T13:09:41Z | 2023-07-06T13:09:50Z |
ENH: Don't fragment manager if convert is no-op | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 1d572dbfd5386..8923faf444953 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -480,7 +480,15 @@ def convert(
return [self.copy()] if copy else [self]
if self.ndim != 1 and self.shape[0] != 1:
- return self.split_and_operate(Block.convert, copy=copy, using_cow=using_cow)
+ blocks = self.split_and_operate(
+ Block.convert, copy=copy, using_cow=using_cow
+ )
+ if all(blk.dtype.kind == "O" for blk in blocks):
+ # Avoid fragmenting the block if convert is a no-op
+ if using_cow:
+ return [self.copy(deep=False)]
+ return [self.copy()] if copy else [self]
+ return blocks
values = self.values
if values.ndim == 2:
diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py
index 9256df72cdf7b..1846ac24e9cc5 100644
--- a/pandas/tests/frame/methods/test_replace.py
+++ b/pandas/tests/frame/methods/test_replace.py
@@ -1592,3 +1592,10 @@ def test_replace_categorical_no_replacement(self):
result = df.replace(to_replace=[".", "def"], value=["_", None])
tm.assert_frame_equal(result, expected)
+
+ def test_replace_object_splitting(self):
+ # GH#53977
+ df = DataFrame({"a": ["a"], "b": "b"})
+ assert len(df._mgr.blocks) == 1
+ df.replace(to_replace=r"^\s*$", value="", inplace=True, regex=True)
+ assert len(df._mgr.blocks) == 1
| - [x] closes #53853 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
cc @jbrockmendel thoughts here? This isn't ideal but since this is probably the code path that's taken 99% of the time this might make sense? | https://api.github.com/repos/pandas-dev/pandas/pulls/53977 | 2023-07-02T21:49:47Z | 2023-07-03T20:07:39Z | 2023-07-03T20:07:39Z | 2023-07-03T20:07:42Z |
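One way to picture the shortcut in this PR: split the block column by column, attempt the conversion, and if every piece comes back unchanged as object dtype (i.e. the whole convert was a no-op), hand back the original block instead of one fragment per column. A minimal stand-alone sketch of that idea — hypothetical names, not the actual pandas internals:

```python
def try_convert(col):
    # Hypothetical per-column conversion: digit strings become ints;
    # anything else is returned as the *same* object (a no-op).
    if all(isinstance(v, str) and v.isdigit() for v in col):
        return [int(v) for v in col]
    return col

def convert_block(columns):
    converted = [try_convert(c) for c in columns]
    if all(new is old for new, old in zip(converted, columns)):
        # Every column was a no-op -> keep the single original block
        # rather than fragmenting into one block per column.
        return [columns]
    return [[c] for c in converted]  # genuinely mixed result: split

print(len(convert_block([["a", "b"], ["x", "y"]])))  # 1 -> not fragmented
```

The test added in the diff checks exactly this observable effect: after an `inplace` regex replace that matches nothing, `df._mgr.blocks` still has length 1.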
DOC: Fixes to the docs style | diff --git a/doc/source/advanced.rst b/doc/source/advanced.rst
index 822bc7407a0c7..483921393f3ea 100644
--- a/doc/source/advanced.rst
+++ b/doc/source/advanced.rst
@@ -563,7 +563,8 @@ they need to be sorted. As with any index, you can use :meth:`~DataFrame.sort_in
.. ipython:: python
- import random; random.shuffle(tuples)
+ import random
+ random.shuffle(tuples)
s = pd.Series(np.random.randn(8), index=pd.MultiIndex.from_tuples(tuples))
s
s.sort_index()
diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst
index e57fd0ac0f96d..721e032b8bb92 100644
--- a/doc/source/categorical.rst
+++ b/doc/source/categorical.rst
@@ -782,7 +782,7 @@ Setting values by assigning categorical data will also check that the `categorie
df
try:
df.loc["j":"k", "cats"] = pd.Categorical(["b", "b"],
- categories=["a", "b", "c"])
+ categories=["a", "b", "c"])
except ValueError as e:
print("ValueError:", str(e))
diff --git a/doc/source/extending.rst b/doc/source/extending.rst
index 7046981a3a364..472671ad22273 100644
--- a/doc/source/extending.rst
+++ b/doc/source/extending.rst
@@ -160,6 +160,8 @@ your ``MyExtensionArray`` class, as follows:
.. code-block:: python
+ from pandas.core.arrays import ExtensionArray, ExtensionScalarOpsMixin
+
class MyExtensionArray(ExtensionArray, ExtensionScalarOpsMixin):
pass
@@ -269,7 +271,7 @@ Below example shows how to define ``SubclassedSeries`` and ``SubclassedDataFrame
.. code-block:: python
- class SubclassedSeries(Series):
+ class SubclassedSeries(pd.Series):
@property
def _constructor(self):
@@ -280,7 +282,7 @@ Below example shows how to define ``SubclassedSeries`` and ``SubclassedDataFrame
return SubclassedDataFrame
- class SubclassedDataFrame(DataFrame):
+ class SubclassedDataFrame(pd.DataFrame):
@property
def _constructor(self):
@@ -342,7 +344,7 @@ Below is an example to define two original properties, "internal_cache" as a tem
.. code-block:: python
- class SubclassedDataFrame2(DataFrame):
+ class SubclassedDataFrame2(pd.DataFrame):
# temporary properties
_internal_names = pd.DataFrame._internal_names + ['internal_cache']
diff --git a/doc/source/merging.rst b/doc/source/merging.rst
index af767d7687749..4cd262cd03f00 100644
--- a/doc/source/merging.rst
+++ b/doc/source/merging.rst
@@ -5,6 +5,7 @@
.. ipython:: python
:suppress:
+ from matplotlib import pyplot as plt
import pandas.util._doctools as doctools
p = doctools.TablePlotter()
diff --git a/setup.cfg b/setup.cfg
index bfffe56e088eb..73a3a6c136b53 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -32,10 +32,18 @@ exclude =
[flake8-rst]
bootstrap =
- import pandas as pd
import numpy as np
+ import pandas as pd
+ np # avoiding error when importing again numpy or pandas
+ pd # (in some cases we want to do it to show users)
ignore = E402, # module level import not at top of file
W503, # line break before binary operator
+ # Classes/functions in different blocks can generate those errors
+ E302, # expected 2 blank lines, found 0
+ E305, # expected 2 blank lines after class or function definition, found 0
+ # We use semicolon at the end to avoid displaying plot objects
+ E703, # statement ends with a semicolon
+
exclude =
doc/source/whatsnew/v0.7.0.rst
doc/source/whatsnew/v0.7.3.rst
@@ -69,23 +77,16 @@ exclude =
doc/source/whatsnew/v0.23.2.rst
doc/source/whatsnew/v0.24.0.rst
doc/source/10min.rst
- doc/source/advanced.rst
doc/source/basics.rst
- doc/source/categorical.rst
doc/source/contributing_docstring.rst
- doc/source/contributing.rst
doc/source/dsintro.rst
doc/source/enhancingperf.rst
- doc/source/extending.rst
doc/source/groupby.rst
doc/source/indexing.rst
doc/source/merging.rst
doc/source/missing_data.rst
doc/source/options.rst
doc/source/release.rst
- doc/source/comparison_with_sas.rst
- doc/source/comparison_with_sql.rst
- doc/source/comparison_with_stata.rst
doc/source/reshaping.rst
doc/source/visualization.rst
| Fixing flake8 errors in documentation pages that are mostly correct. Also changing `setup.cfg` to avoid false positives.
- [X] refs #24173
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24182 | 2018-12-09T19:09:35Z | 2018-12-09T20:09:12Z | 2018-12-09T20:09:12Z | 2018-12-09T20:13:46Z |
DOC: Link to dev version of contributing guide from README.md | diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 95729f845ff5c..21df1a3aacd59 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -1,24 +1,23 @@
-Contributing to pandas
-======================
+# Contributing to pandas
Whether you are a novice or experienced software developer, all contributions and suggestions are welcome!
-Our main contribution docs can be found [here](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst), but if you do not want to read it in its entirety, we will summarize the main ways in which you can contribute and point to relevant places in the docs for further information.
+Our main contributing guide can be found [in this repo](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst) or [on the website](https://pandas-docs.github.io/pandas-docs-travis/contributing.html). If you do not want to read it in its entirety, we will summarize the main ways in which you can contribute and point to relevant sections of that document for further information.
+
+## Getting Started
-Getting Started
----------------
If you are looking to contribute to the *pandas* codebase, the best place to start is the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues). This is also a great place for filing bug reports and making suggestions for ways in which we can improve the code and documentation.
-If you have additional questions, feel free to ask them on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). Further information can also be found in our [Getting Started](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#where-to-start) section of our main contribution doc.
+If you have additional questions, feel free to ask them on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). Further information can also be found in the "[Where to start?](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#where-to-start)" section.
+
+## Filing Issues
+
+If you notice a bug in the code or documentation, or have suggestions for how we can improve either, feel free to create an issue on the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) using [GitHub's "issue" form](https://github.com/pandas-dev/pandas/issues/new). The form contains some questions that will help us best address your issue. For more information regarding how to file issues against *pandas*, please refer to the "[Bug reports and enhancement requests](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#bug-reports-and-enhancement-requests)" section.
-Filing Issues
--------------
-If you notice a bug in the code or in docs or have suggestions for how we can improve either, feel free to create an issue on the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) using [GitHub's "issue" form](https://github.com/pandas-dev/pandas/issues/new). The form contains some questions that will help us best address your issue. For more information regarding how to file issues against *pandas*, please refer to the [Bug reports and enhancement requests](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#bug-reports-and-enhancement-requests) section of our main contribution doc.
+## Contributing to the Codebase
-Contributing to the Codebase
-----------------------------
-The code is hosted on [GitHub](https://www.github.com/pandas-dev/pandas), so you will need to use [Git](http://git-scm.com/) to clone the project and make changes to the codebase. Once you have obtained a copy of the code, you should create a development environment that is separate from your existing Python environment so that you can make and test changes without compromising your own work environment. For more information, please refer to our [Working with the code](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#working-with-the-code) section of our main contribution docs.
+The code is hosted on [GitHub](https://www.github.com/pandas-dev/pandas), so you will need to use [Git](http://git-scm.com/) to clone the project and make changes to the codebase. Once you have obtained a copy of the code, you should create a development environment that is separate from your existing Python environment so that you can make and test changes without compromising your own work environment. For more information, please refer to the "[Working with the code](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#working-with-the-code)" section.
-Before submitting your changes for review, make sure to check that your changes do not break any tests. You can find more information about our test suites can be found [here](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#test-driven-development-code-writing). We also have guidelines regarding coding style that will be enforced during testing. Details about coding style can be found [here](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#code-standards).
+Before submitting your changes for review, make sure to check that your changes do not break any tests. You can find more information about our test suites in the "[Test-driven development/code writing](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#test-driven-development-code-writing)" section. We also have guidelines regarding coding style that will be enforced during testing, which can be found in the "[Code standards](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#code-standards)" section.
-Once your changes are ready to be submitted, make sure to push your changes to GitHub before creating a pull request. Details about how to do that can be found in the [Contributing your changes to pandas](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#contributing-your-changes-to-pandas) section of our main contribution docs. We will review your changes, and you will most likely be asked to make additional changes before it is finally ready to merge. However, once it's ready, we will merge it, and you will have successfully contributed to the codebase!
+Once your changes are ready to be submitted, make sure to push your changes to GitHub before creating a pull request. Details about how to do that can be found in the "[Contributing your changes to pandas](https://github.com/pandas-dev/pandas/blob/master/doc/source/contributing.rst#contributing-your-changes-to-pandas)" section. We will review your changes, and you will most likely be asked to make additional changes before it is finally ready to merge. However, once it's ready, we will merge it, and you will have successfully contributed to the codebase!
diff --git a/README.md b/README.md
index 1993b1ecb9dc1..be45faf06187e 100644
--- a/README.md
+++ b/README.md
@@ -231,9 +231,9 @@ Most development discussion is taking place on github in this repo. Further, the
All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome.
-A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)**
+A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas-docs.github.io/pandas-docs-travis/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub.
-If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
+If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
diff --git a/doc/README.rst b/doc/README.rst
index a11ed8d9d03e3..5423e7419d03b 100644
--- a/doc/README.rst
+++ b/doc/README.rst
@@ -1 +1 @@
-See `contributing.rst <https://pandas.pydata.org/pandas-docs/stable/contributing.html>`_ in this repo.
+See `contributing.rst <https://pandas-docs.github.io/pandas-docs-travis/contributing.html>`_ in this repo.
| README.md should link to the development version of the contributing guide, since the stable version may not match current practice for `master`. For example, see #24069 .
I'm not sure this PR is a good idea, but I thought it was easiest to open it and get comments directly.
| https://api.github.com/repos/pandas-dev/pandas/pulls/24172 | 2018-12-09T17:20:38Z | 2018-12-17T12:56:13Z | 2018-12-17T12:56:13Z | 2018-12-17T15:06:11Z |
remove enum import for PY2 compat, xref #22802 | diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index b4862a5f3b02f..472ac0ee6d45c 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -1,5 +1,4 @@
# -*- coding: utf-8 -*-
-import enum
import warnings
from cpython cimport (PyObject_RichCompareBool, PyObject_RichCompare,
@@ -71,8 +70,7 @@ cdef inline object create_timestamp_from_ts(int64_t value,
return ts_base
-@enum.unique
-class RoundTo(enum.Enum):
+class RoundTo(object):
"""
enumeration defining the available rounding modes
@@ -105,11 +103,25 @@ class RoundTo(enum.Enum):
.. [6] "Round half to even"
https://en.wikipedia.org/wiki/Rounding#Round_half_to_even
"""
- MINUS_INFTY = 0
- PLUS_INFTY = 1
- NEAREST_HALF_EVEN = 2
- NEAREST_HALF_PLUS_INFTY = 3
- NEAREST_HALF_MINUS_INFTY = 4
+ @property
+ def MINUS_INFTY(self):
+ return 0
+
+ @property
+ def PLUS_INFTY(self):
+ return 1
+
+ @property
+ def NEAREST_HALF_EVEN(self):
+ return 2
+
+ @property
+ def NEAREST_HALF_PLUS_INFTY(self):
+ return 3
+
+ @property
+ def NEAREST_HALF_MINUS_INFTY(self):
+ return 4
cdef inline _npdivmod(x1, x2):
@@ -152,20 +164,17 @@ def round_nsint64(values, mode, freq):
:obj:`ndarray`
"""
- if not isinstance(mode, RoundTo):
- raise ValueError('mode should be a RoundTo member')
-
unit = to_offset(freq).nanos
- if mode is RoundTo.MINUS_INFTY:
+ if mode == RoundTo.MINUS_INFTY:
return _floor_int64(values, unit)
- elif mode is RoundTo.PLUS_INFTY:
+ elif mode == RoundTo.PLUS_INFTY:
return _ceil_int64(values, unit)
- elif mode is RoundTo.NEAREST_HALF_MINUS_INFTY:
+ elif mode == RoundTo.NEAREST_HALF_MINUS_INFTY:
return _rounddown_int64(values, unit)
- elif mode is RoundTo.NEAREST_HALF_PLUS_INFTY:
+ elif mode == RoundTo.NEAREST_HALF_PLUS_INFTY:
return _roundup_int64(values, unit)
- elif mode is RoundTo.NEAREST_HALF_EVEN:
+ elif mode == RoundTo.NEAREST_HALF_EVEN:
# for odd unit there is no need of a tie break
if unit % 2:
return _rounddown_int64(values, unit)
@@ -179,8 +188,8 @@ def round_nsint64(values, mode, freq):
# if/elif above should catch all rounding modes defined in enum 'RoundTo':
# if flow of control arrives here, it is a bug
- raise AssertionError("round_nsint64 called with an unrecognized "
- "rounding mode")
+ raise ValueError("round_nsint64 called with an unrecognized "
+ "rounding mode")
# This is PITA. Because we inherit from datetime, which has very specific
| xref #22802 | https://api.github.com/repos/pandas-dev/pandas/pulls/24170 | 2018-12-09T15:46:56Z | 2018-12-11T15:11:17Z | 2018-12-11T15:11:17Z | 2018-12-12T20:42:07Z |
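The enum-free replacement in this diff works because accessing a `property` on the *class* (rather than on an instance) returns the property object itself — a distinct object per attribute — so `mode == RoundTo.MINUS_INFTY` falls back to identity comparison and behaves like the enum membership check it replaces. A small sketch of the trick:

```python
class RoundTo(object):
    # On the *class*, each attribute evaluates to the property object
    # itself, which acts as a unique sentinel value.
    @property
    def MINUS_INFTY(self):
        return 0

    @property
    def PLUS_INFTY(self):
        return 1

mode = RoundTo.MINUS_INFTY          # the property object, not the int 0
assert mode == RoundTo.MINUS_INFTY  # same object -> compares equal
assert mode != RoundTo.PLUS_INFTY   # different object -> not equal
assert RoundTo().MINUS_INFTY == 0   # on an instance it yields the value
```

This keeps the `RoundTo.NAME` call sites unchanged while dropping the Python-3-only `enum` import, at the cost of losing the explicit `isinstance(mode, RoundTo)` validation the enum version had.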
DOC: Move 0.23.5 release notes | diff --git a/doc/source/whatsnew/v0.23.5.txt b/doc/source/whatsnew/v0.23.5.txt
deleted file mode 100644
index 8f4b1a13c2e9d..0000000000000
--- a/doc/source/whatsnew/v0.23.5.txt
+++ /dev/null
@@ -1,54 +0,0 @@
-.. _whatsnew_0235:
-
-v0.23.5 (TBD 0, 2018)
----------------------
-
-This is a minor bug-fix release in the 0.23.x series and includes some small regression fixes
-and bug fixes. We recommend that all users upgrade to this version.
-
-.. warning::
-
- Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
-
-.. contents:: What's new in v0.23.5
- :local:
- :backlinks: none
-
-.. _whatsnew_0235.fixed_regressions:
-
-Fixed Regressions
-~~~~~~~~~~~~~~~~~
-
-- Constructing a DataFrame with an index argument that wasn't already an
- instance of :class:`~pandas.core.Index` was broken in `4efb39f
- <https://github.com/pandas-dev/pandas/commit/4efb39f01f5880122fa38d91e12d217ef70fad9e>`_ (:issue:`22227`).
-- Calling :meth:`DataFrameGroupBy.rank` and :meth:`SeriesGroupBy.rank` with empty groups
- and ``pct=True`` was raising a ``ZeroDivisionError`` due to `c1068d9
- <https://github.com/pandas-dev/pandas/commit/c1068d9d242c22cb2199156f6fb82eb5759178ae>`_ (:issue:`22519`)
--
--
-
-
-Development
-~~~~~~~~~~~
-- The minimum required pytest version has been increased to 3.6 (:issue:`22319`)
-
-.. _whatsnew_0235.bug_fixes:
-
-Bug Fixes
-~~~~~~~~~
-
-**Groupby/Resample/Rolling**
-
-- Bug in :meth:`DataFrame.resample` when resampling ``NaT`` in ``TimeDeltaIndex`` (:issue:`13223`).
--
-
-**Missing**
-
--
--
-
-**I/O**
-
-- Bug in :func:`read_csv` that caused it to raise ``OverflowError`` when trying to use 'inf' as ``na_value`` with integer index column (:issue:`17128`)
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 0b2b526dfe9e7..b1780181d8a9c 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -410,6 +410,8 @@ If installed, we now require:
+-----------------+-----------------+----------+
| xlrd | 1.0.0 | |
+-----------------+-----------------+----------+
+| pytest (dev) | 3.6 | |
++-----------------+-----------------+----------+
Additionally we no longer depend on `feather-format` for feather based storage
and replaced it with references to `pyarrow` (:issue:`21639` and :issue:`23053`).
@@ -1504,6 +1506,7 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form
- Bug in :meth:`read_excel()` in which ``usecols`` was not being validated for proper column names when passed in as a string (:issue:`20480`)
- Bug in :meth:`DataFrame.to_dict` when the resulting dict contains non-Python scalars in the case of numeric data (:issue:`23753`)
- :func:`DataFrame.to_string()`, :func:`DataFrame.to_html()`, :func:`DataFrame.to_latex()` will correctly format output when a string is passed as the ``float_format`` argument (:issue:`21625`, :issue:`22270`)
+- Bug in :func:`read_csv` that caused it to raise ``OverflowError`` when trying to use 'inf' as ``na_value`` with integer index column (:issue:`17128`)
Plotting
^^^^^^^^
@@ -1530,6 +1533,8 @@ Groupby/Resample/Rolling
- Bug in :meth:`pandas.core.groupby.DataFrameGroupBy.transform` which caused missing values when the input function can accept a :class:`DataFrame` but renames it (:issue:`23455`).
- Bug in :func:`pandas.core.groupby.GroupBy.nth` where column order was not always preserved (:issue:`20760`)
- Bug in :meth:`pandas.core.groupby.DataFrameGroupBy.rank` with ``method='dense'`` and ``pct=True`` when a group has only one member would raise a ``ZeroDivisionError`` (:issue:`23666`).
+- Calling :meth:`DataFrameGroupBy.rank` and :meth:`SeriesGroupBy.rank` with empty groups and ``pct=True`` was raising a ``ZeroDivisionError`` due to `c1068d9 <https://github.com/pandas-dev/pandas/commit/c1068d9d242c22cb2199156f6fb82eb5759178ae>`_ (:issue:`22519`)
+- Bug in :meth:`DataFrame.resample` when resampling ``NaT`` in ``TimeDeltaIndex`` (:issue:`13223`).
Reshaping
^^^^^^^^^
@@ -1594,6 +1599,7 @@ Other
- Checking PEP 3141 numbers in :func:`~pandas.api.types.is_scalar` function returns ``True`` (:issue:`22903`)
- Bug in :meth:`DataFrame.combine_first` in which column types were unexpectedly converted to float (:issue:`20699`)
- Bug where C variables were declared with external linkage causing import errors if certain other C libraries were imported before Pandas. (:issue:`24113`)
+- Constructing a DataFrame with an index argument that wasn't already an instance of :class:`~pandas.core.Index` was broken in `4efb39f <https://github.com/pandas-dev/pandas/commit/4efb39f01f5880122fa38d91e12d217ef70fad9e>`_ (:issue:`22227`).
.. _whatsnew_0.24.0.contributors:
| Closes https://github.com/pandas-dev/pandas/issues/22923 | https://api.github.com/repos/pandas-dev/pandas/pulls/24165 | 2018-12-08T20:56:33Z | 2018-12-08T22:28:33Z | 2018-12-08T22:28:33Z | 2018-12-08T22:31:38Z |
API: Revert breaking `.values` changes | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 0b2b526dfe9e7..5698bbb5cfab9 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -219,17 +219,21 @@ Previously, these would be cast to a NumPy array with object dtype. In general,
this should result in better performance when storing an array of intervals or periods
in a :class:`Series` or column of a :class:`DataFrame`.
-Note that the ``.values`` of a ``Series`` containing one of these types is no longer a NumPy
-array, but rather an ``ExtensionArray``:
+Use :attr:`Series.array` to extract the underlying array of intervals or periods
+from the ``Series``::
.. ipython:: python
- ser.values
- pser.values
+ ser.array
+ pser.array
-This is the same behavior as ``Series.values`` for categorical data. See
-:ref:`whatsnew_0240.api_breaking.interval_values` for more.
+.. warning::
+ For backwards compatibility, :attr:`Series.values` continues to return
+ a NumPy array of objects for Interval and Period data. We recommend
+ using :attr:`Series.array` when you need the array of data stored in the
+ ``Series``, and :meth:`Series.to_numpy` when you know you need a NumPy array.
+ See :ref:`basics.dtypes` and :ref:`dsintro.attrs` for more.
.. _whatsnew_0240.enhancements.styler_pipe:
@@ -505,44 +509,6 @@ New Behavior on Windows:
...: print(f.read())
b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
-.. _whatsnew_0240.api_breaking.interval_values:
-
-``IntervalIndex.values`` is now an ``IntervalArray``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The :attr:`~Interval.values` attribute of an :class:`IntervalIndex` now returns an
-``IntervalArray``, rather than a NumPy array of :class:`Interval` objects (:issue:`19453`).
-
-Previous Behavior:
-
-.. code-block:: ipython
-
- In [1]: idx = pd.interval_range(0, 4)
-
- In [2]: idx.values
- Out[2]:
- array([Interval(0, 1, closed='right'), Interval(1, 2, closed='right'),
- Interval(2, 3, closed='right'), Interval(3, 4, closed='right')],
- dtype=object)
-
-New Behavior:
-
-.. ipython:: python
-
- idx = pd.interval_range(0, 4)
- idx.values
-
-This mirrors ``CategoricalIndex.values``, which returns a ``Categorical``.
-
-For situations where you need an ``ndarray`` of ``Interval`` objects, use
-:meth:`numpy.asarray`.
-
-.. ipython:: python
-
- np.asarray(idx)
- idx.values.astype(object)
-
-
.. _whatsnew_0240.api.timezone_offset_parsing:
Parsing Datetime Strings with Timezone Offsets
diff --git a/pandas/core/base.py b/pandas/core/base.py
index e224b6a50d332..1d2a0a2544dbc 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -913,7 +913,7 @@ def _ndarray_values(self):
- categorical -> codes
"""
if is_extension_array_dtype(self):
- return self.values._ndarray_values
+ return self.array._ndarray_values
return self.values
@property
@@ -1307,12 +1307,12 @@ def memory_usage(self, deep=False):
Memory usage does not include memory consumed by elements that
are not components of the array if deep=False or if used on PyPy
"""
- if hasattr(self.values, 'memory_usage'):
- return self.values.memory_usage(deep=deep)
+ if hasattr(self.array, 'memory_usage'):
+ return self.array.memory_usage(deep=deep)
- v = self.values.nbytes
+ v = self.array.nbytes
if deep and is_object_dtype(self) and not PYPY:
- v += lib.memory_usage_of_objects(self.values)
+ v += lib.memory_usage_of_objects(self.array)
return v
@Substitution(
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index c30e64fcf04da..ee5f0820a7b3e 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -326,6 +326,15 @@ def nbytes(self):
# for TZ-aware
return self._ndarray_values.nbytes
+ def memory_usage(self, deep=False):
+ # TODO: Remove this when we have a DatetimeTZArray
+ # Necessary to avoid recursion error since DTI._values is a DTI
+ # for TZ-aware
+ result = self._ndarray_values.nbytes
+ # include our engine hashtable
+ result += self._engine.sizeof(deep=deep)
+ return result
+
@cache_readonly
def _is_dates_only(self):
"""Return a boolean if we are only dates (and don't have a timezone)"""
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 51c47a81f8e2f..d37da14ab5d2c 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -21,9 +21,9 @@
_NS_DTYPE, _TD_DTYPE, ensure_platform_int, is_bool_dtype, is_categorical,
is_categorical_dtype, is_datetime64_dtype, is_datetime64tz_dtype,
is_dtype_equal, is_extension_array_dtype, is_extension_type,
- is_float_dtype, is_integer, is_integer_dtype, is_list_like,
- is_numeric_v_string_like, is_object_dtype, is_re, is_re_compilable,
- is_sparse, is_timedelta64_dtype, pandas_dtype)
+ is_float_dtype, is_integer, is_integer_dtype, is_interval_dtype,
+ is_list_like, is_numeric_v_string_like, is_object_dtype, is_period_dtype,
+ is_re, is_re_compilable, is_sparse, is_timedelta64_dtype, pandas_dtype)
import pandas.core.dtypes.concat as _concat
from pandas.core.dtypes.dtypes import (
CategoricalDtype, DatetimeTZDtype, ExtensionDtype, PandasExtensionDtype)
@@ -1996,6 +1996,18 @@ def _unstack(self, unstacker_func, new_columns, n_rows, fill_value):
return blocks, mask
+class ObjectValuesExtensionBlock(ExtensionBlock):
+ """
+ Block providing backwards-compatibility for `.values`.
+
+ Used by PeriodArray and IntervalArray to ensure that
+ Series[T].values is an ndarray of objects.
+ """
+
+ def external_values(self, dtype=None):
+ return self.values.astype(object)
+
+
class NumericBlock(Block):
__slots__ = ()
is_numeric = True
@@ -3017,6 +3029,8 @@ def get_block_type(values, dtype=None):
if is_categorical(values):
cls = CategoricalBlock
+ elif is_interval_dtype(dtype) or is_period_dtype(dtype):
+ cls = ObjectValuesExtensionBlock
elif is_extension_array_dtype(values):
cls = ExtensionBlock
elif issubclass(vtype, np.floating):
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 5f9860ce98b11..f1372a1fe2f51 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -30,8 +30,9 @@
from pandas.io.formats.printing import pprint_thing
from .blocks import (
- Block, CategoricalBlock, DatetimeTZBlock, ExtensionBlock, _extend_blocks,
- _merge_blocks, _safe_reshape, get_block_type, make_block)
+ Block, CategoricalBlock, DatetimeTZBlock, ExtensionBlock,
+ ObjectValuesExtensionBlock, _extend_blocks, _merge_blocks, _safe_reshape,
+ get_block_type, make_block)
from .concat import ( # all for concatenate_block_managers
combine_concat_plans, concatenate_join_units, get_mgr_concatenation_plan,
is_uniform_join_units)
@@ -1752,6 +1753,14 @@ def form_blocks(arrays, names, axes):
blocks.extend(external_blocks)
+ if len(items_dict['ObjectValuesExtensionBlock']):
+ external_blocks = [
+ make_block(array, klass=ObjectValuesExtensionBlock, placement=[i])
+ for i, _, array in items_dict['ObjectValuesExtensionBlock']
+ ]
+
+ blocks.extend(external_blocks)
+
if len(extra_locs):
shape = (len(extra_locs),) + tuple(len(x) for x in axes[1:])
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index ff4f9b7847019..2bd7e2c0b9b82 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -433,7 +433,7 @@ def _unstack_extension_series(series, level, fill_value):
level=level, fill_value=-1).get_result()
out = []
- values = series.values
+ values = series.array
for col, indices in result.iteritems():
out.append(Series(values.take(indices.values,
diff --git a/pandas/tests/extension/base/reshaping.py b/pandas/tests/extension/base/reshaping.py
index 9904fcd362818..42e481d974295 100644
--- a/pandas/tests/extension/base/reshaping.py
+++ b/pandas/tests/extension/base/reshaping.py
@@ -231,7 +231,7 @@ def test_unstack(self, data, index, obj):
for level in combinations:
result = ser.unstack(level=level)
- assert all(isinstance(result[col].values, type(data))
+ assert all(isinstance(result[col].array, type(data))
for col in result.columns)
expected = ser.astype(object).unstack(level=level)
result = result.astype(object)
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index 79b1bc10b9f4b..2bc009c5a2fc8 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -492,3 +492,13 @@ def test_is_homogeneous_type(self):
assert Series()._is_homogeneous_type
assert Series([1, 2])._is_homogeneous_type
assert Series(pd.Categorical([1, 2]))._is_homogeneous_type
+
+ @pytest.mark.parametrize("data", [
+ pd.period_range("2000", periods=4),
+ pd.IntervalIndex.from_breaks([1, 2, 3, 4])
+ ])
+ def test_values_compatibility(self, data):
+ # https://github.com/pandas-dev/pandas/issues/23995
+ result = pd.Series(data).values
+ expected = np.array(data.astype(object))
+ tm.assert_numpy_array_equal(result, expected)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 7a1828149cd87..faed4ccebd96b 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -1340,11 +1340,11 @@ def assert_series_equal(left, right, check_dtype=True,
assert_numpy_array_equal(left.get_values(), right.get_values(),
check_dtype=check_dtype)
elif is_interval_dtype(left) or is_interval_dtype(right):
- assert_interval_array_equal(left.values, right.values)
+ assert_interval_array_equal(left.array, right.array)
elif (is_extension_array_dtype(left) and not is_categorical_dtype(left) and
is_extension_array_dtype(right) and not is_categorical_dtype(right)):
- return assert_extension_array_equal(left.values, right.values)
+ return assert_extension_array_equal(left.array, right.array)
else:
_testing.assert_almost_equal(left.get_values(), right.get_values(),
| User-facing change: `Series[period].values` and `Series[interval].values`
continue to be ndarrays of objects. We recommend ``.array`` instead.
There are a handful of related places in pandas where we assumed that
``Series[EA].values`` was an EA.
Part of #23995 | https://api.github.com/repos/pandas-dev/pandas/pulls/24163 | 2018-12-08T13:14:56Z | 2018-12-09T12:11:12Z | 2018-12-09T12:11:12Z | 2018-12-09T12:11:15Z |
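The split described in the PR above can be sketched against a post-0.24 pandas install: `.array` hands back the backing extension array, while coercing to NumPy yields the object ndarray of scalars that the revert restores for `.values`. The exact behaviour of `.values` itself has shifted across later pandas versions, so this sketch only asserts on `.array` and `np.asarray`:

```python
import numpy as np
import pandas as pd

# A period-dtype Series, one of the two dtypes affected by the revert
ser = pd.Series(pd.period_range("2000", periods=3, freq="D"))

# .array returns the backing ExtensionArray (a PeriodArray here)
arr = ser.array
print(type(arr).__name__)

# Coercing to NumPy produces an object ndarray of Period scalars,
# matching the backwards-compatible behaviour the revert restores
objs = np.asarray(ser)
print(objs.dtype)
```

`Series.to_numpy()` is the recommended spelling when a NumPy array is explicitly wanted.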
DOC: Ignoring F821 in developer.rst, that are breaking the build | diff --git a/doc/source/developer.rst b/doc/source/developer.rst
index 2930ac0f20ed2..ba6cec93d02e4 100644
--- a/doc/source/developer.rst
+++ b/doc/source/developer.rst
@@ -65,8 +65,8 @@ for each column, *including the index columns*. This has JSON form:
.. code-block:: python
# assuming there's at least 3 levels in the index
- index_columns = metadata['index_columns']
- columns = metadata['columns']
+ index_columns = metadata['index_columns'] # noqa: F821
+ columns = metadata['columns'] # noqa: F821
ith_index = 2
assert index_columns[ith_index] == '__index_level_2__'
ith_index_info = columns[-len(index_columns):][ith_index]
| I merged #18201 without a rebase, and I didn't realize it'd break the CI, because of flake8 errors (variable not defined) in an example that is not supposed to run.
Fixing the CI by ignoring the `F821` errors. | https://api.github.com/repos/pandas-dev/pandas/pulls/24160 | 2018-12-08T08:00:43Z | 2018-12-08T18:26:30Z | 2018-12-08T18:26:30Z | 2018-12-08T18:26:30Z |
BUG - anchoring dates for resample with Day(n>1) | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 0b2b526dfe9e7..488971c13508a 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1516,6 +1516,7 @@ Groupby/Resample/Rolling
- Bug in :func:`pandas.core.groupby.GroupBy.first` and :func:`pandas.core.groupby.GroupBy.last` with ``as_index=False`` leading to the loss of timezone information (:issue:`15884`)
- Bug in :meth:`DatetimeIndex.resample` when downsampling across a DST boundary (:issue:`8531`)
+- Bug in date anchoring for :meth:`DatetimeIndex.resample` with offset :class:`Day` when n > 1 (:issue:`24127`)
- Bug where ``ValueError`` is wrongly raised when calling :func:`~pandas.core.groupby.SeriesGroupBy.count` method of a
``SeriesGroupBy`` when the grouping variable only contains NaNs and numpy version < 1.13 (:issue:`21956`).
- Multiple bugs in :func:`pandas.core.Rolling.min` with ``closed='left'`` and a
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index dc1f94c479a37..6d80d747f21b3 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -1587,8 +1587,8 @@ def _get_range_edges(first, last, offset, closed='left', base=0):
is_day = isinstance(offset, Day)
day_nanos = delta_to_nanoseconds(timedelta(1))
- # #1165
- if (is_day and day_nanos % offset.nanos == 0) or not is_day:
+ # #1165 and #24127
+ if (is_day and not offset.nanos % day_nanos) or not is_day:
return _adjust_dates_anchored(first, last, offset,
closed=closed, base=base)
diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py
index 183ccfb5182a2..cb7b419710837 100644
--- a/pandas/tests/groupby/test_timegrouper.py
+++ b/pandas/tests/groupby/test_timegrouper.py
@@ -43,8 +43,8 @@ def test_groupby_with_timegrouper(self):
expected = DataFrame(
{'Quantity': 0},
- index=date_range('20130901 13:00:00',
- '20131205 13:00:00', freq='5D',
+ index=date_range('20130901',
+ '20131205', freq='5D',
name='Date', closed='left'))
expected.iloc[[0, 6, 18], 0] = np.array([24, 6, 9], dtype='int64')
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index b287eb468cd94..69fb92486d523 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -1463,3 +1463,29 @@ def f(data, add_arg):
result = df.groupby("A").resample("D").agg(f, multiplier)
expected = df.groupby("A").resample('D').mean().multiply(multiplier)
assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize('k', [1, 2, 3])
+ @pytest.mark.parametrize('n1, freq1, n2, freq2', [
+ (30, 'S', 0.5, 'Min'),
+ (60, 'S', 1, 'Min'),
+ (3600, 'S', 1, 'H'),
+ (60, 'Min', 1, 'H'),
+ (21600, 'S', 0.25, 'D'),
+ (86400, 'S', 1, 'D'),
+ (43200, 'S', 0.5, 'D'),
+ (1440, 'Min', 1, 'D'),
+ (12, 'H', 0.5, 'D'),
+ (24, 'H', 1, 'D'),
+ ])
+ def test_resample_equivalent_offsets(self, n1, freq1, n2, freq2, k):
+ # GH 24127
+ n1_ = n1 * k
+ n2_ = n2 * k
+ s = pd.Series(0, index=pd.date_range('19910905 13:00',
+ '19911005 07:00',
+ freq=freq1))
+ s = s + range(len(s))
+
+ result1 = s.resample(str(n1_) + freq1).mean()
+ result2 = s.resample(str(n2_) + freq2).mean()
+ assert_series_equal(result1, result2)
| - [ ] closes #24127
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24159 | 2018-12-08T05:24:02Z | 2018-12-13T02:02:21Z | 2018-12-13T02:02:20Z | 2018-12-13T03:20:05Z |
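The invariant the new ``test_resample_equivalent_offsets`` cases encode — equivalent offsets must produce identical resampled results once ``Day(n>1)`` anchors correctly — can be checked with a minimal sketch. This assumes a pandas release containing the fix; ``2880min`` is just two days spelled in minutes:

```python
import pandas as pd

# A sub-daily series spanning several days, deliberately not
# starting at midnight so that anchoring matters
idx = pd.date_range("1991-09-05 13:00", "1991-09-10 07:00", freq="15min")
s = pd.Series(range(len(idx)), index=idx)

# Two days spelled two ways: as a Day offset and as its minute
# equivalent. With the anchoring fix these bin identically.
by_day = s.resample("2D").mean()
by_min = s.resample("2880min").mean()
print(by_day.equals(by_min))
```

Before the fix, ``Day(n>1)`` fell through to the midnight-normalizing branch while the equivalent fixed-frequency offsets were anchored to the data, so the two results disagreed.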
BUG: Assorted DatetimeIndex bugfixes | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 3f881485937d8..b04d2eeba1ed0 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -380,6 +380,7 @@ Backwards incompatible API changes
- ``max_rows`` and ``max_cols`` parameters removed from :class:`HTMLFormatter` since truncation is handled by :class:`DataFrameFormatter` (:issue:`23818`)
- :meth:`read_csv` will now raise a ``ValueError`` if a column with missing values is declared as having dtype ``bool`` (:issue:`20591`)
- The column order of the resultant :class:`DataFrame` from :meth:`MultiIndex.to_frame` is now guaranteed to match the :attr:`MultiIndex.names` order. (:issue:`22420`)
+- :func:`pd.offsets.generate_range` argument ``time_rule`` has been removed; use ``offset`` instead (:issue:`24157`)
Percentage change on groupby changes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -1133,7 +1134,6 @@ Deprecations
- In :meth:`Series.where` with Categorical data, providing an ``other`` that is not present in the categories is deprecated. Convert the categorical to a different dtype or add the ``other`` to the categories first (:issue:`24077`).
- :meth:`Series.clip_lower`, :meth:`Series.clip_upper`, :meth:`DataFrame.clip_lower` and :meth:`DataFrame.clip_upper` are deprecated and will be removed in a future version. Use ``Series.clip(lower=threshold)``, ``Series.clip(upper=threshold)`` and the equivalent ``DataFrame`` methods (:issue:`24203`)
-
.. _whatsnew_0240.deprecations.datetimelike_int_ops:
Integer Addition/Subtraction with Datetime-like Classes Is Deprecated
@@ -1310,6 +1310,9 @@ Datetimelike
- Bug in :class:`Index` where calling ``np.array(dtindex, dtype=object)`` on a timezone-naive :class:`DatetimeIndex` would return an array of ``datetime`` objects instead of :class:`Timestamp` objects, potentially losing nanosecond portions of the timestamps (:issue:`23524`)
- Bug in :class:`Categorical.__setitem__` not allowing setting with another ``Categorical`` when both are undordered and have the same categories, but in a different order (:issue:`24142`)
- Bug in :func:`date_range` where using dates with millisecond resolution or higher could return incorrect values or the wrong number of values in the index (:issue:`24110`)
+- Bug in :class:`DatetimeIndex` where constructing a :class:`DatetimeIndex` from a :class:`Categorical` or :class:`CategoricalIndex` would incorrectly drop timezone information (:issue:`18664`)
+- Bug in :class:`DatetimeIndex` and :class:`TimedeltaIndex` where indexing with ``Ellipsis`` would incorrectly lose the index's ``freq`` attribute (:issue:`21282`)
+- Clarified error message produced when passing an incorrect ``freq`` argument to :class:`DatetimeIndex` with ``NaT`` as the first entry in the passed data (:issue:`11587`)
Timedelta
^^^^^^^^^
@@ -1422,6 +1425,7 @@ Indexing
- Bug in :func:`Index.union` and :func:`Index.intersection` where name of the ``Index`` of the result was not computed correctly for certain cases (:issue:`9943`, :issue:`9862`)
- Bug in :class:`Index` slicing with boolean :class:`Index` may raise ``TypeError`` (:issue:`22533`)
- Bug in ``PeriodArray.__setitem__`` when accepting slice and list-like value (:issue:`23978`)
+- Bug in :class:`DatetimeIndex`, :class:`TimedeltaIndex` where indexing with ``Ellipsis`` would lose their ``freq`` attribute (:issue:`21282`)
Missing
^^^^^^^
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index ceaf9e748fe5a..a6eacc3bb4bfd 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -351,6 +351,10 @@ def __getitem__(self, key):
freq = key.step * self.freq
else:
freq = self.freq
+ elif key is Ellipsis:
+ # GH#21282 indexing with Ellipsis is similar to a full slice,
+ # should preserve `freq` attribute
+ freq = self.freq
attribs['freq'] = freq
@@ -547,9 +551,22 @@ def _validate_frequency(cls, index, freq, **kwargs):
if index.size == 0 or inferred == freq.freqstr:
return None
- on_freq = cls._generate_range(start=index[0], end=None,
- periods=len(index), freq=freq, **kwargs)
- if not np.array_equal(index.asi8, on_freq.asi8):
+ try:
+ on_freq = cls._generate_range(start=index[0], end=None,
+ periods=len(index), freq=freq,
+ **kwargs)
+ if not np.array_equal(index.asi8, on_freq.asi8):
+ raise ValueError
+ except ValueError as e:
+ if "non-fixed" in str(e):
+ # non-fixed frequencies are not meaningful for timedelta64;
+ # we retain that error message
+ raise e
+ # GH#11587 the main way this is reached is if the `np.array_equal`
+ # check above is False. This can also be reached if index[0]
+ # is `NaT`, in which case the call to `cls._generate_range` will
+ # raise a ValueError, which we re-raise with a more targeted
+ # message.
raise ValueError('Inferred frequency {infer} from passed values '
'does not conform to passed frequency {passed}'
.format(infer=inferred, passed=freq.freqstr))
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 4849ee1e3e665..2ecbf9f0ff847 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -14,9 +14,9 @@
from pandas.util._decorators import Appender
from pandas.core.dtypes.common import (
- _INT64_DTYPE, _NS_DTYPE, is_datetime64_dtype, is_datetime64tz_dtype,
- is_extension_type, is_float_dtype, is_int64_dtype, is_object_dtype,
- is_period_dtype, is_string_dtype, is_timedelta64_dtype)
+ _INT64_DTYPE, _NS_DTYPE, is_categorical_dtype, is_datetime64_dtype,
+ is_datetime64tz_dtype, is_extension_type, is_float_dtype, is_int64_dtype,
+ is_object_dtype, is_period_dtype, is_string_dtype, is_timedelta64_dtype)
from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries
from pandas.core.dtypes.missing import isna
@@ -264,6 +264,8 @@ def _generate_range(cls, start, end, periods, freq, tz=None,
if closed is not None:
raise ValueError("Closed has to be None if not both of start"
"and end are defined")
+ if start is NaT or end is NaT:
+ raise ValueError("Neither `start` nor `end` can be NaT")
left_closed, right_closed = dtl.validate_endpoints(closed)
@@ -1652,6 +1654,13 @@ def maybe_convert_dtype(data, copy):
raise TypeError("Passing PeriodDtype data is invalid. "
"Use `data.to_timestamp()` instead")
+ elif is_categorical_dtype(data):
+ # GH#18664 preserve tz in going DTI->Categorical->DTI
+ # TODO: cases where we need to do another pass through this func,
+ # e.g. the categories are timedelta64s
+ data = data.categories.take(data.codes, fill_value=NaT)
+ copy = False
+
elif is_extension_type(data) and not is_datetime64tz_dtype(data):
# Includes categorical
# TODO: We have no tests for these
diff --git a/pandas/tests/indexes/datetimes/test_construction.py b/pandas/tests/indexes/datetimes/test_construction.py
index 5de79044bc239..88c322ff7c9ff 100644
--- a/pandas/tests/indexes/datetimes/test_construction.py
+++ b/pandas/tests/indexes/datetimes/test_construction.py
@@ -14,12 +14,42 @@
from pandas import (
DatetimeIndex, Index, Timestamp, date_range, datetime, offsets,
to_datetime)
-from pandas.core.arrays import period_array
+from pandas.core.arrays import (
+ DatetimeArrayMixin as DatetimeArray, period_array)
import pandas.util.testing as tm
class TestDatetimeIndex(object):
+ @pytest.mark.parametrize('dt_cls', [DatetimeIndex, DatetimeArray])
+ def test_freq_validation_with_nat(self, dt_cls):
+ # GH#11587 make sure we get a useful error message when generate_range
+ # raises
+ msg = ("Inferred frequency None from passed values does not conform "
+ "to passed frequency D")
+ with pytest.raises(ValueError, match=msg):
+ dt_cls([pd.NaT, pd.Timestamp('2011-01-01')], freq='D')
+ with pytest.raises(ValueError, match=msg):
+ dt_cls([pd.NaT, pd.Timestamp('2011-01-01').value],
+ freq='D')
+
+ def test_categorical_preserves_tz(self):
+ # GH#18664 retain tz when going DTI-->Categorical-->DTI
+ # TODO: parametrize over DatetimeIndex/DatetimeArray
+ # once CategoricalIndex(DTA) works
+
+ dti = pd.DatetimeIndex(
+ [pd.NaT, '2015-01-01', '1999-04-06 15:14:13', '2015-01-01'],
+ tz='US/Eastern')
+
+ ci = pd.CategoricalIndex(dti)
+ carr = pd.Categorical(dti)
+ cser = pd.Series(ci)
+
+ for obj in [ci, carr, cser]:
+ result = pd.DatetimeIndex(obj)
+ tm.assert_index_equal(result, dti)
+
def test_dti_with_period_data_raises(self):
# GH#23675
data = pd.PeriodIndex(['2016Q1', '2016Q2'], freq='Q')
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index 11cefec4f34cf..a39100b3ec204 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -80,6 +80,14 @@ def test_date_range_timestamp_equiv_preserve_frequency(self):
class TestDateRanges(TestData):
+ def test_date_range_nat(self):
+ # GH#11587
+ msg = "Neither `start` nor `end` can be NaT"
+ with pytest.raises(ValueError, match=msg):
+ date_range(start='2016-01-01', end=pd.NaT, freq='D')
+ with pytest.raises(ValueError, match=msg):
+ date_range(start=pd.NaT, end='2016-01-01', freq='D')
+
def test_date_range_out_of_bounds(self):
# GH#14187
with pytest.raises(OutOfBoundsDatetime):
@@ -533,12 +541,12 @@ class TestGenRangeGeneration(object):
def test_generate(self):
rng1 = list(generate_range(START, END, offset=BDay()))
- rng2 = list(generate_range(START, END, time_rule='B'))
+ rng2 = list(generate_range(START, END, offset='B'))
assert rng1 == rng2
def test_generate_cday(self):
rng1 = list(generate_range(START, END, offset=CDay()))
- rng2 = list(generate_range(START, END, time_rule='C'))
+ rng2 = list(generate_range(START, END, offset='C'))
assert rng1 == rng2
def test_1(self):
diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py
index 944c925dabe3e..c3b00133228d8 100644
--- a/pandas/tests/indexes/datetimes/test_indexing.py
+++ b/pandas/tests/indexes/datetimes/test_indexing.py
@@ -16,6 +16,15 @@
class TestGetItem(object):
+ def test_ellipsis(self):
+ # GH#21282
+ idx = pd.date_range('2011-01-01', '2011-01-31', freq='D',
+ tz='Asia/Tokyo', name='idx')
+
+ result = idx[...]
+ assert result.equals(idx)
+ assert result is not idx
+
def test_getitem(self):
idx1 = pd.date_range('2011-01-01', '2011-01-31', freq='D', name='idx')
idx2 = pd.date_range('2011-01-01', '2011-01-31', freq='D',
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index aaa1126e92f3d..29b96604b7ea8 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -13,6 +13,14 @@
class TestGetItem(object):
+ def test_ellipsis(self):
+ # GH#21282
+ idx = period_range('2011-01-01', '2011-01-31', freq='D',
+ name='idx')
+
+ result = idx[...]
+ assert result.equals(idx)
+ assert result is not idx
def test_getitem(self):
idx1 = pd.period_range('2011-01-01', '2011-01-31', freq='D',
diff --git a/pandas/tests/indexes/timedeltas/test_indexing.py b/pandas/tests/indexes/timedeltas/test_indexing.py
index 94d694b644eb8..4e98732456d2c 100644
--- a/pandas/tests/indexes/timedeltas/test_indexing.py
+++ b/pandas/tests/indexes/timedeltas/test_indexing.py
@@ -9,6 +9,14 @@
class TestGetItem(object):
+ def test_ellipsis(self):
+ # GH#21282
+ idx = timedelta_range('1 day', '31 day', freq='D', name='idx')
+
+ result = idx[...]
+ assert result.equals(idx)
+ assert result is not idx
+
def test_getitem(self):
idx1 = timedelta_range('1 day', '31 day', freq='D', name='idx')
diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py
index 030887ac731f3..456e0b10e5a96 100644
--- a/pandas/tests/tseries/offsets/test_offsets.py
+++ b/pandas/tests/tseries/offsets/test_offsets.py
@@ -53,17 +53,11 @@ def test_to_m8():
valb = datetime(2007, 10, 1)
valu = _to_m8(valb)
assert isinstance(valu, np.datetime64)
- # assert valu == np.datetime64(datetime(2007,10,1))
- # def test_datetime64_box():
- # valu = np.datetime64(datetime(2007,10,1))
- # valb = _dt_box(valu)
- # assert type(valb) == datetime
- # assert valb == datetime(2007,10,1)
- #####
- # DateOffset Tests
- #####
+#####
+# DateOffset Tests
+#####
class Base(object):
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 45f10a2f06fa2..cff9556a4230e 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -2457,8 +2457,7 @@ class Nano(Tick):
# ---------------------------------------------------------------------
-def generate_range(start=None, end=None, periods=None,
- offset=BDay(), time_rule=None):
+def generate_range(start=None, end=None, periods=None, offset=BDay()):
"""
Generates a sequence of dates corresponding to the specified time
offset. Similar to dateutil.rrule except uses pandas DateOffset
@@ -2470,8 +2469,6 @@ def generate_range(start=None, end=None, periods=None,
end : datetime (default None)
periods : int, (default None)
offset : DateOffset, (default BDay())
- time_rule : (legacy) name of DateOffset object to be used, optional
- Corresponds with names expected by tseries.frequencies.get_offset
Notes
-----
@@ -2479,17 +2476,13 @@ def generate_range(start=None, end=None, periods=None,
* At least two of (start, end, periods) must be specified.
* If both start and end are specified, the returned dates will
satisfy start <= date <= end.
- * If both time_rule and offset are specified, time_rule supersedes offset.
Returns
-------
dates : generator object
-
"""
- if time_rule is not None:
- from pandas.tseries.frequencies import get_offset
-
- offset = get_offset(time_rule)
+ from pandas.tseries.frequencies import to_offset
+ offset = to_offset(offset)
start = to_datetime(start)
end = to_datetime(end)
| Bit of a hodge-podge.
- Deprecate `time_rule` in `offsets.generate_range`
- Improve error messages if NaT gets passed to date_range, or in validate_frequency (closes #11587)
- Preserve timezone in DatetimeIndex->CategoricalIndex->DatetimeIndex (closes #18664)
- Preserve freq when indexing DTI/TDI with Ellipsis (closes #21282)
- [x] closes #11587
@TomAugspurger nothing here is urgent, so if this overlaps with DTA it can stay on the backburner. | https://api.github.com/repos/pandas-dev/pandas/pulls/24157 | 2018-12-08T00:09:34Z | 2018-12-14T14:53:44Z | 2018-12-14T14:53:43Z | 2018-12-14T16:12:39Z |
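Two of the fixes in this PR are easy to exercise directly; the following is a minimal sketch against a pandas release containing them (behaviour assumed stable in later versions):

```python
import pandas as pd

# GH#21282: indexing with Ellipsis behaves like a full slice and
# preserves the index's `freq` attribute
idx = pd.date_range("2011-01-01", periods=5, freq="D", tz="Asia/Tokyo")
result = idx[...]
print(result.freq == idx.freq)

# GH#11587 follow-up: NaT endpoints to date_range are rejected
# with a targeted ValueError instead of an opaque failure
err = None
try:
    pd.date_range(start="2016-01-01", end=pd.NaT, freq="D")
except ValueError as exc:
    err = exc
print(err)
```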
BUG: Throw ValueError when the format kwarg is passed to the HDFStore… | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 0b2b526dfe9e7..f33003e95d651 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1492,6 +1492,7 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form
- Bug in :func:`DataFrame.to_string()` that broke column alignment when ``index=False`` and width of first column's values is greater than the width of first column's header (:issue:`16839`, :issue:`13032`)
- Bug in :func:`DataFrame.to_string()` that caused representations of :class:`DataFrame` to not take up the whole window (:issue:`22984`)
- Bug in :func:`DataFrame.to_csv` where a single level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (:issue:`19589`).
+- :func:`HDFStore` will raise ``ValueError`` when the ``format`` kwarg is passed to the constructor (:issue:`13291`)
- Bug in :meth:`HDFStore.append` when appending a :class:`DataFrame` with an empty string column and ``min_itemsize`` < 8 (:issue:`12242`)
- Bug in :func:`read_csv()` in which memory leaks occurred in the C engine when parsing ``NaN`` values due to insufficient cleanup on completion or error (:issue:`21353`)
- Bug in :func:`read_csv()` in which incorrect error messages were being raised when ``skipfooter`` was passed in along with ``nrows``, ``iterator``, or ``chunksize`` (:issue:`23711`)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 8132c458ce852..5b76b4bb3d6ab 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -458,6 +458,10 @@ class HDFStore(StringMixin):
def __init__(self, path, mode=None, complevel=None, complib=None,
fletcher32=False, **kwargs):
+
+ if 'format' in kwargs:
+ raise ValueError('format is not a defined argument for HDFStore')
+
try:
import tables # noqa
except ImportError as ex: # pragma: no cover
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index 17f27e60ec28f..1c4d00c8b3e15 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -146,6 +146,11 @@ def teardown_method(self, method):
@pytest.mark.filterwarnings("ignore:\\nPanel:FutureWarning")
class TestHDFStore(Base):
+ def test_format_kwarg_in_constructor(self):
+ # GH 13291
+ with ensure_clean_path(self.path) as path:
+ pytest.raises(ValueError, HDFStore, path, format='table')
+
def test_context(self):
path = create_tempfile(self.path)
try:
| … constructor
- [x] closes #13291
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24155 | 2018-12-07T22:49:05Z | 2018-12-09T16:10:58Z | 2018-12-09T16:10:58Z | 2018-12-09T16:11:06Z |
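The guard added above runs before any file is opened or PyTables is imported, so passing ``format`` to the constructor fails fast. A small sketch, assuming the guard is still present in the installed pandas:

```python
import pandas as pd

# `format` belongs to put/append/to_hdf, not to the HDFStore
# constructor; the constructor rejects it immediately
err = None
try:
    pd.HDFStore("never_created.h5", format="table")
except ValueError as exc:
    err = exc
print(err)
```

``format='fixed'`` or ``format='table'`` should instead be passed to the write methods (``HDFStore.put``, ``HDFStore.append``, ``DataFrame.to_hdf``).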
DOC: update usage of randn in io.rst | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 313c4d723d079..c6e7bccdd8aad 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -14,7 +14,6 @@
from pandas.compat import StringIO, BytesIO
- randn = np.random.randn
np.set_printoptions(precision=4, suppress=True)
plt.close('all')
pd.options.display.max_rows = 15
@@ -1767,7 +1766,7 @@ Note ``NaN``'s, ``NaT``'s and ``None`` will be converted to ``null`` and ``datet
.. ipython:: python
- dfj = pd.DataFrame(randn(5, 2), columns=list('AB'))
+ dfj = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))
json = dfj.to_json()
json
@@ -1842,7 +1841,7 @@ Writing in ISO date format:
.. ipython:: python
- dfd = pd.DataFrame(randn(5, 2), columns=list('AB'))
+ dfd = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))
dfd['date'] = pd.Timestamp('20130101')
dfd = dfd.sort_index(1, ascending=False)
json = dfd.to_json(date_format='iso')
@@ -2482,7 +2481,7 @@ Read in pandas ``to_html`` output (with some loss of floating point precision):
.. code-block:: python
- df = pd.DataFrame(randn(2, 2))
+ df = pd.DataFrame(np.random.randn(2, 2))
s = df.to_html(float_format='{0:.40g}'.format)
dfin = pd.read_html(s, index_col=0)
@@ -2535,7 +2534,7 @@ in the method ``to_string`` described above.
.. ipython:: python
- df = pd.DataFrame(randn(2, 2))
+ df = pd.DataFrame(np.random.randn(2, 2))
df
print(df.to_html()) # raw html
@@ -2611,7 +2610,7 @@ Finally, the ``escape`` argument allows you to control whether the
.. ipython:: python
- df = pd.DataFrame({'a': list('&<>'), 'b': randn(3)})
+ df = pd.DataFrame({'a': list('&<>'), 'b': np.random.randn(3)})
.. ipython:: python
@@ -3187,7 +3186,7 @@ applications (CTRL-V on many operating systems). Here we illustrate writing a
.. ipython:: python
- df = pd.DataFrame(randn(5, 3))
+ df = pd.DataFrame(np.random.randn(5, 3))
df
df.to_clipboard()
pd.read_clipboard()
@@ -3414,10 +3413,10 @@ dict:
.. ipython:: python
index = pd.date_range('1/1/2000', periods=8)
- s = pd.Series(randn(5), index=['a', 'b', 'c', 'd', 'e'])
- df = pd.DataFrame(randn(8, 3), index=index,
+ s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
+ df = pd.DataFrame(np.random.randn(8, 3), index=index,
columns=['A', 'B', 'C'])
- wp = pd.Panel(randn(2, 5, 4), items=['Item1', 'Item2'],
+ wp = pd.Panel(np.random.randn(2, 5, 4), items=['Item1', 'Item2'],
major_axis=pd.date_range('1/1/2000', periods=5),
minor_axis=['A', 'B', 'C', 'D'])
@@ -3563,7 +3562,7 @@ This format is specified by default when using ``put`` or ``to_hdf`` or by ``for
.. code-block:: python
- >>> pd.DataFrame(randn(10, 2)).to_hdf('test_fixed.h5', 'df')
+ >>> pd.DataFrame(np.random.randn(10, 2)).to_hdf('test_fixed.h5', 'df')
>>> pd.read_hdf('test_fixed.h5', 'df', where='index>5')
TypeError: cannot pass a where specification when reading a fixed format.
this store must be selected in its entirety
@@ -3699,9 +3698,9 @@ defaults to `nan`.
.. ipython:: python
- df_mixed = pd.DataFrame({'A': randn(8),
- 'B': randn(8),
- 'C': np.array(randn(8), dtype='float32'),
+ df_mixed = pd.DataFrame({'A': np.random.randn(8),
+ 'B': np.random.randn(8),
+ 'C': np.array(np.random.randn(8), dtype='float32'),
'string': 'string',
'int': 1,
'bool': True,
@@ -3841,7 +3840,7 @@ Here are some examples:
.. ipython:: python
- dfq = pd.DataFrame(randn(10, 4), columns=list('ABCD'),
+ dfq = pd.DataFrame(np.random.randn(10, 4), columns=list('ABCD'),
index=pd.date_range('20130101', periods=10))
store.append('dfq', dfq, format='table', data_columns=True)
@@ -3946,8 +3945,8 @@ Oftentimes when appending large amounts of data to a store, it is useful to turn
.. ipython:: python
- df_1 = pd.DataFrame(randn(10, 2), columns=list('AB'))
- df_2 = pd.DataFrame(randn(10, 2), columns=list('AB'))
+ df_1 = pd.DataFrame(np.random.randn(10, 2), columns=list('AB'))
+ df_2 = pd.DataFrame(np.random.randn(10, 2), columns=list('AB'))
st = pd.HDFStore('appends.h5', mode='w')
st.append('df', df_1, data_columns=['B'], index=False)
@@ -4151,7 +4150,8 @@ results.
.. ipython:: python
- df_mt = pd.DataFrame(randn(8, 6), index=pd.date_range('1/1/2000', periods=8),
+ df_mt = pd.DataFrame(np.random.randn(8, 6),
+ index=pd.date_range('1/1/2000', periods=8),
columns=['A', 'B', 'C', 'D', 'E', 'F'])
df_mt['foo'] = 'bar'
df_mt.loc[df_mt.index[1], ('A', 'B')] = np.nan
@@ -5181,7 +5181,7 @@ into a .dta file. The format version of this file is always 115 (Stata 12).
.. ipython:: python
- df = pd.DataFrame(randn(10, 2), columns=list('AB'))
+ df = pd.DataFrame(np.random.randn(10, 2), columns=list('AB'))
df.to_stata('stata.dta')
*Stata* data files have limited data type support; only strings with
@@ -5405,7 +5405,7 @@ ignored.
.. code-block:: ipython
In [1]: sz = 1000000
- In [2]: df = pd.DataFrame({'A': randn(sz), 'B': [1] * sz})
+ In [2]: df = pd.DataFrame({'A': np.random.randn(sz), 'B': [1] * sz})
In [3]: df.info()
<class 'pandas.core.frame.DataFrame'>
| - [x] closes #24148
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24154 | 2018-12-07T21:50:41Z | 2018-12-09T15:49:44Z | 2018-12-09T15:49:44Z | 2018-12-09T15:49:48Z |
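The substitution this PR makes is mechanical; a minimal sketch of one updated doc snippet, with the removed alias shown only in a comment:

```python
import numpy as np
import pandas as pd

# Before this change, io.rst set up a module-level alias in hidden setup code:
#     randn = np.random.randn
# The docs now call np.random.randn directly so each snippet stands alone.
dfj = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
json_str = dfj.to_json()
```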
DOC: remove long listings in the tutorials.rst page | diff --git a/doc/source/tutorials.rst b/doc/source/tutorials.rst
index 83c891c0c0e40..d5da7df347573 100644
--- a/doc/source/tutorials.rst
+++ b/doc/source/tutorials.rst
@@ -18,117 +18,28 @@ A handy pandas `cheat sheet <http://pandas.pydata.org/Pandas_Cheat_Sheet.pdf>`_.
Community Guides
================
-pandas Cookbook
----------------
+pandas Cookbook by Julia Evans
+------------------------------
The goal of this 2015 cookbook (by `Julia Evans <http://jvns.ca>`_) is to
give you some concrete examples for getting started with pandas. These
are examples with real-world data, and all the bugs and weirdness that
entails.
+For the table of contents, see the `pandas-cookbook GitHub
+repository <http://github.com/jvns/pandas-cookbook>`_.
-Here are links to the v0.2 release. For an up-to-date table of contents, see the `pandas-cookbook GitHub
-repository <http://github.com/jvns/pandas-cookbook>`_. To run the examples in this tutorial, you'll need to
-clone the GitHub repository and get IPython Notebook running.
-See `How to use this cookbook <https://github.com/jvns/pandas-cookbook#how-to-use-this-cookbook>`_.
-
-* `A quick tour of the IPython Notebook: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/A%20quick%20tour%20of%20IPython%20Notebook.ipynb>`_
- Shows off IPython's awesome tab completion and magic functions.
-* `Chapter 1: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%201%20-%20Reading%20from%20a%20CSV.ipynb>`_
- Reading your data into pandas is pretty much the easiest thing. Even
- when the encoding is wrong!
-* `Chapter 2: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%202%20-%20Selecting%20data%20%26%20finding%20the%20most%20common%20complaint%20type.ipynb>`_
- It's not totally obvious how to select data from a pandas dataframe.
- Here we explain the basics (how to take slices and get columns)
-* `Chapter 3: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%203%20-%20Which%20borough%20has%20the%20most%20noise%20complaints%20%28or%2C%20more%20selecting%20data%29.ipynb>`_
- Here we get into serious slicing and dicing and learn how to filter
- dataframes in complicated ways, really fast.
-* `Chapter 4: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%204%20-%20Find%20out%20on%20which%20weekday%20people%20bike%20the%20most%20with%20groupby%20and%20aggregate.ipynb>`_
- Groupby/aggregate is seriously my favorite thing about pandas
- and I use it all the time. You should probably read this.
-* `Chapter 5: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%205%20-%20Combining%20dataframes%20and%20scraping%20Canadian%20weather%20data.ipynb>`_
- Here you get to find out if it's cold in Montreal in the winter
- (spoiler: yes). Web scraping with pandas is fun! Here we combine dataframes.
-* `Chapter 6: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%206%20-%20String%20Operations-%20Which%20month%20was%20the%20snowiest.ipynb>`_
- Strings with pandas are great. It has all these vectorized string
- operations and they're the best. We will turn a bunch of strings
- containing "Snow" into vectors of numbers in a trice.
-* `Chapter 7: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%207%20-%20Cleaning%20up%20messy%20data.ipynb>`_
- Cleaning up messy data is never a joy, but with pandas it's easier.
-* `Chapter 8: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%208%20-%20How%20to%20deal%20with%20timestamps.ipynb>`_
- Parsing Unix timestamps is confusing at first but it turns out
- to be really easy.
-* `Chapter 9: <http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.2/cookbook/Chapter%209%20-%20Loading%20data%20from%20SQL%20databases.ipynb>`_
- Reading data from SQL databases.
-
-
-Lessons for new pandas users
+Learn Pandas by Hernan Rojas
----------------------------
-For more resources, please visit the main `repository <https://bitbucket.org/hrojas/learn-pandas>`__.
-
-* `01 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/01%20-%20Lesson.ipynb>`_
- * Importing libraries
- * Creating data sets
- * Creating data frames
- * Reading from CSV
- * Exporting to CSV
- * Finding maximums
- * Plotting data
-
-* `02 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/02%20-%20Lesson.ipynb>`_
- * Reading from TXT
- * Exporting to TXT
- * Selecting top/bottom records
- * Descriptive statistics
- * Grouping/sorting data
-
-* `03 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/03%20-%20Lesson.ipynb>`_
- * Creating functions
- * Reading from EXCEL
- * Exporting to EXCEL
- * Outliers
- * Lambda functions
- * Slice and dice data
-
-* `04 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/04%20-%20Lesson.ipynb>`_
- * Adding/deleting columns
- * Index operations
-
-* `05 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/05%20-%20Lesson.ipynb>`_
- * Stack/Unstack/Transpose functions
-
-* `06 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/06%20-%20Lesson.ipynb>`_
- * GroupBy function
-
-* `07 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/07%20-%20Lesson.ipynb>`_
- * Ways to calculate outliers
-
-* `08 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/08%20-%20Lesson.ipynb>`_
- * Read from Microsoft SQL databases
-
-* `09 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/09%20-%20Lesson.ipynb>`_
- * Export to CSV/EXCEL/TXT
-
-* `10 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/10%20-%20Lesson.ipynb>`_
- * Converting between different kinds of formats
-
-* `11 - Lesson: <http://nbviewer.ipython.org/urls/bitbucket.org/hrojas/learn-pandas/raw/master/lessons/11%20-%20Lesson.ipynb>`_
- * Combining data from various sources
-
+A set of lessons for new pandas users: `learn-pandas <https://bitbucket.org/hrojas/learn-pandas>`__.
Practical data analysis with Python
-----------------------------------
-This `guide <http://wavedatalab.github.io/datawithpython>`_ is a comprehensive introduction to the data analysis process using the Python data ecosystem and an interesting open dataset.
-There are four sections covering selected topics as follows:
-
-* `Munging Data <http://wavedatalab.github.io/datawithpython/munge.html>`_
-
-* `Aggregating Data <http://wavedatalab.github.io/datawithpython/aggregate.html>`_
-
-* `Visualizing Data <http://wavedatalab.github.io/datawithpython/visualize.html>`_
-
-* `Time Series <http://wavedatalab.github.io/datawithpython/timeseries.html>`_
+This `guide <http://wavedatalab.github.io/datawithpython>`_ is an introduction to the data analysis process using the Python data ecosystem and an interesting open dataset.
+There are four sections covering selected topics as `munging data <http://wavedatalab.github.io/datawithpython/munge.html>`__,
+`aggregating data <http://wavedatalab.github.io/datawithpython/aggregate.html>`_, `visualizing data <http://wavedatalab.github.io/datawithpython/visualize.html>`_
+and `time series <http://wavedatalab.github.io/datawithpython/timeseries.html>`_.
.. _tutorial-exercises-new-users:
@@ -137,25 +48,6 @@ Exercises for new users
Practice your skills with real data sets and exercises.
For more resources, please visit the main `repository <https://github.com/guipsamora/pandas_exercises>`__.
-* `01 - Getting & Knowing Your Data <https://github.com/guipsamora/pandas_exercises/tree/master/01_Getting_%26_Knowing_Your_Data>`_
-
-* `02 - Filtering & Sorting <https://github.com/guipsamora/pandas_exercises/tree/master/02_Filtering_%26_Sorting>`_
-
-* `03 - Grouping <https://github.com/guipsamora/pandas_exercises/tree/master/03_Grouping>`_
-
-* `04 - Apply <https://github.com/guipsamora/pandas_exercises/tree/master/04_Apply>`_
-
-* `05 - Merge <https://github.com/guipsamora/pandas_exercises/tree/master/05_Merge>`_
-
-* `06 - Stats <https://github.com/guipsamora/pandas_exercises/tree/master/06_Stats>`_
-
-* `07 - Visualization <https://github.com/guipsamora/pandas_exercises/tree/master/07_Visualization>`_
-
-* `08 - Creating Series and DataFrames <https://github.com/guipsamora/pandas_exercises/tree/master/08_Creating_Series_and_DataFrames/Pokemon>`_
-
-* `09 - Time Series <https://github.com/guipsamora/pandas_exercises/tree/master/09_Time_Series>`_
-
-* `10 - Deleting <https://github.com/guipsamora/pandas_exercises/tree/master/10_Deleting>`_
.. _tutorial-modern:
| Triggered by the discussion in https://github.com/pandas-dev/pandas/pull/24117: currently some of the listed tutorials include a very long table of contents, while others include much shorter ones. Here I kept the current list of tutorials but removed the detailed TOCs from all of them; the contents of each tutorial can be found on the page its single link leads to. | https://api.github.com/repos/pandas-dev/pandas/pulls/24152 | 2018-12-07T21:27:46Z | 2018-12-09T14:27:16Z | 2018-12-09T14:27:16Z | 2018-12-09T16:08:44Z |
DOC: fix warnings in whatsnew about MI.labels -> codes rename | diff --git a/doc/source/whatsnew/v0.10.1.rst b/doc/source/whatsnew/v0.10.1.rst
index 5679babf07b73..a627454561759 100644
--- a/doc/source/whatsnew/v0.10.1.rst
+++ b/doc/source/whatsnew/v0.10.1.rst
@@ -97,22 +97,58 @@ columns, this is equivalent to passing a
``HDFStore`` now serializes MultiIndex dataframes when appending tables.
-.. ipython:: python
-
- index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
- ['one', 'two', 'three']],
- labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
- [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
- names=['foo', 'bar'])
- df = DataFrame(np.random.randn(10, 3), index=index,
- columns=['A', 'B', 'C'])
- df
-
- store.append('mi',df)
- store.select('mi')
-
- # the levels are automatically included as data columns
- store.select('mi', "foo='bar'")
+.. code-block:: ipython
+
+ In [19]: index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
+ ....: ['one', 'two', 'three']],
+ ....: labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+ ....: [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+ ....: names=['foo', 'bar'])
+ ....:
+
+ In [20]: df = DataFrame(np.random.randn(10, 3), index=index,
+ ....: columns=['A', 'B', 'C'])
+ ....:
+
+ In [21]: df
+ Out[21]:
+ A B C
+ foo bar
+ foo one -0.116619 0.295575 -1.047704
+ two 1.640556 1.905836 2.772115
+ three 0.088787 -1.144197 -0.633372
+ bar one 0.925372 -0.006438 -0.820408
+ two -0.600874 -1.039266 0.824758
+ baz two -0.824095 -0.337730 -0.927764
+ three -0.840123 0.248505 -0.109250
+ qux one 0.431977 -0.460710 0.336505
+ two -3.207595 -1.535854 0.409769
+ three -0.673145 -0.741113 -0.110891
+
+ In [22]: store.append('mi',df)
+
+ In [23]: store.select('mi')
+ Out[23]:
+ A B C
+ foo bar
+ foo one -0.116619 0.295575 -1.047704
+ two 1.640556 1.905836 2.772115
+ three 0.088787 -1.144197 -0.633372
+ bar one 0.925372 -0.006438 -0.820408
+ two -0.600874 -1.039266 0.824758
+ baz two -0.824095 -0.337730 -0.927764
+ three -0.840123 0.248505 -0.109250
+ qux one 0.431977 -0.460710 0.336505
+ two -3.207595 -1.535854 0.409769
+ three -0.673145 -0.741113 -0.110891
+
+ # the levels are automatically included as data columns
+ In [24]: store.select('mi', "foo='bar'")
+ Out[24]:
+ A B C
+ foo bar
+ bar one 0.925372 -0.006438 -0.820408
+ two -0.600874 -1.039266 0.824758
Multi-table creation via ``append_to_multiple`` and selection via
``select_as_multiple`` can create/select from multiple tables and return a
diff --git a/doc/source/whatsnew/v0.20.0.rst b/doc/source/whatsnew/v0.20.0.rst
index 8456449ee4419..2df686a79e837 100644
--- a/doc/source/whatsnew/v0.20.0.rst
+++ b/doc/source/whatsnew/v0.20.0.rst
@@ -899,8 +899,8 @@ doesn't behave as desired.
df = pd.DataFrame(
{'value': [1, 2, 3, 4]},
- index=pd.MultiIndex(levels=[['a', 'b'], ['bb', 'aa']],
- labels=[[0, 0, 1, 1], [0, 1, 0, 1]]))
+ index=pd.MultiIndex([['a', 'b'], ['bb', 'aa']],
+ [[0, 0, 1, 1], [0, 1, 0, 1]]))
df
Previous Behavior:
| Follow-up on https://github.com/pandas-dev/pandas/pull/23752 to get rid of 2 warnings. | https://api.github.com/repos/pandas-dev/pandas/pulls/24150 | 2018-12-07T20:51:10Z | 2018-12-07T21:15:16Z | 2018-12-07T21:15:16Z | 2018-12-07T21:43:21Z |
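On a pandas version with the rename applied, the constructor call from the updated whatsnew example looks like this (a minimal sketch, assuming the post-rename `codes=` keyword is available):

```python
import pandas as pd

# 'codes' replaces the deprecated 'labels' constructor keyword (GH 13443).
mi = pd.MultiIndex(
    levels=[["a", "b"], ["bb", "aa"]],
    codes=[[0, 0, 1, 1], [0, 1, 0, 1]],
)
df = pd.DataFrame({"value": [1, 2, 3, 4]}, index=mi)
```

In practice `pd.MultiIndex.from_arrays` or `from_tuples` is the more common way to build a `MultiIndex`; the raw `levels`/`codes` form is what the whatsnew example above deliberately exercises.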
BUG: Fixed block placement from reindex(?) | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 53ae3200d2adb..083e0198c1d15 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1297,6 +1297,7 @@ Datetimelike
- Bug in the :class:`Series` repr with period-dtype data missing a space before the data (:issue:`23601`)
- Bug in :func:`date_range` when decrementing a start date to a past end date by a negative frequency (:issue:`23270`)
- Bug in :meth:`Series.min` which would return ``NaN`` instead of ``NaT`` when called on a series of ``NaT`` (:issue:`23282`)
+- Bug in :meth:`Series.combine_first` not properly aligning categoricals, so that missing values in ``self`` were not filled by valid values from ``other`` (:issue:`24147`)
- Bug in :func:`DataFrame.combine` with datetimelike values raising a TypeError (:issue:`23079`)
- Bug in :func:`date_range` with frequency of ``Day`` or higher where dates sufficiently far in the future could wrap around to the past instead of raising ``OutOfBoundsDatetime`` (:issue:`14187`)
- Bug in :class:`PeriodIndex` with attribute ``freq.n`` greater than 1 where adding a :class:`DateOffset` object would return incorrect results (:issue:`23215`)
diff --git a/pandas/_libs/internals.pyx b/pandas/_libs/internals.pyx
index 681530ed494d7..e0f26357cae6f 100644
--- a/pandas/_libs/internals.pyx
+++ b/pandas/_libs/internals.pyx
@@ -64,7 +64,8 @@ cdef class BlockPlacement:
return '%s(%r)' % (self.__class__.__name__, v)
- __repr__ = __str__
+ def __repr__(self):
+ return str(self)
def __len__(self):
cdef:
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 9c2d4cd5729d2..51c47a81f8e2f 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1887,7 +1887,7 @@ def take_nd(self, indexer, axis=0, new_mgr_locs=None, fill_tuple=None):
allow_fill=True)
# if we are a 1-dim object, then always place at 0
- if self.ndim == 1:
+ if self.ndim == 1 and new_mgr_locs is None:
new_mgr_locs = [0]
else:
if new_mgr_locs is None:
diff --git a/pandas/tests/extension/base/methods.py b/pandas/tests/extension/base/methods.py
index e9a89c1af2f22..f28f69c9d2893 100644
--- a/pandas/tests/extension/base/methods.py
+++ b/pandas/tests/extension/base/methods.py
@@ -164,6 +164,15 @@ def test_combine_add(self, data_repeated):
orig_data1._from_sequence([a + val for a in list(orig_data1)]))
self.assert_series_equal(result, expected)
+ @pytest.mark.xfail(reason="GH-24147", strict=True)
+ def test_combine_first(self, data):
+ # https://github.com/pandas-dev/pandas/issues/24147
+ a = pd.Series(data[:3])
+ b = pd.Series(data[2:5], index=[2, 3, 4])
+ result = a.combine_first(b)
+ expected = pd.Series(data[:5])
+ self.assert_series_equal(result, expected)
+
@pytest.mark.parametrize('frame', [True, False])
@pytest.mark.parametrize('periods, indices', [
(-2, [2, 3, 4, -1, -1]),
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index 14ef6237e8ddd..26cd39c4b807c 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -1283,3 +1283,12 @@ def test_validate_ndim():
with pytest.raises(ValueError, match=msg):
make_block(values, placement, ndim=2)
+
+
+def test_block_shape():
+ idx = pd.Index([0, 1, 2, 3, 4])
+ a = pd.Series([1, 2, 3]).reindex(idx)
+ b = pd.Series(pd.Categorical([1, 2, 3])).reindex(idx)
+
+ assert (a._data.blocks[0].mgr_locs.indexer ==
+ b._data.blocks[0].mgr_locs.indexer)
| Closes https://github.com/pandas-dev/pandas/issues/24147 | https://api.github.com/repos/pandas-dev/pandas/pulls/24149 | 2018-12-07T20:32:05Z | 2018-12-07T22:05:33Z | 2018-12-07T22:05:32Z | 2018-12-07T22:05:36Z |
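The xfail test added in this PR encodes the expected post-fix behaviour; a minimal sketch of the same scenario with a concrete categorical (assuming a pandas build where GH 24147 is fixed):

```python
import pandas as pd

data = pd.Categorical(list("abcde"))
a = pd.Series(data[:3])                    # index 0, 1, 2
b = pd.Series(data[2:5], index=[2, 3, 4])  # overlaps a at index 2

# Reindexing `a` to the union index leaves holes at 3 and 4; with the
# block-placement fix, combine_first fills those holes from `b`.
result = a.combine_first(b)
```

Before the fix, the mis-placed block from `reindex` meant the holes stayed missing instead of being filled from `other`.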
CLN: minor cleanups for MultiIndex.codes | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 53ae3200d2adb..aa8fbbd9b8dd8 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1102,8 +1102,9 @@ Deprecations
- :attr:`MultiIndex.labels` has been deprecated and replaced by :attr:`MultiIndex.codes`.
The functionality is unchanged. The new name better reflects the natures of
- these codes and makes the ``MultiIndex`` API more similar to the API for :class:`CategoricalIndex`(:issue:`13443`).
+ these codes and makes the ``MultiIndex`` API more similar to the API for :class:`CategoricalIndex` (:issue:`13443`).
As a consequence, other uses of the name ``labels`` in ``MultiIndex`` have also been deprecated and replaced with ``codes``:
+
- You should initialize a ``MultiIndex`` instance using a parameter named ``codes`` rather than ``labels``.
- ``MultiIndex.set_labels`` has been deprecated in favor of :meth:`MultiIndex.set_codes`.
- For method :meth:`MultiIndex.copy`, the ``labels`` parameter has been deprecated and replaced by a ``codes`` parameter.
diff --git a/pandas/core/indexes/frozen.py b/pandas/core/indexes/frozen.py
index 46731069d88b8..982645ebd5124 100644
--- a/pandas/core/indexes/frozen.py
+++ b/pandas/core/indexes/frozen.py
@@ -4,7 +4,7 @@
These are used for:
- .names (FrozenList)
-- .levels & .labels (FrozenNDArray)
+- .levels & .codes (FrozenNDArray)
"""
| Minor cleanup for #23752. The whatsnew didn't parse properly and other small stuff. | https://api.github.com/repos/pandas-dev/pandas/pulls/24146 | 2018-12-07T18:28:49Z | 2018-12-07T20:36:29Z | 2018-12-07T20:36:29Z | 2018-12-13T02:46:09Z |
REF/TST: Add more pytest idiom to util/test_util | diff --git a/pandas/tests/util/test_deprecate_kwarg.py b/pandas/tests/util/test_deprecate_kwarg.py
new file mode 100644
index 0000000000000..7287df9db8a62
--- /dev/null
+++ b/pandas/tests/util/test_deprecate_kwarg.py
@@ -0,0 +1,93 @@
+# -*- coding: utf-8 -*-
+import pytest
+
+from pandas.util._decorators import deprecate_kwarg
+
+import pandas.util.testing as tm
+
+
+@deprecate_kwarg("old", "new")
+def _f1(new=False):
+ return new
+
+
+_f2_mappings = {"yes": True, "no": False}
+
+
+@deprecate_kwarg("old", "new", _f2_mappings)
+def _f2(new=False):
+ return new
+
+
+def _f3_mapping(x):
+ return x + 1
+
+
+@deprecate_kwarg("old", "new", _f3_mapping)
+def _f3(new=0):
+ return new
+
+
+@pytest.mark.parametrize("key,klass", [
+ ("old", FutureWarning),
+ ("new", None)
+])
+def test_deprecate_kwarg(key, klass):
+ x = 78
+
+ with tm.assert_produces_warning(klass):
+ assert _f1(**{key: x}) == x
+
+
+@pytest.mark.parametrize("key", list(_f2_mappings.keys()))
+def test_dict_deprecate_kwarg(key):
+ with tm.assert_produces_warning(FutureWarning):
+ assert _f2(old=key) == _f2_mappings[key]
+
+
+@pytest.mark.parametrize("key", ["bogus", 12345, -1.23])
+def test_missing_deprecate_kwarg(key):
+ with tm.assert_produces_warning(FutureWarning):
+ assert _f2(old=key) == key
+
+
+@pytest.mark.parametrize("x", [1, -1.4, 0])
+def test_callable_deprecate_kwarg(x):
+ with tm.assert_produces_warning(FutureWarning):
+ assert _f3(old=x) == _f3_mapping(x)
+
+
+def test_callable_deprecate_kwarg_fail():
+ msg = "((can only|cannot) concatenate)|(must be str)|(Can't convert)"
+
+ with pytest.raises(TypeError, match=msg):
+ _f3(old="hello")
+
+
+def test_bad_deprecate_kwarg():
+ msg = "mapping from old to new argument values must be dict or callable!"
+
+ with pytest.raises(TypeError, match=msg):
+ @deprecate_kwarg("old", "new", 0)
+ def f4(new=None):
+ return new
+
+
+@deprecate_kwarg("old", None)
+def _f4(old=True, unchanged=True):
+ return old, unchanged
+
+
+@pytest.mark.parametrize("key", ["old", "unchanged"])
+def test_deprecate_keyword(key):
+ x = 9
+
+ if key == "old":
+ klass = FutureWarning
+ expected = (x, True)
+ else:
+ klass = None
+ expected = (True, x)
+
+ with tm.assert_produces_warning(klass):
+ assert _f4(**{key: x}) == expected
diff --git a/pandas/tests/util/test_locale.py b/pandas/tests/util/test_locale.py
new file mode 100644
index 0000000000000..b848b22994e7a
--- /dev/null
+++ b/pandas/tests/util/test_locale.py
@@ -0,0 +1,94 @@
+# -*- coding: utf-8 -*-
+import codecs
+import locale
+import os
+
+import pytest
+
+from pandas.compat import is_platform_windows
+
+import pandas.core.common as com
+import pandas.util.testing as tm
+
+_all_locales = tm.get_locales() or []
+_current_locale = locale.getlocale()
+
+# Don't run any of these tests if we are on Windows or have no locales.
+pytestmark = pytest.mark.skipif(is_platform_windows() or not _all_locales,
+ reason="Need non-Windows and locales")
+
+_skip_if_only_one_locale = pytest.mark.skipif(
+ len(_all_locales) <= 1, reason="Need multiple locales for meaningful test")
+
+
+def test_can_set_locale_valid_set():
+ # Can set the default locale.
+ assert tm.can_set_locale("")
+
+
+def test_can_set_locale_invalid_set():
+ # Cannot set an invalid locale.
+ assert not tm.can_set_locale("non-existent_locale")
+
+
+def test_can_set_locale_invalid_get(monkeypatch):
+ # see gh-22129
+ #
+ # In some cases, an invalid locale can be set,
+ # but a subsequent getlocale() raises a ValueError.
+
+ def mock_get_locale():
+ raise ValueError()
+
+ with monkeypatch.context() as m:
+ m.setattr(locale, "getlocale", mock_get_locale)
+ assert not tm.can_set_locale("")
+
+
+def test_get_locales_at_least_one():
+ # see gh-9744
+ assert len(_all_locales) > 0
+
+
+@_skip_if_only_one_locale
+def test_get_locales_prefix():
+ first_locale = _all_locales[0]
+ assert len(tm.get_locales(prefix=first_locale[:2])) > 0
+
+
+@_skip_if_only_one_locale
+def test_set_locale():
+ if com._all_none(_current_locale):
+ # Not sure why, but on some Travis runs with pytest,
+ # getlocale() returned (None, None).
+ pytest.skip("Current locale is not set.")
+
+ locale_override = os.environ.get("LOCALE_OVERRIDE", None)
+
+ if locale_override is None:
+ lang, enc = "it_CH", "UTF-8"
+ elif locale_override == "C":
+ lang, enc = "en_US", "ascii"
+ else:
+ lang, enc = locale_override.split(".")
+
+ enc = codecs.lookup(enc).name
+ new_locale = lang, enc
+
+ if not tm.can_set_locale(new_locale):
+ msg = "unsupported locale setting"
+
+ with pytest.raises(locale.Error, match=msg):
+ with tm.set_locale(new_locale):
+ pass
+ else:
+ with tm.set_locale(new_locale) as normalized_locale:
+ new_lang, new_enc = normalized_locale.split(".")
+ new_enc = codecs.lookup(enc).name
+
+ normalized_locale = new_lang, new_enc
+ assert normalized_locale == new_locale
+
+ # Once we exit the "with" statement, locale should be back to what it was.
+ current_locale = locale.getlocale()
+ assert current_locale == _current_locale
diff --git a/pandas/tests/util/test_move.py b/pandas/tests/util/test_move.py
new file mode 100644
index 0000000000000..c12e2f7a167ad
--- /dev/null
+++ b/pandas/tests/util/test_move.py
@@ -0,0 +1,79 @@
+# -*- coding: utf-8 -*-
+import sys
+from uuid import uuid4
+
+import pytest
+
+from pandas.compat import PY3, intern
+from pandas.util._move import BadMove, move_into_mutable_buffer, stolenbuf
+
+
+def test_cannot_create_instance_of_stolen_buffer():
+ # Stolen buffers need to be created through the smart constructor
+ # "move_into_mutable_buffer," which has a bunch of checks in it.
+
+ msg = "cannot create 'pandas.util._move.stolenbuf' instances"
+ with pytest.raises(TypeError, match=msg):
+ stolenbuf()
+
+
+def test_more_than_one_ref():
+ # Test case for when we try to use "move_into_mutable_buffer"
+ # when the object being moved has other references.
+
+ b = b"testing"
+
+ with pytest.raises(BadMove) as e:
+ def handle_success(type_, value, tb):
+ assert value.args[0] is b
+ return type(e).handle_success(e, type_, value, tb) # super
+
+ e.handle_success = handle_success
+ move_into_mutable_buffer(b)
+
+
+def test_exactly_one_ref():
+ # Test case for when the object being moved has exactly one reference.
+
+ b = b"testing"
+
+ # We need to pass an expression on the stack to ensure that there are
+ # not extra references hanging around. We cannot rewrite this test as
+ # buf = b[:-3]
+ # as_stolen_buf = move_into_mutable_buffer(buf)
+ # because then we would have more than one reference to buf.
+ as_stolen_buf = move_into_mutable_buffer(b[:-3])
+
+ # Materialize as byte-array to show that it is mutable.
+ assert bytearray(as_stolen_buf) == b"test"
+
+
+@pytest.mark.skipif(PY3, reason="bytes objects cannot be interned in PY3")
+def test_interned():
+ salt = uuid4().hex
+
+ def make_string():
+ # We need to actually create a new string so that it has refcount
+ # one. We use a uuid so that we know the string could not already
+ # be in the intern table.
+ return "".join(("testing: ", salt))
+
+ # This should work, the string has one reference on the stack.
+ move_into_mutable_buffer(make_string())
+ refcount = [None] # nonlocal
+
+ def ref_capture(ob):
+ # Subtract two because those are the references owned by this frame:
+ # 1. The local variables of this stack frame.
+ # 2. The python data stack of this stack frame.
+ refcount[0] = sys.getrefcount(ob) - 2
+ return ob
+
+ with pytest.raises(BadMove, match="testing"):
+ # If we intern the string, it will still have one reference. Now,
+ # it is in the intern table, so if other people intern the same
+ # string while the mutable buffer holds the first string they will
+ # be the same instance.
+ move_into_mutable_buffer(ref_capture(intern(make_string()))) # noqa
+
+ assert refcount[0] == 1
diff --git a/pandas/tests/util/test_safe_import.py b/pandas/tests/util/test_safe_import.py
new file mode 100644
index 0000000000000..a9c52ef788390
--- /dev/null
+++ b/pandas/tests/util/test_safe_import.py
@@ -0,0 +1,45 @@
+# -*- coding: utf-8 -*-
+import sys
+import types
+
+import pytest
+
+import pandas.util._test_decorators as td
+
+
+@pytest.mark.parametrize("name", ["foo", "hello123"])
+def test_safe_import_non_existent(name):
+ assert not td.safe_import(name)
+
+
+def test_safe_import_exists():
+ assert td.safe_import("pandas")
+
+
+@pytest.mark.parametrize("min_version,valid", [
+ ("0.0.0", True),
+ ("99.99.99", False)
+])
+def test_safe_import_versions(min_version, valid):
+ result = td.safe_import("pandas", min_version=min_version)
+ result = result if valid else not result
+ assert result
+
+
+@pytest.mark.parametrize("min_version,valid", [
+ (None, False),
+ ("1.0", True),
+ ("2.0", False)
+])
+def test_safe_import_dummy(monkeypatch, min_version, valid):
+ mod_name = "hello123"
+
+ mod = types.ModuleType(mod_name)
+ mod.__version__ = "1.5"
+
+ if min_version is not None:
+ monkeypatch.setitem(sys.modules, mod_name, mod)
+
+ result = td.safe_import(mod_name, min_version=min_version)
+ result = result if valid else not result
+ assert result
diff --git a/pandas/tests/util/test_util.py b/pandas/tests/util/test_util.py
index a6cb54ee43909..e4b2f0a75051a 100644
--- a/pandas/tests/util/test_util.py
+++ b/pandas/tests/util/test_util.py
@@ -1,532 +1,50 @@
# -*- coding: utf-8 -*-
-import codecs
-from collections import OrderedDict
-import locale
-import os
-import sys
-from uuid import uuid4
-
import pytest
-from pandas.compat import PY3, intern
from pandas.util._decorators import deprecate_kwarg, make_signature
-from pandas.util._move import BadMove, move_into_mutable_buffer, stolenbuf
-import pandas.util._test_decorators as td
-from pandas.util._validators import (
- validate_args, validate_args_and_kwargs, validate_bool_kwarg,
- validate_kwargs)
+from pandas.util._validators import validate_kwargs
-import pandas.core.common as com
import pandas.util.testing as tm
-class TestDecorators(object):
-
- def setup_method(self, method):
- @deprecate_kwarg('old', 'new')
- def _f1(new=False):
- return new
-
- @deprecate_kwarg('old', 'new', {'yes': True, 'no': False})
- def _f2(new=False):
- return new
-
- @deprecate_kwarg('old', 'new', lambda x: x + 1)
- def _f3(new=0):
- return new
-
- @deprecate_kwarg('old', None)
- def _f4(old=True, unchanged=True):
- return old
-
- self.f1 = _f1
- self.f2 = _f2
- self.f3 = _f3
- self.f4 = _f4
-
- def test_deprecate_kwarg(self):
- x = 78
- with tm.assert_produces_warning(FutureWarning):
- result = self.f1(old=x)
- assert result is x
- with tm.assert_produces_warning(None):
- self.f1(new=x)
-
- def test_dict_deprecate_kwarg(self):
- x = 'yes'
- with tm.assert_produces_warning(FutureWarning):
- result = self.f2(old=x)
- assert result
-
- def test_missing_deprecate_kwarg(self):
- x = 'bogus'
- with tm.assert_produces_warning(FutureWarning):
- result = self.f2(old=x)
- assert result == 'bogus'
-
- def test_callable_deprecate_kwarg(self):
- x = 5
- with tm.assert_produces_warning(FutureWarning):
- result = self.f3(old=x)
- assert result == x + 1
- with pytest.raises(TypeError):
- self.f3(old='hello')
-
- def test_bad_deprecate_kwarg(self):
- with pytest.raises(TypeError):
- @deprecate_kwarg('old', 'new', 0)
- def f4(new=None):
- pass
-
- def test_deprecate_keyword(self):
- x = 9
- with tm.assert_produces_warning(FutureWarning):
- result = self.f4(old=x)
- assert result is x
- with tm.assert_produces_warning(None):
- result = self.f4(unchanged=x)
- assert result is True
-
-
def test_rands():
r = tm.rands(10)
assert(len(r) == 10)
-def test_rands_array():
+def test_rands_array_1d():
arr = tm.rands_array(5, size=10)
assert(arr.shape == (10,))
assert(len(arr[0]) == 5)
+
+def test_rands_array_2d():
arr = tm.rands_array(7, size=(10, 10))
assert(arr.shape == (10, 10))
assert(len(arr[1, 1]) == 7)
-class TestValidateArgs(object):
- fname = 'func'
-
- def test_bad_min_fname_arg_count(self):
- msg = "'max_fname_arg_count' must be non-negative"
- with pytest.raises(ValueError, match=msg):
- validate_args(self.fname, (None,), -1, 'foo')
-
- def test_bad_arg_length_max_value_single(self):
- args = (None, None)
- compat_args = ('foo',)
-
- min_fname_arg_count = 0
- max_length = len(compat_args) + min_fname_arg_count
- actual_length = len(args) + min_fname_arg_count
- msg = (r"{fname}\(\) takes at most {max_length} "
- r"argument \({actual_length} given\)"
- .format(fname=self.fname, max_length=max_length,
- actual_length=actual_length))
-
- with pytest.raises(TypeError, match=msg):
- validate_args(self.fname, args,
- min_fname_arg_count,
- compat_args)
-
- def test_bad_arg_length_max_value_multiple(self):
- args = (None, None)
- compat_args = dict(foo=None)
-
- min_fname_arg_count = 2
- max_length = len(compat_args) + min_fname_arg_count
- actual_length = len(args) + min_fname_arg_count
- msg = (r"{fname}\(\) takes at most {max_length} "
- r"arguments \({actual_length} given\)"
- .format(fname=self.fname, max_length=max_length,
- actual_length=actual_length))
-
- with pytest.raises(TypeError, match=msg):
- validate_args(self.fname, args,
- min_fname_arg_count,
- compat_args)
-
- def test_not_all_defaults(self):
- bad_arg = 'foo'
- msg = ("the '{arg}' parameter is not supported "
- r"in the pandas implementation of {func}\(\)".
- format(arg=bad_arg, func=self.fname))
-
- compat_args = OrderedDict()
- compat_args['foo'] = 2
- compat_args['bar'] = -1
- compat_args['baz'] = 3
-
- arg_vals = (1, -1, 3)
-
- for i in range(1, 3):
- with pytest.raises(ValueError, match=msg):
- validate_args(self.fname, arg_vals[:i], 2, compat_args)
-
- def test_validation(self):
- # No exceptions should be thrown
- validate_args(self.fname, (None,), 2, dict(out=None))
-
- compat_args = OrderedDict()
- compat_args['axis'] = 1
- compat_args['out'] = None
-
- validate_args(self.fname, (1, None), 2, compat_args)
-
-
-class TestValidateKwargs(object):
- fname = 'func'
-
- def test_bad_kwarg(self):
- goodarg = 'f'
- badarg = goodarg + 'o'
-
- compat_args = OrderedDict()
- compat_args[goodarg] = 'foo'
- compat_args[badarg + 'o'] = 'bar'
- kwargs = {goodarg: 'foo', badarg: 'bar'}
- msg = (r"{fname}\(\) got an unexpected "
- r"keyword argument '{arg}'".format(
- fname=self.fname, arg=badarg))
-
- with pytest.raises(TypeError, match=msg):
- validate_kwargs(self.fname, kwargs, compat_args)
-
- def test_not_all_none(self):
- bad_arg = 'foo'
- msg = (r"the '{arg}' parameter is not supported "
- r"in the pandas implementation of {func}\(\)".
- format(arg=bad_arg, func=self.fname))
-
- compat_args = OrderedDict()
- compat_args['foo'] = 1
- compat_args['bar'] = 's'
- compat_args['baz'] = None
-
- kwarg_keys = ('foo', 'bar', 'baz')
- kwarg_vals = (2, 's', None)
-
- for i in range(1, 3):
- kwargs = dict(zip(kwarg_keys[:i],
- kwarg_vals[:i]))
-
- with pytest.raises(ValueError, match=msg):
- validate_kwargs(self.fname, kwargs, compat_args)
-
- def test_validation(self):
- # No exceptions should be thrown
- compat_args = OrderedDict()
- compat_args['f'] = None
- compat_args['b'] = 1
- compat_args['ba'] = 's'
- kwargs = dict(f=None, b=1)
- validate_kwargs(self.fname, kwargs, compat_args)
-
- def test_validate_bool_kwarg(self):
- arg_names = ['inplace', 'copy']
- invalid_values = [1, "True", [1, 2, 3], 5.0]
- valid_values = [True, False, None]
-
- for name in arg_names:
- for value in invalid_values:
- msg = ("For argument \"%s\" "
- "expected type bool, "
- "received type %s" %
- (name, type(value).__name__))
- with pytest.raises(ValueError, match=msg):
- validate_bool_kwarg(value, name)
-
- for value in valid_values:
- assert validate_bool_kwarg(value, name) == value
-
-
-class TestValidateKwargsAndArgs(object):
- fname = 'func'
-
- def test_invalid_total_length_max_length_one(self):
- compat_args = ('foo',)
- kwargs = {'foo': 'FOO'}
- args = ('FoO', 'BaZ')
-
- min_fname_arg_count = 0
- max_length = len(compat_args) + min_fname_arg_count
- actual_length = len(kwargs) + len(args) + min_fname_arg_count
- msg = (r"{fname}\(\) takes at most {max_length} "
- r"argument \({actual_length} given\)"
- .format(fname=self.fname, max_length=max_length,
- actual_length=actual_length))
-
- with pytest.raises(TypeError, match=msg):
- validate_args_and_kwargs(self.fname, args, kwargs,
- min_fname_arg_count,
- compat_args)
-
- def test_invalid_total_length_max_length_multiple(self):
- compat_args = ('foo', 'bar', 'baz')
- kwargs = {'foo': 'FOO', 'bar': 'BAR'}
- args = ('FoO', 'BaZ')
-
- min_fname_arg_count = 2
- max_length = len(compat_args) + min_fname_arg_count
- actual_length = len(kwargs) + len(args) + min_fname_arg_count
- msg = (r"{fname}\(\) takes at most {max_length} "
- r"arguments \({actual_length} given\)"
- .format(fname=self.fname, max_length=max_length,
- actual_length=actual_length))
-
- with pytest.raises(TypeError, match=msg):
- validate_args_and_kwargs(self.fname, args, kwargs,
- min_fname_arg_count,
- compat_args)
-
- def test_no_args_with_kwargs(self):
- bad_arg = 'bar'
- min_fname_arg_count = 2
-
- compat_args = OrderedDict()
- compat_args['foo'] = -5
- compat_args[bad_arg] = 1
-
- msg = (r"the '{arg}' parameter is not supported "
- r"in the pandas implementation of {func}\(\)".
- format(arg=bad_arg, func=self.fname))
-
- args = ()
- kwargs = {'foo': -5, bad_arg: 2}
- with pytest.raises(ValueError, match=msg):
- validate_args_and_kwargs(self.fname, args, kwargs,
- min_fname_arg_count, compat_args)
-
- args = (-5, 2)
- kwargs = {}
- with pytest.raises(ValueError, match=msg):
- validate_args_and_kwargs(self.fname, args, kwargs,
- min_fname_arg_count, compat_args)
-
- def test_duplicate_argument(self):
- min_fname_arg_count = 2
- compat_args = OrderedDict()
- compat_args['foo'] = None
- compat_args['bar'] = None
- compat_args['baz'] = None
- kwargs = {'foo': None, 'bar': None}
- args = (None,) # duplicate value for 'foo'
-
- msg = (r"{fname}\(\) got multiple values for keyword "
- r"argument '{arg}'".format(fname=self.fname, arg='foo'))
-
- with pytest.raises(TypeError, match=msg):
- validate_args_and_kwargs(self.fname, args, kwargs,
- min_fname_arg_count,
- compat_args)
-
- def test_validation(self):
- # No exceptions should be thrown
- compat_args = OrderedDict()
- compat_args['foo'] = 1
- compat_args['bar'] = None
- compat_args['baz'] = -2
- kwargs = {'baz': -2}
- args = (1, None)
-
- min_fname_arg_count = 2
- validate_args_and_kwargs(self.fname, args, kwargs,
- min_fname_arg_count,
- compat_args)
-
-
-class TestMove(object):
-
- def test_cannot_create_instance_of_stolenbuffer(self):
- """Stolen buffers need to be created through the smart constructor
- ``move_into_mutable_buffer`` which has a bunch of checks in it.
- """
- msg = "cannot create 'pandas.util._move.stolenbuf' instances"
- with pytest.raises(TypeError, match=msg):
- stolenbuf()
-
- def test_more_than_one_ref(self):
- """Test case for when we try to use ``move_into_mutable_buffer`` when
- the object being moved has other references.
- """
- b = b'testing'
-
- with pytest.raises(BadMove) as e:
- def handle_success(type_, value, tb):
- assert value.args[0] is b
- return type(e).handle_success(e, type_, value, tb) # super
-
- e.handle_success = handle_success
- move_into_mutable_buffer(b)
-
- def test_exactly_one_ref(self):
- """Test case for when the object being moved has exactly one reference.
- """
- b = b'testing'
-
- # We need to pass an expression on the stack to ensure that there are
- # not extra references hanging around. We cannot rewrite this test as
- # buf = b[:-3]
- # as_stolen_buf = move_into_mutable_buffer(buf)
- # because then we would have more than one reference to buf.
- as_stolen_buf = move_into_mutable_buffer(b[:-3])
-
- # materialize as bytearray to show that it is mutable
- assert bytearray(as_stolen_buf) == b'test'
-
- @pytest.mark.skipif(PY3, reason='bytes objects cannot be interned in py3')
- def test_interned(self):
- salt = uuid4().hex
-
- def make_string():
- # We need to actually create a new string so that it has refcount
- # one. We use a uuid so that we know the string could not already
- # be in the intern table.
- return ''.join(('testing: ', salt))
-
- # This should work, the string has one reference on the stack.
- move_into_mutable_buffer(make_string())
-
- refcount = [None] # nonlocal
-
- def ref_capture(ob):
- # Subtract two because those are the references owned by this
- # frame:
- # 1. The local variables of this stack frame.
- # 2. The python data stack of this stack frame.
- refcount[0] = sys.getrefcount(ob) - 2
- return ob
-
- with pytest.raises(BadMove):
- # If we intern the string it will still have one reference but now
- # it is in the intern table so if other people intern the same
- # string while the mutable buffer holds the first string they will
- # be the same instance.
- move_into_mutable_buffer(ref_capture(intern(make_string()))) # noqa
-
- assert refcount[0] == 1
-
-
-def test_numpy_errstate_is_default():
+def test_numpy_err_state_is_default():
# The defaults since numpy 1.6.0
- expected = {'over': 'warn', 'divide': 'warn', 'invalid': 'warn',
- 'under': 'ignore'}
+ expected = {"over": "warn", "divide": "warn",
+ "invalid": "warn", "under": "ignore"}
import numpy as np
- from pandas.compat import numpy # noqa
- # The errstate should be unchanged after that import.
- assert np.geterr() == expected
-
-
-@td.skip_if_windows
-class TestLocaleUtils(object):
-
- @classmethod
- def setup_class(cls):
- cls.locales = tm.get_locales()
- cls.current_locale = locale.getlocale()
-
- if not cls.locales:
- pytest.skip("No locales found")
-
- @classmethod
- def teardown_class(cls):
- del cls.locales
- del cls.current_locale
-
- def test_can_set_locale_valid_set(self):
- # Setting the default locale should return True
- assert tm.can_set_locale('') is True
-
- def test_can_set_locale_invalid_set(self):
- # Setting an invalid locale should return False
- assert tm.can_set_locale('non-existent_locale') is False
-
- def test_can_set_locale_invalid_get(self, monkeypatch):
- # In some cases, an invalid locale can be set,
- # but a subsequent getlocale() raises a ValueError
- # See GH 22129
-
- def mockgetlocale():
- raise ValueError()
-
- with monkeypatch.context() as m:
- m.setattr(locale, 'getlocale', mockgetlocale)
- assert tm.can_set_locale('') is False
-
- def test_get_locales(self):
- # all systems should have at least a single locale
- # GH9744
- assert len(tm.get_locales()) > 0
- def test_get_locales_prefix(self):
- if len(self.locales) == 1:
- pytest.skip("Only a single locale found, no point in "
- "trying to test filtering locale prefixes")
- first_locale = self.locales[0]
- assert len(tm.get_locales(prefix=first_locale[:2])) > 0
-
- def test_set_locale(self):
- if len(self.locales) == 1:
- pytest.skip("Only a single locale found, no point in "
- "trying to test setting another locale")
-
- if com._all_none(*self.current_locale):
- # Not sure why, but on some travis runs with pytest,
- # getlocale() returned (None, None).
- pytest.skip("Current locale is not set.")
-
- locale_override = os.environ.get('LOCALE_OVERRIDE', None)
-
- if locale_override is None:
- lang, enc = 'it_CH', 'UTF-8'
- elif locale_override == 'C':
- lang, enc = 'en_US', 'ascii'
- else:
- lang, enc = locale_override.split('.')
-
- enc = codecs.lookup(enc).name
- new_locale = lang, enc
-
- if not tm.can_set_locale(new_locale):
- with pytest.raises(locale.Error):
- with tm.set_locale(new_locale):
- pass
- else:
- with tm.set_locale(new_locale) as normalized_locale:
- new_lang, new_enc = normalized_locale.split('.')
- new_enc = codecs.lookup(enc).name
- normalized_locale = new_lang, new_enc
- assert normalized_locale == new_locale
-
- current_locale = locale.getlocale()
- assert current_locale == self.current_locale
-
-
-def test_make_signature():
- # See GH 17608
- # Case where the func does not have default kwargs
- sig = make_signature(validate_kwargs)
- assert sig == (['fname', 'kwargs', 'compat_args'],
- ['fname', 'kwargs', 'compat_args'])
-
- # Case where the func does have default kwargs
- sig = make_signature(deprecate_kwarg)
- assert sig == (['old_arg_name', 'new_arg_name',
- 'mapping=None', 'stacklevel=2'],
- ['old_arg_name', 'new_arg_name', 'mapping', 'stacklevel'])
-
-
-def test_safe_import(monkeypatch):
- assert not td.safe_import("foo")
- assert not td.safe_import("pandas", min_version="99.99.99")
+ # The error state should be unchanged after that import.
+ assert np.geterr() == expected
- # Create dummy module to be imported
- import types
- import sys
- mod_name = "hello123"
- mod = types.ModuleType(mod_name)
- mod.__version__ = "1.5"
- assert not td.safe_import(mod_name)
- monkeypatch.setitem(sys.modules, mod_name, mod)
- assert not td.safe_import(mod_name, min_version="2.0")
- assert td.safe_import(mod_name, min_version="1.0")
+@pytest.mark.parametrize("func,expected", [
+ # Case where the func does not have default kwargs.
+ (validate_kwargs, (["fname", "kwargs", "compat_args"],
+ ["fname", "kwargs", "compat_args"])),
+
+ # Case where the func does have default kwargs.
+ (deprecate_kwarg, (["old_arg_name", "new_arg_name",
+ "mapping=None", "stacklevel=2"],
+ ["old_arg_name", "new_arg_name",
+ "mapping", "stacklevel"]))
+])
+def test_make_signature(func, expected):
+ # see gh-17608
+ assert make_signature(func) == expected
diff --git a/pandas/tests/util/test_validate_args.py b/pandas/tests/util/test_validate_args.py
new file mode 100644
index 0000000000000..ca71b0c9d2522
--- /dev/null
+++ b/pandas/tests/util/test_validate_args.py
@@ -0,0 +1,76 @@
+# -*- coding: utf-8 -*-
+from collections import OrderedDict
+
+import pytest
+
+from pandas.util._validators import validate_args
+
+_fname = "func"
+
+
+def test_bad_min_fname_arg_count():
+ msg = "'max_fname_arg_count' must be non-negative"
+
+ with pytest.raises(ValueError, match=msg):
+ validate_args(_fname, (None,), -1, "foo")
+
+
+def test_bad_arg_length_max_value_single():
+ args = (None, None)
+ compat_args = ("foo",)
+
+ min_fname_arg_count = 0
+ max_length = len(compat_args) + min_fname_arg_count
+ actual_length = len(args) + min_fname_arg_count
+ msg = (r"{fname}\(\) takes at most {max_length} "
+ r"argument \({actual_length} given\)"
+ .format(fname=_fname, max_length=max_length,
+ actual_length=actual_length))
+
+ with pytest.raises(TypeError, match=msg):
+ validate_args(_fname, args, min_fname_arg_count, compat_args)
+
+
+def test_bad_arg_length_max_value_multiple():
+ args = (None, None)
+ compat_args = dict(foo=None)
+
+ min_fname_arg_count = 2
+ max_length = len(compat_args) + min_fname_arg_count
+ actual_length = len(args) + min_fname_arg_count
+ msg = (r"{fname}\(\) takes at most {max_length} "
+ r"arguments \({actual_length} given\)"
+ .format(fname=_fname, max_length=max_length,
+ actual_length=actual_length))
+
+ with pytest.raises(TypeError, match=msg):
+ validate_args(_fname, args, min_fname_arg_count, compat_args)
+
+
+@pytest.mark.parametrize("i", range(1, 3))
+def test_not_all_defaults(i):
+ bad_arg = "foo"
+ msg = ("the '{arg}' parameter is not supported "
+ r"in the pandas implementation of {func}\(\)".
+ format(arg=bad_arg, func=_fname))
+
+ compat_args = OrderedDict()
+ compat_args["foo"] = 2
+ compat_args["bar"] = -1
+ compat_args["baz"] = 3
+
+ arg_vals = (1, -1, 3)
+
+ with pytest.raises(ValueError, match=msg):
+ validate_args(_fname, arg_vals[:i], 2, compat_args)
+
+
+def test_validation():
+ # No exceptions should be raised.
+ validate_args(_fname, (None,), 2, dict(out=None))
+
+ compat_args = OrderedDict()
+ compat_args["axis"] = 1
+ compat_args["out"] = None
+
+ validate_args(_fname, (1, None), 2, compat_args)
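A side note on the `msg` patterns used throughout these test files: `pytest.raises(..., match=msg)` performs a regex search against the exception message, so literal parentheses in the expected text must be escaped, which is why the patterns are written as raw strings with `\(` and `\)`. A small sketch of the underlying matching, using plain `re` so it needs no pytest:

```python
import re

err = "func() takes at most 1 argument (2 given)"

# Escaped pattern, as in the tests above: the parentheses are literal.
assert re.search(r"func\(\) takes at most 1 argument \(2 given\)", err)

# Unescaped, "()" is an empty regex group, so after "func" the pattern
# expects " takes" while the string has "() takes", and the search fails.
assert re.search(r"func() takes at most 1 argument", err) is None
```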
diff --git a/pandas/tests/util/test_validate_args_and_kwargs.py b/pandas/tests/util/test_validate_args_and_kwargs.py
new file mode 100644
index 0000000000000..c3c0b3dedc085
--- /dev/null
+++ b/pandas/tests/util/test_validate_args_and_kwargs.py
@@ -0,0 +1,105 @@
+# -*- coding: utf-8 -*-
+from collections import OrderedDict
+
+import pytest
+
+from pandas.util._validators import validate_args_and_kwargs
+
+_fname = "func"
+
+
+def test_invalid_total_length_max_length_one():
+ compat_args = ("foo",)
+ kwargs = {"foo": "FOO"}
+ args = ("FoO", "BaZ")
+
+ min_fname_arg_count = 0
+ max_length = len(compat_args) + min_fname_arg_count
+ actual_length = len(kwargs) + len(args) + min_fname_arg_count
+
+ msg = (r"{fname}\(\) takes at most {max_length} "
+ r"argument \({actual_length} given\)"
+ .format(fname=_fname, max_length=max_length,
+ actual_length=actual_length))
+
+ with pytest.raises(TypeError, match=msg):
+ validate_args_and_kwargs(_fname, args, kwargs,
+ min_fname_arg_count,
+ compat_args)
+
+
+def test_invalid_total_length_max_length_multiple():
+ compat_args = ("foo", "bar", "baz")
+ kwargs = {"foo": "FOO", "bar": "BAR"}
+ args = ("FoO", "BaZ")
+
+ min_fname_arg_count = 2
+ max_length = len(compat_args) + min_fname_arg_count
+ actual_length = len(kwargs) + len(args) + min_fname_arg_count
+
+ msg = (r"{fname}\(\) takes at most {max_length} "
+ r"arguments \({actual_length} given\)"
+ .format(fname=_fname, max_length=max_length,
+ actual_length=actual_length))
+
+ with pytest.raises(TypeError, match=msg):
+ validate_args_and_kwargs(_fname, args, kwargs,
+ min_fname_arg_count,
+ compat_args)
+
+
+@pytest.mark.parametrize("args,kwargs", [
+ ((), {"foo": -5, "bar": 2}),
+ ((-5, 2), {})
+])
+def test_missing_args_or_kwargs(args, kwargs):
+ bad_arg = "bar"
+ min_fname_arg_count = 2
+
+ compat_args = OrderedDict()
+ compat_args["foo"] = -5
+ compat_args[bad_arg] = 1
+
+ msg = (r"the '{arg}' parameter is not supported "
+ r"in the pandas implementation of {func}\(\)".
+ format(arg=bad_arg, func=_fname))
+
+ with pytest.raises(ValueError, match=msg):
+ validate_args_and_kwargs(_fname, args, kwargs,
+ min_fname_arg_count, compat_args)
+
+
+def test_duplicate_argument():
+ min_fname_arg_count = 2
+
+ compat_args = OrderedDict()
+ compat_args["foo"] = None
+ compat_args["bar"] = None
+ compat_args["baz"] = None
+
+ kwargs = {"foo": None, "bar": None}
+ args = (None,) # duplicate value for "foo"
+
+ msg = (r"{fname}\(\) got multiple values for keyword "
+ r"argument '{arg}'".format(fname=_fname, arg="foo"))
+
+ with pytest.raises(TypeError, match=msg):
+ validate_args_and_kwargs(_fname, args, kwargs,
+ min_fname_arg_count,
+ compat_args)
+
+
+def test_validation():
+ # No exceptions should be raised.
+ compat_args = OrderedDict()
+ compat_args["foo"] = 1
+ compat_args["bar"] = None
+ compat_args["baz"] = -2
+ kwargs = {"baz": -2}
+
+ args = (1, None)
+ min_fname_arg_count = 2
+
+ validate_args_and_kwargs(_fname, args, kwargs,
+ min_fname_arg_count,
+ compat_args)
diff --git a/pandas/tests/util/test_validate_kwargs.py b/pandas/tests/util/test_validate_kwargs.py
new file mode 100644
index 0000000000000..f36818ddfc9a8
--- /dev/null
+++ b/pandas/tests/util/test_validate_kwargs.py
@@ -0,0 +1,72 @@
+# -*- coding: utf-8 -*-
+from collections import OrderedDict
+
+import pytest
+
+from pandas.util._validators import validate_bool_kwarg, validate_kwargs
+
+_fname = "func"
+
+
+def test_bad_kwarg():
+ good_arg = "f"
+ bad_arg = good_arg + "o"
+
+ compat_args = OrderedDict()
+ compat_args[good_arg] = "foo"
+ compat_args[bad_arg + "o"] = "bar"
+ kwargs = {good_arg: "foo", bad_arg: "bar"}
+
+ msg = (r"{fname}\(\) got an unexpected "
+ r"keyword argument '{arg}'".format(fname=_fname, arg=bad_arg))
+
+ with pytest.raises(TypeError, match=msg):
+ validate_kwargs(_fname, kwargs, compat_args)
+
+
+@pytest.mark.parametrize("i", range(1, 3))
+def test_not_all_none(i):
+ bad_arg = "foo"
+ msg = (r"the '{arg}' parameter is not supported "
+ r"in the pandas implementation of {func}\(\)".
+ format(arg=bad_arg, func=_fname))
+
+ compat_args = OrderedDict()
+ compat_args["foo"] = 1
+ compat_args["bar"] = "s"
+ compat_args["baz"] = None
+
+ kwarg_keys = ("foo", "bar", "baz")
+ kwarg_vals = (2, "s", None)
+
+ kwargs = dict(zip(kwarg_keys[:i], kwarg_vals[:i]))
+
+ with pytest.raises(ValueError, match=msg):
+ validate_kwargs(_fname, kwargs, compat_args)
+
+
+def test_validation():
+ # No exceptions should be raised.
+ compat_args = OrderedDict()
+ compat_args["f"] = None
+ compat_args["b"] = 1
+ compat_args["ba"] = "s"
+
+ kwargs = dict(f=None, b=1)
+ validate_kwargs(_fname, kwargs, compat_args)
+
+
+@pytest.mark.parametrize("name", ["inplace", "copy"])
+@pytest.mark.parametrize("value", [1, "True", [1, 2, 3], 5.0])
+def test_validate_bool_kwarg_fail(name, value):
+ msg = ("For argument \"%s\" expected type bool, received type %s" %
+ (name, type(value).__name__))
+
+ with pytest.raises(ValueError, match=msg):
+ validate_bool_kwarg(value, name)
+
+
+@pytest.mark.parametrize("name", ["inplace", "copy"])
+@pytest.mark.parametrize("value", [True, False, None])
+def test_validate_bool_kwarg(name, value):
+ assert validate_bool_kwarg(value, name) == value
| Also breaks up `test_util` into multiple test modules by the function or method tested. | https://api.github.com/repos/pandas-dev/pandas/pulls/24141 | 2018-12-07T10:38:13Z | 2018-12-07T21:14:44Z | 2018-12-07T21:14:44Z | 2018-12-07T23:52:33Z |
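The dummy-module trick exercised in `test_safe_import_dummy` above works by registering a `types.ModuleType` object in `sys.modules`, which makes it importable without any file on disk. A minimal standalone sketch, reusing the `hello123` name from the tests; the manual `del` stands in for the automatic restore that `monkeypatch.setitem` performs under pytest:

```python
import sys
import types

# Build a module object in memory and give it a version attribute,
# mirroring the dummy module constructed in test_safe_import_dummy.
mod = types.ModuleType("hello123")
mod.__version__ = "1.5"

# Registering it in sys.modules makes a plain `import` find it.
sys.modules["hello123"] = mod
import hello123
assert hello123.__version__ == "1.5"

# Clean up; under pytest, monkeypatch.setitem undoes this automatically
# at the end of the test.
del sys.modules["hello123"]
```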
Added log10 to the list of unary functions df.eval can handle | diff --git a/doc/source/enhancingperf.rst b/doc/source/enhancingperf.rst
index 1c873d604cfe0..e40e078ccf075 100644
--- a/doc/source/enhancingperf.rst
+++ b/doc/source/enhancingperf.rst
@@ -482,7 +482,7 @@ These operations are supported by :func:`pandas.eval`:
* Simple variable evaluation, e.g., ``pd.eval('df')`` (this is not very useful)
* Math functions: `sin`, `cos`, `exp`, `log`, `expm1`, `log1p`,
`sqrt`, `sinh`, `cosh`, `tanh`, `arcsin`, `arccos`, `arctan`, `arccosh`,
- `arcsinh`, `arctanh`, `abs` and `arctan2`.
+ `arcsinh`, `arctanh`, `abs`, `arctan2` and `log10`.
This Python syntax is **not** allowed:
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 4e12b22c8ccac..3da5ee210bc1f 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1375,6 +1375,7 @@ Numeric
- :meth:`Series.agg` can now handle numpy NaN-aware methods like :func:`numpy.nansum` (:issue:`19629`)
- Bug in :meth:`Series.rank` and :meth:`DataFrame.rank` when ``pct=True`` and more than 2:sup:`24` rows are present resulted in percentages greater than 1.0 (:issue:`18271`)
- Calls such as :meth:`DataFrame.round` with a non-unique :meth:`CategoricalIndex` now return expected data. Previously, data would be improperly duplicated (:issue:`21809`).
+- Added ``log10`` to the list of supported functions in :meth:`DataFrame.eval` (:issue:`24139`)
Strings
^^^^^^^
diff --git a/pandas/core/computation/ops.py b/pandas/core/computation/ops.py
index 9e9f124352229..cbdb3525d5e88 100644
--- a/pandas/core/computation/ops.py
+++ b/pandas/core/computation/ops.py
@@ -23,7 +23,7 @@
_unary_math_ops = ('sin', 'cos', 'exp', 'log', 'expm1', 'log1p',
'sqrt', 'sinh', 'cosh', 'tanh', 'arcsin', 'arccos',
- 'arctan', 'arccosh', 'arcsinh', 'arctanh', 'abs')
+ 'arctan', 'arccosh', 'arcsinh', 'arctanh', 'abs', 'log10')
_binary_math_ops = ('arctan2',)
_mathops = _unary_math_ops + _binary_math_ops
| - [x] closes https://github.com/pandas-dev/pandas/issues/24139
- [x] tests ~~added~~ / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
`(pandas) ➜ pandas git:(binary_math_log) git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
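For reference, once `log10` is registered in `_unary_math_ops` it can be called directly inside an eval expression, like the other functions in the supported list. A minimal sketch of the post-change behaviour; the column name `A` is made up:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1.0, 10.0, 100.0]})

# With log10 in the unary math ops, eval resolves the call the same
# way it does sin, cos, exp, etc.
result = df.eval("log10(A)")
assert np.allclose(np.asarray(result), [0.0, 1.0, 2.0])
```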
| https://api.github.com/repos/pandas-dev/pandas/pulls/24140 | 2018-12-07T07:57:31Z | 2018-12-09T18:52:10Z | 2018-12-09T18:52:10Z | 2018-12-10T08:59:21Z |
Fix repr of DataFrame with IntervalIndex | diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index e526aa72affee..14e73b957d519 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -1015,10 +1015,11 @@ def _format_with_header(self, header, **kwargs):
def _format_native_types(self, na_rep='', quoting=None, **kwargs):
""" actually format my specific types """
- from pandas.io.formats.format import IntervalArrayFormatter
- return IntervalArrayFormatter(values=self,
- na_rep=na_rep,
- justify='all').get_result()
+ from pandas.io.formats.format import ExtensionArrayFormatter
+ return ExtensionArrayFormatter(values=self,
+ na_rep=na_rep,
+ justify='all',
+ leading_space=False).get_result()
def _format_data(self, name=None):
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 8452eb562a8e6..9b371d00d8072 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -840,7 +840,34 @@ def _get_column_name_list(self):
def format_array(values, formatter, float_format=None, na_rep='NaN',
- digits=None, space=None, justify='right', decimal='.'):
+ digits=None, space=None, justify='right', decimal='.',
+ leading_space=None):
+ """
+ Format an array for printing.
+
+ Parameters
+ ----------
+ values
+ formatter
+ float_format
+ na_rep
+ digits
+ space
+ justify
+ decimal
+ leading_space : bool, optional
+ Whether the array should be formatted with a leading space.
+ When an array as a column of a Series or DataFrame, we do want
+ the leading space to pad between columns.
+
+ When formatting an Index subclass
+ (e.g. IntervalIndex._format_native_types), we don't want the
+ leading space since it should be left-aligned.
+
+ Returns
+ -------
+ List[str]
+ """
if is_datetime64_dtype(values.dtype):
fmt_klass = Datetime64Formatter
@@ -868,7 +895,8 @@ def format_array(values, formatter, float_format=None, na_rep='NaN',
fmt_obj = fmt_klass(values, digits=digits, na_rep=na_rep,
float_format=float_format, formatter=formatter,
- space=space, justify=justify, decimal=decimal)
+ space=space, justify=justify, decimal=decimal,
+ leading_space=leading_space)
return fmt_obj.get_result()
@@ -877,7 +905,7 @@ class GenericArrayFormatter(object):
def __init__(self, values, digits=7, formatter=None, na_rep='NaN',
space=12, float_format=None, justify='right', decimal='.',
- quoting=None, fixed_width=True):
+ quoting=None, fixed_width=True, leading_space=None):
self.values = values
self.digits = digits
self.na_rep = na_rep
@@ -888,6 +916,7 @@ def __init__(self, values, digits=7, formatter=None, na_rep='NaN',
self.decimal = decimal
self.quoting = quoting
self.fixed_width = fixed_width
+ self.leading_space = leading_space
def get_result(self):
fmt_values = self._format_strings()
@@ -927,7 +956,9 @@ def _format(x):
vals = vals.values
is_float_type = lib.map_infer(vals, is_float) & notna(vals)
- leading_space = is_float_type.any()
+ leading_space = self.leading_space
+ if leading_space is None:
+ leading_space = is_float_type.any()
fmt_values = []
for i, v in enumerate(vals):
@@ -936,7 +967,13 @@ def _format(x):
elif is_float_type[i]:
fmt_values.append(float_format(v))
else:
- fmt_values.append(u' {v}'.format(v=_format(v)))
+ if leading_space is False:
+ # False specifically, so that the default is
+ # to include a space if we get here.
+ tpl = u'{v}'
+ else:
+ tpl = u' {v}'
+ fmt_values.append(tpl.format(v=_format(v)))
return fmt_values
@@ -1135,7 +1172,8 @@ def _format_strings(self):
formatter,
float_format=self.float_format,
na_rep=self.na_rep, digits=self.digits,
- space=self.space, justify=self.justify)
+ space=self.space, justify=self.justify,
+ leading_space=self.leading_space)
return fmt_values
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index c4dac6948cd7a..1eedaa6ad90d1 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -377,7 +377,21 @@ def test_repr_max_seq_item_setting(self):
def test_repr_roundtrip(self):
super(TestIntervalIndex, self).test_repr_roundtrip()
- # TODO: check this behavior is consistent with test_interval_new.py
+ def test_frame_repr(self):
+ # https://github.com/pandas-dev/pandas/pull/24134/files
+ df = pd.DataFrame({'A': [1, 2, 3, 4]},
+ index=pd.IntervalIndex.from_breaks([0, 1, 2, 3, 4]))
+ result = repr(df)
+ expected = (
+ ' A\n'
+ '(0, 1] 1\n'
+ '(1, 2] 2\n'
+ '(2, 3] 3\n'
+ '(3, 4] 4'
+ )
+ assert result == expected
+
+ # TODO: check this behavior is consistent with test_interval_new.py
def test_get_item(self, closed):
i = IntervalIndex.from_arrays((0, 1, np.nan), (1, 2, np.nan),
closed=closed)
diff --git a/pandas/tests/indexes/period/test_formats.py b/pandas/tests/indexes/period/test_formats.py
index d4035efa2b866..5b2940372b9d7 100644
--- a/pandas/tests/indexes/period/test_formats.py
+++ b/pandas/tests/indexes/period/test_formats.py
@@ -49,6 +49,18 @@ def test_to_native_types():
class TestPeriodIndexRendering(object):
+
+ def test_frame_repr(self):
+ df = pd.DataFrame({"A": [1, 2, 3]},
+ index=pd.date_range('2000', periods=3))
+ result = repr(df)
+ expected = (
+ ' A\n'
+ '2000-01-01 1\n'
+ '2000-01-02 2\n'
+ '2000-01-03 3')
+ assert result == expected
+
@pytest.mark.parametrize('method', ['__repr__', '__unicode__', '__str__'])
def test_representation(self, method):
# GH#7601
diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index 937e5e5a6af51..bb537f30821e4 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -812,6 +812,13 @@ def test_equals_categoridcal_unordered(self):
assert not a.equals(c)
assert not b.equals(c)
+ def test_frame_repr(self):
+ df = pd.DataFrame({"A": [1, 2, 3]},
+ index=pd.CategoricalIndex(['a', 'b', 'c']))
+ result = repr(df)
+ expected = ' A\na 1\nb 2\nc 3'
+ assert result == expected
+
def test_string_categorical_index_repr(self):
# short
idx = pd.CategoricalIndex(['a', 'bb', 'ccc'])
| @TomAugspurger after the repr PR, the docs build catched an error: the repr of a DataFrame with an IntervalIndex started failing:
```
In [1]: df = pd.DataFrame({'A': [1, 2, 3, 4]},
...: index=pd.IntervalIndex.from_breaks([0, 1, 2, 3, 4]))
In [2]: df
Out[2]: ---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~/miniconda3/envs/dev/lib/python3.5/site-packages/IPython/core/formatters.py in __call__(self, obj)
700 type_pprinters=self.type_printers,
701 deferred_pprinters=self.deferred_printers)
--> 702 printer.pretty(obj)
703 printer.flush()
704 return stream.getvalue()
~/miniconda3/envs/dev/lib/python3.5/site-packages/IPython/lib/pretty.py in pretty(self, obj)
400 if cls is not object \
401 and callable(cls.__dict__.get('__repr__')):
--> 402 return _repr_pprint(obj, self, cycle)
403
404 return _default_pprint(obj, self, cycle)
~/miniconda3/envs/dev/lib/python3.5/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle)
695 """A pprint that just redirects to the normal repr function."""
696 # Find newlines and replace them with p.break_()
--> 697 output = repr(obj)
698 for idx,output_line in enumerate(output.splitlines()):
699 if idx:
~/scipy/pandas/pandas/core/base.py in __repr__(self)
75 Yields Bytestring in Py2, Unicode String in py3.
76 """
---> 77 return str(self)
78
79
~/scipy/pandas/pandas/core/base.py in __str__(self)
54
55 if compat.PY3:
---> 56 return self.__unicode__()
57 return self.__bytes__()
58
~/scipy/pandas/pandas/core/frame.py in __unicode__(self)
626 width = None
627 self.to_string(buf=buf, max_rows=max_rows, max_cols=max_cols,
--> 628 line_width=width, show_dimensions=show_dimensions)
629
630 return buf.getvalue()
~/scipy/pandas/pandas/core/frame.py in to_string(self, buf, columns, col_space, header, index, na_rep, formatters, float_format, sparsify, index_names, justify, max_rows, max_cols, show_dimensions, decimal, line_width)
707 decimal=decimal,
708 line_width=line_width)
--> 709 formatter.to_string()
710
711 if buf is None:
~/scipy/pandas/pandas/io/formats/format.py in to_string(self)
601 else:
602
--> 603 strcols = self._to_str_columns()
604 if self.line_width is None: # no need to wrap around just print
605 # the whole frame
~/scipy/pandas/pandas/io/formats/format.py in _to_str_columns(self)
510 # may include levels names also
511
--> 512 str_index = self._get_formatted_index(frame)
513
514 if not is_list_like(self.header) and not self.header:
~/scipy/pandas/pandas/io/formats/format.py in _get_formatted_index(self, frame)
807 names=show_index_names, formatter=fmt)
808 else:
--> 809 fmt_index = [index.format(name=show_index_names, formatter=fmt)]
810 fmt_index = [tuple(_make_fixed_width(list(x), justify='left',
811 minimum=(self.col_space or 0),
~/scipy/pandas/pandas/core/indexes/base.py in format(self, name, formatter, **kwargs)
993 return header + list(self.map(formatter))
994
--> 995 return self._format_with_header(header, **kwargs)
996
997 def _format_with_header(self, header, na_rep='NaN', **kwargs):
~/scipy/pandas/pandas/core/indexes/interval.py in _format_with_header(self, header, **kwargs)
1012
1013 def _format_with_header(self, header, **kwargs):
-> 1014 return header + list(self._format_native_types(**kwargs))
1015
1016 def _format_native_types(self, na_rep='', quoting=None, **kwargs):
~/scipy/pandas/pandas/core/indexes/interval.py in _format_native_types(self, na_rep, quoting, **kwargs)
1016 def _format_native_types(self, na_rep='', quoting=None, **kwargs):
1017 """ actually format my specific types """
-> 1018 from pandas.io.formats.format import IntervalArrayFormatter
1019 return IntervalArrayFormatter(values=self,
1020 na_rep=na_rep,
ImportError: cannot import name 'IntervalArrayFormatter'
```
What is in this PR "fixes" the immediate error, but I see a difference from the previous behaviour:
On 0.23.4:
```
In [2]: i = pd.io.formats.format.IntervalArrayFormatter(pd.interval_range(1, 5))
In [3]: i.get_result()
Out[3]: ['(1, 2]', '(2, 3]', '(3, 4]', '(4, 5]']
```
On master:
```
In [22]: i = pd.io.formats.format.ExtensionArrayFormatter(pd.interval_range(1, 5))
In [23]: i.get_result()
Out[23]: [' (1, 2]', ' (2, 3]', ' (3, 4]', ' (4, 5]']
```
So there is now an extra space, which means the DataFrame repr also starts with a space.
(I still need to add tests, and this change might also break existing tests due to the whitespace difference)
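The difference between the two quoted outputs can be captured as a trivial sketch:

```python
# The outputs quoted above, side by side: the new ExtensionArrayFormatter
# prefixes each formatted interval with a space that the old
# IntervalArrayFormatter did not emit.
old = ['(1, 2]', '(2, 3]', '(3, 4]', '(4, 5]']
new = [' (1, 2]', ' (2, 3]', ' (3, 4]', ' (4, 5]']
assert all(n == ' ' + o for n, o in zip(new, old))
```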
| https://api.github.com/repos/pandas-dev/pandas/pulls/24134 | 2018-12-06T22:10:22Z | 2018-12-13T00:51:11Z | 2018-12-13T00:51:11Z | 2018-12-16T07:54:28Z |
DOC: Fix order section in docstrings (part II) | diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index dae88d3b707bf..1484a1299b246 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -158,6 +158,16 @@ cdef class Interval(IntervalMixin):
Whether the interval is closed on the left-side, right-side, both or
neither. See the Notes for more detailed explanation.
+ See Also
+ --------
+ IntervalIndex : An Index of Interval objects that are all closed on the
+ same side.
+ cut : Convert continuous data into discrete bins (Categorical
+ of Interval objects).
+ qcut : Convert continuous data into bins (Categorical of Interval objects)
+ based on quantiles.
+ Period : Represents a period of time.
+
Notes
-----
The parameters `left` and `right` must be from the same type, you must be
@@ -226,16 +236,6 @@ cdef class Interval(IntervalMixin):
>>> volume_1 = pd.Interval('Ant', 'Dog', closed='both')
>>> 'Bee' in volume_1
True
-
- See Also
- --------
- IntervalIndex : An Index of Interval objects that are all closed on the
- same side.
- cut : Convert continuous data into discrete bins (Categorical
- of Interval objects).
- qcut : Convert continuous data into bins (Categorical of Interval objects)
- based on quantiles.
- Period : Represents a period of time.
"""
_typ = "interval"
@@ -387,6 +387,11 @@ cdef class Interval(IntervalMixin):
bool
``True`` if the two intervals overlap, else ``False``.
+ See Also
+ --------
+ IntervalArray.overlaps : The corresponding method for IntervalArray
+ IntervalIndex.overlaps : The corresponding method for IntervalIndex
+
Examples
--------
>>> i1 = pd.Interval(0, 2)
@@ -409,11 +414,6 @@ cdef class Interval(IntervalMixin):
>>> i6 = pd.Interval(1, 2, closed='neither')
>>> i4.overlaps(i6)
False
-
- See Also
- --------
- IntervalArray.overlaps : The corresponding method for IntervalArray
- IntervalIndex.overlaps : The corresponding method for IntervalIndex
"""
if not isinstance(other, Interval):
msg = '`other` must be an Interval, got {other}'
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index 42ec235992089..d61b3aa3135d1 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -471,10 +471,6 @@ class NaTType(_NaT):
"""
Round the Timestamp to the specified resolution
- Returns
- -------
- a new Timestamp rounded to the given resolution of `freq`
-
Parameters
----------
freq : a freq string indicating the rounding resolution
@@ -497,6 +493,10 @@ class NaTType(_NaT):
.. versionadded:: 0.24.0
+ Returns
+ -------
+ a new Timestamp rounded to the given resolution of `freq`
+
Raises
------
ValueError if the freq cannot be converted
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index b0bead2f66ce4..904089cacf537 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -1059,6 +1059,10 @@ cdef class _Timedelta(timedelta):
-------
formatted : str
+ See Also
+ --------
+ Timestamp.isoformat
+
Notes
-----
The longest component is days, whose value may be larger than
@@ -1081,10 +1085,6 @@ cdef class _Timedelta(timedelta):
'P0DT0H0M10S'
>>> pd.Timedelta(days=500.5).isoformat()
'P500DT12H0MS'
-
- See Also
- --------
- Timestamp.isoformat
"""
components = self.components
seconds = '{}.{:0>3}{:0>3}{:0>3}'.format(components.seconds,
@@ -1210,14 +1210,14 @@ class Timedelta(_Timedelta):
"""
Round the Timedelta to the specified resolution
- Returns
- -------
- a new Timedelta rounded to the given resolution of `freq`
-
Parameters
----------
freq : a freq string indicating the rounding resolution
+ Returns
+ -------
+ a new Timedelta rounded to the given resolution of `freq`
+
Raises
------
ValueError if the freq cannot be converted
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 472ac0ee6d45c..eda2b2fb6ca98 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -776,10 +776,6 @@ class Timestamp(_Timestamp):
"""
Round the Timestamp to the specified resolution
- Returns
- -------
- a new Timestamp rounded to the given resolution of `freq`
-
Parameters
----------
freq : a freq string indicating the rounding resolution
@@ -802,6 +798,10 @@ class Timestamp(_Timestamp):
.. versionadded:: 0.24.0
+ Returns
+ -------
+ a new Timestamp rounded to the given resolution of `freq`
+
Raises
------
ValueError if the freq cannot be converted
diff --git a/pandas/core/accessor.py b/pandas/core/accessor.py
index 93b4ce31a1e25..961488ff12e58 100644
--- a/pandas/core/accessor.py
+++ b/pandas/core/accessor.py
@@ -201,6 +201,10 @@ def decorator(accessor):
Name under which the accessor should be registered. A warning is issued
if this name conflicts with a preexisting attribute.
+See Also
+--------
+%(others)s
+
Notes
-----
When accessed, your accessor will be initialized with the pandas object
@@ -250,10 +254,6 @@ def plot(self):
(5.0, 10.0)
>>> ds.geo.plot()
# plots data on a map
-
-See Also
---------
-%(others)s
"""
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 4b915922cef93..f8f3ea7f0e72a 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -976,6 +976,14 @@ class GroupBy(_GroupBy):
name : string
Most users should ignore this
+ Returns
+ -------
+ **Attributes**
+ groups : dict
+ {group name -> group labels}
+ len(grouped) : int
+ Number of groups
+
Notes
-----
After grouping, see aggregate, apply, and transform functions. Here are
@@ -1009,14 +1017,6 @@ class GroupBy(_GroupBy):
See the online documentation for full exposition on these topics and much
more
-
- Returns
- -------
- **Attributes**
- groups : dict
- {group name -> group labels}
- len(grouped) : int
- Number of groups
"""
def _bool_agg(self, val_test, skipna):
"""
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 7b842d141e839..9920fcbcbd2b8 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -51,13 +51,13 @@ class Resampler(_GroupBy):
kind : str or None
'period', 'timestamp' to override default index treatement
- Notes
- -----
- After resampling, see aggregate, apply, and transform functions.
-
Returns
-------
a Resampler of the appropriate type
+
+ Notes
+ -----
+ After resampling, see aggregate, apply, and transform functions.
"""
# to the groupby descriptor
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 8c4803a732dd8..7aa4b51e0f55a 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -1294,6 +1294,13 @@ def kurt(self, **kwargs):
Returned object type is determined by the caller of the %(name)s
calculation.
+ See Also
+ --------
+ pandas.Series.quantile : Computes value at the given quantile over all data
+ in Series.
+ pandas.DataFrame.quantile : Computes values at the given quantile over
+ requested axis in DataFrame.
+
Examples
--------
>>> s = pd.Series([1, 2, 3, 4])
@@ -1310,13 +1317,6 @@ def kurt(self, **kwargs):
2 2.5
3 3.5
dtype: float64
-
- See Also
- --------
- pandas.Series.quantile : Computes value at the given quantile over all data
- in Series.
- pandas.DataFrame.quantile : Computes values at the given quantile over
- requested axis in DataFrame.
""")
def quantile(self, quantile, interpolation='linear', **kwargs):
| - [X] closes #24125
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
This is to build on #24126 .
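For reference, the numpydoc section ordering these docstring fixes move toward can be sketched with a minimal, hypothetical function (the function itself is illustrative, not part of the PR):

```python
def func(x):
    """Summary line.

    Parameters
    ----------
    x : int
        Input value.

    Returns
    -------
    int
        The input, unchanged.

    See Also
    --------
    other_func : A related (hypothetical) function.

    Notes
    -----
    ``See Also`` sits after ``Returns`` and before ``Notes`` and
    ``Examples`` -- the ordering this PR enforces.

    Examples
    --------
    >>> func(1)
    1
    """
    return x
```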
As I commented in the issue, this pull request will fix an additional 10 of the 24 errors; the remaining 14 are referenced in the issue. | https://api.github.com/repos/pandas-dev/pandas/pulls/24132 | 2018-12-06T20:28:38Z | 2018-12-16T02:08:26Z | 2018-12-16T02:08:26Z | 2018-12-20T00:34:06Z
BUG: date_range issue with sub-second granularity | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 4e12b22c8ccac..57a605533b099 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1305,6 +1305,7 @@ Datetimelike
- Bug in :class:`DatetimeIndex` where calling ``np.array(dtindex, dtype=object)`` would incorrectly return an array of ``long`` objects (:issue:`23524`)
- Bug in :class:`Index` where passing a timezone-aware :class:`DatetimeIndex` and `dtype=object` would incorrectly raise a ``ValueError`` (:issue:`23524`)
- Bug in :class:`Index` where calling ``np.array(dtindex, dtype=object)`` on a timezone-naive :class:`DatetimeIndex` would return an array of ``datetime`` objects instead of :class:`Timestamp` objects, potentially losing nanosecond portions of the timestamps (:issue:`23524`)
+- Bug in :func:`date_range` where using dates with millisecond resolution or higher could return incorrect values or the wrong number of values in the index (:issue:`24110`)
Timedelta
^^^^^^^^^
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index a92e2f6157b40..ba1ff3826aa18 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -307,7 +307,12 @@ def _generate_range(cls, start, end, periods, freq, tz=None,
end = end.tz_localize(tz).asm8
else:
# Create a linearly spaced date_range in local time
- arr = np.linspace(start.value, end.value, periods)
+ # Nanosecond-granularity timestamps aren't always correctly
+ # representable with doubles, so we limit the range that we
+ # pass to np.linspace as much as possible
+ arr = np.linspace(
+ 0, end.value - start.value,
+ periods, dtype='int64') + start.value
index = cls._simple_new(
arr.astype('M8[ns]', copy=False), freq=None, tz=tz
)
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index 54a04ab6f80fd..11cefec4f34cf 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -769,3 +769,14 @@ def test_all_custom_freq(self, freq):
msg = 'invalid custom frequency string: {freq}'
with pytest.raises(ValueError, match=msg.format(freq=bad_freq)):
bdate_range(START, END, freq=bad_freq)
+
+ @pytest.mark.parametrize('start_end', [
+ ('2018-01-01T00:00:01.000Z', '2018-01-03T00:00:01.000Z'),
+ ('2018-01-01T00:00:00.010Z', '2018-01-03T00:00:00.010Z'),
+ ('2001-01-01T00:00:00.010Z', '2001-01-03T00:00:00.010Z')])
+ def test_range_with_millisecond_resolution(self, start_end):
+ # https://github.com/pandas-dev/pandas/issues/24110
+ start, end = start_end
+ result = pd.date_range(start=start, end=end, periods=2, closed='left')
+ expected = DatetimeIndex([start])
+ tm.assert_index_equal(result, expected)
| Improves (but doesn't completely resolve) #24110 by avoiding rounding
issues with sub-second granularity timestamps when creating a
date range.
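The failure mode can be sketched without pandas: nanosecond-resolution timestamps exceed float64's 53-bit mantissa, so interpolating directly on the raw int64 values (the old code path) silently loses the sub-second part, while interpolating on the offset from `start` (the new code path) does not. The concrete values below are illustrative.

```python
# Sketch of the precision issue fixed here (values are illustrative).
start = 1514764800010000000  # 2018-01-01T00:00:00.010Z in ns since the epoch
end = start + 2 * 24 * 3600 * 10**9  # two days later, same millisecond

# Old approach: np.linspace(start.value, end.value, periods) works in
# float64, which near 1.5e18 can only resolve values to the nearest
# ~256 ns -- the .010 s component is rounded away.
assert int(float(start)) != start

# New approach: interpolate on the offset from start. The span (~1.7e14 ns)
# fits comfortably within 53 bits, so the endpoints survive exactly.
span = end - start
assert int(float(span)) == span
assert start + int(float(span)) == end
```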
- [x] closes #24110
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/24129 | 2018-12-06T15:00:13Z | 2018-12-09T22:25:41Z | 2018-12-09T22:25:41Z | 2018-12-09T22:26:20Z |
ENH: fill_value argument for shift #15486 | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 1fb43de5f4c5a..2693c98e56582 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -31,6 +31,7 @@ New features
- :func:`read_feather` now accepts ``columns`` as an argument, allowing the user to specify which columns should be read. (:issue:`24025`)
- :func:`DataFrame.to_html` now accepts ``render_links`` as an argument, allowing the user to generate HTML with links to any URLs that appear in the DataFrame.
See the :ref:`section on writing HTML <io.html>` in the IO docs for example usage. (:issue:`2679`)
+- :meth:`DataFrame.shift` :meth:`Series.shift`, :meth:`ExtensionArray.shift`, :meth:`SparseArray.shift`, :meth:`Period.shift`, :meth:`GroupBy.shift`, :meth:`Categorical.shift`, :meth:`NDFrame.shift` and :meth:`Block.shift` now accept `fill_value` as an argument, allowing the user to specify a value which will be used instead of NA/NaT in the empty periods. (:issue:`15486`)
.. _whatsnew_0240.values_api:
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index cf145064fd7b1..262f043e1f0a1 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -15,6 +15,7 @@
from pandas.core.dtypes.common import is_list_like
from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries
+from pandas.core.dtypes.missing import isna
from pandas.core import ops
@@ -446,8 +447,8 @@ def dropna(self):
"""
return self[~self.isna()]
- def shift(self, periods=1):
- # type: (int) -> ExtensionArray
+ def shift(self, periods=1, fill_value=None):
+ # type: (int, object) -> ExtensionArray
"""
Shift values by desired number.
@@ -462,6 +463,12 @@ def shift(self, periods=1):
The number of periods to shift. Negative values are allowed
for shifting backwards.
+ fill_value : object, optional
+ The scalar value to use for newly introduced missing values.
+ The default is ``self.dtype.na_value``
+
+ .. versionadded:: 0.24.0
+
Returns
-------
shifted : ExtensionArray
@@ -480,8 +487,11 @@ def shift(self, periods=1):
if not len(self) or periods == 0:
return self.copy()
+ if isna(fill_value):
+ fill_value = self.dtype.na_value
+
empty = self._from_sequence(
- [self.dtype.na_value] * min(abs(periods), len(self)),
+ [fill_value] * min(abs(periods), len(self)),
dtype=self.dtype
)
if periods > 0:
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 6ccb8dc5d2725..86fbb6f966089 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1258,7 +1258,7 @@ def shape(self):
return tuple([len(self._codes)])
- def shift(self, periods):
+ def shift(self, periods, fill_value=None):
"""
Shift Categorical by desired number of periods.
@@ -1266,6 +1266,10 @@ def shift(self, periods):
----------
periods : int
Number of periods to move, can be positive or negative
+ fill_value : object, optional
+ The scalar value to use for newly introduced missing values.
+
+ .. versionadded:: 0.24.0
Returns
-------
@@ -1278,10 +1282,18 @@ def shift(self, periods):
raise NotImplementedError("Categorical with ndim > 1.")
if np.prod(codes.shape) and (periods != 0):
codes = np.roll(codes, ensure_platform_int(periods), axis=0)
+ if isna(fill_value):
+ fill_value = -1
+ elif fill_value in self.categories:
+ fill_value = self.categories.get_loc(fill_value)
+ else:
+ raise ValueError("'fill_value={}' is not present "
+ "in this Categorical's "
+ "categories".format(fill_value))
if periods > 0:
- codes[:periods] = -1
+ codes[:periods] = fill_value
else:
- codes[periods:] = -1
+ codes[periods:] = fill_value
return self.from_codes(codes, categories=self.categories,
ordered=self.ordered)
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 60febc5f5636d..a29454d2563e4 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -453,7 +453,7 @@ def value_counts(self, dropna=False):
# --------------------------------------------------------------------
- def shift(self, periods=1):
+ def shift(self, periods=1, fill_value=None):
"""
Shift values by desired number.
@@ -467,6 +467,9 @@ def shift(self, periods=1):
periods : int, default 1
The number of periods to shift. Negative values are allowed
for shifting backwards.
+ fill_value : optional, default NaT
+
+ .. versionadded:: 0.24.0
Returns
-------
@@ -475,7 +478,7 @@ def shift(self, periods=1):
# TODO(DatetimeArray): remove
# The semantics for Index.shift differ from EA.shift
# then just call super.
- return ExtensionArray.shift(self, periods)
+ return ExtensionArray.shift(self, periods, fill_value=fill_value)
def _time_shift(self, n, freq=None):
"""
diff --git a/pandas/core/arrays/sparse.py b/pandas/core/arrays/sparse.py
index 9e1d2efc21b81..e4a8c21bbb839 100644
--- a/pandas/core/arrays/sparse.py
+++ b/pandas/core/arrays/sparse.py
@@ -889,12 +889,15 @@ def fillna(self, value=None, method=None, limit=None):
return self._simple_new(new_values, self._sparse_index, new_dtype)
- def shift(self, periods=1):
+ def shift(self, periods=1, fill_value=None):
if not len(self) or periods == 0:
return self.copy()
- subtype = np.result_type(np.nan, self.dtype.subtype)
+ if isna(fill_value):
+ fill_value = self.dtype.na_value
+
+ subtype = np.result_type(fill_value, self.dtype.subtype)
if subtype != self.dtype.subtype:
# just coerce up front
@@ -903,7 +906,7 @@ def shift(self, periods=1):
arr = self
empty = self._from_sequence(
- [self.dtype.na_value] * min(abs(periods), len(self)),
+ [fill_value] * min(abs(periods), len(self)),
dtype=arr.dtype
)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c4537db254132..e99bb88fe80b5 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3939,9 +3939,9 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
method=method)
@Appender(_shared_docs['shift'] % _shared_doc_kwargs)
- def shift(self, periods=1, freq=None, axis=0):
+ def shift(self, periods=1, freq=None, axis=0, fill_value=None):
return super(DataFrame, self).shift(periods=periods, freq=freq,
- axis=axis)
+ axis=axis, fill_value=fill_value)
def set_index(self, keys, drop=True, append=False, inplace=False,
verify_integrity=False):
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 6eb6bc124c80a..d893ebded330d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8849,6 +8849,14 @@ def mask(self, cond, other=np.nan, inplace=False, axis=None, level=None,
extend the index when shifting and preserve the original data.
axis : {0 or 'index', 1 or 'columns', None}, default None
Shift direction.
+ fill_value : object, optional
+ The scalar value to use for newly introduced missing values.
+ the default depends on the dtype of `self`.
+ For numeric data, ``np.nan`` is used.
+ For datetime, timedelta, or period data, etc. :attr:`NaT` is used.
+ For extension dtypes, ``self.dtype.na_value`` is used.
+
+ .. versionchanged:: 0.24.0
Returns
-------
@@ -8884,16 +8892,25 @@ def mask(self, cond, other=np.nan, inplace=False, axis=None, level=None,
2 NaN 15.0 18.0
3 NaN 30.0 33.0
4 NaN 45.0 48.0
+
+ >>> df.shift(periods=3, fill_value=0)
+ Col1 Col2 Col3
+ 0 0 0 0
+ 1 0 0 0
+ 2 0 0 0
+ 3 10 13 17
+ 4 20 23 27
""")
@Appender(_shared_docs['shift'] % _shared_doc_kwargs)
- def shift(self, periods=1, freq=None, axis=0):
+ def shift(self, periods=1, freq=None, axis=0, fill_value=None):
if periods == 0:
return self.copy()
block_axis = self._get_block_manager_axis(axis)
if freq is None:
- new_data = self._data.shift(periods=periods, axis=block_axis)
+ new_data = self._data.shift(periods=periods, axis=block_axis,
+ fill_value=fill_value)
else:
return self.tshift(periods, freq)
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index f7c6ccdc25395..60b6c843492c7 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1994,7 +1994,7 @@ def _get_cythonized_result(self, how, grouper, aggregate=False,
@Substitution(name='groupby')
@Appender(_common_see_also)
- def shift(self, periods=1, freq=None, axis=0):
+ def shift(self, periods=1, freq=None, axis=0, fill_value=None):
"""
Shift each group by periods observations.
@@ -2004,10 +2004,14 @@ def shift(self, periods=1, freq=None, axis=0):
number of periods to shift
freq : frequency string
axis : axis to shift, default 0
+ fill_value : optional
+
+ .. versionadded:: 0.24.0
"""
- if freq is not None or axis != 0:
- return self.apply(lambda x: x.shift(periods, freq, axis))
+ if freq is not None or axis != 0 or not isna(fill_value):
+ return self.apply(lambda x: x.shift(periods, freq,
+ axis, fill_value))
return self._get_cythonized_result('group_shift_indexer',
self.grouper, cython_dtype=np.int64,
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 1383ce09bc2d0..0ccfab93c2830 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1261,12 +1261,12 @@ def diff(self, n, axis=1):
new_values = algos.diff(self.values, n, axis=axis)
return [self.make_block(values=new_values)]
- def shift(self, periods, axis=0):
+ def shift(self, periods, axis=0, fill_value=None):
""" shift the block by periods, possibly upcast """
# convert integer to float if necessary. need to do a lot more than
# that, handle boolean etc also
- new_values, fill_value = maybe_upcast(self.values)
+ new_values, fill_value = maybe_upcast(self.values, fill_value)
# make sure array sent to np.roll is c_contiguous
f_ordered = new_values.flags.f_contiguous
@@ -1955,7 +1955,7 @@ def interpolate(self, method='pad', axis=0, inplace=False, limit=None,
limit=limit),
placement=self.mgr_locs)
- def shift(self, periods, axis=0):
+ def shift(self, periods, axis=0, fill_value=None):
"""
Shift the block by `periods`.
@@ -1963,9 +1963,11 @@ def shift(self, periods, axis=0):
ExtensionBlock.
"""
# type: (int, Optional[BlockPlacement]) -> List[ExtensionBlock]
- return [self.make_block_same_class(self.values.shift(periods=periods),
- placement=self.mgr_locs,
- ndim=self.ndim)]
+ return [
+ self.make_block_same_class(
+ self.values.shift(periods=periods, fill_value=fill_value),
+ placement=self.mgr_locs, ndim=self.ndim)
+ ]
def where(self, other, cond, align=True, errors='raise',
try_cast=False, axis=0, transpose=False):
@@ -3023,7 +3025,7 @@ def _try_coerce_result(self, result):
def _box_func(self):
return lambda x: tslibs.Timestamp(x, tz=self.dtype.tz)
- def shift(self, periods, axis=0):
+ def shift(self, periods, axis=0, fill_value=None):
""" shift the block by periods """
# think about moving this to the DatetimeIndex. This is a non-freq
@@ -3038,10 +3040,12 @@ def shift(self, periods, axis=0):
new_values = self.values.asi8.take(indexer)
+ if isna(fill_value):
+ fill_value = tslibs.iNaT
if periods > 0:
- new_values[:periods] = tslibs.iNaT
+ new_values[:periods] = fill_value
else:
- new_values[periods:] = tslibs.iNaT
+ new_values[periods:] = fill_value
new_values = self.values._shallow_copy(new_values)
return [self.make_block_same_class(new_values,
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 773f2d17cf0fc..5ef316b000a7f 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3716,8 +3716,9 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
regex=regex, method=method)
@Appender(generic._shared_docs['shift'] % _shared_doc_kwargs)
- def shift(self, periods=1, freq=None, axis=0):
- return super(Series, self).shift(periods=periods, freq=freq, axis=axis)
+ def shift(self, periods=1, freq=None, axis=0, fill_value=None):
+ return super(Series, self).shift(periods=periods, freq=freq, axis=axis,
+ fill_value=fill_value)
def reindex_axis(self, labels, axis=0, **kwargs):
"""
diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
index 7d8cc34ae1462..9c13a20726553 100644
--- a/pandas/tests/arrays/sparse/test_array.py
+++ b/pandas/tests/arrays/sparse/test_array.py
@@ -10,6 +10,7 @@
import pandas.util._test_decorators as td
import pandas as pd
+from pandas import isna
from pandas.core.sparse.api import SparseArray, SparseDtype, SparseSeries
import pandas.util.testing as tm
from pandas.util.testing import assert_almost_equal
@@ -262,6 +263,18 @@ def test_take_negative(self):
exp = SparseArray(np.take(self.arr_data, [-4, -3, -2]))
tm.assert_sp_array_equal(self.arr.take([-4, -3, -2]), exp)
+ @pytest.mark.parametrize('fill_value', [0, None, np.nan])
+ def test_shift_fill_value(self, fill_value):
+ # GH #24128
+ sparse = SparseArray(np.array([1, 0, 0, 3, 0]),
+ fill_value=8.0)
+ res = sparse.shift(1, fill_value=fill_value)
+ if isna(fill_value):
+ fill_value = res.dtype.na_value
+ exp = SparseArray(np.array([fill_value, 1, 0, 0, 3]),
+ fill_value=8.0)
+ tm.assert_sp_array_equal(res, exp)
+
def test_bad_take(self):
with pytest.raises(IndexError, match="bounds"):
self.arr.take([11])
diff --git a/pandas/tests/extension/base/methods.py b/pandas/tests/extension/base/methods.py
index 4a409a84f3db4..2bcdb8aa7488c 100644
--- a/pandas/tests/extension/base/methods.py
+++ b/pandas/tests/extension/base/methods.py
@@ -221,6 +221,17 @@ def test_shift_empty_array(self, data, periods):
expected = empty
self.assert_extension_array_equal(result, expected)
+ def test_shift_fill_value(self, data):
+ arr = data[:4]
+ fill_value = data[0]
+ result = arr.shift(1, fill_value=fill_value)
+ expected = data.take([0, 0, 1, 2])
+ self.assert_extension_array_equal(result, expected)
+
+ result = arr.shift(-2, fill_value=fill_value)
+ expected = data.take([2, 3, 0, 0])
+ self.assert_extension_array_equal(result, expected)
+
@pytest.mark.parametrize("as_frame", [True, False])
def test_hash_pandas_object_works(self, data, as_frame):
# https://github.com/pandas-dev/pandas/issues/23066
diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py
index 52f0b30bf0f0c..02b83aaf5c131 100644
--- a/pandas/tests/frame/test_timeseries.py
+++ b/pandas/tests/frame/test_timeseries.py
@@ -320,6 +320,20 @@ def test_shift_categorical(self):
xp = DataFrame({'one': s1.shift(1), 'two': s2.shift(1)})
assert_frame_equal(rs, xp)
+ def test_shift_fill_value(self):
+ # GH #24128
+ df = DataFrame([1, 2, 3, 4, 5],
+ index=date_range('1/1/2000', periods=5, freq='H'))
+ exp = DataFrame([0, 1, 2, 3, 4],
+ index=date_range('1/1/2000', periods=5, freq='H'))
+ result = df.shift(1, fill_value=0)
+ assert_frame_equal(result, exp)
+
+ exp = DataFrame([0, 0, 1, 2, 3],
+ index=date_range('1/1/2000', periods=5, freq='H'))
+ result = df.shift(2, fill_value=0)
+ assert_frame_equal(result, exp)
+
def test_shift_empty(self):
# Regression test for #8019
df = DataFrame({'foo': []})
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 264f2567e45c1..7eda113be0e36 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -9,7 +9,8 @@
from pandas.compat import PY37
from pandas import (Index, MultiIndex, CategoricalIndex,
DataFrame, Categorical, Series, qcut)
-from pandas.util.testing import assert_frame_equal, assert_series_equal
+from pandas.util.testing import (assert_equal,
+ assert_frame_equal, assert_series_equal)
import pandas.util.testing as tm
@@ -860,3 +861,13 @@ def test_groupby_multiindex_categorical_datetime():
expected = pd.DataFrame(
{'values': [0, 4, 8, 3, 4, 5, 6, np.nan, 2]}, index=idx)
assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize('fill_value', [None, np.nan, pd.NaT])
+def test_shift(fill_value):
+ ct = pd.Categorical(['a', 'b', 'c', 'd'],
+ categories=['a', 'b', 'c', 'd'], ordered=False)
+ expected = pd.Categorical([None, 'a', 'b', 'c'],
+ categories=['a', 'b', 'c', 'd'], ordered=False)
+ res = ct.shift(1, fill_value=fill_value)
+ assert_equal(res, expected)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 6d9f60df45ec8..e9de46bba03f1 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -1617,6 +1617,23 @@ def test_group_shift_with_null_key():
assert_frame_equal(result, expected)
+def test_group_shift_with_fill_value():
+ # GH #24128
+ n_rows = 24
+ df = DataFrame([(i % 12, i % 3, i)
+ for i in range(n_rows)], dtype=float,
+ columns=["A", "B", "Z"], index=None)
+ g = df.groupby(["A", "B"])
+
+ expected = DataFrame([(i + 12 if i < n_rows - 12
+ else 0)
+ for i in range(n_rows)], dtype=float,
+ columns=["Z"], index=None)
+ result = g.shift(-1, fill_value=0)[["Z"]]
+
+ assert_frame_equal(result, expected)
+
+
def test_pivot_table_values_key_error():
# This test is designed to replicate the error in issue #14938
df = pd.DataFrame({'eventDate':
diff --git a/pandas/tests/series/test_timeseries.py b/pandas/tests/series/test_timeseries.py
index ce464184cd8d6..2425437591dfc 100644
--- a/pandas/tests/series/test_timeseries.py
+++ b/pandas/tests/series/test_timeseries.py
@@ -129,6 +129,38 @@ def test_shift2(self):
idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-04'])
pytest.raises(NullFrequencyError, idx.shift, 1)
+ def test_shift_fill_value(self):
+ # GH #24128
+ ts = Series([1.0, 2.0, 3.0, 4.0, 5.0],
+ index=date_range('1/1/2000', periods=5, freq='H'))
+
+ exp = Series([0.0, 1.0, 2.0, 3.0, 4.0],
+ index=date_range('1/1/2000', periods=5, freq='H'))
+ # check that fill value works
+ result = ts.shift(1, fill_value=0.0)
+ tm.assert_series_equal(result, exp)
+
+ exp = Series([0.0, 0.0, 1.0, 2.0, 3.0],
+ index=date_range('1/1/2000', periods=5, freq='H'))
+ result = ts.shift(2, fill_value=0.0)
+ tm.assert_series_equal(result, exp)
+
+ ts = pd.Series([1, 2, 3])
+ res = ts.shift(2, fill_value=0)
+ assert res.dtype == ts.dtype
+
+ def test_categorical_shift_fill_value(self):
+ ts = pd.Series(['a', 'b', 'c', 'd'], dtype="category")
+ res = ts.shift(1, fill_value='a')
+ expected = pd.Series(pd.Categorical(['a', 'a', 'b', 'c'],
+ categories=['a', 'b', 'c', 'd'],
+ ordered=False))
+ tm.assert_equal(res, expected)
+
+ # check for incorrect fill_value
+ with pytest.raises(ValueError):
+ ts.shift(1, fill_value='f')
+
def test_shift_dst(self):
# GH 13926
dates = date_range('2016-11-06', freq='H', periods=10, tz='US/Eastern')
| - [ ] closes #15486
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24128 | 2018-12-06T11:34:43Z | 2018-12-26T01:29:55Z | 2018-12-26T01:29:55Z | 2018-12-26T01:38:22Z |
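The PR above adds a `fill_value` option to `shift`, so vacated slots receive a caller-chosen value instead of NaN. A minimal pure-Python sketch of that semantics (not the pandas implementation, just an illustration of the behavior the tests assert):

```python
def shift(values, periods, fill_value=None):
    # Mimics Series.shift semantics with the fill_value option from
    # GH #24128: positive periods shift right, negative shift left,
    # and vacated positions are filled with fill_value.
    n = len(values)
    if periods == 0:
        return list(values)
    if periods > 0:
        return [fill_value] * min(periods, n) + list(values[:max(n - periods, 0)])
    return list(values[-periods:]) + [fill_value] * min(-periods, n)

# Matches the expectations in test_shift_fill_value above:
print(shift([1.0, 2.0, 3.0, 4.0, 5.0], 1, fill_value=0.0))
# [0.0, 1.0, 2.0, 3.0, 4.0]
print(shift([1, 2, 3], -1, fill_value=0))
# [2, 3, 0]
```

With the real API, the equivalent call is `pd.Series([1, 2, 3]).shift(2, fill_value=0)`, which (as the tests check) also preserves the integer dtype rather than upcasting to float.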
Fix error GL07 | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 1a4368ee8ea98..3c4fe519e4181 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -289,6 +289,11 @@ def unique(values):
- If the input is a Categorical dtype, the return is a Categorical
- If the input is a Series/ndarray, the return will be an ndarray
+ See Also
+ --------
+ pandas.Index.unique
+ pandas.Series.unique
+
Examples
--------
>>> pd.unique(pd.Series([2, 1, 3, 3]))
@@ -338,11 +343,6 @@ def unique(values):
>>> pd.unique([('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'a')])
array([('a', 'b'), ('b', 'a'), ('a', 'c')], dtype=object)
-
- See Also
- --------
- pandas.Index.unique
- pandas.Series.unique
"""
values = _ensure_arraylike(values)
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 938ca53b5fdce..6e96fc75daec9 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -273,6 +273,16 @@ class Categorical(ExtensionArray, PandasObject):
If an explicit ``ordered=True`` is given but no `categories` and the
`values` are not sortable.
+ See Also
+ --------
+ pandas.api.types.CategoricalDtype : Type for categorical data.
+ CategoricalIndex : An Index with an underlying ``Categorical``.
+
+ Notes
+ -----
+ See the `user guide
+ <http://pandas.pydata.org/pandas-docs/stable/categorical.html>`_ for more.
+
Examples
--------
>>> pd.Categorical([1, 2, 3, 1, 2, 3])
@@ -293,16 +303,6 @@ class Categorical(ExtensionArray, PandasObject):
Categories (3, object): [c < b < a]
>>> c.min()
'c'
-
- Notes
- -----
- See the `user guide
- <http://pandas.pydata.org/pandas-docs/stable/categorical.html>`_ for more.
-
- See Also
- --------
- pandas.api.types.CategoricalDtype : Type for categorical data.
- CategoricalIndex : An Index with an underlying ``Categorical``.
"""
# For comparisons, so that numpy uses our implementation if the compare
@@ -827,11 +827,6 @@ def set_categories(self, new_categories, ordered=None, rename=False,
dtypes on python3, which does not considers a S1 string equal to a
single char python string.
- Raises
- ------
- ValueError
- If new_categories does not validate as categories
-
Parameters
----------
new_categories : Index-like
@@ -850,6 +845,11 @@ def set_categories(self, new_categories, ordered=None, rename=False,
-------
cat : Categorical with reordered categories or None if inplace.
+ Raises
+ ------
+ ValueError
+ If new_categories does not validate as categories
+
See Also
--------
rename_categories
@@ -882,12 +882,6 @@ def rename_categories(self, new_categories, inplace=False):
"""
Renames categories.
- Raises
- ------
- ValueError
- If new categories are list-like and do not have the same number of
- items than the current categories or do not validate as categories
-
Parameters
----------
new_categories : list-like, dict-like or callable
@@ -922,6 +916,12 @@ def rename_categories(self, new_categories, inplace=False):
With ``inplace=False``, the new categorical is returned.
With ``inplace=True``, there is no return value.
+ Raises
+ ------
+ ValueError
+ If new categories are list-like and do not have the same number of
+ items as the current categories or do not validate as categories
+
See Also
--------
reorder_categories
@@ -979,12 +979,6 @@ def reorder_categories(self, new_categories, ordered=None, inplace=False):
`new_categories` need to include all old categories and no new category
items.
- Raises
- ------
- ValueError
- If the new categories do not contain all old category items or any
- new ones
-
Parameters
----------
new_categories : Index-like
@@ -1000,6 +994,12 @@ def reorder_categories(self, new_categories, ordered=None, inplace=False):
-------
cat : Categorical with reordered categories or None if inplace.
+ Raises
+ ------
+ ValueError
+ If the new categories do not contain all old category items or any
+ new ones
+
See Also
--------
rename_categories
@@ -1022,12 +1022,6 @@ def add_categories(self, new_categories, inplace=False):
`new_categories` will be included at the last/highest place in the
categories and will be unused directly after this call.
- Raises
- ------
- ValueError
- If the new categories include old categories or do not validate as
- categories
-
Parameters
----------
new_categories : category or list-like of category
@@ -1040,6 +1034,12 @@ def add_categories(self, new_categories, inplace=False):
-------
cat : Categorical with new categories added or None if inplace.
+ Raises
+ ------
+ ValueError
+ If the new categories include old categories or do not validate as
+ categories
+
See Also
--------
rename_categories
@@ -1072,11 +1072,6 @@ def remove_categories(self, removals, inplace=False):
`removals` must be included in the old categories. Values which were in
the removed categories will be set to NaN
- Raises
- ------
- ValueError
- If the removals are not contained in the categories
-
Parameters
----------
removals : category or list of categories
@@ -1089,6 +1084,11 @@ def remove_categories(self, removals, inplace=False):
-------
cat : Categorical with removed categories or None if inplace.
+ Raises
+ ------
+ ValueError
+ If the removals are not contained in the categories
+
See Also
--------
rename_categories
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index a92e2f6157b40..b74ede4547249 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -892,6 +892,11 @@ def to_period(self, freq=None):
When converting a DatetimeArray/Index with non-regular values,
so that a frequency cannot be inferred.
+ See Also
+ --------
+ PeriodIndex: Immutable ndarray holding ordinal values.
+ DatetimeIndex.to_pydatetime: Return DatetimeIndex as object.
+
Examples
--------
>>> df = pd.DataFrame({"y": [1,2,3]},
@@ -908,11 +913,6 @@ def to_period(self, freq=None):
>>> idx.to_period()
PeriodIndex(['2017-01-01', '2017-01-02'],
dtype='period[D]', freq='D')
-
- See Also
- --------
- PeriodIndex: Immutable ndarray holding ordinal values.
- DatetimeIndex.to_pydatetime: Return DatetimeIndex as object.
"""
from pandas.core.arrays import PeriodArray
@@ -1087,17 +1087,17 @@ def date(self):
by 6. This method is available on both Series with datetime
values (using the `dt` accessor) or DatetimeIndex.
+ Returns
+ -------
+ Series or Index
+ Containing integers indicating the day number.
+
See Also
--------
Series.dt.dayofweek : Alias.
Series.dt.weekday : Alias.
Series.dt.day_name : Returns the name of the day of the week.
- Returns
- -------
- Series or Index
- Containing integers indicating the day number.
-
Examples
--------
>>> s = pd.date_range('2016-12-31', '2017-01-08', freq='D').to_series()
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 785fb02c4d95d..1a1648a3b8480 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -84,14 +84,6 @@
set_closed
%(extra_methods)s\
-%(examples)s\
-
-Notes
-------
-See the `user guide
-<http://pandas.pydata.org/pandas-docs/stable/advanced.html#intervalindex>`_
-for more.
-
See Also
--------
Index : The base pandas Index type.
@@ -99,6 +91,14 @@
interval_range : Function to create a fixed frequency IntervalIndex.
cut : Bin values into discrete Intervals.
qcut : Bin values into equal-sized Intervals based on rank or sample quantiles.
+
+Notes
+------
+See the `user guide
+<http://pandas.pydata.org/pandas-docs/stable/advanced.html#intervalindex>`_
+for more.
+
+%(examples)s\
"""
@@ -236,18 +236,18 @@ def _from_factorized(cls, values, original):
.. versionadded:: 0.23.0
+ See Also
+ --------
+ interval_range : Function to create a fixed frequency IntervalIndex.
+ %(klass)s.from_arrays : Construct from a left and right array.
+ %(klass)s.from_tuples : Construct from a sequence of tuples.
+
Examples
--------
>>> pd.%(klass)s.from_breaks([0, 1, 2, 3])
%(klass)s([(0, 1], (1, 2], (2, 3]]
closed='right',
dtype='interval[int64]')
-
- See Also
- --------
- interval_range : Function to create a fixed frequency IntervalIndex.
- %(klass)s.from_arrays : Construct from a left and right array.
- %(klass)s.from_tuples : Construct from a sequence of tuples.
"""
@classmethod
@@ -281,14 +281,6 @@ def from_breaks(cls, breaks, closed='right', copy=False, dtype=None):
-------
%(klass)s
- Notes
- -----
- Each element of `left` must be less than or equal to the `right`
- element at the same position. If an element is missing, it must be
- missing in both `left` and `right`. A TypeError is raised when
- using an unsupported type for `left` or `right`. At the moment,
- 'category', 'object', and 'string' subtypes are not supported.
-
Raises
------
ValueError
@@ -304,6 +296,13 @@ def from_breaks(cls, breaks, closed='right', copy=False, dtype=None):
%(klass)s.from_tuples : Construct an %(klass)s from an
array-like of tuples.
+ Notes
+ -----
+ Each element of `left` must be less than or equal to the `right`
+ element at the same position. If an element is missing, it must be
+ missing in both `left` and `right`. A TypeError is raised when
+ using an unsupported type for `left` or `right`. At the moment,
+ 'category', 'object', and 'string' subtypes are not supported.
Examples
--------
@@ -339,6 +338,16 @@ def from_arrays(cls, left, right, closed='right', copy=False, dtype=None):
..versionadded:: 0.23.0
+ See Also
+ --------
+ interval_range : Function to create a fixed frequency IntervalIndex.
+ %(klass)s.from_arrays : Construct an %(klass)s from a left and
+ right array.
+ %(klass)s.from_breaks : Construct an %(klass)s from an array of
+ splits.
+ %(klass)s.from_tuples : Construct an %(klass)s from an
+ array-like of tuples.
+
Examples
--------
>>> pd.%(klass)s.from_intervals([pd.Interval(0, 1),
@@ -352,16 +361,6 @@ def from_arrays(cls, left, right, closed='right', copy=False, dtype=None):
>>> pd.Index([pd.Interval(0, 1), pd.Interval(1, 2)])
%(klass)s([(0, 1], (1, 2]]
closed='right', dtype='interval[int64]')
-
- See Also
- --------
- interval_range : Function to create a fixed frequency IntervalIndex.
- %(klass)s.from_arrays : Construct an %(klass)s from a left and
- right array.
- %(klass)s.from_breaks : Construct an %(klass)s from an array of
- splits.
- %(klass)s.from_tuples : Construct an %(klass)s from an
- array-like of tuples.
"""
_interval_shared_docs['from_tuples'] = """
@@ -381,13 +380,6 @@ def from_arrays(cls, left, right, closed='right', copy=False, dtype=None):
..versionadded:: 0.23.0
-
- Examples
- --------
- >>> pd.%(klass)s.from_tuples([(0, 1), (1, 2)])
- %(klass)s([(0, 1], (1, 2]],
- closed='right', dtype='interval[int64]')
-
See Also
--------
interval_range : Function to create a fixed frequency IntervalIndex.
@@ -395,6 +387,12 @@ def from_arrays(cls, left, right, closed='right', copy=False, dtype=None):
right array.
%(klass)s.from_breaks : Construct an %(klass)s from an array of
splits.
+
+ Examples
+ --------
+ >>> pd.%(klass)s.from_tuples([(0, 1), (1, 2)])
+ %(klass)s([(0, 1], (1, 2]],
+ closed='right', dtype='interval[int64]')
"""
@classmethod
@@ -1052,6 +1050,10 @@ def repeat(self, repeats, **kwargs):
ndarray
Boolean array positionally indicating where an overlap occurs.
+ See Also
+ --------
+ Interval.overlaps : Check whether two Interval objects overlap.
+
Examples
--------
>>> intervals = %(constructor)s.from_tuples([(0, 1), (1, 3), (2, 4)])
@@ -1071,10 +1073,6 @@ def repeat(self, repeats, **kwargs):
>>> intervals.overlaps(pd.Interval(1, 2, closed='right'))
array([False, True, False])
-
- See Also
- --------
- Interval.overlaps : Check whether two Interval objects overlap.
"""
@Appender(_interval_shared_docs['overlaps'] % _shared_docs_kwargs)
diff --git a/pandas/core/base.py b/pandas/core/base.py
index e7c3a45a710e0..e224b6a50d332 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1298,14 +1298,14 @@ def memory_usage(self, deep=False):
-------
bytes used
+ See Also
+ --------
+ numpy.ndarray.nbytes
+
Notes
-----
Memory usage does not include memory consumed by elements that
are not components of the array if deep=False or if used on PyPy
-
- See Also
- --------
- numpy.ndarray.nbytes
"""
if hasattr(self.values, 'memory_usage'):
return self.values.memory_usage(deep=deep)
diff --git a/pandas/core/computation/eval.py b/pandas/core/computation/eval.py
index 4b9ba02ed85a4..b768ed6df303e 100644
--- a/pandas/core/computation/eval.py
+++ b/pandas/core/computation/eval.py
@@ -246,6 +246,11 @@ def eval(expr, parser='pandas', engine=None, truediv=True,
- Item assignment is provided and `inplace=False`, but the `target`
does not support the `.copy()` method
+ See Also
+ --------
+ pandas.DataFrame.query
+ pandas.DataFrame.eval
+
Notes
-----
The ``dtype`` of any objects involved in an arithmetic ``%`` operation are
@@ -253,11 +258,6 @@ def eval(expr, parser='pandas', engine=None, truediv=True,
See the :ref:`enhancing performance <enhancingperf.eval>` documentation for
more details.
-
- See Also
- --------
- pandas.DataFrame.query
- pandas.DataFrame.eval
"""
from pandas.core.computation.expr import Expr
diff --git a/pandas/core/dtypes/base.py b/pandas/core/dtypes/base.py
index aa81e88abf28e..ab1cb9cf2499a 100644
--- a/pandas/core/dtypes/base.py
+++ b/pandas/core/dtypes/base.py
@@ -151,6 +151,11 @@ class ExtensionDtype(_DtypeOpsMixin):
.. versionadded:: 0.23.0
+ See Also
+ --------
+ pandas.api.extensions.register_extension_dtype
+ pandas.api.extensions.ExtensionArray
+
Notes
-----
The interface includes the following abstract methods that must
@@ -199,11 +204,6 @@ class property**.
Methods and properties required by the interface raise
``pandas.errors.AbstractMethodError`` and no ``register`` method is
provided for registering virtual subclasses.
-
- See Also
- --------
- pandas.api.extensions.register_extension_dtype
- pandas.api.extensions.ExtensionArray
"""
def __str__(self):
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 82f931c1469b7..0db76cd934d19 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -186,6 +186,10 @@ class CategoricalDtype(PandasExtensionDtype, ExtensionDtype):
-------
None
+ See Also
+ --------
+ pandas.Categorical
+
Notes
-----
This class is useful for specifying the type of a ``Categorical``
@@ -202,10 +206,6 @@ class CategoricalDtype(PandasExtensionDtype, ExtensionDtype):
3 NaN
dtype: category
Categories (2, object): [b < a]
-
- See Also
- --------
- pandas.Categorical
"""
# TODO: Document public vs. private API
name = 'category'
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 2a8d58b8867b7..f7f1855a4fabc 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -211,18 +211,18 @@
DataFrame
A DataFrame of the two merged objects.
-Notes
------
-Support for specifying index levels as the `on`, `left_on`, and
-`right_on` parameters was added in version 0.23.0
-Support for merging named Series objects was added in version 0.24.0
-
See Also
--------
merge_ordered : Merge with optional filling/interpolation.
merge_asof : Merge on nearest keys.
DataFrame.join : Similar method using indices.
+Notes
+-----
+Support for specifying index levels as the `on`, `left_on`, and
+`right_on` parameters was added in version 0.23.0
+Support for merging named Series objects was added in version 0.24.0
+
Examples
--------
@@ -309,6 +309,13 @@ class DataFrame(NDFrame):
copy : boolean, default False
Copy data from inputs. Only affects DataFrame / 2d ndarray input
+ See Also
+ --------
+ DataFrame.from_records : Constructor from tuples, also record arrays.
+ DataFrame.from_dict : From dicts of Series, arrays, or dicts.
+ DataFrame.from_items : From sequence of (key, value) pairs
+ pandas.read_csv, pandas.read_table, pandas.read_clipboard.
+
Examples
--------
Constructing DataFrame from a dictionary.
@@ -344,13 +351,6 @@ class DataFrame(NDFrame):
0 1 2 3
1 4 5 6
2 7 8 9
-
- See Also
- --------
- DataFrame.from_records : Constructor from tuples, also record arrays.
- DataFrame.from_dict : From dicts of Series, arrays, or dicts.
- DataFrame.from_items : From sequence of (key, value) pairs
- pandas.read_csv, pandas.read_table, pandas.read_clipboard.
"""
@property
@@ -786,6 +786,21 @@ def iterrows(self):
"""
Iterate over DataFrame rows as (index, Series) pairs.
+ Yields
+ ------
+ index : label or tuple of label
+ The index of the row. A tuple for a `MultiIndex`.
+ data : Series
+ The data of the row as a Series.
+
+ it : generator
+ A generator that iterates over the rows of the frame.
+
+ See Also
+ --------
+ itertuples : Iterate over DataFrame rows as namedtuples of the values.
+ iteritems : Iterate over (column name, Series) pairs.
+
Notes
-----
@@ -812,21 +827,6 @@ def iterrows(self):
This is not guaranteed to work in all cases. Depending on the
data types, the iterator returns a copy and not a view, and writing
to it will have no effect.
-
- Yields
- ------
- index : label or tuple of label
- The index of the row. A tuple for a `MultiIndex`.
- data : Series
- The data of the row as a Series.
-
- it : generator
- A generator that iterates over the rows of the frame.
-
- See Also
- --------
- itertuples : Iterate over DataFrame rows as namedtuples of the values.
- iteritems : Iterate over (column name, Series) pairs.
"""
columns = self.columns
klass = self._constructor_sliced
@@ -853,18 +853,18 @@ def itertuples(self, index=True, name="Pandas"):
field possibly being the index and following fields being the
column values.
- Notes
- -----
- The column names will be renamed to positional names if they are
- invalid Python identifiers, repeated, or start with an underscore.
- With a large number of columns (>255), regular tuples are returned.
-
See Also
--------
DataFrame.iterrows : Iterate over DataFrame rows as (index, Series)
pairs.
DataFrame.iteritems : Iterate over (column name, Series) pairs.
+ Notes
+ -----
+ The column names will be renamed to positional names if they are
+ invalid Python identifiers, repeated, or start with an underscore.
+ With a large number of columns (>255), regular tuples are returned.
+
Examples
--------
>>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]},
@@ -1714,13 +1714,13 @@ def from_csv(cls, path, header=0, sep=',', index_col=0, parse_dates=True,
datetime format based on the first datetime string. If the format
can be inferred, there often will be a large parsing speed-up.
- See Also
- --------
- pandas.read_csv
-
Returns
-------
y : DataFrame
+
+ See Also
+ --------
+ pandas.read_csv
"""
warnings.warn("from_csv is deprecated. Please use read_csv(...) "
@@ -2858,6 +2858,11 @@ def query(self, expr, inplace=False, **kwargs):
-------
q : DataFrame
+ See Also
+ --------
+ pandas.eval
+ DataFrame.eval
+
Notes
-----
The result of the evaluation of this expression is first passed to
@@ -2893,11 +2898,6 @@ def query(self, expr, inplace=False, **kwargs):
For further details and examples see the ``query`` documentation in
:ref:`indexing <indexing.query>`.
- See Also
- --------
- pandas.eval
- DataFrame.eval
-
Examples
--------
>>> df = pd.DataFrame(np.random.randn(10, 2), columns=list('ab'))
@@ -3037,6 +3037,12 @@ def select_dtypes(self, include=None, exclude=None):
A selection of dtypes or strings to be included/excluded. At least
one of these parameters must be supplied.
+ Returns
+ -------
+ subset : DataFrame
+ The subset of the frame including the dtypes in ``include`` and
+ excluding the dtypes in ``exclude``.
+
Raises
------
ValueError
@@ -3044,12 +3050,6 @@ def select_dtypes(self, include=None, exclude=None):
* If ``include`` and ``exclude`` have overlapping elements
* If any kind of string dtype is passed in.
- Returns
- -------
- subset : DataFrame
- The subset of the frame including the dtypes in ``include`` and
- excluding the dtypes in ``exclude``.
-
Notes
-----
* To select all *numeric* types, use ``np.number`` or ``'number'``
@@ -3675,6 +3675,11 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
-------
dropped : pandas.DataFrame
+ Raises
+ ------
+ KeyError
+ If none of the labels are found in the selected axis
+
See Also
--------
DataFrame.loc : Label-location based indexer for selection by label.
@@ -3684,11 +3689,6 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
removed, optionally only considering certain columns.
Series.drop : Return Series with specified index labels removed.
- Raises
- ------
- KeyError
- If none of the labels are found in the selected axis
-
Examples
--------
>>> df = pd.DataFrame(np.arange(12).reshape(3,4),
@@ -4960,6 +4960,11 @@ def combine(self, other, func, fill_value=None, overwrite=True):
-------
result : DataFrame
+ See Also
+ --------
+ DataFrame.combine_first : Combine two DataFrame objects and default to
+ non-null values in frame calling the method.
+
Examples
--------
Combine using a simple function that chooses the smaller column.
@@ -5032,11 +5037,6 @@ def combine(self, other, func, fill_value=None, overwrite=True):
0 0.0 NaN NaN
1 0.0 3.0 1.0
2 NaN 3.0 1.0
-
- See Also
- --------
- DataFrame.combine_first : Combine two DataFrame objects and default to
- non-null values in frame calling the method.
"""
other_idxlen = len(other.index) # save for compare
@@ -5118,6 +5118,11 @@ def combine_first(self, other):
-------
combined : DataFrame
+ See Also
+ --------
+ DataFrame.combine : Perform series-wise operation on two DataFrames
+ using a given function.
+
Examples
--------
@@ -5138,11 +5143,6 @@ def combine_first(self, other):
0 NaN 4.0 NaN
1 0.0 3.0 1.0
2 NaN 3.0 1.0
-
- See Also
- --------
- DataFrame.combine : Perform series-wise operation on two DataFrames
- using a given function.
"""
import pandas.core.computation.expressions as expressions
@@ -5476,6 +5476,15 @@ def pivot(self, index=None, columns=None, values=None):
Name of the row / column that will contain the totals
when margins is True.
+ Returns
+ -------
+ table : DataFrame
+
+ See Also
+ --------
+ DataFrame.pivot : Pivot without aggregation that can handle
+ non-numeric data.
+
Examples
--------
>>> df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo",
@@ -5551,15 +5560,6 @@ def pivot(self, index=None, columns=None, values=None):
small 5.500000 9 8.500000 8
foo large 2.000000 5 4.500000 4
small 2.333333 6 4.333333 2
-
- Returns
- -------
- table : DataFrame
-
- See Also
- --------
- DataFrame.pivot : Pivot without aggregation that can handle
- non-numeric data.
"""
@Substitution('')
@@ -5763,6 +5763,10 @@ def unstack(self, level=-1, fill_value=None):
.. versionadded:: 0.18.0
+ Returns
+ -------
+ unstacked : DataFrame or Series
+
See Also
--------
DataFrame.pivot : Pivot a table based on column values.
@@ -5798,10 +5802,6 @@ def unstack(self, level=-1, fill_value=None):
two a 3.0
b 4.0
dtype: float64
-
- Returns
- -------
- unstacked : DataFrame or Series
"""
from pandas.core.reshape.reshape import unstack
return unstack(self, level, fill_value)
@@ -5895,7 +5895,6 @@ def unstack(self, level=-1, fill_value=None):
0 a B E 1
1 b B E 3
2 c B E 5
-
""")
@Appender(_shared_docs['melt'] %
@@ -6199,6 +6198,16 @@ def apply(self, func, axis=0, broadcast=None, raw=False, reduce=None,
Additional keyword arguments to pass as keywords arguments to
`func`.
+ Returns
+ -------
+ applied : Series or DataFrame
+
+ See Also
+ --------
+ DataFrame.applymap: For elementwise operations.
+ DataFrame.aggregate: Only perform aggregating type operations.
+ DataFrame.transform: Only perform transforming type operations.
+
Notes
-----
In the current implementation apply calls `func` twice on the
@@ -6207,12 +6216,6 @@ def apply(self, func, axis=0, broadcast=None, raw=False, reduce=None,
side-effects, as they will take effect twice for the first
column/row.
- See Also
- --------
- DataFrame.applymap: For elementwise operations.
- DataFrame.aggregate: Only perform aggregating type operations.
- DataFrame.transform: Only perform transforming type operations.
-
Examples
--------
@@ -6282,10 +6285,6 @@ def apply(self, func, axis=0, broadcast=None, raw=False, reduce=None,
0 1 2
1 1 2
2 1 2
-
- Returns
- -------
- applied : Series or DataFrame
"""
from pandas.core.apply import frame_apply
op = frame_apply(self,
@@ -6388,6 +6387,11 @@ def append(self, other, ignore_index=False,
-------
appended : DataFrame
+ See Also
+ --------
+ pandas.concat : General function to concatenate DataFrame, Series
+ or Panel objects.
+
Notes
-----
If a list of dict/series is passed and the keys are all contained in
@@ -6399,11 +6403,6 @@ def append(self, other, ignore_index=False,
those rows to a list and then concatenate the list with the original
DataFrame all at once.
- See Also
- --------
- pandas.concat : General function to concatenate DataFrame, Series
- or Panel objects.
-
Examples
--------
@@ -6541,6 +6540,10 @@ def join(self, other, on=None, how='left', lsuffix='', rsuffix='',
DataFrame
A dataframe containing columns from both the caller and `other`.
+ See Also
+ --------
+ DataFrame.merge : For column(s)-on-columns(s) operations.
+
Notes
-----
Parameters `on`, `lsuffix`, and `rsuffix` are not supported when
@@ -6549,10 +6552,6 @@ def join(self, other, on=None, how='left', lsuffix='', rsuffix='',
Support for specifying index levels as the `on` parameter was added
in version 0.23.0.
- See Also
- --------
- DataFrame.merge : For column(s)-on-columns(s) operations.
-
Examples
--------
>>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],
@@ -7300,22 +7299,22 @@ def idxmin(self, axis=0, skipna=True):
Exclude NA/null values. If an entire row/column is NA, the result
will be NA.
+ Returns
+ -------
+ idxmin : Series
+
Raises
------
ValueError
* If the row/column is empty
- Returns
- -------
- idxmin : Series
+ See Also
+ --------
+ Series.idxmin
Notes
-----
This method is the DataFrame version of ``ndarray.argmin``.
-
- See Also
- --------
- Series.idxmin
"""
axis = self._get_axis_number(axis)
indices = nanops.nanargmin(self.values, axis=axis, skipna=skipna)
@@ -7336,22 +7335,22 @@ def idxmax(self, axis=0, skipna=True):
Exclude NA/null values. If an entire row/column is NA, the result
will be NA.
+ Returns
+ -------
+ idxmax : Series
+
Raises
------
ValueError
* If the row/column is empty
- Returns
- -------
- idxmax : Series
+ See Also
+ --------
+ Series.idxmax
Notes
-----
This method is the DataFrame version of ``ndarray.argmax``.
-
- See Also
- --------
- Series.idxmax
"""
axis = self._get_axis_number(axis)
indices = nanops.nanargmax(self.values, axis=axis, skipna=skipna)
@@ -7493,6 +7492,11 @@ def quantile(self, q=0.5, axis=0, numeric_only=True,
- If ``q`` is a float, a Series will be returned where the
index is the columns of self and the values are the quantiles.
+ See Also
+ --------
+ pandas.core.window.Rolling.quantile
+ numpy.percentile
+
Examples
--------
@@ -7520,11 +7524,6 @@ def quantile(self, q=0.5, axis=0, numeric_only=True,
B 2010-07-02 12:00:00
C 1 days 12:00:00
Name: 0.5, dtype: object
-
- See Also
- --------
- pandas.core.window.Rolling.quantile
- numpy.percentile
"""
self._check_percentile(q)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b3cb5c3be67f9..0eb0e14c9054e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -638,14 +638,14 @@ def transpose(self, *args, **kwargs):
Make a copy of the underlying data. Mixed-dtype data will
always result in a copy
+ Returns
+ -------
+ y : same as input
+
Examples
--------
>>> p.transpose(2, 0, 1)
>>> p.transpose(2, 0, 1, copy=True)
-
- Returns
- -------
- y : same as input
"""
# construct the args
@@ -1847,6 +1847,11 @@ def empty(self):
bool
If DataFrame is empty, return True, if not return False.
+ See Also
+ --------
+ pandas.Series.dropna
+ pandas.DataFrame.dropna
+
Notes
-----
If DataFrame contains only NaNs, it is still not considered empty. See
@@ -1875,11 +1880,6 @@ def empty(self):
False
>>> df.dropna().empty
True
-
- See Also
- --------
- pandas.Series.dropna
- pandas.DataFrame.dropna
"""
return any(len(self._get_axis(a)) == 0 for a in self._AXIS_ORDERS)
@@ -2051,6 +2051,11 @@ def _repr_data_resource_(self):
.. versionadded:: 0.20.0.
+ See Also
+ --------
+ read_excel
+ ExcelWriter
+
Notes
-----
For compatibility with :meth:`~DataFrame.to_csv`,
@@ -2059,11 +2064,6 @@ def _repr_data_resource_(self):
Once a workbook has been saved it is not possible write further data
without rewriting the whole workbook.
- See Also
- --------
- read_excel
- ExcelWriter
-
Examples
--------
@@ -2640,6 +2640,10 @@ def to_xarray(self):
DataFrame.to_hdf : Write DataFrame to an HDF5 file.
DataFrame.to_parquet : Write a DataFrame to the binary parquet format.
+ Notes
+ -----
+ See the `xarray docs <http://xarray.pydata.org/en/stable/>`__
+
Examples
--------
>>> df = pd.DataFrame([('falcon', 'bird', 389.0, 2),
@@ -2695,10 +2699,6 @@ class (index) object 'bird' 'bird' 'mammal' 'mammal'
* animal (animal) object 'falcon' 'parrot'
Data variables:
speed (date, animal) int64 350 18 361 15
-
- Notes
- -----
- See the `xarray docs <http://xarray.pydata.org/en/stable/>`__
"""
try:
@@ -4133,6 +4133,10 @@ def reindex(self, *args, **kwargs):
.. versionadded:: 0.21.0 (list-like tolerance)
+ Returns
+ -------
+ %(klass)s with changed index.
+
See Also
--------
DataFrame.set_index : Set row labels.
@@ -4283,10 +4287,6 @@ def reindex(self, *args, **kwargs):
in the original dataframe, use the ``fillna()`` method.
See the :ref:`user guide <basics.reindexing>` for more.
-
- Returns
- -------
- %(klass)s with changed index.
"""
# TODO: Decide if we care about having different examples for different
# kinds
@@ -4399,6 +4399,10 @@ def _reindex_multi(self, axes, copy, fill_value):
.. versionadded:: 0.21.0 (list-like tolerance)
+ Returns
+ -------
+ %(klass)s
+
See Also
--------
DataFrame.set_index : Set row labels.
@@ -4406,10 +4410,6 @@ def _reindex_multi(self, axes, copy, fill_value):
DataFrame.reindex : Change to new indices or expand indices.
DataFrame.reindex_like : Change to same indices as other DataFrame.
- Returns
- -------
- %(klass)s
-
Examples
--------
>>> df.reindex_axis(['A', 'B', 'C'], axis=1)
@@ -4483,6 +4483,18 @@ def filter(self, items=None, like=None, regex=None, axis=None):
-------
same type as input object
+ See Also
+ --------
+ DataFrame.loc
+
+ Notes
+ -----
+ The ``items``, ``like``, and ``regex`` parameters are
+ enforced to be mutually exclusive.
+
+ ``axis`` defaults to the info axis that is used when indexing
+ with ``[]``.
+
Examples
--------
>>> df = pd.DataFrame(np.array(([1,2,3], [4,5,6])),
@@ -4505,18 +4517,6 @@ def filter(self, items=None, like=None, regex=None, axis=None):
>>> df.filter(like='bbi', axis=0)
one two three
rabbit 4 5 6
-
- See Also
- --------
- DataFrame.loc
-
- Notes
- -----
- The ``items``, ``like``, and ``regex`` parameters are
- enforced to be mutually exclusive.
-
- ``axis`` defaults to the info axis that is used when indexing
- with ``[]``.
"""
import re
@@ -4850,6 +4850,12 @@ def sample(self, n=None, frac=None, replace=False, weights=None,
-------
object : the return type of ``func``.
+ See Also
+ --------
+ DataFrame.apply
+ DataFrame.applymap
+ Series.map
+
Notes
-----
@@ -4873,12 +4879,6 @@ def sample(self, n=None, frac=None, replace=False, weights=None,
... .pipe(g, arg1=a)
... .pipe((f, 'arg2'), arg1=a, arg3=c)
... )
-
- See Also
- --------
- DataFrame.apply
- DataFrame.applymap
- Series.map
""")
@Appender(_shared_docs['pipe'] % _shared_doc_kwargs)
@@ -5181,6 +5181,10 @@ def as_matrix(self, columns=None):
If the caller is heterogeneous and contains booleans or objects,
the result will be of dtype=object. See Notes.
+ See Also
+ --------
+ DataFrame.values
+
Notes
-----
Return is NOT a Numpy-matrix, rather, a Numpy-array.
@@ -5197,10 +5201,6 @@ def as_matrix(self, columns=None):
This method is provided for backwards compatibility. Generally,
it is recommended to use '.values'.
-
- See Also
- --------
- DataFrame.values
"""
warnings.warn("Method .as_matrix will be removed in a future version. "
"Use .values instead.", FutureWarning, stacklevel=2)
@@ -5225,6 +5225,24 @@ def values(self):
numpy.ndarray
The values of the DataFrame.
+ See Also
+ --------
+ DataFrame.to_numpy : Recommended alternative to this method.
+ pandas.DataFrame.index : Retrieve the index labels.
+ pandas.DataFrame.columns : Retrieve the column names.
+
+ Notes
+ -----
+ The dtype will be a lower-common-denominator dtype (implicit
+ upcasting); that is to say if the dtypes (even of numeric types)
+ are mixed, the one that accommodates all will be chosen. Use this
+ with care if you are not dealing with the blocks.
+
+ e.g. If the dtypes are float16 and float32, dtype will be upcast to
+ float32. If dtypes are int32 and uint8, dtype will be upcast to
+ int32. By :func:`numpy.find_common_type` convention, mixing int64
+ and uint64 will result in a float64 dtype.
+
Examples
--------
A DataFrame where all columns are the same type (e.g., int64) results
@@ -5263,24 +5281,6 @@ def values(self):
array([['parrot', 24.0, 'second'],
['lion', 80.5, 1],
['monkey', nan, None]], dtype=object)
-
- Notes
- -----
- The dtype will be a lower-common-denominator dtype (implicit
- upcasting); that is to say if the dtypes (even of numeric types)
- are mixed, the one that accommodates all will be chosen. Use this
- with care if you are not dealing with the blocks.
-
- e.g. If the dtypes are float16 and float32, dtype will be upcast to
- float32. If dtypes are int32 and uint8, dtype will be upcast to
- int32. By :func:`numpy.find_common_type` convention, mixing int64
- and uint64 will result in a float64 dtype.
-
- See Also
- --------
- DataFrame.to_numpy : Recommended alternative to this method.
- pandas.DataFrame.index : Retrieve the index labels.
- pandas.DataFrame.columns : Retrieving the column names.
"""
self._consolidate_inplace()
return self._data.as_array(transpose=self._AXIS_REVERSED)
@@ -5568,6 +5568,13 @@ def astype(self, dtype, copy=True, errors='raise', **kwargs):
-------
casted : same type as caller
+ See Also
+ --------
+ to_datetime : Convert argument to datetime.
+ to_timedelta : Convert argument to timedelta.
+ to_numeric : Convert argument to a numeric type.
+ numpy.ndarray.astype : Cast a numpy array to a specified type.
+
Examples
--------
>>> ser = pd.Series([1, 2], dtype='int32')
@@ -5608,13 +5615,6 @@ def astype(self, dtype, copy=True, errors='raise', **kwargs):
0 10
1 2
dtype: int64
-
- See Also
- --------
- to_datetime : Convert argument to datetime.
- to_timedelta : Convert argument to timedelta.
- to_numeric : Convert argument to a numeric type.
- numpy.ndarray.astype : Cast a numpy array to a specified type.
"""
if is_dict_like(dtype):
if self.ndim == 1: # i.e. Series
@@ -5832,15 +5832,15 @@ def convert_objects(self, convert_dates=True, convert_numeric=False,
conversion was done). Note: This is meant for internal use, and
should not be confused with inplace.
+ Returns
+ -------
+ converted : same as input object
+
See Also
--------
to_datetime : Convert argument to datetime.
to_timedelta : Convert argument to timedelta.
to_numeric : Convert argument to numeric type.
-
- Returns
- -------
- converted : same as input object
"""
msg = ("convert_objects is deprecated. To re-infer data dtypes for "
"object columns, use {klass}.infer_objects()\nFor all "
@@ -5866,16 +5866,16 @@ def infer_objects(self):
.. versionadded:: 0.21.0
+ Returns
+ -------
+ converted : same type as input object
+
See Also
--------
to_datetime : Convert argument to datetime.
to_timedelta : Convert argument to timedelta.
to_numeric : Convert argument to numeric type.
- Returns
- -------
- converted : same type as input object
-
Examples
--------
>>> df = pd.DataFrame({"A": ["a", 1, 2, 3]})
@@ -5939,15 +5939,15 @@ def fillna(self, value=None, method=None, axis=None, inplace=False,
or the string 'infer' which will try to downcast to an appropriate
equal type (e.g. float64 to int64 if possible)
+ Returns
+ -------
+ filled : %(klass)s
+
See Also
--------
interpolate : Fill NaN values using interpolation.
reindex, asfreq
- Returns
- -------
- filled : %(klass)s
-
Examples
--------
>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],
@@ -6829,10 +6829,6 @@ def asof(self, where, subset=None):
For DataFrame, if not `None`, only use these columns to
check for NaNs.
- Notes
- -----
- Dates are assumed to be sorted. Raises if this is not the case.
-
Returns
-------
scalar, Series, or DataFrame
@@ -6847,6 +6843,10 @@ def asof(self, where, subset=None):
--------
merge_asof : Perform an asof merge. Similar to left join.
+ Notes
+ -----
+ Dates are assumed to be sorted. Raises if this is not the case.
+
Examples
--------
A Series and a scalar `where`.
@@ -7188,17 +7188,17 @@ def clip(self, lower=None, upper=None, axis=None, inplace=False,
Additional keywords have no effect but might be accepted
for compatibility with numpy.
- See Also
- --------
- clip_lower : Clip values below specified threshold(s).
- clip_upper : Clip values above specified threshold(s).
-
Returns
-------
Series or DataFrame
Same type as calling object with the values outside the
clip boundaries replaced
+ See Also
+ --------
+ clip_lower : Clip values below specified threshold(s).
+ clip_upper : Clip values above specified threshold(s).
+
Examples
--------
>>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]}
@@ -7624,6 +7624,15 @@ def asfreq(self, freq, method=None, how=None, normalize=False,
-------
converted : same type as caller
+ See Also
+ --------
+ reindex
+
+ Notes
+ -----
+ To learn more about the frequency strings, please see `this link
+ <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
+
Examples
--------
@@ -7674,15 +7683,6 @@ def asfreq(self, freq, method=None, how=None, normalize=False,
2000-01-01 00:02:00 2.0
2000-01-01 00:02:30 3.0
2000-01-01 00:03:00 3.0
-
- See Also
- --------
- reindex
-
- Notes
- -----
- To learn more about the frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
from pandas.core.resample import asfreq
return asfreq(self, freq, method=method, how=how, normalize=normalize,
@@ -7692,11 +7692,6 @@ def at_time(self, time, asof=False, axis=None):
"""
Select values at particular time of day (e.g. 9:30AM).
- Raises
- ------
- TypeError
- If the index is not a :class:`DatetimeIndex`
-
Parameters
----------
time : datetime.time or string
@@ -7704,11 +7699,23 @@ def at_time(self, time, asof=False, axis=None):
.. versionadded:: 0.24.0
-
Returns
-------
values_at_time : same type as caller
+ Raises
+ ------
+ TypeError
+ If the index is not a :class:`DatetimeIndex`
+
+ See Also
+ --------
+ between_time : Select values between particular times of the day.
+ first : Select initial periods of time series based on a date offset.
+ last : Select final periods of time series based on a date offset.
+ DatetimeIndex.indexer_at_time : Get just the index locations for
+ values at particular time of the day.
+
Examples
--------
>>> i = pd.date_range('2018-04-09', periods=4, freq='12H')
@@ -7724,14 +7731,6 @@ def at_time(self, time, asof=False, axis=None):
A
2018-04-09 12:00:00 2
2018-04-10 12:00:00 4
-
- See Also
- --------
- between_time : Select values between particular times of the day.
- first : Select initial periods of time series based on a date offset.
- last : Select final periods of time series based on a date offset.
- DatetimeIndex.indexer_at_time : Get just the index locations for
- values at particular time of the day.
"""
if axis is None:
axis = self._stat_axis_number
@@ -7753,11 +7752,6 @@ def between_time(self, start_time, end_time, include_start=True,
By setting ``start_time`` to be later than ``end_time``,
you can get the times that are *not* between the two times.
- Raises
- ------
- TypeError
- If the index is not a :class:`DatetimeIndex`
-
Parameters
----------
start_time : datetime.time or string
@@ -7772,6 +7766,19 @@ def between_time(self, start_time, end_time, include_start=True,
-------
values_between_time : same type as caller
+ Raises
+ ------
+ TypeError
+ If the index is not a :class:`DatetimeIndex`
+
+ See Also
+ --------
+ at_time : Select values at a particular time of the day.
+ first : Select initial periods of time series based on a date offset.
+ last : Select final periods of time series based on a date offset.
+ DatetimeIndex.indexer_between_time : Get just the index locations for
+ values between particular times of the day.
+
Examples
--------
>>> i = pd.date_range('2018-04-09', periods=4, freq='1D20min')
@@ -7795,14 +7802,6 @@ def between_time(self, start_time, end_time, include_start=True,
A
2018-04-09 00:00:00 1
2018-04-12 01:00:00 4
-
- See Also
- --------
- at_time : Select values at a particular time of the day.
- first : Select initial periods of time series based on a date offset.
- last : Select final periods of time series based on a date offset.
- DatetimeIndex.indexer_between_time : Get just the index locations for
- values between particular times of the day.
"""
if axis is None:
axis = self._stat_axis_number
@@ -7890,6 +7889,12 @@ def resample(self, rule, how=None, axis=0, fill_method=None, closed=None,
-------
Resampler object
+ See Also
+ --------
+ groupby : Group by mapping, function, label, or list of labels.
+ Series.resample : Resample a Series.
+ DataFrame.resample : Resample a DataFrame.
+
Notes
-----
See the `user guide
@@ -7899,12 +7904,6 @@ def resample(self, rule, how=None, axis=0, fill_method=None, closed=None,
To learn more about the offset strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
- See Also
- --------
- groupby : Group by mapping, function, label, or list of labels.
- Series.resample : Resample a Series.
- DataFrame.resample: Resample a DataFrame.
-
Examples
--------
@@ -8122,14 +8121,24 @@ def first(self, offset):
Convenience method for subsetting initial periods of time series data
based on a date offset.
+ Parameters
+ ----------
+ offset : string, DateOffset, dateutil.relativedelta
+
+ Returns
+ -------
+ subset : same type as caller
+
Raises
------
TypeError
If the index is not a :class:`DatetimeIndex`
- Parameters
- ----------
- offset : string, DateOffset, dateutil.relativedelta
+ See Also
+ --------
+ last : Select final periods of time series based on a date offset.
+ at_time : Select values at a particular time of the day.
+ between_time : Select values between particular times of the day.
Examples
--------
@@ -8152,16 +8161,6 @@ def first(self, offset):
Notice the data for 3 first calender days were returned, not the first
3 days observed in the dataset, and therefore data for 2018-04-13 was
not returned.
-
- Returns
- -------
- subset : same type as caller
-
- See Also
- --------
- last : Select final periods of time series based on a date offset.
- at_time : Select values at a particular time of the day.
- between_time : Select values between particular times of the day.
"""
if not isinstance(self.index, DatetimeIndex):
raise TypeError("'first' only supports a DatetimeIndex index")
@@ -8185,14 +8184,24 @@ def last(self, offset):
Convenience method for subsetting final periods of time series data
based on a date offset.
+ Parameters
+ ----------
+ offset : string, DateOffset, dateutil.relativedelta
+
+ Returns
+ -------
+ subset : same type as caller
+
Raises
------
TypeError
If the index is not a :class:`DatetimeIndex`
- Parameters
- ----------
- offset : string, DateOffset, dateutil.relativedelta
+ See Also
+ --------
+ first : Select initial periods of time series based on a date offset.
+ at_time : Select values at a particular time of the day.
+ between_time : Select values between particular times of the day.
Examples
--------
@@ -8215,16 +8224,6 @@ def last(self, offset):
Notice the data for 3 last calender days were returned, not the last
3 observed days in the dataset, and therefore data for 2018-04-11 was
not returned.
-
- Returns
- -------
- subset : same type as caller
-
- See Also
- --------
- first : Select initial periods of time series based on a date offset.
- at_time : Select values at a particular time of the day.
- between_time : Select values between particular times of the day.
"""
if not isinstance(self.index, DatetimeIndex):
raise TypeError("'last' only supports a DatetimeIndex index")
@@ -8691,6 +8690,11 @@ def _where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
-------
wh : same type as caller
+ See Also
+ --------
+ :func:`DataFrame.%(name_other)s` : Return an object of same shape as
+ self.
+
Notes
-----
The %(name)s method is an application of the if-then idiom. For each
@@ -8705,11 +8709,6 @@ def _where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
For further details and examples see the ``%(name)s`` documentation in
:ref:`indexing <indexing.where_mask>`.
- See Also
- --------
- :func:`DataFrame.%(name_other)s` : Return an object of same shape as
- self.
-
Examples
--------
>>> s = pd.Series(range(5))
@@ -8891,14 +8890,14 @@ def slice_shift(self, periods=1, axis=0):
periods : int
Number of periods to move, can be positive or negative
+ Returns
+ -------
+ shifted : same type as caller
+
Notes
-----
While the `slice_shift` is faster than `shift`, you may pay for it
later during alignment.
-
- Returns
- -------
- shifted : same type as caller
"""
if periods == 0:
return self
@@ -8929,15 +8928,15 @@ def tshift(self, periods=1, freq=None, axis=0):
axis : int or basestring
Corresponds to the axis that contains the Index
+ Returns
+ -------
+ shifted : NDFrame
+
Notes
-----
If freq is not specified then tries to use the freq or inferred_freq
attributes of the index. If neither of those attributes exist, a
ValueError is thrown
-
- Returns
- -------
- shifted : NDFrame
"""
index = self._get_axis(axis)
@@ -9325,6 +9324,10 @@ def abs(self):
abs
Series/DataFrame containing the absolute value of each element.
+ See Also
+ --------
+ numpy.absolute : Calculate the absolute value element-wise.
+
Notes
-----
For ``complex`` inputs, ``1.2 + 1j``, the absolute value is
@@ -9376,10 +9379,6 @@ def abs(self):
0 4 10 100
2 6 30 -30
3 7 40 -50
-
- See Also
- --------
- numpy.absolute : Calculate the absolute value element-wise.
"""
return np.abs(self)
@@ -10085,14 +10084,14 @@ def transform(self, func, *args, **kwargs):
_shared_docs['valid_index'] = """
Return index for %(position)s non-NA/null value.
+ Returns
+ --------
+ scalar : type of index
+
Notes
--------
If all elements are non-NA/null, returns None.
Also returns None for empty %(klass)s.
-
- Returns
- --------
- scalar : type of index
"""
def _find_valid_index(self, how):
@@ -10303,7 +10302,6 @@ def _doc_parms(cls):
Returns
-------
%(outname)s : %(name1)s or %(name2)s\n
-%(examples)s
See Also
--------
core.window.Expanding.%(accum_func_name)s : Similar functionality
@@ -10314,6 +10312,8 @@ def _doc_parms(cls):
%(name2)s.cummin : Return cumulative minimum over %(name2)s axis.
%(name2)s.cumsum : Return cumulative sum over %(name2)s axis.
%(name2)s.cumprod : Return cumulative product over %(name2)s axis.
+
+%(examples)s
"""
_cummin_examples = """\
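The hunks in `generic.py` above all make the same move: `Returns`, `Raises`, `See Also`, and `Notes` blocks are shifted into numpydoc's canonical section order (Parameters, Returns, Raises, See Also, Notes, Examples). A minimal sketch of a checker for that ordering, with an illustrative section list and function names that are not part of pandas:

```python
# Canonical numpydoc section order these docstring moves converge on
# (illustrative subset; numpydoc defines more sections).
SECTION_ORDER = ["Parameters", "Returns", "Raises", "See Also", "Notes", "Examples"]

def sections_in_order(docstring):
    """Return the recognized section headers in order of first appearance."""
    found = []
    for line in docstring.splitlines():
        name = line.strip()
        if name in SECTION_ORDER and name not in found:
            found.append(name)
    return found

def is_canonical(docstring):
    """True if the sections appear in canonical numpydoc order."""
    ranks = [SECTION_ORDER.index(n) for n in sections_in_order(docstring)]
    return ranks == sorted(ranks)

# Mirrors the pre-PR `at_time` layout, where Raises preceded Returns.
before = """Select values at particular time of day.

Raises
------
TypeError

Returns
-------
values_at_time : same type as caller
"""
print(is_canonical(before))  # False: Raises precedes Returns
```

Running this over a docstring before and after one of the hunks above would flip the result from `False` to `True`.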
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 26e437355fa8b..2f54f61818aa6 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -634,6 +634,10 @@ def filter(self, func, dropna=True, *args, **kwargs): # noqa
dropna : Drop groups that do not pass the filter. True by default;
if False, groups that evaluate False are filled with NaNs.
+ Returns
+ -------
+ filtered : DataFrame
+
Notes
-----
Each subframe is endowed the attribute 'name' in case you need to know
@@ -651,10 +655,6 @@ def filter(self, func, dropna=True, *args, **kwargs): # noqa
1 bar 2 5.0
3 bar 4 1.0
5 bar 6 9.0
-
- Returns
- -------
- filtered : DataFrame
"""
indices = []
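Several of the strings being reordered here are shared templates (`_shared_docs`, `_pipe_template`, `_transform_template`) whose `%(klass)s`-style placeholders are filled in per class at decoration time. A simplified stand-in for that pattern — the decorator below is a sketch, not pandas' actual `Appender`/`Substitution` utilities:

```python
_shared_docs = {}

# Shared template with a %(klass)s placeholder, as in generic.py above.
_shared_docs['fillna'] = """Fill NA/NaN values.

Returns
-------
filled : %(klass)s

See Also
--------
interpolate : Fill NaN values using interpolation.
"""

def doc_substitute(template, **kwargs):
    """Attach `template` to a function with its placeholders filled in."""
    def decorator(func):
        func.__doc__ = template % kwargs
        return func
    return decorator

@doc_substitute(_shared_docs['fillna'], klass='DataFrame')
def fillna(self):
    ...

print('filled : DataFrame' in fillna.__doc__)  # True
```

Because one template feeds many classes, fixing the section order once in `_shared_docs` fixes every rendered docstring that uses it.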
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 253860d83f49e..45eaa3efa948a 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -78,18 +78,6 @@ class providing the base-class of operations.
-------
applied : Series or DataFrame
- Notes
- -----
- In the current implementation `apply` calls `func` twice on the
- first group to decide whether it can take a fast or slow code
- path. This can lead to unexpected behavior if `func` has
- side-effects, as they will take effect twice for the first
- group.
-
- Examples
- --------
- {examples}
-
See Also
--------
pipe : Apply function to the full GroupBy object instead of to each
@@ -165,6 +153,18 @@ class providing the base-class of operations.
a 1
b 0
dtype: int64
+
+ Notes
+ -----
+ In the current implementation `apply` calls `func` twice on the
+ first group to decide whether it can take a fast or slow code
+ path. This can lead to unexpected behavior if `func` has
+ side-effects, as they will take effect twice for the first
+ group.
+
+ Examples
+ --------
+ {examples}
""")
_pipe_template = """\
@@ -204,6 +204,13 @@ class providing the base-class of operations.
-------
object : the return type of `func`.
+See Also
+--------
+pandas.Series.pipe : Apply a function with arguments to a Series.
+pandas.DataFrame.pipe : Apply a function with arguments to a DataFrame.
+apply : Apply function to each group instead of to the
+ full %(klass)s object.
+
Notes
-----
See more `here
@@ -212,13 +219,6 @@ class providing the base-class of operations.
Examples
--------
%(examples)s
-
-See Also
---------
-pandas.Series.pipe : Apply a function with arguments to a series.
-pandas.DataFrame.pipe: Apply a function with arguments to a dataframe.
-apply : Apply function to each group instead of to the
- full %(klass)s object.
"""
_transform_template = """
@@ -231,6 +231,14 @@ class providing the base-class of operations.
f : function
Function to apply to each group
+Returns
+-------
+%(klass)s
+
+See Also
+--------
+aggregate, transform
+
Notes
-----
Each group is endowed the attribute 'name' in case you need to know
@@ -248,14 +256,6 @@ class providing the base-class of operations.
* f must not mutate groups. Mutation is not supported and may
produce unexpected results.
-Returns
--------
-%(klass)s
-
-See Also
---------
-aggregate, transform
-
Examples
--------
@@ -285,7 +285,6 @@ class providing the base-class of operations.
3 3 8.0
4 4 6.0
5 3 8.0
-
"""
@@ -1705,6 +1704,10 @@ def ngroup(self, ascending=True):
ascending : bool, default True
If False, number in reverse, from number of group - 1 to 0.
+ See Also
+ --------
+ .cumcount : Number the rows in each group.
+
Examples
--------
@@ -1741,10 +1744,6 @@ def ngroup(self, ascending=True):
4 2
5 0
dtype: int64
-
- See Also
- --------
- .cumcount : Number the rows in each group.
"""
with _group_selection_context(self):
@@ -1768,6 +1767,10 @@ def cumcount(self, ascending=True):
ascending : bool, default True
If False, number in reverse, from length of group - 1 to 0.
+ See Also
+ --------
+ .ngroup : Number the groups themselves.
+
Examples
--------
@@ -1797,10 +1800,6 @@ def cumcount(self, ascending=True):
4 0
5 0
dtype: int64
-
- See Also
- --------
- .ngroup : Number the groups themselves.
"""
with _group_selection_context(self):
diff --git a/pandas/core/indexes/accessors.py b/pandas/core/indexes/accessors.py
index 6138f73726e0a..82b6b22be4a79 100644
--- a/pandas/core/indexes/accessors.py
+++ b/pandas/core/indexes/accessors.py
@@ -212,6 +212,10 @@ def to_pytimedelta(self):
a : numpy.ndarray
1D array containing data with `datetime.timedelta` type.
+ See Also
+ --------
+ datetime.timedelta
+
Examples
--------
>>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='d'))
@@ -227,10 +231,6 @@ def to_pytimedelta(self):
array([datetime.timedelta(0), datetime.timedelta(1),
datetime.timedelta(2), datetime.timedelta(3),
datetime.timedelta(4)], dtype=object)
-
- See Also
- --------
- datetime.timedelta
"""
return self._get_values().to_pytimedelta()
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 88510e84a29a5..c097694800cd7 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -181,6 +181,15 @@ class Index(IndexOpsMixin, PandasObject):
tupleize_cols : bool (default: True)
When True, attempt to create a MultiIndex if possible
+ See Also
+ --------
+ RangeIndex : Index implementing a monotonic integer range.
+ CategoricalIndex : Index of :class:`Categorical` s.
+ MultiIndex : A multi-level, or hierarchical, Index.
+ IntervalIndex : An Index of :class:`Interval` s.
+ DatetimeIndex, TimedeltaIndex, PeriodIndex
+ Int64Index, UInt64Index, Float64Index
+
Notes
-----
An Index instance can **only** contain hashable objects
@@ -192,15 +201,6 @@ class Index(IndexOpsMixin, PandasObject):
>>> pd.Index(list('abc'))
Index(['a', 'b', 'c'], dtype='object')
-
- See Also
- ---------
- RangeIndex : Index implementing a monotonic integer range.
- CategoricalIndex : Index of :class:`Categorical` s.
- MultiIndex : A multi-level, or hierarchical, Index.
- IntervalIndex : An Index of :class:`Interval` s.
- DatetimeIndex, TimedeltaIndex, PeriodIndex
- Int64Index, UInt64Index, Float64Index
"""
# To hand over control to subclasses
_join_precedence = 1
@@ -2069,6 +2069,16 @@ def duplicated(self, keep='first'):
occurrence.
- ``False`` : Mark all duplicates as ``True``.
+ Returns
+ -------
+ numpy.ndarray
+
+ See Also
+ --------
+ pandas.Series.duplicated : Equivalent method on pandas.Series.
+ pandas.DataFrame.duplicated : Equivalent method on pandas.DataFrame.
+ pandas.Index.drop_duplicates : Remove duplicate values from Index.
+
Examples
--------
By default, for each set of duplicated values, the first occurrence is
@@ -2093,16 +2103,6 @@ def duplicated(self, keep='first'):
>>> idx.duplicated(keep=False)
array([ True, False, True, False, True])
-
- Returns
- -------
- numpy.ndarray
-
- See Also
- --------
- pandas.Series.duplicated : Equivalent method on pandas.Series.
- pandas.DataFrame.duplicated : Equivalent method on pandas.DataFrame.
- pandas.Index.drop_duplicates : Remove duplicate values from Index.
"""
return super(Index, self).duplicated(keep=keep)
@@ -4187,6 +4187,11 @@ def shift(self, periods=1, freq=None):
--------
Series.shift : Shift values of Series.
+ Notes
+ -----
+ This method is only implemented for datetime-like index classes,
+ i.e., DatetimeIndex, PeriodIndex and TimedeltaIndex.
+
Examples
--------
Put the first 5 month starts of 2011 into an index.
@@ -4211,11 +4216,6 @@ def shift(self, periods=1, freq=None):
DatetimeIndex(['2011-11-01', '2011-12-01', '2012-01-01', '2012-02-01',
'2012-03-01'],
dtype='datetime64[ns]', freq='MS')
-
- Notes
- -----
- This method is only implemented for datetime-like index classes,
- i.e., DatetimeIndex, PeriodIndex and TimedeltaIndex.
"""
raise NotImplementedError("Not supported for type %s" %
type(self).__name__)
@@ -4768,6 +4768,10 @@ def slice_locs(self, start=None, end=None, step=None, kind=None):
-------
start, end : int
+ See Also
+ --------
+ Index.get_loc : Get location for a single label.
+
Notes
-----
This method only works if the index is monotonic or unique.
@@ -4777,10 +4781,6 @@ def slice_locs(self, start=None, end=None, step=None, kind=None):
>>> idx = pd.Index(list('abcd'))
>>> idx.slice_locs(start='b', end='c')
(1, 3)
-
- See Also
- --------
- Index.get_loc : Get location for a single label.
"""
inc = (step is None or step >= 0)
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index fd4a1527c07b7..c30e64fcf04da 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -1526,6 +1526,10 @@ def bdate_range(start=None, end=None, periods=None, freq='B', tz=None,
**kwargs
For compatibility. Has no effect on the result.
+ Returns
+ -------
+ DatetimeIndex
+
Notes
-----
Of the four parameters: ``start``, ``end``, ``periods``, and ``freq``,
@@ -1536,10 +1540,6 @@ def bdate_range(start=None, end=None, periods=None, freq='B', tz=None,
To learn more about the frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
- Returns
- -------
- DatetimeIndex
-
Examples
--------
Note how the two weekend days are skipped in the result.
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 444f9e21b0bdc..e526aa72affee 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -486,6 +486,12 @@ def is_overlapping(self):
bool
Boolean indicating if the IntervalIndex has overlapping intervals.
+ See Also
+ --------
+ Interval.overlaps : Check whether two Interval objects overlap.
+ IntervalIndex.overlaps : Check an IntervalIndex elementwise for
+ overlaps.
+
Examples
--------
>>> index = pd.IntervalIndex.from_tuples([(0, 2), (1, 3), (4, 5)])
@@ -515,12 +521,6 @@ def is_overlapping(self):
dtype='interval[int64]')
>>> index.is_overlapping
False
-
- See Also
- --------
- Interval.overlaps : Check whether two Interval objects overlap.
- IntervalIndex.overlaps : Check an IntervalIndex elementwise for
- overlaps.
"""
# GH 23309
return self._engine.is_overlapping
@@ -1180,6 +1180,14 @@ def interval_range(start=None, end=None, periods=None, freq=None,
Whether the intervals are closed on the left-side, right-side, both
or neither.
+ Returns
+ -------
+ rng : IntervalIndex
+
+ See Also
+ --------
+ IntervalIndex : An Index of intervals that are all closed on the same side.
+
Notes
-----
Of the four parameters ``start``, ``end``, ``periods``, and ``freq``,
@@ -1190,10 +1198,6 @@ def interval_range(start=None, end=None, periods=None, freq=None,
To learn more about datetime-like frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
- Returns
- -------
- rng : IntervalIndex
-
Examples
--------
Numeric ``start`` and ``end`` is supported.
@@ -1241,10 +1245,6 @@ def interval_range(start=None, end=None, periods=None, freq=None,
>>> pd.interval_range(end=5, periods=4, closed='both')
IntervalIndex([[1, 2], [2, 3], [3, 4], [4, 5]]
closed='both', dtype='interval[int64]')
-
- See Also
- --------
- IntervalIndex : An Index of intervals that are all closed on the same side.
"""
start = com.maybe_box_datetimelike(start)
end = com.maybe_box_datetimelike(end)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 5e26a3c6c439e..34fca8fe58449 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -145,34 +145,6 @@ class MultiIndex(Index):
verify_integrity : boolean, default True
Check that the levels/codes are consistent and valid
- Examples
- ---------
- A new ``MultiIndex`` is typically constructed using one of the helper
- methods :meth:`MultiIndex.from_arrays`, :meth:`MultiIndex.from_product`
- and :meth:`MultiIndex.from_tuples`. For example (using ``.from_arrays``):
-
- >>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']]
- >>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color'))
- MultiIndex(levels=[[1, 2], ['blue', 'red']],
- labels=[[0, 0, 1, 1], [1, 0, 1, 0]],
- names=['number', 'color'])
-
- See further examples for how to construct a MultiIndex in the doc strings
- of the mentioned helper methods.
-
- Notes
- -----
- See the `user guide
- <http://pandas.pydata.org/pandas-docs/stable/advanced.html>`_ for more.
-
- See Also
- --------
- MultiIndex.from_arrays : Convert list of arrays to MultiIndex.
- MultiIndex.from_product : Create a MultiIndex from the cartesian product
- of iterables.
- MultiIndex.from_tuples : Convert list of tuples to a MultiIndex.
- Index : The base pandas Index type.
-
Attributes
----------
names
@@ -196,6 +168,34 @@ class MultiIndex(Index):
swaplevel
reorder_levels
remove_unused_levels
+
+ See Also
+ --------
+ MultiIndex.from_arrays : Convert list of arrays to MultiIndex.
+ MultiIndex.from_product : Create a MultiIndex from the cartesian product
+ of iterables.
+ MultiIndex.from_tuples : Convert list of tuples to a MultiIndex.
+ Index : The base pandas Index type.
+
+ Notes
+ -----
+ See the `user guide
+ <http://pandas.pydata.org/pandas-docs/stable/advanced.html>`_ for more.
+
+ Examples
+ ---------
+ A new ``MultiIndex`` is typically constructed using one of the helper
+ methods :meth:`MultiIndex.from_arrays`, :meth:`MultiIndex.from_product`
+ and :meth:`MultiIndex.from_tuples`. For example (using ``.from_arrays``):
+
+ >>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']]
+ >>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color'))
+ MultiIndex(levels=[[1, 2], ['blue', 'red']],
+ labels=[[0, 0, 1, 1], [1, 0, 1, 0]],
+ names=['number', 'color'])
+
+ See further examples for how to construct a MultiIndex in the doc strings
+ of the mentioned helper methods.
"""
# initialize to zero-length tuples to make everything work
@@ -303,16 +303,16 @@ def from_arrays(cls, arrays, sortorder=None, names=None):
-------
index : MultiIndex
- Examples
- --------
- >>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']]
- >>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color'))
-
See Also
--------
MultiIndex.from_tuples : Convert list of tuples to MultiIndex.
MultiIndex.from_product : Make a MultiIndex from cartesian product
of iterables.
+
+ Examples
+ --------
+ >>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']]
+ >>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color'))
"""
if not is_list_like(arrays):
raise TypeError("Input must be a list / sequence of array-likes.")
@@ -351,17 +351,17 @@ def from_tuples(cls, tuples, sortorder=None, names=None):
-------
index : MultiIndex
- Examples
- --------
- >>> tuples = [(1, u'red'), (1, u'blue'),
- (2, u'red'), (2, u'blue')]
- >>> pd.MultiIndex.from_tuples(tuples, names=('number', 'color'))
-
See Also
--------
MultiIndex.from_arrays : Convert list of arrays to MultiIndex
MultiIndex.from_product : Make a MultiIndex from cartesian product
of iterables
+
+ Examples
+ --------
+ >>> tuples = [(1, u'red'), (1, u'blue'),
+ (2, u'red'), (2, u'blue')]
+ >>> pd.MultiIndex.from_tuples(tuples, names=('number', 'color'))
"""
if not is_list_like(tuples):
raise TypeError('Input must be a list / sequence of tuple-likes.')
@@ -404,6 +404,11 @@ def from_product(cls, iterables, sortorder=None, names=None):
-------
index : MultiIndex
+ See Also
+ --------
+ MultiIndex.from_arrays : Convert list of arrays to MultiIndex.
+ MultiIndex.from_tuples : Convert list of tuples to MultiIndex.
+
Examples
--------
>>> numbers = [0, 1, 2]
@@ -413,11 +418,6 @@ def from_product(cls, iterables, sortorder=None, names=None):
MultiIndex(levels=[[0, 1, 2], [u'green', u'purple']],
labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]],
names=[u'number', u'color'])
-
- See Also
- --------
- MultiIndex.from_arrays : Convert list of arrays to MultiIndex.
- MultiIndex.from_tuples : Convert list of tuples to MultiIndex.
"""
from pandas.core.arrays.categorical import _factorize_from_iterables
from pandas.core.reshape.util import cartesian_product
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index 491176bc586a8..9d6a56200df6e 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -147,13 +147,13 @@ def insert(self, loc, item):
-------
None
- Notes
- -----
- An Index instance can **only** contain hashable objects.
-
See Also
--------
Index : The base pandas Index type.
+
+ Notes
+ -----
+ An Index instance can **only** contain hashable objects.
"""
_int64_descr_args = dict(
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 3d69a0a84f7ae..23a8ab54c4e6d 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -989,6 +989,10 @@ def period_range(start=None, end=None, periods=None, freq='D', name=None):
name : string, default None
Name of the resulting PeriodIndex
+ Returns
+ -------
+ prng : PeriodIndex
+
Notes
-----
Of the three parameters: ``start``, ``end``, and ``periods``, exactly two
@@ -997,10 +1001,6 @@ def period_range(start=None, end=None, periods=None, freq='D', name=None):
To learn more about the frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
- Returns
- -------
- prng : PeriodIndex
-
Examples
--------
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index 364aadb9523f0..0da924de244ed 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -46,11 +46,6 @@ class RangeIndex(Int64Index):
copy : bool, default False
Unused, accepted for homogeneity with other index types.
- See Also
- --------
- Index : The base pandas Index type.
- Int64Index : Index of int64 data.
-
Attributes
----------
None
@@ -58,6 +53,11 @@ class RangeIndex(Int64Index):
Methods
-------
from_range
+
+ See Also
+ --------
+ Index : The base pandas Index type.
+ Int64Index : Index of int64 data.
"""
_typ = 'rangeindex'
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 1c84e592d3a0d..5d52696992c30 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -80,19 +80,6 @@ class TimedeltaIndex(TimedeltaArray, DatetimeIndexOpsMixin,
name : object
Name to be stored in the index
- Notes
- -----
-
- To learn more about the frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
-
- See Also
- ---------
- Index : The base pandas Index type.
- Timedelta : Represents a duration between two dates or times.
- DatetimeIndex : Index of datetime64 data.
- PeriodIndex : Index of Period data.
-
Attributes
----------
days
@@ -110,6 +97,19 @@ class TimedeltaIndex(TimedeltaArray, DatetimeIndexOpsMixin,
floor
ceil
to_frame
+
+ See Also
+ --------
+ Index : The base pandas Index type.
+ Timedelta : Represents a duration between two dates or times.
+ DatetimeIndex : Index of datetime64 data.
+ PeriodIndex : Index of Period data.
+
+ Notes
+ -----
+
+ To learn more about the frequency strings, please see `this link
+ <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
_typ = 'timedeltaindex'
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 0914324a03f84..ab4ad693a462e 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1559,6 +1559,11 @@ class _LocIndexer(_LocationIndexer):
See more at :ref:`Selection by Label <indexing.label>`
+ Raises
+ ------
+ KeyError
+ When any items are not found.
+
See Also
--------
DataFrame.at : Access a single value for a row/column label pair.
@@ -1765,11 +1770,6 @@ class _LocIndexer(_LocationIndexer):
sidewinder mark i 10 20
mark ii 1 4
viper mark ii 7 1
-
- Raises
- ------
- KeyError:
- when any items are not found
"""
_valid_types = ("labels (MUST BE IN THE INDEX), slices of labels (BOTH "
@@ -2291,6 +2291,11 @@ class _AtIndexer(_ScalarAccessIndexer):
``at`` if you only need to get or set a single value in a DataFrame
or Series.
+ Raises
+ ------
+ KeyError
+ When label does not exist in DataFrame.
+
See Also
--------
DataFrame.iat : Access a single value for a row/column pair by integer
@@ -2323,11 +2328,6 @@ class _AtIndexer(_ScalarAccessIndexer):
>>> df.loc[5].at['B']
4
-
- Raises
- ------
- KeyError
- When label does not exist in DataFrame
"""
_takeable = False
@@ -2362,6 +2362,11 @@ class _iAtIndexer(_ScalarAccessIndexer):
``iat`` if you only need to get or set a single value in a DataFrame
or Series.
+ Raises
+ ------
+ IndexError
+ When integer position is out of bounds.
+
See Also
--------
DataFrame.at : Access a single value for a row/column label pair.
@@ -2393,11 +2398,6 @@ class _iAtIndexer(_ScalarAccessIndexer):
>>> df.loc[0].iat[1]
2
-
- Raises
- ------
- IndexError
- When integer position is out of bounds
"""
_takeable = True
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index bd5268808e7b2..41e3f4581587e 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -468,6 +468,10 @@ def _get_op_name(op, special):
-------
result : Series
+See Also
+--------
+Series.{reverse}
+
Examples
--------
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
@@ -491,10 +495,6 @@ def _get_op_name(op, special):
d 1.0
e NaN
dtype: float64
-
-See Also
---------
-Series.{reverse}
"""
_arith_doc_FRAME = """
@@ -515,13 +515,13 @@ def _get_op_name(op, special):
Broadcast across a level, matching Index values on the
passed MultiIndex level
-Notes
------
-Mismatched indices will be unioned together
-
Returns
-------
result : DataFrame
+
+Notes
+-----
+Mismatched indices will be unioned together.
"""
_flex_doc_FRAME = """
@@ -549,10 +549,6 @@ def _get_op_name(op, special):
If data in both corresponding DataFrame locations is missing
the result will be missing.
-Notes
------
-Mismatched indices will be unioned together.
-
Returns
-------
DataFrame
@@ -569,6 +565,10 @@ def _get_op_name(op, special):
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.
+Notes
+-----
+Mismatched indices will be unioned together.
+
Examples
--------
>>> df = pd.DataFrame({{'angles': [0, 3, 4],
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index bb3412a3d7c0c..540192d1a592c 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -1012,6 +1012,10 @@ def apply(self, func, axis='major', **kwargs):
axes
Additional keyword arguments will be passed as keywords to the function
+ Returns
+ -------
+ result : Panel, DataFrame, or Series
+
Examples
--------
@@ -1032,10 +1036,6 @@ def apply(self, func, axis='major', **kwargs):
items x major), as a Series
>>> p.apply(lambda x: x.shape, axis=(0,1)) # doctest: +SKIP
-
- Returns
- -------
- result : Panel, DataFrame, or Series
"""
if kwargs and not isinstance(func, np.ufunc):
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index f2cf17f8f060d..dc1f94c479a37 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -212,6 +212,12 @@ def pipe(self, func, *args, **kwargs):
return super(Resampler, self).pipe(func, *args, **kwargs)
_agg_doc = dedent("""
+ See Also
+ --------
+ pandas.DataFrame.groupby.aggregate
+ pandas.DataFrame.resample.transform
+ pandas.DataFrame.aggregate
+
Examples
--------
>>> s = pd.Series([1,2,3,4,5],
@@ -245,12 +251,6 @@ def pipe(self, func, *args, **kwargs):
2013-01-01 00:00:00 3 2.121320
2013-01-01 00:00:02 7 4.949747
2013-01-01 00:00:04 5 NaN
-
- See Also
- --------
- pandas.DataFrame.groupby.aggregate
- pandas.DataFrame.resample.transform
- pandas.DataFrame.aggregate
""")
@Appender(_agg_doc)
@@ -286,13 +286,13 @@ def transform(self, arg, *args, **kwargs):
func : function
To apply to each group. Should return a Series with the same index
- Examples
- --------
- >>> resampled.transform(lambda x: (x - x.mean()) / x.std())
-
Returns
-------
transformed : Series
+
+ Examples
+ --------
+ >>> resampled.transform(lambda x: (x - x.mean()) / x.std())
"""
return self._selected_obj.groupby(self.groupby).transform(
arg, *args, **kwargs)
@@ -635,6 +635,10 @@ def fillna(self, method, limit=None):
pandas.DataFrame.fillna : Fill NaN values in the DataFrame using the
specified method, which can be 'bfill' and 'ffill'.
+ References
+ ----------
+ .. [1] https://en.wikipedia.org/wiki/Imputation_(statistics)
+
Examples
--------
Resampling a Series:
@@ -746,10 +750,6 @@ def fillna(self, method, limit=None):
2018-01-01 01:00:00 NaN 3
2018-01-01 01:30:00 6.0 5
2018-01-01 02:00:00 6.0 5
-
- References
- ----------
- .. [1] https://en.wikipedia.org/wiki/Imputation_(statistics)
"""
return self._upsample(method, limit=limit)
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index b13b22d2e8266..53671e00e88b4 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -87,6 +87,13 @@ def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
``DataFrame``, a ``DataFrame`` is returned. When concatenating along
the columns (axis=1), a ``DataFrame`` is returned.
+ See Also
+ --------
+ Series.append
+ DataFrame.append
+ DataFrame.join
+ DataFrame.merge
+
Notes
-----
The keys, levels, and names arguments are all optional.
@@ -95,13 +102,6 @@ def concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
pandas objects can be found `here
<http://pandas.pydata.org/pandas-docs/stable/merging.html>`__.
- See Also
- --------
- Series.append
- DataFrame.append
- DataFrame.join
- DataFrame.merge
-
Examples
--------
Combine two ``Series``.
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index aafc0de64ee12..312a108ad3380 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -232,6 +232,12 @@ def wide_to_long(df, stubnames, i, j, sep="", suffix=r'\d+'):
A DataFrame that contains each stub name as a variable, with new index
(i, j)
+ Notes
+ -----
+ All extra variables are left untouched. This simply uses
+ `pandas.melt` under the hood, but is hard-coded to "do the right thing"
+ in a typical case.
+
Examples
--------
>>> np.random.seed(123)
@@ -403,12 +409,6 @@ def wide_to_long(df, stubnames, i, j, sep="", suffix=r'\d+'):
two 3.4
3 one 2.1
two 2.9
-
- Notes
- -----
- All extra variables are left untouched. This simply uses
- `pandas.melt` under the hood, but is hard-coded to "do the right thing"
- in a typical case.
"""
def get_var_names(df, stub, sep, suffix):
regex = r'^{stub}{sep}{suffix}$'.format(
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index c0c016f9a8caa..e6e6c1c99b509 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -169,6 +169,17 @@ def merge_ordered(left, right, on=None,
.. versionadded:: 0.19.0
+ Returns
+ -------
+ merged : DataFrame
+ The output type will be the same as 'left', if it is a subclass
+ of DataFrame.
+
+ See Also
+ --------
+ merge
+ merge_asof
+
Examples
--------
>>> A >>> B
@@ -192,17 +203,6 @@ def merge_ordered(left, right, on=None,
7 b c 2 2.0
8 b d 2 3.0
9 b e 3 3.0
-
- Returns
- -------
- merged : DataFrame
- The output type will the be same as 'left', if it is a subclass
- of DataFrame.
-
- See Also
- --------
- merge
- merge_asof
"""
def _merger(x, y):
# perform the ordered merge operation
@@ -315,6 +315,11 @@ def merge_asof(left, right, on=None,
-------
merged : DataFrame
+ See Also
+ --------
+ merge
+ merge_ordered
+
Examples
--------
>>> left = pd.DataFrame({'a': [1, 5, 10], 'left_val': ['a', 'b', 'c']})
@@ -444,11 +449,6 @@ def merge_asof(left, right, on=None,
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
-
- See Also
- --------
- merge
- merge_ordered
"""
op = _AsOfMerge(left, right,
on=on, left_on=left_on, right_on=right_on,
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 84faab017163f..61ac5d9ed6a2e 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -433,6 +433,10 @@ def crosstab(index, columns, values=None, rownames=None, colnames=None,
.. versionadded:: 0.18.1
+ Returns
+ -------
+ crosstab : DataFrame
+
Notes
-----
Any Series passed will have their name attributes used unless row or column
@@ -484,10 +488,6 @@ def crosstab(index, columns, values=None, rownames=None, colnames=None,
a 1 0 0
b 0 1 0
c 0 0 0
-
- Returns
- -------
- crosstab : DataFrame
"""
index = com.maybe_make_list(index)
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index ba86d3d9ba25f..ff4f9b7847019 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -724,6 +724,10 @@ def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False,
-------
dummies : DataFrame or SparseDataFrame
+ See Also
+ --------
+ Series.str.get_dummies
+
Examples
--------
>>> s = pd.Series(list('abca'))
@@ -779,10 +783,6 @@ def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False,
0 1.0 0.0 0.0
1 0.0 1.0 0.0
2 0.0 0.0 1.0
-
- See Also
- --------
- Series.str.get_dummies
"""
from pandas.core.reshape.concat import concat
from itertools import cycle
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 6d951a7a5228a..4f9465354a47b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -542,6 +542,10 @@ def nonzero(self):
but it will always be a one-item tuple because series only have
one dimension.
+ See Also
+ --------
+ numpy.nonzero
+
Examples
--------
>>> s = pd.Series([0, 3, 0, 4])
@@ -560,10 +564,6 @@ def nonzero(self):
b 3
d 4
dtype: int64
-
- See Also
- --------
- numpy.nonzero
"""
return self._values.nonzero()
@@ -1646,6 +1646,16 @@ def duplicated(self, keep='first'):
occurrence.
- ``False`` : Mark all duplicates as ``True``.
+ Returns
+ -------
+ pandas.core.series.Series
+
+ See Also
+ --------
+ Index.duplicated : Equivalent method on pandas.Index.
+ DataFrame.duplicated : Equivalent method on pandas.DataFrame.
+ Series.drop_duplicates : Remove duplicate values from Series.
+
Examples
--------
By default, for each set of duplicated values, the first occurrence is
@@ -1690,16 +1700,6 @@ def duplicated(self, keep='first'):
3 False
4 True
dtype: bool
-
- Returns
- -------
- pandas.core.series.Series
-
- See Also
- --------
- Index.duplicated : Equivalent method on pandas.Index.
- DataFrame.duplicated : Equivalent method on pandas.DataFrame.
- Series.drop_duplicates : Remove duplicate values from Series.
"""
return super(Series, self).duplicated(keep=keep)
@@ -1731,12 +1731,6 @@ def idxmin(self, axis=0, skipna=True, *args, **kwargs):
ValueError
If the Series is empty.
- Notes
- -----
- This method is the Series version of ``ndarray.argmin``. This method
- returns the label of the minimum, while ``ndarray.argmin`` returns
- the position. To get the position, use ``series.values.argmin()``.
-
See Also
--------
numpy.argmin : Return indices of the minimum values
@@ -1746,6 +1740,12 @@ def idxmin(self, axis=0, skipna=True, *args, **kwargs):
Series.idxmax : Return index *label* of the first occurrence
of maximum of values.
+ Notes
+ -----
+ This method is the Series version of ``ndarray.argmin``. This method
+ returns the label of the minimum, while ``ndarray.argmin`` returns
+ the position. To get the position, use ``series.values.argmin()``.
+
Examples
--------
>>> s = pd.Series(data=[1, None, 4, 1],
@@ -1800,12 +1800,6 @@ def idxmax(self, axis=0, skipna=True, *args, **kwargs):
ValueError
If the Series is empty.
- Notes
- -----
- This method is the Series version of ``ndarray.argmax``. This method
- returns the label of the maximum, while ``ndarray.argmax`` returns
- the position. To get the position, use ``series.values.argmax()``.
-
See Also
--------
numpy.argmax : Return indices of the maximum values
@@ -1815,6 +1809,12 @@ def idxmax(self, axis=0, skipna=True, *args, **kwargs):
Series.idxmin : Return index *label* of the first occurrence
of minimum of values.
+ Notes
+ -----
+ This method is the Series version of ``ndarray.argmax``. This method
+ returns the label of the maximum, while ``ndarray.argmax`` returns
+ the position. To get the position, use ``series.values.argmax()``.
+
Examples
--------
>>> s = pd.Series(data=[1, None, 4, 3, 4],
@@ -1917,6 +1917,11 @@ def quantile(self, q=0.5, interpolation='linear'):
if ``q`` is an array, a Series will be returned where the
index is ``q`` and the values are the quantiles.
+ See Also
+ --------
+ core.window.Rolling.quantile
+ numpy.percentile
+
Examples
--------
>>> s = pd.Series([1, 2, 3, 4])
@@ -1927,11 +1932,6 @@ def quantile(self, q=0.5, interpolation='linear'):
0.50 2.50
0.75 3.25
dtype: float64
-
- See Also
- --------
- core.window.Rolling.quantile
- numpy.percentile
"""
self._check_percentile(q)
@@ -2235,21 +2235,21 @@ def append(self, to_append, ignore_index=False, verify_integrity=False):
verify_integrity : boolean, default False
If True, raise Exception on creating index with duplicates
- Notes
- -----
- Iteratively appending to a Series can be more computationally intensive
- than a single concatenate. A better solution is to append values to a
- list and then concatenate the list with the original Series all at
- once.
+ Returns
+ -------
+ appended : Series
See Also
--------
concat : General function to concatenate DataFrame, Series
or Panel objects.
- Returns
- -------
- appended : Series
+ Notes
+ -----
+ Iteratively appending to a Series can be more computationally intensive
+ than a single concatenate. A better solution is to append values to a
+ list and then concatenate the list with the original Series all at
+ once.
Examples
--------
@@ -2922,17 +2922,17 @@ def nlargest(self, n=5, keep='first'):
Series
The `n` largest values in the Series, sorted in decreasing order.
- Notes
- -----
- Faster than ``.sort_values(ascending=False).head(n)`` for small `n`
- relative to the size of the ``Series`` object.
-
See Also
--------
Series.nsmallest: Get the `n` smallest elements.
Series.sort_values: Sort Series by values.
Series.head: Return the first `n` rows.
+ Notes
+ -----
+ Faster than ``.sort_values(ascending=False).head(n)`` for small `n`
+ relative to the size of the ``Series`` object.
+
Examples
--------
>>> countries_population = {"Italy": 59000000, "France": 65000000,
@@ -3018,17 +3018,17 @@ def nsmallest(self, n=5, keep='first'):
Series
The `n` smallest values in the Series, sorted in increasing order.
- Notes
- -----
- Faster than ``.sort_values().head(n)`` for small `n` relative to
- the size of the ``Series`` object.
-
See Also
--------
Series.nlargest: Get the `n` largest elements.
Series.sort_values: Sort Series by values.
Series.head: Return the first `n` rows.
+ Notes
+ -----
+ Faster than ``.sort_values().head(n)`` for small `n` relative to
+ the size of the ``Series`` object.
+
Examples
--------
>>> countries_population = {"Italy": 59000000, "France": 65000000,
@@ -3149,6 +3149,10 @@ def unstack(self, level=-1, fill_value=None):
.. versionadded:: 0.18.0
+ Returns
+ -------
+ unstacked : DataFrame
+
Examples
--------
>>> s = pd.Series([1, 2, 3, 4],
@@ -3169,10 +3173,6 @@ def unstack(self, level=-1, fill_value=None):
one two
a 1 3
b 2 4
-
- Returns
- -------
- unstacked : DataFrame
"""
from pandas.core.reshape.reshape import unstack
return unstack(self, level, fill_value)
@@ -3275,6 +3275,12 @@ def _gotitem(self, key, ndim, subset=None):
return self
_agg_doc = dedent("""
+ See Also
+ --------
+ Series.apply : Invoke function on a Series.
+ Series.transform : Transform function producing
+ a Series with like indexes.
+
Examples
--------
@@ -3293,12 +3299,6 @@ def _gotitem(self, key, ndim, subset=None):
min 1
max 4
dtype: int64
-
- See Also
- --------
- Series.apply : Invoke function on a Series.
- Series.transform : Transform function producing
- a Series with like indexes.
""")
@Appender(_agg_doc)
@@ -3637,6 +3637,11 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
-------
dropped : pandas.Series
+ Raises
+ ------
+ KeyError
+ If none of the labels are found in the index.
+
See Also
--------
Series.reindex : Return only specified index labels of Series.
@@ -3644,11 +3649,6 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
Series.drop_duplicates : Return Series with duplicate values removed.
DataFrame.drop : Drop specified labels from rows or columns.
- Raises
- ------
- KeyError
- If none of the labels are found in the index.
-
Examples
--------
>>> s = pd.Series(data=np.arange(3), index=['A','B','C'])
@@ -3893,15 +3893,15 @@ def between(self, left, right, inclusive=True):
Series
Each element will be a boolean.
- Notes
- -----
- This function is equivalent to ``(left <= ser) & (ser <= right)``
-
See Also
--------
Series.gt : Greater than of series and other.
Series.lt : Less than of series and other.
+ Notes
+ -----
+ This function is equivalent to ``(left <= ser) & (ser <= right)``
+
Examples
--------
>>> s = pd.Series([2, 0, 4, 8, np.nan])
@@ -3991,13 +3991,13 @@ def from_csv(cls, path, sep=',', parse_dates=True, header=None,
datetime format based on the first datetime string. If the format
can be inferred, there often will be a large parsing speed-up.
- See Also
- --------
- read_csv
-
Returns
-------
y : Series
+
+ See Also
+ --------
+ read_csv
"""
# We're calling `DataFrame.from_csv` in the implementation,
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index d3d38d26ee86b..20ac13ed0ef71 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -123,17 +123,17 @@ def str_count(arr, pat, flags=0):
counts : Series or Index
Same type as the calling object containing the integer counts.
+ See Also
+ --------
+ re : Standard library module for regular expressions.
+ str.count : Standard library version, without regular expression support.
+
Notes
-----
Some characters need to be escaped when passing in `pat`.
eg. ``'$'`` has a special meaning in regex and must be escaped when
finding this literal character.
- See Also
- --------
- re : Standard library module for regular expressions.
- str.count : Standard library version, without regular expression support.
-
Examples
--------
>>> s = pd.Series(['A', 'B', 'Aaba', 'Baca', np.nan, 'CABA', 'cat'])
@@ -978,6 +978,10 @@ def str_get_dummies(arr, sep='|'):
-------
dummies : DataFrame
+ See Also
+ --------
+ get_dummies
+
Examples
--------
>>> pd.Series(['a|b', 'a', 'a|c']).str.get_dummies()
@@ -991,10 +995,6 @@ def str_get_dummies(arr, sep='|'):
0 1 1 0
1 0 0 0
2 1 0 1
-
- See Also
- --------
- get_dummies
"""
arr = arr.fillna('')
try:
@@ -1039,16 +1039,16 @@ def str_join(arr, sep):
AttributeError
If the supplied Series contains neither strings nor lists.
- Notes
- -----
- If any of the list items is not a string object, the result of the join
- will be `NaN`.
-
See Also
--------
str.join : Standard library version of this method.
Series.str.split : Split strings around given separator/delimiter.
+ Notes
+ -----
+ If any of the list items is not a string object, the result of the join
+ will be `NaN`.
+
Examples
--------
Example with a list that contains non-string elements.
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 86bb4e4b94382..4fca5216e24f3 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -479,6 +479,11 @@ def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False,
return will have datetime.datetime type (or corresponding
array/Series).
+ See Also
+ --------
+ pandas.DataFrame.astype : Cast argument to a specified dtype.
+ pandas.to_timedelta : Convert argument to timedelta.
+
Examples
--------
Assembling a datetime from multiple columns of a DataFrame. The keys can be
@@ -542,11 +547,6 @@ def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False,
0 1960-01-02
1 1960-01-03
2 1960-01-04
-
- See Also
- --------
- pandas.DataFrame.astype : Cast argument to a specified dtype.
- pandas.to_timedelta : Convert argument to timedelta.
"""
if arg is None:
return None
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index 1d4973de92b99..803723dab46ff 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -53,6 +53,13 @@ def to_numeric(arg, errors='raise', downcast=None):
ret : numeric if parsing succeeded.
Return type depends on input. Series if Series, otherwise ndarray
+ See Also
+ --------
+ pandas.DataFrame.astype : Cast argument to a specified dtype.
+ pandas.to_datetime : Convert argument to datetime.
+ pandas.to_timedelta : Convert argument to timedelta.
+ numpy.ndarray.astype : Cast a numpy array to a specified type.
+
Examples
--------
Take separate series and convert to numeric, coercing when told to
@@ -86,13 +93,6 @@ def to_numeric(arg, errors='raise', downcast=None):
2 2.0
3 -3.0
dtype: float64
-
- See Also
- --------
- pandas.DataFrame.astype : Cast argument to a specified dtype.
- pandas.to_datetime : Convert argument to datetime.
- pandas.to_timedelta : Convert argument to timedelta.
- numpy.ndarray.astype : Cast a numpy array to a specified type.
"""
if downcast not in (None, 'integer', 'signed', 'unsigned', 'float'):
raise ValueError('invalid downcasting method provided')
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 6c4dde54bd061..8c4803a732dd8 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -478,6 +478,40 @@ class Window(_Window):
-------
a Window or Rolling sub-classed for the particular operation
+ See Also
+ --------
+ expanding : Provides expanding transformations.
+ ewm : Provides exponential weighted functions.
+
+ Notes
+ -----
+ By default, the result is set to the right edge of the window. This can be
+ changed to the center of the window by setting ``center=True``.
+
+ To learn more about the offsets & frequency strings, please see `this link
+ <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
+
+ The recognized win_types are:
+
+ * ``boxcar``
+ * ``triang``
+ * ``blackman``
+ * ``hamming``
+ * ``bartlett``
+ * ``parzen``
+ * ``bohman``
+ * ``blackmanharris``
+ * ``nuttall``
+ * ``barthann``
+ * ``kaiser`` (needs beta)
+ * ``gaussian`` (needs std)
+ * ``general_gaussian`` (needs power, width)
+ * ``slepian`` (needs width).
+
+ If ``win_type=None`` all points are evenly weighted. To learn more about
+ different window types see `scipy.signal window functions
+ <https://docs.scipy.org/doc/scipy/reference/signal.html#window-functions>`__.
+
Examples
--------
@@ -550,40 +584,6 @@ class Window(_Window):
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
-
- Notes
- -----
- By default, the result is set to the right edge of the window. This can be
- changed to the center of the window by setting ``center=True``.
-
- To learn more about the offsets & frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
-
- The recognized win_types are:
-
- * ``boxcar``
- * ``triang``
- * ``blackman``
- * ``hamming``
- * ``bartlett``
- * ``parzen``
- * ``bohman``
- * ``blackmanharris``
- * ``nuttall``
- * ``barthann``
- * ``kaiser`` (needs beta)
- * ``gaussian`` (needs std)
- * ``general_gaussian`` (needs power, width)
- * ``slepian`` (needs width).
-
- If ``win_type=None`` all points are evenly weighted. To learn more about
- different window types see `scipy.signal window functions
- <https://docs.scipy.org/doc/scipy/reference/signal.html#window-functions>`__.
-
- See Also
- --------
- expanding : Provides expanding transformations.
- ewm : Provides exponential weighted functions.
"""
def validate(self):
@@ -1596,6 +1596,11 @@ def _validate_freq(self):
"index".format(self.window))
_agg_doc = dedent("""
+ See Also
+ --------
+ pandas.Series.rolling
+ pandas.DataFrame.rolling
+
Examples
--------
@@ -1638,11 +1643,6 @@ def _validate_freq(self):
7 2.718061 -1.647453
8 -0.289082 -1.647453
9 0.212668 -1.647453
-
- See Also
- --------
- pandas.Series.rolling
- pandas.DataFrame.rolling
""")
@Appender(_agg_doc)
@@ -1821,6 +1821,16 @@ class Expanding(_Rolling_and_Expanding):
-------
a Window sub-classed for the particular operation
+ See Also
+ --------
+ rolling : Provides rolling window calculations.
+ ewm : Provides exponential weighted functions.
+
+ Notes
+ -----
+ By default, the result is set to the right edge of the window. This can be
+ changed to the center of the window by setting ``center=True``.
+
Examples
--------
@@ -1839,16 +1849,6 @@ class Expanding(_Rolling_and_Expanding):
2 3.0
3 3.0
4 7.0
-
- Notes
- -----
- By default, the result is set to the right edge of the window. This can be
- changed to the center of the window by setting ``center=True``.
-
- See Also
- --------
- rolling : Provides rolling window calculations.
- ewm : Provides exponential weighted functions.
"""
_attributes = ['min_periods', 'center', 'axis']
@@ -2116,24 +2116,10 @@ class EWM(_Rolling):
-------
a Window sub-classed for the particular operation
- Examples
+ See Also
--------
-
- >>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
- B
- 0 0.0
- 1 1.0
- 2 2.0
- 3 NaN
- 4 4.0
-
- >>> df.ewm(com=0.5).mean()
- B
- 0 0.000000
- 1 0.750000
- 2 1.615385
- 3 1.615385
- 4 3.670213
+ rolling : Provides rolling window calculations.
+ expanding : Provides expanding transformations.
Notes
-----
@@ -2162,10 +2148,24 @@ class EWM(_Rolling):
More details can be found at
http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-windows
- See Also
+ Examples
--------
- rolling : Provides rolling window calculations.
- expanding : Provides expanding transformations.
+
+ >>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
+ B
+ 0 0.0
+ 1 1.0
+ 2 2.0
+ 3 NaN
+ 4 4.0
+
+ >>> df.ewm(com=0.5).mean()
+ B
+ 0 0.000000
+ 1 0.750000
+ 2 1.615385
+ 3 1.615385
+ 4 3.670213
"""
_attributes = ['com', 'min_periods', 'adjust', 'ignore_na', 'axis']
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index 03d873467dc10..c2c9a688f3f0a 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -930,6 +930,14 @@ class ExcelWriter(object):
.. versionadded:: 0.24.0
+ Attributes
+ ----------
+ None
+
+ Methods
+ -------
+ None
+
Notes
-----
None of the methods and properties are considered public.
@@ -961,14 +969,6 @@ class ExcelWriter(object):
>>> with ExcelWriter('path_to_file.xlsx', mode='a') as writer:
... df.to_excel(writer, sheet_name='Sheet3')
-
- Attributes
- ----------
- None
-
- Methods
- -------
- None
"""
# Defining an ExcelWriter implementation (see abstract methods for more...)
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 4fdcb978b4695..e83220a476f9b 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -78,6 +78,10 @@ class Styler(object):
template : Jinja2 Template
loader : Jinja2 Loader
+ See Also
+ --------
+ pandas.DataFrame.style
+
Notes
-----
Most styling will be done by passing style functions into
@@ -106,10 +110,6 @@ class Styler(object):
* Blank cells include ``blank``
* Data cells include ``data``
-
- See Also
- --------
- pandas.DataFrame.style
"""
loader = PackageLoader("pandas", "io/formats/templates")
env = Environment(
@@ -906,17 +906,17 @@ def background_gradient(self, cmap='PuBu', low=0, high=0, axis=0,
-------
self : Styler
+ Raises
+ ------
+ ValueError
+ If ``text_color_threshold`` is not a value from 0 to 1.
+
Notes
-----
Set ``text_color_threshold`` or tune ``low`` and ``high`` to keep the
text legible by not using the entire range of the color map. The range
of the data is extended by ``low * (x.max() - x.min())`` and ``high *
(x.max() - x.min())`` before normalizing.
-
- Raises
- ------
- ValueError
- If ``text_color_threshold`` is not a value from 0 to 1.
"""
subset = _maybe_numeric_slice(self.data, subset)
subset = _non_reducing_slice(subset)
diff --git a/pandas/io/html.py b/pandas/io/html.py
index c967bdd29df1f..74934740a6957 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -1041,6 +1041,10 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
-------
dfs : list of DataFrames
+ See Also
+ --------
+ pandas.read_csv
+
Notes
-----
Before using this function you should read the :ref:`gotchas about the
@@ -1072,10 +1076,6 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
--------
See the :ref:`read_html documentation in the IO section of the docs
<io.read_html>` for some examples of reading in HTML tables.
-
- See Also
- --------
- pandas.read_csv
"""
_importers()
diff --git a/pandas/io/json/json.py b/pandas/io/json/json.py
index 21c8064ebcac5..4bbccc8339d7c 100644
--- a/pandas/io/json/json.py
+++ b/pandas/io/json/json.py
@@ -344,6 +344,10 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
-------
result : Series or DataFrame, depending on the value of `typ`.
+ See Also
+ --------
+ DataFrame.to_json
+
Notes
-----
Specific to ``orient='table'``, if a :class:`DataFrame` with a literal
@@ -355,10 +359,6 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
limitation is encountered with a :class:`MultiIndex` and any names
beginning with ``'level_'``.
- See Also
- --------
- DataFrame.to_json
-
Examples
--------
diff --git a/pandas/io/json/table_schema.py b/pandas/io/json/table_schema.py
index 2c2ecf75bbe7b..2bd93b19d4225 100644
--- a/pandas/io/json/table_schema.py
+++ b/pandas/io/json/table_schema.py
@@ -201,6 +201,16 @@ def build_table_schema(data, index=True, primary_key=None, version=True):
-------
schema : dict
+ Notes
+ -----
+ See `_as_json_table_type` for conversion types.
+ Timedeltas are converted to ISO8601 duration format with
+ 9 decimal places after the seconds field for nanosecond precision.
+
+ Categoricals are converted to the `any` dtype, and use the `enum` field
+ constraint to list the allowed values. The `ordered` attribute is included
+ in an `ordered` field.
+
Examples
--------
>>> df = pd.DataFrame(
@@ -215,16 +225,6 @@ def build_table_schema(data, index=True, primary_key=None, version=True):
{'name': 'C', 'type': 'datetime'}],
'pandas_version': '0.20.0',
'primaryKey': ['idx']}
-
- Notes
- -----
- See `_as_json_table_type` for conversion types.
- Timedeltas as converted to ISO8601 duration format with
- 9 decimal places after the seconds field for nanosecond precision.
-
- Categoricals are converted to the `any` dtype, and use the `enum` field
- constraint to list the allowed values. The `ordered` attribute is included
- in an `ordered` field.
"""
if index is True:
data = set_default_names(data)
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index e65e3dff1936a..e54d29148c6d0 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -218,14 +218,14 @@ def read_sql_table(table_name, con, schema=None, index_col=None,
-------
DataFrame
- Notes
- -----
- Any datetime values with time zone information will be converted to UTC.
-
See Also
--------
read_sql_query : Read SQL query into a DataFrame.
read_sql
+
+ Notes
+ -----
+ Any datetime values with time zone information will be converted to UTC.
"""
con = _engine_builder(con)
@@ -296,15 +296,15 @@ def read_sql_query(sql, con, index_col=None, coerce_float=True, params=None,
-------
DataFrame
- Notes
- -----
- Any datetime values with time zone information parsed via the `parse_dates`
- parameter will be converted to UTC.
-
See Also
--------
read_sql_table : Read SQL database table into a DataFrame.
read_sql
+
+ Notes
+ -----
+ Any datetime values with time zone information parsed via the `parse_dates`
+ parameter will be converted to UTC.
"""
pandas_sql = pandasSQL_builder(con)
return pandas_sql.read_query(
| - [ ] refs #24125
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Original number of errors: 286
Remaining errors: 24 | https://api.github.com/repos/pandas-dev/pandas/pulls/24126 | 2018-12-06T07:40:51Z | 2018-12-07T12:53:58Z | 2018-12-07T12:53:58Z | 2018-12-07T12:54:11Z |
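The hunks in this PR consistently move `See Also` ahead of `Notes`, and `Raises` ahead of `Notes`, matching the canonical numpydoc section order. A minimal sketch of a checker for that ordering — the section list and helper below are illustrative, not pandas tooling:

```python
# Canonical numpydoc section order (per the convention these docstring moves
# follow); this list and helper are illustrative, not pandas code.
SECTION_ORDER = [
    "Parameters", "Returns", "Yields", "Raises",
    "See Also", "Notes", "References", "Examples",
]


def sections_in_order(docstring):
    """Return True if the numpydoc sections present appear in canonical order."""
    lines = docstring.splitlines()
    found = []
    for i, line in enumerate(lines[:-1]):
        title = line.strip()
        underline = lines[i + 1].strip()
        # A section header is a known title underlined by dashes of equal length.
        if title in SECTION_ORDER and underline == "-" * len(title):
            found.append(SECTION_ORDER.index(title))
    return found == sorted(found)
```

With this, a docstring that puts `Notes` before `See Also` fails the check — exactly the misordering the diff above corrects in `style.py`, `html.py`, and `json.py`.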
REF/TST: Add more pytest idiom to scalar/test_nat | diff --git a/pandas/tests/scalar/test_nat.py b/pandas/tests/scalar/test_nat.py
index d2a31de5c0938..abf95b276cda1 100644
--- a/pandas/tests/scalar/test_nat.py
+++ b/pandas/tests/scalar/test_nat.py
@@ -5,24 +5,24 @@
import pytz
from pandas._libs.tslibs import iNaT
+import pandas.compat as compat
from pandas import (
DatetimeIndex, Index, NaT, Period, Series, Timedelta, TimedeltaIndex,
- Timestamp, isna)
+ Timestamp)
from pandas.core.arrays import PeriodArray
from pandas.util import testing as tm
-@pytest.mark.parametrize('nat, idx', [(Timestamp('NaT'), DatetimeIndex),
- (Timedelta('NaT'), TimedeltaIndex),
- (Period('NaT', freq='M'), PeriodArray)])
+@pytest.mark.parametrize("nat,idx", [(Timestamp("NaT"), DatetimeIndex),
+ (Timedelta("NaT"), TimedeltaIndex),
+ (Period("NaT", freq="M"), PeriodArray)])
def test_nat_fields(nat, idx):
for field in idx._field_ops:
-
# weekday is a property of DTI, but a method
# on NaT/Timestamp for compat with datetime
- if field == 'weekday':
+ if field == "weekday":
continue
result = getattr(NaT, field)
@@ -41,289 +41,301 @@ def test_nat_fields(nat, idx):
def test_nat_vector_field_access():
- idx = DatetimeIndex(['1/1/2000', None, None, '1/4/2000'])
+ idx = DatetimeIndex(["1/1/2000", None, None, "1/4/2000"])
for field in DatetimeIndex._field_ops:
# weekday is a property of DTI, but a method
# on NaT/Timestamp for compat with datetime
- if field == 'weekday':
+ if field == "weekday":
continue
result = getattr(idx, field)
expected = Index([getattr(x, field) for x in idx])
tm.assert_index_equal(result, expected)
- s = Series(idx)
+ ser = Series(idx)
for field in DatetimeIndex._field_ops:
-
# weekday is a property of DTI, but a method
# on NaT/Timestamp for compat with datetime
- if field == 'weekday':
+ if field == "weekday":
continue
- result = getattr(s.dt, field)
+ result = getattr(ser.dt, field)
expected = [getattr(x, field) for x in idx]
tm.assert_series_equal(result, Series(expected))
for field in DatetimeIndex._bool_ops:
- result = getattr(s.dt, field)
+ result = getattr(ser.dt, field)
expected = [getattr(x, field) for x in idx]
tm.assert_series_equal(result, Series(expected))
-@pytest.mark.parametrize('klass', [Timestamp, Timedelta, Period])
-def test_identity(klass):
- assert klass(None) is NaT
-
- result = klass(np.nan)
- assert result is NaT
-
- result = klass(None)
- assert result is NaT
-
- result = klass(iNaT)
- assert result is NaT
-
- result = klass(np.nan)
- assert result is NaT
-
- result = klass(float('nan'))
- assert result is NaT
-
- result = klass(NaT)
- assert result is NaT
-
- result = klass('NaT')
- assert result is NaT
-
- assert isna(klass('nat'))
-
-
-@pytest.mark.parametrize('klass', [Timestamp, Timedelta, Period])
-def test_equality(klass):
-
- # nat
- if klass is not Period:
- klass('').value == iNaT
- klass('nat').value == iNaT
- klass('NAT').value == iNaT
- klass(None).value == iNaT
- klass(np.nan).value == iNaT
- assert isna(klass('nat'))
-
-
-@pytest.mark.parametrize('klass', [Timestamp, Timedelta])
-def test_round_nat(klass):
- # GH14940
- ts = klass('nat')
- for method in ["round", "floor", "ceil"]:
- round_method = getattr(ts, method)
- for freq in ["s", "5s", "min", "5min", "h", "5h"]:
- assert round_method(freq) is ts
-
-
-def test_NaT_methods():
- # GH 9513
- # GH 17329 for `timestamp`
- raise_methods = ['astimezone', 'combine', 'ctime', 'dst',
- 'fromordinal', 'fromtimestamp', 'isocalendar',
- 'strftime', 'strptime', 'time', 'timestamp',
- 'timetuple', 'timetz', 'toordinal', 'tzname',
- 'utcfromtimestamp', 'utcnow', 'utcoffset',
- 'utctimetuple', 'timestamp']
- nat_methods = ['date', 'now', 'replace', 'to_datetime', 'today',
- 'tz_convert', 'tz_localize']
- nan_methods = ['weekday', 'isoweekday']
+@pytest.mark.parametrize("klass", [Timestamp, Timedelta, Period])
+@pytest.mark.parametrize("value", [None, np.nan, iNaT, float("nan"),
+ NaT, "NaT", "nat"])
+def test_identity(klass, value):
+ assert klass(value) is NaT
+
+
+@pytest.mark.parametrize("klass", [Timestamp, Timedelta, Period])
+@pytest.mark.parametrize("value", ["", "nat", "NAT", None, np.nan])
+def test_equality(klass, value):
+ if klass is Period and value == "":
+ pytest.skip("Period cannot parse empty string")
+
+ assert klass(value).value == iNaT
+
+
+@pytest.mark.parametrize("klass", [Timestamp, Timedelta])
+@pytest.mark.parametrize("method", ["round", "floor", "ceil"])
+@pytest.mark.parametrize("freq", ["s", "5s", "min", "5min", "h", "5h"])
+def test_round_nat(klass, method, freq):
+ # see gh-14940
+ ts = klass("nat")
+
+ round_method = getattr(ts, method)
+ assert round_method(freq) is ts
+
+
+@pytest.mark.parametrize("method", [
+ "astimezone", "combine", "ctime", "dst", "fromordinal",
+ "fromtimestamp", "isocalendar", "strftime", "strptime",
+ "time", "timestamp", "timetuple", "timetz", "toordinal",
+ "tzname", "utcfromtimestamp", "utcnow", "utcoffset",
+ "utctimetuple", "timestamp"
+])
+def test_nat_methods_raise(method):
+ # see gh-9513, gh-17329
+ msg = "NaTType does not support {method}".format(method=method)
+
+ with pytest.raises(ValueError, match=msg):
+ getattr(NaT, method)()
+
+
+@pytest.mark.parametrize("method", [
+ "weekday", "isoweekday"
+])
+def test_nat_methods_nan(method):
+ # see gh-9513, gh-17329
+ assert np.isnan(getattr(NaT, method)())
+
+
+@pytest.mark.parametrize("method", [
+ "date", "now", "replace", "today",
+ "tz_convert", "tz_localize"
+])
+def test_nat_methods_nat(method):
+ # see gh-8254, gh-9513, gh-17329
+ assert getattr(NaT, method)() is NaT
+
+
+@pytest.mark.parametrize("get_nat", [
+ lambda x: NaT,
+ lambda x: Timedelta(x),
+ lambda x: Timestamp(x)
+])
+def test_nat_iso_format(get_nat):
+ # see gh-12300
+ assert get_nat("NaT").isoformat() == "NaT"
+
+
+@pytest.mark.parametrize("klass,expected", [
+ (Timestamp, ["freqstr", "normalize", "to_julian_date", "to_period", "tz"]),
+ (Timedelta, ["components", "delta", "is_populated", "to_pytimedelta",
+ "to_timedelta64", "view"])
+])
+def test_missing_public_nat_methods(klass, expected):
+ # see gh-17327
+ #
+ # NaT should have *most* of the Timestamp and Timedelta methods.
+ # Here, we check which public methods NaT does not have. We
+ # ignore any missing private methods.
+ nat_names = dir(NaT)
+ klass_names = dir(klass)
- for method in raise_methods:
- if hasattr(NaT, method):
- with pytest.raises(ValueError):
- getattr(NaT, method)()
+ missing = [x for x in klass_names if x not in nat_names and
+ not x.startswith("_")]
+ missing.sort()
- for method in nan_methods:
- if hasattr(NaT, method):
- assert np.isnan(getattr(NaT, method)())
+ assert missing == expected
- for method in nat_methods:
- if hasattr(NaT, method):
- # see gh-8254
- exp_warning = None
- if method == 'to_datetime':
- exp_warning = FutureWarning
- with tm.assert_produces_warning(
- exp_warning, check_stacklevel=False):
- assert getattr(NaT, method)() is NaT
- # GH 12300
- assert NaT.isoformat() == 'NaT'
+def _get_overlap_public_nat_methods(klass, as_tuple=False):
+ """
+ Get overlapping public methods between NaT and another class.
+ Parameters
+ ----------
+ klass : type
+ The class to compare with NaT
+ as_tuple : bool, default False
+ Whether to return a list of tuples of the form (klass, method).
-def test_NaT_docstrings():
- # GH#17327
+ Returns
+ -------
+ overlap : list
+ """
nat_names = dir(NaT)
-
- # NaT should have *most* of the Timestamp methods, with matching
- # docstrings. The attributes that are not expected to be present in NaT
- # are private methods plus `ts_expected` below.
- ts_names = dir(Timestamp)
- ts_missing = [x for x in ts_names if x not in nat_names and
- not x.startswith('_')]
- ts_missing.sort()
- ts_expected = ['freqstr', 'normalize',
- 'to_julian_date',
- 'to_period', 'tz']
- assert ts_missing == ts_expected
-
- ts_overlap = [x for x in nat_names if x in ts_names and
- not x.startswith('_') and
- callable(getattr(Timestamp, x))]
- for name in ts_overlap:
- tsdoc = getattr(Timestamp, name).__doc__
- natdoc = getattr(NaT, name).__doc__
- assert tsdoc == natdoc
-
- # NaT should have *most* of the Timedelta methods, with matching
- # docstrings. The attributes that are not expected to be present in NaT
- # are private methods plus `td_expected` below.
- # For methods that are both Timestamp and Timedelta methods, the
- # Timestamp docstring takes priority.
- td_names = dir(Timedelta)
- td_missing = [x for x in td_names if x not in nat_names and
- not x.startswith('_')]
- td_missing.sort()
- td_expected = ['components', 'delta', 'is_populated',
- 'to_pytimedelta', 'to_timedelta64', 'view']
- assert td_missing == td_expected
-
- td_overlap = [x for x in nat_names if x in td_names and
- x not in ts_names and # Timestamp __doc__ takes priority
- not x.startswith('_') and
- callable(getattr(Timedelta, x))]
- assert td_overlap == ['total_seconds']
- for name in td_overlap:
- tddoc = getattr(Timedelta, name).__doc__
- natdoc = getattr(NaT, name).__doc__
- assert tddoc == natdoc
-
-
-@pytest.mark.parametrize('klass', [Timestamp, Timedelta])
-def test_isoformat(klass):
-
- result = klass('NaT').isoformat()
- expected = 'NaT'
- assert result == expected
-
-
-def test_nat_arithmetic():
- # GH 6873
- i = 2
- f = 1.5
-
- for (left, right) in [(NaT, i), (NaT, f), (NaT, np.nan)]:
- assert left / right is NaT
- assert left * right is NaT
- assert right * left is NaT
- with pytest.raises(TypeError):
- right / left
-
- # Timestamp / datetime
- t = Timestamp('2014-01-01')
- dt = datetime(2014, 1, 1)
- for (left, right) in [(NaT, NaT), (NaT, t), (NaT, dt)]:
- # NaT __add__ or __sub__ Timestamp-like (or inverse) returns NaT
- assert right + left is NaT
- assert left + right is NaT
- assert left - right is NaT
- assert right - left is NaT
-
- # timedelta-like
- # offsets are tested in test_offsets.py
-
- delta = timedelta(3600)
- td = Timedelta('5s')
-
- for (left, right) in [(NaT, delta), (NaT, td)]:
- # NaT + timedelta-like returns NaT
- assert right + left is NaT
- assert left + right is NaT
- assert right - left is NaT
- assert left - right is NaT
- assert np.isnan(left / right)
- assert np.isnan(right / left)
-
- # GH 11718
- t_utc = Timestamp('2014-01-01', tz='UTC')
- t_tz = Timestamp('2014-01-01', tz='US/Eastern')
- dt_tz = pytz.timezone('Asia/Tokyo').localize(dt)
-
- for (left, right) in [(NaT, t_utc), (NaT, t_tz),
- (NaT, dt_tz)]:
- # NaT __add__ or __sub__ Timestamp-like (or inverse) returns NaT
- assert right + left is NaT
- assert left + right is NaT
- assert left - right is NaT
- assert right - left is NaT
-
- # int addition / subtraction
- for (left, right) in [(NaT, 2), (NaT, 0), (NaT, -3)]:
- assert right + left is NaT
- assert left + right is NaT
- assert left - right is NaT
- assert right - left is NaT
-
-
-def test_nat_rfloordiv_timedelta():
- # GH#18846
+ klass_names = dir(klass)
+
+ overlap = [x for x in nat_names if x in klass_names and
+ not x.startswith("_") and
+ callable(getattr(klass, x))]
+
+ # Timestamp takes precedence over Timedelta in terms of overlap.
+ if klass is Timedelta:
+ ts_names = dir(Timestamp)
+ overlap = [x for x in overlap if x not in ts_names]
+
+ if as_tuple:
+ overlap = [(klass, method) for method in overlap]
+
+ overlap.sort()
+ return overlap
+
+
+@pytest.mark.parametrize("klass,expected", [
+ (Timestamp, ["astimezone", "ceil", "combine", "ctime", "date", "day_name",
+ "dst", "floor", "fromisoformat", "fromordinal",
+ "fromtimestamp", "isocalendar", "isoformat", "isoweekday",
+ "month_name", "now", "replace", "round", "strftime",
+ "strptime", "time", "timestamp", "timetuple", "timetz",
+ "to_datetime64", "to_pydatetime", "today", "toordinal",
+ "tz_convert", "tz_localize", "tzname", "utcfromtimestamp",
+ "utcnow", "utcoffset", "utctimetuple", "weekday"]),
+ (Timedelta, ["total_seconds"])
+])
+def test_overlap_public_nat_methods(klass, expected):
+ # see gh-17327
+ #
+ # NaT should have *most* of the Timestamp and Timedelta methods.
+ # When Timestamp, Timedelta, and NaT methods overlap, the overlap
+ # is considered to be with Timestamp and NaT, not Timedelta.
+
+ # "fromisoformat" was introduced in 3.7
+ if klass is Timestamp and not compat.PY37:
+ expected.remove("fromisoformat")
+
+ assert _get_overlap_public_nat_methods(klass) == expected
+
+
+@pytest.mark.parametrize("compare", (
+ _get_overlap_public_nat_methods(Timestamp, True) +
+ _get_overlap_public_nat_methods(Timedelta, True))
+)
+def test_nat_doc_strings(compare):
+ # see gh-17327
+ #
+ # The docstrings for overlapping methods should match.
+ klass, method = compare
+ klass_doc = getattr(klass, method).__doc__
+
+ nat_doc = getattr(NaT, method).__doc__
+ assert klass_doc == nat_doc
+
+
+_ops = {
+ "left_plus_right": lambda a, b: a + b,
+ "right_plus_left": lambda a, b: b + a,
+ "left_minus_right": lambda a, b: a - b,
+ "right_minus_left": lambda a, b: b - a,
+ "left_times_right": lambda a, b: a * b,
+ "right_times_left": lambda a, b: b * a,
+ "left_div_right": lambda a, b: a / b,
+ "right_div_left": lambda a, b: b / a,
+}
+
+
+@pytest.mark.parametrize("op_name", list(_ops.keys()))
+@pytest.mark.parametrize("value,val_type", [
+ (2, "scalar"),
+ (1.5, "scalar"),
+ (np.nan, "scalar"),
+ (timedelta(3600), "timedelta"),
+ (Timedelta("5s"), "timedelta"),
+ (datetime(2014, 1, 1), "timestamp"),
+ (Timestamp("2014-01-01"), "timestamp"),
+ (Timestamp("2014-01-01", tz="UTC"), "timestamp"),
+ (Timestamp("2014-01-01", tz="US/Eastern"), "timestamp"),
+ (pytz.timezone("Asia/Tokyo").localize(datetime(2014, 1, 1)), "timestamp"),
+])
+def test_nat_arithmetic_scalar(op_name, value, val_type):
+ # see gh-6873
+ invalid_ops = {
+ "scalar": {"right_div_left"},
+ "timedelta": {"left_times_right", "right_times_left"},
+ "timestamp": {"left_times_right", "right_times_left",
+ "left_div_right", "right_div_left"}
+ }
+
+ op = _ops[op_name]
+
+ if op_name in invalid_ops.get(val_type, set()):
+ if (val_type == "timedelta" and "times" in op_name and
+ isinstance(value, Timedelta)):
+ msg = "Cannot multiply"
+ else:
+ msg = "unsupported operand type"
+
+ with pytest.raises(TypeError, match=msg):
+ op(NaT, value)
+ else:
+ if val_type == "timedelta" and "div" in op_name:
+ expected = np.nan
+ else:
+ expected = NaT
+
+ assert op(NaT, value) is expected
+
+
+@pytest.mark.parametrize("val,expected", [
+ (np.nan, NaT),
+ (NaT, np.nan),
+ (np.timedelta64("NaT"), np.nan)
+])
+def test_nat_rfloordiv_timedelta(val, expected):
+ # see gh-18846
+ #
# See also test_timedelta.TestTimedeltaArithmetic.test_floordiv
td = Timedelta(hours=3, minutes=4)
-
- assert td // np.nan is NaT
- assert np.isnan(td // NaT)
- assert np.isnan(td // np.timedelta64('NaT'))
-
-
-def test_nat_arithmetic_index():
- # GH 11718
-
- dti = DatetimeIndex(['2011-01-01', '2011-01-02'], name='x')
- exp = DatetimeIndex([NaT, NaT], name='x')
- tm.assert_index_equal(dti + NaT, exp)
- tm.assert_index_equal(NaT + dti, exp)
-
- dti_tz = DatetimeIndex(['2011-01-01', '2011-01-02'],
- tz='US/Eastern', name='x')
- exp = DatetimeIndex([NaT, NaT], name='x', tz='US/Eastern')
- tm.assert_index_equal(dti_tz + NaT, exp)
- tm.assert_index_equal(NaT + dti_tz, exp)
-
- exp = TimedeltaIndex([NaT, NaT], name='x')
- for (left, right) in [(NaT, dti), (NaT, dti_tz)]:
- tm.assert_index_equal(left - right, exp)
- tm.assert_index_equal(right - left, exp)
-
- # timedelta # GH#19124
- tdi = TimedeltaIndex(['1 day', '2 day'], name='x')
- tdi_nat = TimedeltaIndex([NaT, NaT], name='x')
-
- tm.assert_index_equal(tdi + NaT, tdi_nat)
- tm.assert_index_equal(NaT + tdi, tdi_nat)
- tm.assert_index_equal(tdi - NaT, tdi_nat)
- tm.assert_index_equal(NaT - tdi, tdi_nat)
-
-
-@pytest.mark.parametrize('box', [TimedeltaIndex, Series])
-def test_nat_arithmetic_td64_vector(box):
- # GH#19124
- vec = box(['1 day', '2 day'], dtype='timedelta64[ns]')
- box_nat = box([NaT, NaT], dtype='timedelta64[ns]')
-
- tm.assert_equal(vec + NaT, box_nat)
- tm.assert_equal(NaT + vec, box_nat)
- tm.assert_equal(vec - NaT, box_nat)
- tm.assert_equal(NaT - vec, box_nat)
+ assert td // val is expected
+
+
+@pytest.mark.parametrize("op_name", [
+ "left_plus_right", "right_plus_left",
+ "left_minus_right", "right_minus_left"
+])
+@pytest.mark.parametrize("value", [
+ DatetimeIndex(["2011-01-01", "2011-01-02"], name="x"),
+ DatetimeIndex(["2011-01-01", "2011-01-02"], name="x"),
+ TimedeltaIndex(["1 day", "2 day"], name="x"),
+])
+def test_nat_arithmetic_index(op_name, value):
+ # see gh-11718
+ exp_name = "x"
+ exp_data = [NaT] * 2
+
+ if isinstance(value, DatetimeIndex) and "plus" in op_name:
+ expected = DatetimeIndex(exp_data, name=exp_name, tz=value.tz)
+ else:
+ expected = TimedeltaIndex(exp_data, name=exp_name)
+
+ tm.assert_index_equal(_ops[op_name](NaT, value), expected)
+
+
+@pytest.mark.parametrize("op_name", [
+ "left_plus_right", "right_plus_left",
+ "left_minus_right", "right_minus_left"
+])
+@pytest.mark.parametrize("box", [TimedeltaIndex, Series])
+def test_nat_arithmetic_td64_vector(op_name, box):
+ # see gh-19124
+ vec = box(["1 day", "2 day"], dtype="timedelta64[ns]")
+ box_nat = box([NaT, NaT], dtype="timedelta64[ns]")
+ tm.assert_equal(_ops[op_name](vec, NaT), box_nat)
def test_nat_pinned_docstrings():
- # GH17327
+ # see gh-17327
assert NaT.ctime.__doc__ == datetime.ctime.__doc__
| https://api.github.com/repos/pandas-dev/pandas/pulls/24120 | 2018-12-06T00:47:48Z | 2018-12-06T12:17:56Z | 2018-12-06T12:17:56Z | 2018-12-07T02:43:32Z | |
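The refactor above replaces hand-rolled loops over methods and operands with `pytest.mark.parametrize`, driven in part by a dict of named lambdas (`_ops`). The table-of-operations pattern it relies on can be sketched on its own like this (names are illustrative):

```python
# A table of named binary operations, mirroring the `_ops` dict in the
# refactored test_nat.py: each readable id maps to a callable, so the ids
# double as pytest parameter names.
ops = {
    "left_plus_right": lambda a, b: a + b,
    "right_plus_left": lambda a, b: b + a,
    "left_minus_right": lambda a, b: a - b,
    "right_minus_left": lambda a, b: b - a,
}


def run_cases(table, a, b):
    """Apply every named op to (a, b), returning {name: result}."""
    return {name: func(a, b) for name, func in table.items()}


results = run_cases(ops, 5, 2)
```

pytest then consumes `list(_ops.keys())` as the parametrize values, so each named op becomes a separately reported test case instead of one opaque loop body.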
PERF: speed up PeriodArray creation by exposing dayfirst/yearfirst params | diff --git a/asv_bench/benchmarks/period.py b/asv_bench/benchmarks/period.py
index 8f341c8b415fe..6d2c7156a0a3d 100644
--- a/asv_bench/benchmarks/period.py
+++ b/asv_bench/benchmarks/period.py
@@ -1,5 +1,6 @@
from pandas import (
DataFrame, Period, PeriodIndex, Series, date_range, period_range)
+from pandas.tseries.frequencies import to_offset
class PeriodProperties(object):
@@ -35,25 +36,48 @@ def time_asfreq(self, freq):
self.per.asfreq('A')
+class PeriodConstructor(object):
+ params = [['D'], [True, False]]
+ param_names = ['freq', 'is_offset']
+
+ def setup(self, freq, is_offset):
+ if is_offset:
+ self.freq = to_offset(freq)
+ else:
+ self.freq = freq
+
+ def time_period_constructor(self, freq, is_offset):
+ Period('2012-06-01', freq=freq)
+
+
class PeriodIndexConstructor(object):
- params = ['D']
- param_names = ['freq']
+ params = [['D'], [True, False]]
+ param_names = ['freq', 'is_offset']
- def setup(self, freq):
+ def setup(self, freq, is_offset):
self.rng = date_range('1985', periods=1000)
self.rng2 = date_range('1985', periods=1000).to_pydatetime()
self.ints = list(range(2000, 3000))
-
- def time_from_date_range(self, freq):
+ self.daily_ints = date_range('1/1/2000', periods=1000,
+ freq=freq).strftime('%Y%m%d').map(int)
+ if is_offset:
+ self.freq = to_offset(freq)
+ else:
+ self.freq = freq
+
+ def time_from_date_range(self, freq, is_offset):
PeriodIndex(self.rng, freq=freq)
- def time_from_pydatetime(self, freq):
+ def time_from_pydatetime(self, freq, is_offset):
PeriodIndex(self.rng2, freq=freq)
- def time_from_ints(self, freq):
+ def time_from_ints(self, freq, is_offset):
PeriodIndex(self.ints, freq=freq)
+ def time_from_ints_daily(self, freq, is_offset):
+ PeriodIndex(self.daily_ints, freq=freq)
+
class DataFramePeriodColumn(object):
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 29ab51c582a97..affef80571fce 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1294,6 +1294,7 @@ Performance Improvements
- Improved performance of :meth:`~DataFrame.where` for Categorical data (:issue:`24077`)
- Improved performance of iterating over a :class:`Series`. Using :meth:`DataFrame.itertuples` now creates iterators
without internally allocating lists of all elements (:issue:`20783`)
+- Improved performance of :class:`Period` constructor, additionally benefitting ``PeriodArray`` and ``PeriodIndex`` creation (:issue:`24084` and :issue:`24118`)
.. _whatsnew_0240.docs:
diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 37f11af81dfd6..3a03018141f5a 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -47,6 +47,20 @@ cdef set _not_datelike_strings = {'a', 'A', 'm', 'M', 'p', 'P', 't', 'T'}
# ----------------------------------------------------------------------
+_get_option = None
+
+
+def get_option(param):
+ """ Defer import of get_option to break an import cycle that caused
+ significant performance degradation in Period construction. See
+ GH#24118 for details
+ """
+ global _get_option
+ if _get_option is None:
+ from pandas.core.config import get_option
+ _get_option = get_option
+ return _get_option(param)
+
def parse_datetime_string(date_string, freq=None, dayfirst=False,
yearfirst=False, **kwargs):
@@ -117,7 +131,6 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
freq = freq.rule_code
if dayfirst is None or yearfirst is None:
- from pandas.core.config import get_option
if dayfirst is None:
dayfirst = get_option("display.date_dayfirst")
if yearfirst is None:
| Since much of the time spent creating a `PeriodArray` from `int`s actually goes to importing/querying `get_option('display.date_dayfirst')` and its `yearfirst` cousin, this PR exposes those parameters in `Period.__new__()` so they can be queried once per array creation.
This yields a ~15x speedup:
```
asv compare upstream/master HEAD -s --sort ratio
Benchmarks that have improved:
before after ratio
[210538e4] [32288230]
<period_dayfirst~1> <period_dayfirst>
- 102±0.9ms 7.08±0.2ms 0.07 period.PeriodIndexConstructor.time_from_ints('D', True)
- 101±2ms 6.56±0.3ms 0.06 period.PeriodIndexConstructor.time_from_ints('D', False)
```
- [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24118 | 2018-12-05T22:45:16Z | 2018-12-29T14:23:06Z | 2018-12-29T14:23:06Z | 2018-12-29T14:23:20Z |
DOC: Add pandas video series to tutorials.rst | diff --git a/doc/source/tutorials.rst b/doc/source/tutorials.rst
index 0ea0e04f9a1b3..c39227e67ffe8 100644
--- a/doc/source/tutorials.rst
+++ b/doc/source/tutorials.rst
@@ -86,6 +86,14 @@ Video Tutorials
* `Pandas: .head() to .tail() <https://www.youtube.com/watch?v=7vuO9QXDN50>`_
(2016) (1:26)
`GitHub repo <https://github.com/TomAugspurger/pydata-chi-h2t>`__
+* `Data analysis in Python with pandas <https://www.youtube.com/playlist?list=PL5-da3qGB5ICCsgW1MxlZ0Hq8LL5U3u9y>`_
+ (2016-2018)
+ `GitHub repo <https://github.com/justmarkham/pandas-videos>`_ and
+ `Jupyter Notebook <http://nbviewer.jupyter.org/github/justmarkham/pandas-videos/blob/master/pandas.ipynb>`_
+* `Best practices with pandas <https://www.youtube.com/playlist?list=PL5-da3qGB5IBITZj_dYSFqnd_15JgqwA6>`_
+ (2018)
+ `GitHub repo <https://github.com/justmarkham/pycon-2018-tutorial>`_ and
+ `Jupyter Notebook <http://nbviewer.jupyter.org/github/justmarkham/pycon-2018-tutorial/blob/master/tutorial.ipynb>`_
Various Tutorials
| This is a 6-hour video tutorial series that is freely available on YouTube. If you would like to quickly preview the contents of the series, you can see all of the code in this [Jupyter Notebook](http://nbviewer.jupyter.org/github/justmarkham/pandas-videos/blob/master/pandas.ipynb).
Collectively, these videos have nearly 20,000 likes across 1 million views. If you would like to read some testimonials about the quality of the instruction, [please see here](https://www.dataschool.io/testimonials/#aboutmypandasvideoshttpwwwdataschoolioeasierdataanalysiswithpandas).
Please let me know if you have any questions or would like me to make any changes. Thanks for your consideration! | https://api.github.com/repos/pandas-dev/pandas/pulls/24117 | 2018-12-05T20:50:30Z | 2018-12-17T23:47:09Z | 2018-12-17T23:47:09Z | 2018-12-21T14:00:59Z |
CLN: Follow-up to #24100 | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 33f71bcb2fef2..dcc6a5c836485 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -854,173 +854,159 @@ def _time_shift(self, periods, freq=None):
return self._generate_range(start=start, end=end, periods=None,
freq=self.freq)
- @classmethod
- def _add_datetimelike_methods(cls):
- """
- add in the datetimelike methods (as we may have to override the
- superclass)
- """
+ def __add__(self, other):
+ other = lib.item_from_zerodim(other)
+ if isinstance(other, (ABCSeries, ABCDataFrame)):
+ return NotImplemented
- def __add__(self, other):
- other = lib.item_from_zerodim(other)
- if isinstance(other, (ABCSeries, ABCDataFrame)):
- return NotImplemented
-
- # scalar others
- elif other is NaT:
- result = self._add_nat()
- elif isinstance(other, (Tick, timedelta, np.timedelta64)):
- result = self._add_delta(other)
- elif isinstance(other, DateOffset):
- # specifically _not_ a Tick
- result = self._add_offset(other)
- elif isinstance(other, (datetime, np.datetime64)):
- result = self._add_datetimelike_scalar(other)
- elif lib.is_integer(other):
- # This check must come after the check for np.timedelta64
- # as is_integer returns True for these
- maybe_integer_op_deprecated(self)
- result = self._time_shift(other)
-
- # array-like others
- elif is_timedelta64_dtype(other):
- # TimedeltaIndex, ndarray[timedelta64]
- result = self._add_delta(other)
- elif is_offsetlike(other):
- # Array/Index of DateOffset objects
- result = self._addsub_offset_array(other, operator.add)
- elif is_datetime64_dtype(other) or is_datetime64tz_dtype(other):
- # DatetimeIndex, ndarray[datetime64]
- return self._add_datetime_arraylike(other)
- elif is_integer_dtype(other):
- maybe_integer_op_deprecated(self)
- result = self._addsub_int_array(other, operator.add)
- elif is_float_dtype(other):
- # Explicitly catch invalid dtypes
- raise TypeError("cannot add {dtype}-dtype to {cls}"
- .format(dtype=other.dtype,
- cls=type(self).__name__))
- elif is_period_dtype(other):
- # if self is a TimedeltaArray and other is a PeriodArray with
- # a timedelta-like (i.e. Tick) freq, this operation is valid.
- # Defer to the PeriodArray implementation.
- # In remaining cases, this will end up raising TypeError.
- return NotImplemented
- elif is_extension_array_dtype(other):
- # Categorical op will raise; defer explicitly
- return NotImplemented
- else: # pragma: no cover
- return NotImplemented
-
- if is_timedelta64_dtype(result) and isinstance(result, np.ndarray):
- from pandas.core.arrays import TimedeltaArrayMixin
- # TODO: infer freq?
- return TimedeltaArrayMixin(result)
- return result
+ # scalar others
+ elif other is NaT:
+ result = self._add_nat()
+ elif isinstance(other, (Tick, timedelta, np.timedelta64)):
+ result = self._add_delta(other)
+ elif isinstance(other, DateOffset):
+ # specifically _not_ a Tick
+ result = self._add_offset(other)
+ elif isinstance(other, (datetime, np.datetime64)):
+ result = self._add_datetimelike_scalar(other)
+ elif lib.is_integer(other):
+ # This check must come after the check for np.timedelta64
+ # as is_integer returns True for these
+ maybe_integer_op_deprecated(self)
+ result = self._time_shift(other)
+
+ # array-like others
+ elif is_timedelta64_dtype(other):
+ # TimedeltaIndex, ndarray[timedelta64]
+ result = self._add_delta(other)
+ elif is_offsetlike(other):
+ # Array/Index of DateOffset objects
+ result = self._addsub_offset_array(other, operator.add)
+ elif is_datetime64_dtype(other) or is_datetime64tz_dtype(other):
+ # DatetimeIndex, ndarray[datetime64]
+ return self._add_datetime_arraylike(other)
+ elif is_integer_dtype(other):
+ maybe_integer_op_deprecated(self)
+ result = self._addsub_int_array(other, operator.add)
+ elif is_float_dtype(other):
+ # Explicitly catch invalid dtypes
+ raise TypeError("cannot add {dtype}-dtype to {cls}"
+ .format(dtype=other.dtype,
+ cls=type(self).__name__))
+ elif is_period_dtype(other):
+ # if self is a TimedeltaArray and other is a PeriodArray with
+ # a timedelta-like (i.e. Tick) freq, this operation is valid.
+ # Defer to the PeriodArray implementation.
+ # In remaining cases, this will end up raising TypeError.
+ return NotImplemented
+ elif is_extension_array_dtype(other):
+ # Categorical op will raise; defer explicitly
+ return NotImplemented
+ else: # pragma: no cover
+ return NotImplemented
- cls.__add__ = __add__
-
- def __radd__(self, other):
- # alias for __add__
- return self.__add__(other)
- cls.__radd__ = __radd__
-
- def __sub__(self, other):
- other = lib.item_from_zerodim(other)
- if isinstance(other, (ABCSeries, ABCDataFrame)):
- return NotImplemented
-
- # scalar others
- elif other is NaT:
- result = self._sub_nat()
- elif isinstance(other, (Tick, timedelta, np.timedelta64)):
- result = self._add_delta(-other)
- elif isinstance(other, DateOffset):
- # specifically _not_ a Tick
- result = self._add_offset(-other)
- elif isinstance(other, (datetime, np.datetime64)):
- result = self._sub_datetimelike_scalar(other)
- elif lib.is_integer(other):
- # This check must come after the check for np.timedelta64
- # as is_integer returns True for these
- maybe_integer_op_deprecated(self)
- result = self._time_shift(-other)
-
- elif isinstance(other, Period):
- result = self._sub_period(other)
-
- # array-like others
- elif is_timedelta64_dtype(other):
- # TimedeltaIndex, ndarray[timedelta64]
- result = self._add_delta(-other)
- elif is_offsetlike(other):
- # Array/Index of DateOffset objects
- result = self._addsub_offset_array(other, operator.sub)
- elif is_datetime64_dtype(other) or is_datetime64tz_dtype(other):
- # DatetimeIndex, ndarray[datetime64]
- result = self._sub_datetime_arraylike(other)
- elif is_period_dtype(other):
- # PeriodIndex
- result = self._sub_period_array(other)
- elif is_integer_dtype(other):
- maybe_integer_op_deprecated(self)
- result = self._addsub_int_array(other, operator.sub)
- elif isinstance(other, ABCIndexClass):
- raise TypeError("cannot subtract {cls} and {typ}"
- .format(cls=type(self).__name__,
- typ=type(other).__name__))
- elif is_float_dtype(other):
- # Explicitly catch invalid dtypes
- raise TypeError("cannot subtract {dtype}-dtype from {cls}"
- .format(dtype=other.dtype,
- cls=type(self).__name__))
- elif is_extension_array_dtype(other):
- # Categorical op will raise; defer explicitly
- return NotImplemented
- else: # pragma: no cover
- return NotImplemented
-
- if is_timedelta64_dtype(result) and isinstance(result, np.ndarray):
- from pandas.core.arrays import TimedeltaArrayMixin
- # TODO: infer freq?
- return TimedeltaArrayMixin(result)
- return result
+ if is_timedelta64_dtype(result) and isinstance(result, np.ndarray):
+ from pandas.core.arrays import TimedeltaArrayMixin
+ # TODO: infer freq?
+ return TimedeltaArrayMixin(result)
+ return result
+
+ def __radd__(self, other):
+ # alias for __add__
+ return self.__add__(other)
+
+ def __sub__(self, other):
+ other = lib.item_from_zerodim(other)
+ if isinstance(other, (ABCSeries, ABCDataFrame)):
+ return NotImplemented
+
+ # scalar others
+ elif other is NaT:
+ result = self._sub_nat()
+ elif isinstance(other, (Tick, timedelta, np.timedelta64)):
+ result = self._add_delta(-other)
+ elif isinstance(other, DateOffset):
+ # specifically _not_ a Tick
+ result = self._add_offset(-other)
+ elif isinstance(other, (datetime, np.datetime64)):
+ result = self._sub_datetimelike_scalar(other)
+ elif lib.is_integer(other):
+ # This check must come after the check for np.timedelta64
+ # as is_integer returns True for these
+ maybe_integer_op_deprecated(self)
+ result = self._time_shift(-other)
+
+ elif isinstance(other, Period):
+ result = self._sub_period(other)
+
+ # array-like others
+ elif is_timedelta64_dtype(other):
+ # TimedeltaIndex, ndarray[timedelta64]
+ result = self._add_delta(-other)
+ elif is_offsetlike(other):
+ # Array/Index of DateOffset objects
+ result = self._addsub_offset_array(other, operator.sub)
+ elif is_datetime64_dtype(other) or is_datetime64tz_dtype(other):
+ # DatetimeIndex, ndarray[datetime64]
+ result = self._sub_datetime_arraylike(other)
+ elif is_period_dtype(other):
+ # PeriodIndex
+ result = self._sub_period_array(other)
+ elif is_integer_dtype(other):
+ maybe_integer_op_deprecated(self)
+ result = self._addsub_int_array(other, operator.sub)
+ elif isinstance(other, ABCIndexClass):
+ raise TypeError("cannot subtract {cls} and {typ}"
+ .format(cls=type(self).__name__,
+ typ=type(other).__name__))
+ elif is_float_dtype(other):
+ # Explicitly catch invalid dtypes
+ raise TypeError("cannot subtract {dtype}-dtype from {cls}"
+ .format(dtype=other.dtype,
+ cls=type(self).__name__))
+ elif is_extension_array_dtype(other):
+ # Categorical op will raise; defer explicitly
+ return NotImplemented
+ else: # pragma: no cover
+ return NotImplemented
+
+ if is_timedelta64_dtype(result) and isinstance(result, np.ndarray):
+ from pandas.core.arrays import TimedeltaArrayMixin
+ # TODO: infer freq?
+ return TimedeltaArrayMixin(result)
+ return result
+
+ def __rsub__(self, other):
+ if is_datetime64_dtype(other) and is_timedelta64_dtype(self):
+ # ndarray[datetime64] cannot be subtracted from self, so
+ # we need to wrap in DatetimeArray/Index and flip the operation
+ if not isinstance(other, DatetimeLikeArrayMixin):
+ # Avoid down-casting DatetimeIndex
+ from pandas.core.arrays import DatetimeArrayMixin
+ other = DatetimeArrayMixin(other)
+ return other - self
+ elif (is_datetime64_any_dtype(self) and hasattr(other, 'dtype') and
+ not is_datetime64_any_dtype(other)):
+ # GH#19959 datetime - datetime is well-defined as timedelta,
+ # but any other type - datetime is not well-defined.
+ raise TypeError("cannot subtract {cls} from {typ}"
+ .format(cls=type(self).__name__,
+ typ=type(other).__name__))
+ elif is_period_dtype(self) and is_timedelta64_dtype(other):
+ # TODO: Can we simplify/generalize these cases at all?
+ raise TypeError("cannot subtract {cls} from {dtype}"
+ .format(cls=type(self).__name__,
+ dtype=other.dtype))
+ return -(self - other)
+
+ # FIXME: DTA/TDA/PA inplace methods should actually be inplace, GH#24115
+ def __iadd__(self, other):
+ # alias for __add__
+ return self.__add__(other)
- cls.__sub__ = __sub__
-
- def __rsub__(self, other):
- if is_datetime64_dtype(other) and is_timedelta64_dtype(self):
- # ndarray[datetime64] cannot be subtracted from self, so
- # we need to wrap in DatetimeArray/Index and flip the operation
- if not isinstance(other, DatetimeLikeArrayMixin):
- # Avoid down-casting DatetimeIndex
- from pandas.core.arrays import DatetimeArrayMixin
- other = DatetimeArrayMixin(other)
- return other - self
- elif (is_datetime64_any_dtype(self) and hasattr(other, 'dtype') and
- not is_datetime64_any_dtype(other)):
- # GH#19959 datetime - datetime is well-defined as timedelta,
- # but any other type - datetime is not well-defined.
- raise TypeError("cannot subtract {cls} from {typ}"
- .format(cls=type(self).__name__,
- typ=type(other).__name__))
- elif is_period_dtype(self) and is_timedelta64_dtype(other):
- # TODO: Can we simplify/generalize these cases at all?
- raise TypeError("cannot subtract {cls} from {dtype}"
- .format(cls=type(self).__name__,
- dtype=other.dtype))
- return -(self - other)
- cls.__rsub__ = __rsub__
-
- def __iadd__(self, other):
- # alias for __add__
- return self.__add__(other)
- cls.__iadd__ = __iadd__
-
- def __isub__(self, other):
- # alias for __sub__
- return self.__sub__(other)
- cls.__isub__ = __isub__
+ def __isub__(self, other):
+ # alias for __sub__
+ return self.__sub__(other)
# --------------------------------------------------------------
# Comparison Methods
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index f453a9b734d17..119a6360c12f7 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -1438,7 +1438,6 @@ def to_julian_date(self):
DatetimeArrayMixin._add_comparison_ops()
-DatetimeArrayMixin._add_datetimelike_methods()
# -------------------------------------------------------------------
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 4d466ef7281b7..1993440a2853b 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -830,7 +830,6 @@ def _values_for_argsort(self):
PeriodArray._add_comparison_ops()
-PeriodArray._add_datetimelike_methods()
# -------------------------------------------------------------------
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index f803144e0a78f..a393fe21ef6a8 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -734,7 +734,6 @@ def f(x):
TimedeltaArrayMixin._add_comparison_ops()
-TimedeltaArrayMixin._add_datetimelike_methods()
# ---------------------------------------------------------------------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 2c1fa5ef4439e..a46fbcfa1a31f 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -463,19 +463,6 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
NDFrame.__init__(self, mgr, fastpath=True)
- def _init_dict(self, data, index, columns, dtype=None):
- """
- Segregate Series based on type and coerce into matrices.
- Needs to handle a lot of exceptional cases.
- """
- return init_dict(data, index, columns, dtype=dtype)
- # TODO: Can we get rid of this as a method?
-
- def _init_ndarray(self, values, index, columns, dtype=None, copy=False):
- # input must be a ndarray, list, Series, index
- return init_ndarray(values, index, columns, dtype=dtype, copy=copy)
- # TODO: can we just get rid of this as a method?
-
# ----------------------------------------------------------------------
@property
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 5859dc9e858b7..3b93fd6b7a9ef 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -1,6 +1,6 @@
"""
Functions for preparing various inputs passed to the DataFrame or Series
-constructors before passing them to aBlockManager.
+constructors before passing them to a BlockManager.
"""
from collections import OrderedDict
@@ -191,9 +191,9 @@ def init_dict(data, index, columns, dtype=None):
nan_dtype = object
else:
nan_dtype = dtype
- v = construct_1d_arraylike_from_scalar(np.nan, len(index),
- nan_dtype)
- arrays.loc[missing] = [v] * missing.sum()
+ val = construct_1d_arraylike_from_scalar(np.nan, len(index),
+ nan_dtype)
+ arrays.loc[missing] = [val] * missing.sum()
else:
keys = com.dict_keys_to_ordered_list(data)
@@ -246,28 +246,28 @@ def _homogenize(data, index, dtype=None):
oindex = None
homogenized = []
- for v in data:
- if isinstance(v, ABCSeries):
+ for val in data:
+ if isinstance(val, ABCSeries):
if dtype is not None:
- v = v.astype(dtype)
- if v.index is not index:
+ val = val.astype(dtype)
+ if val.index is not index:
# Forces alignment. No need to copy data since we
# are putting it into an ndarray later
- v = v.reindex(index, copy=False)
+ val = val.reindex(index, copy=False)
else:
- if isinstance(v, dict):
+ if isinstance(val, dict):
if oindex is None:
oindex = index.astype('O')
if isinstance(index, (ABCDatetimeIndex, ABCTimedeltaIndex)):
- v = com.dict_compat(v)
+ val = com.dict_compat(val)
else:
- v = dict(v)
- v = lib.fast_multiget(v, oindex.values, default=np.nan)
- v = sanitize_array(v, index, dtype=dtype, copy=False,
- raise_cast_failure=False)
+ val = dict(val)
+ val = lib.fast_multiget(val, oindex.values, default=np.nan)
+ val = sanitize_array(val, index, dtype=dtype, copy=False,
+ raise_cast_failure=False)
- homogenized.append(v)
+ homogenized.append(val)
return homogenized
@@ -284,16 +284,16 @@ def extract_index(data):
have_series = False
have_dicts = False
- for v in data:
- if isinstance(v, ABCSeries):
+ for val in data:
+ if isinstance(val, ABCSeries):
have_series = True
- indexes.append(v.index)
- elif isinstance(v, dict):
+ indexes.append(val.index)
+ elif isinstance(val, dict):
have_dicts = True
- indexes.append(list(v.keys()))
- elif is_list_like(v) and getattr(v, 'ndim', 1) == 1:
+ indexes.append(list(val.keys()))
+ elif is_list_like(val) and getattr(val, 'ndim', 1) == 1:
have_raw_arrays = True
- raw_lengths.append(len(v))
+ raw_lengths.append(len(val))
if not indexes and not raw_lengths:
raise ValueError('If using all scalar values, you must pass'
@@ -313,8 +313,9 @@ def extract_index(data):
if have_series:
if lengths[0] != len(index):
- msg = ('array length %d does not match index length %d' %
- (lengths[0], len(index)))
+ msg = ('array length {length} does not match index '
+ 'length {idx_len}'
+ .format(length=lengths[0], idx_len=len(index)))
raise ValueError(msg)
else:
index = ibase.default_index(lengths[0])
@@ -344,7 +345,7 @@ def get_names_from_index(data):
if n is not None:
index[i] = n
else:
- index[i] = 'Unnamed %d' % count
+ index[i] = 'Unnamed {count}'.format(count=count)
count += 1
return index
@@ -506,7 +507,7 @@ def sanitize_index(data, index, copy=False):
return data
if len(data) != len(index):
- raise ValueError('Length of values does not match length of ' 'index')
+ raise ValueError('Length of values does not match length of index')
if isinstance(data, ABCIndexClass) and not copy:
pass
| Avoid 1-letter variable names
Modernize string formatting
Avoid an unnecessary level of indirection in add_datetimelike_methods for DTA/TDA/PA | https://api.github.com/repos/pandas-dev/pandas/pulls/24116 | 2018-12-05T19:27:45Z | 2018-12-05T22:43:46Z | 2018-12-05T22:43:46Z | 2018-12-06T00:15:12Z |
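The "Modernize string formatting" item in the PR body above refers to replacing `%`-interpolation with named `str.format` fields, as in the `get_names_from_index` and `extract_index` hunks of the diff. A minimal before/after sketch of that style change (the helper names here are illustrative, not pandas API):

```python
def unnamed_label_old(count):
    # pre-change style: positional %-interpolation
    return 'Unnamed %d' % count

def unnamed_label_new(count):
    # post-change style: a named str.format field, as in the diff
    return 'Unnamed {count}'.format(count=count)

# Both render the same string; the named field is self-documenting
# and less error-prone when a message has several placeholders.
assert unnamed_label_old(3) == unnamed_label_new(3) == 'Unnamed 3'
```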
BUG/Perf: Support ExtensionArrays in where | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 6b8d548251061..a18c26f911f1d 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -675,6 +675,7 @@ changes were made:
 - ``SparseDataFrame.combine`` and ``DataFrame.combine_first`` no longer support combining a sparse column with a dense column while preserving the sparse subtype. The result will be an object-dtype SparseArray.
- Setting :attr:`SparseArray.fill_value` to a fill value with a different dtype is now allowed.
- ``DataFrame[column]`` is now a :class:`Series` with sparse values, rather than a :class:`SparseSeries`, when slicing a single column with sparse values (:issue:`23559`).
+- The result of :meth:`Series.where` is now a ``Series`` with sparse values, like with other extension arrays (:issue:`24077`)
Some new warnings are issued for operations that require or are likely to materialize a large dense array:
@@ -1113,6 +1114,8 @@ Deprecations
- :func:`pandas.types.is_datetimetz` is deprecated in favor of `pandas.types.is_datetime64tz` (:issue:`23917`)
- Creating a :class:`TimedeltaIndex` or :class:`DatetimeIndex` by passing range arguments `start`, `end`, and `periods` is deprecated in favor of :func:`timedelta_range` and :func:`date_range` (:issue:`23919`)
- Passing a string alias like ``'datetime64[ns, UTC]'`` as the `unit` parameter to :class:`DatetimeTZDtype` is deprecated. Use :class:`DatetimeTZDtype.construct_from_string` instead (:issue:`23990`).
+- In :meth:`Series.where` with Categorical data, providing an ``other`` that is not present in the categories is deprecated. Convert the categorical to a different dtype or add the ``other`` to the categories first (:issue:`24077`).
+
.. _whatsnew_0240.deprecations.datetimelike_int_ops:
@@ -1223,6 +1226,7 @@ Performance Improvements
- Improved performance of :meth:`DatetimeIndex.tz_localize` and various ``DatetimeIndex`` attributes with dateutil UTC timezone (:issue:`23772`)
- Fixed a performance regression on Windows with Python 3.7 of :func:`pd.read_csv` (:issue:`23516`)
- Improved performance of :class:`Categorical` constructor for `Series` objects (:issue:`23814`)
+- Improved performance of :meth:`~DataFrame.where` for Categorical data (:issue:`24077`)
.. _whatsnew_0240.docs:
@@ -1249,6 +1253,7 @@ Categorical
- In meth:`Series.unstack`, specifying a ``fill_value`` not present in the categories now raises a ``TypeError`` rather than ignoring the ``fill_value`` (:issue:`23284`)
- Bug when resampling :meth:`Dataframe.resample()` and aggregating on categorical data, the categorical dtype was getting lost. (:issue:`23227`)
- Bug in many methods of the ``.str``-accessor, which always failed on calling the ``CategoricalIndex.str`` constructor (:issue:`23555`, :issue:`23556`)
+- Bug in :meth:`Series.where` losing the categorical dtype for categorical data (:issue:`24077`)
Datetimelike
^^^^^^^^^^^^
@@ -1285,6 +1290,7 @@ Datetimelike
- Bug in :class:`DatetimeIndex` where calling ``np.array(dtindex, dtype=object)`` would incorrectly return an array of ``long`` objects (:issue:`23524`)
- Bug in :class:`Index` where passing a timezone-aware :class:`DatetimeIndex` and `dtype=object` would incorrectly raise a ``ValueError`` (:issue:`23524`)
- Bug in :class:`Index` where calling ``np.array(dtindex, dtype=object)`` on a timezone-naive :class:`DatetimeIndex` would return an array of ``datetime`` objects instead of :class:`Timestamp` objects, potentially losing nanosecond portions of the timestamps (:issue:`23524`)
+- Bug in :class:`Categorical.__setitem__` not allowing setting with another ``Categorical`` when both are unordered and have the same categories, but in a different order (:issue:`24142`)
- Bug in :func:`date_range` where using dates with millisecond resolution or higher could return incorrect values or the wrong number of values in the index (:issue:`24110`)
Timedelta
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 15abffa9a5d23..cf145064fd7b1 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -220,6 +220,8 @@ def __setitem__(self, key, value):
# example, a string like '2018-01-01' is coerced to a datetime
# when setting on a datetime64ns array. In general, if the
# __init__ method coerces that value, then so should __setitem__
+ # Note, also, that Series/DataFrame.where internally use __setitem__
+ # on a copy of the data.
raise NotImplementedError(_not_implemented_message.format(
type(self), '__setitem__')
)
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 6e96fc75daec9..abadd64b441b4 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2078,11 +2078,21 @@ def __setitem__(self, key, value):
`Categorical` does not have the same categories
"""
+ if isinstance(value, (ABCIndexClass, ABCSeries)):
+ value = value.array
+
# require identical categories set
if isinstance(value, Categorical):
- if not value.categories.equals(self.categories):
+ if not is_dtype_equal(self, value):
raise ValueError("Cannot set a Categorical with another, "
"without identical categories")
+ if not self.categories.equals(value.categories):
+ new_codes = _recode_for_categories(
+ value.codes, value.categories, self.categories
+ )
+ value = Categorical.from_codes(new_codes,
+ categories=self.categories,
+ ordered=self.ordered)
rvalue = value if is_list_like(value) else [value]
diff --git a/pandas/core/arrays/sparse.py b/pandas/core/arrays/sparse.py
index 0bd899149d940..9e1d2efc21b81 100644
--- a/pandas/core/arrays/sparse.py
+++ b/pandas/core/arrays/sparse.py
@@ -706,6 +706,8 @@ def __array__(self, dtype=None, copy=True):
def __setitem__(self, key, value):
# I suppose we could allow setting of non-fill_value elements.
+ # TODO(SparseArray.__setitem__): remove special cases in
+ # ExtensionBlock.where
msg = "SparseArray does not support item assignment via setitem"
raise TypeError(msg)
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 9ce4949992f4c..f1a05ec607b59 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -501,10 +501,13 @@ def _can_reindex(self, indexer):
@Appender(_index_shared_docs['where'])
def where(self, cond, other=None):
+ # TODO: Investigate an alternative implementation with
+ # 1. copy the underlying Categorical
+ # 2. setitem with `cond` and `other`
+ # 3. Rebuild CategoricalIndex.
if other is None:
other = self._na_value
values = np.where(cond, self.values, other)
-
cat = Categorical(values, dtype=self.dtype)
return self._shallow_copy(cat, **self._get_attributes_dict())
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index d37da14ab5d2c..1383ce09bc2d0 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -28,7 +28,8 @@
from pandas.core.dtypes.dtypes import (
CategoricalDtype, DatetimeTZDtype, ExtensionDtype, PandasExtensionDtype)
from pandas.core.dtypes.generic import (
- ABCDatetimeIndex, ABCExtensionArray, ABCIndexClass, ABCSeries)
+ ABCDataFrame, ABCDatetimeIndex, ABCExtensionArray, ABCIndexClass,
+ ABCSeries)
from pandas.core.dtypes.missing import (
_isna_compat, array_equivalent, is_null_datelike_scalar, isna, notna)
@@ -1886,7 +1887,6 @@ def take_nd(self, indexer, axis=0, new_mgr_locs=None, fill_tuple=None):
new_values = self.values.take(indexer, fill_value=fill_value,
allow_fill=True)
- # if we are a 1-dim object, then always place at 0
if self.ndim == 1 and new_mgr_locs is None:
new_mgr_locs = [0]
else:
@@ -1967,6 +1967,57 @@ def shift(self, periods, axis=0):
placement=self.mgr_locs,
ndim=self.ndim)]
+ def where(self, other, cond, align=True, errors='raise',
+ try_cast=False, axis=0, transpose=False):
+ # Extract the underlying arrays.
+ if isinstance(other, (ABCIndexClass, ABCSeries)):
+ other = other.array
+
+ elif isinstance(other, ABCDataFrame):
+ # ExtensionArrays are 1-D, so if we get here then
+ # `other` should be a DataFrame with a single column.
+ assert other.shape[1] == 1
+ other = other.iloc[:, 0].array
+
+ if isinstance(cond, ABCDataFrame):
+ assert cond.shape[1] == 1
+ cond = cond.iloc[:, 0].array
+
+ elif isinstance(cond, (ABCIndexClass, ABCSeries)):
+ cond = cond.array
+
+ if lib.is_scalar(other) and isna(other):
+ # The default `other` for Series / Frame is np.nan
+ # we want to replace that with the correct NA value
+ # for the type
+ other = self.dtype.na_value
+
+ if is_sparse(self.values):
+ # TODO(SparseArray.__setitem__): remove this if condition
+ # We need to re-infer the type of the data after doing the
+ # where, for cases where the subtypes don't match
+ dtype = None
+ else:
+ dtype = self.dtype
+
+ try:
+ result = self.values.copy()
+ icond = ~cond
+ if lib.is_scalar(other):
+ result[icond] = other
+ else:
+ result[icond] = other[icond]
+ except (NotImplementedError, TypeError):
+ # NotImplementedError for class not implementing `__setitem__`
+ # TypeError for SparseArray, which implements just to raise
+ # a TypeError
+ result = self._holder._from_sequence(
+ np.where(cond, self.values, other),
+ dtype=dtype,
+ )
+
+ return self.make_block_same_class(result, placement=self.mgr_locs)
+
@property
def _ftype(self):
return getattr(self.values, '_pandas_ftype', Block._ftype)
@@ -2658,6 +2709,33 @@ def concat_same_type(self, to_concat, placement=None):
values, placement=placement or slice(0, len(values), 1),
ndim=self.ndim)
+ def where(self, other, cond, align=True, errors='raise',
+ try_cast=False, axis=0, transpose=False):
+ # TODO(CategoricalBlock.where):
+ # This can all be deleted in favor of ExtensionBlock.where once
+ # we enforce the deprecation.
+ object_msg = (
+ "Implicitly converting categorical to object-dtype ndarray. "
+ "One or more of the values in 'other' are not present in this "
+ "categorical's categories. A future version of pandas will raise "
+ "a ValueError when 'other' contains different categories.\n\n"
+ "To preserve the current behavior, add the new categories to "
+ "the categorical before calling 'where', or convert the "
+ "categorical to a different dtype."
+ )
+ try:
+ # Attempt to do preserve categorical dtype.
+ result = super(CategoricalBlock, self).where(
+ other, cond, align, errors, try_cast, axis, transpose
+ )
+ except (TypeError, ValueError):
+ warnings.warn(object_msg, FutureWarning, stacklevel=6)
+ result = self.astype(object).where(other, cond, align=align,
+ errors=errors,
+ try_cast=try_cast,
+ axis=axis, transpose=transpose)
+ return result
+
class DatetimeBlock(DatetimeLikeBlockMixin, Block):
__slots__ = ()
diff --git a/pandas/tests/arrays/categorical/test_indexing.py b/pandas/tests/arrays/categorical/test_indexing.py
index 8df5728f7d895..44b4589d5a663 100644
--- a/pandas/tests/arrays/categorical/test_indexing.py
+++ b/pandas/tests/arrays/categorical/test_indexing.py
@@ -3,6 +3,7 @@
import numpy as np
import pytest
+import pandas as pd
from pandas import Categorical, CategoricalIndex, Index, PeriodIndex, Series
import pandas.core.common as com
from pandas.tests.arrays.categorical.common import TestCategorical
@@ -43,6 +44,45 @@ def test_setitem(self):
tm.assert_categorical_equal(c, expected)
+ @pytest.mark.parametrize('other', [
+ pd.Categorical(['b', 'a']),
+ pd.Categorical(['b', 'a'], categories=['b', 'a']),
+ ])
+ def test_setitem_same_but_unordered(self, other):
+ # GH-24142
+ target = pd.Categorical(['a', 'b'], categories=['a', 'b'])
+ mask = np.array([True, False])
+ target[mask] = other[mask]
+ expected = pd.Categorical(['b', 'b'], categories=['a', 'b'])
+ tm.assert_categorical_equal(target, expected)
+
+ @pytest.mark.parametrize('other', [
+ pd.Categorical(['b', 'a'], categories=['b', 'a', 'c']),
+ pd.Categorical(['b', 'a'], categories=['a', 'b', 'c']),
+ pd.Categorical(['a', 'a'], categories=['a']),
+ pd.Categorical(['b', 'b'], categories=['b']),
+ ])
+ def test_setitem_different_unordered_raises(self, other):
+ # GH-24142
+ target = pd.Categorical(['a', 'b'], categories=['a', 'b'])
+ mask = np.array([True, False])
+ with pytest.raises(ValueError):
+ target[mask] = other[mask]
+
+ @pytest.mark.parametrize('other', [
+ pd.Categorical(['b', 'a']),
+ pd.Categorical(['b', 'a'], categories=['b', 'a'], ordered=True),
+ pd.Categorical(['b', 'a'], categories=['a', 'b', 'c'], ordered=True),
+ ])
+ def test_setitem_same_ordered_raises(self, other):
+ # GH-24142
+ target = pd.Categorical(['a', 'b'], categories=['a', 'b'],
+ ordered=True)
+ mask = np.array([True, False])
+
+ with pytest.raises(ValueError):
+ target[mask] = other[mask]
+
class TestCategoricalIndexing(object):
@@ -122,6 +162,60 @@ def test_get_indexer_non_unique(self, idx_values, key_values, key_class):
tm.assert_numpy_array_equal(expected, result)
tm.assert_numpy_array_equal(exp_miss, res_miss)
+ def test_where_unobserved_nan(self):
+ ser = pd.Series(pd.Categorical(['a', 'b']))
+ result = ser.where([True, False])
+ expected = pd.Series(pd.Categorical(['a', None],
+ categories=['a', 'b']))
+ tm.assert_series_equal(result, expected)
+
+ # all NA
+ ser = pd.Series(pd.Categorical(['a', 'b']))
+ result = ser.where([False, False])
+ expected = pd.Series(pd.Categorical([None, None],
+ categories=['a', 'b']))
+ tm.assert_series_equal(result, expected)
+
+ def test_where_unobserved_categories(self):
+ ser = pd.Series(
+ Categorical(['a', 'b', 'c'], categories=['d', 'c', 'b', 'a'])
+ )
+ result = ser.where([True, True, False], other='b')
+ expected = pd.Series(
+ Categorical(['a', 'b', 'b'], categories=ser.cat.categories)
+ )
+ tm.assert_series_equal(result, expected)
+
+ def test_where_other_categorical(self):
+ ser = pd.Series(
+ Categorical(['a', 'b', 'c'], categories=['d', 'c', 'b', 'a'])
+ )
+ other = Categorical(['b', 'c', 'a'], categories=['a', 'c', 'b', 'd'])
+ result = ser.where([True, False, True], other)
+ expected = pd.Series(Categorical(['a', 'c', 'c'], dtype=ser.dtype))
+ tm.assert_series_equal(result, expected)
+
+ def test_where_warns(self):
+ ser = pd.Series(Categorical(['a', 'b', 'c']))
+ with tm.assert_produces_warning(FutureWarning):
+ result = ser.where([True, False, True], 'd')
+
+ expected = pd.Series(np.array(['a', 'd', 'c'], dtype='object'))
+ tm.assert_series_equal(result, expected)
+
+ def test_where_ordered_differs_raises(self):
+ ser = pd.Series(
+ Categorical(['a', 'b', 'c'], categories=['d', 'c', 'b', 'a'],
+ ordered=True)
+ )
+ other = Categorical(['b', 'c', 'a'], categories=['a', 'c', 'b', 'd'],
+ ordered=True)
+ with tm.assert_produces_warning(FutureWarning):
+ result = ser.where([True, False, True], other)
+
+ expected = pd.Series(np.array(['a', 'c', 'c'], dtype=object))
+ tm.assert_series_equal(result, expected)
+
@pytest.mark.parametrize("index", [True, False])
def test_mask_with_boolean(index):
diff --git a/pandas/tests/arrays/interval/test_interval.py b/pandas/tests/arrays/interval/test_interval.py
index a04579dbbb6b1..9604010571294 100644
--- a/pandas/tests/arrays/interval/test_interval.py
+++ b/pandas/tests/arrays/interval/test_interval.py
@@ -2,7 +2,8 @@
import numpy as np
import pytest
-from pandas import Index, IntervalIndex, date_range, timedelta_range
+import pandas as pd
+from pandas import Index, Interval, IntervalIndex, date_range, timedelta_range
from pandas.core.arrays import IntervalArray
import pandas.util.testing as tm
@@ -50,6 +51,17 @@ def test_set_closed(self, closed, new_closed):
expected = IntervalArray.from_breaks(range(10), closed=new_closed)
tm.assert_extension_array_equal(result, expected)
+ @pytest.mark.parametrize('other', [
+ Interval(0, 1, closed='right'),
+ IntervalArray.from_breaks([1, 2, 3, 4], closed='right'),
+ ])
+ def test_where_raises(self, other):
+ ser = pd.Series(IntervalArray.from_breaks([1, 2, 3, 4],
+ closed='left'))
+ match = "'value.closed' is 'right', expected 'left'."
+ with pytest.raises(ValueError, match=match):
+ ser.where([True, False, True], other=other)
+
class TestSetitem(object):
diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
index b8cef92f6a6d4..7d8cc34ae1462 100644
--- a/pandas/tests/arrays/sparse/test_array.py
+++ b/pandas/tests/arrays/sparse/test_array.py
@@ -357,10 +357,10 @@ def setitem():
def setslice():
self.arr[1:5] = 2
- with pytest.raises(TypeError, match="item assignment"):
+ with pytest.raises(TypeError, match="assignment via setitem"):
setitem()
- with pytest.raises(TypeError, match="item assignment"):
+ with pytest.raises(TypeError, match="assignment via setitem"):
setslice()
def test_constructor_from_too_large_array(self):
diff --git a/pandas/tests/arrays/test_period.py b/pandas/tests/arrays/test_period.py
index bf139bb0ce616..4425cc8eb1139 100644
--- a/pandas/tests/arrays/test_period.py
+++ b/pandas/tests/arrays/test_period.py
@@ -197,6 +197,21 @@ def test_sub_period():
arr - other
+# ----------------------------------------------------------------------------
+# Methods
+
+@pytest.mark.parametrize('other', [
+ pd.Period('2000', freq='H'),
+ period_array(['2000', '2001', '2000'], freq='H')
+])
+def test_where_different_freq_raises(other):
+ ser = pd.Series(period_array(['2000', '2001', '2002'], freq='D'))
+ cond = np.array([True, False, True])
+ with pytest.raises(IncompatibleFrequency,
+ match="Input has different freq=H"):
+ ser.where(cond, other)
+
+
# ----------------------------------------------------------------------------
# Printing
diff --git a/pandas/tests/extension/base/methods.py b/pandas/tests/extension/base/methods.py
index 677963078176d..4a409a84f3db4 100644
--- a/pandas/tests/extension/base/methods.py
+++ b/pandas/tests/extension/base/methods.py
@@ -164,7 +164,6 @@ def test_combine_add(self, data_repeated):
orig_data1._from_sequence([a + val for a in list(orig_data1)]))
self.assert_series_equal(result, expected)
- @pytest.mark.xfail(reason="GH-24147", strict=True)
def test_combine_first(self, data):
# https://github.com/pandas-dev/pandas/issues/24147
a = pd.Series(data[:3])
@@ -231,3 +230,37 @@ def test_hash_pandas_object_works(self, data, as_frame):
a = pd.util.hash_pandas_object(data)
b = pd.util.hash_pandas_object(data)
self.assert_equal(a, b)
+
+ @pytest.mark.parametrize("as_frame", [True, False])
+ def test_where_series(self, data, na_value, as_frame):
+ assert data[0] != data[1]
+ cls = type(data)
+ a, b = data[:2]
+
+ ser = pd.Series(cls._from_sequence([a, a, b, b], dtype=data.dtype))
+ cond = np.array([True, True, False, False])
+
+ if as_frame:
+ ser = ser.to_frame(name='a')
+ cond = cond.reshape(-1, 1)
+
+ result = ser.where(cond)
+ expected = pd.Series(cls._from_sequence([a, a, na_value, na_value],
+ dtype=data.dtype))
+
+ if as_frame:
+ expected = expected.to_frame(name='a')
+ self.assert_equal(result, expected)
+
+ # array other
+ cond = np.array([True, False, True, True])
+ other = cls._from_sequence([a, b, a, b], dtype=data.dtype)
+ if as_frame:
+ other = pd.DataFrame({"a": other})
+ cond = pd.DataFrame({"a": cond})
+ result = ser.where(cond, other)
+ expected = pd.Series(cls._from_sequence([a, b, b, b],
+ dtype=data.dtype))
+ if as_frame:
+ expected = expected.to_frame(name='a')
+ self.assert_equal(result, expected)
diff --git a/pandas/tests/extension/conftest.py b/pandas/tests/extension/conftest.py
index 7758bd01840ae..5349dd919f2a2 100644
--- a/pandas/tests/extension/conftest.py
+++ b/pandas/tests/extension/conftest.py
@@ -11,7 +11,11 @@ def dtype():
@pytest.fixture
def data():
- """Length-100 array for this type."""
+ """Length-100 array for this type.
+
+ * data[0] and data[1] should both be non-missing
+ * data[0] and data[1] should not be equal
+ """
raise NotImplementedError
diff --git a/pandas/tests/extension/json/test_json.py b/pandas/tests/extension/json/test_json.py
index a941b562fe1ec..a35997b07fd83 100644
--- a/pandas/tests/extension/json/test_json.py
+++ b/pandas/tests/extension/json/test_json.py
@@ -217,10 +217,21 @@ def test_combine_le(self, data_repeated):
def test_combine_add(self, data_repeated):
pass
+ @pytest.mark.skip(reason="combine for JSONArray not supported")
+ def test_combine_first(self, data):
+ pass
+
@unhashable
def test_hash_pandas_object_works(self, data, kind):
super().test_hash_pandas_object_works(data, kind)
+ @pytest.mark.skip(reason="broadcasting error")
+ def test_where_series(self, data, na_value):
+ # Fails with
+ # *** ValueError: operands could not be broadcast together
+ # with shapes (4,) (4,) (0,)
+ super().test_where_series(data, na_value)
+
class TestCasting(BaseJSON, base.BaseCastingTests):
@pytest.mark.skip(reason="failing on np.array(self, dtype=str)")
diff --git a/pandas/tests/extension/test_categorical.py b/pandas/tests/extension/test_categorical.py
index 5b873b337880e..6106bc3d58620 100644
--- a/pandas/tests/extension/test_categorical.py
+++ b/pandas/tests/extension/test_categorical.py
@@ -25,7 +25,14 @@
def make_data():
- return np.random.choice(list(string.ascii_letters), size=100)
+ while True:
+ values = np.random.choice(list(string.ascii_letters), size=100)
+ # ensure we meet the requirements
+ # 1. first two not null
+ # 2. first and second are different
+ if values[0] != values[1]:
+ break
+ return values
@pytest.fixture
@@ -35,7 +42,11 @@ def dtype():
@pytest.fixture
def data():
- """Length-100 PeriodArray for semantics test."""
+ """Length-100 array for this type.
+
+ * data[0] and data[1] should both be non-missing
+ * data[0] and data[1] should not be equal
+ """
return Categorical(make_data())
diff --git a/pandas/tests/extension/test_sparse.py b/pandas/tests/extension/test_sparse.py
index 891e5f4dd9a95..ea849a78cda12 100644
--- a/pandas/tests/extension/test_sparse.py
+++ b/pandas/tests/extension/test_sparse.py
@@ -13,6 +13,8 @@ def make_data(fill_value):
data = np.random.uniform(size=100)
else:
data = np.random.randint(1, 100, size=100)
+ if data[0] == data[1]:
+ data[0] += 1
data[2::3] = fill_value
return data
@@ -255,6 +257,35 @@ def test_fillna_copy_series(self, data_missing):
def test_fillna_length_mismatch(self, data_missing):
pass
+ def test_where_series(self, data, na_value):
+ assert data[0] != data[1]
+ cls = type(data)
+ a, b = data[:2]
+
+ ser = pd.Series(cls._from_sequence([a, a, b, b], dtype=data.dtype))
+
+ cond = np.array([True, True, False, False])
+ result = ser.where(cond)
+
+ new_dtype = SparseDtype('float', 0.0)
+ expected = pd.Series(cls._from_sequence([a, a, na_value, na_value],
+ dtype=new_dtype))
+ self.assert_series_equal(result, expected)
+
+ other = cls._from_sequence([a, b, a, b], dtype=data.dtype)
+ cond = np.array([True, False, True, True])
+ result = ser.where(cond, other)
+ expected = pd.Series(cls._from_sequence([a, b, b, b],
+ dtype=data.dtype))
+ self.assert_series_equal(result, expected)
+
+ def test_combine_first(self, data):
+ if data.dtype.subtype == 'int':
+ # Right now this is upcasted to float, just like combine_first
+ # for Series[int]
+ pytest.skip("TODO(SparseArray.__setitem__ will preserve dtype.")
+ super(TestMethods, self).test_combine_first(data)
+
class TestCasting(BaseSparseTests, base.BaseCastingTests):
pass
We need some way to do `.where` on EA objects for DatetimeArray. Adding it
to the interface is, I think, the easiest way.
Initially I started to write a version on ExtensionBlock, but it proved
unwieldy to write a version that performed well for all types.
It *may* be possible to do this using `_ndarray_values`, but we'd need a few
more things around that (missing values, converting an arbitrary array to the
"same" `_ndarray_values`, error handling, re-constructing). It seemed easier
to push this down to the array.
The implementation on ExtensionArray is readable, but likely slow since
it'll involve a conversion to object-dtype.
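The object-dtype fallback described above can be sketched roughly as follows. This is a minimal illustration, not the actual pandas implementation; the function name and signature are invented for this example, and reconstruction into the concrete array type (e.g. via `_from_sequence`) is left to the caller:

```python
import numpy as np

def where_object_fallback(values, cond, other):
    # Hedged sketch of a generic ``where`` done via object dtype:
    # round-trip the values through object arrays, apply the mask,
    # and let the caller rebuild the extension array afterwards.
    # Readable, but likely slow for large arrays.
    arr = np.asarray(values, dtype=object)
    oth = np.asarray(other, dtype=object)
    if oth.ndim == 0:  # broadcast a scalar ``other`` to the array shape
        oth = np.full(arr.shape, other, dtype=object)
    return np.where(np.asarray(cond, dtype=bool), arr, oth)

print(list(where_object_fallback([1, 2, 3, 4], [True, False, True, False], -1)))
# → [1, -1, 3, -1]
```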
Closes #24077
Closes #24142
Closes #16983 | https://api.github.com/repos/pandas-dev/pandas/pulls/24114 | 2018-12-05T17:27:22Z | 2018-12-10T15:21:09Z | 2018-12-10T15:21:08Z | 2018-12-10T15:21:12Z |
BUG: use internal linkage (static variables) in move.c | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index eab5956735f12..703192f911ccb 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1582,6 +1582,7 @@ Other
- Logical operations ``&, |, ^`` between :class:`Series` and :class:`Index` will no longer raise ``ValueError`` (:issue:`22092`)
- Checking PEP 3141 numbers in :func:`~pandas.api.types.is_scalar` function returns ``True`` (:issue:`22903`)
- Bug in :meth:`DataFrame.combine_first` in which column types were unexpectedly converted to float (:issue:`20699`)
+- Bug where C variables were declared with external linkage causing import errors if certain other C libraries were imported before Pandas. (:issue:`24113`)
.. _whatsnew_0.24.0.contributors:
diff --git a/pandas/util/move.c b/pandas/util/move.c
index 9a8af5bbfbdf6..62860adb1c1f6 100644
--- a/pandas/util/move.c
+++ b/pandas/util/move.c
@@ -20,7 +20,7 @@
#define Py_TPFLAGS_HAVE_NEWBUFFER 0
#endif
-PyObject *badmove; /* bad move exception class */
+static PyObject *badmove; /* bad move exception class */
typedef struct {
PyObject_HEAD
@@ -28,7 +28,7 @@ typedef struct {
PyObject *invalid_bytes;
} stolenbufobject;
-PyTypeObject stolenbuf_type; /* forward declare type */
+static PyTypeObject stolenbuf_type; /* forward declare type */
static void
stolenbuf_dealloc(stolenbufobject *self)
@@ -71,7 +71,7 @@ stolenbuf_getsegcount(stolenbufobject *self, Py_ssize_t *len)
return 1;
}
-PyBufferProcs stolenbuf_as_buffer = {
+static PyBufferProcs stolenbuf_as_buffer = {
(readbufferproc) stolenbuf_getreadwritebuf,
(writebufferproc) stolenbuf_getreadwritebuf,
(segcountproc) stolenbuf_getsegcount,
@@ -81,7 +81,7 @@ PyBufferProcs stolenbuf_as_buffer = {
#else /* Python 3 */
-PyBufferProcs stolenbuf_as_buffer = {
+static PyBufferProcs stolenbuf_as_buffer = {
(getbufferproc) stolenbuf_getbuffer,
NULL,
};
@@ -91,7 +91,7 @@ PyBufferProcs stolenbuf_as_buffer = {
PyDoc_STRVAR(stolenbuf_doc,
"A buffer that is wrapping a stolen bytes object's buffer.");
-PyTypeObject stolenbuf_type = {
+static PyTypeObject stolenbuf_type = {
PyVarObject_HEAD_INIT(NULL, 0)
"pandas.util._move.stolenbuf", /* tp_name */
sizeof(stolenbufobject), /* tp_basicsize */
@@ -185,7 +185,7 @@ move_into_mutable_buffer(PyObject *self, PyObject *bytes_rvalue)
return (PyObject*) ret;
}
-PyMethodDef methods[] = {
+static PyMethodDef methods[] = {
{"move_into_mutable_buffer",
(PyCFunction) move_into_mutable_buffer,
METH_O,
@@ -196,7 +196,7 @@ PyMethodDef methods[] = {
#define MODULE_NAME "pandas.util._move"
#if !COMPILING_IN_PY2
-PyModuleDef _move_module = {
+static PyModuleDef move_module = {
PyModuleDef_HEAD_INIT,
MODULE_NAME,
NULL,
@@ -242,7 +242,7 @@ init_move(void)
}
#if !COMPILING_IN_PY2
- if (!(m = PyModule_Create(&_move_module)))
+ if (!(m = PyModule_Create(&move_module)))
#else
if (!(m = Py_InitModule(MODULE_NAME, methods)))
#endif /* !COMPILING_IN_PY2 */
| move.c declared global variables without explicitly declaring them static, so
they had global visibility. This meant that if you linked against a shared
object that provided the same symbols, the wrong data would be read. In
particular: linking to libgtk, which provides the symbol 'methods', would cause
an import error of pandas.util._move.
- [x] closes #19706
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24113 | 2018-12-05T16:50:49Z | 2018-12-06T21:11:36Z | 2018-12-06T21:11:35Z | 2018-12-06T21:11:37Z |
REF/TST: Add pytest idiom to reshape/test_tile | diff --git a/pandas/tests/reshape/test_cut.py b/pandas/tests/reshape/test_cut.py
new file mode 100644
index 0000000000000..458b4b13248dd
--- /dev/null
+++ b/pandas/tests/reshape/test_cut.py
@@ -0,0 +1,447 @@
+import numpy as np
+import pytest
+
+import pandas as pd
+from pandas import (
+ Categorical, DataFrame, DatetimeIndex, Index, Interval, IntervalIndex,
+ Series, TimedeltaIndex, Timestamp, cut, date_range, isna, qcut,
+ timedelta_range, to_datetime)
+from pandas.api.types import CategoricalDtype as CDT
+import pandas.core.reshape.tile as tmod
+import pandas.util.testing as tm
+
+
+def test_simple():
+ data = np.ones(5, dtype="int64")
+ result = cut(data, 4, labels=False)
+
+ expected = np.array([1, 1, 1, 1, 1])
+ tm.assert_numpy_array_equal(result, expected, check_dtype=False)
+
+
+def test_bins():
+ data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1])
+ result, bins = cut(data, 3, retbins=True)
+
+ intervals = IntervalIndex.from_breaks(bins.round(3))
+ intervals = intervals.take([0, 0, 0, 1, 2, 0])
+ expected = Categorical(intervals, ordered=True)
+
+ tm.assert_categorical_equal(result, expected)
+ tm.assert_almost_equal(bins, np.array([0.1905, 3.36666667,
+ 6.53333333, 9.7]))
+
+
+def test_right():
+ data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1, 2.575])
+ result, bins = cut(data, 4, right=True, retbins=True)
+
+ intervals = IntervalIndex.from_breaks(bins.round(3))
+ expected = Categorical(intervals, ordered=True)
+ expected = expected.take([0, 0, 0, 2, 3, 0, 0])
+
+ tm.assert_categorical_equal(result, expected)
+ tm.assert_almost_equal(bins, np.array([0.1905, 2.575, 4.95, 7.325, 9.7]))
+
+
+def test_no_right():
+ data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1, 2.575])
+ result, bins = cut(data, 4, right=False, retbins=True)
+
+ intervals = IntervalIndex.from_breaks(bins.round(3), closed="left")
+ intervals = intervals.take([0, 0, 0, 2, 3, 0, 1])
+ expected = Categorical(intervals, ordered=True)
+
+ tm.assert_categorical_equal(result, expected)
+ tm.assert_almost_equal(bins, np.array([0.2, 2.575, 4.95, 7.325, 9.7095]))
+
+
+def test_array_like():
+ data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
+ result, bins = cut(data, 3, retbins=True)
+
+ intervals = IntervalIndex.from_breaks(bins.round(3))
+ intervals = intervals.take([0, 0, 0, 1, 2, 0])
+ expected = Categorical(intervals, ordered=True)
+
+ tm.assert_categorical_equal(result, expected)
+ tm.assert_almost_equal(bins, np.array([0.1905, 3.36666667,
+ 6.53333333, 9.7]))
+
+
+def test_bins_from_interval_index():
+ c = cut(range(5), 3)
+ expected = c
+ result = cut(range(5), bins=expected.categories)
+ tm.assert_categorical_equal(result, expected)
+
+ expected = Categorical.from_codes(np.append(c.codes, -1),
+ categories=c.categories,
+ ordered=True)
+ result = cut(range(6), bins=expected.categories)
+ tm.assert_categorical_equal(result, expected)
+
+
+def test_bins_from_interval_index_doc_example():
+ # Make sure we preserve the bins.
+ ages = np.array([10, 15, 13, 12, 23, 25, 28, 59, 60])
+ c = cut(ages, bins=[0, 18, 35, 70])
+ expected = IntervalIndex.from_tuples([(0, 18), (18, 35), (35, 70)])
+ tm.assert_index_equal(c.categories, expected)
+
+ result = cut([25, 20, 50], bins=c.categories)
+ tm.assert_index_equal(result.categories, expected)
+ tm.assert_numpy_array_equal(result.codes,
+ np.array([1, 1, 2], dtype="int8"))
+
+
+def test_bins_not_overlapping_from_interval_index():
+ # see gh-23980
+ msg = "Overlapping IntervalIndex is not accepted"
+ ii = IntervalIndex.from_tuples([(0, 10), (2, 12), (4, 14)])
+
+ with pytest.raises(ValueError, match=msg):
+ cut([5, 6], bins=ii)
+
+
+def test_bins_not_monotonic():
+ msg = "bins must increase monotonically"
+ data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
+
+ with pytest.raises(ValueError, match=msg):
+ cut(data, [0.1, 1.5, 1, 10])
+
+
+def test_wrong_num_labels():
+ msg = "Bin labels must be one fewer than the number of bin edges"
+ data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
+
+ with pytest.raises(ValueError, match=msg):
+ cut(data, [0, 1, 10], labels=["foo", "bar", "baz"])
+
+
+@pytest.mark.parametrize("x,bins,msg", [
+ ([], 2, "Cannot cut empty array"),
+ ([1, 2, 3], 0.5, "`bins` should be a positive integer")
+])
+def test_cut_corner(x, bins, msg):
+ with pytest.raises(ValueError, match=msg):
+ cut(x, bins)
+
+
+@pytest.mark.parametrize("arg", [2, np.eye(2), DataFrame(np.eye(2))])
+@pytest.mark.parametrize("cut_func", [cut, qcut])
+def test_cut_not_1d_arg(arg, cut_func):
+ msg = "Input array must be 1 dimensional"
+ with pytest.raises(ValueError, match=msg):
+ cut_func(arg, 2)
+
+
+def test_cut_out_of_range_more():
+ # see gh-1511
+ name = "x"
+
+ ser = Series([0, -1, 0, 1, -3], name=name)
+ ind = cut(ser, [0, 1], labels=False)
+
+ exp = Series([np.nan, np.nan, np.nan, 0, np.nan], name=name)
+ tm.assert_series_equal(ind, exp)
+
+
+@pytest.mark.parametrize("right,breaks,closed", [
+ (True, [-1e-3, 0.25, 0.5, 0.75, 1], "right"),
+ (False, [0, 0.25, 0.5, 0.75, 1 + 1e-3], "left")
+])
+def test_labels(right, breaks, closed):
+ arr = np.tile(np.arange(0, 1.01, 0.1), 4)
+
+ result, bins = cut(arr, 4, retbins=True, right=right)
+ ex_levels = IntervalIndex.from_breaks(breaks, closed=closed)
+ tm.assert_index_equal(result.categories, ex_levels)
+
+
+def test_cut_pass_series_name_to_factor():
+ name = "foo"
+ ser = Series(np.random.randn(100), name=name)
+
+ factor = cut(ser, 4)
+ assert factor.name == name
+
+
+def test_label_precision():
+ arr = np.arange(0, 0.73, 0.01)
+ result = cut(arr, 4, precision=2)
+
+ ex_levels = IntervalIndex.from_breaks([-0.00072, 0.18, 0.36, 0.54, 0.72])
+ tm.assert_index_equal(result.categories, ex_levels)
+
+
+@pytest.mark.parametrize("labels", [None, False])
+def test_na_handling(labels):
+ arr = np.arange(0, 0.75, 0.01)
+ arr[::3] = np.nan
+
+ result = cut(arr, 4, labels=labels)
+ result = np.asarray(result)
+
+ expected = np.where(isna(arr), np.nan, result)
+ tm.assert_almost_equal(result, expected)
+
+
+def test_inf_handling():
+ data = np.arange(6)
+ data_ser = Series(data, dtype="int64")
+
+ bins = [-np.inf, 2, 4, np.inf]
+ result = cut(data, bins)
+ result_ser = cut(data_ser, bins)
+
+ ex_uniques = IntervalIndex.from_breaks(bins)
+ tm.assert_index_equal(result.categories, ex_uniques)
+
+ assert result[5] == Interval(4, np.inf)
+ assert result[0] == Interval(-np.inf, 2)
+ assert result_ser[5] == Interval(4, np.inf)
+ assert result_ser[0] == Interval(-np.inf, 2)
+
+
+def test_cut_out_of_bounds():
+ arr = np.random.randn(100)
+ result = cut(arr, [-1, 0, 1])
+
+ mask = isna(result)
+ ex_mask = (arr < -1) | (arr > 1)
+ tm.assert_numpy_array_equal(mask, ex_mask)
+
+
+@pytest.mark.parametrize("get_labels,get_expected", [
+ (lambda labels: labels,
+ lambda labels: Categorical(["Medium"] + 4 * ["Small"] +
+ ["Medium", "Large"],
+ categories=labels, ordered=True)),
+ (lambda labels: Categorical.from_codes([0, 1, 2], labels),
+ lambda labels: Categorical.from_codes([1] + 4 * [0] + [1, 2], labels))
+])
+def test_cut_pass_labels(get_labels, get_expected):
+ bins = [0, 25, 50, 100]
+ arr = [50, 5, 10, 15, 20, 30, 70]
+ labels = ["Small", "Medium", "Large"]
+
+ result = cut(arr, bins, labels=get_labels(labels))
+ tm.assert_categorical_equal(result, get_expected(labels))
+
+
+def test_cut_pass_labels_compat():
+ # see gh-16459
+ arr = [50, 5, 10, 15, 20, 30, 70]
+ labels = ["Good", "Medium", "Bad"]
+
+ result = cut(arr, 3, labels=labels)
+ exp = cut(arr, 3, labels=Categorical(labels, categories=labels,
+ ordered=True))
+ tm.assert_categorical_equal(result, exp)
+
+
+@pytest.mark.parametrize("x", [np.arange(11.), np.arange(11.) / 1e10])
+def test_round_frac_just_works(x):
+ # It works.
+ cut(x, 2)
+
+
+@pytest.mark.parametrize("val,precision,expected", [
+ (-117.9998, 3, -118),
+ (117.9998, 3, 118),
+ (117.9998, 2, 118),
+ (0.000123456, 2, 0.00012)
+])
+def test_round_frac(val, precision, expected):
+ # see gh-1979
+ result = tmod._round_frac(val, precision=precision)
+ assert result == expected
+
+
+def test_cut_return_intervals():
+ ser = Series([0, 1, 2, 3, 4, 5, 6, 7, 8])
+ result = cut(ser, 3)
+
+ exp_bins = np.linspace(0, 8, num=4).round(3)
+ exp_bins[0] -= 0.008
+
+ expected = Series(IntervalIndex.from_breaks(exp_bins, closed="right").take(
+ [0, 0, 0, 1, 1, 1, 2, 2, 2])).astype(CDT(ordered=True))
+ tm.assert_series_equal(result, expected)
+
+
+def test_series_ret_bins():
+ # see gh-8589
+ ser = Series(np.arange(4))
+ result, bins = cut(ser, 2, retbins=True)
+
+ expected = Series(IntervalIndex.from_breaks(
+ [-0.003, 1.5, 3], closed="right").repeat(2)).astype(CDT(ordered=True))
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("kwargs,msg", [
+ (dict(duplicates="drop"), None),
+ (dict(), "Bin edges must be unique"),
+ (dict(duplicates="raise"), "Bin edges must be unique"),
+ (dict(duplicates="foo"), "invalid value for 'duplicates' parameter")
+])
+def test_cut_duplicates_bin(kwargs, msg):
+ # see gh-20947
+ bins = [0, 2, 4, 6, 10, 10]
+ values = Series(np.array([1, 3, 5, 7, 9]), index=["a", "b", "c", "d", "e"])
+
+ if msg is not None:
+ with pytest.raises(ValueError, match=msg):
+ cut(values, bins, **kwargs)
+ else:
+ result = cut(values, bins, **kwargs)
+ expected = cut(values, pd.unique(bins))
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("data", [9.0, -9.0, 0.0])
+@pytest.mark.parametrize("length", [1, 2])
+def test_single_bin(data, length):
+ # see gh-14652, gh-15428
+ ser = Series([data] * length)
+ result = cut(ser, 1, labels=False)
+
+ expected = Series([0] * length)
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "array_1_writeable,array_2_writeable",
+ [(True, True), (True, False), (False, False)])
+def test_cut_read_only(array_1_writeable, array_2_writeable):
+ # issue 18773
+ array_1 = np.arange(0, 100, 10)
+ array_1.flags.writeable = array_1_writeable
+
+ array_2 = np.arange(0, 100, 10)
+ array_2.flags.writeable = array_2_writeable
+
+ hundred_elements = np.arange(100)
+ tm.assert_categorical_equal(cut(hundred_elements, array_1),
+ cut(hundred_elements, array_2))
+
+
+@pytest.mark.parametrize("conv", [
+ lambda v: Timestamp(v),
+ lambda v: to_datetime(v),
+ lambda v: np.datetime64(v),
+ lambda v: Timestamp(v).to_pydatetime(),
+])
+def test_datetime_bin(conv):
+ data = [np.datetime64("2012-12-13"), np.datetime64("2012-12-15")]
+ bin_data = ["2012-12-12", "2012-12-14", "2012-12-16"]
+
+ expected = Series(IntervalIndex([
+ Interval(Timestamp(bin_data[0]), Timestamp(bin_data[1])),
+ Interval(Timestamp(bin_data[1]), Timestamp(bin_data[2]))])).astype(
+ CDT(ordered=True))
+
+ bins = [conv(v) for v in bin_data]
+ result = Series(cut(data, bins=bins))
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("data", [
+ to_datetime(Series(["2013-01-01", "2013-01-02", "2013-01-03"])),
+ [np.datetime64("2013-01-01"), np.datetime64("2013-01-02"),
+ np.datetime64("2013-01-03")],
+ np.array([np.datetime64("2013-01-01"), np.datetime64("2013-01-02"),
+ np.datetime64("2013-01-03")]),
+ DatetimeIndex(["2013-01-01", "2013-01-02", "2013-01-03"])
+])
+def test_datetime_cut(data):
+ # see gh-14714
+ #
+ # Testing time data when it comes in various collection types.
+ result, _ = cut(data, 3, retbins=True)
+ expected = Series(IntervalIndex([
+ Interval(Timestamp("2012-12-31 23:57:07.200000"),
+ Timestamp("2013-01-01 16:00:00")),
+ Interval(Timestamp("2013-01-01 16:00:00"),
+ Timestamp("2013-01-02 08:00:00")),
+ Interval(Timestamp("2013-01-02 08:00:00"),
+ Timestamp("2013-01-03 00:00:00"))])).astype(CDT(ordered=True))
+ tm.assert_series_equal(Series(result), expected)
+
+
+@pytest.mark.parametrize("bins", [
+ 3, [Timestamp("2013-01-01 04:57:07.200000"),
+ Timestamp("2013-01-01 21:00:00"),
+ Timestamp("2013-01-02 13:00:00"),
+ Timestamp("2013-01-03 05:00:00")]])
+@pytest.mark.parametrize("box", [list, np.array, Index, Series])
+def test_datetime_tz_cut(bins, box):
+ # see gh-19872
+ tz = "US/Eastern"
+ s = Series(date_range("20130101", periods=3, tz=tz))
+
+ if not isinstance(bins, int):
+ bins = box(bins)
+
+ result = cut(s, bins)
+ expected = Series(IntervalIndex([
+ Interval(Timestamp("2012-12-31 23:57:07.200000", tz=tz),
+ Timestamp("2013-01-01 16:00:00", tz=tz)),
+ Interval(Timestamp("2013-01-01 16:00:00", tz=tz),
+ Timestamp("2013-01-02 08:00:00", tz=tz)),
+ Interval(Timestamp("2013-01-02 08:00:00", tz=tz),
+ Timestamp("2013-01-03 00:00:00", tz=tz))])).astype(
+ CDT(ordered=True))
+ tm.assert_series_equal(result, expected)
+
+
+def test_datetime_nan_error():
+ msg = "bins must be of datetime64 dtype"
+
+ with pytest.raises(ValueError, match=msg):
+ cut(date_range("20130101", periods=3), bins=[0, 2, 4])
+
+
+def test_datetime_nan_mask():
+ result = cut(date_range("20130102", periods=5),
+ bins=date_range("20130101", periods=2))
+
+ mask = result.categories.isna()
+ tm.assert_numpy_array_equal(mask, np.array([False]))
+
+ mask = result.isna()
+ tm.assert_numpy_array_equal(mask, np.array([False, True, True,
+ True, True]))
+
+
+@pytest.mark.parametrize("tz", [None, "UTC", "US/Pacific"])
+def test_datetime_cut_roundtrip(tz):
+ # see gh-19891
+ ser = Series(date_range("20180101", periods=3, tz=tz))
+ result, result_bins = cut(ser, 2, retbins=True)
+
+ expected = cut(ser, result_bins)
+ tm.assert_series_equal(result, expected)
+
+ expected_bins = DatetimeIndex(["2017-12-31 23:57:07.200000",
+ "2018-01-02 00:00:00",
+ "2018-01-03 00:00:00"])
+ expected_bins = expected_bins.tz_localize(tz)
+ tm.assert_index_equal(result_bins, expected_bins)
+
+
+def test_timedelta_cut_roundtrip():
+ # see gh-19891
+ ser = Series(timedelta_range("1day", periods=3))
+ result, result_bins = cut(ser, 2, retbins=True)
+
+ expected = cut(ser, result_bins)
+ tm.assert_series_equal(result, expected)
+
+ expected_bins = TimedeltaIndex(["0 days 23:57:07.200000",
+ "2 days 00:00:00",
+ "3 days 00:00:00"])
+ tm.assert_index_equal(result_bins, expected_bins)
diff --git a/pandas/tests/reshape/test_qcut.py b/pandas/tests/reshape/test_qcut.py
new file mode 100644
index 0000000000000..997df7fd7aa4c
--- /dev/null
+++ b/pandas/tests/reshape/test_qcut.py
@@ -0,0 +1,199 @@
+import os
+
+import numpy as np
+import pytest
+
+from pandas.compat import zip
+
+from pandas import (
+ Categorical, DatetimeIndex, Interval, IntervalIndex, NaT, Series,
+ TimedeltaIndex, Timestamp, cut, date_range, isna, qcut, timedelta_range)
+from pandas.api.types import CategoricalDtype as CDT
+from pandas.core.algorithms import quantile
+import pandas.util.testing as tm
+
+from pandas.tseries.offsets import Day, Nano
+
+
+def test_qcut():
+ arr = np.random.randn(1000)
+
+ # We store the bins as Index that have been
+ # rounded, so comparisons are a bit tricky.
+ labels, bins = qcut(arr, 4, retbins=True)
+ ex_bins = quantile(arr, [0, .25, .5, .75, 1.])
+
+ result = labels.categories.left.values
+ assert np.allclose(result, ex_bins[:-1], atol=1e-2)
+
+ result = labels.categories.right.values
+ assert np.allclose(result, ex_bins[1:], atol=1e-2)
+
+ ex_levels = cut(arr, ex_bins, include_lowest=True)
+ tm.assert_categorical_equal(labels, ex_levels)
+
+
+def test_qcut_bounds():
+ arr = np.random.randn(1000)
+
+ factor = qcut(arr, 10, labels=False)
+ assert len(np.unique(factor)) == 10
+
+
+def test_qcut_specify_quantiles():
+ arr = np.random.randn(100)
+ factor = qcut(arr, [0, .25, .5, .75, 1.])
+
+ expected = qcut(arr, 4)
+ tm.assert_categorical_equal(factor, expected)
+
+
+def test_qcut_all_bins_same():
+ with pytest.raises(ValueError, match="edges.*unique"):
+ qcut([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 3)
+
+
+def test_qcut_include_lowest():
+ values = np.arange(10)
+ ii = qcut(values, 4)
+
+ ex_levels = IntervalIndex([Interval(-0.001, 2.25), Interval(2.25, 4.5),
+ Interval(4.5, 6.75), Interval(6.75, 9)])
+ tm.assert_index_equal(ii.categories, ex_levels)
+
+
+def test_qcut_nas():
+ arr = np.random.randn(100)
+ arr[:20] = np.nan
+
+ result = qcut(arr, 4)
+ assert isna(result[:20]).all()
+
+
+def test_qcut_index():
+ result = qcut([0, 2], 2)
+ intervals = [Interval(-0.001, 1), Interval(1, 2)]
+
+ expected = Categorical(intervals, ordered=True)
+ tm.assert_categorical_equal(result, expected)
+
+
+def test_qcut_binning_issues(datapath):
+ # see gh-1978, gh-1979
+ cut_file = datapath(os.path.join("reshape", "data", "cut_data.csv"))
+ arr = np.loadtxt(cut_file)
+ result = qcut(arr, 20)
+
+ starts = []
+ ends = []
+
+ for lev in np.unique(result):
+ s = lev.left
+ e = lev.right
+ assert s != e
+
+ starts.append(float(s))
+ ends.append(float(e))
+
+ for (sp, sn), (ep, en) in zip(zip(starts[:-1], starts[1:]),
+ zip(ends[:-1], ends[1:])):
+ assert sp < sn
+ assert ep < en
+ assert ep <= sn
+
+
+def test_qcut_return_intervals():
+ ser = Series([0, 1, 2, 3, 4, 5, 6, 7, 8])
+ res = qcut(ser, [0, 0.333, 0.666, 1])
+
+ exp_levels = np.array([Interval(-0.001, 2.664),
+ Interval(2.664, 5.328), Interval(5.328, 8)])
+ exp = Series(exp_levels.take([0, 0, 0, 1, 1, 1, 2, 2, 2])).astype(
+ CDT(ordered=True))
+ tm.assert_series_equal(res, exp)
+
+
+@pytest.mark.parametrize("kwargs,msg", [
+ (dict(duplicates="drop"), None),
+ (dict(), "Bin edges must be unique"),
+ (dict(duplicates="raise"), "Bin edges must be unique"),
+ (dict(duplicates="foo"), "invalid value for 'duplicates' parameter")
+])
+def test_qcut_duplicates_bin(kwargs, msg):
+ # see gh-7751
+ values = [0, 0, 0, 0, 1, 2, 3]
+
+ if msg is not None:
+ with pytest.raises(ValueError, match=msg):
+ qcut(values, 3, **kwargs)
+ else:
+ result = qcut(values, 3, **kwargs)
+ expected = IntervalIndex([Interval(-0.001, 1), Interval(1, 3)])
+ tm.assert_index_equal(result.categories, expected)
+
+
+@pytest.mark.parametrize("data,start,end", [
+ (9.0, 8.999, 9.0),
+ (0.0, -0.001, 0.0),
+ (-9.0, -9.001, -9.0),
+])
+@pytest.mark.parametrize("length", [1, 2])
+@pytest.mark.parametrize("labels", [None, False])
+def test_single_quantile(data, start, end, length, labels):
+ # see gh-15431
+ ser = Series([data] * length)
+ result = qcut(ser, 1, labels=labels)
+
+ if labels is None:
+ intervals = IntervalIndex([Interval(start, end)] *
+ length, closed="right")
+ expected = Series(intervals).astype(CDT(ordered=True))
+ else:
+ expected = Series([0] * length)
+
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("ser", [
+ Series(DatetimeIndex(["20180101", NaT, "20180103"])),
+ Series(TimedeltaIndex(["0 days", NaT, "2 days"]))],
+ ids=lambda x: str(x.dtype))
+def test_qcut_nat(ser):
+ # see gh-19768
+ intervals = IntervalIndex.from_tuples([
+ (ser[0] - Nano(), ser[2] - Day()),
+ np.nan, (ser[2] - Day(), ser[2])])
+ expected = Series(Categorical(intervals, ordered=True))
+
+ result = qcut(ser, 2)
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("bins", [3, np.linspace(0, 1, 4)])
+def test_datetime_tz_qcut(bins):
+ # see gh-19872
+ tz = "US/Eastern"
+ ser = Series(date_range("20130101", periods=3, tz=tz))
+
+ result = qcut(ser, bins)
+ expected = Series(IntervalIndex([
+ Interval(Timestamp("2012-12-31 23:59:59.999999999", tz=tz),
+ Timestamp("2013-01-01 16:00:00", tz=tz)),
+ Interval(Timestamp("2013-01-01 16:00:00", tz=tz),
+ Timestamp("2013-01-02 08:00:00", tz=tz)),
+ Interval(Timestamp("2013-01-02 08:00:00", tz=tz),
+ Timestamp("2013-01-03 00:00:00", tz=tz))])).astype(
+ CDT(ordered=True))
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("arg,expected_bins", [
+ [timedelta_range("1day", periods=3),
+ TimedeltaIndex(["1 days", "2 days", "3 days"])],
+ [date_range("20180101", periods=3),
+ DatetimeIndex(["2018-01-01", "2018-01-02", "2018-01-03"])]])
+def test_date_like_qcut_bins(arg, expected_bins):
+ # see gh-19891
+ ser = Series(arg)
+ result, result_bins = qcut(ser, 2, retbins=True)
+ tm.assert_index_equal(result_bins, expected_bins)
diff --git a/pandas/tests/reshape/test_tile.py b/pandas/tests/reshape/test_tile.py
deleted file mode 100644
index 19f1a9a8b65c7..0000000000000
--- a/pandas/tests/reshape/test_tile.py
+++ /dev/null
@@ -1,651 +0,0 @@
-import os
-import pytest
-
-import numpy as np
-from pandas.compat import zip
-
-import pandas as pd
-from pandas import (DataFrame, Series, isna, to_datetime, DatetimeIndex, Index,
- Timestamp, Interval, IntervalIndex, Categorical,
- cut, qcut, date_range, timedelta_range, NaT,
- TimedeltaIndex)
-from pandas.tseries.offsets import Nano, Day
-import pandas.util.testing as tm
-from pandas.api.types import CategoricalDtype as CDT
-
-from pandas.core.algorithms import quantile
-import pandas.core.reshape.tile as tmod
-
-
-class TestCut(object):
-
- def test_simple(self):
- data = np.ones(5, dtype='int64')
- result = cut(data, 4, labels=False)
- expected = np.array([1, 1, 1, 1, 1])
- tm.assert_numpy_array_equal(result, expected,
- check_dtype=False)
-
- def test_bins(self):
- data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1])
- result, bins = cut(data, 3, retbins=True)
-
- intervals = IntervalIndex.from_breaks(bins.round(3))
- intervals = intervals.take([0, 0, 0, 1, 2, 0])
- expected = Categorical(intervals, ordered=True)
- tm.assert_categorical_equal(result, expected)
- tm.assert_almost_equal(bins, np.array([0.1905, 3.36666667,
- 6.53333333, 9.7]))
-
- def test_right(self):
- data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1, 2.575])
- result, bins = cut(data, 4, right=True, retbins=True)
- intervals = IntervalIndex.from_breaks(bins.round(3))
- expected = Categorical(intervals, ordered=True)
- expected = expected.take([0, 0, 0, 2, 3, 0, 0])
- tm.assert_categorical_equal(result, expected)
- tm.assert_almost_equal(bins, np.array([0.1905, 2.575, 4.95,
- 7.325, 9.7]))
-
- def test_noright(self):
- data = np.array([.2, 1.4, 2.5, 6.2, 9.7, 2.1, 2.575])
- result, bins = cut(data, 4, right=False, retbins=True)
- intervals = IntervalIndex.from_breaks(bins.round(3), closed='left')
- intervals = intervals.take([0, 0, 0, 2, 3, 0, 1])
- expected = Categorical(intervals, ordered=True)
- tm.assert_categorical_equal(result, expected)
- tm.assert_almost_equal(bins, np.array([0.2, 2.575, 4.95,
- 7.325, 9.7095]))
-
- def test_arraylike(self):
- data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
- result, bins = cut(data, 3, retbins=True)
- intervals = IntervalIndex.from_breaks(bins.round(3))
- intervals = intervals.take([0, 0, 0, 1, 2, 0])
- expected = Categorical(intervals, ordered=True)
- tm.assert_categorical_equal(result, expected)
- tm.assert_almost_equal(bins, np.array([0.1905, 3.36666667,
- 6.53333333, 9.7]))
-
- def test_bins_from_intervalindex(self):
- c = cut(range(5), 3)
- expected = c
- result = cut(range(5), bins=expected.categories)
- tm.assert_categorical_equal(result, expected)
-
- expected = Categorical.from_codes(np.append(c.codes, -1),
- categories=c.categories,
- ordered=True)
- result = cut(range(6), bins=expected.categories)
- tm.assert_categorical_equal(result, expected)
-
- # doc example
- # make sure we preserve the bins
- ages = np.array([10, 15, 13, 12, 23, 25, 28, 59, 60])
- c = cut(ages, bins=[0, 18, 35, 70])
- expected = IntervalIndex.from_tuples([(0, 18), (18, 35), (35, 70)])
- tm.assert_index_equal(c.categories, expected)
-
- result = cut([25, 20, 50], bins=c.categories)
- tm.assert_index_equal(result.categories, expected)
- tm.assert_numpy_array_equal(result.codes,
- np.array([1, 1, 2], dtype='int8'))
-
- def test_bins_not_overlapping_from_intervalindex(self):
- # see gh-23980
- msg = "Overlapping IntervalIndex is not accepted"
- ii = IntervalIndex.from_tuples([(0, 10), (2, 12), (4, 14)])
-
- with pytest.raises(ValueError, match=msg):
- cut([5, 6], bins=ii)
-
- def test_bins_not_monotonic(self):
- data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
- pytest.raises(ValueError, cut, data, [0.1, 1.5, 1, 10])
-
- def test_wrong_num_labels(self):
- data = [.2, 1.4, 2.5, 6.2, 9.7, 2.1]
- pytest.raises(ValueError, cut, data, [0, 1, 10],
- labels=['foo', 'bar', 'baz'])
-
- def test_cut_corner(self):
- # h3h
- pytest.raises(ValueError, cut, [], 2)
-
- pytest.raises(ValueError, cut, [1, 2, 3], 0.5)
-
- @pytest.mark.parametrize('arg', [2, np.eye(2), DataFrame(np.eye(2))])
- @pytest.mark.parametrize('cut_func', [cut, qcut])
- def test_cut_not_1d_arg(self, arg, cut_func):
- with pytest.raises(ValueError):
- cut_func(arg, 2)
-
- def test_cut_out_of_range_more(self):
- # #1511
- s = Series([0, -1, 0, 1, -3], name='x')
- ind = cut(s, [0, 1], labels=False)
- exp = Series([np.nan, np.nan, np.nan, 0, np.nan], name='x')
- tm.assert_series_equal(ind, exp)
-
- def test_labels(self):
- arr = np.tile(np.arange(0, 1.01, 0.1), 4)
-
- result, bins = cut(arr, 4, retbins=True)
- ex_levels = IntervalIndex.from_breaks([-1e-3, 0.25, 0.5, 0.75, 1])
- tm.assert_index_equal(result.categories, ex_levels)
-
- result, bins = cut(arr, 4, retbins=True, right=False)
- ex_levels = IntervalIndex.from_breaks([0, 0.25, 0.5, 0.75, 1 + 1e-3],
- closed='left')
- tm.assert_index_equal(result.categories, ex_levels)
-
- def test_cut_pass_series_name_to_factor(self):
- s = Series(np.random.randn(100), name='foo')
-
- factor = cut(s, 4)
- assert factor.name == 'foo'
-
- def test_label_precision(self):
- arr = np.arange(0, 0.73, 0.01)
-
- result = cut(arr, 4, precision=2)
- ex_levels = IntervalIndex.from_breaks([-0.00072, 0.18, 0.36,
- 0.54, 0.72])
- tm.assert_index_equal(result.categories, ex_levels)
-
- def test_na_handling(self):
- arr = np.arange(0, 0.75, 0.01)
- arr[::3] = np.nan
-
- result = cut(arr, 4)
-
- result_arr = np.asarray(result)
-
- ex_arr = np.where(isna(arr), np.nan, result_arr)
-
- tm.assert_almost_equal(result_arr, ex_arr)
-
- result = cut(arr, 4, labels=False)
- ex_result = np.where(isna(arr), np.nan, result)
- tm.assert_almost_equal(result, ex_result)
-
- def test_inf_handling(self):
- data = np.arange(6)
- data_ser = Series(data, dtype='int64')
-
- bins = [-np.inf, 2, 4, np.inf]
- result = cut(data, bins)
- result_ser = cut(data_ser, bins)
-
- ex_uniques = IntervalIndex.from_breaks(bins)
- tm.assert_index_equal(result.categories, ex_uniques)
- assert result[5] == Interval(4, np.inf)
- assert result[0] == Interval(-np.inf, 2)
- assert result_ser[5] == Interval(4, np.inf)
- assert result_ser[0] == Interval(-np.inf, 2)
-
- def test_qcut(self):
- arr = np.random.randn(1000)
-
- # We store the bins as Index that have been rounded
- # to comparisons are a bit tricky.
- labels, bins = qcut(arr, 4, retbins=True)
- ex_bins = quantile(arr, [0, .25, .5, .75, 1.])
- result = labels.categories.left.values
- assert np.allclose(result, ex_bins[:-1], atol=1e-2)
- result = labels.categories.right.values
- assert np.allclose(result, ex_bins[1:], atol=1e-2)
-
- ex_levels = cut(arr, ex_bins, include_lowest=True)
- tm.assert_categorical_equal(labels, ex_levels)
-
- def test_qcut_bounds(self):
- arr = np.random.randn(1000)
-
- factor = qcut(arr, 10, labels=False)
- assert len(np.unique(factor)) == 10
-
- def test_qcut_specify_quantiles(self):
- arr = np.random.randn(100)
-
- factor = qcut(arr, [0, .25, .5, .75, 1.])
- expected = qcut(arr, 4)
- tm.assert_categorical_equal(factor, expected)
-
- def test_qcut_all_bins_same(self):
- with pytest.raises(ValueError, match="edges.*unique"):
- qcut([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 3)
-
- def test_cut_out_of_bounds(self):
- arr = np.random.randn(100)
-
- result = cut(arr, [-1, 0, 1])
-
- mask = isna(result)
- ex_mask = (arr < -1) | (arr > 1)
- tm.assert_numpy_array_equal(mask, ex_mask)
-
- def test_cut_pass_labels(self):
- arr = [50, 5, 10, 15, 20, 30, 70]
- bins = [0, 25, 50, 100]
- labels = ['Small', 'Medium', 'Large']
-
- result = cut(arr, bins, labels=labels)
- exp = Categorical(['Medium'] + 4 * ['Small'] + ['Medium', 'Large'],
- categories=labels,
- ordered=True)
- tm.assert_categorical_equal(result, exp)
-
- result = cut(arr, bins, labels=Categorical.from_codes([0, 1, 2],
- labels))
- exp = Categorical.from_codes([1] + 4 * [0] + [1, 2], labels)
- tm.assert_categorical_equal(result, exp)
-
- # issue 16459
- labels = ['Good', 'Medium', 'Bad']
- result = cut(arr, 3, labels=labels)
- exp = cut(arr, 3, labels=Categorical(labels, categories=labels,
- ordered=True))
- tm.assert_categorical_equal(result, exp)
-
- def test_qcut_include_lowest(self):
- values = np.arange(10)
-
- ii = qcut(values, 4)
-
- ex_levels = IntervalIndex(
- [Interval(-0.001, 2.25),
- Interval(2.25, 4.5),
- Interval(4.5, 6.75),
- Interval(6.75, 9)])
- tm.assert_index_equal(ii.categories, ex_levels)
-
- def test_qcut_nas(self):
- arr = np.random.randn(100)
- arr[:20] = np.nan
-
- result = qcut(arr, 4)
- assert isna(result[:20]).all()
-
- def test_qcut_index(self):
- result = qcut([0, 2], 2)
- intervals = [Interval(-0.001, 1), Interval(1, 2)]
- expected = Categorical(intervals, ordered=True)
- tm.assert_categorical_equal(result, expected)
-
- def test_round_frac(self):
- # it works
- result = cut(np.arange(11.), 2)
-
- result = cut(np.arange(11.) / 1e10, 2)
-
- # #1979, negative numbers
-
- result = tmod._round_frac(-117.9998, precision=3)
- assert result == -118
- result = tmod._round_frac(117.9998, precision=3)
- assert result == 118
-
- result = tmod._round_frac(117.9998, precision=2)
- assert result == 118
- result = tmod._round_frac(0.000123456, precision=2)
- assert result == 0.00012
-
- def test_qcut_binning_issues(self, datapath):
- # #1978, 1979
- cut_file = datapath(os.path.join('reshape', 'data', 'cut_data.csv'))
- arr = np.loadtxt(cut_file)
-
- result = qcut(arr, 20)
-
- starts = []
- ends = []
- for lev in np.unique(result):
- s = lev.left
- e = lev.right
- assert s != e
-
- starts.append(float(s))
- ends.append(float(e))
-
- for (sp, sn), (ep, en) in zip(zip(starts[:-1], starts[1:]),
- zip(ends[:-1], ends[1:])):
- assert sp < sn
- assert ep < en
- assert ep <= sn
-
- def test_cut_return_intervals(self):
- s = Series([0, 1, 2, 3, 4, 5, 6, 7, 8])
- res = cut(s, 3)
- exp_bins = np.linspace(0, 8, num=4).round(3)
- exp_bins[0] -= 0.008
- exp = Series(IntervalIndex.from_breaks(exp_bins, closed='right').take(
- [0, 0, 0, 1, 1, 1, 2, 2, 2])).astype(CDT(ordered=True))
- tm.assert_series_equal(res, exp)
-
- def test_qcut_return_intervals(self):
- s = Series([0, 1, 2, 3, 4, 5, 6, 7, 8])
- res = qcut(s, [0, 0.333, 0.666, 1])
- exp_levels = np.array([Interval(-0.001, 2.664),
- Interval(2.664, 5.328), Interval(5.328, 8)])
- exp = Series(exp_levels.take([0, 0, 0, 1, 1, 1, 2, 2, 2])).astype(
- CDT(ordered=True))
- tm.assert_series_equal(res, exp)
-
- def test_series_retbins(self):
- # GH 8589
- s = Series(np.arange(4))
- result, bins = cut(s, 2, retbins=True)
- expected = Series(IntervalIndex.from_breaks(
- [-0.003, 1.5, 3], closed='right').repeat(2)).astype(
- CDT(ordered=True))
- tm.assert_series_equal(result, expected)
-
- result, bins = qcut(s, 2, retbins=True)
- expected = Series(IntervalIndex.from_breaks(
- [-0.001, 1.5, 3], closed='right').repeat(2)).astype(
- CDT(ordered=True))
- tm.assert_series_equal(result, expected)
-
- def test_cut_duplicates_bin(self):
- # issue 20947
- values = Series(np.array([1, 3, 5, 7, 9]),
- index=["a", "b", "c", "d", "e"])
- bins = [0, 2, 4, 6, 10, 10]
- result = cut(values, bins, duplicates='drop')
- expected = cut(values, pd.unique(bins))
- tm.assert_series_equal(result, expected)
-
- pytest.raises(ValueError, cut, values, bins)
- pytest.raises(ValueError, cut, values, bins, duplicates='raise')
-
- # invalid
- pytest.raises(ValueError, cut, values, bins, duplicates='foo')
-
- def test_qcut_duplicates_bin(self):
- # GH 7751
- values = [0, 0, 0, 0, 1, 2, 3]
- expected = IntervalIndex([Interval(-0.001, 1), Interval(1, 3)])
-
- result = qcut(values, 3, duplicates='drop')
- tm.assert_index_equal(result.categories, expected)
-
- pytest.raises(ValueError, qcut, values, 3)
- pytest.raises(ValueError, qcut, values, 3, duplicates='raise')
-
- # invalid
- pytest.raises(ValueError, qcut, values, 3, duplicates='foo')
-
- def test_single_quantile(self):
- # issue 15431
- expected = Series([0, 0])
-
- s = Series([9., 9.])
- result = qcut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
- result = qcut(s, 1)
- intervals = IntervalIndex([Interval(8.999, 9.0),
- Interval(8.999, 9.0)], closed='right')
- expected = Series(intervals).astype(CDT(ordered=True))
- tm.assert_series_equal(result, expected)
-
- s = Series([-9., -9.])
- expected = Series([0, 0])
- result = qcut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
- result = qcut(s, 1)
- intervals = IntervalIndex([Interval(-9.001, -9.0),
- Interval(-9.001, -9.0)], closed='right')
- expected = Series(intervals).astype(CDT(ordered=True))
- tm.assert_series_equal(result, expected)
-
- s = Series([0., 0.])
- expected = Series([0, 0])
- result = qcut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
- result = qcut(s, 1)
- intervals = IntervalIndex([Interval(-0.001, 0.0),
- Interval(-0.001, 0.0)], closed='right')
- expected = Series(intervals).astype(CDT(ordered=True))
- tm.assert_series_equal(result, expected)
-
- s = Series([9])
- expected = Series([0])
- result = qcut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
- result = qcut(s, 1)
- intervals = IntervalIndex([Interval(8.999, 9.0)], closed='right')
- expected = Series(intervals).astype(CDT(ordered=True))
- tm.assert_series_equal(result, expected)
-
- s = Series([-9])
- expected = Series([0])
- result = qcut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
- result = qcut(s, 1)
- intervals = IntervalIndex([Interval(-9.001, -9.0)], closed='right')
- expected = Series(intervals).astype(CDT(ordered=True))
- tm.assert_series_equal(result, expected)
-
- s = Series([0])
- expected = Series([0])
- result = qcut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
- result = qcut(s, 1)
- intervals = IntervalIndex([Interval(-0.001, 0.0)], closed='right')
- expected = Series(intervals).astype(CDT(ordered=True))
- tm.assert_series_equal(result, expected)
-
- def test_single_bin(self):
- # issue 14652
- expected = Series([0, 0])
-
- s = Series([9., 9.])
- result = cut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
-
- s = Series([-9., -9.])
- result = cut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
-
- expected = Series([0])
-
- s = Series([9])
- result = cut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
-
- s = Series([-9])
- result = cut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
-
- # issue 15428
- expected = Series([0, 0])
-
- s = Series([0., 0.])
- result = cut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
-
- expected = Series([0])
-
- s = Series([0])
- result = cut(s, 1, labels=False)
- tm.assert_series_equal(result, expected)
-
- @pytest.mark.parametrize(
- "array_1_writeable, array_2_writeable",
- [(True, True), (True, False), (False, False)])
- def test_cut_read_only(self, array_1_writeable, array_2_writeable):
- # issue 18773
- array_1 = np.arange(0, 100, 10)
- array_1.flags.writeable = array_1_writeable
-
- array_2 = np.arange(0, 100, 10)
- array_2.flags.writeable = array_2_writeable
-
- hundred_elements = np.arange(100)
-
- tm.assert_categorical_equal(cut(hundred_elements, array_1),
- cut(hundred_elements, array_2))
-
-
-class TestDatelike(object):
-
- @pytest.mark.parametrize('s', [
- Series(DatetimeIndex(['20180101', NaT, '20180103'])),
- Series(TimedeltaIndex(['0 days', NaT, '2 days']))],
- ids=lambda x: str(x.dtype))
- def test_qcut_nat(self, s):
- # GH 19768
- intervals = IntervalIndex.from_tuples(
- [(s[0] - Nano(), s[2] - Day()), np.nan, (s[2] - Day(), s[2])])
- expected = Series(Categorical(intervals, ordered=True))
- result = qcut(s, 2)
- tm.assert_series_equal(result, expected)
-
- def test_datetime_cut(self):
- # GH 14714
- # testing for time data to be present as series
- data = to_datetime(Series(['2013-01-01', '2013-01-02', '2013-01-03']))
-
- result, bins = cut(data, 3, retbins=True)
- expected = (
- Series(IntervalIndex([
- Interval(Timestamp('2012-12-31 23:57:07.200000'),
- Timestamp('2013-01-01 16:00:00')),
- Interval(Timestamp('2013-01-01 16:00:00'),
- Timestamp('2013-01-02 08:00:00')),
- Interval(Timestamp('2013-01-02 08:00:00'),
- Timestamp('2013-01-03 00:00:00'))]))
- .astype(CDT(ordered=True)))
-
- tm.assert_series_equal(result, expected)
-
- # testing for time data to be present as list
- data = [np.datetime64('2013-01-01'), np.datetime64('2013-01-02'),
- np.datetime64('2013-01-03')]
- result, bins = cut(data, 3, retbins=True)
- tm.assert_series_equal(Series(result), expected)
-
- # testing for time data to be present as ndarray
- data = np.array([np.datetime64('2013-01-01'),
- np.datetime64('2013-01-02'),
- np.datetime64('2013-01-03')])
- result, bins = cut(data, 3, retbins=True)
- tm.assert_series_equal(Series(result), expected)
-
- # testing for time data to be present as datetime index
- data = DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03'])
- result, bins = cut(data, 3, retbins=True)
- tm.assert_series_equal(Series(result), expected)
-
- @pytest.mark.parametrize('bins', [
- 3, [Timestamp('2013-01-01 04:57:07.200000'),
- Timestamp('2013-01-01 21:00:00'),
- Timestamp('2013-01-02 13:00:00'),
- Timestamp('2013-01-03 05:00:00')]])
- @pytest.mark.parametrize('box', [list, np.array, Index, Series])
- def test_datetimetz_cut(self, bins, box):
- # GH 19872
- tz = 'US/Eastern'
- s = Series(date_range('20130101', periods=3, tz=tz))
- if not isinstance(bins, int):
- bins = box(bins)
- result = cut(s, bins)
- expected = (
- Series(IntervalIndex([
- Interval(Timestamp('2012-12-31 23:57:07.200000', tz=tz),
- Timestamp('2013-01-01 16:00:00', tz=tz)),
- Interval(Timestamp('2013-01-01 16:00:00', tz=tz),
- Timestamp('2013-01-02 08:00:00', tz=tz)),
- Interval(Timestamp('2013-01-02 08:00:00', tz=tz),
- Timestamp('2013-01-03 00:00:00', tz=tz))]))
- .astype(CDT(ordered=True)))
- tm.assert_series_equal(result, expected)
-
- @pytest.mark.parametrize('bins', [3, np.linspace(0, 1, 4)])
- def test_datetimetz_qcut(self, bins):
- # GH 19872
- tz = 'US/Eastern'
- s = Series(date_range('20130101', periods=3, tz=tz))
- result = qcut(s, bins)
- expected = (
- Series(IntervalIndex([
- Interval(Timestamp('2012-12-31 23:59:59.999999999', tz=tz),
- Timestamp('2013-01-01 16:00:00', tz=tz)),
- Interval(Timestamp('2013-01-01 16:00:00', tz=tz),
- Timestamp('2013-01-02 08:00:00', tz=tz)),
- Interval(Timestamp('2013-01-02 08:00:00', tz=tz),
- Timestamp('2013-01-03 00:00:00', tz=tz))]))
- .astype(CDT(ordered=True)))
- tm.assert_series_equal(result, expected)
-
- def test_datetime_bin(self):
- data = [np.datetime64('2012-12-13'), np.datetime64('2012-12-15')]
- bin_data = ['2012-12-12', '2012-12-14', '2012-12-16']
- expected = (
- Series(IntervalIndex([
- Interval(Timestamp(bin_data[0]), Timestamp(bin_data[1])),
- Interval(Timestamp(bin_data[1]), Timestamp(bin_data[2]))]))
- .astype(CDT(ordered=True)))
-
- for conv in [Timestamp, Timestamp, np.datetime64]:
- bins = [conv(v) for v in bin_data]
- result = cut(data, bins=bins)
- tm.assert_series_equal(Series(result), expected)
-
- bin_pydatetime = [Timestamp(v).to_pydatetime() for v in bin_data]
- result = cut(data, bins=bin_pydatetime)
- tm.assert_series_equal(Series(result), expected)
-
- bins = to_datetime(bin_data)
- result = cut(data, bins=bin_pydatetime)
- tm.assert_series_equal(Series(result), expected)
-
- def test_datetime_nan(self):
-
- def f():
- cut(date_range('20130101', periods=3), bins=[0, 2, 4])
- pytest.raises(ValueError, f)
-
- result = cut(date_range('20130102', periods=5),
- bins=date_range('20130101', periods=2))
- mask = result.categories.isna()
- tm.assert_numpy_array_equal(mask, np.array([False]))
- mask = result.isna()
- tm.assert_numpy_array_equal(
- mask, np.array([False, True, True, True, True]))
-
- @pytest.mark.parametrize('tz', [None, 'UTC', 'US/Pacific'])
- def test_datetime_cut_roundtrip(self, tz):
- # GH 19891
- s = Series(date_range('20180101', periods=3, tz=tz))
- result, result_bins = cut(s, 2, retbins=True)
- expected = cut(s, result_bins)
- tm.assert_series_equal(result, expected)
- expected_bins = DatetimeIndex(['2017-12-31 23:57:07.200000',
- '2018-01-02 00:00:00',
- '2018-01-03 00:00:00'])
- expected_bins = expected_bins.tz_localize(tz)
- tm.assert_index_equal(result_bins, expected_bins)
-
- def test_timedelta_cut_roundtrip(self):
- # GH 19891
- s = Series(timedelta_range('1day', periods=3))
- result, result_bins = cut(s, 2, retbins=True)
- expected = cut(s, result_bins)
- tm.assert_series_equal(result, expected)
- expected_bins = TimedeltaIndex(['0 days 23:57:07.200000',
- '2 days 00:00:00',
- '3 days 00:00:00'])
- tm.assert_index_equal(result_bins, expected_bins)
-
- @pytest.mark.parametrize('arg, expected_bins', [
- [timedelta_range('1day', periods=3),
- TimedeltaIndex(['1 days', '2 days', '3 days'])],
- [date_range('20180101', periods=3),
- DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03'])]])
- def test_datelike_qcut_bins(self, arg, expected_bins):
- # GH 19891
- s = Series(arg)
- result, result_bins = qcut(s, 2, retbins=True)
- tm.assert_index_equal(result_bins, expected_bins)
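The removed tests above exercise pandas' `cut` and `qcut` binning APIs. As a quick orientation (a minimal sketch for context, not part of this PR's diff), the core behavior they cover is:

```python
import pandas as pd

# cut: equal-width bins over the value range (mirrors test_arraylike above)
data = [0.2, 1.4, 2.5, 6.2, 9.7, 2.1]
binned, bins = pd.cut(data, 3, retbins=True)
# binned is a Categorical of 3 interval categories; bins holds the 4 edges

# qcut: equal-frequency bins based on sample quantiles
q_binned = pd.qcut(data, 2)
# with 6 values and 2 quantile bins, each bin receives 3 observations
```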
| https://api.github.com/repos/pandas-dev/pandas/pulls/24107 | 2018-12-05T06:23:30Z | 2018-12-06T12:18:44Z | 2018-12-06T12:18:44Z | 2018-12-07T02:42:13Z | |
DOC: Corrects 'reindex_axis' docstring | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 0eb0e14c9054e..1e26c3f45f660 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4359,49 +4359,49 @@ def _reindex_multi(self, axes, copy, fill_value):
return NotImplemented
_shared_docs['reindex_axis'] = ("""
- Conform input object to new index
- with optional filling logic, placing NA/NaN in locations having
- no value in the previous index. A new object is produced unless
- the new index is equivalent to the current one and copy=False.
+ Conform input object to new index.
+
+ .. deprecated:: 0.21.0
+ Use `reindex` instead.
+
+ By default, places NaN in locations having no value in the
+ previous index. A new object is produced unless the new index
+ is equivalent to the current one and copy=False.
Parameters
----------
labels : array-like
New labels / index to conform to. Preferably an Index object to
- avoid duplicating data
+ avoid duplicating data.
axis : %(axes_single_arg)s
+ Indicate whether to use rows or columns.
method : {None, 'backfill'/'bfill', 'pad'/'ffill', 'nearest'}, optional
Method to use for filling holes in reindexed DataFrame:
- * default: don't fill gaps
+ * default: don't fill gaps.
* pad / ffill: propagate last valid observation forward to next
- valid
- * backfill / bfill: use next valid observation to fill gap
- * nearest: use nearest valid observations to fill gap
+ valid.
+ * backfill / bfill: use next valid observation to fill gap.
+ * nearest: use nearest valid observations to fill gap.
- copy : boolean, default True
- Return a new object, even if the passed indexes are the same
- level : int or name
+ level : int or str
Broadcast across a level, matching Index values on the
- passed MultiIndex level
- limit : int, default None
- Maximum number of consecutive elements to forward or backward fill
- tolerance : optional
- Maximum distance between original and new labels for inexact
- matches. The values of the index at the matching locations most
- satisfy the equation ``abs(index[indexer] - target) <= tolerance``.
-
- Tolerance may be a scalar value, which applies the same tolerance
- to all values, or list-like, which applies variable tolerance per
- element. List-like includes list, tuple, array, Series, and must be
- the same size as the index and its dtype must exactly match the
- index's type.
+ passed MultiIndex level.
+ copy : bool, default True
+ Return a new object, even if the passed indexes are the same.
+ limit : int, optional
+ Maximum number of consecutive elements to forward or backward fill.
+ fill_value : float, default NaN
+ Value used to fill in locations having no value in the previous
+ index.
.. versionadded:: 0.21.0 (list-like tolerance)
Returns
-------
%(klass)s
+ Returns a new DataFrame object with new indices, unless the new
+ index is equivalent to the current one and copy=False.
See Also
--------
@@ -4412,7 +4412,17 @@ def _reindex_multi(self, axes, copy, fill_value):
Examples
--------
- >>> df.reindex_axis(['A', 'B', 'C'], axis=1)
+ >>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]},
+ ... index=['dog', 'hawk'])
+ >>> df
+ num_legs num_wings
+ dog 4 0
+ hawk 2 2
+ >>> df.reindex_axis(['num_wings', 'num_legs', 'num_heads'],
+ ... axis='columns')
+ num_wings num_legs num_heads
+ dog 0 4 NaN
+ hawk 2 2 NaN
""")
@Appender(_shared_docs['reindex_axis'] % _shared_doc_kwargs)
The docstring documented a 'tolerance' parameter that is not present in the method's signature. This has been removed.
The docstring did not describe the 'fill_value' parameter. This has been added, mostly reusing the wording from the summary part of the existing docstring.
- [x] closes #23960
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
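Since `reindex_axis` is deprecated (and removed in later pandas versions), the modern `reindex` equivalent of the docstring's new example looks like this — a sketch for context, not part of the PR:

```python
import pandas as pd

df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]},
                  index=['dog', 'hawk'])

# equivalent of the deprecated df.reindex_axis([...], axis='columns')
result = df.reindex(['num_wings', 'num_legs', 'num_heads'], axis='columns')
# 'num_heads' has no source column, so it is filled with NaN
```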
| https://api.github.com/repos/pandas-dev/pandas/pulls/24105 | 2018-12-05T00:44:51Z | 2018-12-09T08:02:20Z | 2018-12-09T08:02:19Z | 2018-12-09T08:03:21Z |