| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
BUG: infer-objects raising for bytes Series | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 6aac736abf10a..0999a9af3f72a 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1969,6 +1969,9 @@ def convert(
attempt to cast any object types to better types return a copy of
the block (if copy = True) by definition we ARE an ObjectBlock!!!!!
"""
+ if self.dtype != object:
+ return [self]
+
values = self.values
if values.ndim == 2:
# maybe_split ensures we only get here with values.shape[0] == 1,
diff --git a/pandas/tests/series/methods/test_infer_objects.py b/pandas/tests/series/methods/test_infer_objects.py
index 4417bdb8e1687..25bff46d682be 100644
--- a/pandas/tests/series/methods/test_infer_objects.py
+++ b/pandas/tests/series/methods/test_infer_objects.py
@@ -1,6 +1,9 @@
import numpy as np
-from pandas import interval_range
+from pandas import (
+ Series,
+ interval_range,
+)
import pandas._testing as tm
@@ -44,3 +47,10 @@ def test_infer_objects_interval(self, index_or_series):
result = obj.astype(object).infer_objects()
tm.assert_equal(result, obj)
+
+ def test_infer_objects_bytes(self):
+ # GH#49650
+ ser = Series([b"a"], dtype="bytes")
+ expected = ser.copy()
+ result = ser.infer_objects()
+ tm.assert_series_equal(result, expected)
| - [x] closes #49650 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
cc @jbrockmendel | https://api.github.com/repos/pandas-dev/pandas/pulls/50067 | 2022-12-05T15:31:36Z | 2022-12-08T00:38:22Z | 2022-12-08T00:38:22Z | 2022-12-08T13:19:12Z |
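The fix above short-circuits `convert` for non-object blocks, so calling `infer_objects` on a fixed-width bytes Series becomes a no-op instead of raising. A minimal sketch of the behavior (assumes a pandas version that includes the GH#49650 fix):

```python
import pandas as pd

# A Series backed by a fixed-width NumPy bytes dtype ("|S1"), not object.
ser = pd.Series([b"a"], dtype="bytes")

# With the fix, non-object blocks are returned unchanged by convert(),
# so infer_objects() simply returns an equivalent Series.
result = ser.infer_objects()
assert result.dtype == ser.dtype
```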
BUG: loc.setitem raising when expanding empty frame with array value | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index d6e0bb2ae0830..279d65c68bb91 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -752,6 +752,7 @@ Indexing
- Bug in :meth:`DataFrame.loc` coercing dtypes when setting values with a list indexer (:issue:`49159`)
- Bug in :meth:`DataFrame.loc` raising ``ValueError`` with ``bool`` indexer and :class:`MultiIndex` (:issue:`47687`)
- Bug in :meth:`DataFrame.__setitem__` raising ``ValueError`` when right hand side is :class:`DataFrame` with :class:`MultiIndex` columns (:issue:`49121`)
+- Bug in :meth:`DataFrame.loc` when expanding an empty DataFrame and setting an array-like value (:issue:`49972`)
- Bug in :meth:`DataFrame.reindex` casting dtype to ``object`` when :class:`DataFrame` has single extension array column when re-indexing ``columns`` and ``index`` (:issue:`48190`)
- Bug in :func:`~DataFrame.describe` when formatting percentiles in the resulting index showed more decimals than needed (:issue:`46362`)
- Bug in :meth:`DataFrame.compare` does not recognize differences when comparing ``NA`` with value in nullable dtypes (:issue:`48939`)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 070ec7c7a2e4a..44106fc223dca 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1741,7 +1741,9 @@ def _setitem_with_indexer(self, indexer, value, name: str = "iloc"):
arr = extract_array(value, extract_numpy=True)
taker = -1 * np.ones(len(self.obj), dtype=np.intp)
empty_value = algos.take_nd(arr, taker)
- if not isinstance(value, ABCSeries):
+ if not isinstance(value, ABCSeries) and not isinstance(
+ indexer[0], dict
+ ):
# if not Series (in which case we need to align),
# we can short-circuit
empty_value[indexer[0]] = arr
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index 81a5e3d9947be..ab2e7affdfedf 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -1737,6 +1737,13 @@ def test_getitem_preserve_object_index_with_dates(self, indexer):
assert ser.index.dtype == object
+ def test_loc_setitem_empty_frame_expansion(self):
+ # GH#49972
+ result = DataFrame()
+ result.loc[0, 0] = np.asarray([0])
+ expected = DataFrame({0: [0.0]})
+ tm.assert_frame_equal(result, expected)
+
def test_loc_on_multiindex_one_level(self):
# GH#45779
df = DataFrame(
| - [x] closes #49972 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50065 | 2022-12-05T11:46:29Z | 2023-02-22T13:52:25Z | null | 2023-02-22T13:52:26Z |
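Note this PR was never merged (`merged_at` is null), so the array-value case may still raise depending on the pandas version. A sketch of the surrounding behavior — scalar expansion of an empty frame works in released pandas, while the reported bug is the same expansion with an array-like value:

```python
import numpy as np
import pandas as pd

# Expanding an empty frame with a scalar via .loc works:
df = pd.DataFrame()
df.loc[0, 0] = 1
assert df.shape == (1, 1)

# The reported bug (GH#49972) is the same expansion with an
# array-like right-hand side, which raised instead of producing
# a 1x1 frame; left commented out since the fix was not merged:
# df.loc[0, 0] = np.asarray([0])
```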
DEPR: Enforce DataFrameGroupBy.__iter__ returning tuples of length 1 | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index b70dcb0ae99fa..b1400be59b3a1 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -596,7 +596,7 @@ Removal of prior version deprecations/changes
- Enforced deprecation of silently dropping nuisance columns in groupby and resample operations when ``numeric_only=False`` (:issue:`41475`)
- Changed default of ``numeric_only`` in various :class:`.DataFrameGroupBy` methods; all methods now default to ``numeric_only=False`` (:issue:`46072`)
- Changed default of ``numeric_only`` to ``False`` in :class:`.Resampler` methods (:issue:`47177`)
--
+- When providing a list of columns of length one to :meth:`DataFrame.groupby`, the keys that are returned by iterating over the resulting :class:`DataFrameGroupBy` object will now be tuples of length one (:issue:`47761`)
.. ---------------------------------------------------------------------------
.. _whatsnew_200.performance:
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 659ca228bdcb0..c7fb40e855ef7 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -70,7 +70,6 @@ class providing the base-class of operations.
cache_readonly,
doc,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.cast import ensure_dtype_can_hold_na
from pandas.core.dtypes.common import (
@@ -832,19 +831,11 @@ def __iter__(self) -> Iterator[tuple[Hashable, NDFrameT]]:
for each group
"""
keys = self.keys
+ result = self.grouper.get_iterator(self._selected_obj, axis=self.axis)
if isinstance(keys, list) and len(keys) == 1:
- warnings.warn(
- (
- "In a future version of pandas, a length 1 "
- "tuple will be returned when iterating over a "
- "groupby with a grouper equal to a list of "
- "length 1. Don't supply a list with a single grouper "
- "to avoid this warning."
- ),
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.grouper.get_iterator(self._selected_obj, axis=self.axis)
+ # GH#42795 - when keys is a list, return tuples even when length is 1
+ result = (((key,), group) for key, group in result)
+ return result
# To track operations that expand dimensions, like ohlc
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index a7104c2e21049..667656cb4de02 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -2743,18 +2743,10 @@ def test_groupby_none_column_name():
def test_single_element_list_grouping():
# GH 42795
- df = DataFrame(
- {"a": [np.nan, 1], "b": [np.nan, 5], "c": [np.nan, 2]}, index=["x", "y"]
- )
- msg = (
- "In a future version of pandas, a length 1 "
- "tuple will be returned when iterating over "
- "a groupby with a grouper equal to a list of "
- "length 1. Don't supply a list with a single grouper "
- "to avoid this warning."
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- values, _ = next(iter(df.groupby(["a"])))
+ df = DataFrame({"a": [1, 2], "b": [np.nan, 5], "c": [np.nan, 2]}, index=["x", "y"])
+ result = [key for key, _ in df.groupby(["a"])]
+ expected = [(1,), (2,)]
+ assert result == expected
@pytest.mark.parametrize("func", ["sum", "cumsum", "cumprod", "prod"])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50064 | 2022-12-05T04:01:45Z | 2022-12-05T09:18:22Z | 2022-12-05T09:18:22Z | 2022-12-13T03:16:40Z |
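With the deprecation enforced, iterating over a groupby keyed by a one-element list yields length-1 tuple keys, matching the multi-key case. A sketch (the two-key behavior is stable across versions; the single-key tuple behavior assumes pandas 2.0+):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2], "b": [3, 4, 5], "c": [6, 7, 8]})

# Grouping by a list of two columns has always yielded tuple keys:
keys_two = [key for key, _ in df.groupby(["a", "b"])]
assert keys_two == [(1, 3), (1, 4), (2, 5)]

# As of pandas 2.0, a one-element list is consistent with this and
# yields length-1 tuples rather than bare scalars:
# [key for key, _ in df.groupby(["a"])]  ->  [(1,), (2,)]
```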
DEPR: Enforce groupby.transform aligning with input index | diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index d8a36b1711b6e..fb8462f7f58e4 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -776,12 +776,11 @@ as the one being grouped. The transform function must:
* (Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the *second* chunk.
-.. deprecated:: 1.5.0
+.. versionchanged:: 2.0.0
When using ``.transform`` on a grouped DataFrame and the transformation function
- returns a DataFrame, currently pandas does not align the result's index
- with the input's index. This behavior is deprecated and alignment will
- be performed in a future version of pandas. You can apply ``.to_numpy()`` to the
+ returns a DataFrame, pandas now aligns the result's index
+ with the input's index. You can call ``.to_numpy()`` on the
result of the transformation function to avoid alignment.
Similar to :ref:`groupby.aggregate.udfs`, the resulting dtype will reflect that of the
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index b1400be59b3a1..5b57baa9ec39a 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -596,7 +596,9 @@ Removal of prior version deprecations/changes
- Enforced deprecation of silently dropping nuisance columns in groupby and resample operations when ``numeric_only=False`` (:issue:`41475`)
- Changed default of ``numeric_only`` in various :class:`.DataFrameGroupBy` methods; all methods now default to ``numeric_only=False`` (:issue:`46072`)
- Changed default of ``numeric_only`` to ``False`` in :class:`.Resampler` methods (:issue:`47177`)
+- Using the method :meth:`DataFrameGroupBy.transform` with a callable that returns DataFrames will align to the input's index (:issue:`47244`)
- When providing a list of columns of length one to :meth:`DataFrame.groupby`, the keys that are returned by iterating over the resulting :class:`DataFrameGroupBy` object will now be tuples of length one (:issue:`47761`)
+-
.. ---------------------------------------------------------------------------
.. _whatsnew_200.performance:
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index d3e37a40614b3..819220d13566b 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -24,7 +24,6 @@
Union,
cast,
)
-import warnings
import numpy as np
@@ -51,7 +50,6 @@
Substitution,
doc,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
ensure_int64,
@@ -1392,33 +1390,15 @@ def _transform_general(self, func, *args, **kwargs):
applied.append(res)
# Compute and process with the remaining groups
- emit_alignment_warning = False
for name, group in gen:
if group.size == 0:
continue
object.__setattr__(group, "name", name)
res = path(group)
- if (
- not emit_alignment_warning
- and res.ndim == 2
- and not res.index.equals(group.index)
- ):
- emit_alignment_warning = True
res = _wrap_transform_general_frame(self.obj, group, res)
applied.append(res)
- if emit_alignment_warning:
- # GH#45648
- warnings.warn(
- "In a future version of pandas, returning a DataFrame in "
- "groupby.transform will align with the input's index. Apply "
- "`.to_numpy()` to the result in the transform function to keep "
- "the current behavior and silence this warning.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
concat_index = obj.columns if self.axis == 0 else obj.index
other_axis = 1 if self.axis == 0 else 0 # switches between 0 & 1
concatenated = concat(applied, axis=self.axis, verify_integrity=False)
@@ -2336,5 +2316,7 @@ def _wrap_transform_general_frame(
)
assert isinstance(res_frame, DataFrame)
return res_frame
+ elif isinstance(res, DataFrame) and not res.index.is_(group.index):
+ return res._align_frame(group)[0]
else:
return res
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index c7fb40e855ef7..6cb9bb7f23a06 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -471,12 +471,11 @@ class providing the base-class of operations.
The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.
-.. deprecated:: 1.5.0
+.. versionchanged:: 2.0.0
When using ``.transform`` on a grouped DataFrame and the transformation function
- returns a DataFrame, currently pandas does not align the result's index
- with the input's index. This behavior is deprecated and alignment will
- be performed in a future version of pandas. You can apply ``.to_numpy()`` to the
+ returns a DataFrame, pandas now aligns the result's index
+ with the input's index. You can call ``.to_numpy()`` on the
result of the transformation function to avoid alignment.
Examples
diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py
index 8bdbc86d8659c..d0c8b53f13399 100644
--- a/pandas/tests/groupby/transform/test_transform.py
+++ b/pandas/tests/groupby/transform/test_transform.py
@@ -1466,8 +1466,8 @@ def test_null_group_str_transformer_series(request, dropna, transformation_func)
@pytest.mark.parametrize(
"func, series, expected_values",
[
- (Series.sort_values, False, [4, 5, 3, 1, 2]),
- (lambda x: x.head(1), False, ValueError),
+ (Series.sort_values, False, [5, 4, 3, 2, 1]),
+ (lambda x: x.head(1), False, [5.0, np.nan, 3, 2, np.nan]),
# SeriesGroupBy already has correct behavior
(Series.sort_values, True, [5, 4, 3, 2, 1]),
(lambda x: x.head(1), True, [5.0, np.nan, 3.0, 2.0, np.nan]),
@@ -1475,7 +1475,7 @@ def test_null_group_str_transformer_series(request, dropna, transformation_func)
)
@pytest.mark.parametrize("keys", [["a1"], ["a1", "a2"]])
@pytest.mark.parametrize("keys_in_index", [True, False])
-def test_transform_aligns_depr(func, series, expected_values, keys, keys_in_index):
+def test_transform_aligns(func, series, expected_values, keys, keys_in_index):
# GH#45648 - transform should align with the input's index
df = DataFrame({"a1": [1, 1, 3, 2, 2], "b": [5, 4, 3, 2, 1]})
if "a2" in keys:
@@ -1487,19 +1487,11 @@ def test_transform_aligns_depr(func, series, expected_values, keys, keys_in_inde
if series:
gb = gb["b"]
- warn = None if series else FutureWarning
- msg = "returning a DataFrame in groupby.transform will align"
- if expected_values is ValueError:
- with tm.assert_produces_warning(warn, match=msg):
- with pytest.raises(ValueError, match="Length mismatch"):
- gb.transform(func)
- else:
- with tm.assert_produces_warning(warn, match=msg):
- result = gb.transform(func)
- expected = DataFrame({"b": expected_values}, index=df.index)
- if series:
- expected = expected["b"]
- tm.assert_equal(result, expected)
+ result = gb.transform(func)
+ expected = DataFrame({"b": expected_values}, index=df.index)
+ if series:
+ expected = expected["b"]
+ tm.assert_equal(result, expected)
@pytest.mark.parametrize("keys", ["A", ["A", "B"]])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50063 | 2022-12-05T02:54:46Z | 2022-12-05T11:48:23Z | 2022-12-05T11:48:23Z | 2022-12-06T22:22:34Z |
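The enforced behavior matches what `SeriesGroupBy.transform` already did, per the test comment above: results are aligned back to the input's index, and `.to_numpy()` opts out of alignment. A small sketch using the stable Series path (the DataFrame-returning case assumes pandas 2.0+):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2, 2], "b": [4, 3, 1, 2]})
gb = df.groupby("a")["b"]

# Aligned: sort_values only permutes index labels within each group,
# so aligning back to the input index restores the original order.
aligned = gb.transform(lambda x: x.sort_values())
assert aligned.tolist() == [4, 3, 1, 2]

# Unaligned: stripping the index with .to_numpy() keeps the sorted
# positional order within each group instead.
positional = gb.transform(lambda x: x.sort_values().to_numpy())
assert positional.tolist() == [3, 4, 1, 2]
```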
TST: test to confirm rmod works for large series | diff --git a/pandas/tests/arrays/integer/test_arithmetic.py b/pandas/tests/arrays/integer/test_arithmetic.py
index 092cbc6c7997f..3baff790bca8b 100644
--- a/pandas/tests/arrays/integer/test_arithmetic.py
+++ b/pandas/tests/arrays/integer/test_arithmetic.py
@@ -233,6 +233,13 @@ def test_error_invalid_values(data, all_arithmetic_operators):
# TODO test unsigned overflow
+def test_rmod_large_series():
+ result = pd.Series([2] * 10001).rmod(-1)
+ expected = pd.Series([1] * 10001)
+
+ tm.assert_series_equal(result, expected)
+
+
def test_arith_coerce_scalar(data, all_arithmetic_operators):
op = tm.get_op_from_name(all_arithmetic_operators)
s = pd.Series(data)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
test for #29602 | https://api.github.com/repos/pandas-dev/pandas/pulls/50061 | 2022-12-04T22:51:43Z | 2023-01-12T22:05:35Z | null | 2023-01-12T22:05:36Z |
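The test's length of 10001 is presumably chosen because pandas can dispatch arithmetic on large arrays to a different code path (numexpr) above roughly 10,000 elements. A minimal sketch of the asserted behavior — `rmod` computes `other % series` elementwise, and modulo of a negative dividend by a positive divisor is non-negative in Python/NumPy:

```python
import pandas as pd

# -1 % 2 == 1, applied elementwise across a >10,000-row Series.
ser = pd.Series([2] * 10001)
result = ser.rmod(-1)
assert (result == 1).all()
assert len(result) == 10001
```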
TST: add test for nested OrderedDict in constructor | diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index ef80cc847a5b8..fcbca5e7fa4e7 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -1395,6 +1395,16 @@ def test_constructor_list_of_dicts(self):
expected = DataFrame(index=[0])
tm.assert_frame_equal(result, expected)
+ def test_constructor_ordered_dict_nested_preserve_order(self):
+ # see gh-18166
+ nested1 = OrderedDict([("b", 1), ("a", 2)])
+ nested2 = OrderedDict([("b", 2), ("a", 5)])
+ data = OrderedDict([("col2", nested1), ("col1", nested2)])
+ result = DataFrame(data)
+ data = {"col2": [1, 2], "col1": [2, 5]}
+ expected = DataFrame(data=data, index=["b", "a"])
+ tm.assert_frame_equal(result, expected)
+
@pytest.mark.parametrize("dict_type", [dict, OrderedDict])
def test_constructor_ordered_dict_preserve_order(self, dict_type):
# see gh-13304
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
closes #18166 | https://api.github.com/repos/pandas-dev/pandas/pulls/50060 | 2022-12-04T22:49:56Z | 2022-12-06T17:15:09Z | 2022-12-06T17:15:09Z | 2022-12-06T20:48:55Z |
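The new test pins down insertion-order preservation for nested `OrderedDict` input to the DataFrame constructor: column order follows the outer dict, row index order follows the inner dicts. A sketch of the asserted behavior (since Python 3.7, plain `dict` preserves insertion order the same way):

```python
from collections import OrderedDict

import pandas as pd

# Outer keys become columns (in insertion order); inner keys become
# the row index (also in insertion order).
nested1 = OrderedDict([("b", 1), ("a", 2)])
nested2 = OrderedDict([("b", 2), ("a", 5)])
result = pd.DataFrame(OrderedDict([("col2", nested1), ("col1", nested2)]))

assert list(result.columns) == ["col2", "col1"]
assert list(result.index) == ["b", "a"]
```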
DOC: Remove all versionchanged / versionadded directives < 1.0.0 | diff --git a/doc/source/user_guide/dsintro.rst b/doc/source/user_guide/dsintro.rst
index 571f8980070af..9aa6423908cfd 100644
--- a/doc/source/user_guide/dsintro.rst
+++ b/doc/source/user_guide/dsintro.rst
@@ -712,10 +712,8 @@ The ufunc is applied to the underlying array in a :class:`Series`.
ser = pd.Series([1, 2, 3, 4])
np.exp(ser)
-.. versionchanged:: 0.25.0
-
- When multiple :class:`Series` are passed to a ufunc, they are aligned before
- performing the operation.
+When multiple :class:`Series` are passed to a ufunc, they are aligned before
+performing the operation.
Like other parts of the library, pandas will automatically align labeled inputs
as part of a ufunc with multiple inputs. For example, using :meth:`numpy.remainder`
diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index d8a36b1711b6e..1e2b6d6fe4c20 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -629,8 +629,6 @@ For a grouped ``DataFrame``, you can rename in a similar manner:
Named aggregation
~~~~~~~~~~~~~~~~~
-.. versionadded:: 0.25.0
-
To support column-specific aggregation *with control over the output column names*, pandas
accepts the special syntax in :meth:`DataFrameGroupBy.agg` and :meth:`SeriesGroupBy.agg`, known as "named aggregation", where
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index f1c212b53a87a..d6a00022efb79 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -294,8 +294,6 @@ cache_dates : boolean, default True
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
- .. versionadded:: 0.25.0
-
Iteration
+++++++++
@@ -3829,8 +3827,6 @@ using the `odfpy <https://pypi.org/project/odfpy/>`__ module. The semantics and
OpenDocument spreadsheets match what can be done for `Excel files`_ using
``engine='odf'``.
-.. versionadded:: 0.25
-
The :func:`~pandas.read_excel` method can read OpenDocument spreadsheets
.. code-block:: python
@@ -6134,8 +6130,6 @@ No official documentation is available for the SAS7BDAT format.
SPSS formats
------------
-.. versionadded:: 0.25.0
-
The top-level function :func:`read_spss` can read (but not write) SPSS
SAV (.sav) and ZSAV (.zsav) format files.
diff --git a/doc/source/user_guide/reshaping.rst b/doc/source/user_guide/reshaping.rst
index adca9de6c130a..1b8fdc75d5e88 100644
--- a/doc/source/user_guide/reshaping.rst
+++ b/doc/source/user_guide/reshaping.rst
@@ -890,8 +890,6 @@ Note to subdivide over multiple columns we can pass in a list to the
Exploding a list-like column
----------------------------
-.. versionadded:: 0.25.0
-
Sometimes the values in a column are list-like.
.. ipython:: python
diff --git a/doc/source/user_guide/sparse.rst b/doc/source/user_guide/sparse.rst
index bc4eec1c23a35..c86208e70af3c 100644
--- a/doc/source/user_guide/sparse.rst
+++ b/doc/source/user_guide/sparse.rst
@@ -127,9 +127,6 @@ attributes and methods that are specific to sparse data.
This accessor is available only on data with ``SparseDtype``, and on the :class:`Series`
class itself for creating a Series with sparse data from a scipy COO matrix with.
-
-.. versionadded:: 0.25.0
-
A ``.sparse`` accessor has been added for :class:`DataFrame` as well.
See :ref:`api.frame.sparse` for more.
@@ -271,8 +268,6 @@ Interaction with *scipy.sparse*
Use :meth:`DataFrame.sparse.from_spmatrix` to create a :class:`DataFrame` with sparse values from a sparse matrix.
-.. versionadded:: 0.25.0
-
.. ipython:: python
from scipy.sparse import csr_matrix
diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst
index 474068e43a4d4..78134872fa4b8 100644
--- a/doc/source/user_guide/timeseries.rst
+++ b/doc/source/user_guide/timeseries.rst
@@ -647,8 +647,6 @@ We are stopping on the included end-point as it is part of the index:
dft2 = dft2.swaplevel(0, 1).sort_index()
dft2.loc[idx[:, "2013-01-05"], :]
-.. versionadded:: 0.25.0
-
Slicing with string indexing also honors UTC offset.
.. ipython:: python
@@ -2348,8 +2346,6 @@ To return ``dateutil`` time zone objects, append ``dateutil/`` before the string
)
rng_utc.tz
-.. versionadded:: 0.25.0
-
.. ipython:: python
# datetime.timezone
diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 5b2cb880195ec..fc2c486173b9d 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -126,8 +126,6 @@ cdef class IntervalMixin:
"""
Indicates if an interval is empty, meaning it contains no points.
- .. versionadded:: 0.25.0
-
Returns
-------
bool or ndarray
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index 1f18f8cae4ae8..aa75c886a4491 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -265,8 +265,6 @@ cdef class _NaT(datetime):
"""
Convert the Timestamp to a NumPy datetime64 or timedelta64.
- .. versionadded:: 0.25.0
-
With the default 'dtype', this is an alias method for `NaT.to_datetime64()`.
The copy parameter is available here only for compatibility. Its value
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index fc276d5d024cd..618e0208154fa 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -1243,8 +1243,6 @@ cdef class _Timedelta(timedelta):
"""
Convert the Timedelta to a NumPy timedelta64.
- .. versionadded:: 0.25.0
-
This is an alias method for `Timedelta.to_timedelta64()`. The dtype and
copy parameters are available here only for compatibility. Their values
will not affect the return value.
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index f987a2feb2717..108c884bba170 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -1090,8 +1090,6 @@ cdef class _Timestamp(ABCTimestamp):
"""
Convert the Timestamp to a NumPy datetime64.
- .. versionadded:: 0.25.0
-
This is an alias method for `Timestamp.to_datetime64()`. The dtype and
copy parameters are available here only for compatibility. Their values
will not affect the return value.
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index cd719a5256ea3..de9e3ace4f0ca 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -1499,8 +1499,6 @@ def searchsorted(
"""
Find indices where elements should be inserted to maintain order.
- .. versionadded:: 0.25.0
-
Find the indices into a sorted array `arr` (a) such that, if the
corresponding elements in `value` were inserted before the indices,
the order of `arr` would be preserved.
@@ -1727,8 +1725,6 @@ def safe_sort(
codes equal to ``-1``. If ``verify=False``, it is assumed there
are no out of bound codes. Ignored when ``codes`` is None.
- .. versionadded:: 0.25.0
-
Returns
-------
ordered : AnyArrayLike
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index a9af210e08741..030b1344d5348 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1568,9 +1568,7 @@ def argsort(
"""
Return the indices that would sort the Categorical.
- .. versionchanged:: 0.25.0
-
- Changed to sort missing values at the end.
+ Missing values are sorted at the end.
Parameters
----------
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index be20d825b0c80..c0a65a97d31f8 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -1594,8 +1594,6 @@ def mean(self, *, skipna: bool = True, axis: AxisInt | None = 0):
"""
Return the mean value of the Array.
- .. versionadded:: 0.25.0
-
Parameters
----------
skipna : bool, default True
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 5ebd882cd3a13..3028e276361d3 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -1596,8 +1596,6 @@ def repeat(
Return a boolean mask whether the value is contained in the Intervals
of the %(klass)s.
- .. versionadded:: 0.25.0
-
Parameters
----------
other : scalar
diff --git a/pandas/core/arrays/sparse/accessor.py b/pandas/core/arrays/sparse/accessor.py
index 5149dc0973b79..1d5fd09034ae0 100644
--- a/pandas/core/arrays/sparse/accessor.py
+++ b/pandas/core/arrays/sparse/accessor.py
@@ -193,8 +193,6 @@ def to_dense(self) -> Series:
"""
Convert a Series from sparse values to dense.
- .. versionadded:: 0.25.0
-
Returns
-------
Series:
@@ -227,8 +225,6 @@ def to_dense(self) -> Series:
class SparseFrameAccessor(BaseAccessor, PandasDelegate):
"""
DataFrame accessor for sparse data.
-
- .. versionadded:: 0.25.0
"""
def _validate(self, data):
@@ -241,8 +237,6 @@ def from_spmatrix(cls, data, index=None, columns=None) -> DataFrame:
"""
Create a new DataFrame from a scipy sparse matrix.
- .. versionadded:: 0.25.0
-
Parameters
----------
data : scipy.sparse.spmatrix
@@ -297,8 +291,6 @@ def to_dense(self) -> DataFrame:
"""
Convert a DataFrame with sparse values to dense.
- .. versionadded:: 0.25.0
-
Returns
-------
DataFrame
@@ -322,8 +314,6 @@ def to_coo(self):
"""
Return the contents of the frame as a sparse SciPy COO matrix.
- .. versionadded:: 0.25.0
-
Returns
-------
coo_matrix : scipy.sparse.spmatrix
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 7dce02f362b47..033917fe9eb2d 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -530,8 +530,6 @@ def from_spmatrix(cls: type[SparseArrayT], data: spmatrix) -> SparseArrayT:
"""
Create a SparseArray from a scipy.sparse matrix.
- .. versionadded:: 0.25.0
-
Parameters
----------
data : scipy.sparse.sp_matrix
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index eb3365d4f8410..acdfaba3d8d86 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -489,8 +489,7 @@ class DataFrame(NDFrame, OpsMixin):
occurs if data is a Series or a DataFrame itself. Alignment is done on
Series/DataFrame inputs.
- .. versionchanged:: 0.25.0
- If data is a list of dicts, column order follows insertion-order.
+ If data is a list of dicts, column order follows insertion-order.
index : Index or array-like
Index to use for resulting frame. Will default to RangeIndex if
@@ -3151,9 +3150,7 @@ def to_html(
header="Whether to print column labels, default True",
col_space_type="str or int, list or dict of int or str",
col_space="The minimum width of each column in CSS length "
- "units. An int is assumed to be px units.\n\n"
- " .. versionadded:: 0.25.0\n"
- " Ability to use str",
+ "units. An int is assumed to be px units.",
)
@Substitution(shared_params=fmt.common_docstring, returns=fmt.return_docstring)
def to_html(
@@ -4349,9 +4346,6 @@ def query(self, expr: str, inplace: bool = False, **kwargs) -> DataFrame | None:
For example, if one of your columns is called ``a a`` and you want
to sum it with ``b``, your query should be ```a a` + b``.
- .. versionadded:: 0.25.0
- Backtick quoting introduced.
-
.. versionadded:: 1.0.0
Expanding functionality of backtick quoting for more than only spaces.
@@ -8493,8 +8487,6 @@ def pivot(
If True: only show observed values for categorical groupers.
If False: show all values for categorical groupers.
- .. versionchanged:: 0.25.0
-
sort : bool, default True
Specifies if the result should be sorted.
@@ -8808,8 +8800,6 @@ def explode(
"""
Transform each element of a list-like to a row, replicating index values.
- .. versionadded:: 0.25.0
-
Parameters
----------
column : IndexLabel
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 0b55416d2bd7e..b962ca780f1d0 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2916,8 +2916,6 @@ def union(self, other, sort=None):
If the Index objects are incompatible, both Index objects will be
cast to dtype('object') first.
- .. versionchanged:: 0.25.0
-
Parameters
----------
other : Index or array-like
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index ae88b85aa06e1..a2281c6fd9540 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -607,8 +607,6 @@ def _union(self, other: Index, sort):
increasing and other is fully contained in self. Otherwise, returns
an unsorted ``Int64Index``
- .. versionadded:: 0.25.0
-
Returns
-------
union : Index
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index b6fa5da857910..7e0052cf896e4 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -256,7 +256,6 @@ def merge_ordered(
`right` should be left as-is, with no suffix. At least one of the
values must not be None.
- .. versionchanged:: 0.25.0
how : {'left', 'right', 'outer', 'inner'}, default 'outer'
* left: use only keys from left frame (SQL: left outer join)
* right: use only keys from right frame (SQL: right outer join)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 48bc07ca022ee..1e5f565934b50 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4117,8 +4117,6 @@ def explode(self, ignore_index: bool = False) -> Series:
"""
Transform each element of a list-like to a row.
- .. versionadded:: 0.25.0
-
Parameters
----------
ignore_index : bool, default False
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index 07dc203e556e8..147fa622fdedc 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -551,9 +551,6 @@
The method to use when for replacement, when `to_replace` is a
scalar, list or tuple and `value` is ``None``.
- .. versionchanged:: 0.23.0
- Added to DataFrame.
-
Returns
-------
{klass}
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 8cd4cb976503d..9ac2e538b6a80 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -456,7 +456,6 @@ def cat(
to match the length of the calling Series/Index). To disable
alignment, use `.values` on any Series/Index/DataFrame in `others`.
- .. versionadded:: 0.23.0
.. versionchanged:: 1.0.0
Changed default of `join` from None to `'left'`.
@@ -2978,7 +2977,7 @@ def len(self):
_doc_args["casefold"] = {
"type": "be casefolded",
"method": "casefold",
- "version": "\n .. versionadded:: 0.25.0\n",
+ "version": "",
}
@Appender(_shared_docs["casemethods"] % _doc_args["lower"])
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 430343beb630b..fd0604903000e 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -824,9 +824,6 @@ def to_datetime(
out-of-bounds values will render the cache unusable and may slow down
parsing.
- .. versionchanged:: 0.25.0
- changed default value from :const:`False` to :const:`True`.
-
Returns
-------
datetime
diff --git a/pandas/io/formats/printing.py b/pandas/io/formats/printing.py
index 0338cb018049b..37351049f0f56 100644
--- a/pandas/io/formats/printing.py
+++ b/pandas/io/formats/printing.py
@@ -312,8 +312,6 @@ def format_object_summary(
If False, only break lines when the a line of values gets wider
than the display width.
- .. versionadded:: 0.25.0
-
Returns
-------
summary string
diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py
index ec2ffbcf9682c..8db93f882fefd 100644
--- a/pandas/io/gbq.py
+++ b/pandas/io/gbq.py
@@ -130,7 +130,6 @@ def read_gbq(
package. It also requires the ``google-cloud-bigquery-storage`` and
``fastavro`` packages.
- .. versionadded:: 0.25.0
max_results : int, optional
If set, limit the maximum number of rows to fetch from the query
results.
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 5f02822b68d6d..f2780d5fa6832 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -549,19 +549,11 @@ def read_json(
For all ``orient`` values except ``'table'``, default is True.
- .. versionchanged:: 0.25.0
-
- Not applicable for ``orient='table'``.
-
convert_axes : bool, default None
Try to convert the axes to the proper dtypes.
For all ``orient`` values except ``'table'``, default is True.
- .. versionchanged:: 0.25.0
-
- Not applicable for ``orient='table'``.
-
convert_dates : bool or list of str, default True
If True then default datelike columns may be converted (depending on
keep_default_dates).
diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
index 4b2d0d9beea3f..2e320c6cb9b8f 100644
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -66,8 +66,6 @@ def nested_to_record(
max_level: int, optional, default: None
The max depth to normalize.
- .. versionadded:: 0.25.0
-
Returns
-------
d - dict or list of dicts, matching `ds`
@@ -289,8 +287,6 @@ def _json_normalize(
Max number of levels(depth of dict) to normalize.
if None, normalizes all levels.
- .. versionadded:: 0.25.0
-
Returns
-------
frame : DataFrame
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index d9c2403a19d0c..23a335ec0b965 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -267,7 +267,6 @@
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
- .. versionadded:: 0.25.0
iterator : bool, default False
Return TextFileReader object for iteration or getting chunks with
``get_chunk()``.
diff --git a/pandas/io/spss.py b/pandas/io/spss.py
index 32efd6ca1180c..630b78497d32f 100644
--- a/pandas/io/spss.py
+++ b/pandas/io/spss.py
@@ -24,8 +24,6 @@ def read_spss(
"""
Load an SPSS file from the file path, returning a DataFrame.
- .. versionadded:: 0.25.0
-
Parameters
----------
path : str or Path
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index aef1eb5d59e68..c8b217319b91d 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -690,18 +690,12 @@ class PlotAccessor(PandasObject):
logx : bool or 'sym', default False
Use log scaling or symlog scaling on x axis.
- .. versionchanged:: 0.25.0
-
logy : bool or 'sym' default False
Use log scaling or symlog scaling on y axis.
- .. versionchanged:: 0.25.0
-
loglog : bool or 'sym', default False
Use log scaling or symlog scaling on both x and y axes.
- .. versionchanged:: 0.25.0
-
xticks : sequence
Values to use for the xticks.
yticks : sequence
| **Description**
Remove all `versionchanged` / `versionadded` directives for versions < `1.0.0`
**Checks**
- [x] closes https://github.com/noatamir/pydata-global-sprints/issues/10
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). | https://api.github.com/repos/pandas-dev/pandas/pulls/50057 | 2022-12-04T16:25:18Z | 2022-12-05T09:24:09Z | 2022-12-05T09:24:09Z | 2022-12-05T09:24:21Z |
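Not part of the PR itself, but a small stdlib sketch of how one might double-check that no pre-1.0 `versionadded`/`versionchanged` directives survive in a docstring or ``.rst`` file after a cleanup like this (the helper name is hypothetical):

```python
import re

# Matches Sphinx directives documenting pre-1.0 pandas versions,
# e.g. ".. versionadded:: 0.25.0" or ".. versionchanged:: 0.23.0".
OLD_DIRECTIVE = re.compile(r"\.\. version(?:added|changed):: 0\.\d+(?:\.\d+)?")

def find_old_directives(text):
    """Return every pre-1.0 versionadded/versionchanged directive in text."""
    return OLD_DIRECTIVE.findall(text)

sample = ".. versionadded:: 0.25.0\n.. versionchanged:: 1.0.0\n"
print(find_old_directives(sample))  # ['.. versionadded:: 0.25.0']
```

Running this over ``pandas/`` would flag any directive the sweep missed, while leaving 1.0.0+ directives alone.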
BUG: read_csv with engine pyarrow parsing multiple date columns | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index e1ac9e3309de7..67f5484044cd5 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -386,6 +386,7 @@ I/O
- :meth:`DataFrame.to_orc` now raising ``ValueError`` when non-default :class:`Index` is given (:issue:`51828`)
- :meth:`DataFrame.to_sql` now raising ``ValueError`` when the name param is left empty while using SQLAlchemy to connect (:issue:`52675`)
- Bug in :func:`json_normalize`, fix json_normalize cannot parse metadata fields list type (:issue:`37782`)
+- Bug in :func:`read_csv` where it would error when ``parse_dates`` was set to a list or dictionary with ``engine="pyarrow"`` (:issue:`47961`)
- Bug in :func:`read_hdf` not properly closing store after a ``IndexError`` is raised (:issue:`52781`)
- Bug in :func:`read_html`, style elements were read into DataFrames (:issue:`52197`)
- Bug in :func:`read_html`, tail texts were removed together with elements containing ``display:none`` style (:issue:`51629`)
diff --git a/pandas/io/parsers/arrow_parser_wrapper.py b/pandas/io/parsers/arrow_parser_wrapper.py
index e106db224c3dc..d34b3ae1372fd 100644
--- a/pandas/io/parsers/arrow_parser_wrapper.py
+++ b/pandas/io/parsers/arrow_parser_wrapper.py
@@ -58,6 +58,21 @@ def _get_pyarrow_options(self) -> None:
if pandas_name in self.kwds and self.kwds.get(pandas_name) is not None:
self.kwds[pyarrow_name] = self.kwds.pop(pandas_name)
+ # Date format handling
+ # If we get a string, we need to convert it into a list for pyarrow
+ # If we get a dict, we want to parse those separately
+ date_format = self.date_format
+ if isinstance(date_format, str):
+ date_format = [date_format]
+ else:
+ # In case of dict, we don't want to propagate through, so
+ # just set to pyarrow default of None
+
+ # Ideally, in future we disable pyarrow dtype inference (read in as string)
+ # to prevent misreads.
+ date_format = None
+ self.kwds["timestamp_parsers"] = date_format
+
self.parse_options = {
option_name: option_value
for option_name, option_value in self.kwds.items()
@@ -76,6 +91,7 @@ def _get_pyarrow_options(self) -> None:
"true_values",
"false_values",
"decimal_point",
+ "timestamp_parsers",
)
}
self.convert_options["strings_can_be_null"] = "" in self.kwds["null_values"]
@@ -116,7 +132,7 @@ def _finalize_pandas_output(self, frame: DataFrame) -> DataFrame:
multi_index_named = False
frame.columns = self.names
# we only need the frame not the names
- frame.columns, frame = self._do_date_conversions(frame.columns, frame)
+ _, frame = self._do_date_conversions(frame.columns, frame)
if self.index_col is not None:
index_to_set = self.index_col.copy()
for i, item in enumerate(self.index_col):
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 564339cefa3aa..3208286489fbe 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -61,8 +61,10 @@
from pandas import (
ArrowDtype,
+ DataFrame,
DatetimeIndex,
StringDtype,
+ concat,
)
from pandas.core import algorithms
from pandas.core.arrays import (
@@ -92,8 +94,6 @@
Scalar,
)
- from pandas import DataFrame
-
class ParserBase:
class BadLineHandleMethod(Enum):
@@ -1304,7 +1304,10 @@ def _isindex(colspec):
new_cols.append(new_name)
date_cols.update(old_names)
- data_dict.update(new_data)
+ if isinstance(data_dict, DataFrame):
+ data_dict = concat([DataFrame(new_data), data_dict], axis=1, copy=False)
+ else:
+ data_dict.update(new_data)
new_cols.extend(columns)
if not keep_date_col:
diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py
index 81de4f13de81d..b354f7d9da94d 100644
--- a/pandas/tests/io/parser/test_parse_dates.py
+++ b/pandas/tests/io/parser/test_parse_dates.py
@@ -139,9 +139,8 @@ def test_separator_date_conflict(all_parsers):
tm.assert_frame_equal(df, expected)
-@xfail_pyarrow
@pytest.mark.parametrize("keep_date_col", [True, False])
-def test_multiple_date_col_custom(all_parsers, keep_date_col):
+def test_multiple_date_col_custom(all_parsers, keep_date_col, request):
data = """\
KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000
KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000
@@ -152,6 +151,14 @@ def test_multiple_date_col_custom(all_parsers, keep_date_col):
"""
parser = all_parsers
+ if keep_date_col and parser.engine == "pyarrow":
+ # For this to pass, we need to disable auto-inference on the date columns
+ # in parse_dates. We have no way of doing this though
+ mark = pytest.mark.xfail(
+ reason="pyarrow doesn't support disabling auto-inference on column numbers."
+ )
+ request.node.add_marker(mark)
+
def date_parser(*date_cols):
"""
Test date parser.
@@ -301,9 +308,8 @@ def test_concat_date_col_fail(container, dim):
parsing.concat_date_cols(date_cols)
-@xfail_pyarrow
@pytest.mark.parametrize("keep_date_col", [True, False])
-def test_multiple_date_col(all_parsers, keep_date_col):
+def test_multiple_date_col(all_parsers, keep_date_col, request):
data = """\
KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000
KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000
@@ -313,6 +319,15 @@ def test_multiple_date_col(all_parsers, keep_date_col):
KORD,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000
"""
parser = all_parsers
+
+ if keep_date_col and parser.engine == "pyarrow":
+ # For this to pass, we need to disable auto-inference on the date columns
+ # in parse_dates. We have no way of doing this though
+ mark = pytest.mark.xfail(
+ reason="pyarrow doesn't support disabling auto-inference on column numbers."
+ )
+ request.node.add_marker(mark)
+
kwds = {
"header": None,
"parse_dates": [[1, 2], [1, 3]],
@@ -469,7 +484,6 @@ def test_date_col_as_index_col(all_parsers):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
def test_multiple_date_cols_int_cast(all_parsers):
data = (
"KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
@@ -530,7 +544,6 @@ def test_multiple_date_cols_int_cast(all_parsers):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
def test_multiple_date_col_timestamp_parse(all_parsers):
parser = all_parsers
data = """05/31/2012,15:30:00.029,1306.25,1,E,0,,1306.25
@@ -1168,7 +1181,6 @@ def test_multiple_date_cols_chunked(all_parsers):
tm.assert_frame_equal(chunks[2], expected[4:])
-@xfail_pyarrow
def test_multiple_date_col_named_index_compat(all_parsers):
parser = all_parsers
data = """\
@@ -1192,7 +1204,6 @@ def test_multiple_date_col_named_index_compat(all_parsers):
tm.assert_frame_equal(with_indices, with_names)
-@xfail_pyarrow
def test_multiple_date_col_multiple_index_compat(all_parsers):
parser = all_parsers
data = """\
@@ -1408,7 +1419,6 @@ def test_parse_date_time_multi_level_column_name(all_parsers):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
@pytest.mark.parametrize(
"data,kwargs,expected",
[
@@ -1498,9 +1508,6 @@ def test_parse_date_time(all_parsers, data, kwargs, expected):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
-# From date_parser fallback behavior
-@pytest.mark.filterwarnings("ignore:elementwise comparison:FutureWarning")
def test_parse_date_fields(all_parsers):
parser = all_parsers
data = "year,month,day,a\n2001,01,10,10.\n2001,02,1,11."
@@ -1510,7 +1517,7 @@ def test_parse_date_fields(all_parsers):
StringIO(data),
header=0,
parse_dates={"ymd": [0, 1, 2]},
- date_parser=pd.to_datetime,
+ date_parser=lambda x: x,
)
expected = DataFrame(
@@ -1520,7 +1527,6 @@ def test_parse_date_fields(all_parsers):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
@pytest.mark.parametrize(
("key", "value", "warn"),
[
@@ -1557,7 +1563,6 @@ def test_parse_date_all_fields(all_parsers, key, value, warn):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
@pytest.mark.parametrize(
("key", "value", "warn"),
[
@@ -1594,7 +1599,6 @@ def test_datetime_fractional_seconds(all_parsers, key, value, warn):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
def test_generic(all_parsers):
parser = all_parsers
data = "year,month,day,a\n2001,01,10,10.\n2001,02,1,11."
| - [ ] closes #47961 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50056 | 2022-12-04T16:23:37Z | 2023-05-18T21:21:05Z | 2023-05-18T21:21:05Z | 2023-05-18T21:21:17Z |
CI: Pin pyarrow smaller than 10 | diff --git a/.github/actions/setup-conda/action.yml b/.github/actions/setup-conda/action.yml
index 002d0020c2df1..7d1e54052f938 100644
--- a/.github/actions/setup-conda/action.yml
+++ b/.github/actions/setup-conda/action.yml
@@ -18,7 +18,7 @@ runs:
- name: Set Arrow version in ${{ inputs.environment-file }} to ${{ inputs.pyarrow-version }}
run: |
grep -q ' - pyarrow' ${{ inputs.environment-file }}
- sed -i"" -e "s/ - pyarrow/ - pyarrow=${{ inputs.pyarrow-version }}/" ${{ inputs.environment-file }}
+ sed -i"" -e "s/ - pyarrow<10/ - pyarrow=${{ inputs.pyarrow-version }}/" ${{ inputs.environment-file }}
cat ${{ inputs.environment-file }}
shell: bash
if: ${{ inputs.pyarrow-version }}
diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index 1f6f73f3c963f..25623d9826030 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -42,7 +42,7 @@ dependencies:
- psycopg2
- pymysql
- pytables
- - pyarrow
+ - pyarrow<10
- pyreadstat
- python-snappy
- pyxlsb
diff --git a/ci/deps/actions-38-downstream_compat.yaml b/ci/deps/actions-38-downstream_compat.yaml
index 1c59b9db9b1fa..15ce02204ee99 100644
--- a/ci/deps/actions-38-downstream_compat.yaml
+++ b/ci/deps/actions-38-downstream_compat.yaml
@@ -41,7 +41,7 @@ dependencies:
- odfpy
- pandas-gbq
- psycopg2
- - pyarrow
+ - pyarrow<10
- pymysql
- pyreadstat
- pytables
diff --git a/ci/deps/actions-38.yaml b/ci/deps/actions-38.yaml
index 48b9d18771afb..c93af914c2277 100644
--- a/ci/deps/actions-38.yaml
+++ b/ci/deps/actions-38.yaml
@@ -40,7 +40,7 @@ dependencies:
- odfpy
- pandas-gbq
- psycopg2
- - pyarrow
+ - pyarrow<10
- pymysql
- pyreadstat
- pytables
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index 59191ad107d12..f6240ae7611b9 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -41,7 +41,7 @@ dependencies:
- pandas-gbq
- psycopg2
- pymysql
- - pyarrow
+ - pyarrow<10
- pyreadstat
- pytables
- python-snappy
diff --git a/ci/deps/circle-38-arm64.yaml b/ci/deps/circle-38-arm64.yaml
index 63547d3521489..2095850aa4d3a 100644
--- a/ci/deps/circle-38-arm64.yaml
+++ b/ci/deps/circle-38-arm64.yaml
@@ -40,7 +40,7 @@ dependencies:
- odfpy
- pandas-gbq
- psycopg2
- - pyarrow
+ - pyarrow<10
- pymysql
# Not provided on ARM
#- pyreadstat
diff --git a/environment.yml b/environment.yml
index 87c5f5d031fcf..70884f4ca98a3 100644
--- a/environment.yml
+++ b/environment.yml
@@ -42,7 +42,7 @@ dependencies:
- odfpy
- pandas-gbq
- psycopg2
- - pyarrow
+ - pyarrow<10
- pymysql
- pyreadstat
- pytables
diff --git a/requirements-dev.txt b/requirements-dev.txt
index debbdb635901c..caa3dd49add3b 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -31,7 +31,7 @@ openpyxl
odfpy
pandas-gbq
psycopg2-binary
-pyarrow
+pyarrow<10
pymysql
pyreadstat
tables
Let's see if this fixes CI; matplotlib is a bit weird. | https://api.github.com/repos/pandas-dev/pandas/pulls/50055 | 2022-12-04T16:20:55Z | 2022-12-04T17:27:27Z | 2022-12-04T17:27:27Z | 2022-12-15T13:42:58Z
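The ``setup-conda`` action change above is needed because once the environment files say ``- pyarrow<10``, the old substitution pattern ``s/ - pyarrow/ - pyarrow=X/`` would presumably produce the invalid spec ``- pyarrow=X<10``. A Python mirror of the corrected ``sed`` call (hypothetical helper, the real workflow uses ``sed -i``):

```python
import re

env_yaml = "dependencies:\n  - numpy\n  - pyarrow<10\n"

def pin_pyarrow(text, version):
    """Swap the '<10' upper bound for an exact '=X' pin, matching the
    full '- pyarrow<10' entry so no stray '<10' suffix is left behind."""
    return re.sub(r"- pyarrow<10", f"- pyarrow={version}", text)

print(pin_pyarrow(env_yaml, "9"))  # '  - pyarrow<10' becomes '  - pyarrow=9'
```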
BUG: to_datetime with decimal number doesn't fail for %Y%m%d | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 5b57baa9ec39a..b9b955acb757f 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -661,6 +661,8 @@ Datetimelike
- Bug in ``pandas.tseries.holiday.Holiday`` where a half-open date interval causes inconsistent return types from :meth:`USFederalHolidayCalendar.holidays` (:issue:`49075`)
- Bug in rendering :class:`DatetimeIndex` and :class:`Series` and :class:`DataFrame` with timezone-aware dtypes with ``dateutil`` or ``zoneinfo`` timezones near daylight-savings transitions (:issue:`49684`)
- Bug in :func:`to_datetime` was raising ``ValueError`` when parsing :class:`Timestamp`, ``datetime``, or ``np.datetime64`` objects with non-ISO8601 ``format`` (:issue:`49298`, :issue:`50036`)
+- Bug in :func:`to_datetime` with ``exact`` and ``format='%Y%m%d'`` wasn't raising if the input didn't match the format (:issue:`50051`)
+- Bug in :func:`to_datetime` with ``errors='ignore'`` and ``format='%Y%m%d'`` was returning out-of-bounds inputs as ``datetime.datetime`` objects instead of returning the input (:issue:`50054`)
-
Timedelta
diff --git a/pandas/_libs/tslibs/parsing.pyi b/pandas/_libs/tslibs/parsing.pyi
index db1388672b37c..d4287622ab0ab 100644
--- a/pandas/_libs/tslibs/parsing.pyi
+++ b/pandas/_libs/tslibs/parsing.pyi
@@ -27,11 +27,6 @@ def try_parse_dates(
dayfirst: bool = ...,
default: datetime | None = ...,
) -> npt.NDArray[np.object_]: ...
-def try_parse_year_month_day(
- years: npt.NDArray[np.object_], # object[:]
- months: npt.NDArray[np.object_], # object[:]
- days: npt.NDArray[np.object_], # object[:]
-) -> npt.NDArray[np.object_]: ...
def try_parse_datetime_components(
years: npt.NDArray[np.object_], # object[:]
months: npt.NDArray[np.object_], # object[:]
diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 25a2722c48bd6..cdccccf1f0a5d 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -744,25 +744,6 @@ def try_parse_dates(
return result.base # .base to access underlying ndarray
-def try_parse_year_month_day(
- object[:] years, object[:] months, object[:] days
-) -> np.ndarray:
- cdef:
- Py_ssize_t i, n
- object[::1] result
-
- n = len(years)
- # TODO(cython3): Use len instead of `shape[0]`
- if months.shape[0] != n or days.shape[0] != n:
- raise ValueError("Length of years/months/days must all be equal")
- result = np.empty(n, dtype="O")
-
- for i in range(n):
- result[i] = datetime(int(years[i]), int(months[i]), int(days[i]))
-
- return result.base # .base to access underlying ndarray
-
-
def try_parse_datetime_components(object[:] years,
object[:] months,
object[:] days,
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index fd0604903000e..3d0eaa4f993bd 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -24,8 +24,6 @@
Timedelta,
Timestamp,
iNaT,
- nat_strings,
- parsing,
timezones as libtimezones,
)
from pandas._libs.tslibs.parsing import (
@@ -38,7 +36,6 @@
AnyArrayLike,
ArrayLike,
DateTimeErrorChoices,
- npt,
)
from pandas.core.dtypes.common import (
@@ -57,13 +54,11 @@
ABCDataFrame,
ABCSeries,
)
-from pandas.core.dtypes.missing import notna
from pandas.arrays import (
DatetimeArray,
IntegerArray,
)
-from pandas.core import algorithms
from pandas.core.algorithms import unique
from pandas.core.arrays.base import ExtensionArray
from pandas.core.arrays.datetimes import (
@@ -407,7 +402,6 @@ def _convert_listlike_datetimes(
# warn if passing timedelta64, raise for PeriodDtype
# NB: this must come after unit transformation
- orig_arg = arg
try:
arg, _ = maybe_convert_dtype(arg, copy=False, tz=libtimezones.maybe_get_tz(tz))
except TypeError:
@@ -435,8 +429,8 @@ def _convert_listlike_datetimes(
require_iso8601 = not infer_datetime_format
if format is not None and not require_iso8601:
- res = _to_datetime_with_format(
- arg, orig_arg, name, utc, format, exact, errors, infer_datetime_format
+ res = _array_strptime_with_fallback(
+ arg, name, utc, format, exact, errors, infer_datetime_format
)
if res is not None:
return res
@@ -510,43 +504,6 @@ def _array_strptime_with_fallback(
return _box_as_indexlike(result, utc=utc, name=name)
-def _to_datetime_with_format(
- arg,
- orig_arg,
- name,
- utc: bool,
- fmt: str,
- exact: bool,
- errors: str,
- infer_datetime_format: bool,
-) -> Index | None:
- """
- Try parsing with the given format, returning None on failure.
- """
- result = None
-
- # shortcut formatting here
- if fmt == "%Y%m%d":
- # pass orig_arg as float-dtype may have been converted to
- # datetime64[ns]
- orig_arg = ensure_object(orig_arg)
- try:
- # may return None without raising
- result = _attempt_YYYYMMDD(orig_arg, errors=errors)
- except (ValueError, TypeError, OutOfBoundsDatetime) as err:
- raise ValueError(
- "cannot convert the input to '%Y%m%d' date format"
- ) from err
- if result is not None:
- return _box_as_indexlike(result, utc=utc, name=name)
-
- # fallback
- res = _array_strptime_with_fallback(
- arg, name, utc, fmt, exact, errors, infer_datetime_format
- )
- return res
-
-
def _to_datetime_with_unit(arg, unit, name, utc: bool, errors: str) -> Index:
"""
to_datetime specalized to the case where a 'unit' is passed.
@@ -973,7 +930,7 @@ def to_datetime(
in addition to forcing non-dates (or non-parseable dates) to :const:`NaT`.
>>> pd.to_datetime('13000101', format='%Y%m%d', errors='ignore')
- datetime.datetime(1300, 1, 1, 0, 0)
+ '13000101'
>>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce')
NaT
@@ -1241,60 +1198,6 @@ def coerce(values):
return values
-def _attempt_YYYYMMDD(arg: npt.NDArray[np.object_], errors: str) -> np.ndarray | None:
- """
- try to parse the YYYYMMDD/%Y%m%d format, try to deal with NaT-like,
- arg is a passed in as an object dtype, but could really be ints/strings
- with nan-like/or floats (e.g. with nan)
-
- Parameters
- ----------
- arg : np.ndarray[object]
- errors : {'raise','ignore','coerce'}
- """
-
- def calc(carg):
- # calculate the actual result
- carg = carg.astype(object, copy=False)
- parsed = parsing.try_parse_year_month_day(
- carg / 10000, carg / 100 % 100, carg % 100
- )
- return tslib.array_to_datetime(parsed, errors=errors)[0]
-
- def calc_with_mask(carg, mask):
- result = np.empty(carg.shape, dtype="M8[ns]")
- iresult = result.view("i8")
- iresult[~mask] = iNaT
-
- masked_result = calc(carg[mask].astype(np.float64).astype(np.int64))
- result[mask] = masked_result.astype("M8[ns]")
- return result
-
- # try intlike / strings that are ints
- try:
- return calc(arg.astype(np.int64))
- except (ValueError, OverflowError, TypeError):
- pass
-
- # a float with actual np.nan
- try:
- carg = arg.astype(np.float64)
- return calc_with_mask(carg, notna(carg))
- except (ValueError, OverflowError, TypeError):
- pass
-
- # string with NaN-like
- try:
- # error: Argument 2 to "isin" has incompatible type "List[Any]"; expected
- # "Union[Union[ExtensionArray, ndarray], Index, Series]"
- mask = ~algorithms.isin(arg, list(nat_strings)) # type: ignore[arg-type]
- return calc_with_mask(arg, mask)
- except (ValueError, OverflowError, TypeError):
- pass
-
- return None
-
-
__all__ = [
"DateParseError",
"should_cache",
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 0e9c8a3ce487b..5046b7adc8aef 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -117,6 +117,7 @@ def test_to_datetime_format_YYYYMMDD(self, cache):
tm.assert_series_equal(result, expected)
def test_to_datetime_format_YYYYMMDD_with_nat(self, cache):
+ # GH50051
ser = Series([19801222, 19801222] + [19810105] * 5)
# with NaT
expected = Series(
@@ -125,24 +126,21 @@ def test_to_datetime_format_YYYYMMDD_with_nat(self, cache):
expected[2] = np.nan
ser[2] = np.nan
- result = to_datetime(ser, format="%Y%m%d", cache=cache)
- tm.assert_series_equal(result, expected)
+ with pytest.raises(ValueError, match=None):
+ to_datetime(ser, format="%Y%m%d", cache=cache)
# string with NaT
ser2 = ser.apply(str)
ser2[2] = "nat"
- result = to_datetime(ser2, format="%Y%m%d", cache=cache)
- tm.assert_series_equal(result, expected)
+ with pytest.raises(ValueError, match=None):
+ to_datetime(ser2, format="%Y%m%d", cache=cache)
def test_to_datetime_format_YYYYMMDD_ignore(self, cache):
# coercion
- # GH 7930
+ # GH 7930, GH50054
ser = Series([20121231, 20141231, 99991231])
+ expected = Series([20121231, 20141231, 99991231], dtype=object)
result = to_datetime(ser, format="%Y%m%d", errors="ignore", cache=cache)
- expected = Series(
- [datetime(2012, 12, 31), datetime(2014, 12, 31), datetime(9999, 12, 31)],
- dtype=object,
- )
tm.assert_series_equal(result, expected)
def test_to_datetime_format_YYYYMMDD_coercion(self, cache):
| - [x] closes #50051 (Replace xxxx with the GitHub issue number)
- [x] closes #17410
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Previously, %Y%m%d was excluded from the ISO8601 fast path, and went down its own special path.
This:
- introduced complexity
- introduced bugs (like the one linked)
%Y%m%d is listed as an example of an ISO8601 date on Wikipedia (https://en.wikipedia.org/wiki/ISO_8601#Calendar_dates):
> the standard allows both the "YYYY-MM-DD" and YYYYMMDD formats for complete calendar date representations
So, I'd suggest just getting rid of the special %Y%m%d path, and just going down the ISO8601 path
---
Performance degrades and is now on par with other non-ISO8601 formats. I think this is fine: if the code is buggy, it doesn't matter how fast it is.
On upstream/main:
```python
In [1]: dates = pd.date_range('1700', '2200').strftime('%Y%m%d')
In [2]: %%timeit
...: to_datetime(dates, format='%Y%m%d')
...:
...:
68.6 ms ± 2.18 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
On this branch:
```python
In [1]: dates = pd.date_range('1700', '2200').strftime('%Y%m%d')
In [2]: %%timeit
...: to_datetime(dates, format='%Y%m%d')
...:
...:
212 ms ± 6.57 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
Note that this is comparable performance with other non-ISO8601 formats:
```python
In [4]: %%timeit
...: to_datetime(dates, format='%Y foo %m bar %d')
...:
...:
235 ms ± 7.99 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
The fastest thing in this example would actually be to not specify a `format`, but then that would fail on 6-digit `%Y%m%d` dates (see https://github.com/pandas-dev/pandas/issues/17410#issuecomment-1337636330) | https://api.github.com/repos/pandas-dev/pandas/pulls/50054 | 2022-12-04T14:32:44Z | 2022-12-14T08:39:58Z | null | 2022-12-14T08:39:58Z |
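A pure-stdlib illustration (not pandas itself) of the strict matching the shared strptime/ISO8601 path enforces, which the removed ``_attempt_YYYYMMDD`` int-cast shortcut did not: a decimal like ``"20121231.0"`` must not parse as ``%Y%m%d``.

```python
from datetime import datetime

def parse_yyyymmdd(value):
    """Strict %Y%m%d parse: the whole string must match the format."""
    try:
        return datetime.strptime(value, "%Y%m%d")
    except ValueError:
        return None

print(parse_yyyymmdd("20121231"))    # 2012-12-31 00:00:00
print(parse_yyyymmdd("20121231.0"))  # None: trailing '.0' doesn't match
```

The old shortcut cast floats through ``int`` before splitting out year/month/day, which is presumably why inputs like these slipped through.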
BUG: fix AttributeError: CompileError object has no attribute orig | diff --git a/doc/source/whatsnew/v1.5.2.rst b/doc/source/whatsnew/v1.5.2.rst
index 6397016d827f2..af74c20e021a0 100644
--- a/doc/source/whatsnew/v1.5.2.rst
+++ b/doc/source/whatsnew/v1.5.2.rst
@@ -29,6 +29,7 @@ Bug fixes
~~~~~~~~~
- Bug in the Copy-on-Write implementation losing track of views in certain chained indexing cases (:issue:`48996`)
- Fixed memory leak in :meth:`.Styler.to_excel` (:issue:`49751`)
+- Fixed AttributeError: CompileError object has no attribute orig (:issue:`50062`)
.. ---------------------------------------------------------------------------
.. _whatsnew_152.other:
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index e3510f71bd0cd..0b66ee84ea0c8 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -1389,9 +1389,11 @@ def insert_records(
# https://stackoverflow.com/a/67358288/6067848
msg = r"""(\(1054, "Unknown column 'inf(e0)?' in 'field list'"\))(?#
)|inf can not be used with MySQL"""
- err_text = str(err.orig)
- if re.search(msg, err_text):
- raise ValueError("inf cannot be used with MySQL") from err
+ if hasattr(err, "orig"):
+ err_text = str(err.orig)
+ if re.search(msg, err_text):
+ raise ValueError("inf cannot be used with MySQL") from err
+ raise err
raise err
| - [x] closes #50062 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/v1.5.2.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50053 | 2022-12-04T13:24:19Z | 2023-01-18T20:27:33Z | null | 2023-04-11T09:12:54Z |
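The guard added in the patch above can be sketched outside SQLAlchemy: only inspect `err.orig` when the attribute exists, otherwise re-raise the error unchanged. The exception classes below are stand-ins for `sqlalchemy.exc.CompileError` / `DBAPIError`, not the real ones:

```python
import re

class CompileError(Exception):
    # Stand-in for sqlalchemy.exc.CompileError, which has no .orig
    pass

class DBAPIError(Exception):
    # Stand-in for sqlalchemy.exc.DBAPIError, which wraps the
    # driver-level error in .orig
    def __init__(self, orig):
        super().__init__(str(orig))
        self.orig = orig

def translate_inf_error(err):
    # Mirror of the patched branch: only touch err.orig when present,
    # so a CompileError no longer triggers an AttributeError.
    msg = r"inf can not be used with MySQL"
    if hasattr(err, "orig") and re.search(msg, str(err.orig)):
        raise ValueError("inf cannot be used with MySQL") from err
    raise err
```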
DEPR: remove Index._is_backward_compat_public_numeric_index | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 9e00278f85cbc..999ef0d7ac68c 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -90,7 +90,6 @@
ensure_platform_int,
is_bool_dtype,
is_categorical_dtype,
- is_complex_dtype,
is_dtype_equal,
is_ea_or_datetimelike_dtype,
is_extension_array_dtype,
@@ -107,7 +106,6 @@
is_scalar,
is_signed_integer_dtype,
is_string_dtype,
- is_unsigned_integer_dtype,
needs_i8_conversion,
pandas_dtype,
validate_all_hashable,
@@ -125,7 +123,6 @@
ABCDatetimeIndex,
ABCMultiIndex,
ABCPeriodIndex,
- ABCRangeIndex,
ABCSeries,
ABCTimedeltaIndex,
)
@@ -392,11 +389,6 @@ def _outer_indexer(
_attributes: list[str] = ["name"]
_can_hold_strings: bool = True
- # Whether this index is a NumericIndex, but not a Int64Index, Float64Index,
- # UInt64Index or RangeIndex. Needed for backwards compat. Remove this attribute and
- # associated code in pandas 2.0.
- _is_backward_compat_public_numeric_index: bool = False
-
@property
def _engine_type(
self,
@@ -446,13 +438,6 @@ def __new__(
elif is_ea_or_datetimelike_dtype(data_dtype):
pass
- # index-like
- elif (
- isinstance(data, Index)
- and data._is_backward_compat_public_numeric_index
- and dtype is None
- ):
- return data._constructor(data, name=name, copy=copy)
elif isinstance(data, (np.ndarray, Index, ABCSeries)):
if isinstance(data, ABCMultiIndex):
@@ -981,34 +966,6 @@ def astype(self, dtype, copy: bool = True):
new_values = astype_array(values, dtype=dtype, copy=copy)
# pass copy=False because any copying will be done in the astype above
- if not self._is_backward_compat_public_numeric_index and not isinstance(
- self, ABCRangeIndex
- ):
- # this block is needed so e.g. Int64Index.astype("int32") returns
- # Int64Index and not a NumericIndex with dtype int32.
- # When Int64Index etc. are removed from the code base, removed this also.
- if (
- isinstance(dtype, np.dtype)
- and is_numeric_dtype(dtype)
- and not is_complex_dtype(dtype)
- ):
- from pandas.core.api import (
- Float64Index,
- Int64Index,
- UInt64Index,
- )
-
- klass: type[Index]
- if is_signed_integer_dtype(dtype):
- klass = Int64Index
- elif is_unsigned_integer_dtype(dtype):
- klass = UInt64Index
- elif is_float_dtype(dtype):
- klass = Float64Index
- else:
- klass = Index
- return klass(new_values, name=self.name, dtype=dtype, copy=False)
-
return Index(new_values, name=self.name, dtype=new_values.dtype, copy=False)
_index_shared_docs[
@@ -5059,10 +5016,6 @@ def _concat(self, to_concat: list[Index], name: Hashable) -> Index:
result = concat_compat(to_concat_vals)
- is_numeric = result.dtype.kind in ["i", "u", "f"]
- if self._is_backward_compat_public_numeric_index and is_numeric:
- return type(self)._simple_new(result, name=name)
-
return Index._with_infer(result, name=name)
def putmask(self, mask, value) -> Index:
@@ -6460,12 +6413,7 @@ def insert(self, loc: int, item) -> Index:
loc = loc if loc >= 0 else loc - 1
new_values[loc] = item
- if self._typ == "numericindex":
- # Use self._constructor instead of Index to retain NumericIndex GH#43921
- # TODO(2.0) can use Index instead of self._constructor
- return self._constructor(new_values, name=self.name)
- else:
- return Index._with_infer(new_values, name=self.name)
+ return Index._with_infer(new_values, name=self.name)
def drop(
self,
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 6c172a2034524..ff24f97106c51 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -21,7 +21,6 @@
from pandas.core.dtypes.common import (
is_categorical_dtype,
is_scalar,
- pandas_dtype,
)
from pandas.core.dtypes.missing import (
is_valid_na_for_dtype,
@@ -274,30 +273,6 @@ def _is_dtype_compat(self, other) -> Categorical:
return other
- @doc(Index.astype)
- def astype(self, dtype: Dtype, copy: bool = True) -> Index:
- from pandas.core.api import NumericIndex
-
- dtype = pandas_dtype(dtype)
-
- categories = self.categories
- # the super method always returns Int64Index, UInt64Index and Float64Index
- # but if the categories are a NumericIndex with dtype float32, we want to
- # return an index with the same dtype as self.categories.
- if categories._is_backward_compat_public_numeric_index:
- assert isinstance(categories, NumericIndex) # mypy complaint fix
- try:
- categories._validate_dtype(dtype)
- except ValueError:
- pass
- else:
- new_values = self._data.astype(dtype, copy=copy)
- # pass copy=False because any copying has been done in the
- # _data.astype call above
- return categories._constructor(new_values, name=self.name, copy=False)
-
- return super().astype(dtype, copy=copy)
-
def equals(self, other: object) -> bool:
"""
Determine if two CategoricalIndex objects contain the same elements.
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index af3ff54bb9e2b..3ea7b30f7e9f1 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -87,7 +87,6 @@ class NumericIndex(Index):
"numeric type",
)
_can_hold_strings = False
- _is_backward_compat_public_numeric_index: bool = True
_engine_types: dict[np.dtype, type[libindex.IndexEngine]] = {
np.dtype(np.int8): libindex.Int8Engine,
@@ -214,12 +213,7 @@ def _ensure_dtype(cls, dtype: Dtype | None) -> np.dtype | None:
# float16 not supported (no indexing engine)
raise NotImplementedError("float16 indexes are not supported")
- if cls._is_backward_compat_public_numeric_index:
- # dtype for NumericIndex
- return dtype
- else:
- # dtype for Int64Index, UInt64Index etc. Needed for backwards compat.
- return cls._default_dtype
+ return dtype
# ----------------------------------------------------------------
# Indexing Methods
@@ -415,7 +409,6 @@ class Float64Index(NumericIndex):
_typ = "float64index"
_default_dtype = np.dtype(np.float64)
_dtype_validation_metadata = (is_float_dtype, "float")
- _is_backward_compat_public_numeric_index: bool = False
@property
def _engine_type(self) -> type[libindex.Float64Engine]:
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index e17a0d070be6a..73e4a51ca3e7c 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -103,7 +103,6 @@ class RangeIndex(NumericIndex):
_typ = "rangeindex"
_dtype_validation_metadata = (is_signed_integer_dtype, "signed integer")
_range: range
- _is_backward_compat_public_numeric_index: bool = False
@property
def _engine_type(self) -> type[libindex.Int64Engine]:
diff --git a/pandas/tests/base/test_unique.py b/pandas/tests/base/test_unique.py
index b1b0479f397b1..624d3d68d37fd 100644
--- a/pandas/tests/base/test_unique.py
+++ b/pandas/tests/base/test_unique.py
@@ -5,7 +5,6 @@
import pandas as pd
import pandas._testing as tm
-from pandas.core.api import NumericIndex
from pandas.tests.base.common import allow_na_ops
@@ -20,9 +19,6 @@ def test_unique(index_or_series_obj):
expected = pd.MultiIndex.from_tuples(unique_values)
expected.names = obj.names
tm.assert_index_equal(result, expected, exact=True)
- elif isinstance(obj, pd.Index) and obj._is_backward_compat_public_numeric_index:
- expected = NumericIndex(unique_values, dtype=obj.dtype)
- tm.assert_index_equal(result, expected, exact=True)
elif isinstance(obj, pd.Index):
expected = pd.Index(unique_values, dtype=obj.dtype)
if is_datetime64tz_dtype(obj.dtype):
@@ -58,10 +54,7 @@ def test_unique_null(null_obj, index_or_series_obj):
unique_values_not_null = [val for val in unique_values_raw if not pd.isnull(val)]
unique_values = [null_obj] + unique_values_not_null
- if isinstance(obj, pd.Index) and obj._is_backward_compat_public_numeric_index:
- expected = NumericIndex(unique_values, dtype=obj.dtype)
- tm.assert_index_equal(result, expected, exact=True)
- elif isinstance(obj, pd.Index):
+ if isinstance(obj, pd.Index):
expected = pd.Index(unique_values, dtype=obj.dtype)
if is_datetime64tz_dtype(obj.dtype):
result = result.normalize()
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 6b0046dbe619c..825c703b7972e 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -624,14 +624,10 @@ def test_map_dictlike(self, mapper, simple_index):
# empty mappable
dtype = None
- if idx._is_backward_compat_public_numeric_index:
- new_index_cls = NumericIndex
- if idx.dtype.kind == "f":
- dtype = idx.dtype
- else:
- new_index_cls = Float64Index
+ if idx.dtype.kind == "f":
+ dtype = idx.dtype
- expected = new_index_cls([np.nan] * len(idx), dtype=dtype)
+ expected = Index([np.nan] * len(idx), dtype=dtype)
result = idx.map(mapper(expected, idx))
tm.assert_index_equal(result, expected)
@@ -880,13 +876,9 @@ def test_insert_na(self, nulls_fixture, simple_index):
expected = Index([index[0], pd.NaT] + list(index[1:]), dtype=object)
else:
expected = Index([index[0], np.nan] + list(index[1:]))
-
- if index._is_backward_compat_public_numeric_index:
- # GH#43921 we preserve NumericIndex
- if index.dtype.kind == "f":
- expected = NumericIndex(expected, dtype=index.dtype)
- else:
- expected = NumericIndex(expected)
+ # GH#43921 we preserve float dtype
+ if index.dtype.kind == "f":
+ expected = Index(expected, dtype=index.dtype)
result = index.insert(1, na_val)
tm.assert_index_equal(result, expected, exact=True)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index af15cbc2f7929..1012734bab234 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -18,6 +18,8 @@
)
from pandas.util._test_decorators import async_mark
+from pandas.core.dtypes.common import is_numeric_dtype
+
import pandas as pd
from pandas import (
CategoricalIndex,
@@ -592,13 +594,11 @@ def test_map_dictlike(self, index, mapper, request):
if index.empty:
# to match proper result coercion for uints
expected = Index([])
- elif index._is_backward_compat_public_numeric_index:
+ elif is_numeric_dtype(index.dtype):
expected = index._constructor(rng, dtype=index.dtype)
elif type(index) is Index and index.dtype != object:
# i.e. EA-backed, for now just Nullable
expected = Index(rng, dtype=index.dtype)
- elif index.dtype.kind == "u":
- expected = Index(rng, dtype=index.dtype)
else:
expected = Index(rng)
| Removes the attribute `Index._is_backward_compat_public_numeric_index` and related stuff. This is the last PR before actually removing Int64/UInt64/Float64Index from the code base.
This PR builds on top of #49560 (which is not yet merged), so please exclude the first commit from review of this PR.
I'd also truly appreciate it if someone could review #49560, as it's quite a heavy PR with lots of dtype changes. | https://api.github.com/repos/pandas-dev/pandas/pulls/50052 | 2022-12-04T11:43:44Z | 2023-01-16T17:44:37Z | 2023-01-16T17:44:37Z | 2023-01-16T17:55:45Z
DOC: Add tips on where to locate test | diff --git a/doc/source/development/contributing_codebase.rst b/doc/source/development/contributing_codebase.rst
index 91f3d51460f99..b05f026bbbb44 100644
--- a/doc/source/development/contributing_codebase.rst
+++ b/doc/source/development/contributing_codebase.rst
@@ -338,7 +338,22 @@ Writing tests
All tests should go into the ``tests`` subdirectory of the specific package.
This folder contains many current examples of tests, and we suggest looking to these for
-inspiration. Ideally, there should be one, and only one, obvious place for a test to reside.
+inspiration.
+
+As a general tip, you can use the search functionality in your integrated development
+environment (IDE) or the git grep command in a terminal to find test files in which the method
+is called. If you are unsure of the best location to put your test, take your best guess,
+but note that reviewers may request that you move the test to a different location.
+
+To use git grep, you can run the following command in a terminal:
+
+``git grep "function_name("``
+
+This will search through all files in your repository for the text ``function_name(``.
+This can be a useful way to quickly locate the function in the
+codebase and determine the best location to add a test for it.
+
+Ideally, there should be one, and only one, obvious place for a test to reside.
Until we reach that ideal, these are some rules of thumb for where a test should
be located.
| Closes #50014 | https://api.github.com/repos/pandas-dev/pandas/pulls/50050 | 2022-12-04T11:28:45Z | 2022-12-06T21:07:41Z | 2022-12-06T21:07:41Z | 2022-12-06T21:07:50Z |
DOC: Improve groupby().ngroup() explanation for missing groups | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 659ca228bdcb0..9a813e866e8d0 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -3212,6 +3212,9 @@ def ngroup(self, ascending: bool = True):
would be seen when iterating over the groupby object, not the
order they are first observed.
+ Groups with missing keys (where `pd.isna()` is True) will be labeled with `NaN`
+ and will be skipped from the count.
+
Parameters
----------
ascending : bool, default True
@@ -3228,38 +3231,38 @@ def ngroup(self, ascending: bool = True):
Examples
--------
- >>> df = pd.DataFrame({"A": list("aaabba")})
+ >>> df = pd.DataFrame({"color": ["red", None, "red", "blue", "blue", "red"]})
>>> df
- A
- 0 a
- 1 a
- 2 a
- 3 b
- 4 b
- 5 a
- >>> df.groupby('A').ngroup()
- 0 0
- 1 0
- 2 0
- 3 1
- 4 1
- 5 0
- dtype: int64
- >>> df.groupby('A').ngroup(ascending=False)
+ color
+ 0 red
+ 1 None
+ 2 red
+ 3 blue
+ 4 blue
+ 5 red
+ >>> df.groupby("color").ngroup()
+ 0 1.0
+ 1 NaN
+ 2 1.0
+ 3 0.0
+ 4 0.0
+ 5 1.0
+ dtype: float64
+ >>> df.groupby("color", dropna=False).ngroup()
0 1
- 1 1
+ 1 2
2 1
3 0
4 0
5 1
dtype: int64
- >>> df.groupby(["A", [1,1,2,3,2,1]]).ngroup()
- 0 0
+ >>> df.groupby("color", dropna=False).ngroup(ascending=False)
+ 0 1
1 0
2 1
- 3 3
+ 3 2
4 2
- 5 0
+ 5 1
dtype: int64
"""
with self._group_selection_context():
| The exact prose and examples are open to improvements if you have ideas.
I figure this didn't require a separate issue, please let me know if I'm missing some step in the process. | https://api.github.com/repos/pandas-dev/pandas/pulls/50049 | 2022-12-04T04:43:20Z | 2023-01-03T21:14:37Z | 2023-01-03T21:14:37Z | 2023-01-04T02:45:52Z |
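The labeling rule the updated docstring describes can be sketched in plain Python (`ngroup_sketch` is a hypothetical helper, not pandas code): groups are numbered in sorted order, and with `dropna=True` missing keys get no number at all.

```python
def ngroup_sketch(keys, dropna=True):
    # Number each row by its group, in sorted group order; missing
    # keys (None here, NaN in pandas) are skipped from the count
    # unless dropna=False, in which case they are numbered last.
    groups = sorted({k for k in keys if k is not None})
    if not dropna and None in keys:
        groups.append(None)
    numbering = {k: i for i, k in enumerate(groups)}
    return [numbering.get(k) for k in keys]
```

Running it on the docstring's color column reproduces the labels shown there (with `None` where pandas prints `NaN`).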
ENH: Add nullable keyword to read_sql | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index d6e0bb2ae0830..2db5f977721d8 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -37,6 +37,7 @@ The ``use_nullable_dtypes`` keyword argument has been expanded to the following
* :func:`read_csv`
* :func:`read_excel`
+* :func:`read_sql`
Additionally a new global configuration, ``io.nullable_backend`` can now be used in conjunction with the parameter ``use_nullable_dtypes=True`` in the following functions
to select the nullable dtypes implementation.
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 07fab0080a747..9bdfd7991689b 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -31,9 +31,11 @@
)
from pandas.core.dtypes.common import (
is_1d_only_ea_dtype,
+ is_bool_dtype,
is_datetime_or_timedelta_dtype,
is_dtype_equal,
is_extension_array_dtype,
+ is_float_dtype,
is_integer_dtype,
is_list_like,
is_named_tuple,
@@ -49,7 +51,13 @@
algorithms,
common as com,
)
-from pandas.core.arrays import ExtensionArray
+from pandas.core.arrays import (
+ BooleanArray,
+ ExtensionArray,
+ FloatingArray,
+ IntegerArray,
+)
+from pandas.core.arrays.string_ import StringDtype
from pandas.core.construction import (
ensure_wrapped_if_datetimelike,
extract_array,
@@ -900,7 +908,7 @@ def _finalize_columns_and_data(
raise ValueError(err) from err
if len(contents) and contents[0].dtype == np.object_:
- contents = _convert_object_array(contents, dtype=dtype)
+ contents = convert_object_array(contents, dtype=dtype)
return contents, columns
@@ -963,8 +971,11 @@ def _validate_or_indexify_columns(
return columns
-def _convert_object_array(
- content: list[npt.NDArray[np.object_]], dtype: DtypeObj | None
+def convert_object_array(
+ content: list[npt.NDArray[np.object_]],
+ dtype: DtypeObj | None,
+ use_nullable_dtypes: bool = False,
+ coerce_float: bool = False,
) -> list[ArrayLike]:
"""
Internal function to convert object array.
@@ -973,20 +984,37 @@ def _convert_object_array(
----------
content: List[np.ndarray]
dtype: np.dtype or ExtensionDtype
+ use_nullable_dtypes: Controls if nullable dtypes are returned.
+ coerce_float: Cast floats that are integers to int.
Returns
-------
List[ArrayLike]
"""
# provide soft conversion of object dtypes
+
def convert(arr):
if dtype != np.dtype("O"):
- arr = lib.maybe_convert_objects(arr)
+ arr = lib.maybe_convert_objects(
+ arr,
+ try_float=coerce_float,
+ convert_to_nullable_dtype=use_nullable_dtypes,
+ )
if dtype is None:
if arr.dtype == np.dtype("O"):
# i.e. maybe_convert_objects didn't convert
arr = maybe_infer_to_datetimelike(arr)
+ if use_nullable_dtypes and arr.dtype == np.dtype("O"):
+ arr = StringDtype().construct_array_type()._from_sequence(arr)
+ elif use_nullable_dtypes and isinstance(arr, np.ndarray):
+ if is_integer_dtype(arr.dtype):
+ arr = IntegerArray(arr, np.zeros(arr.shape, dtype=np.bool_))
+ elif is_bool_dtype(arr.dtype):
+ arr = BooleanArray(arr, np.zeros(arr.shape, dtype=np.bool_))
+ elif is_float_dtype(arr.dtype):
+ arr = FloatingArray(arr, np.isnan(arr))
+
elif isinstance(dtype, ExtensionDtype):
# TODO: test(s) that get here
# TODO: try to de-duplicate this convert function with
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index e3510f71bd0cd..4c1dca180c6e9 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -58,6 +58,7 @@
)
from pandas.core.base import PandasObject
import pandas.core.common as com
+from pandas.core.internals.construction import convert_object_array
from pandas.core.tools.datetimes import to_datetime
if TYPE_CHECKING:
@@ -139,6 +140,25 @@ def _parse_date_columns(data_frame, parse_dates):
return data_frame
+def _convert_arrays_to_dataframe(
+ data,
+ columns,
+ coerce_float: bool = True,
+ use_nullable_dtypes: bool = False,
+) -> DataFrame:
+ content = lib.to_object_array_tuples(data)
+ arrays = convert_object_array(
+ list(content.T),
+ dtype=None,
+ coerce_float=coerce_float,
+ use_nullable_dtypes=use_nullable_dtypes,
+ )
+ if arrays:
+ return DataFrame(dict(zip(columns, arrays)))
+ else:
+ return DataFrame(columns=columns)
+
+
def _wrap_result(
data,
columns,
@@ -146,9 +166,12 @@ def _wrap_result(
coerce_float: bool = True,
parse_dates=None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
):
"""Wrap result set of query in a DataFrame."""
- frame = DataFrame.from_records(data, columns=columns, coerce_float=coerce_float)
+ frame = _convert_arrays_to_dataframe(
+ data, columns, coerce_float, use_nullable_dtypes
+ )
if dtype:
frame = frame.astype(dtype)
@@ -156,7 +179,7 @@ def _wrap_result(
frame = _parse_date_columns(frame, parse_dates)
if index_col is not None:
- frame.set_index(index_col, inplace=True)
+ frame = frame.set_index(index_col)
return frame
@@ -418,6 +441,7 @@ def read_sql(
parse_dates=...,
columns: list[str] = ...,
chunksize: None = ...,
+ use_nullable_dtypes: bool = ...,
) -> DataFrame:
...
@@ -432,6 +456,7 @@ def read_sql(
parse_dates=...,
columns: list[str] = ...,
chunksize: int = ...,
+ use_nullable_dtypes: bool = ...,
) -> Iterator[DataFrame]:
...
@@ -445,6 +470,7 @@ def read_sql(
parse_dates=None,
columns: list[str] | None = None,
chunksize: int | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
"""
Read SQL query or database table into a DataFrame.
@@ -492,6 +518,12 @@ def read_sql(
chunksize : int, default None
If specified, return an iterator where `chunksize` is the
number of rows to include in each chunk.
+ use_nullable_dtypes : bool = False
+ Whether to use nullable dtypes as default when reading data. If
+ set to True, nullable dtypes are used for all dtypes that have a nullable
+ implementation, even if no nulls are present.
+
+ .. versionadded:: 2.0
Returns
-------
@@ -571,6 +603,7 @@ def read_sql(
coerce_float=coerce_float,
parse_dates=parse_dates,
chunksize=chunksize,
+ use_nullable_dtypes=use_nullable_dtypes,
)
try:
@@ -587,6 +620,7 @@ def read_sql(
parse_dates=parse_dates,
columns=columns,
chunksize=chunksize,
+ use_nullable_dtypes=use_nullable_dtypes,
)
else:
return pandas_sql.read_query(
@@ -596,6 +630,7 @@ def read_sql(
coerce_float=coerce_float,
parse_dates=parse_dates,
chunksize=chunksize,
+ use_nullable_dtypes=use_nullable_dtypes,
)
@@ -983,6 +1018,7 @@ def _query_iterator(
columns,
coerce_float: bool = True,
parse_dates=None,
+ use_nullable_dtypes: bool = False,
):
"""Return generator through chunked result set."""
has_read_data = False
@@ -996,11 +1032,13 @@ def _query_iterator(
break
has_read_data = True
- self.frame = DataFrame.from_records(
- data, columns=columns, coerce_float=coerce_float
+ self.frame = _convert_arrays_to_dataframe(
+ data, columns, coerce_float, use_nullable_dtypes
)
- self._harmonize_columns(parse_dates=parse_dates)
+ self._harmonize_columns(
+ parse_dates=parse_dates, use_nullable_dtypes=use_nullable_dtypes
+ )
if self.index is not None:
self.frame.set_index(self.index, inplace=True)
@@ -1013,6 +1051,7 @@ def read(
parse_dates=None,
columns=None,
chunksize=None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
from sqlalchemy import select
@@ -1034,14 +1073,17 @@ def read(
column_names,
coerce_float=coerce_float,
parse_dates=parse_dates,
+ use_nullable_dtypes=use_nullable_dtypes,
)
else:
data = result.fetchall()
- self.frame = DataFrame.from_records(
- data, columns=column_names, coerce_float=coerce_float
+ self.frame = _convert_arrays_to_dataframe(
+ data, column_names, coerce_float, use_nullable_dtypes
)
- self._harmonize_columns(parse_dates=parse_dates)
+ self._harmonize_columns(
+ parse_dates=parse_dates, use_nullable_dtypes=use_nullable_dtypes
+ )
if self.index is not None:
self.frame.set_index(self.index, inplace=True)
@@ -1124,7 +1166,9 @@ def _create_table_setup(self):
meta = MetaData()
return Table(self.name, meta, *columns, schema=schema)
- def _harmonize_columns(self, parse_dates=None) -> None:
+ def _harmonize_columns(
+ self, parse_dates=None, use_nullable_dtypes: bool = False
+ ) -> None:
"""
Make the DataFrame's column types align with the SQL table
column types.
@@ -1164,11 +1208,11 @@ def _harmonize_columns(self, parse_dates=None) -> None:
# Convert tz-aware Datetime SQL columns to UTC
utc = col_type is DatetimeTZDtype
self.frame[col_name] = _handle_date_column(df_col, utc=utc)
- elif col_type is float:
+ elif not use_nullable_dtypes and col_type is float:
# floats support NA, can always convert!
self.frame[col_name] = df_col.astype(col_type, copy=False)
- elif len(df_col) == df_col.count():
+ elif not use_nullable_dtypes and len(df_col) == df_col.count():
# No NA values, can convert ints and bools
if col_type is np.dtype("int64") or col_type is bool:
self.frame[col_name] = df_col.astype(col_type, copy=False)
@@ -1290,6 +1334,7 @@ def read_table(
columns=None,
schema: str | None = None,
chunksize: int | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
raise NotImplementedError
@@ -1303,6 +1348,7 @@ def read_query(
params=None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
pass
@@ -1466,6 +1512,7 @@ def read_table(
columns=None,
schema: str | None = None,
chunksize: int | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
"""
Read SQL database table into a DataFrame.
@@ -1498,6 +1545,12 @@ def read_table(
chunksize : int, default None
If specified, return an iterator where `chunksize` is the number
of rows to include in each chunk.
+ use_nullable_dtypes : bool = False
+ Whether to use nullable dtypes as default when reading data. If
+ set to True, nullable dtypes are used for all dtypes that have a nullable
+ implementation, even if no nulls are present.
+
+ .. versionadded:: 2.0
Returns
-------
@@ -1516,6 +1569,7 @@ def read_table(
parse_dates=parse_dates,
columns=columns,
chunksize=chunksize,
+ use_nullable_dtypes=use_nullable_dtypes,
)
@staticmethod
@@ -1527,6 +1581,7 @@ def _query_iterator(
coerce_float: bool = True,
parse_dates=None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
):
"""Return generator through chunked result set"""
has_read_data = False
@@ -1540,6 +1595,7 @@ def _query_iterator(
index_col=index_col,
coerce_float=coerce_float,
parse_dates=parse_dates,
+ use_nullable_dtypes=use_nullable_dtypes,
)
break
@@ -1551,6 +1607,7 @@ def _query_iterator(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
def read_query(
@@ -1562,6 +1619,7 @@ def read_query(
params=None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
"""
Read SQL query into a DataFrame.
@@ -1623,6 +1681,7 @@ def read_query(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
else:
data = result.fetchall()
@@ -1633,6 +1692,7 @@ def read_query(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
return frame
@@ -2089,6 +2149,7 @@ def _query_iterator(
coerce_float: bool = True,
parse_dates=None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
):
"""Return generator through chunked result set"""
has_read_data = False
@@ -2112,6 +2173,7 @@ def _query_iterator(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
def read_query(
@@ -2123,6 +2185,7 @@ def read_query(
params=None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
args = _convert_params(sql, params)
@@ -2138,6 +2201,7 @@ def read_query(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
else:
data = self._fetchall_as_list(cursor)
@@ -2150,6 +2214,7 @@ def read_query(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
return frame
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 394fceb69b788..db37b1785af5c 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -53,6 +53,10 @@
to_timedelta,
)
import pandas._testing as tm
+from pandas.core.arrays import (
+ ArrowStringArray,
+ StringArray,
+)
from pandas.io import sql
from pandas.io.sql import (
@@ -2266,6 +2270,94 @@ def test_get_engine_auto_error_message(self):
pass
# TODO(GH#36893) fill this in when we add more engines
+ @pytest.mark.parametrize("storage", ["pyarrow", "python"])
+ def test_read_sql_nullable_dtypes(self, storage):
+ # GH#50048
+ table = "test"
+ df = self.nullable_data()
+ df.to_sql(table, self.conn, index=False, if_exists="replace")
+
+ with pd.option_context("mode.string_storage", storage):
+ result = pd.read_sql(
+ f"Select * from {table}", self.conn, use_nullable_dtypes=True
+ )
+ expected = self.nullable_expected(storage)
+ tm.assert_frame_equal(result, expected)
+
+ with pd.option_context("mode.string_storage", storage):
+ iterator = pd.read_sql(
+ f"Select * from {table}",
+ self.conn,
+ use_nullable_dtypes=True,
+ chunksize=3,
+ )
+ expected = self.nullable_expected(storage)
+ for result in iterator:
+ tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize("storage", ["pyarrow", "python"])
+ def test_read_sql_nullable_dtypes_table(self, storage):
+ # GH#50048
+ table = "test"
+ df = self.nullable_data()
+ df.to_sql(table, self.conn, index=False, if_exists="replace")
+
+ with pd.option_context("mode.string_storage", storage):
+ result = pd.read_sql(table, self.conn, use_nullable_dtypes=True)
+ expected = self.nullable_expected(storage)
+ tm.assert_frame_equal(result, expected)
+
+ with pd.option_context("mode.string_storage", storage):
+ iterator = pd.read_sql(
+ f"Select * from {table}",
+ self.conn,
+ use_nullable_dtypes=True,
+ chunksize=3,
+ )
+ expected = self.nullable_expected(storage)
+ for result in iterator:
+ tm.assert_frame_equal(result, expected)
+
+ def nullable_data(self) -> DataFrame:
+ return DataFrame(
+ {
+ "a": Series([1, np.nan, 3], dtype="Int64"),
+ "b": Series([1, 2, 3], dtype="Int64"),
+ "c": Series([1.5, np.nan, 2.5], dtype="Float64"),
+ "d": Series([1.5, 2.0, 2.5], dtype="Float64"),
+ "e": [True, False, None],
+ "f": [True, False, True],
+ "g": ["a", "b", "c"],
+ "h": ["a", "b", None],
+ }
+ )
+
+ def nullable_expected(self, storage) -> DataFrame:
+
+ string_array: StringArray | ArrowStringArray
+ string_array_na: StringArray | ArrowStringArray
+ if storage == "python":
+ string_array = StringArray(np.array(["a", "b", "c"], dtype=np.object_))
+ string_array_na = StringArray(np.array(["a", "b", pd.NA], dtype=np.object_))
+
+ else:
+ pa = pytest.importorskip("pyarrow")
+ string_array = ArrowStringArray(pa.array(["a", "b", "c"]))
+ string_array_na = ArrowStringArray(pa.array(["a", "b", None]))
+
+ return DataFrame(
+ {
+ "a": Series([1, np.nan, 3], dtype="Int64"),
+ "b": Series([1, 2, 3], dtype="Int64"),
+ "c": Series([1.5, np.nan, 2.5], dtype="Float64"),
+ "d": Series([1.5, 2.0, 2.5], dtype="Float64"),
+ "e": Series([True, False, pd.NA], dtype="boolean"),
+ "f": Series([True, False, True], dtype="boolean"),
+ "g": string_array,
+ "h": string_array_na,
+ }
+ )
+
class TestSQLiteAlchemy(_TestSQLAlchemy):
"""
@@ -2349,6 +2441,14 @@ class Test(BaseModel):
assert list(df.columns) == ["id", "string_column"]
+ def nullable_expected(self, storage) -> DataFrame:
+ return super().nullable_expected(storage).astype({"e": "Int64", "f": "Int64"})
+
+ @pytest.mark.parametrize("storage", ["pyarrow", "python"])
+ def test_read_sql_nullable_dtypes_table(self, storage):
+ # GH#50048 Not supported for sqlite
+ pass
+
@pytest.mark.db
class TestMySQLAlchemy(_TestSQLAlchemy):
@@ -2376,6 +2476,9 @@ def setup_driver(cls):
def test_default_type_conversion(self):
pass
+ def nullable_expected(self, storage) -> DataFrame:
+ return super().nullable_expected(storage).astype({"e": "Int64", "f": "Int64"})
+
@pytest.mark.db
class TestPostgreSQLAlchemy(_TestSQLAlchemy):
| Sits on top of #50047
Functionality-wise, this should work now.
More broadly, I am not sure that this is the best approach we could take here. Since the ``convert_to_nullable_dtype`` keyword in ``lib.maybe_convert_objects`` is not used anywhere else right now, we could also make this strict and return the appropriate Array from the Cython code, not only when nulls are present. This would avoid the re-cast in the non-Cython code part. | https://api.github.com/repos/pandas-dev/pandas/pulls/50048 | 2022-12-03T22:01:39Z | 2022-12-13T02:15:44Z | 2022-12-13T02:15:44Z | 2023-05-03T22:47:17Z
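The re-cast discussed above amounts to pairing plain values with a boolean NA mask, which is the pair a nullable (masked) array is built from (e.g. `IntegerArray(arr, mask)` in the diff). A minimal pure-Python sketch of that split (`split_na_mask` is a hypothetical name, not a pandas API):

```python
def split_na_mask(values, na_fill=0):
    # Separate raw values from a parallel NA mask; NA slots keep a
    # filler value, and the mask records where they were.
    data, mask = [], []
    for v in values:
        is_na = v is None or (isinstance(v, float) and v != v)  # NaN check
        mask.append(is_na)
        data.append(na_fill if is_na else v)
    return data, mask
```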
ENH: maybe_convert_objects add boolean support with NA | diff --git a/pandas/_libs/lib.pyi b/pandas/_libs/lib.pyi
index 3cbc04fb2f5cd..9bc02e90ebb9e 100644
--- a/pandas/_libs/lib.pyi
+++ b/pandas/_libs/lib.pyi
@@ -75,7 +75,7 @@ def maybe_convert_objects(
convert_timedelta: Literal[False] = ...,
convert_period: Literal[False] = ...,
convert_interval: Literal[False] = ...,
- convert_to_nullable_integer: Literal[False] = ...,
+ convert_to_nullable_dtype: Literal[False] = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> npt.NDArray[np.object_ | np.number]: ...
@overload # both convert_datetime and convert_to_nullable_integer False -> np.ndarray
@@ -88,7 +88,7 @@ def maybe_convert_objects(
convert_timedelta: bool = ...,
convert_period: Literal[False] = ...,
convert_interval: Literal[False] = ...,
- convert_to_nullable_integer: Literal[False] = ...,
+ convert_to_nullable_dtype: Literal[False] = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> np.ndarray: ...
@overload
@@ -101,7 +101,7 @@ def maybe_convert_objects(
convert_timedelta: bool = ...,
convert_period: bool = ...,
convert_interval: bool = ...,
- convert_to_nullable_integer: Literal[True] = ...,
+ convert_to_nullable_dtype: Literal[True] = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> ArrayLike: ...
@overload
@@ -114,7 +114,7 @@ def maybe_convert_objects(
convert_timedelta: bool = ...,
convert_period: bool = ...,
convert_interval: bool = ...,
- convert_to_nullable_integer: bool = ...,
+ convert_to_nullable_dtype: bool = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> ArrayLike: ...
@overload
@@ -127,7 +127,7 @@ def maybe_convert_objects(
convert_timedelta: bool = ...,
convert_period: Literal[True] = ...,
convert_interval: bool = ...,
- convert_to_nullable_integer: bool = ...,
+ convert_to_nullable_dtype: bool = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> ArrayLike: ...
@overload
@@ -140,7 +140,7 @@ def maybe_convert_objects(
convert_timedelta: bool = ...,
convert_period: bool = ...,
convert_interval: bool = ...,
- convert_to_nullable_integer: bool = ...,
+ convert_to_nullable_dtype: bool = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> ArrayLike: ...
@overload
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 81e0f3de748ff..462537af3383a 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -1309,10 +1309,14 @@ cdef class Seen:
@property
def is_bool(self):
# i.e. not (anything but bool)
- return not (
- self.datetime_ or self.datetimetz_ or self.timedelta_ or self.nat_
- or self.period_ or self.interval_
- or self.numeric_ or self.nan_ or self.null_ or self.object_
+ return self.is_bool_or_na and not (self.nan_ or self.null_)
+
+ @property
+ def is_bool_or_na(self):
+ # i.e. not (anything but bool or missing values)
+ return self.bool_ and not (
+ self.datetime_ or self.datetimetz_ or self.nat_ or self.timedelta_
+ or self.period_ or self.interval_ or self.numeric_ or self.object_
)
@@ -2335,7 +2339,7 @@ def maybe_convert_objects(ndarray[object] objects,
bint convert_timedelta=False,
bint convert_period=False,
bint convert_interval=False,
- bint convert_to_nullable_integer=False,
+ bint convert_to_nullable_dtype=False,
object dtype_if_all_nat=None) -> "ArrayLike":
"""
Type inference function-- convert object array to proper dtype
@@ -2362,9 +2366,9 @@ def maybe_convert_objects(ndarray[object] objects,
convert_interval : bool, default False
If an array-like object contains only Interval objects (with matching
dtypes and closedness) or NaN, whether to convert to IntervalArray.
- convert_to_nullable_integer : bool, default False
- If an array-like object contains only integer values (and NaN) is
- encountered, whether to convert and return an IntegerArray.
+ convert_to_nullable_dtype : bool, default False
+ If an array-like object containing only integer or boolean values (and NaN) is
+ encountered, whether to convert and return a BooleanArray/IntegerArray.
dtype_if_all_nat : np.dtype, ExtensionDtype, or None, default None
Dtype to cast to if we have all-NaT.
@@ -2446,7 +2450,7 @@ def maybe_convert_objects(ndarray[object] objects,
seen.int_ = True
floats[i] = <float64_t>val
complexes[i] = <double complex>val
- if not seen.null_ or convert_to_nullable_integer:
+ if not seen.null_ or convert_to_nullable_dtype:
seen.saw_int(val)
if ((seen.uint_ and seen.sint_) or
@@ -2606,6 +2610,9 @@ def maybe_convert_objects(ndarray[object] objects,
if seen.is_bool:
# is_bool property rules out everything else
return bools.view(np.bool_)
+ elif convert_to_nullable_dtype and seen.is_bool_or_na:
+ from pandas.core.arrays import BooleanArray
+ return BooleanArray(bools.view(np.bool_), mask)
seen.object_ = True
if not seen.object_:
@@ -2617,7 +2624,7 @@ def maybe_convert_objects(ndarray[object] objects,
elif seen.float_:
result = floats
elif seen.int_ or seen.uint_:
- if convert_to_nullable_integer:
+ if convert_to_nullable_dtype:
from pandas.core.arrays import IntegerArray
if seen.uint_:
result = IntegerArray(uints, mask)
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 015c121ca684a..c9b61afb5eb25 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -859,7 +859,7 @@ def test_maybe_convert_objects_timedelta64_nat(self):
def test_maybe_convert_objects_nullable_integer(self, exp):
# GH27335
arr = np.array([2, np.NaN], dtype=object)
- result = lib.maybe_convert_objects(arr, convert_to_nullable_integer=True)
+ result = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
tm.assert_extension_array_equal(result, exp)
@@ -869,7 +869,7 @@ def test_maybe_convert_objects_nullable_integer(self, exp):
def test_maybe_convert_objects_nullable_none(self, dtype, val):
# GH#50043
arr = np.array([val, None, 3], dtype="object")
- result = lib.maybe_convert_objects(arr, convert_to_nullable_integer=True)
+ result = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
expected = IntegerArray(
np.array([val, 0, 3], dtype=dtype), np.array([False, True, False])
)
@@ -930,6 +930,28 @@ def test_maybe_convert_objects_bool_nan(self):
out = lib.maybe_convert_objects(ind.values, safe=1)
tm.assert_numpy_array_equal(out, exp)
+ def test_maybe_convert_objects_nullable_boolean(self):
+ # GH50047
+ arr = np.array([True, False], dtype=object)
+ exp = np.array([True, False])
+ out = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
+ tm.assert_numpy_array_equal(out, exp)
+
+ arr = np.array([True, False, pd.NaT], dtype=object)
+ exp = np.array([True, False, pd.NaT], dtype=object)
+ out = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
+ tm.assert_numpy_array_equal(out, exp)
+
+ @pytest.mark.parametrize("val", [None, np.nan])
+ def test_maybe_convert_objects_nullable_boolean_na(self, val):
+ # GH50047
+ arr = np.array([True, False, val], dtype=object)
+ exp = BooleanArray(
+ np.array([True, False, False]), np.array([False, False, True])
+ )
+ out = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
+ tm.assert_extension_array_equal(out, exp)
+
@pytest.mark.parametrize(
"data0",
[
| This is necessary for read_sql and nullables.
I could not find any usage of ``convert_to_nullable_integer``, so we should be able to repurpose the keyword.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50047 | 2022-12-03T20:52:09Z | 2022-12-09T00:20:58Z | 2022-12-09T00:20:58Z | 2022-12-09T09:24:41Z |
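The ``Seen.is_bool`` / ``is_bool_or_na`` split added in the diff above can be paraphrased in plain Python. This is a simplified, hypothetical sketch — the real Cython code tracks many more flags and also handles ``pd.NaT``, periods, intervals, and numerics:

```python
# Sketch of the bool-inference decision: scan an object sequence, record what
# kinds of values were seen, then pick the narrowest fitting representation.

def classify_bools(objects):
    """Return 'bool', 'bool_or_na', or 'object' for a scanned sequence."""
    saw_bool = saw_na = saw_other = False
    for obj in objects:
        if isinstance(obj, bool):
            saw_bool = True
        elif obj is None or obj != obj:   # None or float('nan')
            saw_na = True
        else:
            saw_other = True
    if saw_bool and not (saw_na or saw_other):
        return "bool"                     # plain numpy bool array
    if saw_bool and not saw_other:
        return "bool_or_na"               # BooleanArray when opted in
    return "object"
```

`classify_bools([True, None])` returns `"bool_or_na"` — the case that now produces a ``BooleanArray`` when ``convert_to_nullable_dtype=True``, instead of falling back to object dtype.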
Docker Hub Sample images | diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml
index 280b6ed601f08..c4fc3575fdfa8 100644
--- a/.github/workflows/code-checks.yml
+++ b/.github/workflows/code-checks.yml
@@ -133,33 +133,6 @@ jobs:
asv machine --yes
asv run --quick --dry-run --strict --durations=30 --python=same
- build_docker_dev_environment:
- name: Build Docker Dev Environment
- runs-on: ubuntu-22.04
- defaults:
- run:
- shell: bash -el {0}
-
- concurrency:
- # https://github.community/t/concurrecy-not-work-for-push/183068/7
- group: ${{ github.event_name == 'push' && github.run_number || github.ref }}-build_docker_dev_environment
- cancel-in-progress: true
-
- steps:
- - name: Clean up dangling images
- run: docker image prune -f
-
- - name: Checkout
- uses: actions/checkout@v3
- with:
- fetch-depth: 0
-
- - name: Build image
- run: docker build --pull --no-cache --tag pandas-dev-env .
-
- - name: Show environment
- run: docker run --rm pandas-dev-env python -c "import pandas as pd; print(pd.show_versions())"
-
requirements-dev-text-installable:
name: Test install requirements-dev.txt
runs-on: ubuntu-22.04
diff --git a/ci/dockerfiles/Dockerfile.alpine b/ci/dockerfiles/Dockerfile.alpine
new file mode 100644
index 0000000000000..17c08a1589b20
--- /dev/null
+++ b/ci/dockerfiles/Dockerfile.alpine
@@ -0,0 +1,10 @@
+FROM python:3.10.8-alpine
+
+RUN apk update && apk upgrade
+RUN apk add gcc g++ libc-dev
+
+COPY requirements-minimal.txt /tmp
+RUN python -m pip install -r /tmp/requirements-minimal.txt
+
+WORKDIR /home/pandas
+CMD ["/bin/sh"]
diff --git a/ci/dockerfiles/Dockerfile.mamba-all b/ci/dockerfiles/Dockerfile.mamba-all
new file mode 100644
index 0000000000000..dda31596d2a39
--- /dev/null
+++ b/ci/dockerfiles/Dockerfile.mamba-all
@@ -0,0 +1,13 @@
+FROM quay.io/condaforge/mambaforge
+
+RUN apt update && apt upgrade -y
+RUN DEBIAN_FRONTEND=noninteractive apt install -y tzdata
+
+RUN mamba env create -f \
+ https://raw.githubusercontent.com/pandas-dev/pandas/main/environment.yml
+
+RUN mamba init
+RUN echo "\nmamba activate pandas-dev" >> ~/.bashrc
+RUN mamba clean --all -qy
+
+WORKDIR /home/pandas
diff --git a/ci/dockerfiles/Dockerfile.mamba-minimal b/ci/dockerfiles/Dockerfile.mamba-minimal
new file mode 100644
index 0000000000000..0e49053f536b5
--- /dev/null
+++ b/ci/dockerfiles/Dockerfile.mamba-minimal
@@ -0,0 +1,21 @@
+FROM quay.io/condaforge/mambaforge
+
+RUN apt update && apt upgrade -y
+RUN DEBIAN_FRONTEND=noninteractive apt install -y tzdata
+
+RUN mamba create -n pandas-dev \
+ cython \
+ hypothesis \
+ numpy \
+ pytest \
+ pytest-asyncio \
+ python=3.10.8 \
+ pytz \
+ python-dateutil \
+ versioneer
+
+RUN mamba init
+RUN echo "\nmamba activate pandas-dev" >> ~/.bashrc
+RUN mamba clean --all -qy
+
+WORKDIR /home/pandas
diff --git a/Dockerfile b/ci/dockerfiles/Dockerfile.pip-all
similarity index 59%
rename from Dockerfile
rename to ci/dockerfiles/Dockerfile.pip-all
index 7230dcab20f6e..d22a02593e4e0 100644
--- a/Dockerfile
+++ b/ci/dockerfiles/Dockerfile.pip-all
@@ -1,13 +1,12 @@
FROM python:3.10.8
-WORKDIR /home/pandas
-RUN apt-get update && apt-get -y upgrade
-RUN apt-get install -y build-essential
+RUN apt update && apt -y upgrade
# hdf5 needed for pytables installation
RUN apt-get install -y libhdf5-dev
-RUN python -m pip install --upgrade pip
-RUN python -m pip install \
+RUN python -m pip install --use-deprecated=legacy-resolver \
-r https://raw.githubusercontent.com/pandas-dev/pandas/main/requirements-dev.txt
+
+WORKDIR /home/pandas
CMD ["/bin/bash"]
diff --git a/ci/dockerfiles/Dockerfile.pip-minimal b/ci/dockerfiles/Dockerfile.pip-minimal
new file mode 100644
index 0000000000000..b88ab50b89136
--- /dev/null
+++ b/ci/dockerfiles/Dockerfile.pip-minimal
@@ -0,0 +1,10 @@
+FROM python:3.10.8-slim
+
+RUN apt update && apt upgrade -y
+RUN apt install -y gcc g++
+
+COPY requirements-minimal.txt /tmp
+RUN python -m pip install -r /tmp/requirements-minimal.txt
+
+WORKDIR /home/pandas
+CMD ["/bin/bash"]
diff --git a/ci/dockerfiles/requirements-minimal.txt b/ci/dockerfiles/requirements-minimal.txt
new file mode 100644
index 0000000000000..7de6f10d2824f
--- /dev/null
+++ b/ci/dockerfiles/requirements-minimal.txt
@@ -0,0 +1,8 @@
+cython
+hypothesis
+numpy
+pytest
+pytest-asyncio
+pytz
+python-dateutil
+versioneer
diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst
index 942edd863a19a..815a161557f48 100644
--- a/doc/source/development/contributing_environment.rst
+++ b/doc/source/development/contributing_environment.rst
@@ -157,56 +157,42 @@ should already exist.
Option 3: using Docker
~~~~~~~~~~~~~~~~~~~~~~
-pandas provides a ``DockerFile`` in the root directory to build a Docker image
-with a full pandas development environment.
+Instead of manually setting up a development environment, you can use `Docker
+<https://docs.docker.com/get-docker/>`_. pandas provides pre-built images that serve a
+variety of users. These images include:
-**Docker Commands**
+ * alpine - a lightweight image for the absolute minimalist (note: this is experimental)
+ * pip-minimal - a pip-based installation with the minimum set of packages for building / testing
+ * mamba-minimal - a mamba-based installation with the minimum set of packages for building / testing
+ * pip-all - a pip-based installation with all testing dependencies
+ * mamba-all - a mamba-based installation with all testing dependencies
-Build the Docker image::
+If you are a new user and the image size is no concern to you, we suggest opting for either image
+that includes all of the dependencies, as this will ensure you can run the test suite without any
+caveats.
- # Build the image
- docker build -t pandas-dev .
+To use any of the images, you should first start with ``docker pull pandas/pandas:<tag>``,
+where tag is one of *alpine*, *pip-minimal*, *mamba-minimal*, *pip-all* or *mamba-all*. You can then run
+the image without any extra configuration.
-Run Container::
+To illustrate, if you wanted to use the *pip-all* image, from the root of your local pandas project
+you would run:
- # Run a container and bind your local repo to the container
- # This command assumes you are running from your local repo
- # but if not alter ${PWD} to match your local repo path
- docker run -it --rm -v ${PWD}:/home/pandas pandas-dev
-
-*Even easier, you can integrate Docker with the following IDEs:*
-
-**Visual Studio Code**
-
-You can use the DockerFile to launch a remote session with Visual Studio Code,
-a popular free IDE, using the ``.devcontainer.json`` file.
-See https://code.visualstudio.com/docs/remote/containers for details.
-
-**PyCharm (Professional)**
-
-Enable Docker support and use the Services tool window to build and manage images as well as
-run and interact with containers.
-See https://www.jetbrains.com/help/pycharm/docker.html for details.
-
-Step 3: build and install pandas
---------------------------------
+.. code-block:: bash
-You can now run::
+ docker pull pandas/pandas:pip-all
+ docker run --rm -it -v ${PWD}:/home/pandas pandas/pandas:pip-all
- # Build and install pandas
- python setup.py build_ext -j 4
- python -m pip install -e . --no-build-isolation --no-use-pep517
+Similarly for *mamba-all*
-At this point you should be able to import pandas from your locally built version::
+.. code-block:: bash
- $ python
- >>> import pandas
- >>> print(pandas.__version__) # note: the exact output may differ
- 2.0.0.dev0+880.g2b9e661fbb.dirty
+ docker pull pandas/pandas:mamba-all
+ docker run --rm -it -v ${PWD}:/home/pandas pandas/pandas:mamba-all
-This will create the new environment, and not touch any of your existing environments,
-nor any existing Python installation.
+The *mamba-* images will automatically activate the appropriate virtual environment for you on entry.
.. note::
- You will need to repeat this step each time the C extensions change, for example
- if you modified any file in ``pandas/_libs`` or if you did a fetch and merge from ``upstream/main``.
+
+ You may run the images from a directory besides the root of the pandas project - just be
+ sure to substitute ${PWD} in the commands above to point to your local pandas repository
|
Still working on mamba. Opened an open source account inquiry with Docker so will eventually place there. In the interim can place on personal account for testing | https://api.github.com/repos/pandas-dev/pandas/pulls/50046 | 2022-12-03T19:48:34Z | 2023-05-17T16:59:43Z | null | 2023-05-17T16:59:43Z |
BUG: Change FutureWarning to DeprecationWarning for inplace setitem with DataFrame.(i)loc | diff --git a/doc/source/whatsnew/v1.5.3.rst b/doc/source/whatsnew/v1.5.3.rst
index 489a6fda9ffab..0b670219f830c 100644
--- a/doc/source/whatsnew/v1.5.3.rst
+++ b/doc/source/whatsnew/v1.5.3.rst
@@ -48,6 +48,7 @@ Other
as pandas works toward compatibility with SQLAlchemy 2.0.
- Reverted deprecation (:issue:`45324`) of behavior of :meth:`Series.__getitem__` and :meth:`Series.__setitem__` slicing with an integer :class:`Index`; this will remain positional (:issue:`49612`)
+- A ``FutureWarning`` raised when attempting to set values inplace with :meth:`DataFrame.loc` or :meth:`DataFrame.iloc` has been changed to a ``DeprecationWarning`` (:issue:`48673`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 198903f5fceff..dd06d9bee4428 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -2026,7 +2026,7 @@ def _setitem_single_column(self, loc: int, value, plane_indexer):
"array. To retain the old behavior, use either "
"`df[df.columns[i]] = newvals` or, if columns are non-unique, "
"`df.isetitem(i, newvals)`",
- FutureWarning,
+ DeprecationWarning,
stacklevel=find_stack_level(),
)
# TODO: how to get future behavior?
diff --git a/pandas/tests/extension/base/setitem.py b/pandas/tests/extension/base/setitem.py
index 8dbf7d47374a6..83b1679b0da7e 100644
--- a/pandas/tests/extension/base/setitem.py
+++ b/pandas/tests/extension/base/setitem.py
@@ -400,7 +400,7 @@ def test_setitem_frame_2d_values(self, data):
warn = None
if has_can_hold_element and not isinstance(data.dtype, PandasDtype):
# PandasDtype excluded because it isn't *really* supported.
- warn = FutureWarning
+ warn = DeprecationWarning
with tm.assert_produces_warning(warn, match=msg):
df.iloc[:] = df
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index acd742c54b908..e2a99348f45aa 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -785,7 +785,7 @@ def test_getitem_setitem_float_labels(self, using_array_manager):
assert len(result) == 5
cp = df.copy()
- warn = FutureWarning if using_array_manager else None
+ warn = DeprecationWarning if using_array_manager else None
msg = "will attempt to set the values inplace"
with tm.assert_produces_warning(warn, match=msg):
cp.loc[1.0:5.0] = 0
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index cf0ff4e3603f3..e33c6d6a805cf 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -408,7 +408,7 @@ def test_setitem_frame_length_0_str_key(self, indexer):
def test_setitem_frame_duplicate_columns(self, using_array_manager):
# GH#15695
- warn = FutureWarning if using_array_manager else None
+ warn = DeprecationWarning if using_array_manager else None
msg = "will attempt to set the values inplace"
cols = ["A", "B", "C"] * 2
diff --git a/pandas/tests/frame/indexing/test_where.py b/pandas/tests/frame/indexing/test_where.py
index fba8978d2128c..c7e0a10c0d7d0 100644
--- a/pandas/tests/frame/indexing/test_where.py
+++ b/pandas/tests/frame/indexing/test_where.py
@@ -384,7 +384,7 @@ def test_where_datetime(self, using_array_manager):
expected = df.copy()
expected.loc[[0, 1], "A"] = np.nan
- warn = FutureWarning if using_array_manager else None
+ warn = DeprecationWarning if using_array_manager else None
msg = "will attempt to set the values inplace"
with tm.assert_produces_warning(warn, match=msg):
expected.loc[:, "C"] = np.nan
@@ -571,7 +571,7 @@ def test_where_axis_multiple_dtypes(self, using_array_manager):
d2 = df.copy().drop(1, axis=1)
expected = df.copy()
- warn = FutureWarning if using_array_manager else None
+ warn = DeprecationWarning if using_array_manager else None
msg = "will attempt to set the values inplace"
with tm.assert_produces_warning(warn, match=msg):
expected.loc[:, 1] = np.nan
diff --git a/pandas/tests/frame/methods/test_dropna.py b/pandas/tests/frame/methods/test_dropna.py
index 62351aa89c914..53d9f75494d92 100644
--- a/pandas/tests/frame/methods/test_dropna.py
+++ b/pandas/tests/frame/methods/test_dropna.py
@@ -221,7 +221,7 @@ def test_dropna_with_duplicate_columns(self):
df.iloc[0, 0] = np.nan
df.iloc[1, 1] = np.nan
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 3] = np.nan
expected = df.dropna(subset=["A", "B", "C"], how="all")
expected.columns = ["A", "A", "B", "C"]
diff --git a/pandas/tests/frame/methods/test_rename.py b/pandas/tests/frame/methods/test_rename.py
index f4443953a0d52..405518c372b2c 100644
--- a/pandas/tests/frame/methods/test_rename.py
+++ b/pandas/tests/frame/methods/test_rename.py
@@ -178,7 +178,9 @@ def test_rename_nocopy(self, float_frame, using_copy_on_write):
# TODO(CoW) this also shouldn't warn in case of CoW, but the heuristic
# checking if the array shares memory doesn't work if CoW happened
- with tm.assert_produces_warning(FutureWarning if using_copy_on_write else None):
+ with tm.assert_produces_warning(
+ DeprecationWarning if using_copy_on_write else None
+ ):
# This loc setitem already happens inplace, so no warning
# that this will change in the future
renamed.loc[:, "foo"] = 1.0
diff --git a/pandas/tests/frame/methods/test_shift.py b/pandas/tests/frame/methods/test_shift.py
index bfc3c8e0a25eb..9b4dcf58590e3 100644
--- a/pandas/tests/frame/methods/test_shift.py
+++ b/pandas/tests/frame/methods/test_shift.py
@@ -372,7 +372,7 @@ def test_shift_duplicate_columns(self, using_array_manager):
warn = None
if using_array_manager:
- warn = FutureWarning
+ warn = DeprecationWarning
shifted = []
for columns in column_lists:
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index b4f027f3a832a..16021facb3986 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -2604,7 +2604,9 @@ def check_views(c_only: bool = False):
# FIXME(GH#35417): until GH#35417, iloc.setitem into EA values does not preserve
# view, so we have to check in the other direction
- with tm.assert_produces_warning(FutureWarning, match="will attempt to set"):
+ with tm.assert_produces_warning(
+ DeprecationWarning, match="will attempt to set"
+ ):
df.iloc[:, 2] = pd.array([45, 46], dtype=c.dtype)
assert df.dtypes.iloc[2] == c.dtype
if not copy and not using_copy_on_write:
diff --git a/pandas/tests/frame/test_nonunique_indexes.py b/pandas/tests/frame/test_nonunique_indexes.py
index 2c28800fb181f..38861a2b04409 100644
--- a/pandas/tests/frame/test_nonunique_indexes.py
+++ b/pandas/tests/frame/test_nonunique_indexes.py
@@ -323,7 +323,9 @@ def test_dup_columns_across_dtype(self):
def test_set_value_by_index(self, using_array_manager):
# See gh-12344
warn = (
- FutureWarning if using_array_manager and not is_platform_windows() else None
+ DeprecationWarning
+ if using_array_manager and not is_platform_windows()
+ else None
)
msg = "will attempt to set the values inplace"
diff --git a/pandas/tests/frame/test_stack_unstack.py b/pandas/tests/frame/test_stack_unstack.py
index 69e5d5e3d5447..e22559802cbec 100644
--- a/pandas/tests/frame/test_stack_unstack.py
+++ b/pandas/tests/frame/test_stack_unstack.py
@@ -23,7 +23,7 @@
class TestDataFrameReshape:
def test_stack_unstack(self, float_frame, using_array_manager):
- warn = FutureWarning if using_array_manager else None
+ warn = DeprecationWarning if using_array_manager else None
msg = "will attempt to set the values inplace"
df = float_frame.copy()
diff --git a/pandas/tests/indexing/multiindex/test_loc.py b/pandas/tests/indexing/multiindex/test_loc.py
index d4354766a203b..5cf044280b391 100644
--- a/pandas/tests/indexing/multiindex/test_loc.py
+++ b/pandas/tests/indexing/multiindex/test_loc.py
@@ -541,9 +541,9 @@ def test_loc_setitem_single_column_slice():
)
expected = df.copy()
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, "B"] = np.arange(4)
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
expected.iloc[:, 2] = np.arange(4)
tm.assert_frame_equal(df, expected)
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index 8cc6b6e73aaea..dcc95d9e41a5a 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -84,7 +84,7 @@ def test_iloc_setitem_fullcol_categorical(self, indexer, key, using_array_manage
overwrite = isinstance(key, slice) and key == slice(None)
warn = None
if overwrite:
- warn = FutureWarning
+ warn = DeprecationWarning
msg = "will attempt to set the values inplace instead"
with tm.assert_produces_warning(warn, match=msg):
indexer(df)[key, 0] = cat
@@ -108,7 +108,7 @@ def test_iloc_setitem_fullcol_categorical(self, indexer, key, using_array_manage
frame = DataFrame({0: np.array([0, 1, 2], dtype=object), 1: range(3)})
df = frame.copy()
orig_vals = df.values
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
indexer(df)[key, 0] = cat
expected = DataFrame({0: cat, 1: range(3)})
tm.assert_frame_equal(df, expected)
@@ -904,7 +904,7 @@ def test_iloc_setitem_categorical_updates_inplace(self, using_copy_on_write):
# This should modify our original values in-place
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 0] = cat[::-1]
if not using_copy_on_write:
@@ -1314,7 +1314,7 @@ def test_iloc_setitem_dtypes_duplicate_columns(
# GH#22035
df = DataFrame([[init_value, "str", "str2"]], columns=["a", "b", "b"])
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 0] = df.iloc[:, 0].astype(dtypes)
expected_df = DataFrame(
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 069e5a62895af..210c75b075011 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -550,7 +550,7 @@ def test_astype_assignment(self):
df = df_orig.copy()
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 0:2] = df.iloc[:, 0:2].astype(np.int64)
expected = DataFrame(
[[1, 2, "3", ".4", 5, 6.0, "foo"]], columns=list("ABCDEFG")
@@ -558,7 +558,7 @@ def test_astype_assignment(self):
tm.assert_frame_equal(df, expected)
df = df_orig.copy()
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 0:2] = df.iloc[:, 0:2]._convert(datetime=True, numeric=True)
expected = DataFrame(
[[1, 2, "3", ".4", 5, 6.0, "foo"]], columns=list("ABCDEFG")
@@ -567,7 +567,7 @@ def test_astype_assignment(self):
# GH5702 (loc)
df = df_orig.copy()
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, "A"] = df.loc[:, "A"].astype(np.int64)
expected = DataFrame(
[[1, "2", "3", ".4", 5, 6.0, "foo"]], columns=list("ABCDEFG")
@@ -575,7 +575,7 @@ def test_astype_assignment(self):
tm.assert_frame_equal(df, expected)
df = df_orig.copy()
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, ["B", "C"]] = df.loc[:, ["B", "C"]].astype(np.int64)
expected = DataFrame(
[["1", 2, 3, ".4", 5, 6.0, "foo"]], columns=list("ABCDEFG")
@@ -586,13 +586,13 @@ def test_astype_assignment_full_replacements(self):
# full replacements / no nans
df = DataFrame({"A": [1.0, 2.0, 3.0, 4.0]})
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 0] = df["A"].astype(np.int64)
expected = DataFrame({"A": [1, 2, 3, 4]})
tm.assert_frame_equal(df, expected)
df = DataFrame({"A": [1.0, 2.0, 3.0, 4.0]})
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, "A"] = df["A"].astype(np.int64)
expected = DataFrame({"A": [1, 2, 3, 4]})
tm.assert_frame_equal(df, expected)
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index e62fb98b0782d..235ad3d213a62 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -368,7 +368,7 @@ def test_loc_setitem_dtype(self):
df = DataFrame({"id": ["A"], "a": [1.2], "b": [0.0], "c": [-2.5]})
cols = ["a", "b", "c"]
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, cols] = df.loc[:, cols].astype("float32")
expected = DataFrame(
@@ -633,11 +633,11 @@ def test_loc_setitem_consistency_slice_column_len(self):
df = DataFrame(values, index=mi, columns=cols)
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, ("Respondent", "StartDate")] = to_datetime(
df.loc[:, ("Respondent", "StartDate")]
)
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, ("Respondent", "EndDate")] = to_datetime(
df.loc[:, ("Respondent", "EndDate")]
)
@@ -720,7 +720,7 @@ def test_loc_setitem_frame_with_reindex_mixed(self):
df = DataFrame(index=[3, 5, 4], columns=["A", "B"], dtype=float)
df["B"] = "string"
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[[4, 3, 5], "A"] = np.array([1, 2, 3], dtype="int64")
ser = Series([2, 3, 1], index=[3, 5, 4], dtype="int64")
expected = DataFrame({"A": ser})
@@ -732,7 +732,7 @@ def test_loc_setitem_frame_with_inverted_slice(self):
df = DataFrame(index=[1, 2, 3], columns=["A", "B"], dtype=float)
df["B"] = "string"
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[slice(3, 0, -1), "A"] = np.array([1, 2, 3], dtype="int64")
expected = DataFrame({"A": [3, 2, 1], "B": "string"}, index=[1, 2, 3])
tm.assert_frame_equal(df, expected)
@@ -909,7 +909,7 @@ def test_loc_setitem_missing_columns(self, index, box, expected):
warn = None
if isinstance(index[0], slice) and index[0] == slice(None):
- warn = FutureWarning
+ warn = DeprecationWarning
msg = "will attempt to set the values inplace instead"
with tm.assert_produces_warning(warn, match=msg):
@@ -1425,7 +1425,7 @@ def test_loc_setitem_single_row_categorical(self):
categories = Categorical(df["Alpha"], categories=["a", "b", "c"])
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, "Alpha"] = categories
result = df["Alpha"]
@@ -3211,3 +3211,11 @@ def test_getitem_loc_str_periodindex(self):
index = pd.period_range(start="2000", periods=20, freq="B")
series = Series(range(20), index=index)
assert series.loc["2000-01-14"] == 9
+
+ def test_deprecation_warnings_raised_loc(self):
+ # GH#48673
+ with tm.assert_produces_warning(DeprecationWarning):
+ values = np.arange(4).reshape(2, 2)
+ df = DataFrame(values, columns=["a", "b"])
+ new = np.array([10, 11]).astype(np.int16)
+ df.loc[:, "a"] = new
diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py
index 938056902e745..f973bdf7ea6f6 100644
--- a/pandas/tests/indexing/test_partial.py
+++ b/pandas/tests/indexing/test_partial.py
@@ -312,7 +312,7 @@ def test_partial_setting_frame(self, using_array_manager):
df = df_orig.copy()
df["B"] = df["B"].astype(np.float64)
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, "B"] = df.loc[:, "A"]
tm.assert_frame_equal(df, expected)
| - [X] closes #48673
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added an entry in the latest `doc/source/whatsnew/v1.5.3.rst` file.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50044 | 2022-12-03T18:43:53Z | 2023-01-18T04:42:41Z | 2023-01-18T04:42:41Z | 2023-01-18T14:38:30Z |
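For reference, the loc-setitem path pinned down by the new `test_deprecation_warnings_raised_loc` can be run standalone. Which warning (if any) is emitted depends on the pandas version — `FutureWarning` before this PR, `DeprecationWarning` on the 1.5.3 backport branch this PR targets, and typically none on 2.x — so the sketch records warnings instead of asserting a category:

```python
import warnings

import numpy as np
import pandas as pd

# Exercise the same partial-column assignment as the new test; the values are
# set either way, only the warning category is version-dependent.
values = np.arange(4).reshape(2, 2)
df = pd.DataFrame(values, columns=["a", "b"])
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    df.loc[:, "a"] = np.array([10, 11]).astype(np.int16)

print(df["a"].tolist())                       # [10, 11]
print([w.category.__name__ for w in caught])  # version-dependent
```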
BUG: Fix bug in maybe_convert_objects with None and nullable | diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index e35cf2fb13768..81e0f3de748ff 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -2446,7 +2446,7 @@ def maybe_convert_objects(ndarray[object] objects,
seen.int_ = True
floats[i] = <float64_t>val
complexes[i] = <double complex>val
- if not seen.null_:
+ if not seen.null_ or convert_to_nullable_integer:
seen.saw_int(val)
if ((seen.uint_ and seen.sint_) or
@@ -2616,10 +2616,13 @@ def maybe_convert_objects(ndarray[object] objects,
result = complexes
elif seen.float_:
result = floats
- elif seen.int_:
+ elif seen.int_ or seen.uint_:
if convert_to_nullable_integer:
from pandas.core.arrays import IntegerArray
- result = IntegerArray(ints, mask)
+ if seen.uint_:
+ result = IntegerArray(uints, mask)
+ else:
+ result = IntegerArray(ints, mask)
else:
result = floats
elif seen.nan_:
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index df2afad51abf8..015c121ca684a 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -863,6 +863,18 @@ def test_maybe_convert_objects_nullable_integer(self, exp):
tm.assert_extension_array_equal(result, exp)
+ @pytest.mark.parametrize(
+ "dtype, val", [("int64", 1), ("uint64", np.iinfo(np.int64).max + 1)]
+ )
+ def test_maybe_convert_objects_nullable_none(self, dtype, val):
+ # GH#50043
+ arr = np.array([val, None, 3], dtype="object")
+ result = lib.maybe_convert_objects(arr, convert_to_nullable_integer=True)
+ expected = IntegerArray(
+ np.array([val, 0, 3], dtype=dtype), np.array([False, True, False])
+ )
+ tm.assert_extension_array_equal(result, expected)
+
@pytest.mark.parametrize(
"convert_to_masked_nullable, exp",
[
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
I don't think that this is user visible right now. Stumbled on this when working on nullables for read_sql | https://api.github.com/repos/pandas-dev/pandas/pulls/50043 | 2022-12-03T18:40:50Z | 2022-12-03T20:50:16Z | 2022-12-03T20:50:16Z | 2022-12-03T20:50:19Z |
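Why the fix needs the separate `uints` buffer can be seen with plain NumPy (illustrative, not pandas internals): the parametrized test value `np.iinfo(np.int64).max + 1` fits in a uint64 array but cannot be stored in an int64 one at all, so `IntegerArray(ints, mask)` alone could never represent it:

```python
import numpy as np

# One past the int64 range: representable as uint64, overflow as int64 --
# hence the patch selects the uints buffer when seen.uint_ is set.
val = np.iinfo(np.int64).max + 1  # 2**63

uints = np.zeros(1, dtype="uint64")
uints[0] = val                    # fine
assert int(uints[0]) == val

ints = np.zeros(1, dtype="int64")
overflowed = False
try:
    ints[0] = val                 # out of range for int64
except OverflowError as exc:
    overflowed = True
    print("int64 overflow:", exc)
```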
ENH skip 'now' and 'today' when inferring format for array | diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index b78174483be51..35a4131d11d50 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -431,7 +431,11 @@ def first_non_null(values: ndarray) -> int:
val = values[i]
if checknull_with_nat_and_na(val):
continue
- if isinstance(val, str) and (len(val) == 0 or val in nat_strings):
+ if (
+ isinstance(val, str)
+ and
+ (len(val) == 0 or val in ("now", "today", *nat_strings))
+ ):
continue
return i
else:
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 7df45975475dd..2921a01918808 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -2318,6 +2318,8 @@ class TestGuessDatetimeFormat:
["", "2011-12-30 00:00:00.000000"],
["NaT", "2011-12-30 00:00:00.000000"],
["2011-12-30 00:00:00.000000", "random_string"],
+ ["now", "2011-12-30 00:00:00.000000"],
+ ["today", "2011-12-30 00:00:00.000000"],
],
)
def test_guess_datetime_format_for_array(self, test_list):
| breaking this off from https://github.com/pandas-dev/pandas/pull/49024/files
haven't added a whatsnew note as it's not user facing (yet! but it will make a difference after PDEP4) | https://api.github.com/repos/pandas-dev/pandas/pulls/50039 | 2022-12-03T14:21:48Z | 2022-12-04T16:33:17Z | 2022-12-04T16:33:17Z | 2022-12-05T21:02:48Z |
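The patched `first_non_null` logic can be sketched in pure Python (names and the simplified `NAT_STRINGS` set are illustrative stand-ins for the Cython internals, not pandas API): return the index of the first value usable for datetime-format inference, now also skipping `"now"` and `"today"`:

```python
# Simplified stand-in for tslibs' nat_strings.
NAT_STRINGS = {"NaT", "nat", "NAT", "nan", "NaN", "NAN"}

def first_non_null(values):
    """Index of the first value usable for format inference, else -1."""
    for i, val in enumerate(values):
        if val is None:
            continue  # stand-in for checknull_with_nat_and_na
        if isinstance(val, str) and (
            len(val) == 0 or val in ("now", "today", *NAT_STRINGS)
        ):
            continue
        return i
    return -1

print(first_non_null(["now", "today", "2011-12-30 00:00:00"]))  # -> 2
print(first_non_null(["NaT", "2011-12-30 00:00:00"]))           # -> 1
```

This mirrors the new parametrized cases in `test_guess_datetime_format_for_array`: the format is inferred from the first concrete datetime string, not from `"now"`/`"today"`.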
BUG: to_datetime fails with np.datetime64 and non-ISO format | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index d8609737b8c7a..b70dcb0ae99fa 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -658,7 +658,7 @@ Datetimelike
- Bug in subtracting a ``datetime`` scalar from :class:`DatetimeIndex` failing to retain the original ``freq`` attribute (:issue:`48818`)
- Bug in ``pandas.tseries.holiday.Holiday`` where a half-open date interval causes inconsistent return types from :meth:`USFederalHolidayCalendar.holidays` (:issue:`49075`)
- Bug in rendering :class:`DatetimeIndex` and :class:`Series` and :class:`DataFrame` with timezone-aware dtypes with ``dateutil`` or ``zoneinfo`` timezones near daylight-savings transitions (:issue:`49684`)
-- Bug in :func:`to_datetime` was raising ``ValueError`` when parsing :class:`Timestamp` or ``datetime`` objects with non-ISO8601 ``format`` (:issue:`49298`)
+- Bug in :func:`to_datetime` was raising ``ValueError`` when parsing :class:`Timestamp`, ``datetime``, or ``np.datetime64`` objects with non-ISO8601 ``format`` (:issue:`49298`, :issue:`50036`)
-
Timedelta
diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx
index c56b4891da428..9a315106b75cd 100644
--- a/pandas/_libs/tslibs/strptime.pyx
+++ b/pandas/_libs/tslibs/strptime.pyx
@@ -14,13 +14,17 @@ from _thread import allocate_lock as _thread_allocate_lock
import numpy as np
import pytz
+cimport numpy as cnp
from numpy cimport (
int64_t,
ndarray,
)
from pandas._libs.missing cimport checknull_with_nat_and_na
-from pandas._libs.tslibs.conversion cimport convert_timezone
+from pandas._libs.tslibs.conversion cimport (
+ convert_timezone,
+ get_datetime64_nanos,
+)
from pandas._libs.tslibs.nattype cimport (
NPY_NAT,
c_nat_strings as nat_strings,
@@ -33,6 +37,9 @@ from pandas._libs.tslibs.np_datetime cimport (
pydatetime_to_dt64,
)
from pandas._libs.tslibs.timestamps cimport _Timestamp
+from pandas._libs.util cimport is_datetime64_object
+
+cnp.import_array()
cdef dict _parse_code_table = {"y": 0,
@@ -166,6 +173,9 @@ def array_strptime(
check_dts_bounds(&dts)
result_timezone[i] = val.tzinfo
continue
+ elif is_datetime64_object(val):
+ iresult[i] = get_datetime64_nanos(val, NPY_FR_ns)
+ continue
else:
val = str(val)
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 7df45975475dd..59be88e245fd0 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -758,6 +758,19 @@ def test_to_datetime_today_now_unicode_bytes(self, arg):
def test_to_datetime_dt64s(self, cache, dt):
assert to_datetime(dt, cache=cache) == Timestamp(dt)
+ @pytest.mark.parametrize(
+ "arg, format",
+ [
+ ("2001-01-01", "%Y-%m-%d"),
+ ("01-01-2001", "%d-%m-%Y"),
+ ],
+ )
+ def test_to_datetime_dt64s_and_str(self, arg, format):
+ # https://github.com/pandas-dev/pandas/issues/50036
+ result = to_datetime([arg, np.datetime64("2020-01-01")], format=format)
+ expected = DatetimeIndex(["2001-01-01", "2020-01-01"])
+ tm.assert_index_equal(result, expected)
+
@pytest.mark.parametrize(
"dt", [np.datetime64("1000-01-01"), np.datetime64("5000-01-02")]
)
- [x] closes #50036
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50038 | 2022-12-03T12:41:37Z | 2022-12-04T15:07:03Z | 2022-12-04T15:07:03Z | 2022-12-04T15:07:03Z |
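With the fix in place (pandas 2.0+), a `np.datetime64` mixed into a list of strings no longer raises under a non-ISO `format` — the datetime64 element is taken as-is via `get_datetime64_nanos` and only the strings go through strptime:

```python
import numpy as np
import pandas as pd

# Mirrors the new test_to_datetime_dt64s_and_str case.
result = pd.to_datetime(
    ["01-01-2001", np.datetime64("2020-01-01")], format="%d-%m-%Y"
)
print(result)  # DatetimeIndex(['2001-01-01', '2020-01-01'], ...)
```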
TST: Add test for loc assignment changing datetime dtype | diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index be67ce50a0634..81a5e3d9947be 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -29,6 +29,7 @@
date_range,
isna,
notna,
+ to_datetime,
)
import pandas._testing as tm
@@ -1454,6 +1455,26 @@ def test_loc_bool_multiindex(self, dtype, indexer):
)
tm.assert_frame_equal(result, expected)
+ @pytest.mark.parametrize("utc", [False, True])
+ @pytest.mark.parametrize("indexer", ["date", ["date"]])
+ def test_loc_datetime_assignment_dtype_does_not_change(self, utc, indexer):
+ # GH#49837
+ df = DataFrame(
+ {
+ "date": to_datetime(
+ [datetime(2022, 1, 20), datetime(2022, 1, 22)], utc=utc
+ ),
+ "update": [True, False],
+ }
+ )
+ expected = df.copy(deep=True)
+
+ update_df = df[df["update"]]
+
+ df.loc[df["update"], indexer] = update_df["date"]
+
+ tm.assert_frame_equal(df, expected)
+
class TestDataFrameIndexingUInt64:
def test_setitem(self, uint64_frame):
| - [x] closes #49837
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
| https://api.github.com/repos/pandas-dev/pandas/pulls/50037 | 2022-12-03T12:00:02Z | 2022-12-05T09:25:21Z | 2022-12-05T09:25:21Z | 2022-12-05T09:25:32Z |
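The regression this test pins down can be reproduced directly: assigning a datetime Series back through a boolean `.loc` mask must leave the column's datetime dtype intact (the reported bug turned it into `object`). A sketch on a fixed pandas:

```python
from datetime import datetime

import pandas as pd

df = pd.DataFrame(
    {
        "date": pd.to_datetime([datetime(2022, 1, 20), datetime(2022, 1, 22)]),
        "update": [True, False],
    }
)

# Masked self-assignment: values and dtype should be unchanged.
df.loc[df["update"], "date"] = df[df["update"]]["date"]
print(df["date"].dtype)  # stays a datetime64 dtype, not object
```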
ENH: Index.infer_objects | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 0ecdde89d9013..e4fef0049d119 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -83,6 +83,7 @@ Other enhancements
- :func:`timedelta_range` now supports a ``unit`` keyword ("s", "ms", "us", or "ns") to specify the desired resolution of the output index (:issue:`49824`)
- :meth:`DataFrame.to_json` now supports a ``mode`` keyword with supported inputs 'w' and 'a'. Defaulting to 'w', 'a' can be used when lines=True and orient='records' to append record oriented json lines to an existing json file. (:issue:`35849`)
- Added ``name`` parameter to :meth:`IntervalIndex.from_breaks`, :meth:`IntervalIndex.from_arrays` and :meth:`IntervalIndex.from_tuples` (:issue:`48911`)
+- Added :meth:`Index.infer_objects` analogous to :meth:`Series.infer_objects` (:issue:`50034`)
- :meth:`DataFrame.plot.hist` now recognizes ``xlabel`` and ``ylabel`` arguments (:issue:`49793`)
-
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index ce06b6bc01581..dc0359426f07c 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -6521,6 +6521,36 @@ def drop(
indexer = indexer[~mask]
return self.delete(indexer)
+ def infer_objects(self, copy: bool = True) -> Index:
+ """
+ If we have an object dtype, try to infer a non-object dtype.
+
+ Parameters
+ ----------
+ copy : bool, default True
+ Whether to make a copy in cases where no inference occurs.
+ """
+ if self._is_multi:
+ raise NotImplementedError(
+ "infer_objects is not implemented for MultiIndex. "
+ "Use index.to_frame().infer_objects() instead."
+ )
+ if self.dtype != object:
+ return self.copy() if copy else self
+
+ values = self._values
+ values = cast("npt.NDArray[np.object_]", values)
+ res_values = lib.maybe_convert_objects(
+ values,
+ convert_datetime=True,
+ convert_timedelta=True,
+ convert_period=True,
+ convert_interval=True,
+ )
+ if copy and res_values is values:
+ return self.copy()
+ return Index(res_values, name=self.name)
+
# --------------------------------------------------------------------
# Generated Arithmetic, Comparison, and Unary Methods
diff --git a/pandas/tests/indexes/multi/test_analytics.py b/pandas/tests/indexes/multi/test_analytics.py
index 8803862615858..fb6f56b0fcba7 100644
--- a/pandas/tests/indexes/multi/test_analytics.py
+++ b/pandas/tests/indexes/multi/test_analytics.py
@@ -12,6 +12,11 @@
from pandas.core.api import UInt64Index
+def test_infer_objects(idx):
+ with pytest.raises(NotImplementedError, match="to_frame"):
+ idx.infer_objects()
+
+
def test_shift(idx):
# GH8083 test the base class for shift
diff --git a/pandas/tests/series/methods/test_infer_objects.py b/pandas/tests/series/methods/test_infer_objects.py
index bb83f62f5ebb5..4710aaf54de31 100644
--- a/pandas/tests/series/methods/test_infer_objects.py
+++ b/pandas/tests/series/methods/test_infer_objects.py
@@ -1,23 +1,24 @@
import numpy as np
-from pandas import Series
import pandas._testing as tm
class TestInferObjects:
- def test_infer_objects_series(self):
+ def test_infer_objects_series(self, index_or_series):
# GH#11221
- actual = Series(np.array([1, 2, 3], dtype="O")).infer_objects()
- expected = Series([1, 2, 3])
- tm.assert_series_equal(actual, expected)
+ actual = index_or_series(np.array([1, 2, 3], dtype="O")).infer_objects()
+ expected = index_or_series([1, 2, 3])
+ tm.assert_equal(actual, expected)
- actual = Series(np.array([1, 2, 3, None], dtype="O")).infer_objects()
- expected = Series([1.0, 2.0, 3.0, np.nan])
- tm.assert_series_equal(actual, expected)
+ actual = index_or_series(np.array([1, 2, 3, None], dtype="O")).infer_objects()
+ expected = index_or_series([1.0, 2.0, 3.0, np.nan])
+ tm.assert_equal(actual, expected)
# only soft conversions, unconvertable pass thru unchanged
- actual = Series(np.array([1, 2, 3, None, "a"], dtype="O")).infer_objects()
- expected = Series([1, 2, 3, None, "a"])
+
+ obj = index_or_series(np.array([1, 2, 3, None, "a"], dtype="O"))
+ actual = obj.infer_objects()
+ expected = index_or_series([1, 2, 3, None, "a"], dtype=object)
assert actual.dtype == "object"
- tm.assert_series_equal(actual, expected)
+ tm.assert_equal(actual, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50034 | 2022-12-03T02:54:45Z | 2022-12-06T17:43:39Z | 2022-12-06T17:43:39Z | 2022-12-06T18:31:59Z |
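The new method in action (available from pandas 2.0): an object-dtype `Index` holding plain ints is inferred to an integer index, mirroring what `Series.infer_objects` already did:

```python
import numpy as np
import pandas as pd

idx = pd.Index(np.array([1, 2, 3], dtype="O"))
print(idx.dtype)                   # object
print(idx.infer_objects().dtype)   # int64
```

Note the deliberate carve-out shown in the diff: calling it on a `MultiIndex` raises `NotImplementedError` with a pointer to `index.to_frame().infer_objects()`.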
TST: adding new test for groupby cumsum with named aggregate | diff --git a/pandas/tests/extension/base/groupby.py b/pandas/tests/extension/base/groupby.py
index 339c6560d6212..3a9dbe9dfb384 100644
--- a/pandas/tests/extension/base/groupby.py
+++ b/pandas/tests/extension/base/groupby.py
@@ -56,6 +56,30 @@ def test_groupby_agg_extension(self, data_for_grouping):
result = df.groupby("A").first()
self.assert_frame_equal(result, expected)
+ def test_groupby_agg_extension_timedelta_cumsum_with_named_aggregation(self):
+ # GH#41720
+ expected = pd.DataFrame(
+ {
+ "td": {
+ 0: pd.Timedelta("0 days 01:00:00"),
+ 1: pd.Timedelta("0 days 01:15:00"),
+ 2: pd.Timedelta("0 days 01:15:00"),
+ }
+ }
+ )
+ df = pd.DataFrame(
+ {
+ "td": pd.Series(
+ ["0 days 01:00:00", "0 days 00:15:00", "0 days 01:15:00"],
+ dtype="timedelta64[ns]",
+ ),
+ "grps": ["a", "a", "b"],
+ }
+ )
+ gb = df.groupby("grps")
+ result = gb.agg(td=("td", "cumsum"))
+ self.assert_frame_equal(result, expected)
+
def test_groupby_extension_no_sort(self, data_for_grouping):
df = pd.DataFrame({"A": [1, 1, 2, 2, 3, 3, 1, 4], "B": data_for_grouping})
result = df.groupby("B", sort=False).A.mean()
| - [ ] closes #41720
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50033 | 2022-12-03T01:33:17Z | 2022-12-08T03:40:45Z | 2022-12-08T03:40:45Z | 2022-12-08T03:40:55Z |
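The behavior the new test pins down can be exercised via the plain method call; the named-aggregation spelling from the diff, `gb.agg(td=("td", "cumsum"))`, routes to the same per-group cumulative sum:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "td": pd.to_timedelta(["01:00:00", "00:15:00", "01:15:00"]),
        "grps": ["a", "a", "b"],
    }
)

# Group "a" accumulates 1:00 then 1:15; group "b" starts fresh at 1:15.
result = df.groupby("grps")["td"].cumsum()
print(result.tolist())
```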
REF: remove soft_convert_objects | diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 36c713cab7123..f227eb46273a5 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -41,7 +41,6 @@
IntCastingNaNError,
LossySetitemError,
)
-from pandas.util._validators import validate_bool_kwarg
from pandas.core.dtypes.common import (
DT64NS_DTYPE,
@@ -952,54 +951,6 @@ def coerce_indexer_dtype(indexer, categories) -> np.ndarray:
return ensure_int64(indexer)
-def soft_convert_objects(
- values: np.ndarray,
- *,
- datetime: bool = True,
- timedelta: bool = True,
- period: bool = True,
- copy: bool = True,
-) -> ArrayLike:
- """
- Try to coerce datetime, timedelta, and numeric object-dtype columns
- to inferred dtype.
-
- Parameters
- ----------
- values : np.ndarray[object]
- datetime : bool, default True
- timedelta : bool, default True
- period : bool, default True
- copy : bool, default True
-
- Returns
- -------
- np.ndarray or ExtensionArray
- """
- validate_bool_kwarg(datetime, "datetime")
- validate_bool_kwarg(timedelta, "timedelta")
- validate_bool_kwarg(copy, "copy")
-
- conversion_count = sum((datetime, timedelta))
- if conversion_count == 0:
- raise ValueError("At least one of datetime or timedelta must be True.")
-
- # Soft conversions
- if datetime or timedelta or period:
- # GH 20380, when datetime is beyond year 2262, hence outside
- # bound of nanosecond-resolution 64-bit integers.
- converted = lib.maybe_convert_objects(
- values,
- convert_datetime=datetime,
- convert_timedelta=timedelta,
- convert_period=period,
- )
- if converted is not values:
- return converted
-
- return values
-
-
def convert_dtypes(
input_array: ArrayLike,
convert_string: bool = True,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d1e48a3d10a1e..37b7af13fc7c4 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6354,9 +6354,8 @@ def infer_objects(self: NDFrameT) -> NDFrameT:
A int64
dtype: object
"""
- return self._constructor(
- self._mgr.convert(datetime=True, timedelta=True, copy=True)
- ).__finalize__(self, method="infer_objects")
+ new_mgr = self._mgr.convert()
+ return self._constructor(new_mgr).__finalize__(self, method="infer_objects")
@final
def convert_dtypes(
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index 8ddab458e35a9..918c70ff91da5 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -32,7 +32,6 @@
from pandas.core.dtypes.cast import (
ensure_dtype_can_hold_na,
infer_dtype_from_scalar,
- soft_convert_objects,
)
from pandas.core.dtypes.common import (
ensure_platform_int,
@@ -375,25 +374,19 @@ def fillna(self: T, value, limit, inplace: bool, downcast) -> T:
def astype(self: T, dtype, copy: bool = False, errors: str = "raise") -> T:
return self.apply(astype_array_safe, dtype=dtype, copy=copy, errors=errors)
- def convert(
- self: T,
- *,
- copy: bool = True,
- datetime: bool = True,
- timedelta: bool = True,
- ) -> T:
+ def convert(self: T) -> T:
def _convert(arr):
if is_object_dtype(arr.dtype):
# extract PandasArray for tests that patch PandasArray._typ
arr = np.asarray(arr)
- return soft_convert_objects(
+ return lib.maybe_convert_objects(
arr,
- datetime=datetime,
- timedelta=timedelta,
- copy=copy,
+ convert_datetime=True,
+ convert_timedelta=True,
+ convert_period=True,
)
else:
- return arr.copy() if copy else arr
+ return arr.copy()
return self.apply(_convert)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 57a0fc81515c5..95300c888eede 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -44,7 +44,6 @@
find_result_type,
maybe_downcast_to_dtype,
np_can_hold_element,
- soft_convert_objects,
)
from pandas.core.dtypes.common import (
ensure_platform_int,
@@ -429,7 +428,7 @@ def _maybe_downcast(self, blocks: list[Block], downcast=None) -> list[Block]:
# but ATM it breaks too much existing code.
# split and convert the blocks
- return extend_blocks([blk.convert(datetime=True) for blk in blocks])
+ return extend_blocks([blk.convert() for blk in blocks])
if downcast is None:
return blocks
@@ -451,8 +450,6 @@ def convert(
self,
*,
copy: bool = True,
- datetime: bool = True,
- timedelta: bool = True,
) -> list[Block]:
"""
attempt to coerce any object types to better types return a copy
@@ -1967,8 +1964,6 @@ def convert(
self,
*,
copy: bool = True,
- datetime: bool = True,
- timedelta: bool = True,
) -> list[Block]:
"""
attempt to cast any object types to better types return a copy of
@@ -1980,11 +1975,11 @@ def convert(
# avoid doing .ravel as that might make a copy
values = values[0]
- res_values = soft_convert_objects(
+ res_values = lib.maybe_convert_objects(
values,
- datetime=datetime,
- timedelta=timedelta,
- copy=copy,
+ convert_datetime=True,
+ convert_timedelta=True,
+ convert_period=True,
)
res_values = ensure_block_shape(res_values, self.ndim)
return [self.make_block(res_values)]
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 306fea06963ec..d1eee23f1908c 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -441,18 +441,10 @@ def fillna(self: T, value, limit, inplace: bool, downcast) -> T:
def astype(self: T, dtype, copy: bool = False, errors: str = "raise") -> T:
return self.apply("astype", dtype=dtype, copy=copy, errors=errors)
- def convert(
- self: T,
- *,
- copy: bool = True,
- datetime: bool = True,
- timedelta: bool = True,
- ) -> T:
+ def convert(self: T) -> T:
return self.apply(
"convert",
- copy=copy,
- datetime=datetime,
- timedelta=timedelta,
+ copy=True,
)
def replace(self: T, to_replace, value, inplace: bool) -> T:
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50031 | 2022-12-03T00:22:26Z | 2022-12-03T19:55:56Z | 2022-12-03T19:55:56Z | 2022-12-03T21:42:30Z |
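After this refactor, `ObjectBlock.convert` goes straight through `lib.maybe_convert_objects` instead of the removed `soft_convert_objects` wrapper. A minimal sketch of the basic numeric inference path (this is a private pandas API whose keyword signature varies between versions, so only the no-keyword call is shown):

```python
import numpy as np
from pandas._libs import lib

# Object-dtype ndarray of plain ints is inferred to int64.
arr = np.array([1, 2, 3], dtype=object)
out = lib.maybe_convert_objects(arr)
print(out.dtype)  # int64
```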
DEPR: Remove option use_inf_as_null | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index bde97f3714219..d8609737b8c7a 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -386,6 +386,7 @@ Removal of prior version deprecations/changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Removed deprecated :attr:`Timestamp.freq`, :attr:`Timestamp.freqstr` and argument ``freq`` from the :class:`Timestamp` constructor and :meth:`Timestamp.fromordinal` (:issue:`14146`)
- Removed deprecated :class:`CategoricalBlock`, :meth:`Block.is_categorical`, require datetime64 and timedelta64 values to be wrapped in :class:`DatetimeArray` or :class:`TimedeltaArray` before passing to :meth:`Block.make_block_same_class`, require ``DatetimeTZBlock.values`` to have the correct ndim when passing to the :class:`BlockManager` constructor, and removed the "fastpath" keyword from the :class:`SingleBlockManager` constructor (:issue:`40226`, :issue:`40571`)
+- Removed deprecated global option ``use_inf_as_null`` in favor of ``use_inf_as_na`` (:issue:`17126`)
- Removed deprecated module ``pandas.core.index`` (:issue:`30193`)
- Removed deprecated alias ``pandas.core.tools.datetimes.to_time``, import the function directly from ``pandas.core.tools.times`` instead (:issue:`34145`)
- Removed deprecated :meth:`Categorical.to_dense`, use ``np.asarray(cat)`` instead (:issue:`32639`)
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index b101b25a10a80..d1a52798360bd 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -458,12 +458,6 @@ def is_terminal() -> bool:
with cf.config_prefix("mode"):
cf.register_option("sim_interactive", False, tc_sim_interactive_doc)
-use_inf_as_null_doc = """
-: boolean
- use_inf_as_null had been deprecated and will be removed in a future
- version. Use `use_inf_as_na` instead.
-"""
-
use_inf_as_na_doc = """
: boolean
True means treat None, NaN, INF, -INF as NA (old way),
@@ -483,14 +477,6 @@ def use_inf_as_na_cb(key) -> None:
with cf.config_prefix("mode"):
cf.register_option("use_inf_as_na", False, use_inf_as_na_doc, cb=use_inf_as_na_cb)
- cf.register_option(
- "use_inf_as_null", False, use_inf_as_null_doc, cb=use_inf_as_na_cb
- )
-
-
-cf.deprecate_option(
- "mode.use_inf_as_null", msg=use_inf_as_null_doc, rkey="mode.use_inf_as_na"
-)
data_manager_doc = """
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index d956b2c3fcd42..3c0f962b90086 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -36,20 +36,6 @@ def test_isna_for_inf(self):
tm.assert_series_equal(r, e)
tm.assert_series_equal(dr, de)
- @pytest.mark.parametrize(
- "method, expected",
- [
- ["isna", Series([False, True, True, False])],
- ["dropna", Series(["a", 1.0], index=[0, 3])],
- ],
- )
- def test_isnull_for_inf_deprecated(self, method, expected):
- # gh-17115
- s = Series(["a", np.inf, np.nan, 1.0])
- with pd.option_context("mode.use_inf_as_null", True):
- result = getattr(s, method)()
- tm.assert_series_equal(result, expected)
-
def test_timedelta64_nan(self):
td = Series([timedelta(days=i) for i in range(10)])
| Introduced in https://github.com/pandas-dev/pandas/pull/17126
| https://api.github.com/repos/pandas-dev/pandas/pulls/50030 | 2022-12-02T23:48:18Z | 2022-12-03T02:48:20Z | 2022-12-03T02:48:20Z | 2022-12-05T19:42:39Z |
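With the deprecated `mode.use_inf_as_null` alias gone, the default missing-value semantics are the only spelling left unless `mode.use_inf_as_na` is opted into: `inf` is a valid value, not NA. A quick illustration:

```python
import numpy as np
import pandas as pd

# Default behavior: only NaN/None count as missing; inf does not.
ser = pd.Series([1.0, np.inf, np.nan])
print(ser.isna().tolist())  # [False, False, True]
```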
DOC: Groupby transform should mention that parameter can be a string | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index e791e956473c1..02e8236524cb7 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -427,7 +427,51 @@ def _aggregate_named(self, func, *args, **kwargs):
return result
- @Substitution(klass="Series")
+ __examples_series_doc = dedent(
+ """
+ >>> ser = pd.Series(
+ ... [390.0, 350.0, 30.0, 20.0],
+ ... index=["Falcon", "Falcon", "Parrot", "Parrot"],
+ ... name="Max Speed")
+ >>> grouped = ser.groupby([1, 1, 2, 2])
+ >>> grouped.transform(lambda x: (x - x.mean()) / x.std())
+ Falcon 0.707107
+ Falcon -0.707107
+ Parrot 0.707107
+ Parrot -0.707107
+ Name: Max Speed, dtype: float64
+
+ Broadcast result of the transformation
+
+ >>> grouped.transform(lambda x: x.max() - x.min())
+ Falcon 40.0
+ Falcon 40.0
+ Parrot 10.0
+ Parrot 10.0
+ Name: Max Speed, dtype: float64
+
+ >>> grouped.transform("mean")
+ Falcon 370.0
+ Falcon 370.0
+ Parrot 25.0
+ Parrot 25.0
+ Name: Max Speed, dtype: float64
+
+ .. versionchanged:: 1.3.0
+
+ The resulting dtype will reflect the return value of the passed ``func``,
+ for example:
+
+ >>> grouped.transform(lambda x: x.astype(int).max())
+ Falcon 390
+ Falcon 390
+ Parrot 30
+ Parrot 30
+ Name: Max Speed, dtype: int64
+ """
+ )
+
+ @Substitution(klass="Series", example=__examples_series_doc)
@Appender(_transform_template)
def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
return self._transform(
@@ -1407,7 +1451,61 @@ def _transform_general(self, func, *args, **kwargs):
concatenated = concatenated.reindex(concat_index, axis=other_axis, copy=False)
return self._set_result_index_ordered(concatenated)
- @Substitution(klass="DataFrame")
+ __examples_dataframe_doc = dedent(
+ """
+ >>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
+ ... 'foo', 'bar'],
+ ... 'B' : ['one', 'one', 'two', 'three',
+ ... 'two', 'two'],
+ ... 'C' : [1, 5, 5, 2, 5, 5],
+ ... 'D' : [2.0, 5., 8., 1., 2., 9.]})
+ >>> grouped = df.groupby('A')[['C', 'D']]
+ >>> grouped.transform(lambda x: (x - x.mean()) / x.std())
+ C D
+ 0 -1.154701 -0.577350
+ 1 0.577350 0.000000
+ 2 0.577350 1.154701
+ 3 -1.154701 -1.000000
+ 4 0.577350 -0.577350
+ 5 0.577350 1.000000
+
+ Broadcast result of the transformation
+
+ >>> grouped.transform(lambda x: x.max() - x.min())
+ C D
+ 0 4.0 6.0
+ 1 3.0 8.0
+ 2 4.0 6.0
+ 3 3.0 8.0
+ 4 4.0 6.0
+ 5 3.0 8.0
+
+ >>> grouped.transform("mean")
+ C D
+ 0 3.666667 4.0
+ 1 4.000000 5.0
+ 2 3.666667 4.0
+ 3 4.000000 5.0
+ 4 3.666667 4.0
+ 5 4.000000 5.0
+
+ .. versionchanged:: 1.3.0
+
+ The resulting dtype will reflect the return value of the passed ``func``,
+ for example:
+
+ >>> grouped.transform(lambda x: x.astype(int).max())
+ C D
+ 0 5 8
+ 1 5 9
+ 2 5 8
+ 3 5 9
+ 4 5 8
+ 5 5 9
+ """
+ )
+
+ @Substitution(klass="DataFrame", example=__examples_dataframe_doc)
@Appender(_transform_template)
def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
return self._transform(
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 52d18e8ffe540..ab030aaa66d13 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -402,15 +402,22 @@ class providing the base-class of operations.
f : function, str
Function to apply to each group. See the Notes section below for requirements.
- Can also accept a Numba JIT function with
- ``engine='numba'`` specified.
+ Accepted inputs are:
+
+ - String
+ - Python function
+ - Numba JIT function with ``engine='numba'`` specified.
+ Only passing a single function is supported with this engine.
If the ``'numba'`` engine is chosen, the function must be
a user defined function with ``values`` and ``index`` as the
first and second arguments respectively in the function signature.
Each group's index will be passed to the user defined function
and optionally available for use.
+ If a string is chosen, then it needs to be the name
+ of the groupby method you want to use.
+
.. versionchanged:: 1.1.0
*args
Positional arguments to pass to func.
@@ -480,48 +487,7 @@ class providing the base-class of operations.
Examples
--------
-
->>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
-... 'foo', 'bar'],
-... 'B' : ['one', 'one', 'two', 'three',
-... 'two', 'two'],
-... 'C' : [1, 5, 5, 2, 5, 5],
-... 'D' : [2.0, 5., 8., 1., 2., 9.]})
->>> grouped = df.groupby('A')[['C', 'D']]
->>> grouped.transform(lambda x: (x - x.mean()) / x.std())
- C D
-0 -1.154701 -0.577350
-1 0.577350 0.000000
-2 0.577350 1.154701
-3 -1.154701 -1.000000
-4 0.577350 -0.577350
-5 0.577350 1.000000
-
-Broadcast result of the transformation
-
->>> grouped.transform(lambda x: x.max() - x.min())
- C D
-0 4.0 6.0
-1 3.0 8.0
-2 4.0 6.0
-3 3.0 8.0
-4 4.0 6.0
-5 3.0 8.0
-
-.. versionchanged:: 1.3.0
-
- The resulting dtype will reflect the return value of the passed ``func``,
- for example:
-
->>> grouped.transform(lambda x: x.astype(int).max())
- C D
-0 5 8
-1 5 9
-2 5 8
-3 5 9
-4 5 8
-5 5 9
-"""
+%(example)s"""
_agg_template = """
Aggregate using one or more operations over the specified axis.
| - [ ] closes https://github.com/pandas-dev/pandas/issues/49961
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `pandas/core/groupby/groupby.py`
| https://api.github.com/repos/pandas-dev/pandas/pulls/50029 | 2022-12-02T23:31:09Z | 2022-12-13T03:06:42Z | 2022-12-13T03:06:42Z | 2022-12-13T04:55:06Z |
API: make FloatingArray.astype consistent with IntegerArray/BooleanArray | diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 1fd6482f650da..3aa6a12160b73 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -38,7 +38,6 @@
from pandas.util._decorators import doc
from pandas.util._validators import validate_fillna_kwargs
-from pandas.core.dtypes.astype import astype_nansafe
from pandas.core.dtypes.base import ExtensionDtype
from pandas.core.dtypes.common import (
is_bool,
@@ -492,10 +491,6 @@ def astype(self, dtype: AstypeArg, copy: bool = True) -> ArrayLike:
raise ValueError("cannot convert float NaN to bool")
data = self.to_numpy(dtype=dtype, na_value=na_value, copy=copy)
- if self.dtype.kind == "f":
- # TODO: make this consistent between IntegerArray/FloatingArray,
- # see test_astype_str
- return astype_nansafe(data, dtype, copy=False)
return data
__array_priority__ = 1000 # higher than ndarray so ops dispatch to us
diff --git a/pandas/tests/arrays/floating/test_astype.py b/pandas/tests/arrays/floating/test_astype.py
index 5a6e0988a0897..ade3dbd2c99da 100644
--- a/pandas/tests/arrays/floating/test_astype.py
+++ b/pandas/tests/arrays/floating/test_astype.py
@@ -65,7 +65,7 @@ def test_astype_to_integer_array():
def test_astype_str():
a = pd.array([0.1, 0.2, None], dtype="Float64")
- expected = np.array(["0.1", "0.2", "<NA>"], dtype=object)
+ expected = np.array(["0.1", "0.2", "<NA>"], dtype="U32")
tm.assert_numpy_array_equal(a.astype(str), expected)
tm.assert_numpy_array_equal(a.astype("str"), expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50028 | 2022-12-02T21:55:37Z | 2022-12-05T09:27:23Z | 2022-12-05T09:27:23Z | 2022-12-05T16:47:43Z |
DOC: add examples BusinessMonthEnd(0) and SemiMonthEnd(0) | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 4c6493652b216..70d437e0eea11 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -2460,16 +2460,25 @@ cdef class BusinessMonthEnd(MonthOffset):
"""
DateOffset increments between the last business day of the month.
+ BusinessMonthEnd goes to the next date which is the last business day of the month.
+ To get the last business day of the current month pass the parameter n equals 0.
+
Examples
--------
- >>> from pandas.tseries.offsets import BMonthEnd
- >>> ts = pd.Timestamp('2020-05-24 05:01:15')
- >>> ts + BMonthEnd()
- Timestamp('2020-05-29 05:01:15')
- >>> ts + BMonthEnd(2)
- Timestamp('2020-06-30 05:01:15')
- >>> ts + BMonthEnd(-2)
- Timestamp('2020-03-31 05:01:15')
+ >>> ts = pd.Timestamp(2022, 11, 29)
+ >>> ts + pd.offsets.BMonthEnd()
+ Timestamp('2022-11-30 00:00:00')
+
+ >>> ts = pd.Timestamp(2022, 11, 30)
+ >>> ts + pd.offsets.BMonthEnd()
+ Timestamp('2022-12-30 00:00:00')
+
+ If you want to get the end of the current business month
+ pass the parameter n equals 0:
+
+ >>> ts = pd.Timestamp(2022, 11, 30)
+ >>> ts + pd.offsets.BMonthEnd(0)
+ Timestamp('2022-11-30 00:00:00')
"""
_prefix = "BM"
_day_opt = "business_end"
@@ -2642,11 +2651,24 @@ cdef class SemiMonthEnd(SemiMonthOffset):
Examples
--------
- >>> ts = pd.Timestamp(2022, 1, 1)
+ >>> ts = pd.Timestamp(2022, 1, 14)
>>> ts + pd.offsets.SemiMonthEnd()
Timestamp('2022-01-15 00:00:00')
- """
+ >>> ts = pd.Timestamp(2022, 1, 15)
+ >>> ts + pd.offsets.SemiMonthEnd()
+ Timestamp('2022-01-31 00:00:00')
+
+ >>> ts = pd.Timestamp(2022, 1, 31)
+ >>> ts + pd.offsets.SemiMonthEnd()
+ Timestamp('2022-02-15 00:00:00')
+
+ If you want to get the result for the current month pass the parameter n equals 0:
+
+ >>> ts = pd.Timestamp(2022, 1, 15)
+ >>> ts + pd.offsets.SemiMonthEnd(0)
+ Timestamp('2022-01-15 00:00:00')
+ """
_prefix = "SM"
_min_day_of_month = 1
| This PR is related to PR #49958.
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Updated the docs for `BusinessMonthEnd` and `SemiMonthEnd`. Added more examples to highlight the “last day of the month” behavior, and added examples that use the parameter `n=0`. | https://api.github.com/repos/pandas-dev/pandas/pulls/50027 | 2022-12-02T21:35:45Z | 2022-12-03T12:47:11Z | 2022-12-03T12:47:11Z | 2022-12-03T12:47:11Z |
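The anchor semantics documented above (`n=1` rolls forward, `n=0` snaps to the current month's last business day) can be sketched with stdlib `datetime`. This is a simplified model for illustration only — not pandas' actual `BusinessMonthEnd` implementation — and it ignores time-of-day, holidays, and the `n=0` weekend edge case:

```python
import calendar
import datetime


def last_business_day(year: int, month: int) -> datetime.date:
    """Walk back from the last calendar day of the month to a weekday."""
    day = calendar.monthrange(year, month)[1]
    d = datetime.date(year, month, day)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d -= datetime.timedelta(days=1)
    return d


def bmonth_end(ts: datetime.date, n: int = 1) -> datetime.date:
    """Mimic BusinessMonthEnd: n=1 rolls forward, n=0 keeps a date on the anchor."""
    anchor = last_business_day(ts.year, ts.month)
    if n == 0 and ts <= anchor:
        return anchor
    if ts < anchor:
        return anchor
    # On or past the anchor: move to the next month's last business day.
    year, month = (ts.year + 1, 1) if ts.month == 12 else (ts.year, ts.month + 1)
    return last_business_day(year, month)


print(bmonth_end(datetime.date(2022, 11, 29)))     # 2022-11-30
print(bmonth_end(datetime.date(2022, 11, 30)))     # 2022-12-30
print(bmonth_end(datetime.date(2022, 11, 30), 0))  # 2022-11-30
```

These three calls reproduce the three doctest examples added in the diff.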
REF: remove NDFrame._convert | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 038c889e4d5f7..d1e48a3d10a1e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6315,37 +6315,6 @@ def __deepcopy__(self: NDFrameT, memo=None) -> NDFrameT:
"""
return self.copy(deep=True)
- @final
- def _convert(
- self: NDFrameT,
- *,
- datetime: bool_t = False,
- timedelta: bool_t = False,
- ) -> NDFrameT:
- """
- Attempt to infer better dtype for object columns.
-
- Parameters
- ----------
- datetime : bool, default False
- If True, convert to date where possible.
- timedelta : bool, default False
- If True, convert to timedelta where possible.
-
- Returns
- -------
- converted : same as input object
- """
- validate_bool_kwarg(datetime, "datetime")
- validate_bool_kwarg(timedelta, "timedelta")
- return self._constructor(
- self._mgr.convert(
- datetime=datetime,
- timedelta=timedelta,
- copy=True,
- )
- ).__finalize__(self)
-
@final
def infer_objects(self: NDFrameT) -> NDFrameT:
"""
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index a80892a145a70..d3e37a40614b3 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1681,9 +1681,8 @@ def _wrap_agged_manager(self, mgr: Manager2D) -> DataFrame:
if self.axis == 1:
result = result.T
- # Note: we only need to pass datetime=True in order to get numeric
- # values converted
- return self._reindex_output(result)._convert(datetime=True)
+ # Note: we really only care about inferring numeric dtypes here
+ return self._reindex_output(result).infer_objects()
def _iterate_column_groupbys(self, obj: DataFrame | Series):
for i, colname in enumerate(obj.columns):
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index 324076cd38917..3a634a60e784e 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -600,9 +600,9 @@ def _compute_plot_data(self):
self.subplots = True
data = reconstruct_data_with_by(self.data, by=self.by, cols=self.columns)
- # GH16953, _convert is needed as fallback, for ``Series``
+ # GH16953, infer_objects is needed as fallback, for ``Series``
# with ``dtype == object``
- data = data._convert(datetime=True, timedelta=True)
+ data = data.infer_objects()
include_type = [np.number, "datetime", "datetimetz", "timedelta"]
# GH23719, allow plotting boolean
diff --git a/pandas/plotting/_matplotlib/hist.py b/pandas/plotting/_matplotlib/hist.py
index 27c69a31f31a2..337628aa3bc2e 100644
--- a/pandas/plotting/_matplotlib/hist.py
+++ b/pandas/plotting/_matplotlib/hist.py
@@ -78,7 +78,7 @@ def _args_adjust(self) -> None:
def _calculate_bins(self, data: DataFrame) -> np.ndarray:
"""Calculate bins given data"""
- nd_values = data._convert(datetime=True)._get_numeric_data()
+ nd_values = data.infer_objects()._get_numeric_data()
values = np.ravel(nd_values)
values = values[~isna(values)]
diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py
index 7bf1621d0acea..e7c2618d388c2 100644
--- a/pandas/tests/apply/test_frame_apply.py
+++ b/pandas/tests/apply/test_frame_apply.py
@@ -461,7 +461,7 @@ def test_apply_convert_objects():
}
)
- result = expected.apply(lambda x: x, axis=1)._convert(datetime=True)
+ result = expected.apply(lambda x: x, axis=1)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_convert.py b/pandas/tests/frame/methods/test_convert.py
deleted file mode 100644
index c6c70210d1cc4..0000000000000
--- a/pandas/tests/frame/methods/test_convert.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import numpy as np
-import pytest
-
-from pandas import DataFrame
-import pandas._testing as tm
-
-
-class TestConvert:
- def test_convert_objects(self, float_string_frame):
-
- oops = float_string_frame.T.T
- converted = oops._convert(datetime=True)
- tm.assert_frame_equal(converted, float_string_frame)
- assert converted["A"].dtype == np.float64
-
- # force numeric conversion
- float_string_frame["H"] = "1."
- float_string_frame["I"] = "1"
-
- # add in some items that will be nan
- float_string_frame["J"] = "1."
- float_string_frame["K"] = "1"
- float_string_frame.loc[float_string_frame.index[0:5], ["J", "K"]] = "garbled"
- converted = float_string_frame._convert(datetime=True)
- tm.assert_frame_equal(converted, float_string_frame)
-
- # via astype
- converted = float_string_frame.copy()
- converted["H"] = converted["H"].astype("float64")
- converted["I"] = converted["I"].astype("int64")
- assert converted["H"].dtype == "float64"
- assert converted["I"].dtype == "int64"
-
- # via astype, but errors
- converted = float_string_frame.copy()
- with pytest.raises(ValueError, match="invalid literal"):
- converted["H"].astype("int32")
-
- def test_convert_objects_no_conversion(self):
- mixed1 = DataFrame({"a": [1, 2, 3], "b": [4.0, 5, 6], "c": ["x", "y", "z"]})
- mixed2 = mixed1._convert(datetime=True)
- tm.assert_frame_equal(mixed1, mixed2)
diff --git a/pandas/tests/io/pytables/test_append.py b/pandas/tests/io/pytables/test_append.py
index 4ae31f300cb6f..5633b9f8a71c7 100644
--- a/pandas/tests/io/pytables/test_append.py
+++ b/pandas/tests/io/pytables/test_append.py
@@ -592,7 +592,6 @@ def check_col(key, name, size):
df_dc.loc[df_dc.index[7:9], "string"] = "bar"
df_dc["string2"] = "cool"
df_dc["datetime"] = Timestamp("20010102")
- df_dc = df_dc._convert(datetime=True)
df_dc.loc[df_dc.index[3:5], ["A", "B", "datetime"]] = np.nan
_maybe_remove(store, "df_dc")
diff --git a/pandas/tests/io/pytables/test_errors.py b/pandas/tests/io/pytables/test_errors.py
index 7e590df95f952..7629e8ca7dfc2 100644
--- a/pandas/tests/io/pytables/test_errors.py
+++ b/pandas/tests/io/pytables/test_errors.py
@@ -75,7 +75,7 @@ def test_unimplemented_dtypes_table_columns(setup_path):
df["obj1"] = "foo"
df["obj2"] = "bar"
df["datetime1"] = datetime.date(2001, 1, 2)
- df = df._consolidate()._convert(datetime=True)
+ df = df._consolidate()
with ensure_clean_store(setup_path) as store:
# this fails because we have a date in the object block......
diff --git a/pandas/tests/io/pytables/test_put.py b/pandas/tests/io/pytables/test_put.py
index 2699d33950412..349fe74cb8e71 100644
--- a/pandas/tests/io/pytables/test_put.py
+++ b/pandas/tests/io/pytables/test_put.py
@@ -197,7 +197,7 @@ def test_put_mixed_type(setup_path):
df["datetime1"] = datetime.datetime(2001, 1, 2, 0, 0)
df["datetime2"] = datetime.datetime(2001, 1, 3, 0, 0)
df.loc[df.index[3:6], ["obj1"]] = np.nan
- df = df._consolidate()._convert(datetime=True)
+ df = df._consolidate()
with ensure_clean_store(setup_path) as store:
_maybe_remove(store, "df")
diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py
index 22873b0096817..1263d61b55cd5 100644
--- a/pandas/tests/io/pytables/test_store.py
+++ b/pandas/tests/io/pytables/test_store.py
@@ -129,7 +129,7 @@ def test_repr(setup_path):
df["datetime1"] = datetime.datetime(2001, 1, 2, 0, 0)
df["datetime2"] = datetime.datetime(2001, 1, 3, 0, 0)
df.loc[df.index[3:6], ["obj1"]] = np.nan
- df = df._consolidate()._convert(datetime=True)
+ df = df._consolidate()
with catch_warnings(record=True):
simplefilter("ignore", pd.errors.PerformanceWarning)
@@ -444,7 +444,7 @@ def test_table_mixed_dtypes(setup_path):
df["datetime1"] = datetime.datetime(2001, 1, 2, 0, 0)
df["datetime2"] = datetime.datetime(2001, 1, 3, 0, 0)
df.loc[df.index[3:6], ["obj1"]] = np.nan
- df = df._consolidate()._convert(datetime=True)
+ df = df._consolidate()
with ensure_clean_store(setup_path) as store:
store.append("df1_mixed", df)
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index b1fcdd8df01ad..ffc5afcc70bb9 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -627,7 +627,7 @@ def try_remove_ws(x):
]
dfnew = df.applymap(try_remove_ws).replace(old, new)
gtnew = ground_truth.applymap(try_remove_ws)
- converted = dfnew._convert(datetime=True)
+ converted = dfnew
date_cols = ["Closing Date", "Updated Date"]
converted[date_cols] = converted[date_cols].apply(to_datetime)
tm.assert_frame_equal(converted, gtnew)
diff --git a/pandas/tests/series/methods/test_convert.py b/pandas/tests/series/methods/test_convert.py
deleted file mode 100644
index f979a28154d4e..0000000000000
--- a/pandas/tests/series/methods/test_convert.py
+++ /dev/null
@@ -1,94 +0,0 @@
-from datetime import datetime
-
-import pytest
-
-from pandas import (
- Series,
- Timestamp,
-)
-import pandas._testing as tm
-
-
-class TestConvert:
- def test_convert(self):
- # GH#10265
- dt = datetime(2001, 1, 1, 0, 0)
- td = dt - datetime(2000, 1, 1, 0, 0)
-
- # Test coercion with mixed types
- ser = Series(["a", "3.1415", dt, td])
-
- # Test standard conversion returns original
- results = ser._convert(datetime=True)
- tm.assert_series_equal(results, ser)
-
- results = ser._convert(timedelta=True)
- tm.assert_series_equal(results, ser)
-
- def test_convert_numeric_strings_with_other_true_args(self):
- # test pass-through and non-conversion when other types selected
- ser = Series(["1.0", "2.0", "3.0"])
- results = ser._convert(datetime=True, timedelta=True)
- tm.assert_series_equal(results, ser)
-
- def test_convert_datetime_objects(self):
- ser = Series(
- [datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)], dtype="O"
- )
- results = ser._convert(datetime=True, timedelta=True)
- expected = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)])
- tm.assert_series_equal(results, expected)
- results = ser._convert(datetime=False, timedelta=True)
- tm.assert_series_equal(results, ser)
-
- def test_convert_datetime64(self):
- # no-op if already dt64 dtype
- ser = Series(
- [
- datetime(2001, 1, 1, 0, 0),
- datetime(2001, 1, 2, 0, 0),
- datetime(2001, 1, 3, 0, 0),
- ]
- )
-
- result = ser._convert(datetime=True)
- expected = Series(
- [Timestamp("20010101"), Timestamp("20010102"), Timestamp("20010103")],
- dtype="M8[ns]",
- )
- tm.assert_series_equal(result, expected)
-
- result = ser._convert(datetime=True)
- tm.assert_series_equal(result, expected)
-
- def test_convert_timedeltas(self):
- td = datetime(2001, 1, 1, 0, 0) - datetime(2000, 1, 1, 0, 0)
- ser = Series([td, td], dtype="O")
- results = ser._convert(datetime=True, timedelta=True)
- expected = Series([td, td])
- tm.assert_series_equal(results, expected)
- results = ser._convert(datetime=True, timedelta=False)
- tm.assert_series_equal(results, ser)
-
- def test_convert_preserve_non_object(self):
- # preserve if non-object
- ser = Series([1], dtype="float32")
- result = ser._convert(datetime=True)
- tm.assert_series_equal(result, ser)
-
- def test_convert_no_arg_error(self):
- ser = Series(["1.0", "2"])
- msg = r"At least one of datetime or timedelta must be True\."
- with pytest.raises(ValueError, match=msg):
- ser._convert()
-
- def test_convert_preserve_bool(self):
- ser = Series([1, True, 3, 5], dtype=object)
- res = ser._convert(datetime=True)
- tm.assert_series_equal(res, ser)
-
- def test_convert_preserve_all_bool(self):
- ser = Series([False, True, False, False], dtype=object)
- res = ser._convert(datetime=True)
- expected = Series([False, True, False, False], dtype=bool)
- tm.assert_series_equal(res, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50026 | 2022-12-02T20:15:38Z | 2022-12-02T23:03:23Z | 2022-12-02T23:03:23Z | 2022-12-02T23:09:46Z |
BUG: when flooring, ambiguous parameter unnecessarily used (and raising Error) | diff --git a/doc/source/whatsnew/v1.5.3.rst b/doc/source/whatsnew/v1.5.3.rst
index b6c1c857717c7..763f9f87194d5 100644
--- a/doc/source/whatsnew/v1.5.3.rst
+++ b/doc/source/whatsnew/v1.5.3.rst
@@ -33,7 +33,6 @@ Bug fixes
Other
~~~~~
-
--
.. ---------------------------------------------------------------------------
.. _whatsnew_153.contributors:
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 62b0ea5307e41..d0eed405c944c 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -671,7 +671,7 @@ Timezones
^^^^^^^^^
- Bug in :meth:`Series.astype` and :meth:`DataFrame.astype` with object-dtype containing multiple timezone-aware ``datetime`` objects with heterogeneous timezones to a :class:`DatetimeTZDtype` incorrectly raising (:issue:`32581`)
- Bug in :func:`to_datetime` was failing to parse date strings with timezone name when ``format`` was specified with ``%Z`` (:issue:`49748`)
--
+- Better error message when passing invalid values to ``ambiguous`` parameter in :meth:`Timestamp.tz_localize` (:issue:`49565`)
Numeric
^^^^^^^
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index f987a2feb2717..f25114c273bcf 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -6,8 +6,8 @@ construction requirements, we need to do object instantiation in python
(see Timestamp class below). This will serve as a C extension type that
shadows the python class, where we do any heavy lifting.
"""
-import warnings
+import warnings
cimport cython
import numpy as np
@@ -1946,8 +1946,11 @@ default 'raise'
>>> pd.NaT.tz_localize()
NaT
"""
- if ambiguous == "infer":
- raise ValueError("Cannot infer offset with only one time.")
+ if not isinstance(ambiguous, bool) and ambiguous not in {"NaT", "raise"}:
+ raise ValueError(
+ "'ambiguous' parameter must be one of: "
+ "True, False, 'NaT', 'raise' (default)"
+ )
nonexistent_options = ("raise", "NaT", "shift_forward", "shift_backward")
if nonexistent not in nonexistent_options and not PyDelta_Check(nonexistent):
diff --git a/pandas/tests/scalar/timestamp/test_timezones.py b/pandas/tests/scalar/timestamp/test_timezones.py
index 3ebffaad23910..d7db99333cd03 100644
--- a/pandas/tests/scalar/timestamp/test_timezones.py
+++ b/pandas/tests/scalar/timestamp/test_timezones.py
@@ -6,6 +6,7 @@
datetime,
timedelta,
)
+import re
import dateutil
from dateutil.tz import (
@@ -102,7 +103,10 @@ def test_tz_localize_ambiguous(self):
ts_no_dst = ts.tz_localize("US/Eastern", ambiguous=False)
assert (ts_no_dst.value - ts_dst.value) / 1e9 == 3600
- msg = "Cannot infer offset with only one time"
+ msg = re.escape(
+ "'ambiguous' parameter must be one of: "
+ "True, False, 'NaT', 'raise' (default)"
+ )
with pytest.raises(ValueError, match=msg):
ts.tz_localize("US/Eastern", ambiguous="infer")
@@ -182,8 +186,8 @@ def test_tz_localize_ambiguous_compat(self):
pytz_zone = "Europe/London"
dateutil_zone = "dateutil/Europe/London"
- result_pytz = naive.tz_localize(pytz_zone, ambiguous=0)
- result_dateutil = naive.tz_localize(dateutil_zone, ambiguous=0)
+ result_pytz = naive.tz_localize(pytz_zone, ambiguous=False)
+ result_dateutil = naive.tz_localize(dateutil_zone, ambiguous=False)
assert result_pytz.value == result_dateutil.value
assert result_pytz.value == 1382835600000000000
@@ -194,8 +198,8 @@ def test_tz_localize_ambiguous_compat(self):
assert str(result_pytz) == str(result_dateutil)
# 1 hour difference
- result_pytz = naive.tz_localize(pytz_zone, ambiguous=1)
- result_dateutil = naive.tz_localize(dateutil_zone, ambiguous=1)
+ result_pytz = naive.tz_localize(pytz_zone, ambiguous=True)
+ result_dateutil = naive.tz_localize(dateutil_zone, ambiguous=True)
assert result_pytz.value == result_dateutil.value
assert result_pytz.value == 1382832000000000000
@@ -357,7 +361,6 @@ def test_astimezone(self, tzstr):
@td.skip_if_windows
def test_tz_convert_utc_with_system_utc(self):
-
# from system utc to real utc
ts = Timestamp("2001-01-05 11:56", tz=timezones.maybe_get_tz("dateutil/UTC"))
# check that the time hasn't changed.
| - [x] closes #49565
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50024 | 2022-12-02T19:00:38Z | 2022-12-12T18:28:15Z | 2022-12-12T18:28:15Z | 2022-12-12T18:28:22Z |
BUG: pd.DateOffset handle milliseconds | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 11185e0370e30..99ed779487c34 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -799,6 +799,7 @@ Datetimelike
- Bug in :class:`Timestamp` was showing ``UserWarning``, which was not actionable by users, when parsing non-ISO8601 delimited date strings (:issue:`50232`)
- Bug in :func:`to_datetime` was showing misleading ``ValueError`` when parsing dates with format containing ISO week directive and ISO weekday directive (:issue:`50308`)
- Bug in :func:`to_datetime` was not raising ``ValueError`` when invalid format was passed and ``errors`` was ``'ignore'`` or ``'coerce'`` (:issue:`50266`)
+- Bug in :class:`DateOffset` was throwing ``TypeError`` when constructing with milliseconds and another super-daily argument (:issue:`49897`)
-
Timedelta
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index f9905f297be10..470d1e89e5b88 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -298,43 +298,54 @@ _relativedelta_kwds = {"years", "months", "weeks", "days", "year", "month",
cdef _determine_offset(kwds):
- # timedelta is used for sub-daily plural offsets and all singular
- # offsets, relativedelta is used for plural offsets of daily length or
- # more, nanosecond(s) are handled by apply_wraps
- kwds_no_nanos = dict(
- (k, v) for k, v in kwds.items()
- if k not in ("nanosecond", "nanoseconds")
- )
- # TODO: Are nanosecond and nanoseconds allowed somewhere?
-
- _kwds_use_relativedelta = ("years", "months", "weeks", "days",
- "year", "month", "week", "day", "weekday",
- "hour", "minute", "second", "microsecond",
- "millisecond")
-
- use_relativedelta = False
- if len(kwds_no_nanos) > 0:
- if any(k in _kwds_use_relativedelta for k in kwds_no_nanos):
- if "millisecond" in kwds_no_nanos:
- raise NotImplementedError(
- "Using DateOffset to replace `millisecond` component in "
- "datetime object is not supported. Use "
- "`microsecond=timestamp.microsecond % 1000 + ms * 1000` "
- "instead."
- )
- offset = relativedelta(**kwds_no_nanos)
- use_relativedelta = True
- else:
- # sub-daily offset - use timedelta (tz-aware)
- offset = timedelta(**kwds_no_nanos)
- elif any(nano in kwds for nano in ("nanosecond", "nanoseconds")):
- offset = timedelta(days=0)
- else:
- # GH 45643/45890: (historically) defaults to 1 day for non-nano
- # since datetime.timedelta doesn't handle nanoseconds
- offset = timedelta(days=1)
- return offset, use_relativedelta
+ if not kwds:
+ # GH 45643/45890: (historically) defaults to 1 day
+ return timedelta(days=1), False
+
+ if "millisecond" in kwds:
+ raise NotImplementedError(
+ "Using DateOffset to replace `millisecond` component in "
+ "datetime object is not supported. Use "
+ "`microsecond=timestamp.microsecond % 1000 + ms * 1000` "
+ "instead."
+ )
+
+ nanos = {"nanosecond", "nanoseconds"}
+
+ # nanos are handled by apply_wraps
+ if all(k in nanos for k in kwds):
+ return timedelta(days=0), False
+ kwds_no_nanos = {k: v for k, v in kwds.items() if k not in nanos}
+
+ kwds_use_relativedelta = {
+ "year", "month", "day", "hour", "minute",
+ "second", "microsecond", "weekday", "years", "months", "weeks", "days",
+ "hours", "minutes", "seconds", "microseconds"
+ }
+
+ # "weeks" and "days" are left out despite being valid args for timedelta,
+ # because (historically) timedelta is used only for sub-daily.
+ kwds_use_timedelta = {
+ "seconds", "microseconds", "milliseconds", "minutes", "hours",
+ }
+
+ if all(k in kwds_use_timedelta for k in kwds_no_nanos):
+ # Sub-daily offset - use timedelta (tz-aware)
+ # This also handles "milliseconds" (plur): see GH 49897
+ return timedelta(**kwds_no_nanos), False
+
+ # convert milliseconds to microseconds, so relativedelta can parse it
+ if "milliseconds" in kwds_no_nanos:
+ micro = kwds_no_nanos.pop("milliseconds") * 1000
+ kwds_no_nanos["microseconds"] = kwds_no_nanos.get("microseconds", 0) + micro
+
+ if all(k in kwds_use_relativedelta for k in kwds_no_nanos):
+ return relativedelta(**kwds_no_nanos), True
+
+ raise ValueError(
+ f"Invalid argument/s or bad combination of arguments: {list(kwds.keys())}"
+ )
# ---------------------------------------------------------------------
# Mixins & Singletons
@@ -1163,7 +1174,6 @@ cdef class RelativeDeltaOffset(BaseOffset):
def __init__(self, n=1, normalize=False, **kwds):
BaseOffset.__init__(self, n, normalize)
-
off, use_rd = _determine_offset(kwds)
object.__setattr__(self, "_offset", off)
object.__setattr__(self, "_use_relativedelta", use_rd)
diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py
index 63594c2b2c48a..135227d66d541 100644
--- a/pandas/tests/tseries/offsets/test_offsets.py
+++ b/pandas/tests/tseries/offsets/test_offsets.py
@@ -739,6 +739,33 @@ def test_eq(self):
assert DateOffset(milliseconds=3) != DateOffset(milliseconds=7)
+ @pytest.mark.parametrize(
+ "offset_kwargs, expected_arg",
+ [
+ ({"microseconds": 1, "milliseconds": 1}, "2022-01-01 00:00:00.001001"),
+ ({"seconds": 1, "milliseconds": 1}, "2022-01-01 00:00:01.001"),
+ ({"minutes": 1, "milliseconds": 1}, "2022-01-01 00:01:00.001"),
+ ({"hours": 1, "milliseconds": 1}, "2022-01-01 01:00:00.001"),
+ ({"days": 1, "milliseconds": 1}, "2022-01-02 00:00:00.001"),
+ ({"weeks": 1, "milliseconds": 1}, "2022-01-08 00:00:00.001"),
+ ({"months": 1, "milliseconds": 1}, "2022-02-01 00:00:00.001"),
+ ({"years": 1, "milliseconds": 1}, "2023-01-01 00:00:00.001"),
+ ],
+ )
+ def test_milliseconds_combination(self, offset_kwargs, expected_arg):
+ # GH 49897
+ offset = DateOffset(**offset_kwargs)
+ ts = Timestamp("2022-01-01")
+ result = ts + offset
+ expected = Timestamp(expected_arg)
+
+ assert result == expected
+
+ def test_offset_invalid_arguments(self):
+ msg = "^Invalid argument/s or bad combination of arguments"
+ with pytest.raises(ValueError, match=msg):
+ DateOffset(picoseconds=1)
+
class TestOffsetNames:
def test_get_offset_name(self):
| - [X] closes #49897
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50020 | 2022-12-02T18:06:15Z | 2022-12-22T22:49:14Z | 2022-12-22T22:49:14Z | 2023-01-22T19:58:41Z |
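The core idea of the fix above — folding the `milliseconds` keyword into `microseconds` before handing the remaining kwargs to `relativedelta`, which accepts microseconds but not milliseconds — can be shown in isolation (the helper name here is illustrative, not an actual pandas function):

```python
def fold_milliseconds(kwds: dict) -> dict:
    """Fold 'milliseconds' into 'microseconds' (1 ms == 1000 us)."""
    kwds = dict(kwds)  # do not mutate the caller's dict
    if "milliseconds" in kwds:
        micro = kwds.pop("milliseconds") * 1000
        kwds["microseconds"] = kwds.get("microseconds", 0) + micro
    return kwds


print(fold_milliseconds({"months": 1, "milliseconds": 1}))
# {'months': 1, 'microseconds': 1000}
```

With the milliseconds folded away, a combination like `months=1, milliseconds=1` no longer raises when the super-daily path builds a `relativedelta`.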
TST: Add test for isin for filtering with mixed types | diff --git a/pandas/tests/series/methods/test_isin.py b/pandas/tests/series/methods/test_isin.py
index 449724508fcaa..92ebee9ffa7a5 100644
--- a/pandas/tests/series/methods/test_isin.py
+++ b/pandas/tests/series/methods/test_isin.py
@@ -220,3 +220,17 @@ def test_isin_complex_numbers(array, expected):
# GH 17927
result = Series(array).isin([1j, 1 + 1j, 1 + 2j])
tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "data,is_in",
+ [([1, [2]], [1]), (["simple str", [{"values": 3}]], ["simple str"])],
+)
+def test_isin_filtering_with_mixed_object_types(data, is_in):
+ # GH 20883
+
+ ser = Series(data)
+ result = ser.isin(is_in)
+ expected = Series([True, False])
+
+ tm.assert_series_equal(result, expected)
| - [x] closes #20883
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Added tests to check that `.isin` works as expected with different types in the series.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50019 | 2022-12-02T17:12:45Z | 2022-12-02T19:58:42Z | 2022-12-02T19:58:42Z | 2022-12-05T09:10:44Z |
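The behavior these tests pin down — membership filtering that survives unhashable elements such as lists and dicts in an object Series — can be modeled in pure Python. This is a sketch of the semantics only, not pandas' hashtable-based `isin` implementation:

```python
def isin_mixed(values, targets):
    """Return a boolean mask: True where a value appears in `targets`.
    Unhashable values (lists, dicts) fall back to an equality scan."""
    hashable = set()
    unhashable = []
    for t in targets:
        try:
            hashable.add(t)
        except TypeError:
            unhashable.append(t)
    mask = []
    for v in values:
        try:
            hit = v in hashable
        except TypeError:  # v itself is unhashable
            hit = False
        mask.append(hit or any(v == u for u in unhashable))
    return mask


print(isin_mixed([1, [2]], [1]))                                    # [True, False]
print(isin_mixed(["simple str", [{"values": 3}]], ["simple str"]))  # [True, False]
```

The two calls mirror the parametrized cases in `test_isin_filtering_with_mixed_object_types`.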
DEV: enable liveserver for docs on gitpod | diff --git a/.gitpod.yml b/.gitpod.yml
index 6bba39823791e..877c16eefb5d6 100644
--- a/.gitpod.yml
+++ b/.gitpod.yml
@@ -32,6 +32,7 @@ vscode:
- yzhang.markdown-all-in-one
- eamodio.gitlens
- lextudio.restructuredtext
+ - ritwickdey.liveserver
# add or remove what you think is generally useful to most contributors
# avoid adding too many. they each open a pop-up window
| Another follow-up for https://github.com/pandas-dev/pandas/issues/47790 | https://api.github.com/repos/pandas-dev/pandas/pulls/50018 | 2022-12-02T17:06:38Z | 2022-12-02T20:29:54Z | 2022-12-02T20:29:54Z | 2022-12-02T20:29:58Z |
DOC: align description of sort argument with implementation | diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 5ce69d2c2ab4c..aced5a73a1f02 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -195,10 +195,7 @@ def concat(
Check whether the new concatenated axis contains duplicates. This can
be very expensive relative to the actual data concatenation.
sort : bool, default False
- Sort non-concatenation axis if it is not already aligned when `join`
- is 'outer'.
- This has no effect when ``join='inner'``, which already preserves
- the order of the non-concatenation axis.
+ Sort non-concatenation axis if it is not already aligned.
.. versionchanged:: 1.0.0
| If `sort` is passed to `concat`, the non-concatenation axis is always sorted regardless of `join`, so the conditional wording is removed.
- [x] closes #49646
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
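A minimal sketch of the documented behavior (assumes a recent pandas where `sort` applies for any `join`; the frames and column names are illustrative):

```python
import pandas as pd

# Unaligned column axes: df2 has an extra column "c".
df1 = pd.DataFrame({"b": [1], "a": [2]})
df2 = pd.DataFrame({"c": [3], "b": [4], "a": [5]})

# Even with join="inner", sort=True sorts the non-concatenation
# (column) axis, so the shared columns come out as ["a", "b"].
inner = pd.concat([df1, df2], join="inner", sort=True)
print(list(inner.columns))  # ['a', 'b']
```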
| https://api.github.com/repos/pandas-dev/pandas/pulls/50017 | 2022-12-02T17:04:24Z | 2022-12-02T18:33:35Z | 2022-12-02T18:33:35Z | 2023-07-04T14:01:06Z |
ENH: add copy on write for df reorder_levels GH49473 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c7f0a7ced7576..eb3365d4f8410 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7407,7 +7407,7 @@ def swaplevel(self, i: Axis = -2, j: Axis = -1, axis: Axis = 0) -> DataFrame:
result.columns = result.columns.swaplevel(i, j)
return result
- def reorder_levels(self, order: Sequence[Axis], axis: Axis = 0) -> DataFrame:
+ def reorder_levels(self, order: Sequence[int | str], axis: Axis = 0) -> DataFrame:
"""
Rearrange index levels using input order. May not drop or duplicate levels.
@@ -7452,7 +7452,7 @@ class diet
if not isinstance(self._get_axis(axis), MultiIndex): # pragma: no cover
raise TypeError("Can only reorder levels on a hierarchical axis.")
- result = self.copy()
+ result = self.copy(deep=None)
if axis == 0:
assert isinstance(result.index, MultiIndex)
diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index 6707f1411cbc7..8015eb93988c9 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -3,6 +3,7 @@
from pandas import (
DataFrame,
+ MultiIndex,
Series,
)
import pandas._testing as tm
@@ -293,7 +294,25 @@ def test_assign(using_copy_on_write):
else:
assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
- # modify df2 to trigger CoW for that block
+ df2.iloc[0, 0] = 0
+ if using_copy_on_write:
+ assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+ tm.assert_frame_equal(df, df_orig)
+
+
+def test_reorder_levels(using_copy_on_write):
+ index = MultiIndex.from_tuples(
+ [(1, 1), (1, 2), (2, 1), (2, 2)], names=["one", "two"]
+ )
+ df = DataFrame({"a": [1, 2, 3, 4]}, index=index)
+ df_orig = df.copy()
+ df2 = df.reorder_levels(order=["two", "one"])
+
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+ else:
+ assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+
df2.iloc[0, 0] = 0
if using_copy_on_write:
assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
| - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
Added copy-on-write to `df.reorder_levels()`.
Progress towards #49473 via [PyData pandas sprint](https://github.com/noatamir/pydata-global-sprints/issues/11).
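A minimal usage sketch of the method touched here (the copy-on-write behavior itself only shows up with the copy-on-write option enabled; this just shows what `reorder_levels` does):

```python
import pandas as pd

index = pd.MultiIndex.from_tuples(
    [(1, 1), (1, 2), (2, 1), (2, 2)], names=["one", "two"]
)
df = pd.DataFrame({"a": [1, 2, 3, 4]}, index=index)

# Swap the level order of the MultiIndex without touching the data.
df2 = df.reorder_levels(order=["two", "one"])
print(df2.index.names)  # FrozenList(['two', 'one'])
```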
Changed the type hint because the existing `Sequence[Axis]` annotation caused a typing error. | https://api.github.com/repos/pandas-dev/pandas/pulls/50016 | 2022-12-02T16:59:06Z | 2022-12-02T21:06:43Z | 2022-12-02T21:06:43Z | 2022-12-02T21:07:00Z |
BUG: frame[object].astype(M8[unsupported]) not raising | diff --git a/pandas/core/dtypes/astype.py b/pandas/core/dtypes/astype.py
index 7b5c77af7864b..53c2cfd345e32 100644
--- a/pandas/core/dtypes/astype.py
+++ b/pandas/core/dtypes/astype.py
@@ -135,16 +135,15 @@ def astype_nansafe(
elif is_object_dtype(arr.dtype):
# if we have a datetime/timedelta array of objects
- # then coerce to a proper dtype and recall astype_nansafe
+ # then coerce to datetime64[ns] and use DatetimeArray.astype
if is_datetime64_dtype(dtype):
from pandas import to_datetime
- return astype_nansafe(
- to_datetime(arr.ravel()).values.reshape(arr.shape),
- dtype,
- copy=copy,
- )
+ dti = to_datetime(arr.ravel())
+ dta = dti._data.reshape(arr.shape)
+ return dta.astype(dtype, copy=False)._ndarray
+
elif is_timedelta64_dtype(dtype):
# bc we know arr.dtype == object, this is equivalent to
# `np.asarray(to_timedelta(arr))`, but using a lower-level API that
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index c8a3c992248ad..472ae80dc1838 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -757,7 +757,13 @@ def test_astype_datetime64_bad_dtype_raises(from_type, to_type):
to_type = np.dtype(to_type)
- with pytest.raises(TypeError, match="cannot astype"):
+ msg = "|".join(
+ [
+ "cannot astype a timedelta",
+ "cannot astype a datetimelike",
+ ]
+ )
+ with pytest.raises(TypeError, match=msg):
astype_nansafe(arr, dtype=to_type)
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index 96ef49acdcb21..9d56dba9b480d 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -377,6 +377,16 @@ def test_astype_column_metadata(self, dtype):
df = df.astype(dtype)
tm.assert_index_equal(df.columns, columns)
+ @pytest.mark.parametrize("unit", ["Y", "M", "W", "D", "h", "m"])
+ def test_astype_from_object_to_datetime_unit(self, unit):
+ vals = [
+ ["2015-01-01", "2015-01-02", "2015-01-03"],
+ ["2017-01-01", "2017-01-02", "2017-02-03"],
+ ]
+ df = DataFrame(vals, dtype=object)
+ with pytest.raises(TypeError, match="Cannot cast"):
+ df.astype(f"M8[{unit}]")
+
@pytest.mark.parametrize("dtype", ["M8", "m8"])
@pytest.mark.parametrize("unit", ["ns", "us", "ms", "s", "h", "m", "D"])
def test_astype_from_datetimelike_to_object(self, dtype, unit):
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 32a4dc06d08e2..ef80cc847a5b8 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -1955,19 +1955,11 @@ def test_constructor_datetimes_with_nulls(self, arr):
@pytest.mark.parametrize("order", ["K", "A", "C", "F"])
@pytest.mark.parametrize(
- "dtype",
- [
- "datetime64[M]",
- "datetime64[D]",
- "datetime64[h]",
- "datetime64[m]",
- "datetime64[s]",
- "datetime64[ms]",
- "datetime64[us]",
- "datetime64[ns]",
- ],
+ "unit",
+ ["M", "D", "h", "m", "s", "ms", "us", "ns"],
)
- def test_constructor_datetimes_non_ns(self, order, dtype):
+ def test_constructor_datetimes_non_ns(self, order, unit):
+ dtype = f"datetime64[{unit}]"
na = np.array(
[
["2015-01-01", "2015-01-02", "2015-01-03"],
@@ -1977,13 +1969,16 @@ def test_constructor_datetimes_non_ns(self, order, dtype):
order=order,
)
df = DataFrame(na)
- expected = DataFrame(
- [
- ["2015-01-01", "2015-01-02", "2015-01-03"],
- ["2017-01-01", "2017-01-02", "2017-02-03"],
- ]
- )
- expected = expected.astype(dtype=dtype)
+ expected = DataFrame(na.astype("M8[ns]"))
+ if unit in ["M", "D", "h", "m"]:
+ with pytest.raises(TypeError, match="Cannot cast"):
+ expected.astype(dtype)
+
+ # instead the constructor casts to the closest supported reso, i.e. "s"
+ expected = expected.astype("datetime64[s]")
+ else:
+ expected = expected.astype(dtype=dtype)
+
tm.assert_frame_equal(df, expected)
@pytest.mark.parametrize("order", ["K", "A", "C", "F"])
diff --git a/pandas/tests/io/xml/test_xml_dtypes.py b/pandas/tests/io/xml/test_xml_dtypes.py
index 5629830767c3c..412c8a8dde175 100644
--- a/pandas/tests/io/xml/test_xml_dtypes.py
+++ b/pandas/tests/io/xml/test_xml_dtypes.py
@@ -128,14 +128,14 @@ def test_dtypes_with_names(parser):
df_result = read_xml(
xml_dates,
names=["Col1", "Col2", "Col3", "Col4"],
- dtype={"Col2": "string", "Col3": "Int64", "Col4": "datetime64"},
+ dtype={"Col2": "string", "Col3": "Int64", "Col4": "datetime64[ns]"},
parser=parser,
)
df_iter = read_xml_iterparse(
xml_dates,
parser=parser,
names=["Col1", "Col2", "Col3", "Col4"],
- dtype={"Col2": "string", "Col3": "Int64", "Col4": "datetime64"},
+ dtype={"Col2": "string", "Col3": "Int64", "Col4": "datetime64[ns]"},
iterparse={"row": ["shape", "degrees", "sides", "date"]},
)
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 946e7e48148b4..ab589dc26a3ac 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -730,13 +730,13 @@ def test_other_datetime_unit(self, unit):
ser = Series([None, None], index=[101, 102], name="days")
dtype = f"datetime64[{unit}]"
- df2 = ser.astype(dtype).to_frame("days")
if unit in ["D", "h", "m"]:
# not supported so we cast to the nearest supported unit, seconds
exp_dtype = "datetime64[s]"
else:
exp_dtype = dtype
+ df2 = ser.astype(exp_dtype).to_frame("days")
assert df2["days"].dtype == exp_dtype
result = df1.merge(df2, left_on="entity_id", right_index=True)
| Already has a whatsnew entry: `_whatsnew_200.api_breaking.astype_to_unsupported_datetimelike` | https://api.github.com/repos/pandas-dev/pandas/pulls/50015 | 2022-12-02T16:40:27Z | 2022-12-02T18:56:51Z | 2022-12-02T18:56:51Z | 2022-12-02T19:17:10Z |
STYLE double-quote cython strings #21 | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 0779f9c95f7b4..18f3644a0e0ae 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -27,6 +27,7 @@ repos:
rev: v0.9.1
hooks:
- id: cython-lint
+ - id: double-quote-cython-strings
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx
index 7b9fe6422544c..fcd30ab1faec8 100644
--- a/pandas/_libs/algos.pyx
+++ b/pandas/_libs/algos.pyx
@@ -180,7 +180,7 @@ def is_lexsorted(list_of_arrays: list) -> bint:
cdef int64_t **vecs = <int64_t**>malloc(nlevels * sizeof(int64_t*))
for i in range(nlevels):
arr = list_of_arrays[i]
- assert arr.dtype.name == 'int64'
+ assert arr.dtype.name == "int64"
vecs[i] = <int64_t*>cnp.PyArray_DATA(arr)
# Assume uniqueness??
@@ -514,9 +514,9 @@ def validate_limit(nobs: int | None, limit=None) -> int:
lim = nobs
else:
if not util.is_integer_object(limit):
- raise ValueError('Limit must be an integer')
+ raise ValueError("Limit must be an integer")
if limit < 1:
- raise ValueError('Limit must be greater than 0')
+ raise ValueError("Limit must be greater than 0")
lim = limit
return lim
@@ -958,7 +958,7 @@ def rank_1d(
if not ascending:
tiebreak = TIEBREAK_FIRST_DESCENDING
- keep_na = na_option == 'keep'
+ keep_na = na_option == "keep"
N = len(values)
if labels is not None:
@@ -984,7 +984,7 @@ def rank_1d(
# with mask, without obfuscating location of missing data
# in values array
if numeric_object_t is object and values.dtype != np.object_:
- masked_vals = values.astype('O')
+ masked_vals = values.astype("O")
else:
masked_vals = values.copy()
@@ -1005,7 +1005,7 @@ def rank_1d(
# If descending, fill with highest value since descending
# will flip the ordering to still end up with lowest rank.
# Symmetric logic applies to `na_option == 'bottom'`
- nans_rank_highest = ascending ^ (na_option == 'top')
+ nans_rank_highest = ascending ^ (na_option == "top")
nan_fill_val = get_rank_nan_fill_val(nans_rank_highest, <numeric_object_t>0)
if nans_rank_highest:
order = [masked_vals, mask]
@@ -1345,7 +1345,7 @@ def rank_2d(
if not ascending:
tiebreak = TIEBREAK_FIRST_DESCENDING
- keep_na = na_option == 'keep'
+ keep_na = na_option == "keep"
# For cases where a mask is not possible, we can avoid mask checks
check_mask = (
@@ -1362,9 +1362,9 @@ def rank_2d(
if numeric_object_t is object:
if values.dtype != np.object_:
- values = values.astype('O')
+ values = values.astype("O")
- nans_rank_highest = ascending ^ (na_option == 'top')
+ nans_rank_highest = ascending ^ (na_option == "top")
if check_mask:
nan_fill_val = get_rank_nan_fill_val(nans_rank_highest, <numeric_object_t>0)
@@ -1385,7 +1385,7 @@ def rank_2d(
order = (values, ~np.asarray(mask))
n, k = (<object>values).shape
- out = np.empty((n, k), dtype='f8', order='F')
+ out = np.empty((n, k), dtype="f8", order="F")
grp_sizes = np.ones(n, dtype=np.int64)
# lexsort is slower, so only use if we need to worry about the mask
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
index a351ad6e461f3..a5b9bf02dcbe2 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -604,12 +604,12 @@ def group_any_all(
intp_t lab
int8_t flag_val, val
- if val_test == 'all':
+ if val_test == "all":
# Because the 'all' value of an empty iterable in Python is True we can
# start with an array full of ones and set to zero when a False value
# is encountered
flag_val = 0
- elif val_test == 'any':
+ elif val_test == "any":
# Because the 'any' value of an empty iterable in Python is False we
# can start with an array full of zeros and set to one only if any
# value encountered is True
@@ -1061,7 +1061,7 @@ def group_ohlc(
N, K = (<object>values).shape
if out.shape[1] != 4:
- raise ValueError('Output array must have 4 columns')
+ raise ValueError("Output array must have 4 columns")
if K > 1:
raise NotImplementedError("Argument 'values' must have only one dimension")
@@ -1157,11 +1157,11 @@ def group_quantile(
)
inter_methods = {
- 'linear': INTERPOLATION_LINEAR,
- 'lower': INTERPOLATION_LOWER,
- 'higher': INTERPOLATION_HIGHER,
- 'nearest': INTERPOLATION_NEAREST,
- 'midpoint': INTERPOLATION_MIDPOINT,
+ "linear": INTERPOLATION_LINEAR,
+ "lower": INTERPOLATION_LOWER,
+ "higher": INTERPOLATION_HIGHER,
+ "nearest": INTERPOLATION_NEAREST,
+ "midpoint": INTERPOLATION_MIDPOINT,
}
interp = inter_methods[interpolation]
diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index 27edc83c6f329..eb4e957f644ac 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -184,8 +184,8 @@ cdef class IndexEngine:
if self.is_monotonic_increasing:
values = self.values
try:
- left = values.searchsorted(val, side='left')
- right = values.searchsorted(val, side='right')
+ left = values.searchsorted(val, side="left")
+ right = values.searchsorted(val, side="right")
except TypeError:
# e.g. GH#29189 get_loc(None) with a Float64Index
# 2021-09-29 Now only reached for object-dtype
@@ -353,8 +353,8 @@ cdef class IndexEngine:
remaining_stargets = set()
for starget in stargets:
try:
- start = values.searchsorted(starget, side='left')
- end = values.searchsorted(starget, side='right')
+ start = values.searchsorted(starget, side="left")
+ end = values.searchsorted(starget, side="right")
except TypeError: # e.g. if we tried to search for string in int array
remaining_stargets.add(starget)
else:
@@ -551,7 +551,7 @@ cdef class DatetimeEngine(Int64Engine):
return self._get_loc_duplicates(conv)
values = self.values
- loc = values.searchsorted(conv, side='left')
+ loc = values.searchsorted(conv, side="left")
if loc == len(values) or values[loc] != conv:
raise KeyError(val)
@@ -655,8 +655,8 @@ cdef class BaseMultiIndexCodesEngine:
# with positive integers (-1 for NaN becomes 1). This enables us to
# differentiate between values that are missing in other and matching
# NaNs. We will set values that are not found to 0 later:
- labels_arr = np.array(labels, dtype='int64').T + multiindex_nulls_shift
- codes = labels_arr.astype('uint64', copy=False)
+ labels_arr = np.array(labels, dtype="int64").T + multiindex_nulls_shift
+ codes = labels_arr.astype("uint64", copy=False)
self.level_has_nans = [-1 in lab for lab in labels]
# Map each codes combination in the index to an integer unambiguously
@@ -693,7 +693,7 @@ cdef class BaseMultiIndexCodesEngine:
if self.level_has_nans[i] and codes.hasnans:
result[codes.isna()] += 1
level_codes.append(result)
- return self._codes_to_ints(np.array(level_codes, dtype='uint64').T)
+ return self._codes_to_ints(np.array(level_codes, dtype="uint64").T)
def get_indexer(self, target: np.ndarray) -> np.ndarray:
"""
@@ -754,12 +754,12 @@ cdef class BaseMultiIndexCodesEngine:
ndarray[int64_t, ndim=1] new_codes, new_target_codes
ndarray[intp_t, ndim=1] sorted_indexer
- target_order = np.argsort(target).astype('int64')
+ target_order = np.argsort(target).astype("int64")
target_values = target[target_order]
num_values, num_target_values = len(values), len(target_values)
new_codes, new_target_codes = (
- np.empty((num_values,)).astype('int64'),
- np.empty((num_target_values,)).astype('int64'),
+ np.empty((num_values,)).astype("int64"),
+ np.empty((num_target_values,)).astype("int64"),
)
# `values` and `target_values` are both sorted, so we walk through them
@@ -809,7 +809,7 @@ cdef class BaseMultiIndexCodesEngine:
raise KeyError(key)
# Transform indices into single integer:
- lab_int = self._codes_to_ints(np.array(indices, dtype='uint64'))
+ lab_int = self._codes_to_ints(np.array(indices, dtype="uint64"))
return self._base.get_loc(self, lab_int)
@@ -940,8 +940,8 @@ cdef class SharedEngine:
if self.is_monotonic_increasing:
values = self.values
try:
- left = values.searchsorted(val, side='left')
- right = values.searchsorted(val, side='right')
+ left = values.searchsorted(val, side="left")
+ right = values.searchsorted(val, side="right")
except TypeError:
# e.g. GH#29189 get_loc(None) with a Float64Index
raise KeyError(val)
diff --git a/pandas/_libs/internals.pyx b/pandas/_libs/internals.pyx
index 43e33ef3e7d7e..ee51a4fd402fb 100644
--- a/pandas/_libs/internals.pyx
+++ b/pandas/_libs/internals.pyx
@@ -69,7 +69,7 @@ cdef class BlockPlacement:
or not cnp.PyArray_ISWRITEABLE(val)
or (<ndarray>val).descr.type_num != cnp.NPY_INTP
):
- arr = np.require(val, dtype=np.intp, requirements='W')
+ arr = np.require(val, dtype=np.intp, requirements="W")
else:
arr = val
# Caller is responsible for ensuring arr.ndim == 1
diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 7ed635718e674..5b2cb880195ec 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -42,7 +42,7 @@ from pandas._libs.tslibs.util cimport (
is_timedelta64_object,
)
-VALID_CLOSED = frozenset(['left', 'right', 'both', 'neither'])
+VALID_CLOSED = frozenset(["left", "right", "both", "neither"])
cdef class IntervalMixin:
@@ -59,7 +59,7 @@ cdef class IntervalMixin:
bool
True if the Interval is closed on the left-side.
"""
- return self.closed in ('left', 'both')
+ return self.closed in ("left", "both")
@property
def closed_right(self):
@@ -73,7 +73,7 @@ cdef class IntervalMixin:
bool
True if the Interval is closed on the left-side.
"""
- return self.closed in ('right', 'both')
+ return self.closed in ("right", "both")
@property
def open_left(self):
@@ -172,9 +172,9 @@ cdef class IntervalMixin:
>>> pd.IntervalIndex(ivs).is_empty
array([ True, False])
"""
- return (self.right == self.left) & (self.closed != 'both')
+ return (self.right == self.left) & (self.closed != "both")
- def _check_closed_matches(self, other, name='other'):
+ def _check_closed_matches(self, other, name="other"):
"""
Check if the closed attribute of `other` matches.
@@ -197,9 +197,9 @@ cdef class IntervalMixin:
cdef bint _interval_like(other):
- return (hasattr(other, 'left')
- and hasattr(other, 'right')
- and hasattr(other, 'closed'))
+ return (hasattr(other, "left")
+ and hasattr(other, "right")
+ and hasattr(other, "closed"))
cdef class Interval(IntervalMixin):
@@ -311,7 +311,7 @@ cdef class Interval(IntervalMixin):
Either ``left``, ``right``, ``both`` or ``neither``.
"""
- def __init__(self, left, right, str closed='right'):
+ def __init__(self, left, right, str closed="right"):
# note: it is faster to just do these checks than to use a special
# constructor (__cinit__/__new__) to avoid them
@@ -343,8 +343,8 @@ cdef class Interval(IntervalMixin):
def __contains__(self, key) -> bool:
if _interval_like(key):
- key_closed_left = key.closed in ('left', 'both')
- key_closed_right = key.closed in ('right', 'both')
+ key_closed_left = key.closed in ("left", "both")
+ key_closed_right = key.closed in ("right", "both")
if self.open_left and key_closed_left:
left_contained = self.left < key.left
else:
@@ -389,15 +389,15 @@ cdef class Interval(IntervalMixin):
left, right = self._repr_base()
name = type(self).__name__
- repr_str = f'{name}({repr(left)}, {repr(right)}, closed={repr(self.closed)})'
+ repr_str = f"{name}({repr(left)}, {repr(right)}, closed={repr(self.closed)})"
return repr_str
def __str__(self) -> str:
left, right = self._repr_base()
- start_symbol = '[' if self.closed_left else '('
- end_symbol = ']' if self.closed_right else ')'
- return f'{start_symbol}{left}, {right}{end_symbol}'
+ start_symbol = "[" if self.closed_left else "("
+ end_symbol = "]" if self.closed_right else ")"
+ return f"{start_symbol}{left}, {right}{end_symbol}"
def __add__(self, y):
if (
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 4890f82c5fdda..e35cf2fb13768 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -394,7 +394,7 @@ def dicts_to_array(dicts: list, columns: list):
k = len(columns)
n = len(dicts)
- result = np.empty((n, k), dtype='O')
+ result = np.empty((n, k), dtype="O")
for i in range(n):
row = dicts[i]
@@ -768,7 +768,7 @@ def is_all_arraylike(obj: list) -> bool:
for i in range(n):
val = obj[i]
if not (isinstance(val, list) or
- util.is_array(val) or hasattr(val, '_data')):
+ util.is_array(val) or hasattr(val, "_data")):
# TODO: EA?
# exclude tuples, frozensets as they may be contained in an Index
all_arrays = False
@@ -786,7 +786,7 @@ def is_all_arraylike(obj: list) -> bool:
@cython.boundscheck(False)
@cython.wraparound(False)
def generate_bins_dt64(ndarray[int64_t, ndim=1] values, const int64_t[:] binner,
- object closed='left', bint hasnans=False):
+ object closed="left", bint hasnans=False):
"""
Int64 (datetime64) version of generic python version in ``groupby.py``.
"""
@@ -794,7 +794,7 @@ def generate_bins_dt64(ndarray[int64_t, ndim=1] values, const int64_t[:] binner,
Py_ssize_t lenidx, lenbin, i, j, bc
ndarray[int64_t, ndim=1] bins
int64_t r_bin, nat_count
- bint right_closed = closed == 'right'
+ bint right_closed = closed == "right"
nat_count = 0
if hasnans:
@@ -873,7 +873,7 @@ def get_level_sorter(
for i in range(len(starts) - 1):
l, r = starts[i], starts[i + 1]
- out[l:r] = l + codes[l:r].argsort(kind='mergesort')
+ out[l:r] = l + codes[l:r].argsort(kind="mergesort")
return out
@@ -892,7 +892,7 @@ def count_level_2d(ndarray[uint8_t, ndim=2, cast=True] mask,
n, k = (<object>mask).shape
if axis == 0:
- counts = np.zeros((max_bin, k), dtype='i8')
+ counts = np.zeros((max_bin, k), dtype="i8")
with nogil:
for i in range(n):
for j in range(k):
@@ -900,7 +900,7 @@ def count_level_2d(ndarray[uint8_t, ndim=2, cast=True] mask,
counts[labels[i], j] += 1
else: # axis == 1
- counts = np.zeros((n, max_bin), dtype='i8')
+ counts = np.zeros((n, max_bin), dtype="i8")
with nogil:
for i in range(n):
for j in range(k):
@@ -1051,7 +1051,7 @@ cpdef bint is_decimal(object obj):
cpdef bint is_interval(object obj):
- return getattr(obj, '_typ', '_typ') == 'interval'
+ return getattr(obj, "_typ", "_typ") == "interval"
def is_period(val: object) -> bool:
@@ -1163,17 +1163,17 @@ _TYPE_MAP = {
# types only exist on certain platform
try:
np.float128
- _TYPE_MAP['float128'] = 'floating'
+ _TYPE_MAP["float128"] = "floating"
except AttributeError:
pass
try:
np.complex256
- _TYPE_MAP['complex256'] = 'complex'
+ _TYPE_MAP["complex256"] = "complex"
except AttributeError:
pass
try:
np.float16
- _TYPE_MAP['float16'] = 'floating'
+ _TYPE_MAP["float16"] = "floating"
except AttributeError:
pass
@@ -1921,7 +1921,7 @@ def is_datetime_with_singletz_array(values: ndarray) -> bool:
for i in range(n):
base_val = values[i]
if base_val is not NaT and base_val is not None and not util.is_nan(base_val):
- base_tz = getattr(base_val, 'tzinfo', None)
+ base_tz = getattr(base_val, "tzinfo", None)
break
for j in range(i, n):
@@ -1929,7 +1929,7 @@ def is_datetime_with_singletz_array(values: ndarray) -> bool:
# NaT can coexist with tz-aware datetimes, so skip if encountered
val = values[j]
if val is not NaT and val is not None and not util.is_nan(val):
- tz = getattr(val, 'tzinfo', None)
+ tz = getattr(val, "tzinfo", None)
if not tz_compare(base_tz, tz):
return False
@@ -2133,7 +2133,7 @@ def maybe_convert_numeric(
returns a boolean mask for the converted values, otherwise returns None.
"""
if len(values) == 0:
- return (np.array([], dtype='i8'), None)
+ return (np.array([], dtype="i8"), None)
# fastpath for ints - try to convert all based on first value
cdef:
@@ -2141,7 +2141,7 @@ def maybe_convert_numeric(
if util.is_integer_object(val):
try:
- maybe_ints = values.astype('i8')
+ maybe_ints = values.astype("i8")
if (maybe_ints == values).all():
return (maybe_ints, None)
except (ValueError, OverflowError, TypeError):
@@ -2231,7 +2231,7 @@ def maybe_convert_numeric(
mask[i] = 1
seen.saw_null()
floats[i] = complexes[i] = NaN
- elif hasattr(val, '__len__') and len(val) == 0:
+ elif hasattr(val, "__len__") and len(val) == 0:
if convert_empty or seen.coerce_numeric:
seen.saw_null()
floats[i] = complexes[i] = NaN
@@ -2469,7 +2469,7 @@ def maybe_convert_objects(ndarray[object] objects,
# if we have an tz's attached then return the objects
if convert_datetime:
- if getattr(val, 'tzinfo', None) is not None:
+ if getattr(val, "tzinfo", None) is not None:
seen.datetimetz_ = True
break
else:
@@ -2900,11 +2900,11 @@ def fast_multiget(dict mapping, ndarray keys, default=np.nan) -> np.ndarray:
cdef:
Py_ssize_t i, n = len(keys)
object val
- ndarray[object] output = np.empty(n, dtype='O')
+ ndarray[object] output = np.empty(n, dtype="O")
if n == 0:
# kludge, for Series
- return np.empty(0, dtype='f8')
+ return np.empty(0, dtype="f8")
for i in range(n):
val = keys[i]
diff --git a/pandas/_libs/ops.pyx b/pandas/_libs/ops.pyx
index 308756e378dde..478e7eaee90c1 100644
--- a/pandas/_libs/ops.pyx
+++ b/pandas/_libs/ops.pyx
@@ -66,7 +66,7 @@ def scalar_compare(object[:] values, object val, object op) -> ndarray:
elif op is operator.ne:
flag = Py_NE
else:
- raise ValueError('Unrecognized operator')
+ raise ValueError("Unrecognized operator")
result = np.empty(n, dtype=bool).view(np.uint8)
isnull_val = checknull(val)
@@ -134,7 +134,7 @@ def vec_compare(ndarray[object] left, ndarray[object] right, object op) -> ndarr
int flag
if n != <Py_ssize_t>len(right):
- raise ValueError(f'Arrays were different lengths: {n} vs {len(right)}')
+ raise ValueError(f"Arrays were different lengths: {n} vs {len(right)}")
if op is operator.lt:
flag = Py_LT
@@ -149,7 +149,7 @@ def vec_compare(ndarray[object] left, ndarray[object] right, object op) -> ndarr
elif op is operator.ne:
flag = Py_NE
else:
- raise ValueError('Unrecognized operator')
+ raise ValueError("Unrecognized operator")
result = np.empty(n, dtype=bool).view(np.uint8)
@@ -234,7 +234,7 @@ def vec_binop(object[:] left, object[:] right, object op) -> ndarray:
object[::1] result
if n != <Py_ssize_t>len(right):
- raise ValueError(f'Arrays were different lengths: {n} vs {len(right)}')
+ raise ValueError(f"Arrays were different lengths: {n} vs {len(right)}")
result = np.empty(n, dtype=object)
@@ -271,8 +271,8 @@ def maybe_convert_bool(ndarray[object] arr,
result = np.empty(n, dtype=np.uint8)
mask = np.zeros(n, dtype=np.uint8)
# the defaults
- true_vals = {'True', 'TRUE', 'true'}
- false_vals = {'False', 'FALSE', 'false'}
+ true_vals = {"True", "TRUE", "true"}
+ false_vals = {"False", "FALSE", "false"}
if true_values is not None:
true_vals = true_vals | set(true_values)
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 85d74e201d5bb..73005c7b5cfa0 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -342,7 +342,7 @@ cdef class TextReader:
set unnamed_cols # set[str]
def __cinit__(self, source,
- delimiter=b',', # bytes | str
+ delimiter=b",", # bytes | str
header=0,
int64_t header_start=0,
uint64_t header_end=0,
@@ -358,7 +358,7 @@ cdef class TextReader:
quoting=0, # int
lineterminator=None, # bytes | str
comment=None,
- decimal=b'.', # bytes | str
+ decimal=b".", # bytes | str
thousands=None, # bytes | str
dtype=None,
usecols=None,
@@ -403,7 +403,7 @@ cdef class TextReader:
self.parser.delim_whitespace = delim_whitespace
else:
if len(delimiter) > 1:
- raise ValueError('only length-1 separators excluded right now')
+ raise ValueError("only length-1 separators excluded right now")
self.parser.delimiter = <char>ord(delimiter)
# ----------------------------------------
@@ -415,26 +415,26 @@ cdef class TextReader:
if lineterminator is not None:
if len(lineterminator) != 1:
- raise ValueError('Only length-1 line terminators supported')
+ raise ValueError("Only length-1 line terminators supported")
self.parser.lineterminator = <char>ord(lineterminator)
if len(decimal) != 1:
- raise ValueError('Only length-1 decimal markers supported')
+ raise ValueError("Only length-1 decimal markers supported")
self.parser.decimal = <char>ord(decimal)
if thousands is not None:
if len(thousands) != 1:
- raise ValueError('Only length-1 thousands markers supported')
+ raise ValueError("Only length-1 thousands markers supported")
self.parser.thousands = <char>ord(thousands)
if escapechar is not None:
if len(escapechar) != 1:
- raise ValueError('Only length-1 escapes supported')
+ raise ValueError("Only length-1 escapes supported")
self.parser.escapechar = <char>ord(escapechar)
self._set_quoting(quotechar, quoting)
- dtype_order = ['int64', 'float64', 'bool', 'object']
+ dtype_order = ["int64", "float64", "bool", "object"]
if quoting == QUOTE_NONNUMERIC:
# consistent with csv module semantics, cast all to float
dtype_order = dtype_order[1:]
@@ -442,7 +442,7 @@ cdef class TextReader:
if comment is not None:
if len(comment) > 1:
- raise ValueError('Only length-1 comment characters supported')
+ raise ValueError("Only length-1 comment characters supported")
self.parser.commentchar = <char>ord(comment)
self.parser.on_bad_lines = on_bad_lines
@@ -491,8 +491,8 @@ cdef class TextReader:
elif float_precision == "high" or float_precision is None:
self.parser.double_converter = precise_xstrtod
else:
- raise ValueError(f'Unrecognized float_precision option: '
- f'{float_precision}')
+ raise ValueError(f"Unrecognized float_precision option: "
+ f"{float_precision}")
# Caller is responsible for ensuring we have one of
# - None
@@ -582,7 +582,7 @@ cdef class TextReader:
dtype = type(quote_char).__name__
raise TypeError(f'"quotechar" must be string, not {dtype}')
- if quote_char is None or quote_char == '':
+ if quote_char is None or quote_char == "":
if quoting != QUOTE_NONE:
raise TypeError("quotechar must be set if quoting enabled")
self.parser.quoting = quoting
@@ -647,11 +647,11 @@ cdef class TextReader:
self.parser.lines < hr):
msg = self.orig_header
if isinstance(msg, list):
- joined = ','.join(str(m) for m in msg)
+ joined = ",".join(str(m) for m in msg)
msg = f"[{joined}], len of {len(msg)},"
raise ParserError(
- f'Passed header={msg} but only '
- f'{self.parser.lines} lines in file')
+ f"Passed header={msg} but only "
+ f"{self.parser.lines} lines in file")
else:
field_count = self.parser.line_fields[hr]
@@ -666,11 +666,11 @@ cdef class TextReader:
name = PyUnicode_DecodeUTF8(word, strlen(word),
self.encoding_errors)
- if name == '':
+ if name == "":
if self.has_mi_columns:
- name = f'Unnamed: {i}_level_{level}'
+ name = f"Unnamed: {i}_level_{level}"
else:
- name = f'Unnamed: {i}'
+ name = f"Unnamed: {i}"
unnamed_count += 1
unnamed_col_indices.append(i)
@@ -693,7 +693,7 @@ cdef class TextReader:
if cur_count > 0:
while cur_count > 0:
counts[old_col] = cur_count + 1
- col = f'{old_col}.{cur_count}'
+ col = f"{old_col}.{cur_count}"
if col in this_header:
cur_count += 1
else:
@@ -779,8 +779,8 @@ cdef class TextReader:
elif self.names is None and nuse < passed_count:
self.leading_cols = field_count - passed_count
elif passed_count != field_count:
- raise ValueError('Number of passed names did not match number of '
- 'header fields in the file')
+ raise ValueError("Number of passed names did not match number of "
+ "header fields in the file")
# oh boy, #2442, #2981
elif self.allow_leading_cols and passed_count < field_count:
self.leading_cols = field_count - passed_count
@@ -854,7 +854,7 @@ cdef class TextReader:
self.parser.warn_msg = NULL
if status < 0:
- raise_parser_error('Error tokenizing data', self.parser)
+ raise_parser_error("Error tokenizing data", self.parser)
# -> dict[int, "ArrayLike"]
cdef _read_rows(self, rows, bint trim):
@@ -871,8 +871,8 @@ cdef class TextReader:
self._tokenize_rows(irows - buffered_lines)
if self.skipfooter > 0:
- raise ValueError('skipfooter can only be used to read '
- 'the whole file')
+ raise ValueError("skipfooter can only be used to read "
+ "the whole file")
else:
with nogil:
status = tokenize_all_rows(self.parser, self.encoding_errors)
@@ -885,15 +885,15 @@ cdef class TextReader:
self.parser.warn_msg = NULL
if status < 0:
- raise_parser_error('Error tokenizing data', self.parser)
+ raise_parser_error("Error tokenizing data", self.parser)
if self.parser_start >= self.parser.lines:
raise StopIteration
- self._end_clock('Tokenization')
+ self._end_clock("Tokenization")
self._start_clock()
columns = self._convert_column_data(rows)
- self._end_clock('Type conversion')
+ self._end_clock("Type conversion")
self._start_clock()
if len(columns) > 0:
rows_read = len(list(columns.values())[0])
@@ -903,7 +903,7 @@ cdef class TextReader:
parser_trim_buffers(self.parser)
self.parser_start -= rows_read
- self._end_clock('Parser memory cleanup')
+ self._end_clock("Parser memory cleanup")
return columns
@@ -913,7 +913,7 @@ cdef class TextReader:
cdef _end_clock(self, str what):
if self.verbose:
elapsed = time.time() - self.clocks.pop(-1)
- print(f'{what} took: {elapsed * 1000:.2f} ms')
+ print(f"{what} took: {elapsed * 1000:.2f} ms")
def set_noconvert(self, i: int) -> None:
self.noconvert.add(i)
@@ -1060,7 +1060,7 @@ cdef class TextReader:
)
if col_res is None:
- raise ParserError(f'Unable to parse column {i}')
+ raise ParserError(f"Unable to parse column {i}")
results[i] = col_res
@@ -1098,11 +1098,11 @@ cdef class TextReader:
# dtype successfully. As a result, we leave the data
# column AS IS with object dtype.
col_res, na_count = self._convert_with_dtype(
- np.dtype('object'), i, start, end, 0,
+ np.dtype("object"), i, start, end, 0,
0, na_hashset, na_flist)
except OverflowError:
col_res, na_count = self._convert_with_dtype(
- np.dtype('object'), i, start, end, na_filter,
+ np.dtype("object"), i, start, end, na_filter,
0, na_hashset, na_flist)
if col_res is not None:
@@ -1131,7 +1131,7 @@ cdef class TextReader:
# only allow safe casts, eg. with a nan you cannot safely cast to int
try:
- col_res = col_res.astype(col_dtype, casting='safe')
+ col_res = col_res.astype(col_dtype, casting="safe")
except TypeError:
# float -> int conversions can fail the above
@@ -1200,7 +1200,7 @@ cdef class TextReader:
na_filter, na_hashset)
na_count = 0
- if result is not None and dtype != 'int64':
+ if result is not None and dtype != "int64":
result = result.astype(dtype)
return result, na_count
@@ -1209,7 +1209,7 @@ cdef class TextReader:
result, na_count = _try_double(self.parser, i, start, end,
na_filter, na_hashset, na_flist)
- if result is not None and dtype != 'float64':
+ if result is not None and dtype != "float64":
result = result.astype(dtype)
return result, na_count
elif is_bool_dtype(dtype):
@@ -1221,7 +1221,7 @@ cdef class TextReader:
raise ValueError(f"Bool column has NA values in column {i}")
return result, na_count
- elif dtype.kind == 'S':
+ elif dtype.kind == "S":
# TODO: na handling
width = dtype.itemsize
if width > 0:
@@ -1231,7 +1231,7 @@ cdef class TextReader:
# treat as a regular string parsing
return self._string_convert(i, start, end, na_filter,
na_hashset)
- elif dtype.kind == 'U':
+ elif dtype.kind == "U":
width = dtype.itemsize
if width > 0:
raise TypeError(f"the dtype {dtype} is not supported for parsing")
@@ -1345,8 +1345,8 @@ cdef _close(TextReader reader):
cdef:
- object _true_values = [b'True', b'TRUE', b'true']
- object _false_values = [b'False', b'FALSE', b'false']
+ object _true_values = [b"True", b"TRUE", b"true"]
+ object _false_values = [b"False", b"FALSE", b"false"]
def _ensure_encoded(list lst):
@@ -1356,7 +1356,7 @@ def _ensure_encoded(list lst):
if isinstance(x, str):
x = PyUnicode_AsUTF8String(x)
elif not isinstance(x, bytes):
- x = str(x).encode('utf-8')
+ x = str(x).encode("utf-8")
result.append(x)
return result
@@ -1565,7 +1565,7 @@ cdef _to_fw_string(parser_t *parser, int64_t col, int64_t line_start,
char *data
ndarray result
- result = np.empty(line_end - line_start, dtype=f'|S{width}')
+ result = np.empty(line_end - line_start, dtype=f"|S{width}")
data = <char*>result.data
with nogil:
@@ -1591,13 +1591,13 @@ cdef inline void _to_fw_string_nogil(parser_t *parser, int64_t col,
cdef:
- char* cinf = b'inf'
- char* cposinf = b'+inf'
- char* cneginf = b'-inf'
+ char* cinf = b"inf"
+ char* cposinf = b"+inf"
+ char* cneginf = b"-inf"
- char* cinfty = b'Infinity'
- char* cposinfty = b'+Infinity'
- char* cneginfty = b'-Infinity'
+ char* cinfty = b"Infinity"
+ char* cposinfty = b"+Infinity"
+ char* cneginfty = b"-Infinity"
# -> tuple[ndarray[float64_t], int] | tuple[None, None]
@@ -1726,14 +1726,14 @@ cdef _try_uint64(parser_t *parser, int64_t col,
if error != 0:
if error == ERROR_OVERFLOW:
# Can't get the word variable
- raise OverflowError('Overflow')
+ raise OverflowError("Overflow")
return None
if uint64_conflict(&state):
- raise ValueError('Cannot convert to numerical dtype')
+ raise ValueError("Cannot convert to numerical dtype")
if state.seen_sint:
- raise OverflowError('Overflow')
+ raise OverflowError("Overflow")
return result
@@ -1796,7 +1796,7 @@ cdef _try_int64(parser_t *parser, int64_t col,
if error != 0:
if error == ERROR_OVERFLOW:
# Can't get the word variable
- raise OverflowError('Overflow')
+ raise OverflowError("Overflow")
return None, None
return result, na_count
@@ -1944,7 +1944,7 @@ cdef kh_str_starts_t* kset_from_list(list values) except NULL:
# None creeps in sometimes, which isn't possible here
if not isinstance(val, bytes):
kh_destroy_str_starts(table)
- raise ValueError('Must be all encoded bytes')
+ raise ValueError("Must be all encoded bytes")
kh_put_str_starts_item(table, PyBytes_AsString(val), &ret)
@@ -2009,11 +2009,11 @@ cdef raise_parser_error(object base, parser_t *parser):
Py_XDECREF(type)
raise old_exc
- message = f'{base}. C error: '
+ message = f"{base}. C error: "
if parser.error_msg != NULL:
- message += parser.error_msg.decode('utf-8')
+ message += parser.error_msg.decode("utf-8")
else:
- message += 'no error message set'
+ message += "no error message set"
raise ParserError(message)
@@ -2078,7 +2078,7 @@ cdef _apply_converter(object f, parser_t *parser, int64_t col,
cdef list _maybe_encode(list values):
if values is None:
return []
- return [x.encode('utf-8') if isinstance(x, str) else x for x in values]
+ return [x.encode("utf-8") if isinstance(x, str) else x for x in values]
def sanitize_objects(ndarray[object] values, set na_values) -> int:
diff --git a/pandas/_libs/properties.pyx b/pandas/_libs/properties.pyx
index 3354290a5f535..33cd2ef27a995 100644
--- a/pandas/_libs/properties.pyx
+++ b/pandas/_libs/properties.pyx
@@ -14,7 +14,7 @@ cdef class CachedProperty:
def __init__(self, fget):
self.fget = fget
self.name = fget.__name__
- self.__doc__ = getattr(fget, '__doc__', None)
+ self.__doc__ = getattr(fget, "__doc__", None)
def __get__(self, obj, typ):
if obj is None:
@@ -22,7 +22,7 @@ cdef class CachedProperty:
return self
# Get the cache or set a default one if needed
- cache = getattr(obj, '_cache', None)
+ cache = getattr(obj, "_cache", None)
if cache is None:
try:
cache = obj._cache = {}
diff --git a/pandas/_libs/reshape.pyx b/pandas/_libs/reshape.pyx
index a012bd92cd573..946ba5ddaa248 100644
--- a/pandas/_libs/reshape.pyx
+++ b/pandas/_libs/reshape.pyx
@@ -103,7 +103,7 @@ def explode(ndarray[object] values):
# find the resulting len
n = len(values)
- counts = np.zeros(n, dtype='int64')
+ counts = np.zeros(n, dtype="int64")
for i in range(n):
v = values[i]
@@ -116,7 +116,7 @@ def explode(ndarray[object] values):
else:
counts[i] += 1
- result = np.empty(counts.sum(), dtype='object')
+ result = np.empty(counts.sum(), dtype="object")
count = 0
for i in range(n):
v = values[i]
diff --git a/pandas/_libs/sparse.pyx b/pandas/_libs/sparse.pyx
index 031417fa50be0..45ddade7b4eb5 100644
--- a/pandas/_libs/sparse.pyx
+++ b/pandas/_libs/sparse.pyx
@@ -62,8 +62,8 @@ cdef class IntIndex(SparseIndex):
return IntIndex, args
def __repr__(self) -> str:
- output = 'IntIndex\n'
- output += f'Indices: {repr(self.indices)}\n'
+ output = "IntIndex\n"
+ output += f"Indices: {repr(self.indices)}\n"
return output
@property
@@ -134,7 +134,7 @@ cdef class IntIndex(SparseIndex):
y = y_.to_int_index()
if self.length != y.length:
- raise Exception('Indices must reference same underlying length')
+ raise Exception("Indices must reference same underlying length")
xindices = self.indices
yindices = y.indices
@@ -168,7 +168,7 @@ cdef class IntIndex(SparseIndex):
y = y_.to_int_index()
if self.length != y.length:
- raise ValueError('Indices must reference same underlying length')
+ raise ValueError("Indices must reference same underlying length")
new_indices = np.union1d(self.indices, y.indices)
return IntIndex(self.length, new_indices)
@@ -311,9 +311,9 @@ cdef class BlockIndex(SparseIndex):
return BlockIndex, args
def __repr__(self) -> str:
- output = 'BlockIndex\n'
- output += f'Block locations: {repr(self.blocs)}\n'
- output += f'Block lengths: {repr(self.blengths)}'
+ output = "BlockIndex\n"
+ output += f"Block locations: {repr(self.blocs)}\n"
+ output += f"Block lengths: {repr(self.blengths)}"
return output
@@ -340,23 +340,23 @@ cdef class BlockIndex(SparseIndex):
blengths = self.blengths
if len(blocs) != len(blengths):
- raise ValueError('block bound arrays must be same length')
+ raise ValueError("block bound arrays must be same length")
for i in range(self.nblocks):
if i > 0:
if blocs[i] <= blocs[i - 1]:
- raise ValueError('Locations not in ascending order')
+ raise ValueError("Locations not in ascending order")
if i < self.nblocks - 1:
if blocs[i] + blengths[i] > blocs[i + 1]:
- raise ValueError(f'Block {i} overlaps')
+ raise ValueError(f"Block {i} overlaps")
else:
if blocs[i] + blengths[i] > self.length:
- raise ValueError(f'Block {i} extends beyond end')
+ raise ValueError(f"Block {i} extends beyond end")
# no zero-length blocks
if blengths[i] == 0:
- raise ValueError(f'Zero-length block {i}')
+ raise ValueError(f"Zero-length block {i}")
def equals(self, other: object) -> bool:
if not isinstance(other, BlockIndex):
@@ -411,7 +411,7 @@ cdef class BlockIndex(SparseIndex):
y = other.to_block_index()
if self.length != y.length:
- raise Exception('Indices must reference same underlying length')
+ raise Exception("Indices must reference same underlying length")
xloc = self.blocs
xlen = self.blengths
@@ -565,7 +565,7 @@ cdef class BlockMerge:
self.y = y
if x.length != y.length:
- raise Exception('Indices must reference same underlying length')
+ raise Exception("Indices must reference same underlying length")
self.xstart = self.x.blocs
self.ystart = self.y.blocs
@@ -660,7 +660,7 @@ cdef class BlockUnion(BlockMerge):
int32_t xi, yi, ynblocks, nend
if mode != 0 and mode != 1:
- raise Exception('Mode must be 0 or 1')
+ raise Exception("Mode must be 0 or 1")
# so symmetric code will work
if mode == 0:
diff --git a/pandas/_libs/testing.pyx b/pandas/_libs/testing.pyx
index b7457f94f3447..733879154b9d6 100644
--- a/pandas/_libs/testing.pyx
+++ b/pandas/_libs/testing.pyx
@@ -21,15 +21,15 @@ from pandas.core.dtypes.missing import (
cdef bint isiterable(obj):
- return hasattr(obj, '__iter__')
+ return hasattr(obj, "__iter__")
cdef bint has_length(obj):
- return hasattr(obj, '__len__')
+ return hasattr(obj, "__len__")
cdef bint is_dictlike(obj):
- return hasattr(obj, 'keys') and hasattr(obj, '__getitem__')
+ return hasattr(obj, "keys") and hasattr(obj, "__getitem__")
cpdef assert_dict_equal(a, b, bint compare_keys=True):
@@ -91,7 +91,7 @@ cpdef assert_almost_equal(a, b,
Py_ssize_t i, na, nb
double fa, fb
bint is_unequal = False, a_is_ndarray, b_is_ndarray
- str first_diff = ''
+ str first_diff = ""
if lobj is None:
lobj = a
@@ -110,9 +110,9 @@ cpdef assert_almost_equal(a, b,
if obj is None:
if a_is_ndarray or b_is_ndarray:
- obj = 'numpy array'
+ obj = "numpy array"
else:
- obj = 'Iterable'
+ obj = "Iterable"
if isiterable(a):
@@ -131,11 +131,11 @@ cpdef assert_almost_equal(a, b,
if a.shape != b.shape:
from pandas._testing import raise_assert_detail
raise_assert_detail(
- obj, f'{obj} shapes are different', a.shape, b.shape)
+ obj, f"{obj} shapes are different", a.shape, b.shape)
if check_dtype and not is_dtype_equal(a.dtype, b.dtype):
from pandas._testing import assert_attr_equal
- assert_attr_equal('dtype', a, b, obj=obj)
+ assert_attr_equal("dtype", a, b, obj=obj)
if array_equivalent(a, b, strict_nan=True):
return True
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index 7fee48c0a5d1f..b78174483be51 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -86,9 +86,9 @@ def _test_parse_iso8601(ts: str):
obj = _TSObject()
- if ts == 'now':
+ if ts == "now":
return Timestamp.utcnow()
- elif ts == 'today':
+ elif ts == "today":
return Timestamp.now().normalize()
string_to_dts(ts, &obj.dts, &out_bestunit, &out_local, &out_tzoffset, True)
@@ -145,7 +145,7 @@ def format_array_from_datetime(
cnp.flatiter it = cnp.PyArray_IterNew(values)
if na_rep is None:
- na_rep = 'NaT'
+ na_rep = "NaT"
if tz is None:
# if we don't have a format nor tz, then choose
@@ -182,21 +182,21 @@ def format_array_from_datetime(
elif basic_format_day:
pandas_datetime_to_datetimestruct(val, reso, &dts)
- res = f'{dts.year}-{dts.month:02d}-{dts.day:02d}'
+ res = f"{dts.year}-{dts.month:02d}-{dts.day:02d}"
elif basic_format:
pandas_datetime_to_datetimestruct(val, reso, &dts)
- res = (f'{dts.year}-{dts.month:02d}-{dts.day:02d} '
- f'{dts.hour:02d}:{dts.min:02d}:{dts.sec:02d}')
+ res = (f"{dts.year}-{dts.month:02d}-{dts.day:02d} "
+ f"{dts.hour:02d}:{dts.min:02d}:{dts.sec:02d}")
if show_ns:
ns = dts.ps // 1000
- res += f'.{ns + dts.us * 1000:09d}'
+ res += f".{ns + dts.us * 1000:09d}"
elif show_us:
- res += f'.{dts.us:06d}'
+ res += f".{dts.us:06d}"
elif show_ms:
- res += f'.{dts.us // 1000:03d}'
+ res += f".{dts.us // 1000:03d}"
else:
@@ -266,9 +266,9 @@ def array_with_unit_to_datetime(
int64_t mult
int prec = 0
ndarray[float64_t] fvalues
- bint is_ignore = errors=='ignore'
- bint is_coerce = errors=='coerce'
- bint is_raise = errors=='raise'
+ bint is_ignore = errors=="ignore"
+ bint is_coerce = errors=="coerce"
+ bint is_raise = errors=="raise"
bint need_to_iterate = True
ndarray[int64_t] iresult
ndarray[object] oresult
@@ -324,8 +324,8 @@ def array_with_unit_to_datetime(
return result, tz
- result = np.empty(n, dtype='M8[ns]')
- iresult = result.view('i8')
+ result = np.empty(n, dtype="M8[ns]")
+ iresult = result.view("i8")
try:
for i in range(n):
@@ -442,7 +442,7 @@ def first_non_null(values: ndarray) -> int:
@cython.boundscheck(False)
cpdef array_to_datetime(
ndarray[object] values,
- str errors='raise',
+ str errors="raise",
bint dayfirst=False,
bint yearfirst=False,
bint utc=False,
@@ -494,9 +494,9 @@ cpdef array_to_datetime(
bint seen_integer = False
bint seen_datetime = False
bint seen_datetime_offset = False
- bint is_raise = errors=='raise'
- bint is_ignore = errors=='ignore'
- bint is_coerce = errors=='coerce'
+ bint is_raise = errors=="raise"
+ bint is_ignore = errors=="ignore"
+ bint is_coerce = errors=="coerce"
bint is_same_offsets
_TSObject _ts
int64_t value
@@ -511,8 +511,8 @@ cpdef array_to_datetime(
# specify error conditions
assert is_raise or is_ignore or is_coerce
- result = np.empty(n, dtype='M8[ns]')
- iresult = result.view('i8')
+ result = np.empty(n, dtype="M8[ns]")
+ iresult = result.view("i8")
try:
for i in range(n):
@@ -571,7 +571,7 @@ cpdef array_to_datetime(
# if we have previously (or in future accept
# datetimes/strings, then we must coerce)
try:
- iresult[i] = cast_from_unit(val, 'ns')
+ iresult[i] = cast_from_unit(val, "ns")
except OverflowError:
iresult[i] = NPY_NAT
@@ -632,7 +632,7 @@ cpdef array_to_datetime(
else:
# Add a marker for naive string, to track if we are
# parsing mixed naive and aware strings
- out_tzoffset_vals.add('naive')
+ out_tzoffset_vals.add("naive")
_ts = convert_datetime_to_tsobject(py_dt, None)
iresult[i] = _ts.value
@@ -653,7 +653,7 @@ cpdef array_to_datetime(
else:
# Add a marker for naive string, to track if we are
# parsing mixed naive and aware strings
- out_tzoffset_vals.add('naive')
+ out_tzoffset_vals.add("naive")
iresult[i] = value
check_dts_bounds(&dts)
@@ -791,9 +791,9 @@ cdef _array_to_datetime_object(
cdef:
Py_ssize_t i, n = len(values)
object val
- bint is_ignore = errors == 'ignore'
- bint is_coerce = errors == 'coerce'
- bint is_raise = errors == 'raise'
+ bint is_ignore = errors == "ignore"
+ bint is_coerce = errors == "coerce"
+ bint is_raise = errors == "raise"
ndarray[object] oresult
npy_datetimestruct dts
@@ -816,7 +816,7 @@ cdef _array_to_datetime_object(
val = str(val)
if len(val) == 0 or val in nat_strings:
- oresult[i] = 'NaT'
+ oresult[i] = "NaT"
continue
try:
oresult[i] = parse_datetime_string(val, dayfirst=dayfirst,
diff --git a/pandas/_libs/tslibs/ccalendar.pyx b/pandas/_libs/tslibs/ccalendar.pyx
index 00ee15b73f551..19c732e2a313b 100644
--- a/pandas/_libs/tslibs/ccalendar.pyx
+++ b/pandas/_libs/tslibs/ccalendar.pyx
@@ -29,21 +29,21 @@ cdef int32_t* month_offset = [
0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366]
# Canonical location for other modules to find name constants
-MONTHS = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL',
- 'AUG', 'SEP', 'OCT', 'NOV', 'DEC']
+MONTHS = ["JAN", "FEB", "MAR", "APR", "MAY", "JUN", "JUL",
+ "AUG", "SEP", "OCT", "NOV", "DEC"]
# The first blank line is consistent with calendar.month_name in the calendar
# standard library
-MONTHS_FULL = ['', 'January', 'February', 'March', 'April', 'May', 'June',
- 'July', 'August', 'September', 'October', 'November',
- 'December']
+MONTHS_FULL = ["", "January", "February", "March", "April", "May", "June",
+ "July", "August", "September", "October", "November",
+ "December"]
MONTH_NUMBERS = {name: num for num, name in enumerate(MONTHS)}
cdef dict c_MONTH_NUMBERS = MONTH_NUMBERS
MONTH_ALIASES = {(num + 1): name for num, name in enumerate(MONTHS)}
MONTH_TO_CAL_NUM = {name: num + 1 for num, name in enumerate(MONTHS)}
-DAYS = ['MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT', 'SUN']
-DAYS_FULL = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday',
- 'Saturday', 'Sunday']
+DAYS = ["MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"]
+DAYS_FULL = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
+ "Saturday", "Sunday"]
int_to_weekday = {num: name for num, name in enumerate(DAYS)}
weekday_to_int = {int_to_weekday[key]: key for key in int_to_weekday}
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index 17facf9e16f4b..1b6dace6e90b1 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -76,8 +76,8 @@ from pandas._libs.tslibs.tzconversion cimport (
# ----------------------------------------------------------------------
# Constants
-DT64NS_DTYPE = np.dtype('M8[ns]')
-TD64NS_DTYPE = np.dtype('m8[ns]')
+DT64NS_DTYPE = np.dtype("M8[ns]")
+TD64NS_DTYPE = np.dtype("m8[ns]")
# ----------------------------------------------------------------------
@@ -315,8 +315,8 @@ cdef _TSObject convert_to_tsobject(object ts, tzinfo tz, str unit,
if isinstance(ts, Period):
raise ValueError("Cannot convert Period to Timestamp "
"unambiguously. Use to_timestamp")
- raise TypeError(f'Cannot convert input [{ts}] of type {type(ts)} to '
- f'Timestamp')
+ raise TypeError(f"Cannot convert input [{ts}] of type {type(ts)} to "
+ f"Timestamp")
maybe_localize_tso(obj, tz, obj.creso)
return obj
@@ -497,11 +497,11 @@ cdef _TSObject _convert_str_to_tsobject(object ts, tzinfo tz, str unit,
obj.value = NPY_NAT
obj.tzinfo = tz
return obj
- elif ts == 'now':
+ elif ts == "now":
# Issue 9000, we short-circuit rather than going
# into np_datetime_strings which returns utc
dt = datetime.now(tz)
- elif ts == 'today':
+ elif ts == "today":
# Issue 9000, we short-circuit rather than going
# into np_datetime_strings which returns a normalized datetime
dt = datetime.now(tz)
@@ -702,19 +702,19 @@ cdef tzinfo convert_timezone(
if utc_convert:
pass
elif found_naive:
- raise ValueError('Tz-aware datetime.datetime '
- 'cannot be converted to '
- 'datetime64 unless utc=True')
+ raise ValueError("Tz-aware datetime.datetime "
+ "cannot be converted to "
+ "datetime64 unless utc=True")
elif tz_out is not None and not tz_compare(tz_out, tz_in):
- raise ValueError('Tz-aware datetime.datetime '
- 'cannot be converted to '
- 'datetime64 unless utc=True')
+ raise ValueError("Tz-aware datetime.datetime "
+ "cannot be converted to "
+ "datetime64 unless utc=True")
else:
tz_out = tz_in
else:
if found_tz and not utc_convert:
- raise ValueError('Cannot mix tz-aware with '
- 'tz-naive values')
+ raise ValueError("Cannot mix tz-aware with "
+ "tz-naive values")
return tz_out
diff --git a/pandas/_libs/tslibs/fields.pyx b/pandas/_libs/tslibs/fields.pyx
index dda26ad3bebc6..7e5d1d13cbda3 100644
--- a/pandas/_libs/tslibs/fields.pyx
+++ b/pandas/_libs/tslibs/fields.pyx
@@ -73,13 +73,13 @@ def build_field_sarray(const int64_t[:] dtindex, NPY_DATETIMEUNIT reso):
out = np.empty(count, dtype=sa_dtype)
- years = out['Y']
- months = out['M']
- days = out['D']
- hours = out['h']
- minutes = out['m']
- seconds = out['s']
- mus = out['u']
+ years = out["Y"]
+ months = out["M"]
+ days = out["D"]
+ hours = out["h"]
+ minutes = out["m"]
+ seconds = out["s"]
+ mus = out["u"]
for i in range(count):
pandas_datetime_to_datetimestruct(dtindex[i], reso, &dts)
@@ -154,11 +154,11 @@ def get_date_name_field(
out = np.empty(count, dtype=object)
- if field == 'day_name':
+ if field == "day_name":
if locale is None:
names = np.array(DAYS_FULL, dtype=np.object_)
else:
- names = np.array(_get_locale_names('f_weekday', locale),
+ names = np.array(_get_locale_names("f_weekday", locale),
dtype=np.object_)
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -169,11 +169,11 @@ def get_date_name_field(
dow = dayofweek(dts.year, dts.month, dts.day)
out[i] = names[dow].capitalize()
- elif field == 'month_name':
+ elif field == "month_name":
if locale is None:
names = np.array(MONTHS_FULL, dtype=np.object_)
else:
- names = np.array(_get_locale_names('f_month', locale),
+ names = np.array(_get_locale_names("f_month", locale),
dtype=np.object_)
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -237,20 +237,20 @@ def get_start_end_field(
npy_datetimestruct dts
int compare_month, modby
- out = np.zeros(count, dtype='int8')
+ out = np.zeros(count, dtype="int8")
if freqstr:
- if freqstr == 'C':
+ if freqstr == "C":
raise ValueError(f"Custom business days is not supported by {field}")
- is_business = freqstr[0] == 'B'
+ is_business = freqstr[0] == "B"
# YearBegin(), BYearBegin() use month = starting month of year.
# QuarterBegin(), BQuarterBegin() use startingMonth = starting
# month of year. Other offsets use month, startingMonth as ending
# month of year.
- if (freqstr[0:2] in ['MS', 'QS', 'AS']) or (
- freqstr[1:3] in ['MS', 'QS', 'AS']):
+ if (freqstr[0:2] in ["MS", "QS", "AS"]) or (
+ freqstr[1:3] in ["MS", "QS", "AS"]):
end_month = 12 if month_kw == 1 else month_kw - 1
start_month = month_kw
else:
@@ -339,9 +339,9 @@ def get_date_field(
ndarray[int32_t] out
npy_datetimestruct dts
- out = np.empty(count, dtype='i4')
+ out = np.empty(count, dtype="i4")
- if field == 'Y':
+ if field == "Y":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -352,7 +352,7 @@ def get_date_field(
out[i] = dts.year
return out
- elif field == 'M':
+ elif field == "M":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -363,7 +363,7 @@ def get_date_field(
out[i] = dts.month
return out
- elif field == 'D':
+ elif field == "D":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -374,7 +374,7 @@ def get_date_field(
out[i] = dts.day
return out
- elif field == 'h':
+ elif field == "h":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -386,7 +386,7 @@ def get_date_field(
# TODO: can we de-dup with period.pyx <accessor>s?
return out
- elif field == 'm':
+ elif field == "m":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -397,7 +397,7 @@ def get_date_field(
out[i] = dts.min
return out
- elif field == 's':
+ elif field == "s":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -408,7 +408,7 @@ def get_date_field(
out[i] = dts.sec
return out
- elif field == 'us':
+ elif field == "us":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -419,7 +419,7 @@ def get_date_field(
out[i] = dts.us
return out
- elif field == 'ns':
+ elif field == "ns":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -429,7 +429,7 @@ def get_date_field(
pandas_datetime_to_datetimestruct(dtindex[i], reso, &dts)
out[i] = dts.ps // 1000
return out
- elif field == 'doy':
+ elif field == "doy":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -440,7 +440,7 @@ def get_date_field(
out[i] = get_day_of_year(dts.year, dts.month, dts.day)
return out
- elif field == 'dow':
+ elif field == "dow":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -451,7 +451,7 @@ def get_date_field(
out[i] = dayofweek(dts.year, dts.month, dts.day)
return out
- elif field == 'woy':
+ elif field == "woy":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -462,7 +462,7 @@ def get_date_field(
out[i] = get_week_of_year(dts.year, dts.month, dts.day)
return out
- elif field == 'q':
+ elif field == "q":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -474,7 +474,7 @@ def get_date_field(
out[i] = ((out[i] - 1) // 3) + 1
return out
- elif field == 'dim':
+ elif field == "dim":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -484,8 +484,8 @@ def get_date_field(
pandas_datetime_to_datetimestruct(dtindex[i], reso, &dts)
out[i] = get_days_in_month(dts.year, dts.month)
return out
- elif field == 'is_leap_year':
- return isleapyear_arr(get_date_field(dtindex, 'Y', reso=reso))
+ elif field == "is_leap_year":
+ return isleapyear_arr(get_date_field(dtindex, "Y", reso=reso))
raise ValueError(f"Field {field} not supported")
@@ -506,9 +506,9 @@ def get_timedelta_field(
ndarray[int32_t] out
pandas_timedeltastruct tds
- out = np.empty(count, dtype='i4')
+ out = np.empty(count, dtype="i4")
- if field == 'days':
+ if field == "days":
with nogil:
for i in range(count):
if tdindex[i] == NPY_NAT:
@@ -519,7 +519,7 @@ def get_timedelta_field(
out[i] = tds.days
return out
- elif field == 'seconds':
+ elif field == "seconds":
with nogil:
for i in range(count):
if tdindex[i] == NPY_NAT:
@@ -530,7 +530,7 @@ def get_timedelta_field(
out[i] = tds.seconds
return out
- elif field == 'microseconds':
+ elif field == "microseconds":
with nogil:
for i in range(count):
if tdindex[i] == NPY_NAT:
@@ -541,7 +541,7 @@ def get_timedelta_field(
out[i] = tds.microseconds
return out
- elif field == 'nanoseconds':
+ elif field == "nanoseconds":
with nogil:
for i in range(count):
if tdindex[i] == NPY_NAT:
@@ -560,7 +560,7 @@ cpdef isleapyear_arr(ndarray years):
cdef:
ndarray[int8_t] out
- out = np.zeros(len(years), dtype='int8')
+ out = np.zeros(len(years), dtype="int8")
out[np.logical_or(years % 400 == 0,
np.logical_and(years % 4 == 0,
years % 100 > 0))] = 1
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index dcb7358d8e69a..1f18f8cae4ae8 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -259,7 +259,7 @@ cdef class _NaT(datetime):
"""
Return a numpy.datetime64 object with 'ns' precision.
"""
- return np.datetime64('NaT', "ns")
+ return np.datetime64("NaT", "ns")
def to_numpy(self, dtype=None, copy=False) -> np.datetime64 | np.timedelta64:
"""
diff --git a/pandas/_libs/tslibs/np_datetime.pyx b/pandas/_libs/tslibs/np_datetime.pyx
index d49c41e54764f..e5f683c56da9b 100644
--- a/pandas/_libs/tslibs/np_datetime.pyx
+++ b/pandas/_libs/tslibs/np_datetime.pyx
@@ -211,10 +211,10 @@ cdef check_dts_bounds(npy_datetimestruct *dts, NPY_DATETIMEUNIT unit=NPY_FR_ns):
error = True
if error:
- fmt = (f'{dts.year}-{dts.month:02d}-{dts.day:02d} '
- f'{dts.hour:02d}:{dts.min:02d}:{dts.sec:02d}')
+ fmt = (f"{dts.year}-{dts.month:02d}-{dts.day:02d} "
+ f"{dts.hour:02d}:{dts.min:02d}:{dts.sec:02d}")
# TODO: "nanosecond" in the message assumes NPY_FR_ns
- raise OutOfBoundsDatetime(f'Out of bounds nanosecond timestamp: {fmt}')
+ raise OutOfBoundsDatetime(f"Out of bounds nanosecond timestamp: {fmt}")
# ----------------------------------------------------------------------
@@ -289,7 +289,7 @@ cdef inline int string_to_dts(
buf = get_c_string_buf_and_size(val, &length)
if format is None:
- format_buf = b''
+ format_buf = b""
format_length = 0
exact = False
else:
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 4c6493652b216..97554556b0082 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -240,9 +240,9 @@ cdef _get_calendar(weekmask, holidays, calendar):
holidays = [_to_dt64D(dt) for dt in holidays]
holidays = tuple(sorted(holidays))
- kwargs = {'weekmask': weekmask}
+ kwargs = {"weekmask": weekmask}
if holidays:
- kwargs['holidays'] = holidays
+ kwargs["holidays"] = holidays
busdaycalendar = np.busdaycalendar(**kwargs)
return busdaycalendar, holidays
@@ -253,7 +253,7 @@ cdef _to_dt64D(dt):
# > np.datetime64(dt.datetime(2013,5,1),dtype='datetime64[D]')
# numpy.datetime64('2013-05-01T02:00:00.000000+0200')
# Thus astype is needed to cast datetime to datetime64[D]
- if getattr(dt, 'tzinfo', None) is not None:
+ if getattr(dt, "tzinfo", None) is not None:
# Get the nanosecond timestamp,
# equiv `Timestamp(dt).value` or `dt.timestamp() * 10**9`
# The `naive` must be the `dt` naive wall time
@@ -274,7 +274,7 @@ cdef _to_dt64D(dt):
cdef _validate_business_time(t_input):
if isinstance(t_input, str):
try:
- t = time.strptime(t_input, '%H:%M')
+ t = time.strptime(t_input, "%H:%M")
return dt_time(hour=t.tm_hour, minute=t.tm_min)
except ValueError:
raise ValueError("time data must match '%H:%M' format")
@@ -303,14 +303,14 @@ cdef _determine_offset(kwds):
# more, nanosecond(s) are handled by apply_wraps
kwds_no_nanos = dict(
(k, v) for k, v in kwds.items()
- if k not in ('nanosecond', 'nanoseconds')
+ if k not in ("nanosecond", "nanoseconds")
)
# TODO: Are nanosecond and nanoseconds allowed somewhere?
- _kwds_use_relativedelta = ('years', 'months', 'weeks', 'days',
- 'year', 'month', 'week', 'day', 'weekday',
- 'hour', 'minute', 'second', 'microsecond',
- 'millisecond')
+ _kwds_use_relativedelta = ("years", "months", "weeks", "days",
+ "year", "month", "week", "day", "weekday",
+ "hour", "minute", "second", "microsecond",
+ "millisecond")
use_relativedelta = False
if len(kwds_no_nanos) > 0:
@@ -327,7 +327,7 @@ cdef _determine_offset(kwds):
else:
# sub-daily offset - use timedelta (tz-aware)
offset = timedelta(**kwds_no_nanos)
- elif any(nano in kwds for nano in ('nanosecond', 'nanoseconds')):
+ elif any(nano in kwds for nano in ("nanosecond", "nanoseconds")):
offset = timedelta(days=0)
else:
# GH 45643/45890: (historically) defaults to 1 day for non-nano
@@ -424,11 +424,11 @@ cdef class BaseOffset:
# cython attributes are not in __dict__
all_paras[attr] = getattr(self, attr)
- if 'holidays' in all_paras and not all_paras['holidays']:
- all_paras.pop('holidays')
- exclude = ['kwds', 'name', 'calendar']
+ if "holidays" in all_paras and not all_paras["holidays"]:
+ all_paras.pop("holidays")
+ exclude = ["kwds", "name", "calendar"]
attrs = [(k, v) for k, v in all_paras.items()
- if (k not in exclude) and (k[0] != '_')]
+ if (k not in exclude) and (k[0] != "_")]
attrs = sorted(set(attrs))
params = tuple([str(type(self))] + attrs)
return params
@@ -481,7 +481,7 @@ cdef class BaseOffset:
def __sub__(self, other):
if PyDateTime_Check(other):
- raise TypeError('Cannot subtract datetime from offset.')
+ raise TypeError("Cannot subtract datetime from offset.")
elif type(other) == type(self):
return type(self)(self.n - other.n, normalize=self.normalize,
**self.kwds)
@@ -736,13 +736,13 @@ cdef class BaseOffset:
ValueError if n != int(n)
"""
if util.is_timedelta64_object(n):
- raise TypeError(f'`n` argument must be an integer, got {type(n)}')
+ raise TypeError(f"`n` argument must be an integer, got {type(n)}")
try:
nint = int(n)
except (ValueError, TypeError):
- raise TypeError(f'`n` argument must be an integer, got {type(n)}')
+ raise TypeError(f"`n` argument must be an integer, got {type(n)}")
if n != nint:
- raise ValueError(f'`n` argument must be an integer, got {n}')
+ raise ValueError(f"`n` argument must be an integer, got {n}")
return nint
def __setstate__(self, state):
@@ -1700,7 +1700,7 @@ cdef class BusinessHour(BusinessMixin):
out = super()._repr_attrs()
# Use python string formatting to be faster than strftime
hours = ",".join(
- f'{st.hour:02d}:{st.minute:02d}-{en.hour:02d}:{en.minute:02d}'
+ f"{st.hour:02d}:{st.minute:02d}-{en.hour:02d}:{en.minute:02d}"
for st, en in zip(self.start, self.end)
)
attrs = [f"{self._prefix}={hours}"]
@@ -3675,7 +3675,7 @@ cdef class _CustomBusinessMonth(BusinessMixin):
Define default roll function to be called in apply method.
"""
cbday_kwds = self.kwds.copy()
- cbday_kwds['offset'] = timedelta(0)
+ cbday_kwds["offset"] = timedelta(0)
cbday = CustomBusinessDay(n=1, normalize=False, **cbday_kwds)
diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 232169f3844b3..25a2722c48bd6 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -94,7 +94,7 @@ PARSING_WARNING_MSG = (
)
cdef:
- set _not_datelike_strings = {'a', 'A', 'm', 'M', 'p', 'P', 't', 'T'}
+ set _not_datelike_strings = {"a", "A", "m", "M", "p", "P", "t", "T"}
# ----------------------------------------------------------------------
cdef:
@@ -165,38 +165,38 @@ cdef inline object _parse_delimited_date(str date_string, bint dayfirst):
month = _parse_2digit(buf)
day = _parse_2digit(buf + 3)
year = _parse_4digit(buf + 6)
- reso = 'day'
+ reso = "day"
can_swap = 1
elif length == 9 and _is_delimiter(buf[1]) and _is_delimiter(buf[4]):
# parsing M?DD?YYYY and D?MM?YYYY dates
month = _parse_1digit(buf)
day = _parse_2digit(buf + 2)
year = _parse_4digit(buf + 5)
- reso = 'day'
+ reso = "day"
can_swap = 1
elif length == 9 and _is_delimiter(buf[2]) and _is_delimiter(buf[4]):
# parsing MM?D?YYYY and DD?M?YYYY dates
month = _parse_2digit(buf)
day = _parse_1digit(buf + 3)
year = _parse_4digit(buf + 5)
- reso = 'day'
+ reso = "day"
can_swap = 1
elif length == 8 and _is_delimiter(buf[1]) and _is_delimiter(buf[3]):
# parsing M?D?YYYY and D?M?YYYY dates
month = _parse_1digit(buf)
day = _parse_1digit(buf + 2)
year = _parse_4digit(buf + 4)
- reso = 'day'
+ reso = "day"
can_swap = 1
elif length == 7 and _is_delimiter(buf[2]):
# parsing MM?YYYY dates
- if buf[2] == b'.':
+ if buf[2] == b".":
# we cannot reliably tell whether e.g. 10.2010 is a float
# or a date, thus we refuse to parse it here
return None, None
month = _parse_2digit(buf)
year = _parse_4digit(buf + 3)
- reso = 'month'
+ reso = "month"
else:
return None, None
@@ -214,16 +214,16 @@ cdef inline object _parse_delimited_date(str date_string, bint dayfirst):
if dayfirst and not swapped_day_and_month:
warnings.warn(
PARSING_WARNING_MSG.format(
- format='MM/DD/YYYY',
- dayfirst='True',
+ format="MM/DD/YYYY",
+ dayfirst="True",
),
stacklevel=find_stack_level(),
)
elif not dayfirst and swapped_day_and_month:
warnings.warn(
PARSING_WARNING_MSG.format(
- format='DD/MM/YYYY',
- dayfirst='False (the default)',
+ format="DD/MM/YYYY",
+ dayfirst="False (the default)",
),
stacklevel=find_stack_level(),
)
@@ -255,11 +255,11 @@ cdef inline bint does_string_look_like_time(str parse_string):
buf = get_c_string_buf_and_size(parse_string, &length)
if length >= 4:
- if buf[1] == b':':
+ if buf[1] == b":":
# h:MM format
hour = getdigit_ascii(buf[0], -1)
minute = _parse_2digit(buf + 2)
- elif buf[2] == b':':
+ elif buf[2] == b":":
# HH:MM format
hour = _parse_2digit(buf)
minute = _parse_2digit(buf + 3)
@@ -289,7 +289,7 @@ def parse_datetime_string(
datetime dt
if not _does_string_look_like_datetime(date_string):
- raise ValueError(f'Given date string {date_string} not likely a datetime')
+ raise ValueError(f"Given date string {date_string} not likely a datetime")
if does_string_look_like_time(date_string):
# use current datetime as default, not pass _DEFAULT_DATETIME
@@ -323,7 +323,7 @@ def parse_datetime_string(
except TypeError:
# following may be raised from dateutil
# TypeError: 'NoneType' object is not iterable
- raise ValueError(f'Given date string {date_string} not likely a datetime')
+ raise ValueError(f"Given date string {date_string} not likely a datetime")
return dt
@@ -399,7 +399,7 @@ cdef parse_datetime_string_with_reso(
int out_tzoffset
if not _does_string_look_like_datetime(date_string):
- raise ValueError(f'Given date string {date_string} not likely a datetime')
+ raise ValueError(f"Given date string {date_string} not likely a datetime")
parsed, reso = _parse_delimited_date(date_string, dayfirst)
if parsed is not None:
@@ -478,7 +478,7 @@ cpdef bint _does_string_look_like_datetime(str py_string):
buf = get_c_string_buf_and_size(py_string, &length)
if length >= 1:
first = buf[0]
- if first == b'0':
+ if first == b"0":
# Strings starting with 0 are more consistent with a
# date-like string than a number
return True
@@ -492,7 +492,7 @@ cpdef bint _does_string_look_like_datetime(str py_string):
# a float number can be used, b'\0' - not to use a thousand
# separator, 1 - skip extra spaces before and after,
converted_date = xstrtod(buf, &endptr,
- b'.', b'e', b'\0', 1, &error, NULL)
+ b".", b"e", b"\0", 1, &error, NULL)
# if there were no errors and the whole line was parsed, then ...
if error == 0 and endptr == buf + length:
return converted_date >= 1000
@@ -512,7 +512,7 @@ cdef inline object _parse_dateabbr_string(object date_string, datetime default,
assert isinstance(date_string, str)
if date_string in nat_strings:
- return NaT, ''
+ return NaT, ""
date_string = date_string.upper()
date_len = len(date_string)
@@ -521,21 +521,21 @@ cdef inline object _parse_dateabbr_string(object date_string, datetime default,
# parse year only like 2000
try:
ret = default.replace(year=int(date_string))
- return ret, 'year'
+ return ret, "year"
except ValueError:
pass
try:
if 4 <= date_len <= 7:
- i = date_string.index('Q', 1, 6)
+ i = date_string.index("Q", 1, 6)
if i == 1:
quarter = int(date_string[0])
if date_len == 4 or (date_len == 5
- and date_string[i + 1] == '-'):
+ and date_string[i + 1] == "-"):
# r'(\d)Q-?(\d\d)')
year = 2000 + int(date_string[-2:])
elif date_len == 6 or (date_len == 7
- and date_string[i + 1] == '-'):
+ and date_string[i + 1] == "-"):
# r'(\d)Q-?(\d\d\d\d)')
year = int(date_string[-4:])
else:
@@ -543,14 +543,14 @@ cdef inline object _parse_dateabbr_string(object date_string, datetime default,
elif i == 2 or i == 3:
# r'(\d\d)-?Q(\d)'
if date_len == 4 or (date_len == 5
- and date_string[i - 1] == '-'):
+ and date_string[i - 1] == "-"):
quarter = int(date_string[-1])
year = 2000 + int(date_string[:2])
else:
raise ValueError
elif i == 4 or i == 5:
if date_len == 6 or (date_len == 7
- and date_string[i - 1] == '-'):
+ and date_string[i - 1] == "-"):
# r'(\d\d\d\d)-?Q(\d)'
quarter = int(date_string[-1])
year = int(date_string[:4])
@@ -558,9 +558,9 @@ cdef inline object _parse_dateabbr_string(object date_string, datetime default,
raise ValueError
if not (1 <= quarter <= 4):
- raise DateParseError(f'Incorrect quarterly string is given, '
- f'quarter must be '
- f'between 1 and 4: {date_string}')
+ raise DateParseError(f"Incorrect quarterly string is given, "
+ f"quarter must be "
+ f"between 1 and 4: {date_string}")
try:
# GH#1228
@@ -571,30 +571,30 @@ cdef inline object _parse_dateabbr_string(object date_string, datetime default,
f"freq: {freq}")
ret = default.replace(year=year, month=month)
- return ret, 'quarter'
+ return ret, "quarter"
except DateParseError:
raise
except ValueError:
pass
- if date_len == 6 and freq == 'M':
+ if date_len == 6 and freq == "M":
year = int(date_string[:4])
month = int(date_string[4:6])
try:
ret = default.replace(year=year, month=month)
- return ret, 'month'
+ return ret, "month"
except ValueError:
pass
- for pat in ['%Y-%m', '%b %Y', '%b-%Y']:
+ for pat in ["%Y-%m", "%b %Y", "%b-%Y"]:
try:
ret = datetime.strptime(date_string, pat)
- return ret, 'month'
+ return ret, "month"
except ValueError:
pass
- raise ValueError(f'Unable to parse {date_string}')
+ raise ValueError(f"Unable to parse {date_string}")
cpdef quarter_to_myear(int year, int quarter, str freq):
@@ -664,11 +664,11 @@ cdef dateutil_parse(
if reso is None:
raise ValueError(f"Unable to parse datetime string: {timestr}")
- if reso == 'microsecond':
- if repl['microsecond'] == 0:
- reso = 'second'
- elif repl['microsecond'] % 1000 == 0:
- reso = 'millisecond'
+ if reso == "microsecond":
+ if repl["microsecond"] == 0:
+ reso = "second"
+ elif repl["microsecond"] % 1000 == 0:
+ reso = "millisecond"
ret = default.replace(**repl)
if res.weekday is not None and not res.day:
@@ -712,7 +712,7 @@ def try_parse_dates(
object[::1] result
n = len(values)
- result = np.empty(n, dtype='O')
+ result = np.empty(n, dtype="O")
if parser is None:
if default is None: # GH2618
@@ -725,7 +725,7 @@ def try_parse_dates(
# EAFP here
try:
for i in range(n):
- if values[i] == '':
+ if values[i] == "":
result[i] = np.nan
else:
result[i] = parse_date(values[i])
@@ -736,7 +736,7 @@ def try_parse_dates(
parse_date = parser
for i in range(n):
- if values[i] == '':
+ if values[i] == "":
result[i] = np.nan
else:
result[i] = parse_date(values[i])
@@ -754,8 +754,8 @@ def try_parse_year_month_day(
n = len(years)
# TODO(cython3): Use len instead of `shape[0]`
if months.shape[0] != n or days.shape[0] != n:
- raise ValueError('Length of years/months/days must all be equal')
- result = np.empty(n, dtype='O')
+ raise ValueError("Length of years/months/days must all be equal")
+ result = np.empty(n, dtype="O")
for i in range(n):
result[i] = datetime(int(years[i]), int(months[i]), int(days[i]))
@@ -786,8 +786,8 @@ def try_parse_datetime_components(object[:] years,
or minutes.shape[0] != n
or seconds.shape[0] != n
):
- raise ValueError('Length of all datetime components must be equal')
- result = np.empty(n, dtype='O')
+ raise ValueError("Length of all datetime components must be equal")
+ result = np.empty(n, dtype="O")
for i in range(n):
float_secs = float(seconds[i])
@@ -818,15 +818,15 @@ def try_parse_datetime_components(object[:] years,
# Copyright (c) 2017 - dateutil contributors
class _timelex:
def __init__(self, instream):
- if getattr(instream, 'decode', None) is not None:
+ if getattr(instream, "decode", None) is not None:
instream = instream.decode()
if isinstance(instream, str):
self.stream = instream
- elif getattr(instream, 'read', None) is None:
+ elif getattr(instream, "read", None) is None:
raise TypeError(
- 'Parser must be a string or character stream, not '
- f'{type(instream).__name__}')
+ "Parser must be a string or character stream, not "
+ f"{type(instream).__name__}")
else:
self.stream = instream.read()
@@ -846,7 +846,7 @@ class _timelex:
cdef:
Py_ssize_t n
- stream = self.stream.replace('\x00', '')
+ stream = self.stream.replace("\x00", "")
# TODO: Change \s --> \s+ (this doesn't match existing behavior)
# TODO: change the punctuation block to punc+ (does not match existing)
@@ -865,10 +865,10 @@ class _timelex:
# Kludge to match ,-decimal behavior; it'd be better to do this
# later in the process and have a simpler tokenization
if (token is not None and token.isdigit() and
- tokens[n + 1] == ',' and tokens[n + 2].isdigit()):
+ tokens[n + 1] == "," and tokens[n + 2].isdigit()):
# Have to check None b/c it might be replaced during the loop
# TODO: I _really_ don't faking the value here
- tokens[n] = token + '.' + tokens[n + 2]
+ tokens[n] = token + "." + tokens[n + 2]
tokens[n + 1] = None
tokens[n + 2] = None
@@ -889,12 +889,12 @@ def format_is_iso(f: str) -> bint:
Generally of form YYYY-MM-DDTHH:MM:SS - date separator can be different
but must be consistent. Leading 0s in dates and times are optional.
"""
- iso_template = '%Y{date_sep}%m{date_sep}%d{time_sep}%H:%M:%S{micro_or_tz}'.format
- excluded_formats = ['%Y%m%d', '%Y%m', '%Y']
+ iso_template = "%Y{date_sep}%m{date_sep}%d{time_sep}%H:%M:%S{micro_or_tz}".format
+ excluded_formats = ["%Y%m%d", "%Y%m", "%Y"]
- for date_sep in [' ', '/', '\\', '-', '.', '']:
- for time_sep in [' ', 'T']:
- for micro_or_tz in ['', '%z', '.%f', '.%f%z']:
+ for date_sep in [" ", "/", "\\", "-", ".", ""]:
+ for time_sep in [" ", "T"]:
+ for micro_or_tz in ["", "%z", ".%f", ".%f%z"]:
if (iso_template(date_sep=date_sep,
time_sep=time_sep,
micro_or_tz=micro_or_tz,
@@ -922,25 +922,25 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
datetime format string (for `strftime` or `strptime`),
or None if it can't be guessed.
"""
- day_attribute_and_format = (('day',), '%d', 2)
+ day_attribute_and_format = (("day",), "%d", 2)
# attr name, format, padding (if any)
datetime_attrs_to_format = [
- (('year', 'month', 'day'), '%Y%m%d', 0),
- (('year',), '%Y', 0),
- (('month',), '%B', 0),
- (('month',), '%b', 0),
- (('month',), '%m', 2),
+ (("year", "month", "day"), "%Y%m%d", 0),
+ (("year",), "%Y", 0),
+ (("month",), "%B", 0),
+ (("month",), "%b", 0),
+ (("month",), "%m", 2),
day_attribute_and_format,
- (('hour',), '%H', 2),
- (('minute',), '%M', 2),
- (('second',), '%S', 2),
- (('second', 'microsecond'), '%S.%f', 0),
- (('tzinfo',), '%z', 0),
- (('tzinfo',), '%Z', 0),
- (('day_of_week',), '%a', 0),
- (('day_of_week',), '%A', 0),
- (('meridiem',), '%p', 0),
+ (("hour",), "%H", 2),
+ (("minute",), "%M", 2),
+ (("second",), "%S", 2),
+ (("second", "microsecond"), "%S.%f", 0),
+ (("tzinfo",), "%z", 0),
+ (("tzinfo",), "%Z", 0),
+ (("day_of_week",), "%a", 0),
+ (("day_of_week",), "%A", 0),
+ (("meridiem",), "%p", 0),
]
if dayfirst:
@@ -967,13 +967,13 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
# instead of ‘+09:00’.
if parsed_datetime.tzinfo is not None:
offset_index = None
- if len(tokens) > 0 and tokens[-1] == 'Z':
+ if len(tokens) > 0 and tokens[-1] == "Z":
# the last 'Z' means zero offset
offset_index = -1
- elif len(tokens) > 1 and tokens[-2] in ('+', '-'):
+ elif len(tokens) > 1 and tokens[-2] in ("+", "-"):
# ex. [..., '+', '0900']
offset_index = -2
- elif len(tokens) > 3 and tokens[-4] in ('+', '-'):
+ elif len(tokens) > 3 and tokens[-4] in ("+", "-"):
# ex. [..., '+', '09', ':', '00']
offset_index = -4
@@ -1017,10 +1017,10 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
# We make exceptions for %Y and %Y-%m (only with the `-` separator)
# as they conform with ISO8601.
if (
- len({'year', 'month', 'day'} & found_attrs) != 3
- and format_guess != ['%Y']
+ len({"year", "month", "day"} & found_attrs) != 3
+ and format_guess != ["%Y"]
and not (
- format_guess == ['%Y', None, '%m'] and tokens[1] == '-'
+ format_guess == ["%Y", None, "%m"] and tokens[1] == "-"
)
):
return None
@@ -1042,7 +1042,7 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
output_format.append(tokens[i])
- guessed_format = ''.join(output_format)
+ guessed_format = "".join(output_format)
try:
array_strptime(np.asarray([dt_str], dtype=object), guessed_format)
@@ -1050,7 +1050,7 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
# Doesn't parse, so this can't be the correct format.
return None
# rebuild string, capturing any inferred padding
- dt_str = ''.join(tokens)
+ dt_str = "".join(tokens)
if parsed_datetime.strftime(guessed_format) == dt_str:
return guessed_format
else:
@@ -1059,16 +1059,16 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
cdef str _fill_token(token: str, padding: int):
cdef str token_filled
- if '.' not in token:
+ if "." not in token:
token_filled = token.zfill(padding)
else:
- seconds, nanoseconds = token.split('.')
- seconds = f'{int(seconds):02d}'
+ seconds, nanoseconds = token.split(".")
+ seconds = f"{int(seconds):02d}"
# right-pad so we get nanoseconds, then only take
# first 6 digits (microseconds) as stdlib datetime
# doesn't support nanoseconds
- nanoseconds = nanoseconds.ljust(9, '0')[:6]
- token_filled = f'{seconds}.{nanoseconds}'
+ nanoseconds = nanoseconds.ljust(9, "0")[:6]
+ token_filled = f"{seconds}.{nanoseconds}"
return token_filled
diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 0e7cfa4dd9670..cc9c2d631bcd9 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1163,29 +1163,29 @@ cdef str period_format(int64_t value, int freq, object fmt=None):
if fmt is None:
freq_group = get_freq_group(freq)
if freq_group == FR_ANN:
- fmt = b'%Y'
+ fmt = b"%Y"
elif freq_group == FR_QTR:
- fmt = b'%FQ%q'
+ fmt = b"%FQ%q"
elif freq_group == FR_MTH:
- fmt = b'%Y-%m'
+ fmt = b"%Y-%m"
elif freq_group == FR_WK:
left = period_asfreq(value, freq, FR_DAY, 0)
right = period_asfreq(value, freq, FR_DAY, 1)
return f"{period_format(left, FR_DAY)}/{period_format(right, FR_DAY)}"
elif freq_group == FR_BUS or freq_group == FR_DAY:
- fmt = b'%Y-%m-%d'
+ fmt = b"%Y-%m-%d"
elif freq_group == FR_HR:
- fmt = b'%Y-%m-%d %H:00'
+ fmt = b"%Y-%m-%d %H:00"
elif freq_group == FR_MIN:
- fmt = b'%Y-%m-%d %H:%M'
+ fmt = b"%Y-%m-%d %H:%M"
elif freq_group == FR_SEC:
- fmt = b'%Y-%m-%d %H:%M:%S'
+ fmt = b"%Y-%m-%d %H:%M:%S"
elif freq_group == FR_MS:
- fmt = b'%Y-%m-%d %H:%M:%S.%l'
+ fmt = b"%Y-%m-%d %H:%M:%S.%l"
elif freq_group == FR_US:
- fmt = b'%Y-%m-%d %H:%M:%S.%u'
+ fmt = b"%Y-%m-%d %H:%M:%S.%u"
elif freq_group == FR_NS:
- fmt = b'%Y-%m-%d %H:%M:%S.%n'
+ fmt = b"%Y-%m-%d %H:%M:%S.%n"
else:
raise ValueError(f"Unknown freq: {freq}")
@@ -1513,7 +1513,7 @@ def extract_freq(ndarray[object] values) -> BaseOffset:
if is_period_object(value):
return value.freq
- raise ValueError('freq not specified and cannot be inferred')
+ raise ValueError("freq not specified and cannot be inferred")
# -----------------------------------------------------------------------
# period helpers
@@ -1774,7 +1774,7 @@ cdef class _Period(PeriodMixin):
return NaT
return NotImplemented
- def asfreq(self, freq, how='E') -> "Period":
+ def asfreq(self, freq, how="E") -> "Period":
"""
Convert Period to desired frequency, at the start or end of the interval.
@@ -1795,7 +1795,7 @@ cdef class _Period(PeriodMixin):
base2 = freq_to_dtype_code(freq)
# self.n can't be negative or 0
- end = how == 'E'
+ end = how == "E"
if end:
ordinal = self.ordinal + self.freq.n - 1
else:
@@ -1826,13 +1826,13 @@ cdef class _Period(PeriodMixin):
"""
how = validate_end_alias(how)
- end = how == 'E'
+ end = how == "E"
if end:
if freq == "B" or self.freq == "B":
# roll forward to ensure we land on B date
adjust = np.timedelta64(1, "D") - np.timedelta64(1, "ns")
return self.to_timestamp(how="start") + adjust
- endpoint = (self + self.freq).to_timestamp(how='start')
+ endpoint = (self + self.freq).to_timestamp(how="start")
return endpoint - np.timedelta64(1, "ns")
if freq is None:
@@ -2530,7 +2530,7 @@ class Period(_Period):
if not util.is_integer_object(ordinal):
raise ValueError("Ordinal must be an integer")
if freq is None:
- raise ValueError('Must supply freq for ordinal value')
+ raise ValueError("Must supply freq for ordinal value")
elif value is None:
if (year is None and month is None and
@@ -2581,7 +2581,7 @@ class Period(_Period):
else:
nanosecond = ts.nanosecond
if nanosecond != 0:
- reso = 'nanosecond'
+ reso = "nanosecond"
if dt is NaT:
ordinal = NPY_NAT
@@ -2596,18 +2596,18 @@ class Period(_Period):
elif PyDateTime_Check(value):
dt = value
if freq is None:
- raise ValueError('Must supply freq for datetime value')
+ raise ValueError("Must supply freq for datetime value")
if isinstance(dt, Timestamp):
nanosecond = dt.nanosecond
elif util.is_datetime64_object(value):
dt = Timestamp(value)
if freq is None:
- raise ValueError('Must supply freq for datetime value')
+ raise ValueError("Must supply freq for datetime value")
nanosecond = dt.nanosecond
elif PyDate_Check(value):
dt = datetime(year=value.year, month=value.month, day=value.day)
if freq is None:
- raise ValueError('Must supply freq for datetime value')
+ raise ValueError("Must supply freq for datetime value")
else:
msg = "Value must be Period, string, integer, or datetime"
raise ValueError(msg)
@@ -2644,10 +2644,10 @@ cdef int64_t _ordinal_from_fields(int year, int month, quarter, int day,
def validate_end_alias(how: str) -> str: # Literal["E", "S"]
- how_dict = {'S': 'S', 'E': 'E',
- 'START': 'S', 'FINISH': 'E',
- 'BEGIN': 'S', 'END': 'E'}
+ how_dict = {"S": "S", "E": "E",
+ "START": "S", "FINISH": "E",
+ "BEGIN": "S", "END": "E"}
how = how_dict.get(str(how).upper())
- if how not in {'S', 'E'}:
- raise ValueError('How must be one of S or E')
+ if how not in {"S", "E"}:
+ raise ValueError("How must be one of S or E")
return how
diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx
index 79944bc86a8cf..c56b4891da428 100644
--- a/pandas/_libs/tslibs/strptime.pyx
+++ b/pandas/_libs/tslibs/strptime.pyx
@@ -35,36 +35,36 @@ from pandas._libs.tslibs.np_datetime cimport (
from pandas._libs.tslibs.timestamps cimport _Timestamp
-cdef dict _parse_code_table = {'y': 0,
- 'Y': 1,
- 'm': 2,
- 'B': 3,
- 'b': 4,
- 'd': 5,
- 'H': 6,
- 'I': 7,
- 'M': 8,
- 'S': 9,
- 'f': 10,
- 'A': 11,
- 'a': 12,
- 'w': 13,
- 'j': 14,
- 'U': 15,
- 'W': 16,
- 'Z': 17,
- 'p': 18, # an additional key, only with I
- 'z': 19,
- 'G': 20,
- 'V': 21,
- 'u': 22}
+cdef dict _parse_code_table = {"y": 0,
+ "Y": 1,
+ "m": 2,
+ "B": 3,
+ "b": 4,
+ "d": 5,
+ "H": 6,
+ "I": 7,
+ "M": 8,
+ "S": 9,
+ "f": 10,
+ "A": 11,
+ "a": 12,
+ "w": 13,
+ "j": 14,
+ "U": 15,
+ "W": 16,
+ "Z": 17,
+ "p": 18, # an additional key, only with I
+ "z": 19,
+ "G": 20,
+ "V": 21,
+ "u": 22}
def array_strptime(
ndarray[object] values,
str fmt,
bint exact=True,
- errors='raise',
+ errors="raise",
bint utc=False,
):
"""
@@ -88,9 +88,9 @@ def array_strptime(
int iso_week, iso_year
int64_t us, ns
object val, group_key, ampm, found, timezone
- bint is_raise = errors=='raise'
- bint is_ignore = errors=='ignore'
- bint is_coerce = errors=='coerce'
+ bint is_raise = errors=="raise"
+ bint is_ignore = errors=="ignore"
+ bint is_coerce = errors=="coerce"
bint found_naive = False
bint found_tz = False
tzinfo tz_out = None
@@ -98,12 +98,12 @@ def array_strptime(
assert is_raise or is_ignore or is_coerce
if fmt is not None:
- if '%W' in fmt or '%U' in fmt:
- if '%Y' not in fmt and '%y' not in fmt:
+ if "%W" in fmt or "%U" in fmt:
+ if "%Y" not in fmt and "%y" not in fmt:
raise ValueError("Cannot use '%W' or '%U' without day and year")
- if '%A' not in fmt and '%a' not in fmt and '%w' not in fmt:
+ if "%A" not in fmt and "%a" not in fmt and "%w" not in fmt:
raise ValueError("Cannot use '%W' or '%U' without day and year")
- elif '%Z' in fmt and '%z' in fmt:
+ elif "%Z" in fmt and "%z" in fmt:
raise ValueError("Cannot parse both %Z and %z")
global _TimeRE_cache, _regex_cache
@@ -132,9 +132,9 @@ def array_strptime(
raise ValueError(f"stray % in format '{fmt}'")
_regex_cache[fmt] = format_regex
- result = np.empty(n, dtype='M8[ns]')
- iresult = result.view('i8')
- result_timezone = np.empty(n, dtype='object')
+ result = np.empty(n, dtype="M8[ns]")
+ iresult = result.view("i8")
+ result_timezone = np.empty(n, dtype="object")
dts.us = dts.ps = dts.as = 0
@@ -216,7 +216,7 @@ def array_strptime(
parse_code = _parse_code_table[group_key]
if parse_code == 0:
- year = int(found_dict['y'])
+ year = int(found_dict["y"])
# Open Group specification for strptime() states that a %y
# value in the range of [00, 68] is in the century 2000, while
# [69,99] is in the century 1900
@@ -225,26 +225,26 @@ def array_strptime(
else:
year += 1900
elif parse_code == 1:
- year = int(found_dict['Y'])
+ year = int(found_dict["Y"])
elif parse_code == 2:
- month = int(found_dict['m'])
+ month = int(found_dict["m"])
# elif group_key == 'B':
elif parse_code == 3:
- month = locale_time.f_month.index(found_dict['B'].lower())
+ month = locale_time.f_month.index(found_dict["B"].lower())
# elif group_key == 'b':
elif parse_code == 4:
- month = locale_time.a_month.index(found_dict['b'].lower())
+ month = locale_time.a_month.index(found_dict["b"].lower())
# elif group_key == 'd':
elif parse_code == 5:
- day = int(found_dict['d'])
+ day = int(found_dict["d"])
# elif group_key == 'H':
elif parse_code == 6:
- hour = int(found_dict['H'])
+ hour = int(found_dict["H"])
elif parse_code == 7:
- hour = int(found_dict['I'])
- ampm = found_dict.get('p', '').lower()
+ hour = int(found_dict["I"])
+ ampm = found_dict.get("p", "").lower()
# If there was no AM/PM indicator, we'll treat this like AM
- if ampm in ('', locale_time.am_pm[0]):
+ if ampm in ("", locale_time.am_pm[0]):
# We're in AM so the hour is correct unless we're
# looking at 12 midnight.
# 12 midnight == 12 AM == hour 0
@@ -257,46 +257,46 @@ def array_strptime(
if hour != 12:
hour += 12
elif parse_code == 8:
- minute = int(found_dict['M'])
+ minute = int(found_dict["M"])
elif parse_code == 9:
- second = int(found_dict['S'])
+ second = int(found_dict["S"])
elif parse_code == 10:
- s = found_dict['f']
+ s = found_dict["f"]
# Pad to always return nanoseconds
s += "0" * (9 - len(s))
us = long(s)
ns = us % 1000
us = us // 1000
elif parse_code == 11:
- weekday = locale_time.f_weekday.index(found_dict['A'].lower())
+ weekday = locale_time.f_weekday.index(found_dict["A"].lower())
elif parse_code == 12:
- weekday = locale_time.a_weekday.index(found_dict['a'].lower())
+ weekday = locale_time.a_weekday.index(found_dict["a"].lower())
elif parse_code == 13:
- weekday = int(found_dict['w'])
+ weekday = int(found_dict["w"])
if weekday == 0:
weekday = 6
else:
weekday -= 1
elif parse_code == 14:
- julian = int(found_dict['j'])
+ julian = int(found_dict["j"])
elif parse_code == 15 or parse_code == 16:
week_of_year = int(found_dict[group_key])
- if group_key == 'U':
+ if group_key == "U":
# U starts week on Sunday.
week_of_year_start = 6
else:
# W starts week on Monday.
week_of_year_start = 0
elif parse_code == 17:
- timezone = pytz.timezone(found_dict['Z'])
+ timezone = pytz.timezone(found_dict["Z"])
elif parse_code == 19:
- timezone = parse_timezone_directive(found_dict['z'])
+ timezone = parse_timezone_directive(found_dict["z"])
elif parse_code == 20:
- iso_year = int(found_dict['G'])
+ iso_year = int(found_dict["G"])
elif parse_code == 21:
- iso_week = int(found_dict['V'])
+ iso_week = int(found_dict["V"])
elif parse_code == 22:
- weekday = int(found_dict['u'])
+ weekday = int(found_dict["u"])
weekday -= 1
# don't assume default values for ISO week/year
@@ -424,7 +424,7 @@ class TimeRE(_TimeRE):
if key == "Z":
# lazy computation
if self._Z is None:
- self._Z = self.__seqToRE(pytz.all_timezones, 'Z')
+ self._Z = self.__seqToRE(pytz.all_timezones, "Z")
# Note: handling Z is the key difference vs using the stdlib
# _strptime.TimeRE. test_to_datetime_parse_tzname_or_tzoffset with
# fmt='%Y-%m-%d %H:%M:%S %Z' fails with the stdlib version.
@@ -543,12 +543,12 @@ cdef tzinfo parse_timezone_directive(str z):
int total_minutes
object gmtoff_remainder, gmtoff_remainder_padding
- if z == 'Z':
+ if z == "Z":
return pytz.FixedOffset(0)
- if z[3] == ':':
+ if z[3] == ":":
z = z[:3] + z[4:]
if len(z) > 5:
- if z[5] != ':':
+ if z[5] != ":":
raise ValueError(f"Inconsistent use of : in {z}")
z = z[:5] + z[6:]
hours = int(z[1:3])
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index 5cc97a722b7a6..fc276d5d024cd 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -410,7 +410,7 @@ def array_to_timedelta64(
# raise here otherwise we segfault below
raise TypeError("array_to_timedelta64 'values' must have object dtype")
- if errors not in {'ignore', 'raise', 'coerce'}:
+ if errors not in {"ignore", "raise", "coerce"}:
raise ValueError("errors must be one of {'ignore', 'raise', or 'coerce'}")
if unit is not None and errors != "coerce":
@@ -442,7 +442,7 @@ def array_to_timedelta64(
except (TypeError, ValueError):
cnp.PyArray_MultiIter_RESET(mi)
- parsed_unit = parse_timedelta_unit(unit or 'ns')
+ parsed_unit = parse_timedelta_unit(unit or "ns")
for i in range(n):
item = <object>(<PyObject**>cnp.PyArray_MultiIter_DATA(mi, 1))[0]
@@ -513,15 +513,15 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
for c in ts:
# skip whitespace / commas
- if c == ' ' or c == ',':
+ if c == " " or c == ",":
pass
# positive signs are ignored
- elif c == '+':
+ elif c == "+":
pass
# neg
- elif c == '-':
+ elif c == "-":
if neg or have_value or have_hhmmss:
raise ValueError("only leading negative signs are allowed")
@@ -550,7 +550,7 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
result += timedelta_as_neg(r, neg)
# hh:mm:ss.
- elif c == ':':
+ elif c == ":":
# we flip this off if we have a leading value
if have_value:
@@ -559,15 +559,15 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
# we are in the pattern hh:mm:ss pattern
if len(number):
if current_unit is None:
- current_unit = 'h'
+ current_unit = "h"
m = 1000000000 * 3600
- elif current_unit == 'h':
- current_unit = 'm'
+ elif current_unit == "h":
+ current_unit = "m"
m = 1000000000 * 60
- elif current_unit == 'm':
- current_unit = 's'
+ elif current_unit == "m":
+ current_unit = "s"
m = 1000000000
- r = <int64_t>int(''.join(number)) * m
+ r = <int64_t>int("".join(number)) * m
result += timedelta_as_neg(r, neg)
have_hhmmss = 1
else:
@@ -576,17 +576,17 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
unit, number = [], []
# after the decimal point
- elif c == '.':
+ elif c == ".":
if len(number) and current_unit is not None:
# by definition we had something like
# so we need to evaluate the final field from a
# hh:mm:ss (so current_unit is 'm')
- if current_unit != 'm':
+ if current_unit != "m":
raise ValueError("expected hh:mm:ss format before .")
m = 1000000000
- r = <int64_t>int(''.join(number)) * m
+ r = <int64_t>int("".join(number)) * m
result += timedelta_as_neg(r, neg)
have_value = 1
unit, number, frac = [], [], []
@@ -622,16 +622,16 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
else:
m = 1
frac = frac[:9]
- r = <int64_t>int(''.join(frac)) * m
+ r = <int64_t>int("".join(frac)) * m
result += timedelta_as_neg(r, neg)
# we have a regular format
# we must have seconds at this point (hence the unit is still 'm')
elif current_unit is not None:
- if current_unit != 'm':
+ if current_unit != "m":
raise ValueError("expected hh:mm:ss format")
m = 1000000000
- r = <int64_t>int(''.join(number)) * m
+ r = <int64_t>int("".join(number)) * m
result += timedelta_as_neg(r, neg)
# we have a last abbreviation
@@ -652,7 +652,7 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
if have_value:
raise ValueError("have leftover units")
if len(number):
- r = timedelta_from_spec(number, frac, 'ns')
+ r = timedelta_from_spec(number, frac, "ns")
result += timedelta_as_neg(r, neg)
return result
@@ -683,20 +683,20 @@ cdef inline timedelta_from_spec(object number, object frac, object unit):
cdef:
str n
- unit = ''.join(unit)
+ unit = "".join(unit)
if unit in ["M", "Y", "y"]:
raise ValueError(
"Units 'M', 'Y' and 'y' do not represent unambiguous timedelta "
"values and are not supported."
)
- if unit == 'M':
+ if unit == "M":
# To parse ISO 8601 string, 'M' should be treated as minute,
# not month
- unit = 'm'
+ unit = "m"
unit = parse_timedelta_unit(unit)
- n = ''.join(number) + '.' + ''.join(frac)
+ n = "".join(number) + "." + "".join(frac)
return cast_from_unit(float(n), unit)
@@ -770,9 +770,9 @@ def _binary_op_method_timedeltalike(op, name):
item = cnp.PyArray_ToScalar(cnp.PyArray_DATA(other), other)
return f(self, item)
- elif other.dtype.kind in ['m', 'M']:
+ elif other.dtype.kind in ["m", "M"]:
return op(self.to_timedelta64(), other)
- elif other.dtype.kind == 'O':
+ elif other.dtype.kind == "O":
return np.array([op(self, x) for x in other])
else:
return NotImplemented
@@ -838,7 +838,7 @@ cdef inline int64_t parse_iso_format_string(str ts) except? -1:
unicode c
int64_t result = 0, r
int p = 0, sign = 1
- object dec_unit = 'ms', err_msg
+ object dec_unit = "ms", err_msg
bint have_dot = 0, have_value = 0, neg = 0
list number = [], unit = []
@@ -854,65 +854,65 @@ cdef inline int64_t parse_iso_format_string(str ts) except? -1:
have_value = 1
if have_dot:
- if p == 3 and dec_unit != 'ns':
+ if p == 3 and dec_unit != "ns":
unit.append(dec_unit)
- if dec_unit == 'ms':
- dec_unit = 'us'
- elif dec_unit == 'us':
- dec_unit = 'ns'
+ if dec_unit == "ms":
+ dec_unit = "us"
+ elif dec_unit == "us":
+ dec_unit = "ns"
p = 0
p += 1
if not len(unit):
number.append(c)
else:
- r = timedelta_from_spec(number, '0', unit)
+ r = timedelta_from_spec(number, "0", unit)
result += timedelta_as_neg(r, neg)
neg = 0
unit, number = [], [c]
else:
- if c == 'P' or c == 'T':
+ if c == "P" or c == "T":
pass # ignore marking characters P and T
- elif c == '-':
+ elif c == "-":
if neg or have_value:
raise ValueError(err_msg)
else:
neg = 1
elif c == "+":
pass
- elif c in ['W', 'D', 'H', 'M']:
- if c in ['H', 'M'] and len(number) > 2:
+ elif c in ["W", "D", "H", "M"]:
+ if c in ["H", "M"] and len(number) > 2:
raise ValueError(err_msg)
- if c == 'M':
- c = 'min'
+ if c == "M":
+ c = "min"
unit.append(c)
- r = timedelta_from_spec(number, '0', unit)
+ r = timedelta_from_spec(number, "0", unit)
result += timedelta_as_neg(r, neg)
neg = 0
unit, number = [], []
- elif c == '.':
+ elif c == ".":
# append any seconds
if len(number):
- r = timedelta_from_spec(number, '0', 'S')
+ r = timedelta_from_spec(number, "0", "S")
result += timedelta_as_neg(r, neg)
unit, number = [], []
have_dot = 1
- elif c == 'S':
+ elif c == "S":
if have_dot: # ms, us, or ns
if not len(number) or p > 3:
raise ValueError(err_msg)
# pad to 3 digits as required
pad = 3 - p
while pad > 0:
- number.append('0')
+ number.append("0")
pad -= 1
- r = timedelta_from_spec(number, '0', dec_unit)
+ r = timedelta_from_spec(number, "0", dec_unit)
result += timedelta_as_neg(r, neg)
else: # seconds
- r = timedelta_from_spec(number, '0', 'S')
+ r = timedelta_from_spec(number, "0", "S")
result += timedelta_as_neg(r, neg)
else:
raise ValueError(err_msg)
@@ -1435,7 +1435,7 @@ cdef class _Timedelta(timedelta):
else:
sign = " "
- if format == 'all':
+ if format == "all":
fmt = ("{days} days{sign}{hours:02}:{minutes:02}:{seconds:02}."
"{milliseconds:03}{microseconds:03}{nanoseconds:03}")
else:
@@ -1451,24 +1451,24 @@ cdef class _Timedelta(timedelta):
else:
seconds_fmt = "{seconds:02}"
- if format == 'sub_day' and not self._d:
+ if format == "sub_day" and not self._d:
fmt = "{hours:02}:{minutes:02}:" + seconds_fmt
- elif subs or format == 'long':
+ elif subs or format == "long":
fmt = "{days} days{sign}{hours:02}:{minutes:02}:" + seconds_fmt
else:
fmt = "{days} days"
comp_dict = self.components._asdict()
- comp_dict['sign'] = sign
+ comp_dict["sign"] = sign
return fmt.format(**comp_dict)
def __repr__(self) -> str:
- repr_based = self._repr_base(format='long')
+ repr_based = self._repr_base(format="long")
return f"Timedelta('{repr_based}')"
def __str__(self) -> str:
- return self._repr_base(format='long')
+ return self._repr_base(format="long")
def __bool__(self) -> bool:
return self.value != 0
@@ -1512,14 +1512,14 @@ cdef class _Timedelta(timedelta):
'P500DT12H0M0S'
"""
components = self.components
- seconds = (f'{components.seconds}.'
- f'{components.milliseconds:0>3}'
- f'{components.microseconds:0>3}'
- f'{components.nanoseconds:0>3}')
+ seconds = (f"{components.seconds}."
+ f"{components.milliseconds:0>3}"
+ f"{components.microseconds:0>3}"
+ f"{components.nanoseconds:0>3}")
# Trim unnecessary 0s, 1.000000000 -> 1
- seconds = seconds.rstrip('0').rstrip('.')
- tpl = (f'P{components.days}DT{components.hours}'
- f'H{components.minutes}M{seconds}S')
+ seconds = seconds.rstrip("0").rstrip(".")
+ tpl = (f"P{components.days}DT{components.hours}"
+ f"H{components.minutes}M{seconds}S")
return tpl
# ----------------------------------------------------------------
@@ -1665,22 +1665,22 @@ class Timedelta(_Timedelta):
# are taken into consideration.
seconds = int((
(
- (kwargs.get('days', 0) + kwargs.get('weeks', 0) * 7) * 24
- + kwargs.get('hours', 0)
+ (kwargs.get("days", 0) + kwargs.get("weeks", 0) * 7) * 24
+ + kwargs.get("hours", 0)
) * 3600
- + kwargs.get('minutes', 0) * 60
- + kwargs.get('seconds', 0)
+ + kwargs.get("minutes", 0) * 60
+ + kwargs.get("seconds", 0)
) * 1_000_000_000
)
value = np.timedelta64(
- int(kwargs.get('nanoseconds', 0))
- + int(kwargs.get('microseconds', 0) * 1_000)
- + int(kwargs.get('milliseconds', 0) * 1_000_000)
+ int(kwargs.get("nanoseconds", 0))
+ + int(kwargs.get("microseconds", 0) * 1_000)
+ + int(kwargs.get("milliseconds", 0) * 1_000_000)
+ seconds
)
- if unit in {'Y', 'y', 'M'}:
+ if unit in {"Y", "y", "M"}:
raise ValueError(
"Units 'M', 'Y', and 'y' are no longer supported, as they do not "
"represent unambiguous timedelta values durations."
@@ -1702,8 +1702,8 @@ class Timedelta(_Timedelta):
elif isinstance(value, str):
if unit is not None:
raise ValueError("unit must not be specified if the value is a str")
- if (len(value) > 0 and value[0] == 'P') or (
- len(value) > 1 and value[:2] == '-P'
+ if (len(value) > 0 and value[0] == "P") or (
+ len(value) > 1 and value[:2] == "-P"
):
value = parse_iso_format_string(value)
else:
@@ -1757,7 +1757,7 @@ class Timedelta(_Timedelta):
)
if is_timedelta64_object(value):
- value = value.view('i8')
+ value = value.view("i8")
# nat
if value == NPY_NAT:
@@ -1839,14 +1839,14 @@ class Timedelta(_Timedelta):
# Arithmetic Methods
# TODO: Can some of these be defined in the cython class?
- __neg__ = _op_unary_method(lambda x: -x, '__neg__')
- __pos__ = _op_unary_method(lambda x: x, '__pos__')
- __abs__ = _op_unary_method(lambda x: abs(x), '__abs__')
+ __neg__ = _op_unary_method(lambda x: -x, "__neg__")
+ __pos__ = _op_unary_method(lambda x: x, "__pos__")
+ __abs__ = _op_unary_method(lambda x: abs(x), "__abs__")
- __add__ = _binary_op_method_timedeltalike(lambda x, y: x + y, '__add__')
- __radd__ = _binary_op_method_timedeltalike(lambda x, y: x + y, '__radd__')
- __sub__ = _binary_op_method_timedeltalike(lambda x, y: x - y, '__sub__')
- __rsub__ = _binary_op_method_timedeltalike(lambda x, y: y - x, '__rsub__')
+ __add__ = _binary_op_method_timedeltalike(lambda x, y: x + y, "__add__")
+ __radd__ = _binary_op_method_timedeltalike(lambda x, y: x + y, "__radd__")
+ __sub__ = _binary_op_method_timedeltalike(lambda x, y: x - y, "__sub__")
+ __rsub__ = _binary_op_method_timedeltalike(lambda x, y: y - x, "__rsub__")
def __mul__(self, other):
if is_integer_object(other) or is_float_object(other):
@@ -1947,7 +1947,7 @@ class Timedelta(_Timedelta):
item = cnp.PyArray_ToScalar(cnp.PyArray_DATA(other), other)
return self.__floordiv__(item)
- if other.dtype.kind == 'm':
+ if other.dtype.kind == "m":
# also timedelta-like
# TODO: could suppress
# RuntimeWarning: invalid value encountered in floor_divide
@@ -1959,13 +1959,13 @@ class Timedelta(_Timedelta):
result[mask] = np.nan
return result
- elif other.dtype.kind in ['i', 'u', 'f']:
+ elif other.dtype.kind in ["i", "u", "f"]:
if other.ndim == 0:
return self // other.item()
else:
return self.to_timedelta64() // other
- raise TypeError(f'Invalid dtype {other.dtype} for __floordiv__')
+ raise TypeError(f"Invalid dtype {other.dtype} for __floordiv__")
return NotImplemented
@@ -1987,7 +1987,7 @@ class Timedelta(_Timedelta):
item = cnp.PyArray_ToScalar(cnp.PyArray_DATA(other), other)
return self.__rfloordiv__(item)
- if other.dtype.kind == 'm':
+ if other.dtype.kind == "m":
# also timedelta-like
# TODO: could suppress
# RuntimeWarning: invalid value encountered in floor_divide
@@ -2000,7 +2000,7 @@ class Timedelta(_Timedelta):
return result
# Includes integer array // Timedelta, disallowed in GH#19761
- raise TypeError(f'Invalid dtype {other.dtype} for __floordiv__')
+ raise TypeError(f"Invalid dtype {other.dtype} for __floordiv__")
return NotImplemented
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 8e9c8d40398d9..f987a2feb2717 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -434,7 +434,7 @@ cdef class _Timestamp(ABCTimestamp):
raise integer_op_not_supported(self)
elif is_array(other):
- if other.dtype.kind in ['i', 'u']:
+ if other.dtype.kind in ["i", "u"]:
raise integer_op_not_supported(self)
if other.dtype.kind == "m":
if self.tz is None:
@@ -465,7 +465,7 @@ cdef class _Timestamp(ABCTimestamp):
return self + neg_other
elif is_array(other):
- if other.dtype.kind in ['i', 'u']:
+ if other.dtype.kind in ["i", "u"]:
raise integer_op_not_supported(self)
if other.dtype.kind == "m":
if self.tz is None:
@@ -563,7 +563,7 @@ cdef class _Timestamp(ABCTimestamp):
if freq:
kwds = freq.kwds
- month_kw = kwds.get('startingMonth', kwds.get('month', 12))
+ month_kw = kwds.get("startingMonth", kwds.get("month", 12))
freqstr = freq.freqstr
else:
month_kw = 12
@@ -929,15 +929,15 @@ cdef class _Timestamp(ABCTimestamp):
zone = None
try:
- stamp += self.strftime('%z')
+ stamp += self.strftime("%z")
except ValueError:
year2000 = self.replace(year=2000)
- stamp += year2000.strftime('%z')
+ stamp += year2000.strftime("%z")
if self.tzinfo:
zone = get_timezone(self.tzinfo)
try:
- stamp += zone.strftime(' %%Z')
+ stamp += zone.strftime(" %%Z")
except AttributeError:
# e.g. tzlocal has no `strftime`
pass
@@ -954,16 +954,16 @@ cdef class _Timestamp(ABCTimestamp):
def _date_repr(self) -> str:
# Ideal here would be self.strftime("%Y-%m-%d"), but
# the datetime strftime() methods require year >= 1900 and is slower
- return f'{self.year}-{self.month:02d}-{self.day:02d}'
+ return f"{self.year}-{self.month:02d}-{self.day:02d}"
@property
def _time_repr(self) -> str:
- result = f'{self.hour:02d}:{self.minute:02d}:{self.second:02d}'
+ result = f"{self.hour:02d}:{self.minute:02d}:{self.second:02d}"
if self.nanosecond != 0:
- result += f'.{self.nanosecond + 1000 * self.microsecond:09d}'
+ result += f".{self.nanosecond + 1000 * self.microsecond:09d}"
elif self.microsecond != 0:
- result += f'.{self.microsecond:06d}'
+ result += f".{self.microsecond:06d}"
return result
@@ -1451,7 +1451,7 @@ class Timestamp(_Timestamp):
# GH#17690 tzinfo must be a datetime.tzinfo object, ensured
# by the cython annotation.
if tz is not None:
- raise ValueError('Can provide at most one of tz, tzinfo')
+ raise ValueError("Can provide at most one of tz, tzinfo")
# User passed tzinfo instead of tz; avoid silently ignoring
tz, tzinfo = tzinfo, None
@@ -1465,7 +1465,7 @@ class Timestamp(_Timestamp):
if (ts_input is not _no_input and not (
PyDateTime_Check(ts_input) and
- getattr(ts_input, 'tzinfo', None) is None)):
+ getattr(ts_input, "tzinfo", None) is None)):
raise ValueError(
"Cannot pass fold with possibly unambiguous input: int, "
"float, numpy.datetime64, str, or timezone-aware "
@@ -1479,7 +1479,7 @@ class Timestamp(_Timestamp):
"timezones."
)
- if hasattr(ts_input, 'fold'):
+ if hasattr(ts_input, "fold"):
ts_input = ts_input.replace(fold=fold)
# GH 30543 if pd.Timestamp already passed, return it
@@ -1536,7 +1536,7 @@ class Timestamp(_Timestamp):
# passed positionally see test_constructor_nanosecond
nanosecond = microsecond
- if getattr(ts_input, 'tzinfo', None) is not None and tz is not None:
+ if getattr(ts_input, "tzinfo", None) is not None and tz is not None:
raise ValueError("Cannot pass a datetime or Timestamp with tzinfo with "
"the tz parameter. Use tz_convert instead.")
@@ -1558,7 +1558,7 @@ class Timestamp(_Timestamp):
return create_timestamp_from_ts(ts.value, ts.dts, ts.tzinfo, ts.fold, ts.creso)
- def _round(self, freq, mode, ambiguous='raise', nonexistent='raise'):
+ def _round(self, freq, mode, ambiguous="raise", nonexistent="raise"):
cdef:
int64_t nanos
@@ -1581,7 +1581,7 @@ class Timestamp(_Timestamp):
)
return result
- def round(self, freq, ambiguous='raise', nonexistent='raise'):
+ def round(self, freq, ambiguous="raise", nonexistent="raise"):
"""
Round the Timestamp to the specified resolution.
@@ -1676,7 +1676,7 @@ timedelta}, default 'raise'
freq, RoundTo.NEAREST_HALF_EVEN, ambiguous, nonexistent
)
- def floor(self, freq, ambiguous='raise', nonexistent='raise'):
+ def floor(self, freq, ambiguous="raise", nonexistent="raise"):
"""
Return a new Timestamp floored to this resolution.
@@ -1765,7 +1765,7 @@ timedelta}, default 'raise'
"""
return self._round(freq, RoundTo.MINUS_INFTY, ambiguous, nonexistent)
- def ceil(self, freq, ambiguous='raise', nonexistent='raise'):
+ def ceil(self, freq, ambiguous="raise", nonexistent="raise"):
"""
Return a new Timestamp ceiled to this resolution.
@@ -1875,7 +1875,7 @@ timedelta}, default 'raise'
"Use tz_localize() or tz_convert() as appropriate"
)
- def tz_localize(self, tz, ambiguous='raise', nonexistent='raise'):
+ def tz_localize(self, tz, ambiguous="raise", nonexistent="raise"):
"""
Localize the Timestamp to a timezone.
@@ -1946,10 +1946,10 @@ default 'raise'
>>> pd.NaT.tz_localize()
NaT
"""
- if ambiguous == 'infer':
- raise ValueError('Cannot infer offset with only one time.')
+ if ambiguous == "infer":
+ raise ValueError("Cannot infer offset with only one time.")
- nonexistent_options = ('raise', 'NaT', 'shift_forward', 'shift_backward')
+ nonexistent_options = ("raise", "NaT", "shift_forward", "shift_backward")
if nonexistent not in nonexistent_options and not PyDelta_Check(nonexistent):
raise ValueError(
"The nonexistent argument must be one of 'raise', "
@@ -2122,21 +2122,21 @@ default 'raise'
return v
if year is not None:
- dts.year = validate('year', year)
+ dts.year = validate("year", year)
if month is not None:
- dts.month = validate('month', month)
+ dts.month = validate("month", month)
if day is not None:
- dts.day = validate('day', day)
+ dts.day = validate("day", day)
if hour is not None:
- dts.hour = validate('hour', hour)
+ dts.hour = validate("hour", hour)
if minute is not None:
- dts.min = validate('minute', minute)
+ dts.min = validate("minute", minute)
if second is not None:
- dts.sec = validate('second', second)
+ dts.sec = validate("second", second)
if microsecond is not None:
- dts.us = validate('microsecond', microsecond)
+ dts.us = validate("microsecond", microsecond)
if nanosecond is not None:
- dts.ps = validate('nanosecond', nanosecond) * 1000
+ dts.ps = validate("nanosecond", nanosecond) * 1000
if tzinfo is not object:
tzobj = tzinfo
@@ -2150,10 +2150,10 @@ default 'raise'
is_dst=not bool(fold))
tzobj = ts_input.tzinfo
else:
- kwargs = {'year': dts.year, 'month': dts.month, 'day': dts.day,
- 'hour': dts.hour, 'minute': dts.min, 'second': dts.sec,
- 'microsecond': dts.us, 'tzinfo': tzobj,
- 'fold': fold}
+ kwargs = {"year": dts.year, "month": dts.month, "day": dts.day,
+ "hour": dts.hour, "minute": dts.min, "second": dts.sec,
+ "microsecond": dts.us, "tzinfo": tzobj,
+ "fold": fold}
ts_input = datetime(**kwargs)
ts = convert_datetime_to_tsobject(
diff --git a/pandas/_libs/tslibs/timezones.pyx b/pandas/_libs/tslibs/timezones.pyx
index abf8bbc5ca5b9..8d7bebe5d46c2 100644
--- a/pandas/_libs/tslibs/timezones.pyx
+++ b/pandas/_libs/tslibs/timezones.pyx
@@ -97,12 +97,12 @@ cdef inline bint is_tzlocal(tzinfo tz):
cdef inline bint treat_tz_as_pytz(tzinfo tz):
- return (hasattr(tz, '_utc_transition_times') and
- hasattr(tz, '_transition_info'))
+ return (hasattr(tz, "_utc_transition_times") and
+ hasattr(tz, "_transition_info"))
cdef inline bint treat_tz_as_dateutil(tzinfo tz):
- return hasattr(tz, '_trans_list') and hasattr(tz, '_trans_idx')
+ return hasattr(tz, "_trans_list") and hasattr(tz, "_trans_idx")
# Returns str or tzinfo object
@@ -125,16 +125,16 @@ cpdef inline object get_timezone(tzinfo tz):
return tz
else:
if treat_tz_as_dateutil(tz):
- if '.tar.gz' in tz._filename:
+ if ".tar.gz" in tz._filename:
raise ValueError(
- 'Bad tz filename. Dateutil on python 3 on windows has a '
- 'bug which causes tzfile._filename to be the same for all '
- 'timezone files. Please construct dateutil timezones '
+ "Bad tz filename. Dateutil on python 3 on windows has a "
+ "bug which causes tzfile._filename to be the same for all "
+ "timezone files. Please construct dateutil timezones "
'implicitly by passing a string like "dateutil/Europe'
'/London" when you construct your pandas objects instead '
- 'of passing a timezone object. See '
- 'https://github.com/pandas-dev/pandas/pull/7362')
- return 'dateutil/' + tz._filename
+ "of passing a timezone object. See "
+ "https://github.com/pandas-dev/pandas/pull/7362")
+ return "dateutil/" + tz._filename
else:
# tz is a pytz timezone or unknown.
try:
@@ -152,19 +152,19 @@ cpdef inline tzinfo maybe_get_tz(object tz):
it to construct a timezone object. Otherwise, just return tz.
"""
if isinstance(tz, str):
- if tz == 'tzlocal()':
+ if tz == "tzlocal()":
tz = _dateutil_tzlocal()
- elif tz.startswith('dateutil/'):
+ elif tz.startswith("dateutil/"):
zone = tz[9:]
tz = dateutil_gettz(zone)
# On Python 3 on Windows, the filename is not always set correctly.
- if isinstance(tz, _dateutil_tzfile) and '.tar.gz' in tz._filename:
+ if isinstance(tz, _dateutil_tzfile) and ".tar.gz" in tz._filename:
tz._filename = zone
- elif tz[0] in {'-', '+'}:
+ elif tz[0] in {"-", "+"}:
hours = int(tz[0:3])
minutes = int(tz[0] + tz[4:6])
tz = timezone(timedelta(hours=hours, minutes=minutes))
- elif tz[0:4] in {'UTC-', 'UTC+'}:
+ elif tz[0:4] in {"UTC-", "UTC+"}:
hours = int(tz[3:6])
minutes = int(tz[3] + tz[7:9])
tz = timezone(timedelta(hours=hours, minutes=minutes))
@@ -211,16 +211,16 @@ cdef inline object tz_cache_key(tzinfo tz):
if isinstance(tz, _pytz_BaseTzInfo):
return tz.zone
elif isinstance(tz, _dateutil_tzfile):
- if '.tar.gz' in tz._filename:
- raise ValueError('Bad tz filename. Dateutil on python 3 on '
- 'windows has a bug which causes tzfile._filename '
- 'to be the same for all timezone files. Please '
- 'construct dateutil timezones implicitly by '
+ if ".tar.gz" in tz._filename:
+ raise ValueError("Bad tz filename. Dateutil on python 3 on "
+ "windows has a bug which causes tzfile._filename "
+ "to be the same for all timezone files. Please "
+ "construct dateutil timezones implicitly by "
'passing a string like "dateutil/Europe/London" '
- 'when you construct your pandas objects instead '
- 'of passing a timezone object. See '
- 'https://github.com/pandas-dev/pandas/pull/7362')
- return 'dateutil' + tz._filename
+ "when you construct your pandas objects instead "
+ "of passing a timezone object. See "
+ "https://github.com/pandas-dev/pandas/pull/7362")
+ return "dateutil" + tz._filename
else:
return None
@@ -276,7 +276,7 @@ cdef int64_t[::1] unbox_utcoffsets(object transinfo):
int64_t[::1] arr
sz = len(transinfo)
- arr = np.empty(sz, dtype='i8')
+ arr = np.empty(sz, dtype="i8")
for i in range(sz):
arr[i] = int(transinfo[i][0].total_seconds()) * 1_000_000_000
@@ -312,35 +312,35 @@ cdef object get_dst_info(tzinfo tz):
if cache_key not in dst_cache:
if treat_tz_as_pytz(tz):
- trans = np.array(tz._utc_transition_times, dtype='M8[ns]')
- trans = trans.view('i8')
+ trans = np.array(tz._utc_transition_times, dtype="M8[ns]")
+ trans = trans.view("i8")
if tz._utc_transition_times[0].year == 1:
trans[0] = NPY_NAT + 1
deltas = unbox_utcoffsets(tz._transition_info)
- typ = 'pytz'
+ typ = "pytz"
elif treat_tz_as_dateutil(tz):
if len(tz._trans_list):
# get utc trans times
trans_list = _get_utc_trans_times_from_dateutil_tz(tz)
trans = np.hstack([
- np.array([0], dtype='M8[s]'), # place holder for 1st item
- np.array(trans_list, dtype='M8[s]')]).astype(
- 'M8[ns]') # all trans listed
- trans = trans.view('i8')
+ np.array([0], dtype="M8[s]"), # place holder for 1st item
+ np.array(trans_list, dtype="M8[s]")]).astype(
+ "M8[ns]") # all trans listed
+ trans = trans.view("i8")
trans[0] = NPY_NAT + 1
# deltas
deltas = np.array([v.offset for v in (
- tz._ttinfo_before,) + tz._trans_idx], dtype='i8')
+ tz._ttinfo_before,) + tz._trans_idx], dtype="i8")
deltas *= 1_000_000_000
- typ = 'dateutil'
+ typ = "dateutil"
elif is_fixed_offset(tz):
trans = np.array([NPY_NAT + 1], dtype=np.int64)
deltas = np.array([tz._ttinfo_std.offset],
- dtype='i8') * 1_000_000_000
- typ = 'fixed'
+ dtype="i8") * 1_000_000_000
+ typ = "fixed"
else:
# 2018-07-12 this is not reached in the tests, and this case
# is not handled in any of the functions that call
@@ -367,8 +367,8 @@ def infer_tzinfo(datetime start, datetime end):
if start is not None and end is not None:
tz = start.tzinfo
if not tz_compare(tz, end.tzinfo):
- raise AssertionError(f'Inputs must both have the same timezone, '
- f'{tz} != {end.tzinfo}')
+ raise AssertionError(f"Inputs must both have the same timezone, "
+ f"{tz} != {end.tzinfo}")
elif start is not None:
tz = start.tzinfo
elif end is not None:
diff --git a/pandas/_libs/tslibs/tzconversion.pyx b/pandas/_libs/tslibs/tzconversion.pyx
index afdf6d3d9b001..f74c72dc4e35c 100644
--- a/pandas/_libs/tslibs/tzconversion.pyx
+++ b/pandas/_libs/tslibs/tzconversion.pyx
@@ -248,9 +248,9 @@ timedelta-like}
# silence false-positive compiler warning
ambiguous_array = np.empty(0, dtype=bool)
if isinstance(ambiguous, str):
- if ambiguous == 'infer':
+ if ambiguous == "infer":
infer_dst = True
- elif ambiguous == 'NaT':
+ elif ambiguous == "NaT":
fill = True
elif isinstance(ambiguous, bool):
is_dst = True
@@ -258,23 +258,23 @@ timedelta-like}
ambiguous_array = np.ones(len(vals), dtype=bool)
else:
ambiguous_array = np.zeros(len(vals), dtype=bool)
- elif hasattr(ambiguous, '__iter__'):
+ elif hasattr(ambiguous, "__iter__"):
is_dst = True
if len(ambiguous) != len(vals):
raise ValueError("Length of ambiguous bool-array must be "
"the same size as vals")
ambiguous_array = np.asarray(ambiguous, dtype=bool)
- if nonexistent == 'NaT':
+ if nonexistent == "NaT":
fill_nonexist = True
- elif nonexistent == 'shift_forward':
+ elif nonexistent == "shift_forward":
shift_forward = True
- elif nonexistent == 'shift_backward':
+ elif nonexistent == "shift_backward":
shift_backward = True
elif PyDelta_Check(nonexistent):
from .timedeltas import delta_to_nanoseconds
shift_delta = delta_to_nanoseconds(nonexistent, reso=creso)
- elif nonexistent not in ('raise', None):
+ elif nonexistent not in ("raise", None):
msg = ("nonexistent must be one of {'NaT', 'raise', 'shift_forward', "
"shift_backwards} or a timedelta object")
raise ValueError(msg)
diff --git a/pandas/_libs/window/aggregations.pyx b/pandas/_libs/window/aggregations.pyx
index 702706f00455b..57ef3601b7461 100644
--- a/pandas/_libs/window/aggregations.pyx
+++ b/pandas/_libs/window/aggregations.pyx
@@ -1158,11 +1158,11 @@ cdef enum InterpolationType:
interpolation_types = {
- 'linear': LINEAR,
- 'lower': LOWER,
- 'higher': HIGHER,
- 'nearest': NEAREST,
- 'midpoint': MIDPOINT,
+ "linear": LINEAR,
+ "lower": LOWER,
+ "higher": HIGHER,
+ "nearest": NEAREST,
+ "midpoint": MIDPOINT,
}
@@ -1419,7 +1419,7 @@ def roll_apply(object obj,
# ndarray input
if raw and not arr.flags.c_contiguous:
- arr = arr.copy('C')
+ arr = arr.copy("C")
counts = roll_sum(np.isfinite(arr).astype(float), start, end, minp)
diff --git a/pandas/_libs/window/indexers.pyx b/pandas/_libs/window/indexers.pyx
index 465865dec23c4..02934346130a5 100644
--- a/pandas/_libs/window/indexers.pyx
+++ b/pandas/_libs/window/indexers.pyx
@@ -53,16 +53,16 @@ def calculate_variable_window_bounds(
Py_ssize_t i, j
if num_values <= 0:
- return np.empty(0, dtype='int64'), np.empty(0, dtype='int64')
+ return np.empty(0, dtype="int64"), np.empty(0, dtype="int64")
# default is 'right'
if closed is None:
- closed = 'right'
+ closed = "right"
- if closed in ['right', 'both']:
+ if closed in ["right", "both"]:
right_closed = True
- if closed in ['left', 'both']:
+ if closed in ["left", "both"]:
left_closed = True
# GH 43997:
@@ -76,9 +76,9 @@ def calculate_variable_window_bounds(
if index[num_values - 1] < index[0]:
index_growth_sign = -1
- start = np.empty(num_values, dtype='int64')
+ start = np.empty(num_values, dtype="int64")
start.fill(-1)
- end = np.empty(num_values, dtype='int64')
+ end = np.empty(num_values, dtype="int64")
end.fill(-1)
start[0] = 0
diff --git a/pandas/_libs/writers.pyx b/pandas/_libs/writers.pyx
index cd42b08a03474..fbd08687d7c82 100644
--- a/pandas/_libs/writers.pyx
+++ b/pandas/_libs/writers.pyx
@@ -89,14 +89,14 @@ def convert_json_to_lines(arr: str) -> str:
unsigned char val, newline, comma, left_bracket, right_bracket, quote
unsigned char backslash
- newline = ord('\n')
- comma = ord(',')
- left_bracket = ord('{')
- right_bracket = ord('}')
+ newline = ord("\n")
+ comma = ord(",")
+ left_bracket = ord("{")
+ right_bracket = ord("}")
quote = ord('"')
- backslash = ord('\\')
+ backslash = ord("\\")
- narr = np.frombuffer(arr.encode('utf-8'), dtype='u1').copy()
+ narr = np.frombuffer(arr.encode("utf-8"), dtype="u1").copy()
length = narr.shape[0]
for i in range(length):
val = narr[i]
@@ -114,7 +114,7 @@ def convert_json_to_lines(arr: str) -> str:
if not in_quotes:
num_open_brackets_seen -= 1
- return narr.tobytes().decode('utf-8') + '\n' # GH:36888
+ return narr.tobytes().decode("utf-8") + "\n" # GH:36888
# stata, pytables
diff --git a/pandas/io/sas/sas.pyx b/pandas/io/sas/sas.pyx
index 8c13566c656b7..7d0f549a2f976 100644
--- a/pandas/io/sas/sas.pyx
+++ b/pandas/io/sas/sas.pyx
@@ -343,7 +343,7 @@ cdef class Parser:
self.bit_offset = self.parser._page_bit_offset
self.subheader_pointer_length = self.parser._subheader_pointer_length
self.is_little_endian = parser.byte_order == "<"
- self.column_types = np.empty(self.column_count, dtype='int64')
+ self.column_types = np.empty(self.column_count, dtype="int64")
# page indicators
self.update_next_page()
@@ -352,9 +352,9 @@ cdef class Parser:
# map column types
for j in range(self.column_count):
- if column_types[j] == b'd':
+ if column_types[j] == b"d":
self.column_types[j] = column_type_decimal
- elif column_types[j] == b's':
+ elif column_types[j] == b"s":
self.column_types[j] = column_type_string
else:
raise ValueError(f"unknown column type: {self.parser.columns[j].ctype}")
| The task here is:
- add `id: double-quote-cython-strings` to
https://github.com/pandas-dev/pandas/blob/3d0d197cec32ed5ce30d28b922f329510c03f153/.pre-commit-config.yaml#L29
- run `pre-commit run double-quote-cython-strings --all-files`
- `git add -u`, `git commit -m 'double quote cython strings'`, `git push origin HEAD`
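A rough sketch of what the resulting `.pre-commit-config.yaml` entry could look like — only the hook id comes from the task description; the repo URL and `rev` shown here are assumptions:

```yaml
# Sketch only: repo and rev are hypothetical; the hook id is taken
# from the task description above.
- repo: https://github.com/MarcoGorelli/cython-lint
  rev: v0.1.8  # hypothetical revision
  hooks:
    - id: double-quote-cython-strings
```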
Motivation for this comes from https://github.com/pandas-dev/pandas/pull/49866#discussion_r1030853463
PyData 2022 sprint
https://github.com/noatamir/pydata-global-sprints/issues/21
@jorisvandenbossche, @MarcoGorelli, @WillAyd, @rhshadrach, @phofl @noatamir
| https://api.github.com/repos/pandas-dev/pandas/pulls/50013 | 2022-12-02T16:07:02Z | 2022-12-02T18:11:14Z | 2022-12-02T18:11:14Z | 2023-03-30T16:14:22Z |
DOC: Add missing :ref: to a link in a docstring | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0144aefedaa5f..aa61399f94330 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5168,7 +5168,7 @@ def drop(
Remove rows or columns by specifying label names and corresponding
axis, or by specifying directly index or column names. When using a
multi-index, labels on different levels can be removed by specifying
- the level. See the `user guide <advanced.shown_levels>`
+ the level. See the :ref:`user guide <advanced.shown_levels>`
for more information about the now unused levels.
Parameters
| - [x] closes https://github.com/noatamir/pydata-global-sprints/issues/18
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Docstring fix.
Validation outcome:
```
>>> python scripts/validate_docstrings.py pandas.DataFrame.drop
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-e6oti6qf because the default path (/home/gitpod/.cache/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
################################################################################
###################### Docstring (pandas.DataFrame.drop) ######################
################################################################################
Drop specified labels from rows or columns.
Remove rows or columns by specifying label names and corresponding
axis, or by specifying directly index or column names. When using a
multi-index, labels on different levels can be removed by specifying
the level. See the :ref:`user guide <advanced.shown_levels>`
for more information about the now unused levels.
Parameters
----------
labels : single label or list-like
Index or column labels to drop. A tuple will be used as a single
label and not treated as a list-like.
axis : {0 or 'index', 1 or 'columns'}, default 0
Whether to drop labels from the index (0 or 'index') or
columns (1 or 'columns').
index : single label or list-like
Alternative to specifying axis (``labels, axis=0``
is equivalent to ``index=labels``).
columns : single label or list-like
Alternative to specifying axis (``labels, axis=1``
is equivalent to ``columns=labels``).
level : int or level name, optional
For MultiIndex, level from which the labels will be removed.
inplace : bool, default False
If False, return a copy. Otherwise, do operation
inplace and return None.
errors : {'ignore', 'raise'}, default 'raise'
If 'ignore', suppress error and only existing labels are
dropped.
Returns
-------
DataFrame or None
DataFrame without the removed index or column labels or
None if ``inplace=True``.
Raises
------
KeyError
If any of the labels is not found in the selected axis.
See Also
--------
DataFrame.loc : Label-location based indexer for selection by label.
DataFrame.dropna : Return DataFrame with labels on given axis omitted
where (all or any) data are missing.
DataFrame.drop_duplicates : Return DataFrame with duplicate rows
removed, optionally only considering certain columns.
Series.drop : Return Series with specified index labels removed.
Examples
--------
>>> df = pd.DataFrame(np.arange(12).reshape(3, 4),
...                   columns=['A', 'B', 'C', 'D'])
>>> df
   A  B   C   D
0  0  1   2   3
1  4  5   6   7
2  8  9  10  11

Drop columns

>>> df.drop(['B', 'C'], axis=1)
   A   D
0  0   3
1  4   7
2  8  11

>>> df.drop(columns=['B', 'C'])
   A   D
0  0   3
1  4   7
2  8  11

Drop a row by index

>>> df.drop([0, 1])
   A  B   C   D
2  8  9  10  11

Drop columns and/or rows of MultiIndex DataFrame

>>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
...                              ['speed', 'weight', 'length']],
...                      codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
...                             [0, 1, 2, 0, 1, 2, 0, 1, 2]])
>>> df = pd.DataFrame(index=midx, columns=['big', 'small'],
...                   data=[[45, 30], [200, 100], [1.5, 1], [30, 20],
...                         [250, 150], [1.5, 0.8], [320, 250],
...                         [1, 0.8], [0.3, 0.2]])
>>> df
                big     small
lama    speed   45.0    30.0
        weight  200.0   100.0
        length  1.5     1.0
cow     speed   30.0    20.0
        weight  250.0   150.0
        length  1.5     0.8
falcon  speed   320.0   250.0
        weight  1.0     0.8
        length  0.3     0.2

Drop a specific index combination from the MultiIndex
DataFrame, i.e., drop the combination ``'falcon'`` and
``'weight'``, which deletes only the corresponding row

>>> df.drop(index=('falcon', 'weight'))
                big     small
lama    speed   45.0    30.0
        weight  200.0   100.0
        length  1.5     1.0
cow     speed   30.0    20.0
        weight  250.0   150.0
        length  1.5     0.8
falcon  speed   320.0   250.0
        length  0.3     0.2

>>> df.drop(index='cow', columns='small')
                big
lama    speed   45.0
        weight  200.0
        length  1.5
falcon  speed   320.0
        weight  1.0
        length  0.3

>>> df.drop(index='length', level=1)
                big     small
lama    speed   45.0    30.0
        weight  200.0   100.0
cow     speed   30.0    20.0
        weight  250.0   150.0
falcon  speed   320.0   250.0
        weight  1.0     0.8
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.DataFrame.drop" correct. :)
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/50012 | 2022-12-02T15:51:20Z | 2022-12-02T17:36:15Z | 2022-12-02T17:36:15Z | 2022-12-02T17:58:01Z |
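The validated docstring above gives no worked example for the ``errors`` parameter; a short hedged sketch of its effect (illustrative only, not part of the original docstring):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})

# errors='ignore' silently skips labels that are absent
out = df.drop(columns=["B", "Z"], errors="ignore")
assert list(out.columns) == ["A"]

# the default errors='raise' turns a missing label into KeyError
try:
    df.drop(columns=["Z"])
    raised = False
except KeyError:
    raised = True
assert raised
```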
REF: remove numeric arg from NDFrame._convert | diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 60488a8ef9715..36c713cab7123 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -954,8 +954,8 @@ def coerce_indexer_dtype(indexer, categories) -> np.ndarray:
def soft_convert_objects(
values: np.ndarray,
+ *,
datetime: bool = True,
- numeric: bool = True,
timedelta: bool = True,
period: bool = True,
copy: bool = True,
@@ -968,7 +968,6 @@ def soft_convert_objects(
----------
values : np.ndarray[object]
datetime : bool, default True
- numeric: bool, default True
timedelta : bool, default True
period : bool, default True
copy : bool, default True
@@ -978,16 +977,15 @@ def soft_convert_objects(
np.ndarray or ExtensionArray
"""
validate_bool_kwarg(datetime, "datetime")
- validate_bool_kwarg(numeric, "numeric")
validate_bool_kwarg(timedelta, "timedelta")
validate_bool_kwarg(copy, "copy")
- conversion_count = sum((datetime, numeric, timedelta))
+ conversion_count = sum((datetime, timedelta))
if conversion_count == 0:
- raise ValueError("At least one of datetime, numeric or timedelta must be True.")
+ raise ValueError("At least one of datetime or timedelta must be True.")
# Soft conversions
- if datetime or timedelta:
+ if datetime or timedelta or period:
# GH 20380, when datetime is beyond year 2262, hence outside
# bound of nanosecond-resolution 64-bit integers.
converted = lib.maybe_convert_objects(
@@ -999,13 +997,6 @@ def soft_convert_objects(
if converted is not values:
return converted
- if numeric and is_object_dtype(values.dtype):
- converted, _ = lib.maybe_convert_numeric(values, set(), coerce_numeric=True)
-
- # If all NaNs, then do not-alter
- values = converted if not isna(converted).all() else values
- values = values.copy() if copy else values
-
return values
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 2de83bb7a4468..038c889e4d5f7 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6318,8 +6318,8 @@ def __deepcopy__(self: NDFrameT, memo=None) -> NDFrameT:
@final
def _convert(
self: NDFrameT,
+ *,
datetime: bool_t = False,
- numeric: bool_t = False,
timedelta: bool_t = False,
) -> NDFrameT:
"""
@@ -6329,9 +6329,6 @@ def _convert(
----------
datetime : bool, default False
If True, convert to date where possible.
- numeric : bool, default False
- If True, attempt to convert to numbers (including strings), with
- unconvertible values becoming NaN.
timedelta : bool, default False
If True, convert to timedelta where possible.
@@ -6340,12 +6337,10 @@ def _convert(
converted : same as input object
"""
validate_bool_kwarg(datetime, "datetime")
- validate_bool_kwarg(numeric, "numeric")
validate_bool_kwarg(timedelta, "timedelta")
return self._constructor(
self._mgr.convert(
datetime=datetime,
- numeric=numeric,
timedelta=timedelta,
copy=True,
)
@@ -6390,11 +6385,8 @@ def infer_objects(self: NDFrameT) -> NDFrameT:
A int64
dtype: object
"""
- # numeric=False necessary to only soft convert;
- # python objects will still be converted to
- # native numpy numeric types
return self._constructor(
- self._mgr.convert(datetime=True, numeric=False, timedelta=True, copy=True)
+ self._mgr.convert(datetime=True, timedelta=True, copy=True)
).__finalize__(self, method="infer_objects")
@final
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index feca755fd43db..8ddab458e35a9 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -377,9 +377,9 @@ def astype(self: T, dtype, copy: bool = False, errors: str = "raise") -> T:
def convert(
self: T,
+ *,
copy: bool = True,
datetime: bool = True,
- numeric: bool = True,
timedelta: bool = True,
) -> T:
def _convert(arr):
@@ -389,7 +389,6 @@ def _convert(arr):
return soft_convert_objects(
arr,
datetime=datetime,
- numeric=numeric,
timedelta=timedelta,
copy=copy,
)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f1856fce83160..57a0fc81515c5 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -429,9 +429,7 @@ def _maybe_downcast(self, blocks: list[Block], downcast=None) -> list[Block]:
# but ATM it breaks too much existing code.
# split and convert the blocks
- return extend_blocks(
- [blk.convert(datetime=True, numeric=False) for blk in blocks]
- )
+ return extend_blocks([blk.convert(datetime=True) for blk in blocks])
if downcast is None:
return blocks
@@ -451,9 +449,9 @@ def _downcast_2d(self, dtype) -> list[Block]:
def convert(
self,
+ *,
copy: bool = True,
datetime: bool = True,
- numeric: bool = True,
timedelta: bool = True,
) -> list[Block]:
"""
@@ -570,7 +568,7 @@ def replace(
if not (self.is_object and value is None):
# if the user *explicitly* gave None, we keep None, otherwise
# may downcast to NaN
- blocks = blk.convert(numeric=False, copy=False)
+ blocks = blk.convert(copy=False)
else:
blocks = [blk]
return blocks
@@ -642,7 +640,7 @@ def _replace_regex(
replace_regex(new_values, rx, value, mask)
block = self.make_block(new_values)
- return block.convert(numeric=False, copy=False)
+ return block.convert(copy=False)
@final
def replace_list(
@@ -712,9 +710,7 @@ def replace_list(
)
if convert and blk.is_object and not all(x is None for x in dest_list):
# GH#44498 avoid unwanted cast-back
- result = extend_blocks(
- [b.convert(numeric=False, copy=True) for b in result]
- )
+ result = extend_blocks([b.convert(copy=True) for b in result])
new_rb.extend(result)
rb = new_rb
return rb
@@ -1969,9 +1965,9 @@ def reduce(self, func) -> list[Block]:
@maybe_split
def convert(
self,
+ *,
copy: bool = True,
datetime: bool = True,
- numeric: bool = True,
timedelta: bool = True,
) -> list[Block]:
"""
@@ -1987,7 +1983,6 @@ def convert(
res_values = soft_convert_objects(
values,
datetime=datetime,
- numeric=numeric,
timedelta=timedelta,
copy=copy,
)
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 20cc087adab23..306fea06963ec 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -443,16 +443,15 @@ def astype(self: T, dtype, copy: bool = False, errors: str = "raise") -> T:
def convert(
self: T,
+ *,
copy: bool = True,
datetime: bool = True,
- numeric: bool = True,
timedelta: bool = True,
) -> T:
return self.apply(
"convert",
copy=copy,
datetime=datetime,
- numeric=numeric,
timedelta=timedelta,
)
diff --git a/pandas/tests/frame/methods/test_convert.py b/pandas/tests/frame/methods/test_convert.py
index 118af9f532abe..c6c70210d1cc4 100644
--- a/pandas/tests/frame/methods/test_convert.py
+++ b/pandas/tests/frame/methods/test_convert.py
@@ -1,10 +1,7 @@
import numpy as np
import pytest
-from pandas import (
- DataFrame,
- Series,
-)
+from pandas import DataFrame
import pandas._testing as tm
@@ -21,17 +18,11 @@ def test_convert_objects(self, float_string_frame):
float_string_frame["I"] = "1"
# add in some items that will be nan
- length = len(float_string_frame)
float_string_frame["J"] = "1."
float_string_frame["K"] = "1"
float_string_frame.loc[float_string_frame.index[0:5], ["J", "K"]] = "garbled"
- converted = float_string_frame._convert(datetime=True, numeric=True)
- assert converted["H"].dtype == "float64"
- assert converted["I"].dtype == "int64"
- assert converted["J"].dtype == "float64"
- assert converted["K"].dtype == "float64"
- assert len(converted["J"].dropna()) == length - 5
- assert len(converted["K"].dropna()) == length - 5
+ converted = float_string_frame._convert(datetime=True)
+ tm.assert_frame_equal(converted, float_string_frame)
# via astype
converted = float_string_frame.copy()
@@ -45,14 +36,6 @@ def test_convert_objects(self, float_string_frame):
with pytest.raises(ValueError, match="invalid literal"):
converted["H"].astype("int32")
- def test_convert_mixed_single_column(self):
- # GH#4119, not converting a mixed type (e.g.floats and object)
- # mixed in a single column
- df = DataFrame({"s": Series([1, "na", 3, 4])})
- result = df._convert(datetime=True, numeric=True)
- expected = DataFrame({"s": Series([1, np.nan, 3, 4])})
- tm.assert_frame_equal(result, expected)
-
def test_convert_objects_no_conversion(self):
mixed1 = DataFrame({"a": [1, 2, 3], "b": [4.0, 5, 6], "c": ["x", "y", "z"]})
mixed2 = mixed1._convert(datetime=True)
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index b3e59da4b0130..4d57b3c0adf0d 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -557,14 +557,6 @@ def test_astype_assignment(self):
)
tm.assert_frame_equal(df, expected)
- df = df_orig.copy()
- with tm.assert_produces_warning(FutureWarning, match=msg):
- df.iloc[:, 0:2] = df.iloc[:, 0:2]._convert(datetime=True, numeric=True)
- expected = DataFrame(
- [[1, 2, "3", ".4", 5, 6.0, "foo"]], columns=list("ABCDEFG")
- )
- tm.assert_frame_equal(df, expected)
-
# GH5702 (loc)
df = df_orig.copy()
with tm.assert_produces_warning(FutureWarning, match=msg):
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index ecf247efd74bf..dc7960cde4a61 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -599,9 +599,9 @@ def _compare(old_mgr, new_mgr):
mgr.iset(0, np.array(["1"] * N, dtype=np.object_))
mgr.iset(1, np.array(["2."] * N, dtype=np.object_))
mgr.iset(2, np.array(["foo."] * N, dtype=np.object_))
- new_mgr = mgr.convert(numeric=True)
- assert new_mgr.iget(0).dtype == np.int64
- assert new_mgr.iget(1).dtype == np.float64
+ new_mgr = mgr.convert()
+ assert new_mgr.iget(0).dtype == np.object_
+ assert new_mgr.iget(1).dtype == np.object_
assert new_mgr.iget(2).dtype == np.object_
assert new_mgr.iget(3).dtype == np.int64
assert new_mgr.iget(4).dtype == np.float64
@@ -612,9 +612,9 @@ def _compare(old_mgr, new_mgr):
mgr.iset(0, np.array(["1"] * N, dtype=np.object_))
mgr.iset(1, np.array(["2."] * N, dtype=np.object_))
mgr.iset(2, np.array(["foo."] * N, dtype=np.object_))
- new_mgr = mgr.convert(numeric=True)
- assert new_mgr.iget(0).dtype == np.int64
- assert new_mgr.iget(1).dtype == np.float64
+ new_mgr = mgr.convert()
+ assert new_mgr.iget(0).dtype == np.object_
+ assert new_mgr.iget(1).dtype == np.object_
assert new_mgr.iget(2).dtype == np.object_
assert new_mgr.iget(3).dtype == np.int32
assert new_mgr.iget(4).dtype == np.bool_
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index daa2dffeaa143..b1fcdd8df01ad 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -627,7 +627,7 @@ def try_remove_ws(x):
]
dfnew = df.applymap(try_remove_ws).replace(old, new)
gtnew = ground_truth.applymap(try_remove_ws)
- converted = dfnew._convert(datetime=True, numeric=True)
+ converted = dfnew._convert(datetime=True)
date_cols = ["Closing Date", "Updated Date"]
converted[date_cols] = converted[date_cols].apply(to_datetime)
tm.assert_frame_equal(converted, gtnew)
diff --git a/pandas/tests/series/methods/test_convert.py b/pandas/tests/series/methods/test_convert.py
index 4832780e6d0d3..f979a28154d4e 100644
--- a/pandas/tests/series/methods/test_convert.py
+++ b/pandas/tests/series/methods/test_convert.py
@@ -1,6 +1,5 @@
from datetime import datetime
-import numpy as np
import pytest
from pandas import (
@@ -19,36 +18,27 @@ def test_convert(self):
# Test coercion with mixed types
ser = Series(["a", "3.1415", dt, td])
- results = ser._convert(numeric=True)
- expected = Series([np.nan, 3.1415, np.nan, np.nan])
- tm.assert_series_equal(results, expected)
-
# Test standard conversion returns original
results = ser._convert(datetime=True)
tm.assert_series_equal(results, ser)
- results = ser._convert(numeric=True)
- expected = Series([np.nan, 3.1415, np.nan, np.nan])
- tm.assert_series_equal(results, expected)
+
results = ser._convert(timedelta=True)
tm.assert_series_equal(results, ser)
def test_convert_numeric_strings_with_other_true_args(self):
# test pass-through and non-conversion when other types selected
ser = Series(["1.0", "2.0", "3.0"])
- results = ser._convert(datetime=True, numeric=True, timedelta=True)
- expected = Series([1.0, 2.0, 3.0])
- tm.assert_series_equal(results, expected)
- results = ser._convert(True, False, True)
+ results = ser._convert(datetime=True, timedelta=True)
tm.assert_series_equal(results, ser)
def test_convert_datetime_objects(self):
ser = Series(
[datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)], dtype="O"
)
- results = ser._convert(datetime=True, numeric=True, timedelta=True)
+ results = ser._convert(datetime=True, timedelta=True)
expected = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)])
tm.assert_series_equal(results, expected)
- results = ser._convert(datetime=False, numeric=True, timedelta=True)
+ results = ser._convert(datetime=False, timedelta=True)
tm.assert_series_equal(results, ser)
def test_convert_datetime64(self):
@@ -74,46 +64,12 @@ def test_convert_datetime64(self):
def test_convert_timedeltas(self):
td = datetime(2001, 1, 1, 0, 0) - datetime(2000, 1, 1, 0, 0)
ser = Series([td, td], dtype="O")
- results = ser._convert(datetime=True, numeric=True, timedelta=True)
+ results = ser._convert(datetime=True, timedelta=True)
expected = Series([td, td])
tm.assert_series_equal(results, expected)
- results = ser._convert(True, True, False)
+ results = ser._convert(datetime=True, timedelta=False)
tm.assert_series_equal(results, ser)
- def test_convert_numeric_strings(self):
- ser = Series([1.0, 2, 3], index=["a", "b", "c"])
- result = ser._convert(numeric=True)
- tm.assert_series_equal(result, ser)
-
- # force numeric conversion
- res = ser.copy().astype("O")
- res["a"] = "1"
- result = res._convert(numeric=True)
- tm.assert_series_equal(result, ser)
-
- res = ser.copy().astype("O")
- res["a"] = "1."
- result = res._convert(numeric=True)
- tm.assert_series_equal(result, ser)
-
- res = ser.copy().astype("O")
- res["a"] = "garbled"
- result = res._convert(numeric=True)
- expected = ser.copy()
- expected["a"] = np.nan
- tm.assert_series_equal(result, expected)
-
- def test_convert_mixed_type_noop(self):
- # GH 4119, not converting a mixed type (e.g.floats and object)
- ser = Series([1, "na", 3, 4])
- result = ser._convert(datetime=True, numeric=True)
- expected = Series([1, np.nan, 3, 4])
- tm.assert_series_equal(result, expected)
-
- ser = Series([1, "", 3, 4])
- result = ser._convert(datetime=True, numeric=True)
- tm.assert_series_equal(result, expected)
-
def test_convert_preserve_non_object(self):
# preserve if non-object
ser = Series([1], dtype="float32")
@@ -122,18 +78,17 @@ def test_convert_preserve_non_object(self):
def test_convert_no_arg_error(self):
ser = Series(["1.0", "2"])
- msg = r"At least one of datetime, numeric or timedelta must be True\."
+ msg = r"At least one of datetime or timedelta must be True\."
with pytest.raises(ValueError, match=msg):
ser._convert()
def test_convert_preserve_bool(self):
ser = Series([1, True, 3, 5], dtype=object)
- res = ser._convert(datetime=True, numeric=True)
- expected = Series([1, 1, 3, 5], dtype="i8")
- tm.assert_series_equal(res, expected)
+ res = ser._convert(datetime=True)
+ tm.assert_series_equal(res, ser)
def test_convert_preserve_all_bool(self):
ser = Series([False, True, False, False], dtype=object)
- res = ser._convert(datetime=True, numeric=True)
+ res = ser._convert(datetime=True)
expected = Series([False, True, False, False], dtype=bool)
tm.assert_series_equal(res, expected)
| Moving towards getting rid of _convert and soft_convert_objects and just using infer_objects and maybe_infer_objects | https://api.github.com/repos/pandas-dev/pandas/pulls/50011 | 2022-12-02T15:23:19Z | 2022-12-02T18:39:19Z | 2022-12-02T18:39:18Z | 2022-12-02T19:17:41Z |
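As a hedged illustration of the behavior change described above: with the ``numeric`` path removed, string-to-number coercion is an explicit ``pd.to_numeric`` call rather than a side effect of soft conversion (a sketch, not taken from the PR's own tests):

```python
import pandas as pd

ser = pd.Series(["1.0", "2", "garbled"], dtype=object)

# soft conversion (infer_objects) leaves all-string object data alone
assert ser.infer_objects().dtype == object

# numeric coercion is now always opt-in and explicit
num = pd.to_numeric(ser, errors="coerce")
assert num.dtype == "float64"
assert num.isna().tolist() == [False, False, True]
```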
ENH/TST: expand copy-on-write to assign() method | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0144aefedaa5f..5358fdb0b4dbd 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4887,7 +4887,7 @@ def assign(self, **kwargs) -> DataFrame:
Portland 17.0 62.6 290.15
Berkeley 25.0 77.0 298.15
"""
- data = self.copy()
+ data = self.copy(deep=None)
for k, v in kwargs.items():
data[k] = com.apply_if_callable(v, data)
diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index bf65f153b10dd..6707f1411cbc7 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -280,3 +280,21 @@ def test_head_tail(method, using_copy_on_write):
# without CoW enabled, head and tail return views. Mutating df2 also mutates df.
df2.iloc[0, 0] = 1
tm.assert_frame_equal(df, df_orig)
+
+
+def test_assign(using_copy_on_write):
+ df = DataFrame({"a": [1, 2, 3]})
+ df_orig = df.copy()
+ df2 = df.assign()
+ df2._mgr._verify_integrity()
+
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+ else:
+ assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+
+ # modify df2 to trigger CoW for that block
+ df2.iloc[0, 0] = 0
+ if using_copy_on_write:
+ assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+ tm.assert_frame_equal(df, df_orig)
| - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Added copy-on-write to `df.assign()`.
Progress towards #49473 via [PyData pandas sprint](https://github.com/noatamir/pydata-global-sprints/issues/11). | https://api.github.com/repos/pandas-dev/pandas/pulls/50010 | 2022-12-02T15:16:44Z | 2022-12-02T17:36:36Z | 2022-12-02T17:36:36Z | 2022-12-02T17:36:37Z |
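For context, ``assign`` has always returned a new object; the change above only affects whether the underlying copy is deferred under copy-on-write. A minimal sketch of the user-visible contract (runs the same with or without CoW enabled):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})
df2 = df.assign(b=df["a"] * 2)

# the caller is never mutated; copy-on-write only defers the copy
assert "b" not in df.columns
assert df2["b"].tolist() == [2, 4, 6]

# writing into the result must not leak back into the original
df2.iloc[0, 0] = 0
assert df.loc[0, "a"] == 1
```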
TST/DEV: remove geopandas downstream test + remove from environment.yml | diff --git a/environment.yml b/environment.yml
index 1a02b522fab06..87c5f5d031fcf 100644
--- a/environment.yml
+++ b/environment.yml
@@ -64,7 +64,6 @@ dependencies:
- cftime
- dask
- ipython
- - geopandas-base
- seaborn
- scikit-learn
- statsmodels
diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py
index 2f603f3700413..ab001d0b5a881 100644
--- a/pandas/tests/test_downstream.py
+++ b/pandas/tests/test_downstream.py
@@ -18,7 +18,7 @@
)
import pandas._testing as tm
-# geopandas, xarray, fsspec, fastparquet all produce these
+# xarray, fsspec, fastparquet all produce these
pytestmark = pytest.mark.filterwarnings(
"ignore:distutils Version classes are deprecated.*:DeprecationWarning"
)
@@ -223,15 +223,6 @@ def test_pandas_datareader():
pandas_datareader.DataReader("F", "quandl", "2017-01-01", "2017-02-01")
-def test_geopandas():
-
- geopandas = import_module("geopandas")
- gdf = geopandas.GeoDataFrame(
- {"col": [1, 2, 3], "geometry": geopandas.points_from_xy([1, 2, 3], [1, 2, 3])}
- )
- assert gdf[["col", "geometry"]].geometry.x.equals(Series([1.0, 2.0, 3.0]))
-
-
# Cython import warning
@pytest.mark.filterwarnings("ignore:can't resolve:ImportWarning")
def test_pyarrow(df):
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 19ed830eca07e..debbdb635901c 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -51,7 +51,6 @@ botocore
cftime
dask
ipython
-geopandas
seaborn
scikit-learn
statsmodels
| See https://github.com/pandas-dev/pandas/pull/49994 for context
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit)
| https://api.github.com/repos/pandas-dev/pandas/pulls/50008 | 2022-12-02T13:26:37Z | 2022-12-02T15:30:06Z | 2022-12-02T15:30:06Z | 2022-12-02T15:31:59Z |
DEV: update gitpod docker | diff --git a/gitpod/Dockerfile b/gitpod/Dockerfile
index 299267a11fdd1..7581abe822816 100644
--- a/gitpod/Dockerfile
+++ b/gitpod/Dockerfile
@@ -13,7 +13,7 @@
# are visible in the host and container.
# The docker image is retrieved from the pandas dockerhub repository
#
-# docker run --rm -it -v $(pwd):/home/pandas pandas/pandas-dev:<image-tag>
+# docker run --rm -it -v $(pwd):/home/pandas pythonpandas/pandas-dev:<image-tag>
#
# By default the container will activate the conda environment pandas-dev
# which contains all the dependencies needed for pandas development
@@ -86,9 +86,9 @@ RUN chmod a+rx /usr/local/bin/workspace_config && \
# the container to create a conda environment from it
COPY environment.yml /tmp/environment.yml
-RUN mamba env create -f /tmp/environment.yml
# ---- Create conda environment ----
-RUN conda activate $CONDA_ENV && \
+RUN mamba env create -f /tmp/environment.yml && \
+ conda activate $CONDA_ENV && \
mamba install ccache -y && \
# needed for docs rendering later on
python -m pip install --no-cache-dir sphinx-autobuild && \
| Follow-up on https://github.com/pandas-dev/pandas/pull/48107 | https://api.github.com/repos/pandas-dev/pandas/pulls/50007 | 2022-12-02T11:50:14Z | 2022-12-02T15:24:10Z | 2022-12-02T15:24:10Z | 2022-12-02T15:25:56Z |
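The Dockerfile fix above works because each ``RUN`` instruction executes in a fresh shell, so the effect of ``conda activate`` in one layer does not carry into the next; chaining the commands keeps them in a single layer. A sketch of the pattern (paths and environment name illustrative):

```dockerfile
# every RUN gets its own shell, so state from `conda activate`
# would be lost between layers; chain the steps instead
RUN mamba env create -f /tmp/environment.yml && \
    conda activate pandas-dev && \
    mamba install ccache -y
```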
DOC: Update Benchmark Link #49871 | diff --git a/doc/source/development/maintaining.rst b/doc/source/development/maintaining.rst
index 9e32e43f30dfc..6e9f622e18eea 100644
--- a/doc/source/development/maintaining.rst
+++ b/doc/source/development/maintaining.rst
@@ -318,7 +318,7 @@ Benchmark machine
-----------------
The team currently owns dedicated hardware for hosting a website for pandas' ASV performance benchmark. The results
-are published to http://pandas.pydata.org/speed/pandas/
+are published to https://asv-runner.github.io/asv-collection/pandas/
Configuration
`````````````
|
- [ ] Updated the broken benchmark-machine link in the `doc/source/development/maintaining.rst` file, fixing bug #49871.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50006 | 2022-12-02T11:45:13Z | 2022-12-02T13:14:08Z | 2022-12-02T13:14:08Z | 2022-12-02T13:14:16Z |
TYP #37715: Fix mypy errors in accessor.py | diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 71a50c69bfee1..f8dad3b252223 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -562,6 +562,8 @@ def cat(
if sep is None:
sep = ""
+ data: Series | np.ndarray
+
if isinstance(self._orig, ABCIndex):
data = Series(self._orig, index=self._orig, dtype=self._orig.dtype)
else: # Series
@@ -569,9 +571,7 @@ def cat(
# concatenate Series/Index with itself if no "others"
if others is None:
- # error: Incompatible types in assignment (expression has type
- # "ndarray", variable has type "Series")
- data = ensure_object(data) # type: ignore[assignment]
+ data = ensure_object(data)
na_mask = isna(data)
if na_rep is None and na_mask.any():
return sep.join(data[~na_mask])
| - [ ] xref #37715
- [ ] Tests added and passed if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50005 | 2022-12-02T07:15:08Z | 2023-01-16T19:41:58Z | null | 2023-01-16T19:41:58Z |
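The fix above uses a standard mypy pattern: pre-declare the local with the union of types it will hold, instead of silencing the reassignment with ``type: ignore``. A generic sketch of the pattern (names here are illustrative, not from pandas):

```python
import numpy as np

def normalize(values: "list[int]") -> np.ndarray:
    # declaring the union up front lets mypy accept both assignments
    data: "list[int] | np.ndarray"
    data = values
    data = np.asarray(data, dtype=object)
    return data

out = normalize([1, 2, 3])
assert isinstance(out, np.ndarray)
assert out.tolist() == [1, 2, 3]
```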
DOC: remove okwarning from scale guide | diff --git a/doc/source/user_guide/scale.rst b/doc/source/user_guide/scale.rst
index 129f43dd36930..a974af4ffe1c5 100644
--- a/doc/source/user_guide/scale.rst
+++ b/doc/source/user_guide/scale.rst
@@ -257,7 +257,6 @@ We'll import ``dask.dataframe`` and notice that the API feels similar to pandas.
We can use Dask's ``read_parquet`` function, but provide a globstring of files to read in.
.. ipython:: python
- :okwarning:
import dask.dataframe as dd
@@ -287,7 +286,6 @@ column names and dtypes. That's because Dask hasn't actually read the data yet.
Rather than executing immediately, doing operations build up a **task graph**.
.. ipython:: python
- :okwarning:
ddf
ddf["name"]
@@ -346,7 +344,6 @@ known automatically. In this case, since we created the parquet files manually,
we need to supply the divisions manually.
.. ipython:: python
- :okwarning:
N = 12
starts = [f"20{i:>02d}-01-01" for i in range(N)]
@@ -359,7 +356,6 @@ we need to supply the divisions manually.
Now we can do things like fast random access with ``.loc``.
.. ipython:: python
- :okwarning:
ddf.loc["2002-01-01 12:01":"2002-01-01 12:05"].compute()
@@ -373,7 +369,6 @@ results will fit in memory, so we can safely call ``compute`` without running
out of memory. At that point it's just a regular pandas object.
.. ipython:: python
- :okwarning:
@savefig dask_resample.png
ddf[["x", "y"]].resample("1D").mean().cumsum().compute().plot()
| - [ ] closes #29960
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Updated `doc/source/user_guide/scale.rst` to remove the now unneeded `:okwarning:` directives.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50004 | 2022-12-02T01:06:49Z | 2022-12-03T18:48:23Z | 2022-12-03T18:48:23Z | 2022-12-03T18:48:30Z |
Adding to the function docs that groupby.transform() function parameter can be a string | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 497e0ef724373..c73a2d40a33e1 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -399,7 +399,7 @@ class providing the base-class of operations.
Parameters
----------
-f : function
+f : function, str
Function to apply to each group. See the Notes section below for requirements.
Can also accept a Numba JIT function with
Docs change: document in the function docs that the `f` parameter of `groupby.transform()` can also be a string
- [ ] closes #49961
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50002 | 2022-12-01T23:29:10Z | 2022-12-02T03:25:21Z | 2022-12-02T03:25:21Z | 2022-12-02T05:31:52Z |
REF: avoid _with_infer constructor | diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index 02ee13d60427e..43020ae471f10 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -274,7 +274,7 @@ def box_expected(expected, box_cls, transpose: bool = True):
else:
expected = pd.array(expected, copy=False)
elif box_cls is Index:
- expected = Index._with_infer(expected)
+ expected = Index(expected)
elif box_cls is Series:
expected = Series(expected)
elif box_cls is DataFrame:
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index c94b1068e5e65..cd719a5256ea3 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -894,7 +894,7 @@ def value_counts(
# For backwards compatibility, we let Index do its normal type
# inference, _except_ for if if infers from object to bool.
- idx = Index._with_infer(keys)
+ idx = Index(keys)
if idx.dtype == bool and keys.dtype == object:
idx = idx.astype(object)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 01a1ebd459616..0b55416d2bd7e 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2678,6 +2678,7 @@ def fillna(self, value=None, downcast=None):
if downcast is None:
# no need to care metadata other than name
# because it can't have freq if it has NaTs
+ # _with_infer needed for test_fillna_categorical
return Index._with_infer(result, name=self.name)
raise NotImplementedError(
f"{type(self).__name__}.fillna does not support 'downcast' "
@@ -4230,10 +4231,10 @@ def _reindex_non_unique(
new_indexer = np.arange(len(self.take(indexer)), dtype=np.intp)
new_indexer[~check] = -1
- if isinstance(self, ABCMultiIndex):
- new_index = type(self).from_tuples(new_labels, names=self.names)
+ if not isinstance(self, ABCMultiIndex):
+ new_index = Index(new_labels, name=self.name)
else:
- new_index = Index._with_infer(new_labels, name=self.name)
+ new_index = type(self).from_tuples(new_labels, names=self.names)
return new_index, indexer, new_indexer
# --------------------------------------------------------------------
@@ -6477,7 +6478,7 @@ def insert(self, loc: int, item) -> Index:
if self._typ == "numericindex":
# Use self._constructor instead of Index to retain NumericIndex GH#43921
# TODO(2.0) can use Index instead of self._constructor
- return self._constructor._with_infer(new_values, name=self.name)
+ return self._constructor(new_values, name=self.name)
else:
return Index._with_infer(new_values, name=self.name)
@@ -6850,7 +6851,7 @@ def ensure_index_from_sequences(sequences, names=None) -> Index:
if len(sequences) == 1:
if names is not None:
names = names[0]
- return Index._with_infer(sequences[0], name=names)
+ return Index(sequences[0], name=names)
else:
return MultiIndex.from_arrays(sequences, names=names)
@@ -6893,7 +6894,7 @@ def ensure_index(index_like: Axes, copy: bool = False) -> Index:
if isinstance(index_like, ABCSeries):
name = index_like.name
- return Index._with_infer(index_like, name=name, copy=copy)
+ return Index(index_like, name=name, copy=copy)
if is_iterator(index_like):
index_like = list(index_like)
@@ -6909,9 +6910,9 @@ def ensure_index(index_like: Axes, copy: bool = False) -> Index:
return MultiIndex.from_arrays(index_like)
else:
- return Index._with_infer(index_like, copy=copy, tupleize_cols=False)
+ return Index(index_like, copy=copy, tupleize_cols=False)
else:
- return Index._with_infer(index_like, copy=copy)
+ return Index(index_like, copy=copy)
def ensure_has_len(seq):
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index f0b0ec23dba1a..012a92793acf9 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2112,7 +2112,7 @@ def append(self, other):
# setting names to None automatically
return MultiIndex.from_tuples(new_tuples)
except (TypeError, IndexError):
- return Index._with_infer(new_tuples)
+ return Index(new_tuples)
def argsort(self, *args, **kwargs) -> npt.NDArray[np.intp]:
if len(args) == 0 and len(kwargs) == 0:
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 71a50c69bfee1..8cd4cb976503d 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -319,7 +319,7 @@ def cons_row(x):
out = out.get_level_values(0)
return out
else:
- return Index._with_infer(result, name=name)
+ return Index(result, name=name)
else:
index = self._orig.index
# This is a mess.
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index 5a5e46e0227aa..e0b18047aa0ec 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -344,9 +344,7 @@ def _hash_ndarray(
)
codes, categories = factorize(vals, sort=False)
- cat = Categorical(
- codes, Index._with_infer(categories), ordered=False, fastpath=True
- )
+ cat = Categorical(codes, Index(categories), ordered=False, fastpath=True)
return _hash_categorical(cat, encoding, hash_key)
try:
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index 529dd6baa70c0..f2af85c2e388d 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -1147,6 +1147,9 @@ def test_numarr_with_dtype_add_nan(self, dtype, box_with_array):
ser = tm.box_expected(ser, box)
expected = tm.box_expected(expected, box)
+ if box is Index and dtype is object:
+ # TODO: avoid this; match behavior with Series
+ expected = expected.astype(np.float64)
result = np.nan + ser
tm.assert_equal(result, expected)
@@ -1162,6 +1165,9 @@ def test_numarr_with_dtype_add_int(self, dtype, box_with_array):
ser = tm.box_expected(ser, box)
expected = tm.box_expected(expected, box)
+ if box is Index and dtype is object:
+ # TODO: avoid this; match behavior with Series
+ expected = expected.astype(np.int64)
result = 1 + ser
tm.assert_equal(result, expected)
diff --git a/pandas/tests/arrays/integer/test_dtypes.py b/pandas/tests/arrays/integer/test_dtypes.py
index 1566476c32989..f34953876f5f4 100644
--- a/pandas/tests/arrays/integer/test_dtypes.py
+++ b/pandas/tests/arrays/integer/test_dtypes.py
@@ -89,7 +89,7 @@ def test_astype_index(all_data, dropna):
other = all_data
dtype = all_data.dtype
- idx = pd.Index._with_infer(np.array(other))
+ idx = pd.Index(np.array(other))
assert isinstance(idx, ABCIndex)
result = idx.astype(dtype)
diff --git a/pandas/tests/extension/base/groupby.py b/pandas/tests/extension/base/groupby.py
index 1f46442ee13b0..339c6560d6212 100644
--- a/pandas/tests/extension/base/groupby.py
+++ b/pandas/tests/extension/base/groupby.py
@@ -33,7 +33,7 @@ def test_groupby_extension_agg(self, as_index, data_for_grouping):
_, uniques = pd.factorize(data_for_grouping, sort=True)
if as_index:
- index = pd.Index._with_infer(uniques, name="B")
+ index = pd.Index(uniques, name="B")
expected = pd.Series([3.0, 1.0, 4.0], index=index, name="A")
self.assert_series_equal(result, expected)
else:
@@ -61,7 +61,7 @@ def test_groupby_extension_no_sort(self, data_for_grouping):
result = df.groupby("B", sort=False).A.mean()
_, index = pd.factorize(data_for_grouping, sort=False)
- index = pd.Index._with_infer(index, name="B")
+ index = pd.Index(index, name="B")
expected = pd.Series([1.0, 3.0, 4.0], index=index, name="A")
self.assert_series_equal(result, expected)
diff --git a/pandas/tests/extension/test_string.py b/pandas/tests/extension/test_string.py
index ecc69113882c5..de7967a8578b5 100644
--- a/pandas/tests/extension/test_string.py
+++ b/pandas/tests/extension/test_string.py
@@ -391,7 +391,7 @@ def test_groupby_extension_agg(self, as_index, data_for_grouping):
_, uniques = pd.factorize(data_for_grouping, sort=True)
if as_index:
- index = pd.Index._with_infer(uniques, name="B")
+ index = pd.Index(uniques, name="B")
expected = pd.Series([3.0, 1.0, 4.0], index=index, name="A")
self.assert_series_equal(result, expected)
else:
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 535c2d3e7e0f3..530934df72606 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -20,7 +20,6 @@
DataFrame,
Series,
)
-from pandas.core.indexes.api import ensure_index
from pandas.tests.io.test_compression import _compression_to_extension
from pandas.io.parsers import read_csv
@@ -1144,7 +1143,7 @@ def _convert_categorical(from_frame: DataFrame) -> DataFrame:
if is_categorical_dtype(ser.dtype):
cat = ser._values.remove_unused_categories()
if cat.categories.dtype == object:
- categories = ensure_index(cat.categories._values)
+ categories = pd.Index._with_infer(cat.categories._values)
cat = cat.set_categories(categories)
from_frame[col] = cat
return from_frame
| Trying to get down to Just One constructor. | https://api.github.com/repos/pandas-dev/pandas/pulls/50001 | 2022-12-01T23:00:31Z | 2022-12-03T05:36:16Z | 2022-12-03T05:36:16Z | 2022-12-03T21:42:09Z |
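As background for the swap above (not part of the PR itself): the public `Index` constructor already infers a concrete dtype from plain Python data, which is what makes the private `_with_infer` call sites replaceable. A minimal sketch using only the public API:

```python
import numpy as np
import pandas as pd

# With only one public constructor, plain Index(...) is responsible for
# the dtype inference that Index._with_infer used to provide internally.
idx_int = pd.Index([1, 2, 3])
idx_float = pd.Index([1.5, 2.5])

print(idx_int.dtype)    # int64
print(idx_float.dtype)  # float64
```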
API: dont do inference on object-dtype arithmetic results | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 7838ef8df4164..a3ba0557bc31c 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -367,6 +367,7 @@ Other API changes
- Passing a sequence containing a type that cannot be converted to :class:`Timedelta` to :func:`to_timedelta` or to the :class:`Series` or :class:`DataFrame` constructor with ``dtype="timedelta64[ns]"`` or to :class:`TimedeltaIndex` now raises ``TypeError`` instead of ``ValueError`` (:issue:`49525`)
- Changed behavior of :class:`Index` constructor with sequence containing at least one ``NaT`` and everything else either ``None`` or ``NaN`` to infer ``datetime64[ns]`` dtype instead of ``object``, matching :class:`Series` behavior (:issue:`49340`)
- :func:`read_stata` with parameter ``index_col`` set to ``None`` (the default) will now set the index on the returned :class:`DataFrame` to a :class:`RangeIndex` instead of a :class:`Int64Index` (:issue:`49745`)
+- Changed behavior of :class:`Index`, :class:`Series`, and :class:`DataFrame` arithmetic methods when working with object-dtypes, the results no longer do type inference on the result of the array operations, use ``result.infer_objects()`` to do type inference on the result (:issue:`49999`)
- Changed behavior of :class:`Index` constructor with an object-dtype ``numpy.ndarray`` containing all-``bool`` values or all-complex values, this will now retain object dtype, consistent with the :class:`Series` behavior (:issue:`49594`)
- Changed behavior of :meth:`DataFrame.shift` with ``axis=1``, an integer ``fill_value``, and homogeneous datetime-like dtype, this now fills new columns with integer dtypes instead of casting to datetimelike (:issue:`49842`)
- Files are now closed when encountering an exception in :func:`read_json` (:issue:`49921`)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index dc0359426f07c..7ee9d8ff91b6c 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -6615,10 +6615,10 @@ def _logical_method(self, other, op):
def _construct_result(self, result, name):
if isinstance(result, tuple):
return (
- Index._with_infer(result[0], name=name),
- Index._with_infer(result[1], name=name),
+ Index(result[0], name=name, dtype=result[0].dtype),
+ Index(result[1], name=name, dtype=result[1].dtype),
)
- return Index._with_infer(result, name=name)
+ return Index(result, name=name, dtype=result.dtype)
def _arith_method(self, other, op):
if (
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index bfedaca093a8e..e514bdcac5265 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -230,18 +230,27 @@ def align_method_FRAME(
def to_series(right):
msg = "Unable to coerce to Series, length must be {req_len}: given {given_len}"
+
+ # pass dtype to avoid doing inference, which would break consistency
+ # with Index/Series ops
+ dtype = None
+ if getattr(right, "dtype", None) == object:
+ # can't pass right.dtype unconditionally as that would break on e.g.
+ # datetime64[h] ndarray
+ dtype = object
+
if axis is not None and left._get_axis_name(axis) == "index":
if len(left.index) != len(right):
raise ValueError(
msg.format(req_len=len(left.index), given_len=len(right))
)
- right = left._constructor_sliced(right, index=left.index)
+ right = left._constructor_sliced(right, index=left.index, dtype=dtype)
else:
if len(left.columns) != len(right):
raise ValueError(
msg.format(req_len=len(left.columns), given_len=len(right))
)
- right = left._constructor_sliced(right, index=left.columns)
+ right = left._constructor_sliced(right, index=left.columns, dtype=dtype)
return right
if isinstance(right, np.ndarray):
@@ -250,13 +259,25 @@ def to_series(right):
right = to_series(right)
elif right.ndim == 2:
+ # We need to pass dtype=right.dtype to retain object dtype
+ # otherwise we lose consistency with Index and array ops
+ dtype = None
+ if getattr(right, "dtype", None) == object:
+ # can't pass right.dtype unconditionally as that would break on e.g.
+ # datetime64[h] ndarray
+ dtype = object
+
if right.shape == left.shape:
- right = left._constructor(right, index=left.index, columns=left.columns)
+ right = left._constructor(
+ right, index=left.index, columns=left.columns, dtype=dtype
+ )
elif right.shape[0] == left.shape[0] and right.shape[1] == 1:
# Broadcast across columns
right = np.broadcast_to(right, left.shape)
- right = left._constructor(right, index=left.index, columns=left.columns)
+ right = left._constructor(
+ right, index=left.index, columns=left.columns, dtype=dtype
+ )
elif right.shape[1] == left.shape[1] and right.shape[0] == 1:
# Broadcast along rows
@@ -406,7 +427,10 @@ def _maybe_align_series_as_frame(frame: DataFrame, series: Series, axis: AxisInt
rvalues = rvalues.reshape(1, -1)
rvalues = np.broadcast_to(rvalues, frame.shape)
- return type(frame)(rvalues, index=frame.index, columns=frame.columns)
+ # pass dtype to avoid doing inference
+ return type(frame)(
+ rvalues, index=frame.index, columns=frame.columns, dtype=rvalues.dtype
+ )
def flex_arith_method_FRAME(op):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 1e5f565934b50..bf5a530a28b28 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2995,9 +2995,10 @@ def _construct_result(
assert isinstance(res2, Series)
return (res1, res2)
- # We do not pass dtype to ensure that the Series constructor
- # does inference in the case where `result` has object-dtype.
- out = self._constructor(result, index=self.index)
+ # TODO: result should always be ArrayLike, but this fails for some
+ # JSONArray tests
+ dtype = getattr(result, "dtype", None)
+ out = self._constructor(result, index=self.index, dtype=dtype)
out = out.__finalize__(self)
# Set the result's name after __finalize__ is called because __finalize__
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index f2af85c2e388d..529dd6baa70c0 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -1147,9 +1147,6 @@ def test_numarr_with_dtype_add_nan(self, dtype, box_with_array):
ser = tm.box_expected(ser, box)
expected = tm.box_expected(expected, box)
- if box is Index and dtype is object:
- # TODO: avoid this; match behavior with Series
- expected = expected.astype(np.float64)
result = np.nan + ser
tm.assert_equal(result, expected)
@@ -1165,9 +1162,6 @@ def test_numarr_with_dtype_add_int(self, dtype, box_with_array):
ser = tm.box_expected(ser, box)
expected = tm.box_expected(expected, box)
- if box is Index and dtype is object:
- # TODO: avoid this; match behavior with Series
- expected = expected.astype(np.int64)
result = 1 + ser
tm.assert_equal(result, expected)
diff --git a/pandas/tests/arithmetic/test_object.py b/pandas/tests/arithmetic/test_object.py
index e107ff6b65c0f..cba2b9be255fb 100644
--- a/pandas/tests/arithmetic/test_object.py
+++ b/pandas/tests/arithmetic/test_object.py
@@ -187,7 +187,8 @@ def test_series_with_dtype_radd_timedelta(self, dtype):
dtype=dtype,
)
expected = Series(
- [pd.Timedelta("4 days"), pd.Timedelta("5 days"), pd.Timedelta("6 days")]
+ [pd.Timedelta("4 days"), pd.Timedelta("5 days"), pd.Timedelta("6 days")],
+ dtype=dtype,
)
result = pd.Timedelta("3 days") + ser
@@ -227,7 +228,9 @@ def test_mixed_timezone_series_ops_object(self):
name="xxx",
)
assert ser2.dtype == object
- exp = Series([pd.Timedelta("2 days"), pd.Timedelta("4 days")], name="xxx")
+ exp = Series(
+ [pd.Timedelta("2 days"), pd.Timedelta("4 days")], name="xxx", dtype=object
+ )
tm.assert_series_equal(ser2 - ser, exp)
tm.assert_series_equal(ser - ser2, -exp)
@@ -238,7 +241,11 @@ def test_mixed_timezone_series_ops_object(self):
)
assert ser.dtype == object
- exp = Series([pd.Timedelta("01:30:00"), pd.Timedelta("02:30:00")], name="xxx")
+ exp = Series(
+ [pd.Timedelta("01:30:00"), pd.Timedelta("02:30:00")],
+ name="xxx",
+ dtype=object,
+ )
tm.assert_series_equal(ser + pd.Timedelta("00:30:00"), exp)
tm.assert_series_equal(pd.Timedelta("00:30:00") + ser, exp)
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index 1fb1e96cea94b..f3ea741607692 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -1394,7 +1394,7 @@ def test_td64arr_addsub_anchored_offset_arraylike(self, obox, box_with_array):
# ------------------------------------------------------------------
# Unsorted
- def test_td64arr_add_sub_object_array(self, box_with_array):
+ def test_td64arr_add_sub_object_array(self, box_with_array, using_array_manager):
box = box_with_array
xbox = np.ndarray if box is pd.array else box
@@ -1410,6 +1410,11 @@ def test_td64arr_add_sub_object_array(self, box_with_array):
[Timedelta(days=2), Timedelta(days=4), Timestamp("2000-01-07")]
)
expected = tm.box_expected(expected, xbox)
+ if not using_array_manager:
+ # TODO: avoid mismatched behavior. This occurs bc inference
+ # can happen within TimedeltaArray method, which means results
+ # depend on whether we split blocks.
+ expected = expected.astype(object)
tm.assert_equal(result, expected)
msg = "unsupported operand type|cannot subtract a datelike"
@@ -1422,6 +1427,8 @@ def test_td64arr_add_sub_object_array(self, box_with_array):
expected = pd.Index([Timedelta(0), Timedelta(0), Timestamp("2000-01-01")])
expected = tm.box_expected(expected, xbox)
+ if not using_array_manager:
+ expected = expected.astype(object)
tm.assert_equal(result, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Broken off from #49714, which also changes TimedeltaArray inference behavior. This is exclusively Index/Series/DataFrame. | https://api.github.com/repos/pandas-dev/pandas/pulls/49999 | 2022-12-01T21:54:19Z | 2022-12-07T18:15:40Z | 2022-12-07T18:15:40Z | 2022-12-16T23:32:56Z |
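A small sketch of the behavior this PR describes (illustrative only; on releases before this change the addition itself would already have inferred `int64`):

```python
import numpy as np
import pandas as pd

ser = pd.Series([1, 2, 3], dtype=object)
result = ser + 1
print(result.dtype)  # object under the new behavior; was inferred before

# Type inference is now an explicit, opt-in step on the result:
inferred = result.infer_objects()
print(inferred.dtype)  # int64
```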
Fix styler excel example - take 2 | diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py
index d7cb70b0f5110..c4690730596ff 100644
--- a/pandas/io/formats/style_render.py
+++ b/pandas/io/formats/style_render.py
@@ -1110,8 +1110,7 @@ def format(
>>> df = pd.DataFrame({"A": [1, 0, -1]})
>>> pseudo_css = "number-format: 0§[Red](0)§-§@;"
- >>> df.style.applymap(lambda: pseudo_css).to_excel("formatted_file.xlsx")
- ... # doctest: +SKIP
+ >>> df.style.applymap(lambda v: pseudo_css).to_excel("formatted_file.xlsx")
.. figure:: ../../_static/style/format_excel_css.png
"""
| TypeError: <lambda>() takes 0 positional arguments but 1 was given.
Following https://github.com/pandas-dev/pandas/pull/49971#issuecomment-1334262698. | https://api.github.com/repos/pandas-dev/pandas/pulls/49996 | 2022-12-01T20:11:08Z | 2022-12-02T21:27:27Z | 2022-12-02T21:27:27Z | 2022-12-02T21:27:34Z |
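The underlying error is plain Python arity checking — `Styler.applymap` calls the styling function with each cell value, so the zero-argument lambda in the old example can never succeed. A stripped-down, pandas-free illustration:

```python
broken = lambda: "number-format: 0"   # zero-argument, as in the old doc example
fixed = lambda v: "number-format: 0"  # one argument, as applymap expects

# applymap effectively does func(cell_value) for every cell:
try:
    broken("cell value")
except TypeError as err:
    print(err)  # <lambda>() takes 0 positional arguments but 1 was given

print(fixed("cell value"))  # number-format: 0
```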
DOC: Add example for read_csv with nullable dtype | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index a073087f6ec8f..f1c212b53a87a 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -464,6 +464,17 @@ worth trying.
os.remove("foo.csv")
+Setting ``use_nullable_dtypes=True`` will result in nullable dtypes for every column.
+
+.. ipython:: python
+
+ data = """a,b,c,d,e,f,g,h,i,j
+ 1,2.5,True,a,,,,,12-31-2019,
+ 3,4.5,False,b,6,7.5,True,a,12-31-2019,
+ """
+
+ pd.read_csv(StringIO(data), use_nullable_dtypes=True, parse_dates=["i"])
+
.. _io.categorical:
Specifying categorical dtype
| The first issue in the link: https://github.com/noatamir/pyladies-berlin-sprints/issues/4 | https://api.github.com/repos/pandas-dev/pandas/pulls/49995 | 2022-12-01T19:59:47Z | 2022-12-02T03:27:47Z | 2022-12-02T03:27:47Z | 2022-12-02T03:27:54Z |
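For context, on releases that predate the `use_nullable_dtypes` keyword, a similar end state can be reached by converting after the read with the long-standing `convert_dtypes` method (an approximation, not the new keyword itself):

```python
from io import StringIO

import pandas as pd

data = "a,b\n1,2.5\n3,\n"

# convert_dtypes() promotes each column to its best nullable extension
# dtype, so the missing value becomes pd.NA instead of forcing float64.
df = pd.read_csv(StringIO(data)).convert_dtypes()
print(df.dtypes)  # a: Int64, b: Float64
```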
DEV: remove geopandas from the environment.yml file | - [x] closes #https://github.com/noatamir/pyladies-berlin-sprints/issues/9
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49994 | 2022-12-01T19:21:51Z | 2022-12-01T19:35:23Z | null | 2023-04-29T20:07:07Z | |
Add new mamba environment creation command | diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst
index afa0d0306f1af..3b9075f045e69 100644
--- a/doc/source/development/contributing_environment.rst
+++ b/doc/source/development/contributing_environment.rst
@@ -119,7 +119,7 @@ We'll now kick off a three-step process:
.. code-block:: none
# Create and activate the build environment
- mamba env create
+ mamba env create --file environment.yml
mamba activate pandas-dev
# Build and install pandas
| - [X] closes #49959
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49993 | 2022-12-01T19:15:23Z | 2022-12-01T19:26:27Z | 2022-12-01T19:26:27Z | 2022-12-23T21:04:39Z |
fixed up standard library imports part 1 of 2 | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 18f3644a0e0ae..77be428bb2c72 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -314,3 +314,9 @@ repos:
additional_dependencies:
- autotyping==22.9.0
- libcst==0.4.7
+ - id: stdlib-imports
+ name: Place standard library imports at top of file
+ entry: python -m scripts.standard_library_imports_should_be_global
+ language: python
+ types: [python]
+    exclude: ^versioneer\.py$
diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 3c1362b1ac83e..985545f46a8c0 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -45,15 +45,12 @@ if [[ -z "$CHECK" || "$CHECK" == "code" ]]; then
python -W error -c "
import sys
import pandas
-
-blocklist = {'bs4', 'gcsfs', 'html5lib', 'http', 'ipython', 'jinja2', 'hypothesis',
- 'lxml', 'matplotlib', 'openpyxl', 'py', 'pytest', 's3fs', 'scipy',
- 'tables', 'urllib.request', 'xlrd', 'xlsxwriter'}
+from scripts.standard_library_imports_should_be_global import BLOCKLIST
# GH#28227 for some of these check for top-level modules, while others are
# more specific (e.g. urllib.request)
import_mods = set(m.split('.')[0] for m in sys.modules) | set(sys.modules)
-mods = blocklist & import_mods
+mods = BLOCKLIST & import_mods
if mods:
sys.stderr.write('err: pandas should not import: {}\n'.format(', '.join(mods)))
sys.exit(len(mods))
diff --git a/pandas/_config/config.py b/pandas/_config/config.py
index 4170bb7706bdd..752e100f87d84 100644
--- a/pandas/_config/config.py
+++ b/pandas/_config/config.py
@@ -54,7 +54,11 @@
ContextDecorator,
contextmanager,
)
+from itertools import groupby
+import keyword
import re
+from textwrap import wrap
+import tokenize
from typing import (
Any,
Callable,
@@ -481,9 +485,6 @@ def register_option(
ValueError if `validator` is specified and `defval` is not a valid value.
"""
- import keyword
- import tokenize
-
key = key.lower()
if key in _registered_options:
@@ -703,8 +704,6 @@ def _build_option_description(k: str) -> str:
def pp_options_list(keys: Iterable[str], width: int = 80, _print: bool = False):
"""Builds a concise listing of available options, grouped by prefix"""
- from itertools import groupby
- from textwrap import wrap
def pp(name: str, ks: Iterable[str]) -> list[str]:
pfx = "- " + name + ".[" if name else ""
diff --git a/pandas/_testing/_warnings.py b/pandas/_testing/_warnings.py
index f42004bdfdef3..19008a533d0a1 100644
--- a/pandas/_testing/_warnings.py
+++ b/pandas/_testing/_warnings.py
@@ -4,6 +4,10 @@
contextmanager,
nullcontext,
)
+from inspect import (
+ getframeinfo,
+ stack,
+)
import re
import sys
from typing import (
@@ -207,11 +211,6 @@ def _is_unexpected_warning(
def _assert_raised_with_correct_stacklevel(
actual_warning: warnings.WarningMessage,
) -> None:
- from inspect import (
- getframeinfo,
- stack,
- )
-
caller = getframeinfo(stack()[4][0])
msg = (
"Warning not set with correct stacklevel. "
diff --git a/pandas/_testing/contexts.py b/pandas/_testing/contexts.py
index e5f716c62eca7..83d940cf155d0 100644
--- a/pandas/_testing/contexts.py
+++ b/pandas/_testing/contexts.py
@@ -1,9 +1,11 @@
from __future__ import annotations
from contextlib import contextmanager
+import csv
import os
from pathlib import Path
import tempfile
+import time
from types import TracebackType
from typing import (
IO,
@@ -62,7 +64,6 @@ def set_timezone(tz: str) -> Generator[None, None, None]:
...
'EST'
"""
- import time
def setTZ(tz) -> None:
if tz is None:
@@ -163,8 +164,6 @@ def with_csv_dialect(name, **kwargs) -> Generator[None, None, None]:
--------
csv : Python's CSV library.
"""
- import csv
-
_BUILTIN_DIALECTS = {"excel", "excel-tab", "unix"}
if name in _BUILTIN_DIALECTS:
diff --git a/pandas/core/dtypes/inference.py b/pandas/core/dtypes/inference.py
index 3c28bd0c0a843..704e04acb4dc1 100644
--- a/pandas/core/dtypes/inference.py
+++ b/pandas/core/dtypes/inference.py
@@ -3,6 +3,7 @@
from __future__ import annotations
from collections import abc
+import dataclasses
from numbers import Number
import re
from typing import Pattern
@@ -417,8 +418,6 @@ def is_dataclass(item):
"""
try:
- import dataclasses
-
return dataclasses.is_dataclass(item) and not isinstance(item, type)
except ImportError:
return False
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 5a71ac247422a..cb047ce8ae67d 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1,5 +1,6 @@
from __future__ import annotations
+from copy import deepcopy
from datetime import datetime
import functools
from itertools import zip_longest
@@ -1561,7 +1562,6 @@ def _validate_names(
Handles the quirks of having a singular 'name' parameter for general
Index and plural 'names' parameter for MultiIndex.
"""
- from copy import deepcopy
if names is not None and name is not None:
raise TypeError("Can only provide one of `names` and `name`")
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 8776d78ae6d9a..147e3fac4a81e 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1,5 +1,6 @@
from __future__ import annotations
+from copy import deepcopy
from functools import wraps
from sys import getsizeof
from typing import (
@@ -1166,7 +1167,6 @@ def copy( # type: ignore[override]
levels, codes = None, None
if deep:
- from copy import deepcopy
levels = deepcopy(self.levels)
codes = deepcopy(self.codes)
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 07fab0080a747..f82a6b76c764c 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -5,6 +5,7 @@
from __future__ import annotations
from collections import abc
+from dataclasses import asdict
from typing import (
Any,
Hashable,
@@ -726,8 +727,6 @@ def dataclasses_to_dicts(data):
[{'x': 1, 'y': 2}, {'x': 2, 'y': 3}]
"""
- from dataclasses import asdict
-
return list(map(asdict, data))
diff --git a/pandas/io/clipboard/__init__.py b/pandas/io/clipboard/__init__.py
index e574ed2c8059a..fa1e8cd613953 100644
--- a/pandas/io/clipboard/__init__.py
+++ b/pandas/io/clipboard/__init__.py
@@ -53,6 +53,19 @@
get_errno,
sizeof,
)
+from ctypes.wintypes import (
+ BOOL,
+ DWORD,
+ HANDLE,
+ HGLOBAL,
+ HINSTANCE,
+ HMENU,
+ HWND,
+ INT,
+ LPCSTR,
+ LPVOID,
+ UINT,
+)
import os
import platform
from shutil import which
@@ -322,19 +335,6 @@ def __setattr__(self, key, value):
def init_windows_clipboard():
global HGLOBAL, LPVOID, DWORD, LPCSTR, INT
global HWND, HINSTANCE, HMENU, BOOL, UINT, HANDLE
- from ctypes.wintypes import (
- BOOL,
- DWORD,
- HANDLE,
- HGLOBAL,
- HINSTANCE,
- HMENU,
- HWND,
- INT,
- LPCSTR,
- LPVOID,
- UINT,
- )
windll = ctypes.windll
msvcrt = ctypes.CDLL("msvcrt")
diff --git a/pandas/io/formats/xml.py b/pandas/io/formats/xml.py
index 60831b38dba31..8a414d885f6f9 100644
--- a/pandas/io/formats/xml.py
+++ b/pandas/io/formats/xml.py
@@ -9,6 +9,13 @@
TYPE_CHECKING,
Any,
)
+from xml.dom.minidom import parseString
+from xml.etree.ElementTree import (
+ Element,
+ SubElement,
+ register_namespace,
+ tostring,
+)
from pandas._typing import (
CompressionOptions,
@@ -336,12 +343,6 @@ class EtreeXMLFormatter(BaseXMLFormatter):
"""
def build_tree(self) -> bytes:
- from xml.etree.ElementTree import (
- Element,
- SubElement,
- tostring,
- )
-
self.root = Element(
f"{self.prefix_uri}{self.root_name}", attrib=self.other_namespaces()
)
@@ -375,8 +376,6 @@ def build_tree(self) -> bytes:
return self.out_xml
def get_prefix_uri(self) -> str:
- from xml.etree.ElementTree import register_namespace
-
uri = ""
if self.namespaces:
for p, n in self.namespaces.items():
@@ -393,8 +392,6 @@ def get_prefix_uri(self) -> str:
return uri
def build_elems(self, d: dict[str, Any], elem_row: Any) -> None:
- from xml.etree.ElementTree import SubElement
-
self._build_elems(SubElement, d, elem_row)
def prettify_tree(self) -> bytes:
@@ -404,8 +401,6 @@ def prettify_tree(self) -> bytes:
This method will pretty print xml with line breaks and indentation.
"""
- from xml.dom.minidom import parseString
-
dom = parseString(self.out_xml)
return dom.toprettyxml(indent=" ", encoding=self.encoding)
diff --git a/pandas/io/xml.py b/pandas/io/xml.py
index 4f61455826286..5a85613b22c71 100644
--- a/pandas/io/xml.py
+++ b/pandas/io/xml.py
@@ -10,6 +10,11 @@
Callable,
Sequence,
)
+from xml.etree.ElementTree import (
+ XMLParser,
+ iterparse,
+ parse,
+)
from pandas._typing import (
TYPE_CHECKING,
@@ -434,8 +439,6 @@ class _EtreeFrameParser(_XMLFrameParser):
"""
def parse_data(self) -> list[dict[str, str | None]]:
- from xml.etree.ElementTree import iterparse
-
if self.stylesheet is not None:
raise ValueError(
"To use stylesheet, you need lxml installed and selected as parser."
@@ -519,11 +522,6 @@ def _validate_names(self) -> None:
def _parse_doc(
self, raw_doc: FilePath | ReadBuffer[bytes] | ReadBuffer[str]
) -> Element:
- from xml.etree.ElementTree import (
- XMLParser,
- parse,
- )
-
handle_data = get_data_from_filepath(
filepath_or_buffer=raw_doc,
encoding=self.encoding,
diff --git a/pandas/util/_print_versions.py b/pandas/util/_print_versions.py
index 91d518d1ab496..080943d50df2d 100644
--- a/pandas/util/_print_versions.py
+++ b/pandas/util/_print_versions.py
@@ -3,6 +3,7 @@
import codecs
import json
import locale
+from optparse import OptionParser
import os
import platform
import struct
@@ -135,8 +136,6 @@ def show_versions(as_json: str | bool = False) -> None:
def main() -> int:
- from optparse import OptionParser
-
parser = OptionParser()
parser.add_option(
"-j",
diff --git a/pandas/util/_test_decorators.py b/pandas/util/_test_decorators.py
index 33830e96342f3..8e67e347322c1 100644
--- a/pandas/util/_test_decorators.py
+++ b/pandas/util/_test_decorators.py
@@ -28,6 +28,7 @@ def test_foo():
from contextlib import contextmanager
import gc
import locale
+import sys
from typing import (
Callable,
Generator,
@@ -99,8 +100,6 @@ def safe_import(mod_name: str, min_version: str | None = None):
if not min_version:
return mod
else:
- import sys
-
try:
version = getattr(sys.modules[mod_name], "__version__")
except AttributeError:
diff --git a/scripts/standard_library_imports_should_be_global.py b/scripts/standard_library_imports_should_be_global.py
new file mode 100644
index 0000000000000..e72b3eb8ce164
--- /dev/null
+++ b/scripts/standard_library_imports_should_be_global.py
@@ -0,0 +1,103 @@
+"""
+Check that standard library imports appear at the top of modules.
+
+Imports within functions should only be used to prevent circular imports,
+for optional dependencies, or if an import is slow.
+
+This is meant to be run as a pre-commit hook - to run it manually, you can do:
+
+ pre-commit run stdlib-imports --all-files
+
+"""
+import argparse
+import ast
+from ast import NodeVisitor
+import importlib
+import sys
+
+BLOCKLIST = {
+ "bs4",
+ "ctypes",
+ "gcsfs",
+ "html5lib",
+ "http",
+ "importlib.metadata",
+ "ipython",
+ "jinja2",
+ "hypothesis",
+ "lxml",
+ "matplotlib",
+ "openpyxl",
+ "py",
+ "pytest",
+ "s3fs",
+ "scipy",
+ "sqlite3",
+ "tables",
+ "urllib.error",
+ "urllib.request",
+ "xlrd",
+ "xlsxwriter",
+ "xml",
+}
+
+
+class Visitor(NodeVisitor):
+ def __init__(self, file) -> None:
+ self.ret = 0
+ self.file = file
+
+ def visit_FunctionDef(self, node: ast.FunctionDef) -> None:
+ for _node in ast.walk(node):
+ if (
+ isinstance(_node, ast.ImportFrom)
+                and _node.module != "__main__"
+ and _node.module not in BLOCKLIST
+ and _node.module.split(".")[0] not in BLOCKLIST
+ ):
+ try:
+ importlib.import_module(_node.module)
+ except Exception as exp: # noqa: F841
+ pass
+ else:
+ print(
+ f"{self.file}:{_node.lineno}:{_node.col_offset} standard "
+ f"library import '{_node.module}' should be at the top of "
+ "the file"
+ )
+ self.ret = 1
+ elif isinstance(_node, ast.Import):
+ for _name in _node.names:
+ if (
+ _name.name == "__main__"
+ or _name.name in BLOCKLIST
+ or _name.name.split(".")[0] in BLOCKLIST
+ ):
+ continue
+ try:
+ importlib.import_module(_name.name)
+ except Exception as exp: # noqa: F841
+ pass
+ else:
+ print(
+ f"{self.file}:{_node.lineno}:{_node.col_offset} standard "
+ f"library import '{_name.name}' should be at the top of "
+ "the file"
+ )
+ self.ret = 1
+ self.generic_visit(node)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ parser.add_argument("paths", nargs="*")
+ args = parser.parse_args()
+ ret = 0
+ for file in args.paths:
+ with open(file, encoding="utf-8") as fd:
+ content = fd.read()
+ tree = ast.parse(content)
+ visitor = Visitor(file)
+ visitor.visit(tree)
+ ret |= visitor.ret
+ sys.exit(ret)
| all but the pandas/tests files in this draft PR
| https://api.github.com/repos/pandas-dev/pandas/pulls/49992 | 2022-12-01T17:55:42Z | 2022-12-07T18:31:31Z | null | 2022-12-07T18:31:46Z |
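The heart of the new hook is a short `ast` walk; stripped of the blocklist and importability checks, the idea reduces to flagging `Import`/`ImportFrom` nodes nested inside function definitions. A simplified, self-contained sketch:

```python
import ast

SOURCE = '''\
import os

def loader():
    import json  # the hook would flag this nested import
    return json.dumps({})
'''


def find_nested_imports(source):
    """Return (lineno, module name) pairs for imports inside function bodies."""
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for inner in ast.walk(node):
                if isinstance(inner, ast.Import):
                    found.extend((inner.lineno, a.name) for a in inner.names)
                elif isinstance(inner, ast.ImportFrom) and inner.module:
                    found.append((inner.lineno, inner.module))
    return found


print(find_nested_imports(SOURCE))  # [(4, 'json')]
```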
TST/CoW: copy-on-write tests for add_prefix and add_suffix | diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index 8015eb93988c9..f5c7b31e59bc5 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -253,6 +253,45 @@ def test_set_index(using_copy_on_write):
tm.assert_frame_equal(df, df_orig)
+def test_add_prefix(using_copy_on_write):
+ # GH 49473
+ df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [0.1, 0.2, 0.3]})
+ df_orig = df.copy()
+ df2 = df.add_prefix("CoW_")
+
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "CoW_a"), get_array(df, "a"))
+ df2.iloc[0, 0] = 0
+
+ assert not np.shares_memory(get_array(df2, "CoW_a"), get_array(df, "a"))
+
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "CoW_c"), get_array(df, "c"))
+ expected = DataFrame(
+ {"CoW_a": [0, 2, 3], "CoW_b": [4, 5, 6], "CoW_c": [0.1, 0.2, 0.3]}
+ )
+ tm.assert_frame_equal(df2, expected)
+ tm.assert_frame_equal(df, df_orig)
+
+
+def test_add_suffix(using_copy_on_write):
+ # GH 49473
+ df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [0.1, 0.2, 0.3]})
+ df_orig = df.copy()
+ df2 = df.add_suffix("_CoW")
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "a_CoW"), get_array(df, "a"))
+ df2.iloc[0, 0] = 0
+ assert not np.shares_memory(get_array(df2, "a_CoW"), get_array(df, "a"))
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "c_CoW"), get_array(df, "c"))
+ expected = DataFrame(
+ {"a_CoW": [0, 2, 3], "b_CoW": [4, 5, 6], "c_CoW": [0.1, 0.2, 0.3]}
+ )
+ tm.assert_frame_equal(df2, expected)
+ tm.assert_frame_equal(df, df_orig)
+
+
@pytest.mark.parametrize(
"method",
[
| [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
This PR is related to https://github.com/pandas-dev/pandas/issues/49473
`add_suffix` and `add_prefix` already had Copy-on-Write implemented; this PR adds test cases that explicitly test the Copy-on-Write feature. | https://api.github.com/repos/pandas-dev/pandas/pulls/49991 | 2022-12-01T17:46:52Z | 2022-12-03T18:49:26Z | 2022-12-03T18:49:26Z | 2022-12-03T18:49:32Z
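The assertions in these tests hinge on `np.shares_memory`, which reports whether two arrays are backed by the same buffer; under Copy-on-Write, `add_prefix`/`add_suffix` may return views of the parent's columns until one side is mutated. A minimal illustration of the primitive itself (plain NumPy, independent of pandas' CoW mode):

```python
import numpy as np

a = np.arange(3)
view = a[:]        # a slice is a view onto the same buffer
copy = a.copy()    # an explicit copy gets its own buffer

shares_view = np.shares_memory(a, view)   # same underlying data
shares_copy = np.shares_memory(a, copy)   # independent data
```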
Fix dataframe.replace when columns contain pd.na | diff --git a/pandas/core/missing.py b/pandas/core/missing.py
index 0d058ead9d22c..8243b0705a4b2 100644
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -88,12 +88,14 @@ def mask_missing(arr: ArrayLike, values_to_mask) -> npt.NDArray[np.bool_]:
# GH 21977
mask = np.zeros(arr.shape, dtype=bool)
+ arr_mask_na = ~isna(arr)
for x in nonna:
if is_numeric_v_string_like(arr, x):
# GH#29553 prevent numpy deprecation warnings
pass
else:
- new_mask = arr == x
+ new_mask = np.zeros_like(arr, dtype=bool)
+ new_mask[arr_mask_na] = arr[arr_mask_na] == x
if not isinstance(new_mask, np.ndarray):
# usually BooleanArray
new_mask = new_mask.to_numpy(dtype=bool, na_value=False)
| - [x] closes #47480
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature | https://api.github.com/repos/pandas-dev/pandas/pulls/49988 | 2022-12-01T13:49:09Z | 2022-12-01T18:31:13Z | null | 2022-12-01T18:31:14Z |
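The failure mode in #47480 comes from comparing an object array that contains `pd.NA`: elementwise `NA == x` yields `NA`, so the comparison no longer produces a clean boolean mask. A small sketch of the patched idea from `mask_missing` above — restrict the comparison to non-NA positions (simplified; the real code also handles string/numeric mismatches):

```python
import numpy as np
import pandas as pd

arr = np.array([pd.NA, 1, 2], dtype=object)

# Naive comparison: NA == 1 evaluates to NA, so the result is an
# object array containing <NA>, not a usable boolean mask.
raw = arr == 1

# Patched idea: compare only where the array is not NA.
not_na = ~pd.isna(arr)
mask = np.zeros(arr.shape, dtype=bool)
mask[not_na] = arr[not_na] == 1
```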
⬆️ UPGRADE: Autoupdate pre-commit config | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 0779f9c95f7b4..93e5328b088d6 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -57,12 +57,12 @@ repos:
- flake8-bugbear==22.7.1
- pandas-dev-flaker==0.5.0
- repo: https://github.com/pycqa/pylint
- rev: v2.15.6
+ rev: v2.15.7
hooks:
- id: pylint
stages: [manual]
- repo: https://github.com/pycqa/pylint
- rev: v2.15.6
+ rev: v2.15.7
hooks:
- id: pylint
alias: redefined-outer-name
@@ -83,7 +83,7 @@ repos:
hooks:
- id: isort
- repo: https://github.com/asottile/pyupgrade
- rev: v3.2.2
+ rev: v3.2.3
hooks:
- id: pyupgrade
args: [--py38-plus]
| <!-- START pr-commits -->
<!-- END pr-commits -->
## Base PullRequest
default branch (https://github.com/pandas-dev/pandas/tree/main)
## Command results
<details>
<summary>Details: </summary>
<details>
<summary><em>add path</em></summary>
```Shell
/home/runner/work/_actions/technote-space/create-pr-action/v2/node_modules/npm-check-updates/build/src/bin
```
</details>
<details>
<summary><em>pip install pre-commit</em></summary>
```Shell
Collecting pre-commit
Downloading pre_commit-2.20.0-py2.py3-none-any.whl (199 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 199.5/199.5 kB 12.5 MB/s eta 0:00:00
Collecting cfgv>=2.0.0
Downloading cfgv-3.3.1-py2.py3-none-any.whl (7.3 kB)
Collecting identify>=1.0.0
Downloading identify-2.5.9-py2.py3-none-any.whl (98 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.8/98.8 kB 40.4 MB/s eta 0:00:00
Collecting nodeenv>=0.11.1
Downloading nodeenv-1.7.0-py2.py3-none-any.whl (21 kB)
Collecting pyyaml>=5.1
Downloading PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (757 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 757.9/757.9 kB 59.9 MB/s eta 0:00:00
Collecting toml
Downloading toml-0.10.2-py2.py3-none-any.whl (16 kB)
Collecting virtualenv>=20.0.8
Downloading virtualenv-20.17.0-py3-none-any.whl (8.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.8/8.8 MB 115.7 MB/s eta 0:00:00
Requirement already satisfied: setuptools in /opt/hostedtoolcache/Python/3.11.0/x64/lib/python3.11/site-packages (from nodeenv>=0.11.1->pre-commit) (65.5.0)
Collecting distlib<1,>=0.3.6
Downloading distlib-0.3.6-py2.py3-none-any.whl (468 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 468.5/468.5 kB 107.9 MB/s eta 0:00:00
Collecting filelock<4,>=3.4.1
Downloading filelock-3.8.0-py3-none-any.whl (10 kB)
Collecting platformdirs<3,>=2.4
Downloading platformdirs-2.5.4-py3-none-any.whl (14 kB)
Installing collected packages: distlib, toml, pyyaml, platformdirs, nodeenv, identify, filelock, cfgv, virtualenv, pre-commit
Successfully installed cfgv-3.3.1 distlib-0.3.6 filelock-3.8.0 identify-2.5.9 nodeenv-1.7.0 platformdirs-2.5.4 pre-commit-2.20.0 pyyaml-6.0 toml-0.10.2 virtualenv-20.17.0
```
</details>
<details>
<summary><em>pre-commit autoupdate || (exit 0);</em></summary>
```Shell
Updating https://github.com/MarcoGorelli/absolufy-imports ... [INFO] Initializing environment for https://github.com/MarcoGorelli/absolufy-imports.
already up to date.
Updating https://github.com/jendrikseipp/vulture ... [INFO] Initializing environment for https://github.com/jendrikseipp/vulture.
already up to date.
Updating https://github.com/codespell-project/codespell ... [INFO] Initializing environment for https://github.com/codespell-project/codespell.
already up to date.
Updating https://github.com/MarcoGorelli/cython-lint ... [INFO] Initializing environment for https://github.com/MarcoGorelli/cython-lint.
already up to date.
Updating https://github.com/pre-commit/pre-commit-hooks ... [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
already up to date.
Updating https://github.com/cpplint/cpplint ... [INFO] Initializing environment for https://github.com/cpplint/cpplint.
already up to date.
Updating https://github.com/PyCQA/flake8 ... [INFO] Initializing environment for https://github.com/PyCQA/flake8.
already up to date.
Updating https://github.com/pycqa/pylint ... [INFO] Initializing environment for https://github.com/pycqa/pylint.
updating v2.15.6 -> v2.15.7.
Updating https://github.com/pycqa/pylint ... updating v2.15.6 -> v2.15.7.
Updating https://github.com/PyCQA/isort ... [INFO] Initializing environment for https://github.com/PyCQA/isort.
already up to date.
Updating https://github.com/asottile/pyupgrade ... [INFO] Initializing environment for https://github.com/asottile/pyupgrade.
updating v3.2.2 -> v3.2.3.
Updating https://github.com/pre-commit/pygrep-hooks ... [INFO] Initializing environment for https://github.com/pre-commit/pygrep-hooks.
already up to date.
Updating https://github.com/sphinx-contrib/sphinx-lint ... [INFO] Initializing environment for https://github.com/sphinx-contrib/sphinx-lint.
already up to date.
Updating https://github.com/asottile/yesqa ... [INFO] Initializing environment for https://github.com/asottile/yesqa.
already up to date.
```
</details>
<details>
<summary><em>pre-commit run -a || (exit 0);</em></summary>
```Shell
[INFO] Initializing environment for https://github.com/codespell-project/codespell:tomli.
[INFO] Initializing environment for https://github.com/PyCQA/flake8:flake8-bugbear==22.7.1,flake8==6.0.0,pandas-dev-flaker==0.5.0.
[INFO] Initializing environment for https://github.com/asottile/yesqa:flake8-bugbear==22.7.1,flake8==6.0.0,pandas-dev-flaker==0.5.0.
[INFO] Initializing environment for local:black==22.10.0.
[INFO] Initializing environment for local:pyright@1.1.276.
[INFO] Initializing environment for local:flake8-rst==0.7.0,flake8==3.7.9.
[INFO] Initializing environment for local:pyyaml,toml.
[INFO] Initializing environment for local.
[INFO] Initializing environment for local:tomli.
[INFO] Initializing environment for local:flake8-pyi==22.8.1,flake8==5.0.4.
[INFO] Initializing environment for local:autotyping==22.9.0,libcst==0.4.7.
[INFO] Installing environment for https://github.com/MarcoGorelli/absolufy-imports.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/jendrikseipp/vulture.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/codespell-project/codespell.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/MarcoGorelli/cython-lint.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/cpplint/cpplint.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/PyCQA/flake8.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/PyCQA/isort.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/asottile/pyupgrade.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/sphinx-contrib/sphinx-lint.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/asottile/yesqa.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for local.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for local.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for local.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for local.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for local.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for local.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for local.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
absolufy-imports........................................................................................Passed
vulture.................................................................................................Passed
codespell...............................................................................................Passed
cython-lint.............................................................................................Passed
debug statements (python)...............................................................................Passed
fix end of files........................................................................................Passed
trim trailing whitespace................................................................................Passed
cpplint.................................................................................................Passed
flake8..................................................................................................Passed
isort...................................................................................................Passed
pyupgrade...............................................................................................Passed
rst ``code`` is two backticks...........................................................................Passed
rst directives end with two colons......................................................................Passed
rst ``inline code`` next to normal text.................................................................Passed
Sphinx lint.............................................................................................Passed
Strip unnecessary `# noqa`s.............................................................................Passed
black...................................................................................................Passed
flake8-rst..............................................................................................Failed
- hook id: flake8-rst
- exit code: 1
Traceback (most recent call last):
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 486, in run_ast_checks
ast = self.processor.build_ast()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/processor.py", line 212, in build_ast
return compile("".join(self.lines), "", "exec", PyCF_ONLY_AST)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 103
@tm.network # noqa
^
IndentationError: expected an indented block after 'with' statement on line 101
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/bin/flake8-rst", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8_rst/cli.py", line 16, in main
app.run(argv)
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/main/application.py", line 393, in run
self._run(argv)
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/main/application.py", line 381, in _run
self.run_checks()
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/main/application.py", line 300, in run_checks
self.file_checker_manager.run()
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 331, in run
self.run_serial()
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 315, in run_serial
checker.run_checks()
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 598, in run_checks
self.run_ast_checks()
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 488, in run_ast_checks
row, column = self._extract_syntax_information(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 473, in _extract_syntax_information
lines = physical_line.rstrip("\n").split("\n")
^^^^^^^^^^^^^^^^^^^^
AttributeError: 'int' object has no attribute 'rstrip'
Traceback (most recent call last):
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 486, in run_ast_checks
ast = self.processor.build_ast()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/processor.py", line 212, in build_ast
return compile("".join(self.lines), "", "exec", PyCF_ONLY_AST)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 27
df2.<TAB> # noqa: E225, E999
^
SyntaxError: invalid syntax
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/bin/flake8-rst", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8_rst/cli.py", line 16, in main
app.run(argv)
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/main/application.py", line 393, in run
self._run(argv)
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/main/application.py", line 381, in _run
self.run_checks()
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/main/application.py", line 300, in run_checks
self.file_checker_manager.run()
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 331, in run
self.run_serial()
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 315, in run_serial
checker.run_checks()
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 598, in run_checks
self.run_ast_checks()
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 488, in run_ast_checks
row, column = self._extract_syntax_information(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/.cache/pre-commit/repohygk9rqo/py_env-python3.11/lib/python3.11/site-packages/flake8/checker.py", line 473, in _extract_syntax_information
lines = physical_line.rstrip("\n").split("\n")
^^^^^^^^^^^^^^^^^^^^
AttributeError: 'int' object has no attribute 'rstrip'
Unwanted patterns.......................................................................................Passed
Check Cython casting is `<type>obj`, not `<type> obj`...................................................Passed
Check for backticks incorrectly rendering because of missing spaces.....................................Passed
Check for unnecessary random seeds in asv benchmarks....................................................Passed
Check for usage of numpy testing or array_equal.........................................................Passed
Check for invalid EA testing............................................................................Passed
Generate pip dependency from conda......................................................................Passed
Check flake8 version is synced across flake8, yesqa, and environment.yml................................Passed
Validate correct capitalization among titles in documentation...........................................Passed
Import pandas.array as pd_array in core.................................................................Passed
Use pandas.io.common.urlopen instead of urllib.request.urlopen..........................................Passed
Use bool_t instead of bool in pandas/core/generic.py....................................................Passed
Use raise instead of return for exceptions..............................................................Passed
Ensure pandas errors are documented in doc/source/reference/testing.rst.................................Passed
Check for pg8000 not installed on CI for test_pg8000_sqlalchemy_passthrough_error.......................Passed
Check minimum version of dependencies are aligned.......................................................Passed
Validate errors locations...............................................................................Passed
flake8-pyi..............................................................................................Passed
import annotations from __future__......................................................................Passed
autotyping..............................................................................................Passed
```
</details>
</details>
## Changed files
<details>
<summary>Changed file: </summary>
- .pre-commit-config.yaml
</details>
<hr>
[:octocat: Repo](https://github.com/technote-space/create-pr-action) | [:memo: Issues](https://github.com/technote-space/create-pr-action/issues) | [:department_store: Marketplace](https://github.com/marketplace/actions/create-pr-action) | https://api.github.com/repos/pandas-dev/pandas/pulls/49984 | 2022-12-01T07:09:46Z | 2022-12-01T09:11:46Z | null | 2022-12-02T00:29:17Z |
DOC: fix RT02 errors in pd.io.formats.style #49968 | diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 4d685bd8e8858..6c62c4efde6bb 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -290,7 +290,7 @@ def concat(self, other: Styler) -> Styler:
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -415,7 +415,7 @@ def set_tooltips(
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -1424,7 +1424,7 @@ def set_td_classes(self, classes: DataFrame) -> Styler:
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -1727,7 +1727,7 @@ def apply(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -1844,7 +1844,7 @@ def apply_index(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -1948,7 +1948,7 @@ def applymap(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -2003,7 +2003,7 @@ def set_table_attributes(self, attributes: str) -> Styler:
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -2105,7 +2105,7 @@ def use(self, styles: dict[str, Any]) -> Styler:
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -2156,7 +2156,7 @@ def set_uuid(self, uuid: str) -> Styler:
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -2180,7 +2180,7 @@ def set_caption(self, caption: str | tuple) -> Styler:
Returns
-------
- self : Styler
+ Styler
"""
msg = "`caption` must be either a string or 2-tuple of strings."
if isinstance(caption, tuple):
@@ -2218,7 +2218,7 @@ def set_sticky(
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -2379,7 +2379,7 @@ def set_table_styles(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -2504,7 +2504,7 @@ def hide(
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -2748,7 +2748,7 @@ def background_gradient(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -2881,7 +2881,7 @@ def set_properties(self, subset: Subset | None = None, **kwargs) -> Styler:
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -2978,7 +2978,7 @@ def bar( # pylint: disable=disallowed-name
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -3053,7 +3053,7 @@ def highlight_null(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -3099,7 +3099,7 @@ def highlight_max(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -3147,7 +3147,7 @@ def highlight_min(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -3203,7 +3203,7 @@ def highlight_between(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -3315,7 +3315,7 @@ def highlight_quantile(
Returns
-------
- self : Styler
+ Styler
See Also
--------
| Simplified return type for several Styler methods in pd.io.formats.style.
- [x] xref #49968
- [ ] tests added and passed
- [x] passes pre-commit code checks
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49983 | 2022-12-01T06:24:51Z | 2022-12-05T19:35:05Z | 2022-12-05T19:35:05Z | 2022-12-06T07:59:12Z |
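The RT02 check this PR addresses asks that the first line of a numpydoc `Returns` section name only the type, without a variable name such as `self :`. A hedged, illustrative sketch of the convention (not pandas code; pandas verifies this through its own `validate_docstrings` tooling):

```python
def styled(value: int) -> int:
    """Pass a value through unchanged.

    Returns
    -------
    int
        The input value.  RT02 requires the first line above to be the
        bare type ("int"), not a "name : int" pair like "self : Styler".
    """
    return value
```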
Pip use psycopg2 wheel | diff --git a/requirements-dev.txt b/requirements-dev.txt
index eac825493845c..19ed830eca07e 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -30,7 +30,7 @@ numexpr>=2.8.0
openpyxl
odfpy
pandas-gbq
-psycopg2
+psycopg2-binary
pyarrow
pymysql
pyreadstat
diff --git a/scripts/generate_pip_deps_from_conda.py b/scripts/generate_pip_deps_from_conda.py
index 2ab45b32dee93..f25ac9a24b98b 100755
--- a/scripts/generate_pip_deps_from_conda.py
+++ b/scripts/generate_pip_deps_from_conda.py
@@ -22,7 +22,11 @@
EXCLUDE = {"python", "c-compiler", "cxx-compiler"}
REMAP_VERSION = {"tzdata": "2022.1"}
-RENAME = {"pytables": "tables", "geopandas-base": "geopandas"}
+RENAME = {
+ "pytables": "tables",
+ "geopandas-base": "geopandas",
+ "psycopg2": "psycopg2-binary",
+}
def conda_package_to_pip(package: str):
For pip users, this makes installation a bit easier. | https://api.github.com/repos/pandas-dev/pandas/pulls/49982 | 2022-12-01T04:00:09Z | 2022-12-01T18:43:58Z | 2022-12-01T18:43:58Z | 2023-04-12T20:17:39Z
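The generator script maps conda package names to their pip equivalents through the `RENAME` table extended in this diff. A simplified standalone sketch of that translation step (the real `conda_package_to_pip` also handles exclusions and remapped versions):

```python
RENAME = {
    "pytables": "tables",
    "geopandas-base": "geopandas",
    "psycopg2": "psycopg2-binary",
}

def conda_name_to_pip(package: str) -> str:
    # Split off an optional version constraint, rename the bare package
    # name if needed, then re-attach the constraint.
    for sep in (">=", "==", "<="):
        if sep in package:
            name, version = package.split(sep, 1)
            return RENAME.get(name, name) + sep + version
    return RENAME.get(package, package)

print(conda_name_to_pip("psycopg2"))        # psycopg2-binary
print(conda_name_to_pip("numexpr>=2.8.0"))  # numexpr>=2.8.0
```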
Streamline docker usage | diff --git a/.devcontainer.json b/.devcontainer.json
index 8bea96aea29c1..7c5d009260c64 100644
--- a/.devcontainer.json
+++ b/.devcontainer.json
@@ -9,8 +9,7 @@
// You can edit these settings after create using File > Preferences > Settings > Remote.
"settings": {
"terminal.integrated.shell.linux": "/bin/bash",
- "python.condaPath": "/opt/conda/bin/conda",
- "python.pythonPath": "/opt/conda/bin/python",
+ "python.pythonPath": "/usr/local/bin/python",
"python.formatting.provider": "black",
"python.linting.enabled": true,
"python.linting.flake8Enabled": true,
diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml
index 540e9481befd6..98770854f53dd 100644
--- a/.github/workflows/code-checks.yml
+++ b/.github/workflows/code-checks.yml
@@ -158,7 +158,7 @@ jobs:
run: docker build --pull --no-cache --tag pandas-dev-env .
- name: Show environment
- run: docker run -w /home/pandas pandas-dev-env mamba run -n pandas-dev python -c "import pandas as pd; print(pd.show_versions())"
+ run: docker run --rm pandas-dev-env python -c "import pandas as pd; print(pd.show_versions())"
requirements-dev-text-installable:
name: Test install requirements-dev.txt
diff --git a/Dockerfile b/Dockerfile
index 9de8695b24274..c987461e8cbb8 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -1,42 +1,13 @@
-FROM quay.io/condaforge/mambaforge
+FROM python:3.10.8
+WORKDIR /home/pandas
-# if you forked pandas, you can pass in your own GitHub username to use your fork
-# i.e. gh_username=myname
-ARG gh_username=pandas-dev
-ARG pandas_home="/home/pandas"
+RUN apt-get update && apt-get -y upgrade
+RUN apt-get install -y build-essential
-# Avoid warnings by switching to noninteractive
-ENV DEBIAN_FRONTEND=noninteractive
+# hdf5 needed for pytables installation
+RUN apt-get install -y libhdf5-dev
-# Configure apt and install packages
-RUN apt-get update \
- && apt-get -y install --no-install-recommends apt-utils git tzdata dialog 2>&1 \
- #
- # Configure timezone (fix for tests which try to read from "/etc/localtime")
- && ln -fs /usr/share/zoneinfo/Etc/UTC /etc/localtime \
- && dpkg-reconfigure -f noninteractive tzdata \
- #
- # cleanup
- && apt-get autoremove -y \
- && apt-get clean -y \
- && rm -rf /var/lib/apt/lists/*
-
-# Switch back to dialog for any ad-hoc use of apt-get
-ENV DEBIAN_FRONTEND=dialog
-
-# Clone pandas repo
-RUN mkdir "$pandas_home" \
- && git clone "https://github.com/$gh_username/pandas.git" "$pandas_home" \
- && cd "$pandas_home" \
- && git remote add upstream "https://github.com/pandas-dev/pandas.git" \
- && git pull upstream main
-
-# Set up environment
-RUN mamba env create -f "$pandas_home/environment.yml"
-
-# Build C extensions and pandas
-SHELL ["mamba", "run", "--no-capture-output", "-n", "pandas-dev", "/bin/bash", "-c"]
-RUN cd "$pandas_home" \
- && export \
- && python setup.py build_ext -j 4 \
- && python -m pip install --no-build-isolation -e .
+RUN python -m pip install --upgrade pip
+RUN python -m pip install --use-deprecated=legacy-resolver \
+ -r https://raw.githubusercontent.com/pandas-dev/pandas/main/requirements-dev.txt
+CMD ["/bin/bash"]
diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst
index 3b9075f045e69..69f7f054d865d 100644
--- a/doc/source/development/contributing_environment.rst
+++ b/doc/source/development/contributing_environment.rst
@@ -228,34 +228,22 @@ with a full pandas development environment.
Build the Docker image::
- # Build the image pandas-yourname-env
- docker build --tag pandas-yourname-env .
- # Or build the image by passing your GitHub username to use your own fork
- docker build --build-arg gh_username=yourname --tag pandas-yourname-env .
+ # Build the image
+ docker build -t pandas-dev .
Run Container::
# Run a container and bind your local repo to the container
- docker run -it -w /home/pandas --rm -v path-to-local-pandas-repo:/home/pandas pandas-yourname-env
+ # This command assumes you are running from your local repo
+ # but if not alter ${PWD} to match your local repo path
+ docker run -it --rm -v ${PWD}:/home/pandas pandas-dev
-Then a ``pandas-dev`` virtual environment will be available with all the development dependencies.
+When inside the running container you can build and install pandas the same way as the other methods
-.. code-block:: shell
-
- root@... :/home/pandas# conda env list
- # conda environments:
- #
- base * /opt/conda
- pandas-dev /opt/conda/envs/pandas-dev
-
-.. note::
- If you bind your local repo for the first time, you have to build the C extensions afterwards.
- Run the following command inside the container::
-
- python setup.py build_ext -j 4
+.. code-block:: bash
- You need to rebuild the C extensions anytime the Cython code in ``pandas/_libs`` changes.
- This most frequently occurs when changing or merging branches.
+ python setup.py build_ext -j 4
+ python -m pip install -e . --no-build-isolation --no-use-pep517
*Even easier, you can integrate Docker with the following IDEs:*
| Took a look at this setup today and found a few things that could be improved to make it easier for new contributors. Some quick comparisons:
- Image size current 6.1 GB vs new 2.75 GB
- Image build time of current somewhere in the 30-45 minute range versus new at 2.5 min
- Current image blurs the lines of responsibility between a Docker image and a container
- Current image puts another layer of virtualization in with mamba that is arguably unnecessary with Docker
The remaining bottleneck with image creation is https://github.com/pandas-dev/pandas/issues/48828 | https://api.github.com/repos/pandas-dev/pandas/pulls/49981 | 2022-12-01T03:46:58Z | 2022-12-02T10:03:18Z | 2022-12-02T10:03:18Z | 2022-12-23T21:19:01Z |
CLN: assorted | diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index 17facf9e16f4b..8c27170f65353 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -666,20 +666,20 @@ cpdef inline datetime localize_pydatetime(datetime dt, tzinfo tz):
cdef tzinfo convert_timezone(
- tzinfo tz_in,
- tzinfo tz_out,
- bint found_naive,
- bint found_tz,
- bint utc_convert,
+ tzinfo tz_in,
+ tzinfo tz_out,
+ bint found_naive,
+ bint found_tz,
+ bint utc_convert,
):
"""
Validate that ``tz_in`` can be converted/localized to ``tz_out``.
Parameters
----------
- tz_in : tzinfo
+ tz_in : tzinfo or None
Timezone info of element being processed.
- tz_out : tzinfo
+ tz_out : tzinfo or None
Timezone info of output.
found_naive : bool
Whether a timezone-naive element has been found so far.
diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py
index 0c99ae4b8e03d..5d7daec65c7d1 100644
--- a/pandas/_testing/asserters.py
+++ b/pandas/_testing/asserters.py
@@ -531,7 +531,7 @@ def assert_interval_array_equal(
def assert_period_array_equal(left, right, obj: str = "PeriodArray") -> None:
_check_isinstance(left, right, PeriodArray)
- assert_numpy_array_equal(left._data, right._data, obj=f"{obj}._data")
+ assert_numpy_array_equal(left._ndarray, right._ndarray, obj=f"{obj}._ndarray")
assert_attr_equal("freq", left, right, obj=obj)
@@ -541,7 +541,7 @@ def assert_datetime_array_equal(
__tracebackhide__ = True
_check_isinstance(left, right, DatetimeArray)
- assert_numpy_array_equal(left._data, right._data, obj=f"{obj}._data")
+ assert_numpy_array_equal(left._ndarray, right._ndarray, obj=f"{obj}._ndarray")
if check_freq:
assert_attr_equal("freq", left, right, obj=obj)
assert_attr_equal("tz", left, right, obj=obj)
@@ -552,7 +552,7 @@ def assert_timedelta_array_equal(
) -> None:
__tracebackhide__ = True
_check_isinstance(left, right, TimedeltaArray)
- assert_numpy_array_equal(left._data, right._data, obj=f"{obj}._data")
+ assert_numpy_array_equal(left._ndarray, right._ndarray, obj=f"{obj}._ndarray")
if check_freq:
assert_attr_equal("freq", left, right, obj=obj)
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index a3c201b402b0f..f11d031b2f622 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -1646,13 +1646,11 @@ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
class ExtensionArraySupportsAnyAll(ExtensionArray):
- def any(self, *, skipna: bool = True) -> bool: # type: ignore[empty-body]
- # error: Missing return statement
- pass
+ def any(self, *, skipna: bool = True) -> bool:
+ raise AbstractMethodError(self)
- def all(self, *, skipna: bool = True) -> bool: # type: ignore[empty-body]
- # error: Missing return statement
- pass
+ def all(self, *, skipna: bool = True) -> bool:
+ raise AbstractMethodError(self)
class ExtensionOpsMixin:
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index a9af210e08741..bf7e28d5a4b98 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -11,7 +11,6 @@
Literal,
Sequence,
TypeVar,
- Union,
cast,
overload,
)
@@ -511,7 +510,7 @@ def astype(self, dtype: AstypeArg, copy: bool = True) -> ArrayLike:
result = self.copy() if copy else self
elif is_categorical_dtype(dtype):
- dtype = cast("Union[str, CategoricalDtype]", dtype)
+ dtype = cast(CategoricalDtype, dtype)
# GH 10696/18593/18630
dtype = self.dtype.update_dtype(dtype)
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index be20d825b0c80..4f01c4892db6c 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -257,13 +257,6 @@ def _check_compatible_with(self, other: DTScalarOrNaT) -> None:
"""
raise AbstractMethodError(self)
- # ------------------------------------------------------------------
- # NDArrayBackedExtensionArray compat
-
- @cache_readonly
- def _data(self) -> np.ndarray:
- return self._ndarray
-
# ------------------------------------------------------------------
def _box_func(self, x):
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 60488a8ef9715..704897722e938 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1195,9 +1195,7 @@ def maybe_cast_to_datetime(
# TODO: _from_sequence would raise ValueError in cases where
# _ensure_nanosecond_dtype raises TypeError
- # Incompatible types in assignment (expression has type "Union[dtype[Any],
- # ExtensionDtype]", variable has type "Optional[dtype[Any]]")
- dtype = _ensure_nanosecond_dtype(dtype) # type: ignore[assignment]
+ _ensure_nanosecond_dtype(dtype)
if is_timedelta64_dtype(dtype):
res = TimedeltaArray._from_sequence(value, dtype=dtype)
@@ -1235,12 +1233,11 @@ def sanitize_to_nanoseconds(values: np.ndarray, copy: bool = False) -> np.ndarra
return values
-def _ensure_nanosecond_dtype(dtype: DtypeObj) -> DtypeObj:
+def _ensure_nanosecond_dtype(dtype: DtypeObj) -> None:
"""
Convert dtypes with granularity less than nanosecond to nanosecond
>>> _ensure_nanosecond_dtype(np.dtype("M8[us]"))
- dtype('<M8[us]')
>>> _ensure_nanosecond_dtype(np.dtype("M8[D]"))
Traceback (most recent call last):
@@ -1277,7 +1274,6 @@ def _ensure_nanosecond_dtype(dtype: DtypeObj) -> DtypeObj:
f"dtype={dtype} is not supported. Supported resolutions are 's', "
"'ms', 'us', and 'ns'"
)
- return dtype
# TODO: other value-dependent functions to standardize here include
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index a225d2cd12eac..000b5ebbdd2f7 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -18,7 +18,6 @@
import pandas._libs.missing as libmissing
from pandas._libs.tslibs import (
NaT,
- Period,
iNaT,
)
@@ -749,10 +748,8 @@ def isna_all(arr: ArrayLike) -> bool:
if dtype.kind == "f" and isinstance(dtype, np.dtype):
checker = nan_checker
- elif (
- (isinstance(dtype, np.dtype) and dtype.kind in ["m", "M"])
- or isinstance(dtype, DatetimeTZDtype)
- or dtype.type is Period
+ elif (isinstance(dtype, np.dtype) and dtype.kind in ["m", "M"]) or isinstance(
+ dtype, (DatetimeTZDtype, PeriodDtype)
):
# error: Incompatible types in assignment (expression has type
# "Callable[[Any], Any]", variable has type "ufunc")
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0144aefedaa5f..218c0e33af823 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7475,7 +7475,7 @@ def _cmp_method(self, other, op):
return self._construct_result(new_data)
def _arith_method(self, other, op):
- if ops.should_reindex_frame_op(self, other, op, 1, 1, None, None):
+ if ops.should_reindex_frame_op(self, other, op, 1, None, None):
return ops.frame_arith_method_with_reindex(self, other, op)
axis: Literal[1] = 1 # only relevant for Series other case
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 497e0ef724373..dba36066c7952 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1647,8 +1647,6 @@ def array_func(values: ArrayLike) -> ArrayLike:
return result
- # TypeError -> we may have an exception in trying to aggregate
- # continue and exclude the block
new_mgr = data.grouped_reduce(array_func)
res = self._wrap_agged_manager(new_mgr)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index f0b0ec23dba1a..9cbf3b6167305 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -841,7 +841,9 @@ def _set_levels(
self._reset_cache()
- def set_levels(self, levels, *, level=None, verify_integrity: bool = True):
+ def set_levels(
+ self, levels, *, level=None, verify_integrity: bool = True
+ ) -> MultiIndex:
"""
Set new levels on MultiIndex. Defaults to returning new index.
@@ -856,8 +858,7 @@ def set_levels(self, levels, *, level=None, verify_integrity: bool = True):
Returns
-------
- new index (of same type and class...etc) or None
- The same type as the caller or None if ``inplace=True``.
+ MultiIndex
Examples
--------
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index feca755fd43db..91216a9618365 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -758,9 +758,9 @@ def fast_xs(self, loc: int) -> SingleArrayManager:
result = dtype.construct_array_type()._from_sequence(values, dtype=dtype)
# for datetime64/timedelta64, the np.ndarray constructor cannot handle pd.NaT
elif is_datetime64_ns_dtype(dtype):
- result = DatetimeArray._from_sequence(values, dtype=dtype)._data
+ result = DatetimeArray._from_sequence(values, dtype=dtype)._ndarray
elif is_timedelta64_ns_dtype(dtype):
- result = TimedeltaArray._from_sequence(values, dtype=dtype)._data
+ result = TimedeltaArray._from_sequence(values, dtype=dtype)._ndarray
else:
result = np.array(values, dtype=dtype)
return SingleArrayManager([result], [self._axes[1]])
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f1856fce83160..c8a6750e165ea 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -2291,6 +2291,6 @@ def external_values(values: ArrayLike) -> ArrayLike:
# NB: for datetime64tz this is different from np.asarray(values), since
# that returns an object-dtype ndarray of Timestamps.
# Avoid raising in .astype in casting from dt64tz to dt64
- return values._data
+ return values._ndarray
else:
return values
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index bfedaca093a8e..76d5fc8128a8f 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -6,7 +6,10 @@
from __future__ import annotations
import operator
-from typing import TYPE_CHECKING
+from typing import (
+ TYPE_CHECKING,
+ cast,
+)
import numpy as np
@@ -312,7 +315,7 @@ def to_series(right):
def should_reindex_frame_op(
- left: DataFrame, right, op, axis, default_axis, fill_value, level
+ left: DataFrame, right, op, axis: int, fill_value, level
) -> bool:
"""
Check if this is an operation between DataFrames that will need to reindex.
@@ -326,7 +329,7 @@ def should_reindex_frame_op(
if not isinstance(right, ABCDataFrame):
return False
- if fill_value is None and level is None and axis is default_axis:
+ if fill_value is None and level is None and axis == 1:
# TODO: any other cases we should handle here?
# Intersection is always unique so we have to check the unique columns
@@ -411,17 +414,16 @@ def _maybe_align_series_as_frame(frame: DataFrame, series: Series, axis: AxisInt
def flex_arith_method_FRAME(op):
op_name = op.__name__.strip("_")
- default_axis = "columns"
na_op = get_array_op(op)
doc = make_flex_doc(op_name, "dataframe")
@Appender(doc)
- def f(self, other, axis=default_axis, level=None, fill_value=None):
+ def f(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ axis = self._get_axis_number(axis) if axis is not None else 1
+ axis = cast(int, axis)
- if should_reindex_frame_op(
- self, other, op, axis, default_axis, fill_value, level
- ):
+ if should_reindex_frame_op(self, other, op, axis, fill_value, level):
return frame_arith_method_with_reindex(self, other, op)
if isinstance(other, ABCSeries) and fill_value is not None:
@@ -429,8 +431,6 @@ def f(self, other, axis=default_axis, level=None, fill_value=None):
# through the DataFrame path
raise NotImplementedError(f"fill_value {fill_value} not supported.")
- axis = self._get_axis_number(axis) if axis is not None else 1
-
other = maybe_prepare_scalar_for_op(other, self.shape)
self, other = align_method_FRAME(self, other, axis, flex=True, level=level)
@@ -456,14 +456,13 @@ def f(self, other, axis=default_axis, level=None, fill_value=None):
def flex_comp_method_FRAME(op):
op_name = op.__name__.strip("_")
- default_axis = "columns" # because we are "flex"
doc = _flex_comp_doc_FRAME.format(
op_name=op_name, desc=_op_descriptions[op_name]["desc"]
)
@Appender(doc)
- def f(self, other, axis=default_axis, level=None):
+ def f(self, other, axis: Axis = "columns", level=None):
axis = self._get_axis_number(axis) if axis is not None else 1
self, other = align_method_FRAME(self, other, axis, flex=True, level=level)
diff --git a/pandas/tests/apply/test_invalid_arg.py b/pandas/tests/apply/test_invalid_arg.py
index 6ed962c8f68e6..252eff8f9a823 100644
--- a/pandas/tests/apply/test_invalid_arg.py
+++ b/pandas/tests/apply/test_invalid_arg.py
@@ -355,7 +355,6 @@ def test_transform_wont_agg_series(string_series, func):
@pytest.mark.parametrize(
"op_wrapper", [lambda x: x, lambda x: [x], lambda x: {"A": x}, lambda x: {"A": [x]}]
)
-@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_transform_reducer_raises(all_reductions, frame_or_series, op_wrapper):
# GH 35964
op = op_wrapper(all_reductions)
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index b4f1c5404d178..c35962d7d2e96 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -2437,7 +2437,7 @@ def test_dt64arr_addsub_object_dtype_2d():
assert isinstance(result, DatetimeArray)
assert result.freq is None
- tm.assert_numpy_array_equal(result._data, expected._data)
+ tm.assert_numpy_array_equal(result._ndarray, expected._ndarray)
with tm.assert_produces_warning(PerformanceWarning):
# Case where we expect to get a TimedeltaArray back
diff --git a/pandas/tests/arrays/datetimes/test_constructors.py b/pandas/tests/arrays/datetimes/test_constructors.py
index 992d047b1afef..6670d07a4c075 100644
--- a/pandas/tests/arrays/datetimes/test_constructors.py
+++ b/pandas/tests/arrays/datetimes/test_constructors.py
@@ -122,10 +122,10 @@ def test_freq_infer_raises(self):
def test_copy(self):
data = np.array([1, 2, 3], dtype="M8[ns]")
arr = DatetimeArray(data, copy=False)
- assert arr._data is data
+ assert arr._ndarray is data
arr = DatetimeArray(data, copy=True)
- assert arr._data is not data
+ assert arr._ndarray is not data
class TestSequenceToDT64NS:
diff --git a/pandas/tests/arrays/period/test_astype.py b/pandas/tests/arrays/period/test_astype.py
index e9245c9ca786b..475a85ca4b644 100644
--- a/pandas/tests/arrays/period/test_astype.py
+++ b/pandas/tests/arrays/period/test_astype.py
@@ -42,12 +42,12 @@ def test_astype_copies():
result = arr.astype(np.int64, copy=False)
# Add the `.base`, since we now use `.asi8` which returns a view.
- # We could maybe override it in PeriodArray to return ._data directly.
- assert result.base is arr._data
+ # We could maybe override it in PeriodArray to return ._ndarray directly.
+ assert result.base is arr._ndarray
result = arr.astype(np.int64, copy=True)
- assert result is not arr._data
- tm.assert_numpy_array_equal(result, arr._data.view("i8"))
+ assert result is not arr._ndarray
+ tm.assert_numpy_array_equal(result, arr._ndarray.view("i8"))
def test_astype_categorical():
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 3f310d0efa2ca..fbd6f362bd9e7 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -220,7 +220,7 @@ def test_unbox_scalar(self):
data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
arr = self.array_cls(data, freq="D")
result = arr._unbox_scalar(arr[0])
- expected = arr._data.dtype.type
+ expected = arr._ndarray.dtype.type
assert isinstance(result, expected)
result = arr._unbox_scalar(NaT)
@@ -350,13 +350,13 @@ def test_getitem_near_implementation_bounds(self):
def test_getitem_2d(self, arr1d):
# 2d slicing on a 1D array
- expected = type(arr1d)(arr1d._data[:, np.newaxis], dtype=arr1d.dtype)
+ expected = type(arr1d)(arr1d._ndarray[:, np.newaxis], dtype=arr1d.dtype)
result = arr1d[:, np.newaxis]
tm.assert_equal(result, expected)
# Lookup on a 2D array
arr2d = expected
- expected = type(arr2d)(arr2d._data[:3, 0], dtype=arr2d.dtype)
+ expected = type(arr2d)(arr2d._ndarray[:3, 0], dtype=arr2d.dtype)
result = arr2d[:3, 0]
tm.assert_equal(result, expected)
@@ -366,7 +366,7 @@ def test_getitem_2d(self, arr1d):
assert result == expected
def test_iter_2d(self, arr1d):
- data2d = arr1d._data[:3, np.newaxis]
+ data2d = arr1d._ndarray[:3, np.newaxis]
arr2d = type(arr1d)._simple_new(data2d, dtype=arr1d.dtype)
result = list(arr2d)
assert len(result) == 3
@@ -376,7 +376,7 @@ def test_iter_2d(self, arr1d):
assert x.dtype == arr1d.dtype
def test_repr_2d(self, arr1d):
- data2d = arr1d._data[:3, np.newaxis]
+ data2d = arr1d._ndarray[:3, np.newaxis]
arr2d = type(arr1d)._simple_new(data2d, dtype=arr1d.dtype)
result = repr(arr2d)
@@ -632,7 +632,7 @@ def test_array_interface(self, datetime_index):
# default asarray gives the same underlying data (for tz naive)
result = np.asarray(arr)
- expected = arr._data
+ expected = arr._ndarray
assert result is expected
tm.assert_numpy_array_equal(result, expected)
result = np.array(arr, copy=False)
@@ -641,7 +641,7 @@ def test_array_interface(self, datetime_index):
# specifying M8[ns] gives the same result as default
result = np.asarray(arr, dtype="datetime64[ns]")
- expected = arr._data
+ expected = arr._ndarray
assert result is expected
tm.assert_numpy_array_equal(result, expected)
result = np.array(arr, dtype="datetime64[ns]", copy=False)
@@ -720,13 +720,13 @@ def test_array_i8_dtype(self, arr1d):
assert result.base is None
def test_from_array_keeps_base(self):
- # Ensure that DatetimeArray._data.base isn't lost.
+ # Ensure that DatetimeArray._ndarray.base isn't lost.
arr = np.array(["2000-01-01", "2000-01-02"], dtype="M8[ns]")
dta = DatetimeArray(arr)
- assert dta._data is arr
+ assert dta._ndarray is arr
dta = DatetimeArray(arr[:0])
- assert dta._data.base is arr
+ assert dta._ndarray.base is arr
def test_from_dti(self, arr1d):
arr = arr1d
@@ -941,7 +941,7 @@ def test_array_interface(self, timedelta_index):
# default asarray gives the same underlying data
result = np.asarray(arr)
- expected = arr._data
+ expected = arr._ndarray
assert result is expected
tm.assert_numpy_array_equal(result, expected)
result = np.array(arr, copy=False)
@@ -950,7 +950,7 @@ def test_array_interface(self, timedelta_index):
# specifying m8[ns] gives the same result as default
result = np.asarray(arr, dtype="timedelta64[ns]")
- expected = arr._data
+ expected = arr._ndarray
assert result is expected
tm.assert_numpy_array_equal(result, expected)
result = np.array(arr, dtype="timedelta64[ns]", copy=False)
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index 89c9ba85fcfa9..cd58afe368960 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -659,7 +659,7 @@ def test_shift_fill_value(self):
dti = pd.date_range("2016-01-01", periods=3)
dta = dti._data
- expected = DatetimeArray(np.roll(dta._data, 1))
+ expected = DatetimeArray(np.roll(dta._ndarray, 1))
fv = dta[-1]
for fill_value in [fv, fv.to_pydatetime(), fv.to_datetime64()]:
diff --git a/pandas/tests/arrays/timedeltas/test_constructors.py b/pandas/tests/arrays/timedeltas/test_constructors.py
index d24fabfeecb26..3a076a6828a98 100644
--- a/pandas/tests/arrays/timedeltas/test_constructors.py
+++ b/pandas/tests/arrays/timedeltas/test_constructors.py
@@ -51,11 +51,11 @@ def test_incorrect_dtype_raises(self):
def test_copy(self):
data = np.array([1, 2, 3], dtype="m8[ns]")
arr = TimedeltaArray(data, copy=False)
- assert arr._data is data
+ assert arr._ndarray is data
arr = TimedeltaArray(data, copy=True)
- assert arr._data is not data
- assert arr._data.base is not data
+ assert arr._ndarray is not data
+ assert arr._ndarray.base is not data
def test_from_sequence_dtype(self):
msg = "dtype .*object.* cannot be converted to timedelta64"
diff --git a/pandas/tests/base/test_conversion.py b/pandas/tests/base/test_conversion.py
index 703ac6c89fca8..f244b348c6763 100644
--- a/pandas/tests/base/test_conversion.py
+++ b/pandas/tests/base/test_conversion.py
@@ -237,11 +237,11 @@ def test_numpy_array_all_dtypes(any_numpy_dtype):
"arr, attr",
[
(pd.Categorical(["a", "b"]), "_codes"),
- (pd.core.arrays.period_array(["2000", "2001"], freq="D"), "_data"),
+ (pd.core.arrays.period_array(["2000", "2001"], freq="D"), "_ndarray"),
(pd.array([0, np.nan], dtype="Int64"), "_data"),
(IntervalArray.from_breaks([0, 1]), "_left"),
(SparseArray([0, 1]), "_sparse_values"),
- (DatetimeArray(np.array([1, 2], dtype="datetime64[ns]")), "_data"),
+ (DatetimeArray(np.array([1, 2], dtype="datetime64[ns]")), "_ndarray"),
# tz-aware Datetime
(
DatetimeArray(
@@ -250,20 +250,14 @@ def test_numpy_array_all_dtypes(any_numpy_dtype):
),
dtype=DatetimeTZDtype(tz="US/Central"),
),
- "_data",
+ "_ndarray",
),
],
)
def test_array(arr, attr, index_or_series, request):
box = index_or_series
- warn = None
- if arr.dtype.name in ("Sparse[int64, 0]") and box is pd.Index:
- mark = pytest.mark.xfail(reason="Index cannot yet store sparse dtype")
- request.node.add_marker(mark)
- warn = FutureWarning
- with tm.assert_produces_warning(warn):
- result = box(arr, copy=False).array
+ result = box(arr, copy=False).array
if attr:
arr = getattr(arr, attr)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index e6f1675bb8bc8..eb6ad4b575414 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -443,15 +443,12 @@ def test_reduce_series(
if not pa.types.is_boolean(pa_dtype):
request.node.add_marker(xfail_mark)
op_name = all_boolean_reductions
- s = pd.Series(data)
- result = getattr(s, op_name)(skipna=skipna)
+ ser = pd.Series(data)
+ result = getattr(ser, op_name)(skipna=skipna)
assert result is (op_name == "any")
class TestBaseGroupby(base.BaseGroupbyTests):
- def test_groupby_agg_extension(self, data_for_grouping, request):
- super().test_groupby_agg_extension(data_for_grouping)
-
def test_groupby_extension_no_sort(self, data_for_grouping, request):
pa_dtype = data_for_grouping.dtype.pyarrow_dtype
if pa.types.is_boolean(pa_dtype):
@@ -515,9 +512,6 @@ def test_in_numeric_groupby(self, data_for_grouping, request):
)
super().test_in_numeric_groupby(data_for_grouping)
- @pytest.mark.filterwarnings(
- "ignore:The default value of numeric_only:FutureWarning"
- )
@pytest.mark.parametrize("as_index", [True, False])
def test_groupby_extension_agg(self, as_index, data_for_grouping, request):
pa_dtype = data_for_grouping.dtype.pyarrow_dtype
@@ -638,15 +632,17 @@ class TestBaseIndex(base.BaseIndexTests):
class TestBaseInterface(base.BaseInterfaceTests):
- @pytest.mark.xfail(reason="pyarrow.ChunkedArray does not support views.")
+ @pytest.mark.xfail(reason="GH 45419: pyarrow.ChunkedArray does not support views.")
def test_view(self, data):
super().test_view(data)
class TestBaseMissing(base.BaseMissingTests):
- @pytest.mark.filterwarnings("ignore:Falling back:pandas.errors.PerformanceWarning")
def test_dropna_array(self, data_missing):
- super().test_dropna_array(data_missing)
+ with tm.maybe_produces_warning(
+ PerformanceWarning, pa_version_under6p0, check_stacklevel=False
+ ):
+ super().test_dropna_array(data_missing)
def test_fillna_no_op_returns_copy(self, data):
with tm.maybe_produces_warning(
@@ -949,14 +945,26 @@ def test_combine_le(self, data_repeated):
def test_combine_add(self, data_repeated, request):
pa_dtype = next(data_repeated(1)).dtype.pyarrow_dtype
- if pa.types.is_temporal(pa_dtype):
- request.node.add_marker(
- pytest.mark.xfail(
- raises=TypeError,
- reason=f"{pa_dtype} cannot be added to {pa_dtype}",
- )
- )
- super().test_combine_add(data_repeated)
+ if pa.types.is_duration(pa_dtype):
+ # TODO: this fails on the scalar addition constructing 'expected'
+ # but not in the actual 'combine' call, so may be salvage-able
+ mark = pytest.mark.xfail(
+ raises=TypeError,
+ reason=f"{pa_dtype} cannot be added to {pa_dtype}",
+ )
+ request.node.add_marker(mark)
+ super().test_combine_add(data_repeated)
+
+ elif pa.types.is_temporal(pa_dtype):
+ # analogous to datetime64, these cannot be added
+ orig_data1, orig_data2 = data_repeated(2)
+ s1 = pd.Series(orig_data1)
+ s2 = pd.Series(orig_data2)
+ with pytest.raises(TypeError):
+ s1.combine(s2, lambda x1, x2: x1 + x2)
+
+ else:
+ super().test_combine_add(data_repeated)
def test_searchsorted(self, data_for_sorting, as_series, request):
pa_dtype = data_for_sorting.dtype.pyarrow_dtype
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index 8331bed881ce1..e27f9fe9995ad 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -340,8 +340,8 @@ def test_setitem_dt64tz(self, timezone_frame):
v1 = df._mgr.arrays[1]
v2 = df._mgr.arrays[2]
tm.assert_extension_array_equal(v1, v2)
- v1base = v1._data.base
- v2base = v2._data.base
+ v1base = v1._ndarray.base
+ v2base = v2._ndarray.base
assert v1base is None or (id(v1base) != id(v2base))
# with nan
diff --git a/pandas/tests/frame/methods/test_rank.py b/pandas/tests/frame/methods/test_rank.py
index 5f648c76d0aa4..b533fc12f4a79 100644
--- a/pandas/tests/frame/methods/test_rank.py
+++ b/pandas/tests/frame/methods/test_rank.py
@@ -247,7 +247,6 @@ def test_rank_methods_frame(self):
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("dtype", ["O", "f8", "i8"])
- @pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_rank_descending(self, method, dtype):
if "i" in dtype:
df = self.df.dropna().astype(dtype)
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index 8aedac036c2c9..ca1c7b8d71071 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -1185,7 +1185,6 @@ def test_zero_len_frame_with_series_corner_cases():
tm.assert_frame_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_frame_single_columns_object_sum_axis_1():
# GH 13758
data = {
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index 6c6a923e363ae..4be754994bb28 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -145,7 +145,6 @@ class TestDataFrameAnalytics:
# ---------------------------------------------------------------------
# Reductions
- @pytest.mark.filterwarnings("ignore:Dropping of nuisance:FutureWarning")
@pytest.mark.parametrize("axis", [0, 1])
@pytest.mark.parametrize(
"opname",
@@ -186,7 +185,6 @@ def test_stat_op_api_float_string_frame(self, float_string_frame, axis, opname):
if opname != "nunique":
getattr(float_string_frame, opname)(axis=axis, numeric_only=True)
- @pytest.mark.filterwarnings("ignore:Dropping of nuisance:FutureWarning")
@pytest.mark.parametrize("axis", [0, 1])
@pytest.mark.parametrize(
"opname",
@@ -283,9 +281,6 @@ def kurt(x):
assert_stat_op_calc("skew", skewness, float_frame_with_na)
assert_stat_op_calc("kurt", kurt, float_frame_with_na)
- # TODO: Ensure warning isn't emitted in the first place
- # ignore mean of empty slice and all-NaN
- @pytest.mark.filterwarnings("ignore::RuntimeWarning")
def test_median(self, float_frame_with_na, int_frame):
def wrapper(x):
if isna(x).any():
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index 86c8e36cb7bd4..4664052196797 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -286,7 +286,9 @@ def test_repr_column_name_unicode_truncation_bug(self):
with option_context("display.max_columns", 20):
assert "StringCol" in repr(df)
- @pytest.mark.filterwarnings("ignore::FutureWarning")
+ @pytest.mark.filterwarnings(
+ "ignore:.*DataFrame.to_latex` is expected to utilise:FutureWarning"
+ )
def test_latex_repr(self):
result = r"""\begin{tabular}{llll}
\toprule
diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py
index a06304af7a2d0..f1adff58325ce 100644
--- a/pandas/tests/frame/test_subclass.py
+++ b/pandas/tests/frame/test_subclass.py
@@ -577,7 +577,6 @@ def stretch(row):
assert not isinstance(result, tm.SubclassedDataFrame)
tm.assert_series_equal(result, expected)
- @pytest.mark.filterwarnings("ignore:.*None will no longer:FutureWarning")
def test_subclassed_reductions(self, all_reductions):
# GH 25596
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 03b917edd357b..b1a4eb3fb7dd8 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -1075,7 +1075,6 @@ def test_mangle_series_groupby(self):
tm.assert_frame_equal(result, expected)
@pytest.mark.xfail(reason="GH-26611. kwargs for multi-agg.")
- @pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning")
def test_with_kwargs(self):
f1 = lambda x, y, b=1: x.sum() + y + b
f2 = lambda x, y, b=2: x.sum() + y * b
diff --git a/pandas/tests/groupby/aggregate/test_other.py b/pandas/tests/groupby/aggregate/test_other.py
index 6a89c72354d04..eb667016b1e62 100644
--- a/pandas/tests/groupby/aggregate/test_other.py
+++ b/pandas/tests/groupby/aggregate/test_other.py
@@ -25,7 +25,6 @@
from pandas.io.formats.printing import pprint_thing
-@pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning")
def test_agg_partial_failure_raises():
# GH#43741
diff --git a/pandas/tests/groupby/test_allowlist.py b/pandas/tests/groupby/test_allowlist.py
index 5a130da4937fd..1d18e7dc6c2cf 100644
--- a/pandas/tests/groupby/test_allowlist.py
+++ b/pandas/tests/groupby/test_allowlist.py
@@ -70,7 +70,6 @@ def raw_frame():
@pytest.mark.parametrize("axis", [0, 1])
@pytest.mark.parametrize("skipna", [True, False])
@pytest.mark.parametrize("sort", [True, False])
-@pytest.mark.filterwarnings("ignore:The default value of numeric_only:FutureWarning")
def test_regression_allowlist_methods(raw_frame, op, axis, skipna, sort):
# GH6944
# GH 17537
diff --git a/pandas/tests/groupby/test_any_all.py b/pandas/tests/groupby/test_any_all.py
index 3f61a4ece66c0..e49238a9e6656 100644
--- a/pandas/tests/groupby/test_any_all.py
+++ b/pandas/tests/groupby/test_any_all.py
@@ -171,7 +171,6 @@ def test_object_type_missing_vals(bool_agg_func, data, expected_res, frame_or_se
tm.assert_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning")
@pytest.mark.parametrize("bool_agg_func", ["any", "all"])
def test_object_NA_raises_with_skipna_false(bool_agg_func):
# GH#37501
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index b35c4158bf420..b9e2bba0b0d31 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -311,7 +311,6 @@ def test_apply(ordered):
tm.assert_series_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:.*value of numeric_only.*:FutureWarning")
def test_observed(observed):
# multiple groupers, don't re-expand the output space
# of the grouper
@@ -1316,7 +1315,6 @@ def test_groupby_categorical_axis_1(code):
tm.assert_frame_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_groupby_cat_preserves_structure(observed, ordered):
# GH 28787
df = DataFrame(
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index ef39aabd83d22..7e5bfff53054a 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -137,9 +137,6 @@ def df(self):
)
return df
- @pytest.mark.filterwarnings(
- "ignore:The default value of numeric_only:FutureWarning"
- )
@pytest.mark.parametrize("method", ["mean", "median"])
def test_averages(self, df, method):
# mean / median
@@ -217,9 +214,6 @@ def test_first_last(self, df, method):
self._check(df, method, expected_columns, expected_columns_numeric)
- @pytest.mark.filterwarnings(
- "ignore:The default value of numeric_only:FutureWarning"
- )
@pytest.mark.parametrize("method", ["sum", "cumsum"])
def test_sum_cumsum(self, df, method):
@@ -233,9 +227,6 @@ def test_sum_cumsum(self, df, method):
self._check(df, method, expected_columns, expected_columns_numeric)
- @pytest.mark.filterwarnings(
- "ignore:The default value of numeric_only:FutureWarning"
- )
@pytest.mark.parametrize("method", ["prod", "cumprod"])
def test_prod_cumprod(self, df, method):
@@ -496,7 +487,6 @@ def test_groupby_non_arithmetic_agg_int_like_precision(i):
],
)
@pytest.mark.parametrize("numeric_only", [True, False])
-@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_idxmin_idxmax_returns_int_types(func, values, numeric_only):
# GH 25444
df = DataFrame(
@@ -1610,7 +1600,6 @@ def test_corrwith_with_1_axis():
tm.assert_series_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:.* is deprecated:FutureWarning")
def test_multiindex_group_all_columns_when_empty(groupby_func):
# GH 32464
df = DataFrame({"a": [], "b": [], "c": []}).set_index(["a", "b", "c"])
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index a7104c2e21049..59a8141be7db4 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -1263,8 +1263,6 @@ def test_groupby_mixed_type_columns():
tm.assert_frame_equal(result, expected)
-# TODO: Ensure warning isn't emitted in the first place
-@pytest.mark.filterwarnings("ignore:Mean of:RuntimeWarning")
def test_cython_grouper_series_bug_noncontig():
arr = np.empty((100, 100))
arr.fill(np.nan)
@@ -1879,9 +1877,6 @@ def test_pivot_table_values_key_error():
@pytest.mark.parametrize(
"op", ["idxmax", "idxmin", "min", "max", "sum", "prod", "skew"]
)
-@pytest.mark.filterwarnings("ignore:The default value of numeric_only:FutureWarning")
-@pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning")
-@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_empty_groupby(columns, keys, values, method, op, request, using_array_manager):
# GH8093 & GH26411
override_dtype = None
diff --git a/pandas/tests/indexes/timedeltas/test_constructors.py b/pandas/tests/indexes/timedeltas/test_constructors.py
index 5c23d1dfd83c8..73b742591cd10 100644
--- a/pandas/tests/indexes/timedeltas/test_constructors.py
+++ b/pandas/tests/indexes/timedeltas/test_constructors.py
@@ -47,7 +47,7 @@ def test_int64_nocopy(self):
# and copy=False
arr = np.arange(10, dtype=np.int64)
tdi = TimedeltaIndex(arr, copy=False)
- assert tdi._data._data.base is arr
+ assert tdi._data._ndarray.base is arr
def test_infer_from_tdi(self):
# GH#23539
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index b3e59da4b0130..727d0bade2c2c 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -883,7 +883,7 @@ def test_setitem_dt64_string_scalar(self, tz_naive_fixture, indexer_sli):
if tz is None:
# TODO(EA2D): we can make this no-copy in tz-naive case too
assert ser.dtype == dti.dtype
- assert ser._values._data is values._data
+ assert ser._values._ndarray is values._ndarray
else:
assert ser._values is values
@@ -911,7 +911,7 @@ def test_setitem_dt64_string_values(self, tz_naive_fixture, indexer_sli, key, bo
if tz is None:
# TODO(EA2D): we can make this no-copy in tz-naive case too
assert ser.dtype == dti.dtype
- assert ser._values._data is values._data
+ assert ser._values._ndarray is values._ndarray
else:
assert ser._values is values
@@ -925,7 +925,7 @@ def test_setitem_td64_scalar(self, indexer_sli, scalar):
values._validate_setitem_value(scalar)
indexer_sli(ser)[0] = scalar
- assert ser._values._data is values._data
+ assert ser._values._ndarray is values._ndarray
@pytest.mark.parametrize("box", [list, np.array, pd.array, pd.Categorical, Index])
@pytest.mark.parametrize(
@@ -945,7 +945,7 @@ def test_setitem_td64_string_values(self, indexer_sli, key, box):
values._validate_setitem_value(newvals)
indexer_sli(ser)[key] = newvals
- assert ser._values._data is values._data
+ assert ser._values._ndarray is values._ndarray
def test_extension_array_cross_section():
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index ecf247efd74bf..8d2d165e991f5 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -469,7 +469,7 @@ def test_copy(self, mgr):
assert cp_blk.values.base is blk.values.base
else:
# DatetimeTZBlock has DatetimeIndex values
- assert cp_blk.values._data.base is blk.values._data.base
+ assert cp_blk.values._ndarray.base is blk.values._ndarray.base
# copy(deep=True) consolidates, so the block-wise assertions will
# fail is mgr is not consolidated
diff --git a/pandas/tests/io/test_feather.py b/pandas/tests/io/test_feather.py
index eaeb769a94c38..e58df00c65608 100644
--- a/pandas/tests/io/test_feather.py
+++ b/pandas/tests/io/test_feather.py
@@ -10,10 +10,6 @@
pyarrow = pytest.importorskip("pyarrow", minversion="1.0.1")
-filter_sparse = pytest.mark.filterwarnings("ignore:The Sparse")
-
-
-@filter_sparse
@pytest.mark.single_cpu
class TestFeather:
def check_error_on_write(self, df, exc, err_msg):
diff --git a/pandas/tests/series/indexing/test_getitem.py b/pandas/tests/series/indexing/test_getitem.py
index faaa61e84a351..86fabf0ed0ef2 100644
--- a/pandas/tests/series/indexing/test_getitem.py
+++ b/pandas/tests/series/indexing/test_getitem.py
@@ -471,7 +471,7 @@ def test_getitem_boolean_dt64_copies(self):
ser = Series(dti._data)
res = ser[key]
- assert res._values._data.base is None
+ assert res._values._ndarray.base is None
# compare with numeric case for reference
ser2 = Series(range(4))
diff --git a/pandas/tests/series/indexing/test_setitem.py b/pandas/tests/series/indexing/test_setitem.py
index 7d77a755e082b..d731aeee1b39b 100644
--- a/pandas/tests/series/indexing/test_setitem.py
+++ b/pandas/tests/series/indexing/test_setitem.py
@@ -418,14 +418,14 @@ def test_setitem_invalidates_datetime_index_freq(self):
ts = dti[1]
ser = Series(dti)
assert ser._values is not dti
- assert ser._values._data.base is not dti._data._data.base
+ assert ser._values._ndarray.base is not dti._data._ndarray.base
assert dti.freq == "D"
ser.iloc[1] = NaT
assert ser._values.freq is None
# check that the DatetimeIndex was not altered in place
assert ser._values is not dti
- assert ser._values._data.base is not dti._data._data.base
+ assert ser._values._ndarray.base is not dti._data._ndarray.base
assert dti[1] == ts
assert dti.freq == "D"
@@ -435,9 +435,9 @@ def test_dt64tz_setitem_does_not_mutate_dti(self):
ts = dti[0]
ser = Series(dti)
assert ser._values is not dti
- assert ser._values._data.base is not dti._data._data.base
+ assert ser._values._ndarray.base is not dti._data._ndarray.base
assert ser._mgr.arrays[0] is not dti
- assert ser._mgr.arrays[0]._data.base is not dti._data._data.base
+ assert ser._mgr.arrays[0]._ndarray.base is not dti._data._ndarray.base
ser[::3] = NaT
assert ser[0] is NaT
diff --git a/pandas/tests/series/indexing/test_xs.py b/pandas/tests/series/indexing/test_xs.py
index aaccad0f2bd70..a67f3ec708f24 100644
--- a/pandas/tests/series/indexing/test_xs.py
+++ b/pandas/tests/series/indexing/test_xs.py
@@ -11,7 +11,7 @@
def test_xs_datetimelike_wrapping():
# GH#31630 a case where we shouldn't wrap datetime64 in Timestamp
- arr = date_range("2016-01-01", periods=3)._data._data
+ arr = date_range("2016-01-01", periods=3)._data._ndarray
ser = Series(arr, dtype=object)
for i in range(len(ser)):
diff --git a/pandas/tests/util/test_show_versions.py b/pandas/tests/util/test_show_versions.py
index 99c7e0a1a8956..439971084fba8 100644
--- a/pandas/tests/util/test_show_versions.py
+++ b/pandas/tests/util/test_show_versions.py
@@ -20,14 +20,6 @@
# html5lib
"ignore:Using or importing the ABCs from:DeprecationWarning"
)
-@pytest.mark.filterwarnings(
- # fastparquet
- "ignore:pandas.core.index is deprecated:FutureWarning"
-)
-@pytest.mark.filterwarnings(
- # pandas_datareader
- "ignore:pandas.util.testing is deprecated:FutureWarning"
-)
@pytest.mark.filterwarnings(
# https://github.com/pandas-dev/pandas/issues/35252
"ignore:Distutils:UserWarning"
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49979 | 2022-11-30T23:11:51Z | 2022-12-10T19:16:30Z | 2022-12-10T19:16:30Z | 2022-12-10T22:22:55Z |
REF: Refactor merge array conversions for factorizer | diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index e818e367ca83d..b6fa5da857910 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -2364,35 +2364,7 @@ def _factorize_keys(
# "_values_for_factorize"
rk, _ = rk._values_for_factorize() # type: ignore[union-attr]
- klass: type[libhashtable.Factorizer]
- if is_numeric_dtype(lk.dtype):
- if not is_dtype_equal(lk, rk):
- dtype = find_common_type([lk.dtype, rk.dtype])
- if isinstance(dtype, ExtensionDtype):
- cls = dtype.construct_array_type()
- if not isinstance(lk, ExtensionArray):
- lk = cls._from_sequence(lk, dtype=dtype, copy=False)
- else:
- lk = lk.astype(dtype)
-
- if not isinstance(rk, ExtensionArray):
- rk = cls._from_sequence(rk, dtype=dtype, copy=False)
- else:
- rk = rk.astype(dtype)
- else:
- lk = lk.astype(dtype)
- rk = rk.astype(dtype)
- if isinstance(lk, BaseMaskedArray):
- # Invalid index type "type" for "Dict[Type[object], Type[Factorizer]]";
- # expected type "Type[object]"
- klass = _factorizers[lk.dtype.type] # type: ignore[index]
- else:
- klass = _factorizers[lk.dtype.type]
-
- else:
- klass = libhashtable.ObjectFactorizer
- lk = ensure_object(lk)
- rk = ensure_object(rk)
+ klass, lk, rk = _convert_arrays_and_get_rizer_klass(lk, rk)
rizer = klass(max(len(lk), len(rk)))
@@ -2433,6 +2405,41 @@ def _factorize_keys(
return llab, rlab, count
+def _convert_arrays_and_get_rizer_klass(
+ lk: ArrayLike, rk: ArrayLike
+) -> tuple[type[libhashtable.Factorizer], ArrayLike, ArrayLike]:
+ klass: type[libhashtable.Factorizer]
+ if is_numeric_dtype(lk.dtype):
+ if not is_dtype_equal(lk, rk):
+ dtype = find_common_type([lk.dtype, rk.dtype])
+ if isinstance(dtype, ExtensionDtype):
+ cls = dtype.construct_array_type()
+ if not isinstance(lk, ExtensionArray):
+ lk = cls._from_sequence(lk, dtype=dtype, copy=False)
+ else:
+ lk = lk.astype(dtype)
+
+ if not isinstance(rk, ExtensionArray):
+ rk = cls._from_sequence(rk, dtype=dtype, copy=False)
+ else:
+ rk = rk.astype(dtype)
+ else:
+ lk = lk.astype(dtype)
+ rk = rk.astype(dtype)
+ if isinstance(lk, BaseMaskedArray):
+ # Invalid index type "type" for "Dict[Type[object], Type[Factorizer]]";
+ # expected type "Type[object]"
+ klass = _factorizers[lk.dtype.type] # type: ignore[index]
+ else:
+ klass = _factorizers[lk.dtype.type]
+
+ else:
+ klass = libhashtable.ObjectFactorizer
+ lk = ensure_object(lk)
+ rk = ensure_object(rk)
+ return klass, lk, rk
+
+
def _sort_labels(
uniques: np.ndarray, left: npt.NDArray[np.intp], right: npt.NDArray[np.intp]
) -> tuple[npt.NDArray[np.intp], npt.NDArray[np.intp]]:
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49978 | 2022-11-30T21:51:04Z | 2022-12-01T18:59:57Z | 2022-12-01T18:59:56Z | 2022-12-10T20:00:59Z |
DEP: Enforce deprecation of mangle_dupe_cols | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 53bcf6ffd7a8a..a073087f6ec8f 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -155,15 +155,6 @@ usecols : list-like or callable, default ``None``
when using the c engine. The Python engine loads the data first before deciding
which columns to drop.
-mangle_dupe_cols : boolean, default ``True``
- Duplicate columns will be specified as 'X', 'X.1'...'X.N', rather than 'X'...'X'.
- Passing in ``False`` will cause data to be overwritten if there are duplicate
- names in the columns.
-
- .. deprecated:: 1.5.0
- The argument was never implemented, and a new argument where the
- renaming pattern can be specified will be added instead.
-
General parsing configuration
+++++++++++++++++++++++++++++
@@ -587,10 +578,6 @@ If the header is in a row other than the first, pass the row number to
Duplicate names parsing
'''''''''''''''''''''''
- .. deprecated:: 1.5.0
- ``mangle_dupe_cols`` was never implemented, and a new argument where the
- renaming pattern can be specified will be added instead.
-
If the file or header contains duplicate names, pandas will by default
distinguish between them so as to prevent overwriting data:
@@ -599,8 +586,7 @@ distinguish between them so as to prevent overwriting data:
data = "a,b,a\n0,1,2\n3,4,5"
pd.read_csv(StringIO(data))
-There is no more duplicate data because ``mangle_dupe_cols=True`` by default,
-which modifies a series of duplicate columns 'X', ..., 'X' to become
+There is no more duplicate data because duplicate columns 'X', ..., 'X' become
'X', 'X.1', ..., 'X.N'.
.. _io.usecols:
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 1fb9a81e85a83..df82bcd37e971 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -434,6 +434,7 @@ Removal of prior version deprecations/changes
- Removed argument ``inplace`` from :meth:`Categorical.remove_unused_categories` (:issue:`37918`)
- Disallow passing non-round floats to :class:`Timestamp` with ``unit="M"`` or ``unit="Y"`` (:issue:`47266`)
- Remove keywords ``convert_float`` and ``mangle_dupe_cols`` from :func:`read_excel` (:issue:`41176`)
+- Remove keyword ``mangle_dupe_cols`` from :func:`read_csv` and :func:`read_table` (:issue:`48137`)
- Removed ``errors`` keyword from :meth:`DataFrame.where`, :meth:`Series.where`, :meth:`DataFrame.mask` and :meth:`Series.mask` (:issue:`47728`)
- Disallow passing non-keyword arguments to :func:`read_excel` except ``io`` and ``sheet_name`` (:issue:`34418`)
- Disallow passing non-keyword arguments to :meth:`DataFrame.drop` and :meth:`Series.drop` except ``labels`` (:issue:`41486`)
diff --git a/pandas/_libs/parsers.pyi b/pandas/_libs/parsers.pyi
index d888511916e27..60f5304c39ad9 100644
--- a/pandas/_libs/parsers.pyi
+++ b/pandas/_libs/parsers.pyi
@@ -58,7 +58,6 @@ class TextReader:
skiprows=...,
skipfooter: int = ..., # int64_t
verbose: bool = ...,
- mangle_dupe_cols: bool = ...,
float_precision: Literal["round_trip", "legacy", "high"] | None = ...,
skip_blank_lines: bool = ...,
encoding_errors: bytes | str = ...,
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 92874ef201246..85d74e201d5bb 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -317,7 +317,7 @@ cdef class TextReader:
object handle
object orig_header
bint na_filter, keep_default_na, verbose, has_usecols, has_mi_columns
- bint mangle_dupe_cols, allow_leading_cols
+ bint allow_leading_cols
uint64_t parser_start # this is modified after __init__
list clocks
const char *encoding_errors
@@ -373,7 +373,6 @@ cdef class TextReader:
skiprows=None,
skipfooter=0, # int64_t
bint verbose=False,
- bint mangle_dupe_cols=True,
float_precision=None,
bint skip_blank_lines=True,
encoding_errors=b"strict",
@@ -390,8 +389,6 @@ cdef class TextReader:
self.parser = parser_new()
self.parser.chunksize = tokenize_chunksize
- self.mangle_dupe_cols = mangle_dupe_cols
-
# For timekeeping
self.clocks = []
@@ -680,7 +677,7 @@ cdef class TextReader:
this_header.append(name)
- if not self.has_mi_columns and self.mangle_dupe_cols:
+ if not self.has_mi_columns:
# Ensure that regular columns are used before unnamed ones
# to keep given names and mangle unnamed columns
col_loop_order = [i for i in range(len(this_header))
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index b0f3754271894..7b9794dd434e6 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -125,7 +125,6 @@ def __init__(self, kwds) -> None:
self.true_values = kwds.get("true_values")
self.false_values = kwds.get("false_values")
- self.mangle_dupe_cols = kwds.get("mangle_dupe_cols", True)
self.infer_datetime_format = kwds.pop("infer_datetime_format", False)
self.cache_dates = kwds.pop("cache_dates", True)
@@ -333,34 +332,28 @@ def extract(r):
return names, index_names, col_names, passed_names
@final
- def _maybe_dedup_names(self, names: Sequence[Hashable]) -> Sequence[Hashable]:
- # see gh-7160 and gh-9424: this helps to provide
- # immediate alleviation of the duplicate names
- # issue and appears to be satisfactory to users,
- # but ultimately, not needing to butcher the names
- # would be nice!
- if self.mangle_dupe_cols:
- names = list(names) # so we can index
- counts: DefaultDict[Hashable, int] = defaultdict(int)
- is_potential_mi = _is_potential_multi_index(names, self.index_col)
-
- for i, col in enumerate(names):
- cur_count = counts[col]
-
- while cur_count > 0:
- counts[col] = cur_count + 1
+ def _dedup_names(self, names: Sequence[Hashable]) -> Sequence[Hashable]:
+ names = list(names) # so we can index
+ counts: DefaultDict[Hashable, int] = defaultdict(int)
+ is_potential_mi = _is_potential_multi_index(names, self.index_col)
- if is_potential_mi:
- # for mypy
- assert isinstance(col, tuple)
- col = col[:-1] + (f"{col[-1]}.{cur_count}",)
- else:
- col = f"{col}.{cur_count}"
- cur_count = counts[col]
+ for i, col in enumerate(names):
+ cur_count = counts[col]
- names[i] = col
+ while cur_count > 0:
counts[col] = cur_count + 1
+ if is_potential_mi:
+ # for mypy
+ assert isinstance(col, tuple)
+ col = col[:-1] + (f"{col[-1]}.{cur_count}",)
+ else:
+ col = f"{col}.{cur_count}"
+ cur_count = counts[col]
+
+ names[i] = col
+ counts[col] = cur_count + 1
+
return names
@final
@@ -1182,7 +1175,6 @@ def converter(*date_cols):
"verbose": False,
"encoding": None,
"compression": None,
- "mangle_dupe_cols": True,
"infer_datetime_format": False,
"skip_blank_lines": True,
"encoding_errors": "strict",
diff --git a/pandas/io/parsers/c_parser_wrapper.py b/pandas/io/parsers/c_parser_wrapper.py
index c1f2e6ddb2388..e0daf157d3d3a 100644
--- a/pandas/io/parsers/c_parser_wrapper.py
+++ b/pandas/io/parsers/c_parser_wrapper.py
@@ -227,7 +227,7 @@ def read(
except StopIteration:
if self._first_chunk:
self._first_chunk = False
- names = self._maybe_dedup_names(self.orig_names)
+ names = self._dedup_names(self.orig_names)
index, columns, col_dict = self._get_empty_meta(
names,
self.index_col,
@@ -281,7 +281,7 @@ def read(
if self.usecols is not None:
names = self._filter_usecols(names)
- names = self._maybe_dedup_names(names)
+ names = self._dedup_names(names)
# rename dict keys
data_tups = sorted(data.items())
@@ -303,7 +303,7 @@ def read(
# assert for mypy, orig_names is List or None, None would error in list(...)
assert self.orig_names is not None
names = list(self.orig_names)
- names = self._maybe_dedup_names(names)
+ names = self._dedup_names(names)
if self.usecols is not None:
names = self._filter_usecols(names)
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index 121c52ba1c323..aebf285e669bb 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -259,7 +259,7 @@ def read(
columns: Sequence[Hashable] = list(self.orig_names)
if not len(content): # pragma: no cover
# DataFrame with the right metadata, even though it's length 0
- names = self._maybe_dedup_names(self.orig_names)
+ names = self._dedup_names(self.orig_names)
# error: Cannot determine type of 'index_col'
index, columns, col_dict = self._get_empty_meta(
names,
@@ -293,7 +293,7 @@ def _exclude_implicit_index(
self,
alldata: list[np.ndarray],
) -> tuple[Mapping[Hashable, np.ndarray], Sequence[Hashable]]:
- names = self._maybe_dedup_names(self.orig_names)
+ names = self._dedup_names(self.orig_names)
offset = 0
if self._implicit_index:
@@ -424,7 +424,7 @@ def _infer_columns(
else:
this_columns.append(c)
- if not have_mi_columns and self.mangle_dupe_cols:
+ if not have_mi_columns:
counts: DefaultDict = defaultdict(int)
# Ensure that regular columns are used before unnamed ones
# to keep given names and mangle unnamed columns
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index 575390e9b97a4..d9c2403a19d0c 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -41,10 +41,7 @@
AbstractMethodError,
ParserWarning,
)
-from pandas.util._decorators import (
- Appender,
- deprecate_kwarg,
-)
+from pandas.util._decorators import Appender
from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
@@ -152,14 +149,6 @@
example of a valid callable argument would be ``lambda x: x.upper() in
['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster
parsing time and lower memory usage.
-mangle_dupe_cols : bool, default True
- Duplicate columns will be specified as 'X', 'X.1', ...'X.N', rather than
- 'X'...'X'. Passing in False will cause data to be overwritten if there
- are duplicate names in the columns.
-
- .. deprecated:: 1.5.0
- Not implemented, and a new argument to specify the pattern for the
- names of duplicated columns will be added instead
dtype : Type name or dict of column -> type, optional
Data type for data or columns. E.g. {{'a': np.float64, 'b': np.int32,
'c': 'Int64'}}
@@ -604,7 +593,6 @@ def read_csv(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -661,7 +649,6 @@ def read_csv(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -718,7 +705,6 @@ def read_csv(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -775,7 +761,6 @@ def read_csv(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -821,7 +806,6 @@ def read_csv(
...
-@deprecate_kwarg(old_arg_name="mangle_dupe_cols", new_arg_name=None)
@Appender(
_doc_read_csv_and_table.format(
func_name="read_csv",
@@ -842,7 +826,6 @@ def read_csv(
names: Sequence[Hashable] | None | lib.NoDefault = lib.no_default,
index_col: IndexLabel | Literal[False] | None = None,
usecols=None,
- mangle_dupe_cols: bool = True,
# General Parsing Configuration
dtype: DtypeArg | None = None,
engine: CSVEngine | None = None,
@@ -923,7 +906,6 @@ def read_table(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -980,7 +962,6 @@ def read_table(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -1037,7 +1018,6 @@ def read_table(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -1094,7 +1074,6 @@ def read_table(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -1140,7 +1119,6 @@ def read_table(
...
-@deprecate_kwarg(old_arg_name="mangle_dupe_cols", new_arg_name=None)
@Appender(
_doc_read_csv_and_table.format(
func_name="read_table",
@@ -1161,7 +1139,6 @@ def read_table(
names: Sequence[Hashable] | None | lib.NoDefault = lib.no_default,
index_col: IndexLabel | Literal[False] | None = None,
usecols=None,
- mangle_dupe_cols: bool = True,
# General Parsing Configuration
dtype: DtypeArg | None = None,
engine: CSVEngine | None = None,
@@ -1406,9 +1383,6 @@ def _get_options_with_defaults(self, engine: CSVEngine) -> dict[str, Any]:
f"The {repr(argname)} option is not supported with the "
f"'pyarrow' engine"
)
- if argname == "mangle_dupe_cols" and value is False:
- # GH12935
- raise ValueError("Setting mangle_dupe_cols=False is not supported yet")
options[argname] = value
for argname, default in _c_parser_defaults.items():
diff --git a/pandas/tests/io/parser/test_mangle_dupes.py b/pandas/tests/io/parser/test_mangle_dupes.py
index 13b419c3390fc..5709e7e4027e8 100644
--- a/pandas/tests/io/parser/test_mangle_dupes.py
+++ b/pandas/tests/io/parser/test_mangle_dupes.py
@@ -14,22 +14,11 @@
@skip_pyarrow
-@pytest.mark.parametrize("kwargs", [{}, {"mangle_dupe_cols": True}])
-def test_basic(all_parsers, kwargs):
- # TODO: add test for condition "mangle_dupe_cols=False"
- # once it is actually supported (gh-12935)
+def test_basic(all_parsers):
parser = all_parsers
data = "a,a,b,b,b\n1,2,3,4,5"
- if "mangle_dupe_cols" in kwargs:
- with tm.assert_produces_warning(
- FutureWarning,
- match="the 'mangle_dupe_cols' keyword is deprecated",
- check_stacklevel=False,
- ):
- result = parser.read_csv(StringIO(data), sep=",", **kwargs)
- else:
- result = parser.read_csv(StringIO(data), sep=",", **kwargs)
+ result = parser.read_csv(StringIO(data), sep=",")
expected = DataFrame([[1, 2, 3, 4, 5]], columns=["a", "a.1", "b", "b.1", "b.2"])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/test_unsupported.py b/pandas/tests/io/parser/test_unsupported.py
index 578cea44a8ed6..185dc733df3c2 100644
--- a/pandas/tests/io/parser/test_unsupported.py
+++ b/pandas/tests/io/parser/test_unsupported.py
@@ -34,14 +34,10 @@ class TestUnsupportedFeatures:
def test_mangle_dupe_cols_false(self):
# see gh-12935
data = "a b c\n1 2 3"
- msg = "is not supported"
for engine in ("c", "python"):
- with tm.assert_produces_warning(
- FutureWarning, match="the 'mangle_dupe_cols' keyword is deprecated"
- ):
- with pytest.raises(ValueError, match=msg):
- read_csv(StringIO(data), engine=engine, mangle_dupe_cols=False)
+ with pytest.raises(TypeError, match="unexpected keyword"):
+ read_csv(StringIO(data), engine=engine, mangle_dupe_cols=True)
def test_c_engine(self):
# see gh-6607
| #48137 | https://api.github.com/repos/pandas-dev/pandas/pulls/49977 | 2022-11-30T21:42:53Z | 2022-11-30T23:40:46Z | 2022-11-30T23:40:46Z | 2022-12-10T20:00:16Z |
Standard library imports | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 0779f9c95f7b4..87885b9c8a0e3 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -313,3 +313,9 @@ repos:
additional_dependencies:
- autotyping==22.9.0
- libcst==0.4.7
+ - id: stdlib-imports
+ name: Place standard library imports at top of file
+ entry: python -m scripts.standard_library_imports_should_be_global
+ language: python
+ types: [python]
+ exclude: ^versionneer\.py$
diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 3c1362b1ac83e..985545f46a8c0 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -45,15 +45,12 @@ if [[ -z "$CHECK" || "$CHECK" == "code" ]]; then
python -W error -c "
import sys
import pandas
-
-blocklist = {'bs4', 'gcsfs', 'html5lib', 'http', 'ipython', 'jinja2', 'hypothesis',
- 'lxml', 'matplotlib', 'openpyxl', 'py', 'pytest', 's3fs', 'scipy',
- 'tables', 'urllib.request', 'xlrd', 'xlsxwriter'}
+from scripts.standard_library_imports_should_be_global import BLOCKLIST
# GH#28227 for some of these check for top-level modules, while others are
# more specific (e.g. urllib.request)
import_mods = set(m.split('.')[0] for m in sys.modules) | set(sys.modules)
-mods = blocklist & import_mods
+mods = BLOCKLIST & import_mods
if mods:
sys.stderr.write('err: pandas should not import: {}\n'.format(', '.join(mods)))
sys.exit(len(mods))
diff --git a/scripts/standard_library_imports_should_be_global.py b/scripts/standard_library_imports_should_be_global.py
new file mode 100644
index 0000000000000..99143b4445c88
--- /dev/null
+++ b/scripts/standard_library_imports_should_be_global.py
@@ -0,0 +1,98 @@
+"""
+Check that standard library imports appear at the top of modules.
+
+Imports within functions should only be used to prevent circular imports
+, for optional dependencies, or if an import is slow.
+
+This is meant to be run as a pre-commit hook - to run it manually, you can do:
+
+ pre-commit run stdlib-imports --all-files
+
+"""
+import argparse
+import ast
+from ast import NodeVisitor
+import importlib
+import sys
+
+BLOCKLIST = {
+ "bs4",
+ "gcsfs",
+ "html5lib",
+ "http",
+ "ipython",
+ "jinja2",
+ "hypothesis",
+ "lxml",
+ "matplotlib",
+ "openpyxl",
+ "py",
+ "pytest",
+ "s3fs",
+ "scipy",
+ "tables",
+ "urllib.request",
+ "xlrd",
+ "xlsxwriter",
+}
+
+
+class Visitor(NodeVisitor):
+ def __init__(self, file) -> None:
+ self.ret = 0
+ self.file = file
+
+ def visit_FunctionDef(self, node: ast.FunctionDef) -> None:
+ for _node in ast.walk(node):
+ if (
+ isinstance(_node, ast.ImportFrom)
+ and _node.__module__ != "__main__"
+ and _node.module not in BLOCKLIST
+ and _node.module.split(".")[0] not in BLOCKLIST
+ ):
+ try:
+ importlib.import_module(_node.module)
+ except Exception as exp: # noqa: F841
+ pass
+ else:
+ print(
+ f"{self.file}:{_node.lineno}:{_node.col_offset} standard "
+ f"library import '{_node.module}' should be at the top of "
+ "the file"
+ )
+ self.ret = 1
+ elif isinstance(_node, ast.Import):
+ for _name in _node.names:
+ if (
+ _name.name == "__main__"
+ or _name.name in BLOCKLIST
+ or _name.name.split(".")[0] in BLOCKLIST
+ ):
+ continue
+ try:
+ importlib.import_module(_name.name)
+ except Exception as exp: # noqa: F841
+ pass
+ else:
+ print(
+ f"{self.file}:{_node.lineno}:{_node.col_offset} standard "
+ f"library import '{_name.name}' should be at the top of "
+ "the file"
+ )
+ self.ret = 1
+ self.generic_visit(node)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ parser.add_argument("paths", nargs="*")
+ args = parser.parse_args()
+ ret = 0
+ for file in args.paths:
+ with open(file, encoding="utf-8") as fd:
+ content = fd.read()
+ tree = ast.parse(content)
+ visitor = Visitor(file)
+ visitor.visit(tree)
+ ret |= visitor.ret
+ sys.exit(ret)
| This is part 1 of 2 making style changes to GH49647 | https://api.github.com/repos/pandas-dev/pandas/pulls/49976 | 2022-11-30T20:28:13Z | 2022-12-01T18:25:04Z | null | 2022-12-01T18:25:05Z |
add multi index with categories test | diff --git a/pandas/tests/indexes/multi/test_setops.py b/pandas/tests/indexes/multi/test_setops.py
index d0345861d6778..aa6e472a7f5a5 100644
--- a/pandas/tests/indexes/multi/test_setops.py
+++ b/pandas/tests/indexes/multi/test_setops.py
@@ -692,6 +692,43 @@ def test_intersection_lexsort_depth(levels1, levels2, codes1, codes2, names):
assert mi_int._lexsort_depth == 2
+@pytest.mark.parametrize(
+ "a",
+ [pd.Categorical(["a", "b"], categories=["a", "b"]), ["a", "b"]],
+)
+@pytest.mark.parametrize(
+ "b",
+ [
+ pd.Categorical(["a", "b"], categories=["b", "a"]),
+ pd.Categorical(["a", "b"], categories=["b", "a"]),
+ ],
+)
+def test_intersection_with_non_lex_sorted_categories(a, b):
+ # GH#49974
+ other = ["1", "2"]
+
+ df1 = DataFrame({"x": a, "y": other})
+ df2 = DataFrame({"x": b, "y": other})
+
+ expected = MultiIndex.from_arrays([a, other], names=["x", "y"])
+
+ res1 = MultiIndex.from_frame(df1).intersection(
+ MultiIndex.from_frame(df2.sort_values(["x", "y"]))
+ )
+ res2 = MultiIndex.from_frame(df1).intersection(MultiIndex.from_frame(df2))
+ res3 = MultiIndex.from_frame(df1.sort_values(["x", "y"])).intersection(
+ MultiIndex.from_frame(df2)
+ )
+ res4 = MultiIndex.from_frame(df1.sort_values(["x", "y"])).intersection(
+ MultiIndex.from_frame(df2.sort_values(["x", "y"]))
+ )
+
+ tm.assert_index_equal(res1, expected)
+ tm.assert_index_equal(res2, expected)
+ tm.assert_index_equal(res3, expected)
+ tm.assert_index_equal(res4, expected)
+
+
@pytest.mark.parametrize("val", [pd.NA, 100])
def test_intersection_keep_ea_dtypes(val, any_numeric_ea_dtype):
# GH#48604
| This PR adds a test to address the issue in #49337, which has already been fixed on main
- [x] closes #49337
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Do you want an entry in the changelog since this is not a fix, just a test to ensure we don't run into this again?
<s>- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.</s>
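For context, a minimal sketch of the scenario the new test covers (assuming a pandas version where the fix has landed on main, i.e. >= 2.0):

```python
import pandas as pd

# Two MultiIndexes whose first level is categorical; the right one's
# categories are not lexicographically sorted (["b", "a"]).
left = pd.MultiIndex.from_arrays(
    [pd.Categorical(["a", "b"], categories=["a", "b"]), ["1", "2"]],
    names=["x", "y"],
)
right = pd.MultiIndex.from_arrays(
    [pd.Categorical(["a", "b"], categories=["b", "a"]), ["1", "2"]],
    names=["x", "y"],
)

# GH#49337: intersecting these used to misbehave when the categorical
# level's categories were not lex-sorted; both shared tuples should be kept.
result = left.intersection(right)
print(sorted(result.tolist()))
```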
| https://api.github.com/repos/pandas-dev/pandas/pulls/49975 | 2022-11-30T17:00:21Z | 2023-01-03T21:07:41Z | null | 2023-01-05T17:03:08Z |
PERF: ArrowExtensionArray.to_numpy | diff --git a/asv_bench/benchmarks/array.py b/asv_bench/benchmarks/array.py
index cb949637ea745..924040ff0648b 100644
--- a/asv_bench/benchmarks/array.py
+++ b/asv_bench/benchmarks/array.py
@@ -92,3 +92,41 @@ def time_setitem_slice(self, multiple_chunks):
def time_tolist(self, multiple_chunks):
self.array.tolist()
+
+
+class ArrowExtensionArray:
+
+ params = [
+ [
+ "boolean[pyarrow]",
+ "float64[pyarrow]",
+ "int64[pyarrow]",
+ "string[pyarrow]",
+ "timestamp[ns][pyarrow]",
+ ],
+ [False, True],
+ ]
+ param_names = ["dtype", "hasna"]
+
+ def setup(self, dtype, hasna):
+ N = 100_000
+ if dtype == "boolean[pyarrow]":
+ data = np.random.choice([True, False], N, replace=True)
+ elif dtype == "float64[pyarrow]":
+ data = np.random.randn(N)
+ elif dtype == "int64[pyarrow]":
+ data = np.arange(N)
+ elif dtype == "string[pyarrow]":
+ data = tm.rands_array(10, N)
+ elif dtype == "timestamp[ns][pyarrow]":
+ data = pd.date_range("2000-01-01", freq="s", periods=N)
+ else:
+ raise NotImplementedError
+
+ arr = pd.array(data, dtype=dtype)
+ if hasna:
+ arr[::2] = pd.NA
+ self.arr = arr
+
+ def time_to_numpy(self, dtype, hasna):
+ self.arr.to_numpy()
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 215e9c2a85bba..b828d18d1d700 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -748,6 +748,7 @@ Performance improvements
- Reduce memory usage of :meth:`DataFrame.to_pickle`/:meth:`Series.to_pickle` when using BZ2 or LZMA (:issue:`49068`)
- Performance improvement for :class:`~arrays.StringArray` constructor passing a numpy array with type ``np.str_`` (:issue:`49109`)
- Performance improvement in :meth:`~arrays.ArrowExtensionArray.factorize` (:issue:`49177`)
+- Performance improvement in :meth:`~arrays.ArrowExtensionArray.to_numpy` (:issue:`49973`)
- Performance improvement in :meth:`DataFrame.join` when joining on a subset of a :class:`MultiIndex` (:issue:`48611`)
- Performance improvement for :meth:`MultiIndex.intersection` (:issue:`48604`)
- Performance improvement in ``var`` for nullable dtypes (:issue:`48379`).
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 254ff8894b36c..d698c5eb11751 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -9,11 +9,13 @@
import numpy as np
+from pandas._libs import lib
from pandas._typing import (
ArrayLike,
Dtype,
FillnaOptions,
Iterator,
+ NpDtype,
PositionalIndexer,
SortKind,
TakeIndexer,
@@ -31,6 +33,7 @@
is_bool_dtype,
is_integer,
is_integer_dtype,
+ is_object_dtype,
is_scalar,
)
from pandas.core.dtypes.missing import isna
@@ -351,6 +354,10 @@ def __arrow_array__(self, type=None):
"""Convert myself to a pyarrow ChunkedArray."""
return self._data
+ def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
+ """Correctly construct numpy arrays when passed to `np.asarray()`."""
+ return self.to_numpy(dtype=dtype)
+
def __invert__(self: ArrowExtensionArrayT) -> ArrowExtensionArrayT:
return type(self)(pc.invert(self._data))
@@ -749,6 +756,33 @@ def take(
indices_array[indices_array < 0] += len(self._data)
return type(self)(self._data.take(indices_array))
+ @doc(ExtensionArray.to_numpy)
+ def to_numpy(
+ self,
+ dtype: npt.DTypeLike | None = None,
+ copy: bool = False,
+ na_value: object = lib.no_default,
+ ) -> np.ndarray:
+ if dtype is None and self._hasna:
+ dtype = object
+ if na_value is lib.no_default:
+ na_value = self.dtype.na_value
+
+ pa_type = self._data.type
+ if (
+ is_object_dtype(dtype)
+ or pa.types.is_timestamp(pa_type)
+ or pa.types.is_duration(pa_type)
+ ):
+ result = np.array(list(self), dtype=dtype)
+ else:
+ result = np.asarray(self._data, dtype=dtype)
+ if copy or self._hasna:
+ result = result.copy()
+ if self._hasna:
+ result[self.isna()] = na_value
+ return result
+
def unique(self: ArrowExtensionArrayT) -> ArrowExtensionArrayT:
"""
Compute the ArrowExtensionArray of unique values.
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index b8b1d64d7a093..c79e2f752c5a8 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -12,7 +12,6 @@
)
from pandas._typing import (
Dtype,
- NpDtype,
Scalar,
npt,
)
@@ -151,31 +150,6 @@ def dtype(self) -> StringDtype: # type: ignore[override]
"""
return self._dtype
- def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
- """Correctly construct numpy arrays when passed to `np.asarray()`."""
- return self.to_numpy(dtype=dtype)
-
- def to_numpy(
- self,
- dtype: npt.DTypeLike | None = None,
- copy: bool = False,
- na_value=lib.no_default,
- ) -> np.ndarray:
- """
- Convert to a NumPy ndarray.
- """
- # TODO: copy argument is ignored
-
- result = np.array(self._data, dtype=dtype)
- if self._data.null_count > 0:
- if na_value is lib.no_default:
- if dtype and np.issubdtype(dtype, np.floating):
- return result
- na_value = self._dtype.na_value
- mask = self.isna()
- result[mask] = na_value
- return result
-
def insert(self, loc: int, item) -> ArrowStringArray:
if not isinstance(item, str) and item is not libmissing.NA:
raise TypeError("Scalar must be NA or str")
@@ -219,10 +193,11 @@ def astype(self, dtype, copy: bool = True):
if copy:
return self.copy()
return self
-
elif isinstance(dtype, NumericDtype):
data = self._data.cast(pa.from_numpy_dtype(dtype.numpy_dtype))
return dtype.__from_arrow__(data)
+ elif isinstance(dtype, np.dtype) and np.issubdtype(dtype, np.floating):
+ return self.to_numpy(dtype=dtype, na_value=np.nan)
return super().astype(dtype, copy=copy)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 557cdd96bf00c..c36b129f919e8 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -1421,3 +1421,20 @@ def test_astype_from_non_pyarrow(data):
assert not isinstance(pd_array.dtype, ArrowDtype)
assert isinstance(result.dtype, ArrowDtype)
tm.assert_extension_array_equal(result, data)
+
+
+def test_to_numpy_with_defaults(data):
+ # GH49973
+ result = data.to_numpy()
+
+ pa_type = data._data.type
+ if pa.types.is_duration(pa_type) or pa.types.is_timestamp(pa_type):
+ expected = np.array(list(data))
+ else:
+ expected = np.array(data._data)
+
+ if data._hasna:
+ expected = expected.astype(object)
+ expected[pd.isna(data)] = pd.NA
+
+ tm.assert_numpy_array_equal(result, expected)
| - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.0.0.rst` file if fixing a bug or adding a new feature.
Moved and refactored `to_numpy` from `ArrowStringArray` to `ArrowExtensionArray` to improve performance for other pyarrow types.
```
before after ratio
<main> <arrow-ea-numpy>
- 63.3±0.5ms 3.17±0.2ms 0.05 array.ArrowExtensionArray.time_to_numpy('float64[pyarrow]', True)
- 63.8±1ms 3.10±0.2ms 0.05 array.ArrowExtensionArray.time_to_numpy('int64[pyarrow]', True)
- 62.1±1ms 1.42±0.02ms 0.02 array.ArrowExtensionArray.time_to_numpy('boolean[pyarrow]', True)
- 31.2±1ms 122±0.3μs 0.00 array.ArrowExtensionArray.time_to_numpy('boolean[pyarrow]', False)
- 31.0±0.4ms 33.9±0.8μs 0.00 array.ArrowExtensionArray.time_to_numpy('int64[pyarrow]', False)
- 31.7±1ms 30.6±1μs 0.00 array.ArrowExtensionArray.time_to_numpy('float64[pyarrow]', False)
```
This results in perf improvements for a number of ops that go through numpy:
```
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10**5, 10), dtype="float64[pyarrow]")
%timeit df.corr()
402 ms ± 17.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) <- main
26 ms ± 1.38 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) <- PR
%timeit df.rolling(100).sum()
378 ms ± 9.79 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) <- main
15.2 ms ± 503 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) <- PR
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/49973 | 2022-11-30T12:52:21Z | 2022-12-16T03:26:13Z | 2022-12-16T03:26:13Z | 2022-12-20T00:46:16Z |
Fix `Styler.format()` Excel example | diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py
index 9244a8c5e672d..d7cb70b0f5110 100644
--- a/pandas/io/formats/style_render.py
+++ b/pandas/io/formats/style_render.py
@@ -1110,7 +1110,7 @@ def format(
>>> df = pd.DataFrame({"A": [1, 0, -1]})
>>> pseudo_css = "number-format: 0§[Red](0)§-§@;"
- >>> df.style.applymap(lambda v: css).to_excel("formatted_file.xlsx")
+ >>> df.style.applymap(lambda v: pseudo_css).to_excel("formatted_file.xlsx")
... # doctest: +SKIP
.. figure:: ../../_static/style/format_excel_css.png
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49971 | 2022-11-30T04:38:27Z | 2022-11-30T23:35:41Z | 2022-11-30T23:35:41Z | 2022-12-01T21:14:28Z |
TST: Adding Oracle Database support to run SQL Tests. | diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 394fceb69b788..de3f2d203a915 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -11,7 +11,7 @@
- Tests for the different SQL flavors (flavor specific type conversions)
- Tests for the sqlalchemy mode: `_TestSQLAlchemy` is the base class with
common methods. The different tested flavors (sqlite3, MySQL,
- PostgreSQL) derive from the base class
+ PostgreSQL, Oracle) derive from the base class
- Tests for the fallback mode (`TestSQLiteFallback`)
"""
@@ -72,11 +72,39 @@
except ImportError:
SQLALCHEMY_INSTALLED = False
+"""
+ For Oracle Database, you must set the environment variable
+ ORACLE_CONNECTION_STRING to the database connection string.
+ For ex: ORACLE_CONNECTION_STRING="<database_name>+<driver_name>
+ ://<user_name>:<password>@<host_address>:<portnumber>/?
+ service_name=<service_name>"
+"""
+import os
+ORACLE_CONNECTION_STRING = os.environ.get(
+ "ORACLE_CONNECTION_STRING", None
+)
+
+"""
+ The Oracle Database python-oracledb driver runs in Thin mode by default. To run
+ in Thick mode, you should set an environment variable ORACLE_DRIVER_TYPE to the
+ value "thick".
+"""
+def switchOracleDBDriverMode():
+ try:
+ import oracledb
+ # get the value of environment variable
+ oracle_driver_type = os.environ.get("ORACLE_DRIVER_TYPE", "thin")
+ if oracle_driver_type == "thick":
+ oracledb.init_oracle_client()
+ except ImportError:
+ print('python-oracledb is not installed. Hence, Oracle Database testcases will be skipped')
+
SQL_STRINGS = {
"read_parameters": {
"sqlite": "SELECT * FROM iris WHERE Name=? AND SepalLength=?",
"mysql": "SELECT * FROM iris WHERE `Name`=%s AND `SepalLength`=%s",
"postgresql": 'SELECT * FROM iris WHERE "Name"=%s AND "SepalLength"=%s',
+ "oracle": 'SELECT * FROM iris WHERE "Name" = :1 AND "SepalLength" = :2',
},
"read_named_parameters": {
"sqlite": """
@@ -90,11 +118,16 @@
SELECT * FROM iris WHERE
"Name"=%(name)s AND "SepalLength"=%(length)s
""",
+ "oracle": """
+ SELECT * FROM iris WHERE
+ "Name"=:name AND "SepalLength"=:length
+ """,
},
"read_no_parameters_with_percent": {
"sqlite": "SELECT * FROM iris WHERE Name LIKE '%'",
"mysql": "SELECT * FROM iris WHERE `Name` LIKE '%'",
"postgresql": "SELECT * FROM iris WHERE \"Name\" LIKE '%'",
+ "oracle": "SELECT * FROM iris WHERE \"Name\" LIKE '%'",
},
}
@@ -139,6 +172,18 @@ def create_and_load_iris_sqlite3(conn: sqlite3.Connection, iris_file: Path):
stmt = "INSERT INTO iris VALUES(?, ?, ?, ?, ?)"
cur.executemany(stmt, reader)
+"""
+ This function is only for Oracle Database.
+ It inserts one row at a time into the table.
+"""
+def insert_iteratively(conn, params, iris):
+ from sqlalchemy import insert
+ with conn.connect() as con:
+ with con.begin():
+ for param in params:
+ stmt = insert(iris).values(param)
+ con.execute(stmt)
+
def create_and_load_iris(conn, iris_file: Path, dialect: str):
from sqlalchemy import insert
@@ -152,13 +197,16 @@ def create_and_load_iris(conn, iris_file: Path, dialect: str):
reader = csv.reader(csvfile)
header = next(reader)
params = [dict(zip(header, row)) for row in reader]
- stmt = insert(iris).values(params)
- if isinstance(conn, Engine):
- with conn.connect() as conn:
- with conn.begin():
- conn.execute(stmt)
+ if dialect == 'oracle':
+ insert_iteratively(conn, params, iris)
else:
- conn.execute(stmt)
+ stmt = insert(iris).values(params)
+ if isinstance(conn, Engine):
+ with conn.connect() as conn:
+ with conn.begin():
+ conn.execute(stmt)
+ else:
+ conn.execute(stmt)
def create_and_load_iris_view(conn):
@@ -190,9 +238,11 @@ def types_table_metadata(dialect: str):
MetaData,
Table,
)
+ from sqlalchemy.dialects.oracle import LONG
date_type = TEXT if dialect == "sqlite" else DateTime
bool_type = Integer if dialect == "sqlite" else Boolean
+ TEXT = LONG if dialect == "oracle" else TEXT
metadata = MetaData()
types = Table(
"types",
@@ -242,13 +292,20 @@ def create_and_load_types(conn, types_data: list[dict], dialect: str):
types.drop(conn, checkfirst=True)
types.create(bind=conn)
- stmt = insert(types).values(types_data)
- if isinstance(conn, Engine):
- with conn.connect() as conn:
- with conn.begin():
- conn.execute(stmt)
+ if dialect == 'oracle':
+ up_dict_row_1 = {"DateCol": datetime(2000,1,3,0,0,0),}
+ up_dict_row_2 = {"DateCol": datetime(2000,1,4,0,0,0),}
+ types_data[0].update(up_dict_row_1)
+ types_data[1].update(up_dict_row_2)
+ insert_iteratively(conn, types_data, types)
else:
- conn.execute(stmt)
+ stmt = insert(types).values(types_data)
+ if isinstance(conn, Engine):
+ with conn.connect() as conn:
+ with conn.begin():
+ conn.execute(stmt)
+ else:
+ conn.execute(stmt)
def check_iris_frame(frame: DataFrame):
@@ -405,6 +462,35 @@ def mysql_pymysql_conn(mysql_pymysql_engine):
yield mysql_pymysql_engine.connect()
+@pytest.fixture
+def oracle_oracledb_engine(iris_path, types_data):
+ sqlalchemy = pytest.importorskip("sqlalchemy")
+ pytest.importorskip("oracledb")
+
+ switchOracleDBDriverMode()
+ engine = sqlalchemy.create_engine(
+ ORACLE_CONNECTION_STRING
+ )
+ insp = sqlalchemy.inspect(engine)
+ if not insp.has_table("iris"):
+ create_and_load_iris(engine, iris_path, "oracle")
+ if not insp.has_table("types"):
+ for entry in types_data:
+ entry.pop("DateColWithTz")
+ create_and_load_types(engine, types_data, "oracle")
+ yield engine
+ with engine.connect() as conn:
+ with conn.begin():
+ stmt = sqlalchemy.text("DROP TABLE test_frame")
+ conn.execute(stmt)
+ engine.dispose()
+
+
+@pytest.fixture
+def oracle_oracledb_conn(oracle_oracledb_engine):
+ yield oracle_oracledb_engine.connect()
+
+
@pytest.fixture
def postgresql_psycopg2_engine(iris_path, types_data):
sqlalchemy = pytest.importorskip("sqlalchemy")
@@ -472,6 +558,11 @@ def sqlite_buildin_iris(sqlite_buildin, iris_path):
"mysql_pymysql_conn",
]
+oracle_connectable = [
+ "oracle_oracledb_engine",
+ "oracle_oracledb_conn",
+]
+
postgresql_connectable = [
"postgresql_psycopg2_engine",
@@ -488,10 +579,10 @@ def sqlite_buildin_iris(sqlite_buildin, iris_path):
"sqlite_iris_conn",
]
-sqlalchemy_connectable = mysql_connectable + postgresql_connectable + sqlite_connectable
+sqlalchemy_connectable = mysql_connectable + oracle_connectable + postgresql_connectable + sqlite_connectable
sqlalchemy_connectable_iris = (
- mysql_connectable + postgresql_connectable + sqlite_iris_connectable
+ mysql_connectable + oracle_connectable + postgresql_connectable + sqlite_iris_connectable
)
all_connectable = sqlalchemy_connectable + ["sqlite_buildin"]
@@ -798,7 +889,8 @@ def _to_sql_save_index(self):
def _transaction_test(self):
with self.pandasSQL.run_transaction() as trans:
- stmt = "CREATE TABLE test_trans (A INT, B TEXT)"
+ b_type = 'LONG' if self.flavor == 'oracle' else 'TEXT'
+ stmt = f"CREATE TABLE test_trans (A INT, B {b_type})"
if isinstance(self.pandasSQL, SQLiteDatabase):
trans.execute(stmt)
else:
@@ -2038,13 +2130,15 @@ def test_dtype(self):
TEXT,
String,
)
+ from sqlalchemy.dialects.oracle import LONG
from sqlalchemy.schema import MetaData
cols = ["A", "B"]
data = [(0.8, True), (0.9, None)]
df = DataFrame(data, columns=cols)
assert df.to_sql("dtype_test", self.conn) == 2
- assert df.to_sql("dtype_test2", self.conn, dtype={"B": TEXT}) == 2
+ assert df.to_sql("dtype_test2", self.conn,
+ dtype={"B": LONG if self.flavor == "oracle" else TEXT}) == 2
meta = MetaData()
meta.reflect(bind=self.conn)
sqltype = meta.tables["dtype_test2"].columns["B"].type
@@ -2486,6 +2580,33 @@ def test_schema_support(self):
res2 = pdsql.read_table("test_schema_other2")
tm.assert_frame_equal(res1, res2)
+@pytest.mark.db
+class TestOracleSQLAlchemy(_TestSQLAlchemy):
+ """
+ Test the SQLAlchemy backend against Oracle Database.
+ """
+
+ flavor = "oracle"
+
+ @classmethod
+ def setup_engine(cls):
+ pytest.importorskip("oracledb")
+ return sqlalchemy.create_engine(
+ ORACLE_CONNECTION_STRING
+ )
+
+ @classmethod
+ def setup_driver(cls):
+ # setup the Oracle Database driver mode(thin/thick)
+ pytest.importorskip("oracledb")
+ switchOracleDBDriverMode()
+
+ def test_default_type_conversion(self):
+ pass
+
+ def test_oracle(self):
+ pass
+
# -----------------------------------------------------------------------------
# -- Test Sqlite / MySQL fallback
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49970 | 2022-11-30T03:23:47Z | 2022-12-17T19:55:21Z | null | 2022-12-17T19:55:21Z |
BUG: Avoid intercepting TypeError in DataFrame.agg | diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 6d9f5510eb8c5..722de91ba5246 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -663,12 +663,6 @@ def agg(self):
result = None
try:
result = super().agg()
- except TypeError as err:
- exc = TypeError(
- "DataFrame constructor called with "
- f"incompatible data and dtype: {err}"
- )
- raise exc from err
finally:
self.obj = obj
self.axis = axis
diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py
index 28c776d0a6d35..7bf1621d0acea 100644
--- a/pandas/tests/apply/test_frame_apply.py
+++ b/pandas/tests/apply/test_frame_apply.py
@@ -1181,8 +1181,7 @@ def test_agg_multiple_mixed_raises():
)
# sorted index
- # TODO: GH#49399 will fix error message
- msg = "DataFrame constructor called with"
+ msg = "does not support reduction"
with pytest.raises(TypeError, match=msg):
mdf.agg(["min", "sum"])
@@ -1283,7 +1282,7 @@ def test_nuiscance_columns():
)
tm.assert_frame_equal(result, expected)
- msg = "DataFrame constructor called with incompatible data and dtype"
+ msg = "does not support reduction"
with pytest.raises(TypeError, match=msg):
df.agg("sum")
@@ -1291,8 +1290,7 @@ def test_nuiscance_columns():
expected = Series([6, 6.0, "foobarbaz"], index=["A", "B", "C"])
tm.assert_series_equal(result, expected)
- # TODO: GH#49399 will fix error message
- msg = "DataFrame constructor called with"
+ msg = "does not support reduction"
with pytest.raises(TypeError, match=msg):
df.agg(["sum"])
| - [x] closes #49399
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49969 | 2022-11-30T02:59:45Z | 2022-12-01T20:31:02Z | 2022-12-01T20:31:02Z | 2022-12-01T20:31:09Z |
BUG: Allow read_sql to work with chunksize. | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 2c98ff61cbef6..62a54a548a990 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -9,7 +9,10 @@
ABC,
abstractmethod,
)
-from contextlib import contextmanager
+from contextlib import (
+ ExitStack,
+ contextmanager,
+)
from datetime import (
date,
datetime,
@@ -69,6 +72,14 @@
# -- Helper functions
+def _cleanup_after_generator(generator, exit_stack: ExitStack):
+ """Does the cleanup after iterating through the generator."""
+ try:
+ yield from generator
+ finally:
+ exit_stack.close()
+
+
def _convert_params(sql, params):
"""Convert SQL and params args to DBAPI2.0 compliant format."""
args = [sql]
@@ -772,12 +783,11 @@ def has_table(table_name: str, con, schema: str | None = None) -> bool:
table_exists = has_table
-@contextmanager
def pandasSQL_builder(
con,
schema: str | None = None,
need_transaction: bool = False,
-) -> Iterator[PandasSQL]:
+) -> PandasSQL:
"""
Convenience function to return the correct PandasSQL subclass based on the
provided parameters. Also creates a sqlalchemy connection and transaction
@@ -786,45 +796,24 @@ def pandasSQL_builder(
import sqlite3
if isinstance(con, sqlite3.Connection) or con is None:
- yield SQLiteDatabase(con)
- else:
- sqlalchemy = import_optional_dependency("sqlalchemy", errors="ignore")
+ return SQLiteDatabase(con)
- if sqlalchemy is not None and isinstance(
- con, (str, sqlalchemy.engine.Connectable)
- ):
- with _sqlalchemy_con(con, need_transaction) as con:
- yield SQLDatabase(con, schema=schema)
- elif isinstance(con, str) and sqlalchemy is None:
- raise ImportError("Using URI string without sqlalchemy installed.")
- else:
-
- warnings.warn(
- "pandas only supports SQLAlchemy connectable (engine/connection) or "
- "database string URI or sqlite3 DBAPI2 connection. Other DBAPI2 "
- "objects are not tested. Please consider using SQLAlchemy.",
- UserWarning,
- stacklevel=find_stack_level() + 2,
- )
- yield SQLiteDatabase(con)
+ sqlalchemy = import_optional_dependency("sqlalchemy", errors="ignore")
+ if isinstance(con, str) and sqlalchemy is None:
+ raise ImportError("Using URI string without sqlalchemy installed.")
-@contextmanager
-def _sqlalchemy_con(connectable, need_transaction: bool):
- """Create a sqlalchemy connection and a transaction if necessary."""
- sqlalchemy = import_optional_dependency("sqlalchemy", errors="raise")
+ if sqlalchemy is not None and isinstance(con, (str, sqlalchemy.engine.Connectable)):
+ return SQLDatabase(con, schema, need_transaction)
- if isinstance(connectable, str):
- connectable = sqlalchemy.create_engine(connectable)
- if isinstance(connectable, sqlalchemy.engine.Engine):
- with connectable.connect() as con:
- if need_transaction:
- with con.begin():
- yield con
- else:
- yield con
- else:
- yield connectable
+ warnings.warn(
+ "pandas only supports SQLAlchemy connectable (engine/connection) or "
+ "database string URI or sqlite3 DBAPI2 connection. Other DBAPI2 "
+ "objects are not tested. Please consider using SQLAlchemy.",
+ UserWarning,
+ stacklevel=find_stack_level(),
+ )
+ return SQLiteDatabase(con)
class SQLTable(PandasObject):
@@ -1049,6 +1038,7 @@ def _query_iterator(
def read(
self,
+ exit_stack: ExitStack,
coerce_float: bool = True,
parse_dates=None,
columns=None,
@@ -1069,13 +1059,16 @@ def read(
column_names = result.keys()
if chunksize is not None:
- return self._query_iterator(
- result,
- chunksize,
- column_names,
- coerce_float=coerce_float,
- parse_dates=parse_dates,
- use_nullable_dtypes=use_nullable_dtypes,
+ return _cleanup_after_generator(
+ self._query_iterator(
+ result,
+ chunksize,
+ column_names,
+ coerce_float=coerce_float,
+ parse_dates=parse_dates,
+ use_nullable_dtypes=use_nullable_dtypes,
+ ),
+ exit_stack,
)
else:
data = result.fetchall()
@@ -1327,6 +1320,12 @@ class PandasSQL(PandasObject, ABC):
Subclasses Should define read_query and to_sql.
"""
+ def __enter__(self):
+ return self
+
+ def __exit__(self, *args) -> None:
+ pass
+
def read_table(
self,
table_name: str,
@@ -1482,20 +1481,38 @@ class SQLDatabase(PandasSQL):
Parameters
----------
- con : SQLAlchemy Connection
- Connection to connect with the database. Using SQLAlchemy makes it
+ con : SQLAlchemy Connectable or URI string.
+ Connectable to connect with the database. Using SQLAlchemy makes it
possible to use any DB supported by that library.
schema : string, default None
Name of SQL schema in database to write to (if database flavor
supports this). If None, use default schema (default).
+ need_transaction : bool, default False
+ If True, SQLDatabase will create a transaction.
"""
- def __init__(self, con, schema: str | None = None) -> None:
+ def __init__(
+ self, con, schema: str | None = None, need_transaction: bool = False
+ ) -> None:
+ from sqlalchemy import create_engine
+ from sqlalchemy.engine import Engine
from sqlalchemy.schema import MetaData
+ self.exit_stack = ExitStack()
+ if isinstance(con, str):
+ con = create_engine(con)
+ if isinstance(con, Engine):
+ con = self.exit_stack.enter_context(con.connect())
+ if need_transaction:
+ self.exit_stack.enter_context(con.begin())
self.con = con
self.meta = MetaData(schema=schema)
+ self.returns_generator = False
+
+ def __exit__(self, *args) -> None:
+ if not self.returns_generator:
+ self.exit_stack.close()
@contextmanager
def run_transaction(self):
@@ -1566,7 +1583,10 @@ def read_table(
"""
self.meta.reflect(bind=self.con, only=[table_name])
table = SQLTable(table_name, self, index=index_col, schema=schema)
+ if chunksize is not None:
+ self.returns_generator = True
return table.read(
+ self.exit_stack,
coerce_float=coerce_float,
parse_dates=parse_dates,
columns=columns,
@@ -1675,15 +1695,19 @@ def read_query(
columns = result.keys()
if chunksize is not None:
- return self._query_iterator(
- result,
- chunksize,
- columns,
- index_col=index_col,
- coerce_float=coerce_float,
- parse_dates=parse_dates,
- dtype=dtype,
- use_nullable_dtypes=use_nullable_dtypes,
+ self.returns_generator = True
+ return _cleanup_after_generator(
+ self._query_iterator(
+ result,
+ chunksize,
+ columns,
+ index_col=index_col,
+ coerce_float=coerce_float,
+ parse_dates=parse_dates,
+ dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
+ ),
+ self.exit_stack,
)
else:
data = result.fetchall()
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 490b425ee52bf..b7cff1627a81f 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -260,24 +260,34 @@ def check_iris_frame(frame: DataFrame):
row = frame.iloc[0]
assert issubclass(pytype, np.floating)
tm.equalContents(row.values, [5.1, 3.5, 1.4, 0.2, "Iris-setosa"])
+ assert frame.shape in ((150, 5), (8, 5))
def count_rows(conn, table_name: str):
stmt = f"SELECT count(*) AS count_1 FROM {table_name}"
if isinstance(conn, sqlite3.Connection):
cur = conn.cursor()
- result = cur.execute(stmt)
+ return cur.execute(stmt).fetchone()[0]
else:
- from sqlalchemy import text
+ from sqlalchemy import (
+ create_engine,
+ text,
+ )
from sqlalchemy.engine import Engine
stmt = text(stmt)
- if isinstance(conn, Engine):
+ if isinstance(conn, str):
+ try:
+ engine = create_engine(conn)
+ with engine.connect() as conn:
+ return conn.execute(stmt).scalar_one()
+ finally:
+ engine.dispose()
+ elif isinstance(conn, Engine):
with conn.connect() as conn:
- result = conn.execute(stmt)
+ return conn.execute(stmt).scalar_one()
else:
- result = conn.execute(stmt)
- return result.fetchone()[0]
+ return conn.execute(stmt).scalar_one()
@pytest.fixture
@@ -388,6 +398,7 @@ def mysql_pymysql_engine(iris_path, types_data):
engine = sqlalchemy.create_engine(
"mysql+pymysql://root@localhost:3306/pandas",
connect_args={"client_flag": pymysql.constants.CLIENT.MULTI_STATEMENTS},
+ poolclass=sqlalchemy.pool.NullPool,
)
insp = sqlalchemy.inspect(engine)
if not insp.has_table("iris"):
@@ -414,7 +425,8 @@ def postgresql_psycopg2_engine(iris_path, types_data):
sqlalchemy = pytest.importorskip("sqlalchemy")
pytest.importorskip("psycopg2")
engine = sqlalchemy.create_engine(
- "postgresql+psycopg2://postgres:postgres@localhost:5432/pandas"
+ "postgresql+psycopg2://postgres:postgres@localhost:5432/pandas",
+ poolclass=sqlalchemy.pool.NullPool,
)
insp = sqlalchemy.inspect(engine)
if not insp.has_table("iris"):
@@ -435,9 +447,16 @@ def postgresql_psycopg2_conn(postgresql_psycopg2_engine):
@pytest.fixture
-def sqlite_engine():
+def sqlite_str():
+ pytest.importorskip("sqlalchemy")
+ with tm.ensure_clean() as name:
+ yield "sqlite:///" + name
+
+
+@pytest.fixture
+def sqlite_engine(sqlite_str):
sqlalchemy = pytest.importorskip("sqlalchemy")
- engine = sqlalchemy.create_engine("sqlite://")
+ engine = sqlalchemy.create_engine(sqlite_str, poolclass=sqlalchemy.pool.NullPool)
yield engine
engine.dispose()
@@ -447,6 +466,15 @@ def sqlite_conn(sqlite_engine):
yield sqlite_engine.connect()
+@pytest.fixture
+def sqlite_iris_str(sqlite_str, iris_path):
+ sqlalchemy = pytest.importorskip("sqlalchemy")
+ engine = sqlalchemy.create_engine(sqlite_str)
+ create_and_load_iris(engine, iris_path, "sqlite")
+ engine.dispose()
+ return sqlite_str
+
+
@pytest.fixture
def sqlite_iris_engine(sqlite_engine, iris_path):
create_and_load_iris(sqlite_engine, iris_path, "sqlite")
@@ -485,11 +513,13 @@ def sqlite_buildin_iris(sqlite_buildin, iris_path):
sqlite_connectable = [
"sqlite_engine",
"sqlite_conn",
+ "sqlite_str",
]
sqlite_iris_connectable = [
"sqlite_iris_engine",
"sqlite_iris_conn",
+ "sqlite_iris_str",
]
sqlalchemy_connectable = mysql_connectable + postgresql_connectable + sqlite_connectable
@@ -541,10 +571,47 @@ def test_to_sql_exist_fail(conn, test_frame1, request):
@pytest.mark.db
@pytest.mark.parametrize("conn", all_connectable_iris)
-def test_read_iris(conn, request):
+def test_read_iris_query(conn, request):
conn = request.getfixturevalue(conn)
- with pandasSQL_builder(conn) as pandasSQL:
- iris_frame = pandasSQL.read_query("SELECT * FROM iris")
+ iris_frame = read_sql_query("SELECT * FROM iris", conn)
+ check_iris_frame(iris_frame)
+ iris_frame = pd.read_sql("SELECT * FROM iris", conn)
+ check_iris_frame(iris_frame)
+ iris_frame = pd.read_sql("SELECT * FROM iris where 0=1", conn)
+ assert iris_frame.shape == (0, 5)
+ assert "SepalWidth" in iris_frame.columns
+
+
+@pytest.mark.db
+@pytest.mark.parametrize("conn", all_connectable_iris)
+def test_read_iris_query_chunksize(conn, request):
+ conn = request.getfixturevalue(conn)
+ iris_frame = concat(read_sql_query("SELECT * FROM iris", conn, chunksize=7))
+ check_iris_frame(iris_frame)
+ iris_frame = concat(pd.read_sql("SELECT * FROM iris", conn, chunksize=7))
+ check_iris_frame(iris_frame)
+ iris_frame = concat(pd.read_sql("SELECT * FROM iris where 0=1", conn, chunksize=7))
+ assert iris_frame.shape == (0, 5)
+ assert "SepalWidth" in iris_frame.columns
+
+
+@pytest.mark.db
+@pytest.mark.parametrize("conn", sqlalchemy_connectable_iris)
+def test_read_iris_table(conn, request):
+ conn = request.getfixturevalue(conn)
+ iris_frame = read_sql_table("iris", conn)
+ check_iris_frame(iris_frame)
+ iris_frame = pd.read_sql("iris", conn)
+ check_iris_frame(iris_frame)
+
+
+@pytest.mark.db
+@pytest.mark.parametrize("conn", sqlalchemy_connectable_iris)
+def test_read_iris_table_chunksize(conn, request):
+ conn = request.getfixturevalue(conn)
+ iris_frame = concat(read_sql_table("iris", conn, chunksize=7))
+ check_iris_frame(iris_frame)
+ iris_frame = concat(pd.read_sql("iris", conn, chunksize=7))
check_iris_frame(iris_frame)
| - [X] closes #50199
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This fixes a bug that I introduced in #49531. That PR refactored the code so that `SQLDatabase` only accepts a sqlalchemy `Connection`, not an `Engine`, and used a context manager to handle cleanup. However, that can break `read_sql` with `chunksize`, because the caller may still be iterating over the chunks after the connection is closed. I added tests with `chunksize`, and I use `NullPool` with `create_engine` because it makes the underlying connection more likely to actually close, so the bug is easier to reproduce. I also created a new string connectable to test with. | https://api.github.com/repos/pandas-dev/pandas/pulls/49967 | 2022-11-30T00:57:53Z | 2023-01-31T21:23:32Z | 2023-01-31T21:23:32Z | 2023-02-02T09:28:35Z |
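The chunksize hazard described in this PR can be sketched with a small self-contained example (using the stdlib `sqlite3` driver here rather than the sqlalchemy fixtures in the diff — an illustration, not code from the PR):

```python
import sqlite3

import pandas as pd

# build a small table in an in-memory SQLite database
conn = sqlite3.connect(":memory:")
pd.DataFrame({"a": range(10)}).to_sql("t", conn, index=False)

# with chunksize, read_sql returns a lazy iterator: rows are fetched
# only as the caller consumes each chunk, so the connection must stay
# open for the whole iteration -- closing it early is exactly the
# failure mode the PR guards against
chunks = pd.read_sql("SELECT * FROM t", conn, chunksize=4)
sizes = [len(chunk) for chunk in chunks]  # consume fully, then close
conn.close()
```

With 10 rows and `chunksize=4`, the iterator yields chunks of 4, 4 and 2 rows.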
REF: reverse dispatch in factorize | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 381f76c4502d6..65691e6f46eb5 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -73,7 +73,6 @@
ABCExtensionArray,
ABCIndex,
ABCMultiIndex,
- ABCRangeIndex,
ABCSeries,
ABCTimedeltaArray,
)
@@ -738,13 +737,11 @@ def factorize(
# Step 2 is dispatched to extension types (like Categorical). They are
# responsible only for factorization. All data coercion, sorting and boxing
# should happen here.
- if isinstance(values, ABCRangeIndex):
- return values.factorize(sort=sort)
+ if isinstance(values, (ABCIndex, ABCSeries)):
+ return values.factorize(sort=sort, use_na_sentinel=use_na_sentinel)
values = _ensure_arraylike(values)
original = values
- if not isinstance(values, ABCMultiIndex):
- values = extract_array(values, extract_numpy=True)
if (
isinstance(values, (ABCDatetimeArray, ABCTimedeltaArray))
@@ -753,7 +750,7 @@ def factorize(
# The presence of 'freq' means we can fast-path sorting and know there
# aren't NAs
codes, uniques = values.factorize(sort=sort)
- return _re_wrap_factorize(original, uniques, codes)
+ return codes, uniques
elif not isinstance(values.dtype, np.dtype):
codes, uniques = values.factorize(use_na_sentinel=use_na_sentinel)
@@ -789,21 +786,6 @@ def factorize(
uniques = _reconstruct_data(uniques, original.dtype, original)
- return _re_wrap_factorize(original, uniques, codes)
-
-
-def _re_wrap_factorize(original, uniques, codes: np.ndarray):
- """
- Wrap factorize results in Series or Index depending on original type.
- """
- if isinstance(original, ABCIndex):
- uniques = ensure_wrapped_if_datetimelike(uniques)
- uniques = original._shallow_copy(uniques, name=None)
- elif isinstance(original, ABCSeries):
- from pandas import Index
-
- uniques = Index(uniques)
-
return codes, uniques
diff --git a/pandas/core/base.py b/pandas/core/base.py
index fb563fd640cfd..22a4790b32506 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -79,6 +79,7 @@
from pandas import (
Categorical,
+ Index,
Series,
)
@@ -1134,8 +1135,20 @@ def factorize(
self,
sort: bool = False,
use_na_sentinel: bool = True,
- ):
- return algorithms.factorize(self, sort=sort, use_na_sentinel=use_na_sentinel)
+ ) -> tuple[npt.NDArray[np.intp], Index]:
+
+ codes, uniques = algorithms.factorize(
+ self._values, sort=sort, use_na_sentinel=use_na_sentinel
+ )
+
+ if isinstance(self, ABCIndex):
+ # preserve e.g. NumericIndex, preserve MultiIndex
+ uniques = self._constructor(uniques)
+ else:
+ from pandas import Index
+
+ uniques = Index(uniques)
+ return codes, uniques
_shared_docs[
"searchsorted"
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Needed for upcoming removal of DTA/TDA.freq | https://api.github.com/repos/pandas-dev/pandas/pulls/49966 | 2022-11-30T00:50:25Z | 2022-12-01T01:00:37Z | 2022-12-01T01:00:37Z | 2022-12-01T02:53:28Z |
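The dispatch change above routes `Series.factorize` and `Index.factorize` through the wrapping logic now living in `base.py`. A minimal sketch of the user-visible contract (not code from the PR):

```python
import pandas as pd

ser = pd.Series(["b", "a", "b"])
codes, uniques = ser.factorize()
# codes label each element by order of first appearance,
# and uniques comes back wrapped in an Index, mirroring the
# `uniques = Index(uniques)` branch added in base.py
```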
ENH: Add io.nullable_backend=pyarrow support to read_excel | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 541d021791f35..0c3d85cbcb620 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -28,13 +28,26 @@ The available extras, found in the :ref:`installation guide<install.dependencies
``[all, performance, computation, timezone, fss, aws, gcp, excel, parquet, feather, hdf5, spss, postgresql, mysql,
sql-other, html, xml, plot, output_formatting, clipboard, compression, test]`` (:issue:`39164`).
-.. _whatsnew_200.enhancements.io_readers_nullable_pyarrow:
+.. _whatsnew_200.enhancements.io_use_nullable_dtypes_and_nullable_backend:
Configuration option, ``io.nullable_backend``, to return pyarrow-backed dtypes from IO functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-A new global configuration, ``io.nullable_backend`` can now be used in conjunction with the parameter ``use_nullable_dtypes=True`` in :func:`read_parquet`, :func:`read_orc` and :func:`read_csv` (with ``engine="pyarrow"``)
-to return pyarrow-backed dtypes when set to ``"pyarrow"`` (:issue:`48957`).
+The ``use_nullable_dtypes`` keyword argument has been expanded to the following functions to enable automatic conversion to nullable dtypes (:issue:`36712`)
+
+* :func:`read_csv`
+* :func:`read_excel`
+
+Additionally a new global configuration, ``io.nullable_backend`` can now be used in conjunction with the parameter ``use_nullable_dtypes=True`` in the following functions
+to select the nullable dtypes implementation.
+
+* :func:`read_csv` (with ``engine="pyarrow"``)
+* :func:`read_excel`
+* :func:`read_parquet`
+* :func:`read_orc`
+
+By default, ``io.nullable_backend`` is set to ``"pandas"`` to return existing, numpy-backed nullable dtypes, but it can also
+be set to ``"pyarrow"`` to return pyarrow-backed, nullable :class:`ArrowDtype` (:issue:`48957`).
.. ipython:: python
@@ -43,10 +56,15 @@ to return pyarrow-backed dtypes when set to ``"pyarrow"`` (:issue:`48957`).
1,2.5,True,a,,,,,
3,4.5,False,b,6,7.5,True,a,
""")
- with pd.option_context("io.nullable_backend", "pyarrow"):
- df = pd.read_csv(data, use_nullable_dtypes=True, engine="pyarrow")
+ with pd.option_context("io.nullable_backend", "pandas"):
+ df = pd.read_csv(data, use_nullable_dtypes=True)
df.dtypes
+ data.seek(0)
+ with pd.option_context("io.nullable_backend", "pyarrow"):
+ df_pyarrow = pd.read_csv(data, use_nullable_dtypes=True, engine="pyarrow")
+ df_pyarrow.dtypes
+
.. _whatsnew_200.enhancements.other:
Other enhancements
@@ -55,7 +73,6 @@ Other enhancements
- :meth:`.DataFrameGroupBy.quantile` and :meth:`.SeriesGroupBy.quantile` now preserve nullable dtypes instead of casting to numpy dtypes (:issue:`37493`)
- :meth:`Series.add_suffix`, :meth:`DataFrame.add_suffix`, :meth:`Series.add_prefix` and :meth:`DataFrame.add_prefix` support an ``axis`` argument. If ``axis`` is set, the default behaviour of which axis to consider can be overwritten (:issue:`47819`)
- :func:`assert_frame_equal` now shows the first element where the DataFrames differ, analogously to ``pytest``'s output (:issue:`47910`)
-- Added new argument ``use_nullable_dtypes`` to :func:`read_csv` and :func:`read_excel` to enable automatic conversion to nullable dtypes (:issue:`36712`)
- Added ``index`` parameter to :meth:`DataFrame.to_dict` (:issue:`46398`)
- Added support for extension array dtypes in :func:`merge` (:issue:`44240`)
- Added metadata propagation for binary operators on :class:`DataFrame` (:issue:`28283`)
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 7b9794dd434e6..c5fc054952b1f 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -26,6 +26,8 @@
import numpy as np
+from pandas._config.config import get_option
+
from pandas._libs import (
lib,
parsers,
@@ -39,6 +41,7 @@
DtypeObj,
Scalar,
)
+from pandas.compat._optional import import_optional_dependency
from pandas.errors import (
ParserError,
ParserWarning,
@@ -71,6 +74,7 @@
from pandas import StringDtype
from pandas.core import algorithms
from pandas.core.arrays import (
+ ArrowExtensionArray,
BooleanArray,
Categorical,
ExtensionArray,
@@ -710,6 +714,7 @@ def _infer_types(
use_nullable_dtypes: Literal[True] | Literal[False] = (
self.use_nullable_dtypes and no_dtype_specified
)
+ nullable_backend = get_option("io.nullable_backend")
result: ArrayLike
if try_num_bool and is_object_dtype(values.dtype):
@@ -767,6 +772,16 @@ def _infer_types(
if inferred_type != "datetime":
result = StringDtype().construct_array_type()._from_sequence(values)
+ if use_nullable_dtypes and nullable_backend == "pyarrow":
+ pa = import_optional_dependency("pyarrow")
+ if isinstance(result, np.ndarray):
+ result = ArrowExtensionArray(pa.array(result, from_pandas=True))
+ else:
+ # ExtensionArray
+ result = ArrowExtensionArray(
+ pa.array(result.to_numpy(), from_pandas=True)
+ )
+
return result, na_count
def _cast_types(self, values: ArrayLike, cast_type: DtypeObj, column) -> ArrayLike:
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index bff4c98fe2842..822e24b224052 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -536,7 +536,11 @@ def test_reader_dtype_str(self, read_ext, dtype, expected):
actual = pd.read_excel(basename + read_ext, dtype=dtype)
tm.assert_frame_equal(actual, expected)
- def test_use_nullable_dtypes(self, read_ext):
+ @pytest.mark.parametrize(
+ "nullable_backend",
+ ["pandas", pytest.param("pyarrow", marks=td.skip_if_no("pyarrow"))],
+ )
+ def test_use_nullable_dtypes(self, read_ext, nullable_backend):
# GH#36712
if read_ext in (".xlsb", ".xls"):
pytest.skip(f"No engine for filetype: '{read_ext}'")
@@ -557,10 +561,30 @@ def test_use_nullable_dtypes(self, read_ext):
)
with tm.ensure_clean(read_ext) as file_path:
df.to_excel(file_path, "test", index=False)
- result = pd.read_excel(
- file_path, sheet_name="test", use_nullable_dtypes=True
+ with pd.option_context("io.nullable_backend", nullable_backend):
+ result = pd.read_excel(
+ file_path, sheet_name="test", use_nullable_dtypes=True
+ )
+ if nullable_backend == "pyarrow":
+ import pyarrow as pa
+
+ from pandas.arrays import ArrowExtensionArray
+
+ expected = DataFrame(
+ {
+ col: ArrowExtensionArray(pa.array(df[col], from_pandas=True))
+ for col in df.columns
+ }
)
- tm.assert_frame_equal(result, df)
+ # pyarrow by default infers timestamp resolution as us, not ns
+ expected["i"] = ArrowExtensionArray(
+ expected["i"].array._data.cast(pa.timestamp(unit="us"))
+ )
+ # pyarrow supports a null type, so don't have to default to Int64
+ expected["j"] = ArrowExtensionArray(pa.array([None, None]))
+ else:
+ expected = df
+ tm.assert_frame_equal(result, expected)
def test_use_nullabla_dtypes_and_dtype(self, read_ext):
# GH#36712
| - [x] xref #48957 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49965 | 2022-11-30T00:16:46Z | 2022-12-02T11:13:08Z | 2022-12-02T11:13:08Z | 2022-12-02T17:56:11Z |
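As a rough illustration of what the numpy-backed nullable dtypes produced by the default `"pandas"` backend look like, the sketch below uses `convert_dtypes` (available in released pandas) rather than the development-only `io.nullable_backend` option this PR extends:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, None], "b": ["x", None]})
converted = df.convert_dtypes()
# the integer column with a missing value becomes the nullable Int64
# dtype (missing value is pd.NA) instead of being upcast to float64
```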
DOC: See also box should refer to same type (Series.dt.tz_convert --> Series.dt.tz_localize) | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 5fdf2c88503a5..9560a3703dc63 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -814,8 +814,8 @@ def tz_convert(self, tz) -> DatetimeArray:
See Also
--------
- DatetimeIndex.tz : A timezone that has a variable offset from UTC.
- DatetimeIndex.tz_localize : Localize tz-naive DatetimeIndex to a
+ pandas.Series.dt.tz : A timezone that has a variable offset from UTC.
+ pandas.Series.dt.tz_localize : Localize tz-naive DatetimeIndex to a
given time zone, or remove timezone from a tz-aware DatetimeIndex.
Examples
| - [x] closes #49297
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49964 | 2022-11-30T00:13:39Z | 2023-01-03T21:07:25Z | null | 2023-01-03T21:07:25Z |
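For context on the two accessors the corrected cross-references point at, here is a minimal sketch (not taken from the PR) of how `tz_localize` and `tz_convert` differ:

```python
import pandas as pd

ser = pd.Series(pd.to_datetime(["2022-01-01 12:00"]))
# tz_localize stamps a timezone onto tz-naive values...
utc = ser.dt.tz_localize("UTC")
# ...while tz_convert translates values that are already tz-aware
eastern = utc.dt.tz_convert("US/Eastern")  # 12:00 UTC -> 07:00 EST
```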
TST/CoW: copy-on-write tests for df.head and df.tail | diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index 0c68f6a866eec..bf65f153b10dd 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -250,3 +250,33 @@ def test_set_index(using_copy_on_write):
df2.iloc[0, 1] = 0
assert not np.shares_memory(get_array(df2, "c"), get_array(df, "c"))
tm.assert_frame_equal(df, df_orig)
+
+
+@pytest.mark.parametrize(
+ "method",
+ [
+ lambda df: df.head(),
+ lambda df: df.head(2),
+ lambda df: df.tail(),
+ lambda df: df.tail(3),
+ ],
+)
+def test_head_tail(method, using_copy_on_write):
+ df = DataFrame({"a": [1, 2, 3], "b": [0.1, 0.2, 0.3]})
+ df_orig = df.copy()
+ df2 = method(df)
+ df2._mgr._verify_integrity()
+
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+ assert np.shares_memory(get_array(df2, "b"), get_array(df, "b"))
+
+ # modify df2 to trigger CoW for that block
+ df2.iloc[0, 0] = 0
+ assert np.shares_memory(get_array(df2, "b"), get_array(df, "b"))
+ if using_copy_on_write:
+ assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+ else:
+ # without CoW enabled, head and tail return views. Mutating df2 also mutates df.
+ df2.iloc[0, 0] = 1
+ tm.assert_frame_equal(df, df_orig)
| - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Progress towards #49473
Because `.head()` and `.tail()` use `.iloc` under the hood, copy-on-write is already implemented for them, but this PR adds explicit tests for both methods. | https://api.github.com/repos/pandas-dev/pandas/pulls/49963 | 2022-11-29T22:35:16Z | 2022-12-01T20:41:19Z | 2022-12-01T20:41:19Z | 2022-12-01T20:41:25Z |
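The view-vs-copy behavior these tests pin down can be observed directly: because `.head()` slices via `.iloc`, its result can share buffers with the parent frame until a mutation occurs (this holds both with and without copy-on-write; only the effect of the mutation differs). A hedged sketch:

```python
import numpy as np

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [0.1, 0.2, 0.3]})
head = df.head(2)
# before any mutation, the head result can share its underlying
# buffers with the parent frame
shares = np.shares_memory(head["a"].to_numpy(), df["a"].to_numpy())
```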
STYLE: #49656 - generic.py | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 1b17deb7def90..2de83bb7a4468 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4,7 +4,7 @@
import collections
import datetime as dt
import gc
-import json
+from json import loads
import operator
import pickle
import re
@@ -134,7 +134,6 @@
algorithms as algos,
arraylike,
indexing,
- missing,
nanops,
sample,
)
@@ -160,7 +159,11 @@
SingleArrayManager,
)
from pandas.core.internals.construction import mgr_to_mgr
-from pandas.core.missing import find_valid_index
+from pandas.core.missing import (
+ clean_fill_method,
+ clean_reindex_fill_method,
+ find_valid_index,
+)
from pandas.core.ops import align_method_FRAME
from pandas.core.reshape.concat import concat
from pandas.core.shared_docs import _shared_docs
@@ -2122,7 +2125,7 @@ def _repr_data_resource_(self):
as_json = data.to_json(orient="table")
as_json = cast(str, as_json)
- return json.loads(as_json, object_pairs_hook=collections.OrderedDict)
+ return loads(as_json, object_pairs_hook=collections.OrderedDict)
# ----------------------------------------------------------------------
# I/O Methods
@@ -2410,7 +2413,7 @@ def to_json(
Examples
--------
- >>> import json
+ >>> from json import loads, dumps
>>> df = pd.DataFrame(
... [["a", "b"], ["c", "d"]],
... index=["row 1", "row 2"],
@@ -2418,8 +2421,8 @@ def to_json(
... )
>>> result = df.to_json(orient="split")
- >>> parsed = json.loads(result)
- >>> json.dumps(parsed, indent=4) # doctest: +SKIP
+ >>> parsed = loads(result)
+ >>> dumps(parsed, indent=4) # doctest: +SKIP
{{
"columns": [
"col 1",
@@ -2445,8 +2448,8 @@ def to_json(
Note that index labels are not preserved with this encoding.
>>> result = df.to_json(orient="records")
- >>> parsed = json.loads(result)
- >>> json.dumps(parsed, indent=4) # doctest: +SKIP
+ >>> parsed = loads(result)
+ >>> dumps(parsed, indent=4) # doctest: +SKIP
[
{{
"col 1": "a",
@@ -2461,8 +2464,8 @@ def to_json(
Encoding/decoding a Dataframe using ``'index'`` formatted JSON:
>>> result = df.to_json(orient="index")
- >>> parsed = json.loads(result)
- >>> json.dumps(parsed, indent=4) # doctest: +SKIP
+ >>> parsed = loads(result)
+ >>> dumps(parsed, indent=4) # doctest: +SKIP
{{
"row 1": {{
"col 1": "a",
@@ -2477,8 +2480,8 @@ def to_json(
Encoding/decoding a Dataframe using ``'columns'`` formatted JSON:
>>> result = df.to_json(orient="columns")
- >>> parsed = json.loads(result)
- >>> json.dumps(parsed, indent=4) # doctest: +SKIP
+ >>> parsed = loads(result)
+ >>> dumps(parsed, indent=4) # doctest: +SKIP
{{
"col 1": {{
"row 1": "a",
@@ -2493,8 +2496,8 @@ def to_json(
Encoding/decoding a Dataframe using ``'values'`` formatted JSON:
>>> result = df.to_json(orient="values")
- >>> parsed = json.loads(result)
- >>> json.dumps(parsed, indent=4) # doctest: +SKIP
+ >>> parsed = loads(result)
+ >>> dumps(parsed, indent=4) # doctest: +SKIP
[
[
"a",
@@ -2509,8 +2512,8 @@ def to_json(
Encoding with Table Schema:
>>> result = df.to_json(orient="table")
- >>> parsed = json.loads(result)
- >>> json.dumps(parsed, indent=4) # doctest: +SKIP
+ >>> parsed = loads(result)
+ >>> dumps(parsed, indent=4) # doctest: +SKIP
{{
"schema": {{
"fields": [
@@ -5169,7 +5172,7 @@ def reindex(self: NDFrameT, *args, **kwargs) -> NDFrameT:
# construct the args
axes, kwargs = self._construct_axes_from_arguments(args, kwargs)
- method = missing.clean_reindex_fill_method(kwargs.pop("method", None))
+ method = clean_reindex_fill_method(kwargs.pop("method", None))
level = kwargs.pop("level", None)
copy = kwargs.pop("copy", None)
limit = kwargs.pop("limit", None)
@@ -9201,7 +9204,7 @@ def align(
4 600.0 700.0 800.0 900.0 NaN
"""
- method = missing.clean_fill_method(method)
+ method = clean_fill_method(method)
if broadcast_axis == 1 and self.ndim != other.ndim:
if isinstance(self, ABCSeries):
| - [ ] closes #49656
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49960 | 2022-11-29T20:46:18Z | 2022-11-29T23:13:04Z | 2022-11-29T23:13:04Z | 2022-11-29T23:13:04Z |
48779-doc-MonthEnd | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 8e022ac662d21..82f134a348fee 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -2412,11 +2412,28 @@ cdef class MonthEnd(MonthOffset):
"""
DateOffset of one month end.
+ MonthEnd goes to the next date which is an end of the month.
+ To get the end of the current month pass the parameter n equals 0.
+
+ See Also
+ --------
+ :class:`~pandas.tseries.offsets.DateOffset` : Standard kind of date increment.
+
Examples
--------
- >>> ts = pd.Timestamp(2022, 1, 1)
+ >>> ts = pd.Timestamp(2022, 1, 30)
>>> ts + pd.offsets.MonthEnd()
Timestamp('2022-01-31 00:00:00')
+
+ >>> ts = pd.Timestamp(2022, 1, 31)
+ >>> ts + pd.offsets.MonthEnd()
+ Timestamp('2022-02-28 00:00:00')
+
+ If you want to get the end of the current month pass the parameter n equals 0:
+
+ >>> ts = pd.Timestamp(2022, 1, 31)
+ >>> ts + pd.offsets.MonthEnd(0)
+ Timestamp('2022-01-31 00:00:00')
"""
_period_dtype_code = PeriodDtypeCode.M
_prefix = "M"
| - [x] closes #48779
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
As proposed by @MarcoGorelli, I updated the docs for `MonthEnd` and added one more example to highlight the "last day of the month" behavior.
I checked that the documentation build shows the new description. | https://api.github.com/repos/pandas-dev/pandas/pulls/49958 | 2022-11-29T09:40:50Z | 2022-11-30T16:31:31Z | 2022-11-30T16:31:31Z | 2022-11-30T20:41:54Z |
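Complementing the docstring examples in the diff, one way to see the `n=0` behavior is that it matches the offset's `rollforward` semantics — a small illustrative sketch:

```python
import pandas as pd

ts = pd.Timestamp(2022, 1, 31)
# the default n=1 moves past a date that is already a month end...
assert ts + pd.offsets.MonthEnd() == pd.Timestamp(2022, 2, 28)
# ...while n=0 leaves an on-offset date in place, which is exactly
# what rollforward does for dates already on the offset
assert ts + pd.offsets.MonthEnd(0) == ts
assert pd.offsets.MonthEnd().rollforward(ts) == ts
```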
STYLE #49656: format.py - 9.99/10 | diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index cdc21f04da43a..24c701bdca448 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -9,7 +9,7 @@
QUOTE_NONE,
QUOTE_NONNUMERIC,
)
-import decimal
+import decimal as dec
from functools import partial
from io import StringIO
import math
@@ -108,7 +108,7 @@
)
from pandas.io.formats.printing import (
adjoin,
- justify,
+ justify as jtf,
pprint_thing,
)
@@ -433,7 +433,7 @@ def len(self, text: str) -> int:
return len(text)
def justify(self, texts: Any, max_len: int, mode: str = "right") -> list[str]:
- return justify(texts, max_len, mode=mode)
+ return jtf(texts, max_len, mode=mode)
def adjoin(self, space: int, *lists, **kwargs) -> str:
return adjoin(space, *lists, strlen=self.len, justfunc=self.justify, **kwargs)
@@ -2071,12 +2071,12 @@ def __call__(self, num: float) -> str:
@return: engineering formatted string
"""
- dnum = decimal.Decimal(str(num))
+ dnum = dec.Decimal(str(num))
- if decimal.Decimal.is_nan(dnum):
+ if dec.Decimal.is_nan(dnum):
return "NaN"
- if decimal.Decimal.is_infinite(dnum):
+ if dec.Decimal.is_infinite(dnum):
return "inf"
sign = 1
@@ -2086,9 +2086,9 @@ def __call__(self, num: float) -> str:
dnum = -dnum
if dnum != 0:
- pow10 = decimal.Decimal(int(math.floor(dnum.log10() / 3) * 3))
+ pow10 = dec.Decimal(int(math.floor(dnum.log10() / 3) * 3))
else:
- pow10 = decimal.Decimal(0)
+ pow10 = dec.Decimal(0)
pow10 = pow10.min(max(self.ENG_PREFIXES.keys()))
pow10 = pow10.max(min(self.ENG_PREFIXES.keys()))
| - [ ] xref #49656
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49957 | 2022-11-29T06:07:09Z | 2022-11-29T08:09:04Z | null | 2022-11-29T08:10:36Z |
Update format.py | diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index cdc21f04da43a..cbb56e2d19cc0 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -9,7 +9,7 @@
QUOTE_NONE,
QUOTE_NONNUMERIC,
)
-import decimal
+import decimal as dec
from functools import partial
from io import StringIO
import math
@@ -108,7 +108,7 @@
)
from pandas.io.formats.printing import (
adjoin,
- justify,
+ justify as jtf,
pprint_thing,
)
@@ -433,7 +433,7 @@ def len(self, text: str) -> int:
return len(text)
def justify(self, texts: Any, max_len: int, mode: str = "right") -> list[str]:
- return justify(texts, max_len, mode=mode)
+ return jtf(texts, max_len, mode=mode)
def adjoin(self, space: int, *lists, **kwargs) -> str:
return adjoin(space, *lists, strlen=self.len, justfunc=self.justify, **kwargs)
@@ -1794,12 +1794,12 @@ def _format_datetime64_dateonly(
def get_format_datetime64(
- is_dates_only: bool, nat_rep: str = "NaT", date_format: str | None = None
+ dates_only: bool, nat_rep: str = "NaT", date_format: str | None = None
) -> Callable:
"""Return a formatter callable taking a datetime64 as input and providing
a string as output"""
- if is_dates_only:
+ if dates_only:
return lambda x: _format_datetime64_dateonly(
x, nat_rep=nat_rep, date_format=date_format
)
@@ -2071,12 +2071,12 @@ def __call__(self, num: float) -> str:
@return: engineering formatted string
"""
- dnum = decimal.Decimal(str(num))
+ dnum = dec.Decimal(str(num))
- if decimal.Decimal.is_nan(dnum):
+ if dec.Decimal.is_nan(dnum):
return "NaN"
- if decimal.Decimal.is_infinite(dnum):
+ if dec.Decimal.is_infinite(dnum):
return "inf"
sign = 1
@@ -2086,9 +2086,9 @@ def __call__(self, num: float) -> str:
dnum = -dnum
if dnum != 0:
- pow10 = decimal.Decimal(int(math.floor(dnum.log10() / 3) * 3))
+ pow10 = dec.Decimal(int(math.floor(dnum.log10() / 3) * 3))
else:
- pow10 = decimal.Decimal(0)
+ pow10 = dec.Decimal(0)
pow10 = pow10.min(max(self.ENG_PREFIXES.keys()))
pow10 = pow10.max(min(self.ENG_PREFIXES.keys()))
| - [ ] xref #49656
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49956 | 2022-11-29T05:25:56Z | 2022-11-29T06:00:19Z | null | 2022-11-29T06:00:19Z |
Remove defaults channel from conda asv conf | diff --git a/asv_bench/asv.conf.json b/asv_bench/asv.conf.json
index b1feb1d0af79c..16f8f28b66d31 100644
--- a/asv_bench/asv.conf.json
+++ b/asv_bench/asv.conf.json
@@ -57,7 +57,7 @@
"odfpy": [],
"jinja2": [],
},
- "conda_channels": ["defaults", "conda-forge"],
+ "conda_channels": ["conda-forge"],
// Combinations of libraries/python versions can be excluded/included
// from the set to test. Each entry is a dictionary containing additional
// key-value pairs to include/exclude.
| Looks like we only use conda-forge elsewhere so assuming this is an oversight | https://api.github.com/repos/pandas-dev/pandas/pulls/49955 | 2022-11-29T04:24:17Z | 2022-11-29T08:20:11Z | 2022-11-29T08:20:11Z | 2022-11-29T17:24:49Z |
REF: move methods to TimelikeOps | diff --git a/pandas/_libs/tslibs/offsets.pyi b/pandas/_libs/tslibs/offsets.pyi
index eacdf17b0b4d3..dfa72a9592e3c 100644
--- a/pandas/_libs/tslibs/offsets.pyi
+++ b/pandas/_libs/tslibs/offsets.pyi
@@ -14,6 +14,7 @@ import numpy as np
from pandas._typing import npt
+from .nattype import NaTType
from .timedeltas import Timedelta
_BaseOffsetT = TypeVar("_BaseOffsetT", bound=BaseOffset)
@@ -51,6 +52,8 @@ class BaseOffset:
def __radd__(self, other: _DatetimeT) -> _DatetimeT: ...
@overload
def __radd__(self, other: _TimedeltaT) -> _TimedeltaT: ...
+ @overload
+ def __radd__(self, other: NaTType) -> NaTType: ...
def __sub__(self: _BaseOffsetT, other: BaseOffset) -> _BaseOffsetT: ...
@overload
def __rsub__(self, other: npt.NDArray[np.object_]) -> npt.NDArray[np.object_]: ...
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index be20d825b0c80..cd3cba230980b 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -162,11 +162,10 @@ class DatetimeLikeArrayMixin(OpsMixin, NDArrayBackedExtensionArray):
Shared Base/Mixin class for DatetimeArray, TimedeltaArray, PeriodArray
Assumes that __new__/__init__ defines:
- _data
- _freq
+ _ndarray
and that the inheriting class has methods:
- _generate_range
+ freq
"""
# _infer_matches -> which infer_dtype strings are close enough to our own
@@ -174,6 +173,7 @@ class DatetimeLikeArrayMixin(OpsMixin, NDArrayBackedExtensionArray):
_is_recognized_dtype: Callable[[DtypeObj], bool]
_recognized_scalars: tuple[type, ...]
_ndarray: np.ndarray
+ freq: BaseOffset | None
@cache_readonly
def _can_hold_na(self) -> bool:
@@ -876,24 +876,6 @@ def _maybe_mask_results(
# ------------------------------------------------------------------
# Frequency Properties/Methods
- @property
- def freq(self):
- """
- Return the frequency object if it is set, otherwise None.
- """
- return self._freq
-
- @freq.setter
- def freq(self, value) -> None:
- if value is not None:
- value = to_offset(value)
- self._validate_frequency(self, value)
-
- if self.ndim > 1:
- raise ValueError("Cannot set freq with ndim > 1")
-
- self._freq = value
-
@property
def freqstr(self) -> str | None:
"""
@@ -935,51 +917,6 @@ def resolution(self) -> str:
# error: Item "None" of "Optional[Any]" has no attribute "attrname"
return self._resolution_obj.attrname # type: ignore[union-attr]
- @classmethod
- def _validate_frequency(cls, index, freq, **kwargs):
- """
- Validate that a frequency is compatible with the values of a given
- Datetime Array/Index or Timedelta Array/Index
-
- Parameters
- ----------
- index : DatetimeIndex or TimedeltaIndex
- The index on which to determine if the given frequency is valid
- freq : DateOffset
- The frequency to validate
- """
- # TODO: this is not applicable to PeriodArray, move to correct Mixin
- inferred = index.inferred_freq
- if index.size == 0 or inferred == freq.freqstr:
- return None
-
- try:
- on_freq = cls._generate_range(
- start=index[0], end=None, periods=len(index), freq=freq, **kwargs
- )
- if not np.array_equal(index.asi8, on_freq.asi8):
- raise ValueError
- except ValueError as e:
- if "non-fixed" in str(e):
- # non-fixed frequencies are not meaningful for timedelta64;
- # we retain that error message
- raise e
- # GH#11587 the main way this is reached is if the `np.array_equal`
- # check above is False. This can also be reached if index[0]
- # is `NaT`, in which case the call to `cls._generate_range` will
- # raise a ValueError, which we re-raise with a more targeted
- # message.
- raise ValueError(
- f"Inferred frequency {inferred} from passed values "
- f"does not conform to passed frequency {freq.freqstr}"
- ) from e
-
- @classmethod
- def _generate_range(
- cls: type[DatetimeLikeArrayT], start, end, periods, freq, *args, **kwargs
- ) -> DatetimeLikeArrayT:
- raise AbstractMethodError(cls)
-
# monotonicity/uniqueness properties are called via frequencies.infer_freq,
# see GH#23789
@@ -1381,9 +1318,8 @@ def __add__(self, other):
# as is_integer returns True for these
if not is_period_dtype(self.dtype):
raise integer_op_not_supported(self)
- result = cast("PeriodArray", self)._addsub_int_array_or_scalar(
- other * self.freq.n, operator.add
- )
+ obj = cast("PeriodArray", self)
+ result = obj._addsub_int_array_or_scalar(other * obj.freq.n, operator.add)
# array-like others
elif is_timedelta64_dtype(other_dtype):
@@ -1398,9 +1334,8 @@ def __add__(self, other):
elif is_integer_dtype(other_dtype):
if not is_period_dtype(self.dtype):
raise integer_op_not_supported(self)
- result = cast("PeriodArray", self)._addsub_int_array_or_scalar(
- other * self.freq.n, operator.add
- )
+ obj = cast("PeriodArray", self)
+ result = obj._addsub_int_array_or_scalar(other * obj.freq.n, operator.add)
else:
# Includes Categorical, other ExtensionArrays
# For PeriodDtype, if self is a TimedeltaArray and other is a
@@ -1440,9 +1375,8 @@ def __sub__(self, other):
# as is_integer returns True for these
if not is_period_dtype(self.dtype):
raise integer_op_not_supported(self)
- result = cast("PeriodArray", self)._addsub_int_array_or_scalar(
- other * self.freq.n, operator.sub
- )
+ obj = cast("PeriodArray", self)
+ result = obj._addsub_int_array_or_scalar(other * obj.freq.n, operator.sub)
elif isinstance(other, Period):
result = self._sub_periodlike(other)
@@ -1463,9 +1397,8 @@ def __sub__(self, other):
elif is_integer_dtype(other_dtype):
if not is_period_dtype(self.dtype):
raise integer_op_not_supported(self)
- result = cast("PeriodArray", self)._addsub_int_array_or_scalar(
- other * self.freq.n, operator.sub
- )
+ obj = cast("PeriodArray", self)
+ result = obj._addsub_int_array_or_scalar(other * obj.freq.n, operator.sub)
else:
# Includes ExtensionArrays, float_dtype
return NotImplemented
@@ -1945,6 +1878,73 @@ def __init__(
def _validate_dtype(cls, values, dtype):
raise AbstractMethodError(cls)
+ @property
+ def freq(self):
+ """
+ Return the frequency object if it is set, otherwise None.
+ """
+ return self._freq
+
+ @freq.setter
+ def freq(self, value) -> None:
+ if value is not None:
+ value = to_offset(value)
+ self._validate_frequency(self, value)
+
+ if self.ndim > 1:
+ raise ValueError("Cannot set freq with ndim > 1")
+
+ self._freq = value
+
+ @final
+ @classmethod
+ def _validate_frequency(
+ cls, index: TimelikeOps, freq: BaseOffset, **kwargs
+ ) -> None:
+ """
+ Validate that a frequency is compatible with the values of a given
+ Datetime Array/Index or Timedelta Array/Index
+
+ Parameters
+ ----------
+ index : DatetimeArray or TimedeltaArray
+ The index on which to determine if the given frequency is valid
+ freq : DateOffset
+ The frequency to validate
+ **kwargs : dict
+ DatetimeArray may pass "ambiguous"
+ """
+ inferred = index.inferred_freq
+ if index.size == 0 or inferred == freq.freqstr:
+ return None
+
+ try:
+ on_freq = cls._generate_range(
+ start=index[0], end=None, periods=len(index), freq=freq, **kwargs
+ )
+ if not np.array_equal(index.asi8, on_freq.asi8):
+ raise ValueError
+ except ValueError as err:
+ if "non-fixed" in str(err):
+ # non-fixed frequencies are not meaningful for timedelta64;
+ # we retain that error message
+ raise err
+ # GH#11587 the main way this is reached is if the `np.array_equal`
+ # check above is False. This can also be reached if index[0]
+ # is `NaT`, in which case the call to `cls._generate_range` will
+ # raise a ValueError, which we re-raise with a more targeted
+ # message.
+ raise ValueError(
+ f"Inferred frequency {inferred} from passed values "
+ f"does not conform to passed frequency {freq.freqstr}"
+ ) from err
+
+ @classmethod
+ def _generate_range(
+ cls: type[TimelikeOpsT], start, end, periods, freq, *args, **kwargs
+ ) -> TimelikeOpsT:
+ raise AbstractMethodError(cls)
+
# --------------------------------------------------------------
@cache_readonly
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 41ca630e86c10..1a4cf63cd3b2d 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -353,9 +353,9 @@ def _check_compatible_with(self, other) -> None:
def dtype(self) -> PeriodDtype:
return self._dtype
- # error: Read-only property cannot override read-write property
- @property # type: ignore[misc]
- def freq(self) -> BaseOffset:
+ # error: Cannot override writeable attribute with read-only property
+ @property
+ def freq(self) -> BaseOffset: # type: ignore[override]
"""
Return the frequency object for this PeriodArray.
"""
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index a6c396555e9a7..d86d4665626ef 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -22,7 +22,6 @@
lib,
)
from pandas._libs.tslibs import (
- BaseOffset,
Resolution,
Tick,
parsing,
@@ -82,7 +81,7 @@
DatetimeLikeArrayMixin,
cache=True,
)
-@inherit_names(["mean", "freq", "freqstr"], DatetimeLikeArrayMixin)
+@inherit_names(["mean", "freqstr"], DatetimeLikeArrayMixin)
class DatetimeIndexOpsMixin(NDArrayBackedExtensionIndex):
"""
Common ops mixin to support a unified interface datetimelike Index.
@@ -91,20 +90,26 @@ class DatetimeIndexOpsMixin(NDArrayBackedExtensionIndex):
_is_numeric_dtype = False
_can_hold_strings = False
_data: DatetimeArray | TimedeltaArray | PeriodArray
- freq: BaseOffset | None
freqstr: str | None
_resolution_obj: Resolution
+ @property
+ def freq(self):
+ return self._data.freq
+
+ @final
@property
def asi8(self) -> npt.NDArray[np.int64]:
return self._data.asi8
# ------------------------------------------------------------------------
+ @final
@cache_readonly
def hasnans(self) -> bool:
return self._data._hasna
+ @final
def equals(self, other: Any) -> bool:
"""
Determines if two Index objects contain the same elements.
@@ -141,6 +146,7 @@ def equals(self, other: Any) -> bool:
return np.array_equal(self.asi8, other.asi8)
+ @final
@Appender(Index.__contains__.__doc__)
def __contains__(self, key: Any) -> bool:
hash(key)
@@ -157,6 +163,7 @@ def _convert_tolerance(self, tolerance, target):
# --------------------------------------------------------------------
# Rendering Methods
+ @final
def format(
self,
name: bool = False,
@@ -180,6 +187,7 @@ def format(
return self._format_with_header(header, na_rep=na_rep, date_format=date_format)
+ @final
def _format_with_header(
self, header: list[str], na_rep: str = "NaT", date_format: str | None = None
) -> list[str]:
@@ -192,6 +200,7 @@ def _format_with_header(
def _formatter_func(self):
return self._data._formatter()
+ @final
def _format_attrs(self):
"""
Return a list of tuples of the (attr,formatted_value).
@@ -206,6 +215,7 @@ def _format_attrs(self):
attrs.append(("freq", freq))
return attrs
+ @final
@Appender(Index._summary.__doc__)
def _summary(self, name=None) -> str:
result = super()._summary(name=name)
@@ -389,15 +399,18 @@ class DatetimeTimedeltaMixin(DatetimeIndexOpsMixin):
_join_precedence = 10
- def _with_freq(self, freq):
+ @final
+ def _with_freq(self: _TDT, freq) -> _TDT:
arr = self._data._with_freq(freq)
return type(self)._simple_new(arr, name=self._name)
+ @final
@property
def values(self) -> np.ndarray:
# NB: For Datetime64TZ this is lossy
return self._data._ndarray
+ @final
@doc(DatetimeIndexOpsMixin.shift)
def shift(self: _TDT, periods: int = 1, freq=None) -> _TDT:
if freq is not None and freq != self.freq:
@@ -427,6 +440,7 @@ def shift(self: _TDT, periods: int = 1, freq=None) -> _TDT:
# --------------------------------------------------------------------
# Set Operation Methods
+ @final
@cache_readonly
def _as_range_index(self) -> RangeIndex:
# Convert our i8 representations to RangeIndex
@@ -439,6 +453,7 @@ def _as_range_index(self) -> RangeIndex:
def _can_range_setop(self, other):
return isinstance(self.freq, Tick) and isinstance(other.freq, Tick)
+ @final
def _wrap_range_setop(self, other, res_i8):
new_freq = None
if not len(res_i8):
@@ -458,6 +473,7 @@ def _wrap_range_setop(self, other, res_i8):
)
return self._wrap_setop_result(other, result)
+ @final
def _range_intersect(self, other, sort):
# Dispatch to RangeIndex intersection logic.
left = self._as_range_index
@@ -465,6 +481,7 @@ def _range_intersect(self, other, sort):
res_i8 = left.intersection(right, sort=sort)
return self._wrap_range_setop(other, res_i8)
+ @final
def _range_union(self, other, sort):
# Dispatch to RangeIndex union logic.
left = self._as_range_index
@@ -472,6 +489,7 @@ def _range_union(self, other, sort):
res_i8 = left.union(right, sort=sort)
return self._wrap_range_setop(other, res_i8)
+ @final
def _intersection(self, other: Index, sort: bool = False) -> Index:
"""
intersection specialized to the case with matching dtypes and both non-empty.
@@ -494,6 +512,7 @@ def _intersection(self, other: Index, sort: bool = False) -> Index:
else:
return self._fast_intersect(other, sort)
+ @final
def _fast_intersect(self, other, sort):
# to make our life easier, "sort" the two ranges
if self[0] <= other[0]:
@@ -514,6 +533,7 @@ def _fast_intersect(self, other, sort):
return result
+ @final
def _can_fast_intersect(self: _T, other: _T) -> bool:
# Note: we only get here with len(self) > 0 and len(other) > 0
if self.freq is None:
@@ -532,6 +552,7 @@ def _can_fast_intersect(self: _T, other: _T) -> bool:
# GH#42104
return self.freq.n == 1
+ @final
def _can_fast_union(self: _T, other: _T) -> bool:
# Assumes that type(self) == type(other), as per the annotation
# The ability to fast_union also implies that `freq` should be
@@ -562,6 +583,7 @@ def _can_fast_union(self: _T, other: _T) -> bool:
# Only need to "adjoin", not overlap
return (right_start == left_end + freq) or right_start in left
+ @final
def _fast_union(self: _TDT, other: _TDT, sort=None) -> _TDT:
# Caller is responsible for ensuring self and other are non-empty
@@ -597,6 +619,7 @@ def _fast_union(self: _TDT, other: _TDT, sort=None) -> _TDT:
else:
return left
+ @final
def _union(self, other, sort):
# We are called by `union`, which is responsible for this validation
assert isinstance(other, type(self))
@@ -616,6 +639,7 @@ def _union(self, other, sort):
# --------------------------------------------------------------------
# Join Methods
+ @final
def _get_join_freq(self, other):
"""
Get the freq to attach to the result of a join operation.
@@ -625,16 +649,19 @@ def _get_join_freq(self, other):
freq = self.freq
return freq
+ @final
def _wrap_joined_index(self, joined, other):
assert other.dtype == self.dtype, (other.dtype, self.dtype)
result = super()._wrap_joined_index(joined, other)
result._data._freq = self._get_join_freq(other)
return result
+ @final
def _get_engine_target(self) -> np.ndarray:
# engine methods and libjoin methods need dt64/td64 values cast to i8
return self._data._ndarray.view("i8")
+ @final
def _from_join_target(self, result: np.ndarray):
# view e.g. i8 back to M8[ns]
result = result.view(self._data._ndarray.dtype)
@@ -643,6 +670,7 @@ def _from_join_target(self, result: np.ndarray):
# --------------------------------------------------------------------
# List-like Methods
+ @final
def _get_delete_freq(self, loc: int | slice | Sequence[int]):
"""
Find the `freq` for self.delete(loc).
@@ -665,6 +693,7 @@ def _get_delete_freq(self, loc: int | slice | Sequence[int]):
freq = self.freq
return freq
+ @final
def _get_insert_freq(self, loc: int, item):
"""
Find the `freq` for self.insert(loc, item).
@@ -692,12 +721,14 @@ def _get_insert_freq(self, loc: int, item):
freq = self.freq
return freq
+ @final
@doc(NDArrayBackedExtensionIndex.delete)
def delete(self, loc) -> DatetimeTimedeltaMixin:
result = super().delete(loc)
result._data._freq = self._get_delete_freq(loc)
return result
+ @final
@doc(NDArrayBackedExtensionIndex.insert)
def insert(self, loc: int, item):
result = super().insert(loc, item)
@@ -709,6 +740,7 @@ def insert(self, loc: int, item):
# --------------------------------------------------------------------
# NDArray-Like Methods
+ @final
@Appender(_index_shared_docs["take"] % _index_doc_kwargs)
def take(
self,
| Broken off from branch working on removing .freq from DTA/TDA. | https://api.github.com/repos/pandas-dev/pandas/pulls/49952 | 2022-11-29T00:40:54Z | 2022-11-29T23:27:56Z | null | 2022-11-29T23:28:18Z |
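A minimal sketch of the behavior this refactor relocates (illustrative, not taken from the PR itself): the `freq` setter and `_validate_frequency` that move from `DatetimeLikeArrayMixin` to `TimelikeOps` are what back frequency inference and validation on `DatetimeIndex`. Constructing with `freq="infer"` attaches the inferred frequency, while a freq that does not match the values triggers the "does not conform" `ValueError` shown in the diff.

```python
import pandas as pd

# freq="infer" runs the inference path and stores the resulting offset.
dti = pd.DatetimeIndex(["2020-01-01", "2020-01-02", "2020-01-03"], freq="infer")
assert dti.freq == pd.offsets.Day()   # daily frequency inferred and set
assert dti.inferred_freq == "D"       # inference agrees with the stored freq

# A freq that does not conform to the values is rejected by the validation
# logic this PR moves into TimelikeOps._validate_frequency.
try:
    pd.DatetimeIndex(["2020-01-01", "2020-01-02", "2020-01-03"], freq="2D")
except ValueError as err:
    assert "does not conform" in str(err)
else:
    raise AssertionError("expected ValueError for non-conforming freq")
```

Note that after this refactor the index-level `freq` shown in the diff is a read-only property delegating to `self._data.freq`; the setter with validation lives on the array mixin.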
DEPR: Change numeric_only default to False in remaining groupby methods | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 42170aaa09978..9abd08004edaa 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -572,7 +572,7 @@ Removal of prior version deprecations/changes
- Changed default of ``numeric_only`` to ``False`` in all DataFrame methods with that argument (:issue:`46096`, :issue:`46906`)
- Changed default of ``numeric_only`` to ``False`` in :meth:`Series.rank` (:issue:`47561`)
- Enforced deprecation of silently dropping nuisance columns in groupby and resample operations when ``numeric_only=False`` (:issue:`41475`)
-- Changed default of ``numeric_only`` to ``False`` in various :class:`.DataFrameGroupBy` methods (:issue:`46072`)
+- Changed default of ``numeric_only`` in various :class:`.DataFrameGroupBy` methods; all methods now default to ``numeric_only=False`` (:issue:`46072`)
- Changed default of ``numeric_only`` to ``False`` in :class:`.Resampler` methods (:issue:`47177`)
-
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 7e9163b87cee6..a80892a145a70 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -87,7 +87,6 @@
_agg_template,
_apply_docs,
_transform_template,
- warn_dropping_nuisance_columns_deprecated,
)
from pandas.core.groupby.grouper import get_grouper
from pandas.core.indexes.api import (
@@ -438,7 +437,7 @@ def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
)
def _cython_transform(
- self, how: str, numeric_only: bool = True, axis: AxisInt = 0, **kwargs
+ self, how: str, numeric_only: bool = False, axis: AxisInt = 0, **kwargs
):
assert axis == 0 # handled by caller
@@ -1333,13 +1332,12 @@ def _wrap_applied_output_series(
def _cython_transform(
self,
how: str,
- numeric_only: bool | lib.NoDefault = lib.no_default,
+ numeric_only: bool = False,
axis: AxisInt = 0,
**kwargs,
) -> DataFrame:
assert axis == 0 # handled by caller
# TODO: no tests with self.ndim == 1 for DataFrameGroupBy
- numeric_only_bool = self._resolve_numeric_only(how, numeric_only, axis)
# With self.axis == 0, we have multi-block tests
# e.g. test_rank_min_int, test_cython_transform_frame
@@ -1347,8 +1345,7 @@ def _cython_transform(
# With self.axis == 1, _get_data_to_aggregate does a transpose
# so we always have a single block.
mgr: Manager2D = self._get_data_to_aggregate()
- orig_mgr_len = len(mgr)
- if numeric_only_bool:
+ if numeric_only:
mgr = mgr.get_numeric_data(copy=False)
def arr_func(bvalues: ArrayLike) -> ArrayLike:
@@ -1358,12 +1355,9 @@ def arr_func(bvalues: ArrayLike) -> ArrayLike:
# We could use `mgr.apply` here and not have to set_axis, but
# we would have to do shape gymnastics for ArrayManager compat
- res_mgr = mgr.grouped_reduce(arr_func, ignore_failures=False)
+ res_mgr = mgr.grouped_reduce(arr_func)
res_mgr.set_axis(1, mgr.axes[1])
- if len(res_mgr) < orig_mgr_len:
- warn_dropping_nuisance_columns_deprecated(type(self), how, numeric_only)
-
res_df = self.obj._constructor(res_mgr)
if self.axis == 1:
res_df = res_df.T
@@ -1493,15 +1487,8 @@ def _transform_item_by_item(self, obj: DataFrame, wrapper) -> DataFrame:
output = {}
inds = []
for i, (colname, sgb) in enumerate(self._iterate_column_groupbys(obj)):
- try:
- output[i] = sgb.transform(wrapper)
- except TypeError:
- # e.g. trying to call nanmean with string values
- warn_dropping_nuisance_columns_deprecated(
- type(self), "transform", numeric_only=False
- )
- else:
- inds.append(i)
+ output[i] = sgb.transform(wrapper)
+ inds.append(i)
if not output:
raise TypeError("Transform function invalid for data types")
@@ -2243,7 +2230,7 @@ def corr(
self,
method: str | Callable[[np.ndarray, np.ndarray], float] = "pearson",
min_periods: int = 1,
- numeric_only: bool | lib.NoDefault = lib.no_default,
+ numeric_only: bool = False,
) -> DataFrame:
result = self._op_via_apply(
"corr", method=method, min_periods=min_periods, numeric_only=numeric_only
@@ -2255,7 +2242,7 @@ def cov(
self,
min_periods: int | None = None,
ddof: int | None = 1,
- numeric_only: bool | lib.NoDefault = lib.no_default,
+ numeric_only: bool = False,
) -> DataFrame:
result = self._op_via_apply(
"cov", min_periods=min_periods, ddof=ddof, numeric_only=numeric_only
@@ -2316,7 +2303,7 @@ def corrwith(
axis: Axis = 0,
drop: bool = False,
method: CorrelationMethod = "pearson",
- numeric_only: bool | lib.NoDefault = lib.no_default,
+ numeric_only: bool = False,
) -> DataFrame:
result = self._op_via_apply(
"corrwith",
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index b3f6bb3edb9da..497e0ef724373 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1007,15 +1007,8 @@ def _op_via_apply(self, name: str, *args, **kwargs):
if kwargs.get("axis", None) is None or kwargs.get("axis") is lib.no_default:
kwargs["axis"] = self.axis
- numeric_only = kwargs.get("numeric_only", lib.no_default)
-
def curried(x):
- with warnings.catch_warnings():
- # Catch any warnings from dispatch to DataFrame; we'll emit
- # a warning for groupby below
- match = "The default value of numeric_only "
- warnings.filterwarnings("ignore", match, FutureWarning)
- return f(x, *args, **kwargs)
+ return f(x, *args, **kwargs)
# preserve the name so we can detect it when calling plot methods,
# to avoid duplicates
@@ -1037,13 +1030,6 @@ def curried(x):
not_indexed_same=not is_transform,
)
- if self._selected_obj.ndim != 1 and self.axis != 1 and result.ndim != 1:
- missing = self._obj_with_exclusions.columns.difference(result.columns)
- if len(missing) > 0:
- warn_dropping_nuisance_columns_deprecated(
- type(self), name, numeric_only
- )
-
if self.grouper.has_dropped_na and is_transform:
# result will have dropped rows due to nans, fill with null
# and ensure index is ordered same as the input
@@ -1308,80 +1294,6 @@ def _wrap_applied_output(
):
raise AbstractMethodError(self)
- def _resolve_numeric_only(
- self, how: str, numeric_only: bool | lib.NoDefault, axis: AxisInt
- ) -> bool:
- """
- Determine subclass-specific default value for 'numeric_only'.
-
- For SeriesGroupBy we want the default to be False (to match Series behavior).
- For DataFrameGroupBy we want it to be True (for backwards-compat).
-
- Parameters
- ----------
- numeric_only : bool or lib.no_default
- axis : int
- Axis passed to the groupby op (not self.axis).
-
- Returns
- -------
- bool
- """
- # GH#41291
- if numeric_only is lib.no_default:
- # i.e. not explicitly passed by user
- if self.obj.ndim == 2:
- # i.e. DataFrameGroupBy
- numeric_only = axis != 1
- # GH#42395 GH#43108 GH#43154
- # Regression from 1.2.5 to 1.3 caused object columns to be dropped
- if self.axis:
- obj = self._obj_with_exclusions.T
- else:
- obj = self._obj_with_exclusions
- check = obj._get_numeric_data()
- if len(obj.columns) and not len(check.columns) and not obj.empty:
- numeric_only = False
-
- else:
- numeric_only = False
-
- if numeric_only and self.obj.ndim == 1 and not is_numeric_dtype(self.obj.dtype):
- # GH#47500
- warnings.warn(
- f"{type(self).__name__}.{how} called with "
- f"numeric_only={numeric_only} and dtype {self.obj.dtype}. This will "
- "raise a TypeError in a future version of pandas",
- category=FutureWarning,
- stacklevel=find_stack_level(),
- )
- raise NotImplementedError(
- f"{type(self).__name__}.{how} does not implement numeric_only"
- )
-
- return numeric_only
-
- def _maybe_warn_numeric_only_depr(
- self, how: str, result: DataFrame | Series, numeric_only: bool | lib.NoDefault
- ) -> None:
- """Emit warning on numeric_only behavior deprecation when appropriate.
-
- Parameters
- ----------
- how : str
- Groupby kernel name.
- result :
- Result of the groupby operation.
- numeric_only : bool or lib.no_default
- Argument as passed by user.
- """
- if (
- self._obj_with_exclusions.ndim != 1
- and result.ndim > 1
- and len(result.columns) < len(self._obj_with_exclusions.columns)
- ):
- warn_dropping_nuisance_columns_deprecated(type(self), how, numeric_only)
-
# -----------------------------------------------------------------
# numba
@@ -1606,9 +1518,7 @@ def _python_apply_general(
)
@final
- def _python_agg_general(
- self, func, *args, raise_on_typeerror: bool = False, **kwargs
- ):
+ def _python_agg_general(self, func, *args, **kwargs):
func = com.is_builtin_func(func)
f = lambda x: func(x, *args, **kwargs)
@@ -1621,18 +1531,7 @@ def _python_agg_general(
for idx, obj in enumerate(self._iterate_slices()):
name = obj.name
-
- try:
- # if this function is invalid for this dtype, we will ignore it.
- result = self.grouper.agg_series(obj, f)
- except TypeError:
- if raise_on_typeerror:
- raise
- warn_dropping_nuisance_columns_deprecated(
- type(self), "agg", numeric_only=False
- )
- continue
-
+ result = self.grouper.agg_series(obj, f)
key = base.OutputKey(label=name, position=idx)
output[key] = result
@@ -1644,7 +1543,7 @@ def _python_agg_general(
@final
def _agg_general(
self,
- numeric_only: bool | lib.NoDefault = True,
+ numeric_only: bool = False,
min_count: int = -1,
*,
alias: str,
@@ -1706,26 +1605,25 @@ def _cython_agg_general(
self,
how: str,
alt: Callable,
- numeric_only: bool | lib.NoDefault,
+ numeric_only: bool = False,
min_count: int = -1,
**kwargs,
):
# Note: we never get here with how="ohlc" for DataFrameGroupBy;
# that goes through SeriesGroupBy
- numeric_only_bool = self._resolve_numeric_only(how, numeric_only, axis=0)
data = self._get_data_to_aggregate()
is_ser = data.ndim == 1
- orig_len = len(data)
- if numeric_only_bool:
+ if numeric_only:
if is_ser and not is_numeric_dtype(self._selected_obj.dtype):
# GH#41291 match Series behavior
kwd_name = "numeric_only"
if how in ["any", "all"]:
kwd_name = "bool_only"
- raise NotImplementedError(
- f"{type(self).__name__}.{how} does not implement {kwd_name}."
+ raise TypeError(
+ f"Cannot use {kwd_name}={numeric_only} with "
+ f"{type(self).__name__}.{how} and non-numeric types."
)
if not is_ser:
data = data.get_numeric_data(copy=False)
@@ -1751,10 +1649,7 @@ def array_func(values: ArrayLike) -> ArrayLike:
# TypeError -> we may have an exception in trying to aggregate
# continue and exclude the block
- new_mgr = data.grouped_reduce(array_func, ignore_failures=False)
-
- if not is_ser and len(new_mgr) < orig_len:
- warn_dropping_nuisance_columns_deprecated(type(self), how, numeric_only)
+ new_mgr = data.grouped_reduce(array_func)
res = self._wrap_agged_manager(new_mgr)
if is_ser:
@@ -1764,7 +1659,7 @@ def array_func(values: ArrayLike) -> ArrayLike:
return res
def _cython_transform(
- self, how: str, numeric_only: bool = True, axis: AxisInt = 0, **kwargs
+ self, how: str, numeric_only: bool = False, axis: AxisInt = 0, **kwargs
):
raise AbstractMethodError(self)
@@ -2144,23 +2039,21 @@ def median(self, numeric_only: bool = False):
Parameters
----------
- numeric_only : bool, default True
+ numeric_only : bool, default False
Include only float, int, boolean columns.
.. versionchanged:: 2.0.0
- numeric_only no longer accepts ``None``.
+ numeric_only no longer accepts ``None`` and defaults to False.
Returns
-------
Series or DataFrame
Median of values within each group.
"""
- numeric_only_bool = self._resolve_numeric_only("median", numeric_only, axis=0)
-
result = self._cython_agg_general(
"median",
- alt=lambda x: Series(x).median(numeric_only=numeric_only_bool),
+ alt=lambda x: Series(x).median(numeric_only=numeric_only),
numeric_only=numeric_only,
)
return result.__finalize__(self.obj, method="groupby")
@@ -2221,10 +2114,8 @@ def std(
return np.sqrt(self._numba_agg_general(sliding_var, engine_kwargs, ddof))
else:
- # Resolve numeric_only so that var doesn't warn
- numeric_only_bool = self._resolve_numeric_only("std", numeric_only, axis=0)
if (
- numeric_only_bool
+ numeric_only
and self.obj.ndim == 1
and not is_numeric_dtype(self.obj.dtype)
):
@@ -2235,7 +2126,7 @@ def std(
result = self._get_cythonized_result(
libgroupby.group_var,
cython_dtype=np.dtype(np.float64),
- numeric_only=numeric_only_bool,
+ numeric_only=numeric_only,
needs_counts=True,
post_processing=lambda vals, inference: np.sqrt(vals),
ddof=ddof,
@@ -2319,7 +2210,7 @@ def sem(self, ddof: int = 1, numeric_only: bool = False):
ddof : int, default 1
Degrees of freedom.
- numeric_only : bool, default True
+ numeric_only : bool, default False
Include only `float`, `int` or `boolean` data.
.. versionadded:: 1.5.0
@@ -2333,14 +2224,12 @@ def sem(self, ddof: int = 1, numeric_only: bool = False):
Series or DataFrame
Standard error of the mean of values within each group.
"""
- # Reolve numeric_only so that std doesn't warn
if numeric_only and self.obj.ndim == 1 and not is_numeric_dtype(self.obj.dtype):
raise TypeError(
f"{type(self).__name__}.sem called with "
f"numeric_only={numeric_only} and dtype {self.obj.dtype}"
)
result = self.std(ddof=ddof, numeric_only=numeric_only)
- self._maybe_warn_numeric_only_depr("sem", result, numeric_only)
if result.ndim == 1:
result /= np.sqrt(self.count())
@@ -3107,7 +2996,7 @@ def quantile(
self,
q: float | AnyArrayLike = 0.5,
interpolation: str = "linear",
- numeric_only: bool | lib.NoDefault = lib.no_default,
+ numeric_only: bool = False,
):
"""
Return group values at the given quantile, a la numpy.percentile.
@@ -3118,11 +3007,15 @@ def quantile(
Value(s) between 0 and 1 providing the quantile(s) to compute.
interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'}
Method to use when the desired quantile falls between two points.
- numeric_only : bool, default True
+ numeric_only : bool, default False
Include only `float`, `int` or `boolean` data.
.. versionadded:: 1.5.0
+ .. versionchanged:: 2.0.0
+
+ numeric_only now defaults to ``False``.
+
Returns
-------
Series or DataFrame
@@ -3146,12 +3039,7 @@ def quantile(
a 2.0
b 3.0
"""
- numeric_only_bool = self._resolve_numeric_only("quantile", numeric_only, axis=0)
- if (
- numeric_only_bool
- and self.obj.ndim == 1
- and not is_numeric_dtype(self.obj.dtype)
- ):
+ if numeric_only and self.obj.ndim == 1 and not is_numeric_dtype(self.obj.dtype):
raise TypeError(
f"{type(self).__name__}.quantile called with "
f"numeric_only={numeric_only} and dtype {self.obj.dtype}"
@@ -3296,25 +3184,8 @@ def blk_func(values: ArrayLike) -> ArrayLike:
obj = self._obj_with_exclusions
is_ser = obj.ndim == 1
mgr = self._get_data_to_aggregate()
- data = mgr.get_numeric_data() if numeric_only_bool else mgr
- res_mgr = data.grouped_reduce(blk_func, ignore_failures=False)
-
- if (
- numeric_only is lib.no_default
- and not is_ser
- and len(res_mgr.items) != len(mgr.items)
- ):
- warn_dropping_nuisance_columns_deprecated(
- type(self), "quantile", numeric_only
- )
-
- if len(res_mgr.items) == 0:
- # re-call grouped_reduce to get the desired exception message
- mgr.grouped_reduce(blk_func, ignore_failures=False)
- # grouped_reduce _should_ raise, so this should not be reached
- raise TypeError( # pragma: no cover
- "All columns were dropped in grouped_reduce"
- )
+ data = mgr.get_numeric_data() if numeric_only else mgr
+ res_mgr = data.grouped_reduce(blk_func)
if is_ser:
res = self._wrap_agged_manager(res_mgr)
@@ -3613,9 +3484,8 @@ def cummin(
skipna = kwargs.get("skipna", True)
if axis != 0:
f = lambda x: np.minimum.accumulate(x, axis)
- numeric_only_bool = self._resolve_numeric_only("cummax", numeric_only, axis)
obj = self._selected_obj
- if numeric_only_bool:
+ if numeric_only:
obj = obj._get_numeric_data()
return self._python_apply_general(f, obj, is_transform=True)
@@ -3639,9 +3509,8 @@ def cummax(
skipna = kwargs.get("skipna", True)
if axis != 0:
f = lambda x: np.maximum.accumulate(x, axis)
- numeric_only_bool = self._resolve_numeric_only("cummax", numeric_only, axis)
obj = self._selected_obj
- if numeric_only_bool:
+ if numeric_only:
obj = obj._get_numeric_data()
return self._python_apply_general(f, obj, is_transform=True)
@@ -3654,7 +3523,7 @@ def _get_cythonized_result(
self,
base_func: Callable,
cython_dtype: np.dtype,
- numeric_only: bool | lib.NoDefault = lib.no_default,
+ numeric_only: bool = False,
needs_counts: bool = False,
needs_nullable: bool = False,
needs_mask: bool = False,
@@ -3670,7 +3539,7 @@ def _get_cythonized_result(
base_func : callable, Cythonized function to be called
cython_dtype : np.dtype
Type of the array that will be modified by the Cython call.
- numeric_only : bool, default True
+ numeric_only : bool, default False
Whether only numeric datatypes should be computed
needs_counts : bool, default False
Whether the counts should be a part of the Cython call
@@ -3701,9 +3570,6 @@ def _get_cythonized_result(
-------
`Series` or `DataFrame` with filled values
"""
- how = base_func.__name__
- numeric_only_bool = self._resolve_numeric_only(how, numeric_only, axis=0)
-
if post_processing and not callable(post_processing):
raise ValueError("'post_processing' must be a callable!")
if pre_processing and not callable(pre_processing):
@@ -3772,18 +3638,15 @@ def blk_func(values: ArrayLike) -> ArrayLike:
mgr = self._get_data_to_aggregate()
orig_mgr_len = len(mgr)
- if numeric_only_bool:
+ if numeric_only:
mgr = mgr.get_numeric_data()
- res_mgr = mgr.grouped_reduce(blk_func, ignore_failures=False)
+ res_mgr = mgr.grouped_reduce(blk_func)
if not is_ser and len(res_mgr.items) != orig_mgr_len:
- howstr = how.replace("group_", "")
- warn_dropping_nuisance_columns_deprecated(type(self), howstr, numeric_only)
-
if len(res_mgr.items) == 0:
# We re-call grouped_reduce to get the right exception message
- mgr.grouped_reduce(blk_func, ignore_failures=False)
+ mgr.grouped_reduce(blk_func)
# grouped_reduce _should_ raise, so this should not be reached
raise TypeError( # pragma: no cover
"All columns were dropped in grouped_reduce"
@@ -4331,27 +4194,3 @@ def _insert_quantile_level(idx: Index, qs: npt.NDArray[np.float64]) -> MultiInde
else:
mi = MultiIndex.from_product([idx, qs])
return mi
-
-
-def warn_dropping_nuisance_columns_deprecated(cls, how: str, numeric_only) -> None:
- if numeric_only is not lib.no_default and not numeric_only:
- # numeric_only was specified and falsey but still dropped nuisance columns
- warnings.warn(
- "Dropping invalid columns in "
- f"{cls.__name__}.{how} is deprecated. "
- "In a future version, a TypeError will be raised. "
- f"Before calling .{how}, select only columns which "
- "should be valid for the function.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- elif numeric_only is lib.no_default:
- warnings.warn(
- "The default value of numeric_only in "
- f"{cls.__name__}.{how} is deprecated. "
- "In a future version, numeric_only will default to False. "
- f"Either specify numeric_only or select only columns which "
- "should be valid for the function.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index 37ae9d103c8b5..feca755fd43db 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -213,7 +213,7 @@ def apply(
-------
ArrayManager
"""
- assert "filter" not in kwargs and "ignore_failures" not in kwargs
+ assert "filter" not in kwargs
align_keys = align_keys or []
result_arrays: list[np.ndarray] = []
@@ -923,15 +923,13 @@ def idelete(self, indexer) -> ArrayManager:
# --------------------------------------------------------------------
# Array-wise Operation
- def grouped_reduce(self: T, func: Callable, ignore_failures: bool = False) -> T:
+ def grouped_reduce(self: T, func: Callable) -> T:
"""
Apply grouped reduction function columnwise, returning a new ArrayManager.
Parameters
----------
func : grouped reduction function
- ignore_failures : bool, default False
- Whether to drop columns where func raises TypeError.
Returns
-------
@@ -943,13 +941,7 @@ def grouped_reduce(self: T, func: Callable, ignore_failures: bool = False) -> T:
for i, arr in enumerate(self.arrays):
# grouped_reduce functions all expect 2D arrays
arr = ensure_block_shape(arr, ndim=2)
- try:
- res = func(arr)
- except (TypeError, NotImplementedError):
- if not ignore_failures:
- raise
- continue
-
+ res = func(arr)
if res.ndim == 2:
# reverse of ensure_block_shape
assert res.shape[0] == 1
@@ -963,10 +955,7 @@ def grouped_reduce(self: T, func: Callable, ignore_failures: bool = False) -> T:
else:
index = Index(range(result_arrays[0].shape[0]))
- if ignore_failures:
- columns = self.items[np.array(result_indices, dtype="int64")]
- else:
- columns = self.items
+ columns = self.items
# error: Argument 1 to "ArrayManager" has incompatible type "List[ndarray]";
# expected "List[Union[ndarray, ExtensionArray]]"
diff --git a/pandas/core/internals/base.py b/pandas/core/internals/base.py
index 37aa60f1ee52d..8a0f2863d851f 100644
--- a/pandas/core/internals/base.py
+++ b/pandas/core/internals/base.py
@@ -189,12 +189,7 @@ def setitem_inplace(self, indexer, value) -> None:
arr[indexer] = value
- def grouped_reduce(self, func, ignore_failures: bool = False):
- """
- ignore_failures : bool, default False
- Not used; for compatibility with ArrayManager/BlockManager.
- """
-
+ def grouped_reduce(self, func):
arr = self.array
res = func(arr)
index = default_index(len(res))
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 3eca3756e1678..20cc087adab23 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -320,7 +320,7 @@ def apply(
-------
BlockManager
"""
- assert "filter" not in kwargs and "ignore_failures" not in kwargs
+ assert "filter" not in kwargs
align_keys = align_keys or []
result_blocks: list[Block] = []
@@ -1466,44 +1466,29 @@ def idelete(self, indexer) -> BlockManager:
# ----------------------------------------------------------------
# Block-wise Operation
- def grouped_reduce(self: T, func: Callable, ignore_failures: bool = False) -> T:
+ def grouped_reduce(self: T, func: Callable) -> T:
"""
Apply grouped reduction function blockwise, returning a new BlockManager.
Parameters
----------
func : grouped reduction function
- ignore_failures : bool, default False
- Whether to drop blocks where func raises TypeError.
Returns
-------
BlockManager
"""
result_blocks: list[Block] = []
- dropped_any = False
for blk in self.blocks:
if blk.is_object:
# split on object-dtype blocks bc some columns may raise
# while others do not.
for sb in blk._split():
- try:
- applied = sb.apply(func)
- except (TypeError, NotImplementedError):
- if not ignore_failures:
- raise
- dropped_any = True
- continue
+ applied = sb.apply(func)
result_blocks = extend_blocks(applied, result_blocks)
else:
- try:
- applied = blk.apply(func)
- except (TypeError, NotImplementedError):
- if not ignore_failures:
- raise
- dropped_any = True
- continue
+ applied = blk.apply(func)
result_blocks = extend_blocks(applied, result_blocks)
if len(result_blocks) == 0:
@@ -1511,10 +1496,6 @@ def grouped_reduce(self: T, func: Callable, ignore_failures: bool = False) -> T:
else:
index = Index(range(result_blocks[0].values.shape[-1]))
- if dropped_any:
- # faster to skip _combine if we haven't dropped any blocks
- return self._combine(result_blocks, copy=False, index=index)
-
return type(self).from_blocks(result_blocks, [self.axes[0], index])
def reduce(self: T, func: Callable) -> T:
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 2d3ff95504371..03b917edd357b 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -301,11 +301,12 @@ def test_wrap_agg_out(three_group):
def func(ser):
if ser.dtype == object:
- raise TypeError
+ raise TypeError("Test error message")
return ser.sum()
- with tm.assert_produces_warning(FutureWarning, match="Dropping invalid columns"):
- result = grouped.aggregate(func)
+ with pytest.raises(TypeError, match="Test error message"):
+ grouped.aggregate(func)
+ result = grouped[[c for c in three_group if c != "C"]].aggregate(func)
exp_grouped = three_group.loc[:, three_group.columns != "C"]
expected = exp_grouped.groupby(["A", "B"]).aggregate(func)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/groupby/aggregate/test_cython.py b/pandas/tests/groupby/aggregate/test_cython.py
index b8d2350cf6267..dc09a2e0ea6ad 100644
--- a/pandas/tests/groupby/aggregate/test_cython.py
+++ b/pandas/tests/groupby/aggregate/test_cython.py
@@ -92,9 +92,8 @@ def test_cython_agg_boolean():
def test_cython_agg_nothing_to_agg():
frame = DataFrame({"a": np.random.randint(0, 5, 50), "b": ["foo", "bar"] * 25})
- with tm.assert_produces_warning(FutureWarning, match="This will raise a TypeError"):
- with pytest.raises(NotImplementedError, match="does not implement"):
- frame.groupby("a")["b"].mean(numeric_only=True)
+ with pytest.raises(TypeError, match="Cannot use numeric_only=True"):
+ frame.groupby("a")["b"].mean(numeric_only=True)
with pytest.raises(TypeError, match="Could not convert (foo|bar)*"):
frame.groupby("a")["b"].mean()
@@ -116,9 +115,8 @@ def test_cython_agg_nothing_to_agg_with_dates():
"dates": pd.date_range("now", periods=50, freq="T"),
}
)
- with tm.assert_produces_warning(FutureWarning, match="This will raise a TypeError"):
- with pytest.raises(NotImplementedError, match="does not implement"):
- frame.groupby("b").dates.mean(numeric_only=True)
+ with pytest.raises(TypeError, match="Cannot use numeric_only=True"):
+ frame.groupby("b").dates.mean(numeric_only=True)
def test_cython_agg_frame_columns():
diff --git a/pandas/tests/groupby/aggregate/test_other.py b/pandas/tests/groupby/aggregate/test_other.py
index 9aa58e919ce24..6a89c72354d04 100644
--- a/pandas/tests/groupby/aggregate/test_other.py
+++ b/pandas/tests/groupby/aggregate/test_other.py
@@ -293,8 +293,7 @@ def raiseException(df):
raise TypeError("test")
with pytest.raises(TypeError, match="test"):
- with tm.assert_produces_warning(FutureWarning, match="Dropping invalid"):
- df.groupby(0).agg(raiseException)
+ df.groupby(0).agg(raiseException)
def test_series_agg_multikey():
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 5c250618bf3c4..b35c4158bf420 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -1842,6 +1842,9 @@ def test_category_order_reducer(
):
msg = "GH#10694 - idxmax/min fail with unused categories"
request.node.add_marker(pytest.mark.xfail(reason=msg))
+ elif reduction_func == "corrwith" and not as_index:
+ msg = "GH#49950 - corrwith with as_index=False may not have grouping column"
+ request.node.add_marker(pytest.mark.xfail(reason=msg))
elif index_kind != "range" and not as_index:
pytest.skip(reason="Result doesn't have categories, nothing to test")
df = DataFrame(
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index 0f301e05dc898..ef39aabd83d22 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -263,7 +263,7 @@ def _check(self, df, method, expected_columns, expected_columns_numeric):
# have no Python fallback
exception = NotImplementedError if method.startswith("cum") else TypeError
- if method in ("min", "max", "cummin", "cummax"):
+ if method in ("min", "max", "cummin", "cummax", "cumsum", "cumprod"):
# The methods default to numeric_only=False and raise TypeError
msg = "|".join(
[
@@ -591,10 +591,8 @@ def test_axis1_numeric_only(request, groupby_func, numeric_only):
method(*args, **kwargs)
elif groupby_func not in has_axis:
msg = "got an unexpected keyword argument 'axis'"
- warn = FutureWarning if groupby_func == "skew" and not numeric_only else None
- with tm.assert_produces_warning(warn, match="Dropping of nuisance columns"):
- with pytest.raises(TypeError, match=msg):
- method(*args, **kwargs)
+ with pytest.raises(TypeError, match=msg):
+ method(*args, **kwargs)
# fillna and shift are successful even on object dtypes
elif (numeric_only is None or not numeric_only) and groupby_func not in (
"fillna",
@@ -1374,46 +1372,44 @@ def test_groupby_sum_timedelta_with_nat():
@pytest.mark.parametrize(
- "kernel, numeric_only_default, has_arg",
+ "kernel, has_arg",
[
- ("all", False, False),
- ("any", False, False),
- ("bfill", False, False),
- ("corr", True, True),
- ("corrwith", True, True),
- ("cov", True, True),
- ("cummax", False, True),
- ("cummin", False, True),
- ("cumprod", True, True),
- ("cumsum", True, True),
- ("diff", False, False),
- ("ffill", False, False),
- ("fillna", False, False),
- ("first", False, True),
- ("idxmax", True, True),
- ("idxmin", True, True),
- ("last", False, True),
- ("max", False, True),
- ("mean", False, True),
- ("median", False, True),
- ("min", False, True),
- ("nth", False, False),
- ("nunique", False, False),
- ("pct_change", False, False),
- ("prod", False, True),
- ("quantile", True, True),
- ("sem", False, True),
- ("skew", False, True),
- ("std", False, True),
- ("sum", False, True),
- ("var", False, True),
+ ("all", False),
+ ("any", False),
+ ("bfill", False),
+ ("corr", True),
+ ("corrwith", True),
+ ("cov", True),
+ ("cummax", True),
+ ("cummin", True),
+ ("cumprod", True),
+ ("cumsum", True),
+ ("diff", False),
+ ("ffill", False),
+ ("fillna", False),
+ ("first", True),
+ ("idxmax", True),
+ ("idxmin", True),
+ ("last", True),
+ ("max", True),
+ ("mean", True),
+ ("median", True),
+ ("min", True),
+ ("nth", False),
+ ("nunique", False),
+ ("pct_change", False),
+ ("prod", True),
+ ("quantile", True),
+ ("sem", True),
+ ("skew", True),
+ ("std", True),
+ ("sum", True),
+ ("var", True),
],
)
@pytest.mark.parametrize("numeric_only", [True, False, lib.no_default])
@pytest.mark.parametrize("keys", [["a1"], ["a1", "a2"]])
-def test_deprecate_numeric_only(
- kernel, numeric_only_default, has_arg, numeric_only, keys
-):
+def test_numeric_only(kernel, has_arg, numeric_only, keys):
# GH#46072
# drops_nuisance: Whether the op drops nuisance columns even when numeric_only=False
# has_arg: Whether the op has a numeric_only arg
@@ -1424,26 +1420,9 @@ def test_deprecate_numeric_only(
gb = df.groupby(keys)
method = getattr(gb, kernel)
- if (
- has_arg
- and (kernel not in ("idxmax", "idxmin") or numeric_only is True)
- and (
- # Cases where b does not appear in the result
- numeric_only is True
- or (numeric_only is lib.no_default and numeric_only_default)
- )
- ):
- if numeric_only is True or not numeric_only_default:
- warn = None
- else:
- warn = FutureWarning
- if numeric_only is lib.no_default and numeric_only_default:
- msg = f"The default value of numeric_only in DataFrameGroupBy.{kernel}"
- else:
- msg = f"Dropping invalid columns in DataFrameGroupBy.{kernel}"
- with tm.assert_produces_warning(warn, match=msg):
- result = method(*args, **kwargs)
-
+ if has_arg and numeric_only is True:
+ # Cases where b does not appear in the result
+ result = method(*args, **kwargs)
assert "b" not in result.columns
elif (
# kernels that work on any dtype and have numeric_only arg
@@ -1577,31 +1556,17 @@ def test_deprecate_numeric_only_series(dtype, groupby_func, request):
with pytest.raises(TypeError, match=msg):
method(*args, numeric_only=True)
elif dtype is object:
- err_category = NotImplementedError
- err_msg = f"{groupby_func} does not implement numeric_only"
- if groupby_func.startswith("cum"):
- # cum ops already exhibit future behavior
- warn_category = None
- warn_msg = ""
- err_category = TypeError
- err_msg = f"{groupby_func} is not supported for object dtype"
- elif groupby_func == "skew":
- warn_category = None
- warn_msg = ""
- err_category = TypeError
- err_msg = "Series.skew does not allow numeric_only=True with non-numeric"
- elif groupby_func == "sem":
- warn_category = None
- warn_msg = ""
- err_category = TypeError
- err_msg = "called with numeric_only=True and dtype object"
- else:
- warn_category = FutureWarning
- warn_msg = "This will raise a TypeError"
-
- with tm.assert_produces_warning(warn_category, match=warn_msg):
- with pytest.raises(err_category, match=err_msg):
- method(*args, numeric_only=True)
+ msg = "|".join(
+ [
+ "Cannot use numeric_only=True",
+ "called with numeric_only=True and dtype object",
+ "Series.skew does not allow numeric_only=True with non-numeric",
+ "got an unexpected keyword argument 'numeric_only'",
+ "is not supported for object dtype",
+ ]
+ )
+ with pytest.raises(TypeError, match=msg):
+ method(*args, numeric_only=True)
else:
result = method(*args, numeric_only=True)
expected = method(*args, numeric_only=False)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index c35930ed43607..a7104c2e21049 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -996,12 +996,11 @@ def test_wrap_aggregated_output_multindex(mframe):
def aggfun(ser):
if ser.name == ("foo", "one"):
- raise TypeError
+ raise TypeError("Test error message")
return ser.sum()
- with tm.assert_produces_warning(FutureWarning, match="Dropping invalid columns"):
- agged2 = df.groupby(keys).aggregate(aggfun)
- assert len(agged2.columns) + 1 == len(df.columns)
+ with pytest.raises(TypeError, match="Test error message"):
+ df.groupby(keys).aggregate(aggfun)
def test_groupby_level_apply(mframe):
diff --git a/pandas/tests/groupby/test_quantile.py b/pandas/tests/groupby/test_quantile.py
index 5b0c0f671ae7c..56b9b35f1f688 100644
--- a/pandas/tests/groupby/test_quantile.py
+++ b/pandas/tests/groupby/test_quantile.py
@@ -1,8 +1,6 @@
import numpy as np
import pytest
-from pandas._libs import lib
-
import pandas as pd
from pandas import (
DataFrame,
@@ -160,10 +158,7 @@ def test_quantile_raises():
df = DataFrame([["foo", "a"], ["foo", "b"], ["foo", "c"]], columns=["key", "val"])
with pytest.raises(TypeError, match="cannot be performed against 'object' dtypes"):
- with tm.assert_produces_warning(
- FutureWarning, match="Dropping invalid columns"
- ):
- df.groupby("key").quantile()
+ df.groupby("key").quantile()
def test_quantile_out_of_bounds_q_raises():
@@ -242,16 +237,11 @@ def test_groupby_quantile_nullable_array(values, q):
@pytest.mark.parametrize("q", [0.5, [0.0, 0.5, 1.0]])
-@pytest.mark.parametrize("numeric_only", [lib.no_default, True, False])
-def test_groupby_quantile_skips_invalid_dtype(q, numeric_only):
+@pytest.mark.parametrize("numeric_only", [True, False])
+def test_groupby_quantile_raises_on_invalid_dtype(q, numeric_only):
df = DataFrame({"a": [1], "b": [2.0], "c": ["x"]})
-
- if numeric_only is lib.no_default or numeric_only:
- warn = FutureWarning if numeric_only is lib.no_default else None
- msg = "The default value of numeric_only in DataFrameGroupBy.quantile"
- with tm.assert_produces_warning(warn, match=msg):
- result = df.groupby("a").quantile(q, numeric_only=numeric_only)
-
+ if numeric_only:
+ result = df.groupby("a").quantile(q, numeric_only=numeric_only)
expected = df.groupby("a")[["b"]].quantile(q)
tm.assert_frame_equal(result, expected)
else:
diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py
index 23005f291970b..8bdbc86d8659c 100644
--- a/pandas/tests/groupby/transform/test_transform.py
+++ b/pandas/tests/groupby/transform/test_transform.py
@@ -20,7 +20,6 @@
date_range,
)
import pandas._testing as tm
-from pandas.core.groupby.generic import DataFrameGroupBy
from pandas.tests.groupby import get_groupby_method_args
@@ -409,31 +408,21 @@ def test_transform_select_columns(df):
tm.assert_frame_equal(result, expected)
-def test_transform_exclude_nuisance(df):
+def test_transform_nuisance_raises(df):
# case that goes through _transform_item_by_item
df.columns = ["A", "B", "B", "D"]
# this also tests orderings in transform between
# series/frame to make sure it's consistent
- expected = {}
grouped = df.groupby("A")
gbc = grouped["B"]
- with tm.assert_produces_warning(FutureWarning, match="Dropping invalid columns"):
- expected["B"] = gbc.transform(lambda x: np.mean(x))
- # squeeze 1-column DataFrame down to Series
- expected["B"] = expected["B"]["B"]
-
- assert isinstance(gbc.obj, DataFrame)
- assert isinstance(gbc, DataFrameGroupBy)
-
- expected["D"] = grouped["D"].transform(np.mean)
- expected = DataFrame(expected)
- with tm.assert_produces_warning(FutureWarning, match="Dropping invalid columns"):
- result = df.groupby("A").transform(lambda x: np.mean(x))
+ with pytest.raises(TypeError, match="Could not convert"):
+ gbc.transform(lambda x: np.mean(x))
- tm.assert_frame_equal(result, expected)
+ with pytest.raises(TypeError, match="Could not convert"):
+ df.groupby("A").transform(lambda x: np.mean(x))
def test_transform_function_aliases(df):
@@ -519,10 +508,11 @@ def test_groupby_transform_with_int():
}
)
with np.errstate(all="ignore"):
- with tm.assert_produces_warning(
- FutureWarning, match="Dropping invalid columns"
- ):
- result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std())
+ with pytest.raises(TypeError, match="Could not convert"):
+ df.groupby("A").transform(lambda x: (x - x.mean()) / x.std())
+ result = df.groupby("A")[["B", "C"]].transform(
+ lambda x: (x - x.mean()) / x.std()
+ )
expected = DataFrame(
{"B": np.nan, "C": Series([-1, 0, 1, -1, 0, 1], dtype="float64")}
)
@@ -538,10 +528,11 @@ def test_groupby_transform_with_int():
}
)
with np.errstate(all="ignore"):
- with tm.assert_produces_warning(
- FutureWarning, match="Dropping invalid columns"
- ):
- result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std())
+ with pytest.raises(TypeError, match="Could not convert"):
+ df.groupby("A").transform(lambda x: (x - x.mean()) / x.std())
+ result = df.groupby("A")[["B", "C"]].transform(
+ lambda x: (x - x.mean()) / x.std()
+ )
expected = DataFrame({"B": np.nan, "C": [-1.0, 0.0, 1.0, -1.0, 0.0, 1.0]})
tm.assert_frame_equal(result, expected)
@@ -549,10 +540,11 @@ def test_groupby_transform_with_int():
s = Series([2, 3, 4, 10, 5, -1])
df = DataFrame({"A": [1, 1, 1, 2, 2, 2], "B": 1, "C": s, "D": "foo"})
with np.errstate(all="ignore"):
- with tm.assert_produces_warning(
- FutureWarning, match="Dropping invalid columns"
- ):
- result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std())
+ with pytest.raises(TypeError, match="Could not convert"):
+ df.groupby("A").transform(lambda x: (x - x.mean()) / x.std())
+ result = df.groupby("A")[["B", "C"]].transform(
+ lambda x: (x - x.mean()) / x.std()
+ )
s1 = s.iloc[0:3]
s1 = (s1 - s1.mean()) / s1.std()
@@ -562,8 +554,9 @@ def test_groupby_transform_with_int():
tm.assert_frame_equal(result, expected)
# int doesn't get downcasted
- with tm.assert_produces_warning(FutureWarning, match="Dropping invalid columns"):
- result = df.groupby("A").transform(lambda x: x * 2 / 2)
+ with pytest.raises(TypeError, match="unsupported operand type"):
+ df.groupby("A").transform(lambda x: x * 2 / 2)
+ result = df.groupby("A")[["B", "C"]].transform(lambda x: x * 2 / 2)
expected = DataFrame({"B": 1.0, "C": [2.0, 3.0, 4.0, 10.0, 5.0, -1.0]})
tm.assert_frame_equal(result, expected)
@@ -755,13 +748,15 @@ def test_cython_transform_frame(op, args, targop):
expected = expected.sort_index(axis=1)
- warn = None if op == "shift" else FutureWarning
- msg = "The default value of numeric_only"
- with tm.assert_produces_warning(warn, match=msg):
- result = gb.transform(op, *args).sort_index(axis=1)
+ if op != "shift":
+ with pytest.raises(TypeError, match="datetime64 type does not support"):
+ gb.transform(op, *args).sort_index(axis=1)
+ result = gb[expected.columns].transform(op, *args).sort_index(axis=1)
tm.assert_frame_equal(result, expected)
- with tm.assert_produces_warning(warn, match=msg):
- result = getattr(gb, op)(*args).sort_index(axis=1)
+ if op != "shift":
+ with pytest.raises(TypeError, match="datetime64 type does not support"):
+ getattr(gb, op)(*args).sort_index(axis=1)
+ result = getattr(gb[expected.columns], op)(*args).sort_index(axis=1)
tm.assert_frame_equal(result, expected)
# individual columns
for c in df:
diff --git a/pandas/tests/resample/test_resample_api.py b/pandas/tests/resample/test_resample_api.py
index e256b957699b7..5f1e0904b8c3c 100644
--- a/pandas/tests/resample/test_resample_api.py
+++ b/pandas/tests/resample/test_resample_api.py
@@ -903,18 +903,16 @@ def test_series_downsample_method(method, numeric_only, expected_data):
expected_index = date_range("2018-12-31", periods=1, freq="Y")
df = Series(["cat_1", "cat_2"], index=index)
resampled = df.resample("Y")
+ kwargs = {} if numeric_only is lib.no_default else {"numeric_only": numeric_only}
func = getattr(resampled, method)
if numeric_only and numeric_only is not lib.no_default:
- with tm.assert_produces_warning(
- FutureWarning, match="This will raise a TypeError"
- ):
- with pytest.raises(NotImplementedError, match="not implement numeric_only"):
- func(numeric_only=numeric_only)
+ with pytest.raises(TypeError, match="Cannot use numeric_only=True"):
+ func(**kwargs)
elif method == "prod":
with pytest.raises(TypeError, match="can't multiply sequence by non-int"):
- func(numeric_only=numeric_only)
+ func(**kwargs)
else:
- result = func(numeric_only=numeric_only)
+ result = func(**kwargs)
expected = Series(expected_data, index=expected_index)
tm.assert_series_equal(result, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49951 | 2022-11-29T00:35:14Z | 2022-11-29T18:02:18Z | 2022-11-29T18:02:18Z | 2022-11-29T22:40:17Z |
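The deprecation enforced in the record above is user-visible in groupby reductions: nuisance (non-numeric) columns are no longer silently dropped when `numeric_only` is not set. A minimal sketch of the migration — the frame and column names here are made up, and the raising behavior assumes pandas 2.0+ (older versions warn and drop instead):

```python
import pandas as pd

# Hypothetical frame with a non-numeric "txt" column.
df = pd.DataFrame(
    {"key": ["a", "a", "b"], "num": [1, 2, 3], "txt": ["x", "y", "z"]}
)

# With the deprecation enforced, numeric_only defaults to False and the
# object column is no longer silently dropped -- the reduction raises
# TypeError instead of emitting a FutureWarning.
try:
    df.groupby("key").mean()
except TypeError:
    pass  # expected on pandas 2.0+

# Migration: select the valid columns explicitly (or pass numeric_only=True).
result = df.groupby("key")[["num"]].mean()
```

The explicit column selection (`gb[["B", "C"]].transform(...)` and similar) is exactly the pattern the updated tests in the diff switch to.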
ASV: Fix asv after versioneer change | diff --git a/asv_bench/asv.conf.json b/asv_bench/asv.conf.json
index 4a0c882640eb6..b1feb1d0af79c 100644
--- a/asv_bench/asv.conf.json
+++ b/asv_bench/asv.conf.json
@@ -125,6 +125,7 @@
"regression_thresholds": {
},
"build_command":
- ["python setup.py build -j4",
+ ["python -m pip install versioneer[toml]",
+ "python setup.py build -j4",
"PIP_NO_BUILD_ISOLATION=false python -mpip wheel --no-deps --no-index -w {build_cache_dir} {build_dir}"],
}
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
#49924 broke the local CI workflow. We have to install versioneer before building; adding it to the other dependencies did not work. | https://api.github.com/repos/pandas-dev/pandas/pulls/49948 | 2022-11-28T23:48:01Z | 2022-11-29T03:18:39Z | 2022-11-29T03:18:39Z | 2023-01-21T01:33:39Z |
STYLE make black local hook run twice as fast | diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml
index 7a90e1bec7783..540e9481befd6 100644
--- a/.github/workflows/code-checks.yml
+++ b/.github/workflows/code-checks.yml
@@ -36,6 +36,8 @@ jobs:
- name: Run pre-commit
uses: pre-commit/action@v2.0.3
+ with:
+ extra_args: --verbose --all-files
docstring_typing_pylint:
name: Docstring validation, typing, and pylint
@@ -93,7 +95,7 @@ jobs:
- name: Typing + pylint
uses: pre-commit/action@v2.0.3
with:
- extra_args: --hook-stage manual --all-files
+ extra_args: --verbose --hook-stage manual --all-files
if: ${{ steps.build.outcome == 'success' && always() }}
- name: Run docstring validation script tests
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index cc6875589c691..869dbd2d67e8b 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -17,10 +17,6 @@ repos:
entry: python scripts/run_vulture.py
pass_filenames: true
require_serial: false
-- repo: https://github.com/python/black
- rev: 22.10.0
- hooks:
- - id: black
- repo: https://github.com/codespell-project/codespell
rev: v2.2.2
hooks:
@@ -114,6 +110,16 @@ repos:
additional_dependencies: *flake8_dependencies
- repo: local
hooks:
+ # NOTE: we make `black` a local hook because if it's installed from
+ # PyPI (rather than from source) then it'll run twice as fast thanks to mypyc
+ - id: black
+ name: black
+ description: "Black: The uncompromising Python code formatter"
+ entry: black
+ language: python
+ require_serial: true
+ types_or: [python, pyi]
+ additional_dependencies: [black==22.10.0]
- id: pyright
# note: assumes python env is setup and activated
name: pyright
diff --git a/environment.yml b/environment.yml
index 7cd8859825453..e7430d7a4084f 100644
--- a/environment.yml
+++ b/environment.yml
@@ -85,7 +85,7 @@ dependencies:
- cxx-compiler
# code checks
- - black=22.3.0
+ - black=22.10.0
- cpplint
- flake8=5.0.4
- flake8-bugbear=22.7.1 # used by flake8, find likely bugs
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 78dddbe607084..c17915db60ca7 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -62,7 +62,7 @@ py
moto
flask
asv
-black==22.3.0
+black==22.10.0
cpplint
flake8==5.0.4
flake8-bugbear==22.7.1
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Since https://ichard26.github.io/blog/2022/05/compiling-black-with-mypyc-part-1/, `black` runs twice as fast when the wheels (compiled with mypyc) are installed from PyPI, compared with when it's built from source in the pre-commit hook.
---
EDIT: CI timings show 53.84s vs 131.85s, so this is more than a 2x improvement | https://api.github.com/repos/pandas-dev/pandas/pulls/49947 | 2022-11-28T23:16:05Z | 2022-11-29T17:51:16Z | 2022-11-29T17:51:16Z | 2022-11-29T17:51:26Z |
DEPR: enforce not-automatically aligning in DataFrame comparisons | diff --git a/asv_bench/benchmarks/arithmetic.py b/asv_bench/benchmarks/arithmetic.py
index 496db66c78569..4f84e0a562687 100644
--- a/asv_bench/benchmarks/arithmetic.py
+++ b/asv_bench/benchmarks/arithmetic.py
@@ -106,6 +106,10 @@ def time_frame_op_with_series_axis0(self, opname):
def time_frame_op_with_series_axis1(self, opname):
getattr(operator, opname)(self.df, self.ser)
+ # exclude comparisons from the params for time_frame_op_with_series_axis1
+ # since they do not do alignment so raise
+ time_frame_op_with_series_axis1.params = [params[0][6:]]
+
class FrameWithFrameWide:
# Many-columns, mixed dtypes
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 1fb9a81e85a83..03ea1aa5b8f9f 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -569,6 +569,7 @@ Removal of prior version deprecations/changes
- Enforced deprecation of calling numpy "ufunc"s on :class:`DataFrame` with ``method="outer"``; this now raises ``NotImplementedError`` (:issue:`36955`)
- Enforced deprecation disallowing passing ``numeric_only=True`` to :class:`Series` reductions (``rank``, ``any``, ``all``, ...) with non-numeric dtype (:issue:`47500`)
- Changed behavior of :meth:`DataFrameGroupBy.apply` and :meth:`SeriesGroupBy.apply` so that ``group_keys`` is respected even if a transformer is detected (:issue:`34998`)
+- Comparisons between a :class:`DataFrame` and a :class:`Series` where the frame's columns do not match the series's index raise ``ValueError`` instead of automatically aligning, do ``left, right = left.align(right, axis=1, copy=False)`` before comparing (:issue:`36795`)
- Enforced deprecation ``numeric_only=None`` (the default) in DataFrame reductions that would silently drop columns that raised; ``numeric_only`` now defaults to ``False`` (:issue:`41480`)
- Changed default of ``numeric_only`` to ``False`` in all DataFrame methods with that argument (:issue:`46096`, :issue:`46906`)
- Changed default of ``numeric_only`` to ``False`` in :meth:`Series.rank` (:issue:`47561`)
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index af27ff67599ac..bfedaca093a8e 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -7,7 +7,6 @@
import operator
from typing import TYPE_CHECKING
-import warnings
import numpy as np
@@ -18,7 +17,6 @@
Level,
)
from pandas.util._decorators import Appender
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
is_array_like,
@@ -299,13 +297,10 @@ def to_series(right):
if not flex:
if not left.axes[axis].equals(right.index):
- warnings.warn(
- "Automatic reindexing on DataFrame vs Series comparisons "
- "is deprecated and will raise ValueError in a future version. "
- "Do `left, right = left.align(right, axis=1, copy=False)` "
- "before e.g. `left == right`",
- FutureWarning,
- stacklevel=find_stack_level(),
+ raise ValueError(
+ "Operands are not aligned. Do "
+ "`left, right = left.align(right, axis=1, copy=False)` "
+ "before operating."
)
left, right = left.align(
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index bad5335ad2d58..b4f1c5404d178 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -307,43 +307,60 @@ def test_timestamp_compare_series(self, left, right):
def test_dt64arr_timestamp_equality(self, box_with_array):
# GH#11034
+ box = box_with_array
ser = Series([Timestamp("2000-01-29 01:59:00"), Timestamp("2000-01-30"), NaT])
- ser = tm.box_expected(ser, box_with_array)
+ ser = tm.box_expected(ser, box)
xbox = get_upcast_box(ser, ser, True)
result = ser != ser
expected = tm.box_expected([False, False, True], xbox)
tm.assert_equal(result, expected)
- warn = FutureWarning if box_with_array is pd.DataFrame else None
- with tm.assert_produces_warning(warn):
+ if box is pd.DataFrame:
# alignment for frame vs series comparisons deprecated
+ # in GH#46795 enforced 2.0
+ with pytest.raises(ValueError, match="not aligned"):
+ ser != ser[0]
+
+ else:
result = ser != ser[0]
- expected = tm.box_expected([False, True, True], xbox)
- tm.assert_equal(result, expected)
+ expected = tm.box_expected([False, True, True], xbox)
+ tm.assert_equal(result, expected)
- with tm.assert_produces_warning(warn):
+ if box is pd.DataFrame:
# alignment for frame vs series comparisons deprecated
+ # in GH#46795 enforced 2.0
+ with pytest.raises(ValueError, match="not aligned"):
+ ser != ser[2]
+ else:
result = ser != ser[2]
- expected = tm.box_expected([True, True, True], xbox)
- tm.assert_equal(result, expected)
+ expected = tm.box_expected([True, True, True], xbox)
+ tm.assert_equal(result, expected)
result = ser == ser
expected = tm.box_expected([True, True, False], xbox)
tm.assert_equal(result, expected)
- with tm.assert_produces_warning(warn):
+ if box is pd.DataFrame:
# alignment for frame vs series comparisons deprecated
+ # in GH#46795 enforced 2.0
+ with pytest.raises(ValueError, match="not aligned"):
+ ser == ser[0]
+ else:
result = ser == ser[0]
- expected = tm.box_expected([True, False, False], xbox)
- tm.assert_equal(result, expected)
+ expected = tm.box_expected([True, False, False], xbox)
+ tm.assert_equal(result, expected)
- with tm.assert_produces_warning(warn):
+ if box is pd.DataFrame:
# alignment for frame vs series comparisons deprecated
+ # in GH#46795 enforced 2.0
+ with pytest.raises(ValueError, match="not aligned"):
+ ser == ser[2]
+ else:
result = ser == ser[2]
- expected = tm.box_expected([False, False, False], xbox)
- tm.assert_equal(result, expected)
+ expected = tm.box_expected([False, False, False], xbox)
+ tm.assert_equal(result, expected)
@pytest.mark.parametrize(
"datetimelike",
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index 545482e6d3dad..8aedac036c2c9 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -1164,19 +1164,15 @@ def test_frame_with_zero_len_series_corner_cases():
expected = DataFrame(df.values * np.nan, columns=df.columns)
tm.assert_frame_equal(result, expected)
- with tm.assert_produces_warning(FutureWarning):
- # Automatic alignment for comparisons deprecated
- result = df == ser
- expected = DataFrame(False, index=df.index, columns=df.columns)
- tm.assert_frame_equal(result, expected)
+ with pytest.raises(ValueError, match="not aligned"):
+ # Automatic alignment for comparisons deprecated GH#36795, enforced 2.0
+ df == ser
- # non-float case should not raise on comparison
+ # non-float case should not raise TypeError on comparison
df2 = DataFrame(df.values.view("M8[ns]"), columns=df.columns)
- with tm.assert_produces_warning(FutureWarning):
+ with pytest.raises(ValueError, match="not aligned"):
# Automatic alignment for comparisons deprecated
- result = df2 == ser
- expected = DataFrame(False, index=df.index, columns=df.columns)
- tm.assert_frame_equal(result, expected)
+ df2 == ser
def test_zero_len_frame_with_series_corner_cases():
diff --git a/pandas/tests/generic/test_finalize.py b/pandas/tests/generic/test_finalize.py
index a7551af68bc2b..c39973d7649e8 100644
--- a/pandas/tests/generic/test_finalize.py
+++ b/pandas/tests/generic/test_finalize.py
@@ -476,9 +476,6 @@ def test_finalize_called_eval_numexpr():
# Binary operations
-@pytest.mark.filterwarnings(
- "ignore:Automatic reindexing on DataFrame vs Series:FutureWarning"
-)
@pytest.mark.parametrize("annotate", ["left", "right", "both"])
@pytest.mark.parametrize(
"args",
@@ -504,6 +501,20 @@ def test_binops(request, args, annotate, all_binary_operators):
if annotate in {"left", "both"} and not isinstance(right, int):
right.attrs = {"a": 1}
+ is_cmp = all_binary_operators in [
+ operator.eq,
+ operator.ne,
+ operator.gt,
+ operator.ge,
+ operator.lt,
+ operator.le,
+ ]
+ if is_cmp and isinstance(left, pd.DataFrame) and isinstance(right, pd.Series):
+ # in 2.0 silent alignment on comparisons was removed xref GH#28759
+ left, right = left.align(right, axis=1, copy=False)
+ elif is_cmp and isinstance(left, pd.Series) and isinstance(right, pd.DataFrame):
+ right, left = right.align(left, axis=1, copy=False)
+
result = all_binary_operators(left, right)
assert result.attrs == {"a": 1}
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
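A minimal sketch (not code from the PR, assuming pandas >= 2.0 semantics) of the behavior this change enforces: a DataFrame/Series comparison with mismatched labels now raises `ValueError` instead of silently aligning, so callers align explicitly first:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
ser = pd.Series({"b": 3, "c": 0})

# With this change, `df == ser` raises:
#   ValueError: Operands are not aligned. Do `left, right = left.align(...)` ...
# Align first, then compare:
left, right = df.align(ser, axis=1)
result = left == right
# Columns present on only one side are NaN-filled and compare False;
# the shared "b" column compares element-wise.
```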
| https://api.github.com/repos/pandas-dev/pandas/pulls/49946 | 2022-11-28T22:52:54Z | 2022-12-01T22:06:38Z | 2022-12-01T22:06:38Z | 2022-12-01T22:06:45Z |
STYLE pre-commit autoupdate | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index cc6875589c691..9a10d48cf3e27 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -28,11 +28,11 @@ repos:
types_or: [python, rst, markdown]
additional_dependencies: [tomli]
- repo: https://github.com/MarcoGorelli/cython-lint
- rev: v0.2.1
+ rev: v0.9.1
hooks:
- id: cython-lint
- repo: https://github.com/pre-commit/pre-commit-hooks
- rev: v4.3.0
+ rev: v4.4.0
hooks:
- id: debug-statements
- id: end-of-file-fixer
@@ -51,22 +51,22 @@ repos:
exclude: ^pandas/_libs/src/(klib|headers)/
args: [--quiet, '--extensions=c,h', '--headers=h', --recursive, '--filter=-readability/casting,-runtime/int,-build/include_subdir']
- repo: https://github.com/PyCQA/flake8
- rev: 5.0.4
+ rev: 6.0.0
hooks:
- id: flake8
# Need to patch os.remove rule in pandas-dev-flaker
exclude: ^ci/fix_wheels.py
additional_dependencies: &flake8_dependencies
- - flake8==5.0.4
+ - flake8==6.0.0
- flake8-bugbear==22.7.1
- pandas-dev-flaker==0.5.0
- repo: https://github.com/pycqa/pylint
- rev: v2.15.5
+ rev: v2.15.6
hooks:
- id: pylint
stages: [manual]
- repo: https://github.com/pycqa/pylint
- rev: v2.15.5
+ rev: v2.15.6
hooks:
- id: pylint
alias: redefined-outer-name
@@ -89,7 +89,7 @@ repos:
hooks:
- id: isort
- repo: https://github.com/asottile/pyupgrade
- rev: v3.2.0
+ rev: v3.2.2
hooks:
- id: pyupgrade
args: [--py38-plus]
diff --git a/environment.yml b/environment.yml
index 7cd8859825453..eae8abd1f4b11 100644
--- a/environment.yml
+++ b/environment.yml
@@ -87,7 +87,7 @@ dependencies:
# code checks
- black=22.3.0
- cpplint
- - flake8=5.0.4
+ - flake8=6.0.0
- flake8-bugbear=22.7.1 # used by flake8, find likely bugs
- isort>=5.2.1 # check that imports are in the right order
- mypy=0.990
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index a5b07d46bfeef..92874ef201246 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -1,14 +1,11 @@
# Copyright (c) 2012, Lambda Foundry, Inc.
# See LICENSE for the license
-from base64 import decode
from collections import defaultdict
from csv import (
QUOTE_MINIMAL,
QUOTE_NONE,
QUOTE_NONNUMERIC,
)
-from errno import ENOENT
-import inspect
import sys
import time
import warnings
@@ -24,10 +21,7 @@ from pandas.core.arrays import (
)
cimport cython
-from cpython.bytes cimport (
- PyBytes_AsString,
- PyBytes_FromString,
-)
+from cpython.bytes cimport PyBytes_AsString
from cpython.exc cimport (
PyErr_Fetch,
PyErr_Occurred,
@@ -631,7 +625,7 @@ cdef class TextReader:
cdef:
Py_ssize_t i, start, field_count, passed_count, unnamed_count, level
char *word
- str name, old_name
+ str name
uint64_t hr, data_line = 0
list header = []
set unnamed_cols = set()
@@ -939,7 +933,7 @@ cdef class TextReader:
object name, na_flist, col_dtype = None
bint na_filter = 0
int64_t num_cols
- dict result
+ dict results
bint use_nullable_dtypes
start = self.parser_start
@@ -1461,7 +1455,7 @@ cdef _string_box_utf8(parser_t *parser, int64_t col,
bint na_filter, kh_str_starts_t *na_hashset,
const char *encoding_errors):
cdef:
- int error, na_count = 0
+ int na_count = 0
Py_ssize_t i, lines
coliter_t it
const char *word = NULL
@@ -1517,7 +1511,7 @@ cdef _categorical_convert(parser_t *parser, int64_t col,
"Convert column data into codes, categories"
cdef:
int na_count = 0
- Py_ssize_t i, size, lines
+ Py_ssize_t i, lines
coliter_t it
const char *word = NULL
@@ -1525,8 +1519,6 @@ cdef _categorical_convert(parser_t *parser, int64_t col,
int64_t[::1] codes
int64_t current_category = 0
- char *errors = "strict"
-
int ret = 0
kh_str_t *table
khiter_t k
@@ -1972,7 +1964,6 @@ cdef kh_str_starts_t* kset_from_list(list values) except NULL:
cdef kh_float64_t* kset_float64_from_list(values) except NULL:
# caller takes responsibility for freeing the hash table
cdef:
- khiter_t k
kh_float64_t *table
int ret = 0
float64_t val
@@ -1983,7 +1974,7 @@ cdef kh_float64_t* kset_float64_from_list(values) except NULL:
for value in values:
val = float(value)
- k = kh_put_float64(table, val, &ret)
+ kh_put_float64(table, val, &ret)
if table.n_buckets <= 128:
# See reasoning in kset_from_list
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index b0208f9ca3296..c40712251ae5b 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -28,7 +28,7 @@ from cpython.datetime cimport ( # alias bc `tzinfo` is a kwarg below
PyTZInfo_Check,
datetime,
import_datetime,
- time,
+ time as dt_time,
tzinfo as tzinfo_type,
)
from cpython.object cimport (
@@ -120,7 +120,7 @@ from pandas._libs.tslibs.tzconversion cimport (
# ----------------------------------------------------------------------
# Constants
-_zero_time = time(0, 0)
+_zero_time = dt_time(0, 0)
_no_input = object()
# ----------------------------------------------------------------------
diff --git a/pandas/tests/frame/indexing/test_getitem.py b/pandas/tests/frame/indexing/test_getitem.py
index f17e2a197a82b..6427c68dc76b5 100644
--- a/pandas/tests/frame/indexing/test_getitem.py
+++ b/pandas/tests/frame/indexing/test_getitem.py
@@ -115,8 +115,8 @@ def test_getitem_dupe_cols(self):
iter,
Index,
set,
- lambda l: dict(zip(l, range(len(l)))),
- lambda l: dict(zip(l, range(len(l)))).keys(),
+ lambda keys: dict(zip(keys, range(len(keys)))),
+ lambda keys: dict(zip(keys, range(len(keys)))).keys(),
],
ids=["list", "iter", "Index", "set", "dict", "dict_keys"],
)
diff --git a/pandas/tests/io/json/test_readlines.py b/pandas/tests/io/json/test_readlines.py
index ccd59172bc741..1a414ebf73abe 100644
--- a/pandas/tests/io/json/test_readlines.py
+++ b/pandas/tests/io/json/test_readlines.py
@@ -207,7 +207,7 @@ def test_readjson_chunks_multiple_empty_lines(chunksize):
def test_readjson_unicode(monkeypatch):
with tm.ensure_clean("test.json") as path:
- monkeypatch.setattr("locale.getpreferredencoding", lambda l: "cp949")
+ monkeypatch.setattr("locale.getpreferredencoding", lambda do_setlocale: "cp949")
with open(path, "w", encoding="utf-8") as f:
f.write('{"£©µÀÆÖÞßéöÿ":["АБВГДабвгд가"]}')
diff --git a/pandas/tests/io/pytables/test_round_trip.py b/pandas/tests/io/pytables/test_round_trip.py
index ce71e9e990364..e76f85c0c69d0 100644
--- a/pandas/tests/io/pytables/test_round_trip.py
+++ b/pandas/tests/io/pytables/test_round_trip.py
@@ -302,7 +302,7 @@ def test_index_types(setup_path):
with catch_warnings(record=True):
values = np.random.randn(2)
- func = lambda l, r: tm.assert_series_equal(l, r, check_index_type=True)
+ func = lambda lhs, rhs: tm.assert_series_equal(lhs, rhs, check_index_type=True)
with catch_warnings(record=True):
ser = Series(values, [0, "y"])
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 78dddbe607084..f7d474e2b16dc 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -64,7 +64,7 @@ flask
asv
black==22.3.0
cpplint
-flake8==5.0.4
+flake8==6.0.0
flake8-bugbear==22.7.1
isort>=5.2.1
mypy==0.990
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49943 | 2022-11-28T21:32:14Z | 2022-11-29T01:34:31Z | 2022-11-29T01:34:31Z | 2022-11-29T01:34:43Z |
PERF: read_html: Reading one HTML file with multiple tables is much slower than loading each table separately | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index e657a98f6358f..baed1750c0f9b 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -613,6 +613,7 @@ Performance improvements
- Performance improvement in :func:`read_stata` with parameter ``index_col`` set to ``None`` (the default). Now the index will be a :class:`RangeIndex` instead of :class:`Int64Index` (:issue:`49745`)
- Performance improvement in :func:`merge` when not merging on the index - the new index will now be :class:`RangeIndex` instead of :class:`Int64Index` (:issue:`49478`)
- Performance improvement in :meth:`DataFrame.to_dict` and :meth:`Series.to_dict` when using any non-object dtypes (:issue:`46470`)
+- Performance improvement in :func:`read_html` when there are multiple tables (:issue:`49929`)
.. ---------------------------------------------------------------------------
.. _whatsnew_200.bug_fixes:
diff --git a/pandas/io/html.py b/pandas/io/html.py
index eceff2d2ec1f3..4f6e43a1639a5 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -744,8 +744,8 @@ def _parse_tables(self, doc, match, kwargs):
pattern = match.pattern
# 1. check all descendants for the given pattern and only search tables
- # 2. go up the tree until we find a table
- xpath_expr = f"//table//*[re:test(text(), {repr(pattern)})]/ancestor::table"
+ # GH 49929
+ xpath_expr = f"//table[.//text()[re:test(., {repr(pattern)})]]"
# if any table attributes were given build an xpath expression to
# search for them
| - [x] closes #49929
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
```py
# setup from referenced issue
In [2]: %timeit combined = pd.read_html(StringIO(combined_html))
324 ms ± 1.79 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # <- PR
In [3]: %timeit combined = pd.read_html(StringIO(combined_html)) # <- main
44.3 s ± 4.28 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/49942 | 2022-11-28T20:34:33Z | 2022-11-29T17:30:36Z | 2022-11-29T17:30:36Z | 2022-12-13T00:23:20Z |
REF: simplify DTI/PI.get_loc | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 5709f94e2ccd5..1b9291a19d32c 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -526,29 +526,35 @@ def _parsed_string_to_bounds(self, reso: Resolution, parsed: dt.datetime):
"The index must be timezone aware when indexing "
"with a date string with a UTC offset"
)
- start = self._maybe_cast_for_get_loc(start)
- end = self._maybe_cast_for_get_loc(end)
+ # The flipped case with parsed.tz is None and self.tz is not None
+ # is ruled out bc parsed and reso are produced by _parse_with_reso,
+ # which localizes parsed.
return start, end
- def _disallow_mismatched_indexing(self, key, one_way: bool = False) -> None:
+ def _parse_with_reso(self, label: str):
+ parsed, reso = super()._parse_with_reso(label)
+
+ parsed = Timestamp(parsed)
+
+ if self.tz is not None and parsed.tzinfo is None:
+ # we special-case timezone-naive strings and timezone-aware
+ # DatetimeIndex
+ # https://github.com/pandas-dev/pandas/pull/36148#issuecomment-687883081
+ parsed = parsed.tz_localize(self.tz)
+
+ return parsed, reso
+
+ def _disallow_mismatched_indexing(self, key) -> None:
"""
Check for mismatched-tzawareness indexing and re-raise as KeyError.
"""
+ # we get here with isinstance(key, self._data._recognized_scalars)
try:
- self._deprecate_mismatched_indexing(key, one_way=one_way)
+ # GH#36148
+ self._data._assert_tzawareness_compat(key)
except TypeError as err:
raise KeyError(key) from err
- def _deprecate_mismatched_indexing(self, key, one_way: bool = False) -> None:
- # GH#36148
- # we get here with isinstance(key, self._data._recognized_scalars)
- if self.tz is not None and one_way:
- # we special-case timezone-naive strings and timezone-aware
- # DatetimeIndex
- return
-
- self._data._assert_tzawareness_compat(key)
-
def get_loc(self, key, method=None, tolerance=None):
"""
Get integer location for requested label
@@ -566,7 +572,7 @@ def get_loc(self, key, method=None, tolerance=None):
if isinstance(key, self._data._recognized_scalars):
# needed to localize naive datetimes
self._disallow_mismatched_indexing(key)
- key = self._maybe_cast_for_get_loc(key)
+ key = Timestamp(key)
elif isinstance(key, str):
@@ -574,7 +580,7 @@ def get_loc(self, key, method=None, tolerance=None):
parsed, reso = self._parse_with_reso(key)
except ValueError as err:
raise KeyError(key) from err
- self._disallow_mismatched_indexing(parsed, one_way=True)
+ self._disallow_mismatched_indexing(parsed)
if self._can_partial_date_slice(reso):
try:
@@ -583,7 +589,7 @@ def get_loc(self, key, method=None, tolerance=None):
if method is None:
raise KeyError(key) from err
- key = self._maybe_cast_for_get_loc(key)
+ key = parsed
elif isinstance(key, dt.timedelta):
# GH#20464
@@ -607,24 +613,6 @@ def get_loc(self, key, method=None, tolerance=None):
except KeyError as err:
raise KeyError(orig_key) from err
- def _maybe_cast_for_get_loc(self, key) -> Timestamp:
- # needed to localize naive datetimes or dates (GH 35690)
- try:
- key = Timestamp(key)
- except ValueError as err:
- # FIXME(dateutil#1180): we get here because parse_with_reso
- # doesn't raise on "t2m"
- if not isinstance(key, str):
- # Not expected to be reached, but check to be sure
- raise # pragma: no cover
- raise KeyError(key) from err
-
- if key.tzinfo is None:
- key = key.tz_localize(self.tz)
- else:
- key = key.tz_convert(self.tz)
- return key
-
@doc(DatetimeTimedeltaMixin._maybe_cast_slice_bound)
def _maybe_cast_slice_bound(self, label, side: str):
@@ -635,8 +623,8 @@ def _maybe_cast_slice_bound(self, label, side: str):
label = Timestamp(label).to_pydatetime()
label = super()._maybe_cast_slice_bound(label, side)
- self._deprecate_mismatched_indexing(label)
- return self._maybe_cast_for_get_loc(label)
+ self._data._assert_tzawareness_compat(label)
+ return Timestamp(label)
def slice_indexer(self, start=None, end=None, step=None):
"""
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index e8a734864a9c8..ef47cb9bf1070 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -420,18 +420,14 @@ def get_loc(self, key, method=None, tolerance=None):
if reso == self._resolution_obj:
# the reso < self._resolution_obj case goes
# through _get_string_slice
- key = self._cast_partial_indexing_scalar(key)
- loc = self.get_loc(key, method=method, tolerance=tolerance)
- # Recursing instead of falling through matters for the exception
- # message in test_get_loc3 (though not clear if that really matters)
- return loc
+ key = self._cast_partial_indexing_scalar(parsed)
elif method is None:
raise KeyError(key)
else:
key = self._cast_partial_indexing_scalar(parsed)
elif isinstance(key, Period):
- key = self._maybe_cast_for_get_loc(key)
+ self._disallow_mismatched_indexing(key)
elif isinstance(key, datetime):
key = self._cast_partial_indexing_scalar(key)
@@ -445,8 +441,7 @@ def get_loc(self, key, method=None, tolerance=None):
except KeyError as err:
raise KeyError(orig_key) from err
- def _maybe_cast_for_get_loc(self, key: Period) -> Period:
- # name is a misnomer, chosen for compat with DatetimeIndex
+ def _disallow_mismatched_indexing(self, key: Period) -> None:
sfreq = self.freq
kfreq = key.freq
if not (
@@ -460,15 +455,14 @@ def _maybe_cast_for_get_loc(self, key: Period) -> Period:
# checking these two attributes is sufficient to check equality,
# and much more performant than `self.freq == key.freq`
raise KeyError(key)
- return key
- def _cast_partial_indexing_scalar(self, label):
+ def _cast_partial_indexing_scalar(self, label: datetime) -> Period:
try:
- key = Period(label, freq=self.freq)
+ period = Period(label, freq=self.freq)
except ValueError as err:
# we cannot construct the Period
raise KeyError(label) from err
- return key
+ return period
@doc(DatetimeIndexOpsMixin._maybe_cast_slice_bound)
def _maybe_cast_slice_bound(self, label, side: str):
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index fcc7fa083691e..58b77ce50293d 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -373,7 +373,7 @@ def test_get_loc3(self):
msg = "Input has different freq=None from PeriodArray\\(freq=D\\)"
with pytest.raises(ValueError, match=msg):
idx.get_loc("2000-01-10", method="nearest", tolerance="1 hour")
- with pytest.raises(KeyError, match=r"^Period\('2000-01-10', 'D'\)$"):
+ with pytest.raises(KeyError, match=r"^'2000-01-10'$"):
idx.get_loc("2000-01-10", method="nearest", tolerance="1 day")
with pytest.raises(
ValueError, match="list-like tolerance size must match target index size"
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49941 | 2022-11-28T18:41:25Z | 2022-11-28T22:58:14Z | 2022-11-28T22:58:14Z | 2022-11-29T00:29:12Z |
STYLE #49656: fixed redefined-outer-name linting issue to format.py | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index cc6875589c691..97a9748de8513 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -79,8 +79,6 @@ repos:
|^pandas/util/_test_decorators\.py # keep excluded
|^pandas/_version\.py # keep excluded
|^pandas/conftest\.py # keep excluded
- |^pandas/core/tools/datetimes\.py
- |^pandas/io/formats/format\.py
|^pandas/core/generic\.py
args: [--disable=all, --enable=redefined-outer-name]
stages: [manual]
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 5709f94e2ccd5..9f17bf6858d30 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -384,7 +384,7 @@ def _is_comparable_dtype(self, dtype: DtypeObj) -> bool:
def _formatter_func(self):
from pandas.io.formats.format import get_format_datetime64
- formatter = get_format_datetime64(is_dates_only=self._is_dates_only)
+ formatter = get_format_datetime64(is_dates_only_=self._is_dates_only)
return lambda x: f"'{formatter(x)}'"
# --------------------------------------------------------------------
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index cdc21f04da43a..61c12f5011886 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -9,7 +9,7 @@
QUOTE_NONE,
QUOTE_NONNUMERIC,
)
-import decimal
+from decimal import Decimal
from functools import partial
from io import StringIO
import math
@@ -106,11 +106,7 @@
check_parent_directory,
stringify_path,
)
-from pandas.io.formats.printing import (
- adjoin,
- justify,
- pprint_thing,
-)
+from pandas.io.formats import printing
if TYPE_CHECKING:
from pandas import (
@@ -339,7 +335,7 @@ def _get_footer(self) -> str:
if footer:
footer += ", "
- series_name = pprint_thing(name, escape_chars=("\t", "\r", "\n"))
+ series_name = printing.pprint_thing(name, escape_chars=("\t", "\r", "\n"))
footer += f"Name: {series_name}"
if self.length is True or (
@@ -354,7 +350,7 @@ def _get_footer(self) -> str:
if dtype_name:
if footer:
footer += ", "
- footer += f"dtype: {pprint_thing(dtype_name)}"
+ footer += f"dtype: {printing.pprint_thing(dtype_name)}"
# level infos are added to the end and in a new line, like it is done
# for Categoricals
@@ -433,10 +429,12 @@ def len(self, text: str) -> int:
return len(text)
def justify(self, texts: Any, max_len: int, mode: str = "right") -> list[str]:
- return justify(texts, max_len, mode=mode)
+ return printing.justify(texts, max_len, mode=mode)
def adjoin(self, space: int, *lists, **kwargs) -> str:
- return adjoin(space, *lists, strlen=self.len, justfunc=self.justify, **kwargs)
+ return printing.adjoin(
+ space, *lists, strlen=self.len, justfunc=self.justify, **kwargs
+ )
class EastAsianTextAdjustment(TextAdjustment):
@@ -1375,7 +1373,7 @@ def _format_strings(self) -> list[str]:
else:
quote_strings = self.quoting is not None and self.quoting != QUOTE_NONE
formatter = partial(
- pprint_thing,
+ printing.pprint_thing,
escape_chars=("\t", "\r", "\n"),
quote_strings=quote_strings,
)
@@ -1794,12 +1792,12 @@ def _format_datetime64_dateonly(
def get_format_datetime64(
- is_dates_only: bool, nat_rep: str = "NaT", date_format: str | None = None
+ is_dates_only_: bool, nat_rep: str = "NaT", date_format: str | None = None
) -> Callable:
"""Return a formatter callable taking a datetime64 as input and providing
a string as output"""
- if is_dates_only:
+ if is_dates_only_:
return lambda x: _format_datetime64_dateonly(
x, nat_rep=nat_rep, date_format=date_format
)
@@ -2071,12 +2069,12 @@ def __call__(self, num: float) -> str:
@return: engineering formatted string
"""
- dnum = decimal.Decimal(str(num))
+ dnum = Decimal(str(num))
- if decimal.Decimal.is_nan(dnum):
+ if Decimal.is_nan(dnum):
return "NaN"
- if decimal.Decimal.is_infinite(dnum):
+ if Decimal.is_infinite(dnum):
return "inf"
sign = 1
@@ -2086,9 +2084,9 @@ def __call__(self, num: float) -> str:
dnum = -dnum
if dnum != 0:
- pow10 = decimal.Decimal(int(math.floor(dnum.log10() / 3) * 3))
+ pow10 = Decimal(int(math.floor(dnum.log10() / 3) * 3))
else:
- pow10 = decimal.Decimal(0)
+ pow10 = Decimal(0)
pow10 = pow10.min(max(self.ENG_PREFIXES.keys()))
pow10 = pow10.max(min(self.ENG_PREFIXES.keys()))
| Fixed redefined-outer-name issues in:
- pandas/io/formats/format.py

redefined-outer-name issues were raised for:
- the `justify` function (now imported via `pandas.io.formats.printing`)
- the `decimal` module (changed to `from decimal import Decimal`)
- the `is_dates_only` parameter conflicting with the function name
- updated `get_format_datetime64` with the renamed parameter
- related tests pass in pytest
- running `pylint --disable=all --enable=redefined-outer-name pandas/io/`
--> results in a 10.00/10 rating
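A hypothetical minimal reproduction (not code from format.py) of the kind of shadowing pylint's redefined-outer-name check reports, plus the fix pattern this PR applies — import only the specific name so locals no longer shadow it:

```python
import decimal  # module-level name...


def bad(num, decimal):  # ...shadowed by this parameter -> redefined-outer-name
    return round(num, decimal)


# Fix: bind only the class, as format.py now does with `from decimal import Decimal`.
from decimal import Decimal


def eng_round(num, places):
    # Go through str() first, mirroring format.py's Decimal(str(num)),
    # to avoid binary-float representation artifacts.
    return str(round(Decimal(str(num)), places))
```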
| https://api.github.com/repos/pandas-dev/pandas/pulls/49937 | 2022-11-28T04:28:25Z | 2022-11-29T19:06:21Z | 2022-11-29T19:06:21Z | 2022-11-29T19:06:28Z |
BUG: avoid segfault in tz_localize with ZoneInfo | diff --git a/pandas/_libs/tslibs/tzconversion.pyx b/pandas/_libs/tslibs/tzconversion.pyx
index 99855b36e8676..afdf6d3d9b001 100644
--- a/pandas/_libs/tslibs/tzconversion.pyx
+++ b/pandas/_libs/tslibs/tzconversion.pyx
@@ -233,6 +233,7 @@ timedelta-like}
int64_t shift_delta = 0
ndarray[int64_t] result_a, result_b, dst_hours
int64_t[::1] result
+ bint is_zi = False
bint infer_dst = False, is_dst = False, fill = False
bint shift_forward = False, shift_backward = False
bint fill_nonexist = False
@@ -304,6 +305,7 @@ timedelta-like}
# Determine whether each date lies left of the DST transition (store in
# result_a) or right of the DST transition (store in result_b)
if is_zoneinfo(tz):
+ is_zi = True
result_a, result_b =_get_utc_bounds_zoneinfo(
vals, tz, creso=creso
)
@@ -385,6 +387,11 @@ timedelta-like}
# nonexistent times
new_local = val - remaining_mins - 1
+ if is_zi:
+ raise NotImplementedError(
+ "nonexistent shifting is not implemented with ZoneInfo tzinfos"
+ )
+
delta_idx = bisect_right_i8(info.tdata, new_local, info.ntrans)
delta_idx = delta_idx - delta_idx_offset
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 30ff8306a03b2..6959fe6a7fb66 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1895,3 +1895,16 @@ def using_copy_on_write() -> bool:
Fixture to check if Copy-on-Write is enabled.
"""
return pd.options.mode.copy_on_write and pd.options.mode.data_manager == "block"
+
+
+warsaws = ["Europe/Warsaw", "dateutil/Europe/Warsaw"]
+if zoneinfo is not None:
+ warsaws.append(zoneinfo.ZoneInfo("Europe/Warsaw"))
+
+
+@pytest.fixture(params=warsaws)
+def warsaw(request):
+ """
+ tzinfo for Europe/Warsaw using pytz, dateutil, or zoneinfo.
+ """
+ return request.param
diff --git a/pandas/tests/indexes/datetimes/test_constructors.py b/pandas/tests/indexes/datetimes/test_constructors.py
index d67bc8a132c0c..f8e826d2ac053 100644
--- a/pandas/tests/indexes/datetimes/test_constructors.py
+++ b/pandas/tests/indexes/datetimes/test_constructors.py
@@ -872,10 +872,16 @@ def test_constructor_with_ambiguous_keyword_arg(self):
result = date_range(end=end, periods=2, ambiguous=False)
tm.assert_index_equal(result, expected)
- def test_constructor_with_nonexistent_keyword_arg(self):
+ def test_constructor_with_nonexistent_keyword_arg(self, warsaw, request):
# GH 35297
+ if type(warsaw).__name__ == "ZoneInfo":
+ mark = pytest.mark.xfail(
+ reason="nonexistent-shift not yet implemented for ZoneInfo",
+ raises=NotImplementedError,
+ )
+ request.node.add_marker(mark)
- timezone = "Europe/Warsaw"
+ timezone = warsaw
# nonexistent keyword in start
start = Timestamp("2015-03-29 02:30:00").tz_localize(
diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py
index 8d651efe336e8..877a7dca42620 100644
--- a/pandas/tests/indexes/datetimes/test_timezones.py
+++ b/pandas/tests/indexes/datetimes/test_timezones.py
@@ -649,30 +649,6 @@ def test_dti_tz_localize_bdate_range(self):
localized = dr.tz_localize(pytz.utc)
tm.assert_index_equal(dr_utc, localized)
- @pytest.mark.parametrize("tz", ["Europe/Warsaw", "dateutil/Europe/Warsaw"])
- @pytest.mark.parametrize(
- "method, exp", [["NaT", pd.NaT], ["raise", None], ["foo", "invalid"]]
- )
- def test_dti_tz_localize_nonexistent(self, tz, method, exp):
- # GH 8917
- n = 60
- dti = date_range(start="2015-03-29 02:00:00", periods=n, freq="min")
- if method == "raise":
- with pytest.raises(pytz.NonExistentTimeError, match="2015-03-29 02:00:00"):
- dti.tz_localize(tz, nonexistent=method)
- elif exp == "invalid":
- msg = (
- "The nonexistent argument must be one of "
- "'raise', 'NaT', 'shift_forward', 'shift_backward' "
- "or a timedelta object"
- )
- with pytest.raises(ValueError, match=msg):
- dti.tz_localize(tz, nonexistent=method)
- else:
- result = dti.tz_localize(tz, nonexistent=method)
- expected = DatetimeIndex([exp] * n, tz=tz)
- tm.assert_index_equal(result, expected)
-
@pytest.mark.parametrize(
"start_ts, tz, end_ts, shift",
[
@@ -730,10 +706,9 @@ def test_dti_tz_localize_nonexistent_shift(
tm.assert_index_equal(result, expected)
@pytest.mark.parametrize("offset", [-1, 1])
- @pytest.mark.parametrize("tz_type", ["", "dateutil/"])
- def test_dti_tz_localize_nonexistent_shift_invalid(self, offset, tz_type):
+ def test_dti_tz_localize_nonexistent_shift_invalid(self, offset, warsaw):
# GH 8917
- tz = tz_type + "Europe/Warsaw"
+ tz = warsaw
dti = DatetimeIndex([Timestamp("2015-03-29 02:20:00")])
msg = "The provided timedelta will relocalize on a nonexistent time"
with pytest.raises(ValueError, match=msg):
diff --git a/pandas/tests/scalar/timestamp/test_timezones.py b/pandas/tests/scalar/timestamp/test_timezones.py
index 3e02ab208c502..3ebffaad23910 100644
--- a/pandas/tests/scalar/timestamp/test_timezones.py
+++ b/pandas/tests/scalar/timestamp/test_timezones.py
@@ -141,9 +141,9 @@ def test_tz_localize_ambiguous_raise(self):
with pytest.raises(AmbiguousTimeError, match=msg):
ts.tz_localize("US/Pacific", ambiguous="raise")
- def test_tz_localize_nonexistent_invalid_arg(self):
+ def test_tz_localize_nonexistent_invalid_arg(self, warsaw):
# GH 22644
- tz = "Europe/Warsaw"
+ tz = warsaw
ts = Timestamp("2015-03-29 02:00:00")
msg = (
"The nonexistent argument must be one of 'raise', 'NaT', "
@@ -291,27 +291,26 @@ def test_timestamp_tz_localize_nonexistent_shift(
assert result._creso == getattr(NpyDatetimeUnit, f"NPY_FR_{unit}").value
@pytest.mark.parametrize("offset", [-1, 1])
- @pytest.mark.parametrize("tz_type", ["", "dateutil/"])
- def test_timestamp_tz_localize_nonexistent_shift_invalid(self, offset, tz_type):
+ def test_timestamp_tz_localize_nonexistent_shift_invalid(self, offset, warsaw):
# GH 8917, 24466
- tz = tz_type + "Europe/Warsaw"
+ tz = warsaw
ts = Timestamp("2015-03-29 02:20:00")
msg = "The provided timedelta will relocalize on a nonexistent time"
with pytest.raises(ValueError, match=msg):
ts.tz_localize(tz, nonexistent=timedelta(seconds=offset))
- @pytest.mark.parametrize("tz", ["Europe/Warsaw", "dateutil/Europe/Warsaw"])
@pytest.mark.parametrize("unit", ["ns", "us", "ms", "s"])
- def test_timestamp_tz_localize_nonexistent_NaT(self, tz, unit):
+ def test_timestamp_tz_localize_nonexistent_NaT(self, warsaw, unit):
# GH 8917
+ tz = warsaw
ts = Timestamp("2015-03-29 02:20:00").as_unit(unit)
result = ts.tz_localize(tz, nonexistent="NaT")
assert result is NaT
- @pytest.mark.parametrize("tz", ["Europe/Warsaw", "dateutil/Europe/Warsaw"])
@pytest.mark.parametrize("unit", ["ns", "us", "ms", "s"])
- def test_timestamp_tz_localize_nonexistent_raise(self, tz, unit):
+ def test_timestamp_tz_localize_nonexistent_raise(self, warsaw, unit):
# GH 8917
+ tz = warsaw
ts = Timestamp("2015-03-29 02:20:00").as_unit(unit)
msg = "2015-03-29 02:20:00"
with pytest.raises(pytz.NonExistentTimeError, match=msg):
diff --git a/pandas/tests/series/methods/test_tz_localize.py b/pandas/tests/series/methods/test_tz_localize.py
index b8a1ea55db4fe..a9e28bfeeb76b 100644
--- a/pandas/tests/series/methods/test_tz_localize.py
+++ b/pandas/tests/series/methods/test_tz_localize.py
@@ -58,7 +58,6 @@ def test_series_tz_localize_matching_index(self):
)
tm.assert_series_equal(result, expected)
- @pytest.mark.parametrize("tz", ["Europe/Warsaw", "dateutil/Europe/Warsaw"])
@pytest.mark.parametrize(
"method, exp",
[
@@ -68,8 +67,9 @@ def test_series_tz_localize_matching_index(self):
["foo", "invalid"],
],
)
- def test_tz_localize_nonexistent(self, tz, method, exp):
+ def test_tz_localize_nonexistent(self, warsaw, method, exp):
# GH 8917
+ tz = warsaw
n = 60
dti = date_range(start="2015-03-29 02:00:00", periods=n, freq="min")
ser = Series(1, index=dti)
@@ -85,13 +85,27 @@ def test_tz_localize_nonexistent(self, tz, method, exp):
df.tz_localize(tz, nonexistent=method)
elif exp == "invalid":
- with pytest.raises(ValueError, match="argument must be one of"):
+ msg = (
+ "The nonexistent argument must be one of "
+ "'raise', 'NaT', 'shift_forward', 'shift_backward' "
+ "or a timedelta object"
+ )
+ with pytest.raises(ValueError, match=msg):
dti.tz_localize(tz, nonexistent=method)
- with pytest.raises(ValueError, match="argument must be one of"):
+ with pytest.raises(ValueError, match=msg):
ser.tz_localize(tz, nonexistent=method)
- with pytest.raises(ValueError, match="argument must be one of"):
+ with pytest.raises(ValueError, match=msg):
df.tz_localize(tz, nonexistent=method)
+ elif method == "shift_forward" and type(tz).__name__ == "ZoneInfo":
+ msg = "nonexistent shifting is not implemented with ZoneInfo tzinfos"
+ with pytest.raises(NotImplementedError, match=msg):
+ ser.tz_localize(tz, nonexistent=method)
+ with pytest.raises(NotImplementedError, match=msg):
+ df.tz_localize(tz, nonexistent=method)
+ with pytest.raises(NotImplementedError, match=msg):
+ dti.tz_localize(tz, nonexistent=method)
+
else:
result = ser.tz_localize(tz, nonexistent=method)
expected = Series(1, index=DatetimeIndex([exp] * n, tz=tz))
@@ -101,6 +115,9 @@ def test_tz_localize_nonexistent(self, tz, method, exp):
expected = expected.to_frame()
tm.assert_frame_equal(result, expected)
+ res_index = dti.tz_localize(tz, nonexistent=method)
+ tm.assert_index_equal(res_index, expected.index)
+
@pytest.mark.parametrize("tzstr", ["US/Eastern", "dateutil/US/Eastern"])
def test_series_tz_localize_empty(self, tzstr):
# GH#2248
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49936 | 2022-11-27T22:46:37Z | 2022-11-28T18:17:46Z | 2022-11-28T18:17:45Z | 2022-11-28T18:30:22Z |
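The `NotImplementedError` the diff above raises for ZoneInfo tzinfos guards a real gap: 2015-03-29 02:30 is a nonexistent wall time in Europe/Warsaw, where the tests localize. That can be shown with the standard library alone (an illustration, not the PR's code; it assumes Python >= 3.9 with system tzdata available):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

warsaw = ZoneInfo("Europe/Warsaw")

# 2015-03-29 02:30 never happened in Warsaw: clocks jumped from
# 02:00 straight to 03:00 for DST.
local = datetime(2015, 3, 29, 2, 30, tzinfo=warsaw)

# A nonexistent wall time does not survive a round trip through UTC;
# it comes back shifted past the DST gap (PEP 495 fold semantics).
roundtrip = local.astimezone(timezone.utc).astimezone(warsaw)
print(roundtrip.time())  # -> 03:30:00
```

The pytz/dateutil paths in the surrounding code shift such times explicitly via `info.tdata`; the diff defers the ZoneInfo case with an explicit error rather than guessing.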
CI_check_correctness_notebooks | diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml
index 66bc7cd917b31..7a90e1bec7783 100644
--- a/.github/workflows/code-checks.yml
+++ b/.github/workflows/code-checks.yml
@@ -79,6 +79,10 @@ jobs:
run: ci/code_checks.sh docstrings
if: ${{ steps.build.outcome == 'success' && always() }}
+ - name: Run check of documentation notebooks
+ run: ci/code_checks.sh notebooks
+ if: ${{ steps.build.outcome == 'success' && always() }}
+
- name: Use existing environment for type checking
run: |
echo $PATH >> $GITHUB_PATH
diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index c6067faf92d37..3c1362b1ac83e 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -12,9 +12,10 @@
# $ ./ci/code_checks.sh doctests # run doctests
# $ ./ci/code_checks.sh docstrings # validate docstring errors
# $ ./ci/code_checks.sh single-docs # check single-page docs build warning-free
+# $ ./ci/code_checks.sh notebooks # check execution of documentation notebooks
-[[ -z "$1" || "$1" == "code" || "$1" == "doctests" || "$1" == "docstrings" || "$1" == "single-docs" ]] || \
- { echo "Unknown command $1. Usage: $0 [code|doctests|docstrings]"; exit 9999; }
+[[ -z "$1" || "$1" == "code" || "$1" == "doctests" || "$1" == "docstrings" || "$1" == "single-docs" || "$1" == "notebooks" ]] || \
+ { echo "Unknown command $1. Usage: $0 [code|doctests|docstrings|single-docs|notebooks]"; exit 9999; }
BASE_DIR="$(dirname $0)/.."
RET=0
@@ -84,6 +85,15 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
fi
+### DOCUMENTATION NOTEBOOKS ###
+if [[ -z "$CHECK" || "$CHECK" == "notebooks" ]]; then
+
+ MSG='Notebooks' ; echo $MSG
+ jupyter nbconvert --execute $(find doc/source -name '*.ipynb') --to notebook
+ RET=$(($RET + $?)) ; echo $MSG "DONE"
+
+fi
+
### SINGLE-PAGE DOCS ###
if [[ -z "$CHECK" || "$CHECK" == "single-docs" ]]; then
python doc/make.py --warnings-are-errors --single pandas.Series.value_counts
| This PR is related to PR #49874.
Creating a new PR that checks the correctness of notebooks, as discussed with @MarcoGorelli.
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
CI didn't fail when a documentation notebook executed with an error. I added a new notebooks section in `ci/code_checks.sh` to check whether the notebook execution passes. | https://api.github.com/repos/pandas-dev/pandas/pulls/49935 | 2022-11-27T22:29:57Z | 2022-11-28T17:45:03Z | 2022-11-28T17:45:03Z | 2022-11-28T17:45:14Z |
Bug: Incorrect IntervalIndex.is_overlapping | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index d8609737b8c7a..451cba732ff60 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -698,7 +698,7 @@ Strings
Interval
^^^^^^^^
--
+- Bug in :meth:`IntervalIndex.is_overlapping` incorrect output if interval has duplicate left boundaries (:issue:`49581`)
-
Indexing
diff --git a/pandas/_libs/intervaltree.pxi.in b/pandas/_libs/intervaltree.pxi.in
index e7a310513d2fa..0d7c96a6f2f2b 100644
--- a/pandas/_libs/intervaltree.pxi.in
+++ b/pandas/_libs/intervaltree.pxi.in
@@ -81,7 +81,8 @@ cdef class IntervalTree(IntervalMixin):
"""How to sort the left labels; this is used for binary search
"""
if self._left_sorter is None:
- self._left_sorter = np.argsort(self.left)
+ values = [self.right, self.left]
+ self._left_sorter = np.lexsort(values)
return self._left_sorter
@property
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index c8d7470032e5f..98c21fad1f8c2 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -791,6 +791,13 @@ def test_is_overlapping(self, start, shift, na_value, closed):
result = index.is_overlapping
assert result is expected
+ # intervals with duplicate left values
+ a = [10, 15, 20, 25, 30, 35, 40, 45, 45, 50, 55, 60, 65, 70, 75, 80, 85]
+ b = [15, 20, 25, 30, 35, 40, 45, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90]
+ index = IntervalIndex.from_arrays(a, b, closed="right")
+ result = index.is_overlapping
+ assert result is False
+
@pytest.mark.parametrize(
"tuples",
[
| - [x] closes #49581
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49933 | 2022-11-27T20:58:42Z | 2022-12-05T20:07:59Z | 2022-12-05T20:07:59Z | 2022-12-07T00:17:48Z |
BUG: metadata does not always propagate in binary operations | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 97ee96d8be25d..4fb68b1f90288 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -771,7 +771,7 @@ Styler
Metadata
^^^^^^^^
- Fixed metadata propagation in :meth:`DataFrame.corr` and :meth:`DataFrame.cov` (:issue:`28283`)
--
+- Fixed metadata propagation in binary operations (:issue:`49931`)
Other
^^^^^
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0144aefedaa5f..152965553480e 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7484,7 +7484,7 @@ def _arith_method(self, other, op):
self, other = ops.align_method_FRAME(self, other, axis, flex=True, level=None)
new_data = self._dispatch_frame_op(other, op, axis=axis)
- return self._construct_result(new_data)
+ return self._construct_result(new_data).__finalize__(other)
_logical_method = _arith_method
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index af27ff67599ac..e2f9150c918eb 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -411,7 +411,9 @@ def _maybe_align_series_as_frame(frame: DataFrame, series: Series, axis: AxisInt
rvalues = rvalues.reshape(1, -1)
rvalues = np.broadcast_to(rvalues, frame.shape)
- return type(frame)(rvalues, index=frame.index, columns=frame.columns)
+ return type(frame)(rvalues, index=frame.index, columns=frame.columns).__finalize__(
+ series
+ )
def flex_arith_method_FRAME(op):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 48bc07ca022ee..ee50e999cbc3f 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -5950,7 +5950,7 @@ def _logical_method(self, other, op):
def _arith_method(self, other, op):
self, other = ops.align_method_SERIES(self, other)
- return base.IndexOpsMixin._arith_method(self, other, op)
+ return base.IndexOpsMixin._arith_method(self, other, op).__finalize__(other)
Series._add_numeric_operations()
diff --git a/pandas/tests/generic/test_finalize.py b/pandas/tests/generic/test_finalize.py
index a7551af68bc2b..50a7b1b5abe90 100644
--- a/pandas/tests/generic/test_finalize.py
+++ b/pandas/tests/generic/test_finalize.py
@@ -498,10 +498,14 @@ def test_binops(request, args, annotate, all_binary_operators):
left, right = args
if annotate == "both" and isinstance(left, int) or isinstance(right, int):
return
+ elif annotate == "left" and isinstance(left, int):
+ return
+ elif annotate == "right" and isinstance(right, int):
+ return
if annotate in {"left", "both"} and not isinstance(left, int):
left.attrs = {"a": 1}
- if annotate in {"left", "both"} and not isinstance(right, int):
+ if annotate in {"right", "both"} and not isinstance(right, int):
right.attrs = {"a": 1}
result = all_binary_operators(left, right)
| - [x] closes #49916 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49931 | 2022-11-27T19:03:02Z | 2022-12-30T10:10:58Z | null | 2022-12-30T10:10:58Z |
CLN: clean core.construction._extract_index | diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 8e186b1f4a034..563011abe2c41 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -580,63 +580,59 @@ def _extract_index(data) -> Index:
"""
Try to infer an Index from the passed data, raise ValueError on failure.
"""
- index = None
+ index: Index
if len(data) == 0:
- index = Index([])
- else:
- raw_lengths = []
- indexes: list[list[Hashable] | Index] = []
-
- have_raw_arrays = False
- have_series = False
- have_dicts = False
-
- for val in data:
- if isinstance(val, ABCSeries):
- have_series = True
- indexes.append(val.index)
- elif isinstance(val, dict):
- have_dicts = True
- indexes.append(list(val.keys()))
- elif is_list_like(val) and getattr(val, "ndim", 1) == 1:
- have_raw_arrays = True
- raw_lengths.append(len(val))
- elif isinstance(val, np.ndarray) and val.ndim > 1:
- raise ValueError("Per-column arrays must each be 1-dimensional")
-
- if not indexes and not raw_lengths:
- raise ValueError("If using all scalar values, you must pass an index")
+ return Index([])
- if have_series:
- index = union_indexes(indexes)
- elif have_dicts:
- index = union_indexes(indexes, sort=False)
+ raw_lengths = []
+ indexes: list[list[Hashable] | Index] = []
- if have_raw_arrays:
- lengths = list(set(raw_lengths))
- if len(lengths) > 1:
- raise ValueError("All arrays must be of the same length")
+ have_raw_arrays = False
+ have_series = False
+ have_dicts = False
- if have_dicts:
- raise ValueError(
- "Mixing dicts with non-Series may lead to ambiguous ordering."
- )
+ for val in data:
+ if isinstance(val, ABCSeries):
+ have_series = True
+ indexes.append(val.index)
+ elif isinstance(val, dict):
+ have_dicts = True
+ indexes.append(list(val.keys()))
+ elif is_list_like(val) and getattr(val, "ndim", 1) == 1:
+ have_raw_arrays = True
+ raw_lengths.append(len(val))
+ elif isinstance(val, np.ndarray) and val.ndim > 1:
+ raise ValueError("Per-column arrays must each be 1-dimensional")
+
+ if not indexes and not raw_lengths:
+ raise ValueError("If using all scalar values, you must pass an index")
+
+ if have_series:
+ index = union_indexes(indexes)
+ elif have_dicts:
+ index = union_indexes(indexes, sort=False)
+
+ if have_raw_arrays:
+ lengths = list(set(raw_lengths))
+ if len(lengths) > 1:
+ raise ValueError("All arrays must be of the same length")
+
+ if have_dicts:
+ raise ValueError(
+ "Mixing dicts with non-Series may lead to ambiguous ordering."
+ )
- if have_series:
- assert index is not None # for mypy
- if lengths[0] != len(index):
- msg = (
- f"array length {lengths[0]} does not match index "
- f"length {len(index)}"
- )
- raise ValueError(msg)
- else:
- index = default_index(lengths[0])
+ if have_series:
+ if lengths[0] != len(index):
+ msg = (
+ f"array length {lengths[0]} does not match index "
+ f"length {len(index)}"
+ )
+ raise ValueError(msg)
+ else:
+ index = default_index(lengths[0])
- # error: Argument 1 to "ensure_index" has incompatible type "Optional[Index]";
- # expected "Union[Union[Union[ExtensionArray, ndarray], Index, Series],
- # Sequence[Any]]"
- return ensure_index(index) # type: ignore[arg-type]
+ return ensure_index(index)
def reorder_arrays(
| Return early in the case of len(data) == 0, thereby avoiding needing a long `else` clause. Also clean type hints.
EDIT: The PR looks a bit messy at first sight, but it's mostly just dedenting stuff. Checking "Hide whitespace" makes the PR much clearer, as suggested by @MarcoGorelli in #49921. | https://api.github.com/repos/pandas-dev/pandas/pulls/49930 | 2022-11-27T18:16:01Z | 2022-11-28T08:57:27Z | 2022-11-28T08:57:27Z | 2022-11-28T09:12:31Z |
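The shape of the cleanup — a guard clause with an immediate `return` in place of one long `else` block — in a self-contained toy version of the same length-inference checks (pure Python, no pandas; `infer_length` is an illustrative name, not the real function):

```python
def infer_length(columns):
    """Infer a common row count from 1-D columns, mirroring the
    error checks in pandas' `_extract_index` (heavily simplified)."""
    if len(columns) == 0:
        # Early return: everything below no longer needs to be
        # indented inside an `else:` block.
        return 0

    lengths = {len(col) for col in columns}
    if len(lengths) > 1:
        raise ValueError("All arrays must be of the same length")
    return lengths.pop()


print(infer_length([[1, 2], [3, 4]]))  # -> 2
```

Besides the dedent, the early return lets the real function drop an `index = None` initialization, the `assert index is not None` for mypy, and the trailing `type: ignore` comment.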
Backport PR #49908 on branch 1.5.x ( BUG: SeriesGroupBy.apply sets name attribute if result is DataFrame) | diff --git a/doc/source/whatsnew/v1.5.3.rst b/doc/source/whatsnew/v1.5.3.rst
index e7428956c50b5..e0bcd81805cc1 100644
--- a/doc/source/whatsnew/v1.5.3.rst
+++ b/doc/source/whatsnew/v1.5.3.rst
@@ -16,6 +16,7 @@ Fixed regressions
- Fixed performance regression in :meth:`Series.isin` when ``values`` is empty (:issue:`49839`)
- Fixed regression in :meth:`DataFrameGroupBy.transform` when used with ``as_index=False`` (:issue:`49834`)
- Enforced reversion of ``color`` as an alias for ``c`` and ``size`` as an alias for ``s`` in function :meth:`DataFrame.plot.scatter` (:issue:`49732`)
+- Fixed regression in :meth:`SeriesGroupBy.apply` setting a ``name`` attribute on the result if the result was a :class:`DataFrame` (:issue:`49907`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 97d332394e045..7e6e138fa8fe6 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -411,7 +411,8 @@ def _wrap_applied_output(
not_indexed_same=not_indexed_same,
override_group_keys=override_group_keys,
)
- result.name = self.obj.name
+ if isinstance(result, Series):
+ result.name = self.obj.name
return result
else:
# GH #6265 #24880
diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py
index 47ea6a99ffea9..dfd62394c840e 100644
--- a/pandas/tests/groupby/test_apply.py
+++ b/pandas/tests/groupby/test_apply.py
@@ -335,6 +335,7 @@ def f(piece):
result = grouped.apply(f)
assert isinstance(result, DataFrame)
+ assert not hasattr(result, "name") # GH49907
tm.assert_index_equal(result.index, ts.index)
| Backport PR #49908: BUG: SeriesGroupBy.apply sets name attribute if result is DataFrame | https://api.github.com/repos/pandas-dev/pandas/pulls/49928 | 2022-11-27T16:05:53Z | 2022-11-27T19:05:23Z | 2022-11-27T19:05:23Z | 2022-11-27T19:05:24Z |
BUG: IntervalIndex.overlapping with duplicate left boundaries | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 97ee96d8be25d..a3448b7eb53a8 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -672,7 +672,7 @@ Strings
Interval
^^^^^^^^
--
+- Bug in :meth:`IntervalIndex.is_overlapping` incorrect output if interval has duplicate left boundaries (:issue:`49581`)
-
Indexing
diff --git a/pandas/_libs/intervaltree.pxi.in b/pandas/_libs/intervaltree.pxi.in
index e7a310513d2fa..e4b7f65990895 100644
--- a/pandas/_libs/intervaltree.pxi.in
+++ b/pandas/_libs/intervaltree.pxi.in
@@ -81,7 +81,9 @@ cdef class IntervalTree(IntervalMixin):
"""How to sort the left labels; this is used for binary search
"""
if self._left_sorter is None:
- self._left_sorter = np.argsort(self.left)
+ left_right = np.asarray([(self.left[i], self.right[i]) for i in range(0,
+ len(self.left))], dtype=[('l', self.dtype), ('r', self.dtype)])
+ self._left_sorter = np.argsort(left_right, order=('l', 'r'))
return self._left_sorter
@property
diff --git a/pandas/_libs/intervaltree.pxi.pdf b/pandas/_libs/intervaltree.pxi.pdf
new file mode 100644
index 0000000000000..3b456a8f1b21b
Binary files /dev/null and b/pandas/_libs/intervaltree.pxi.pdf differ
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index c8d7470032e5f..0c990e8fc2435 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -791,6 +791,13 @@ def test_is_overlapping(self, start, shift, na_value, closed):
result = index.is_overlapping
assert result is expected
+ # intervals with duplicate left values
+ a = [10, 15, 20, 25, 30, 35, 40, 45, 45, 50, 55, 60, 65, 70, 75, 80, 85]
+ b = [15, 20, 25, 30, 35, 40, 45, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90]
+ index = IntervalIndex.from_arrays(a, b, closed="right")
+ result = index.is_overlapping
+ assert result is False
+
@pytest.mark.parametrize(
"tuples",
[
| - [x] closes #49581
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49927 | 2022-11-27T09:03:19Z | 2022-11-27T20:55:39Z | null | 2022-11-27T20:55:39Z |
TST: added test for pd.where overflow error GH31687 | diff --git a/pandas/tests/frame/indexing/test_where.py b/pandas/tests/frame/indexing/test_where.py
index 501822f856a63..2a283f719ec0d 100644
--- a/pandas/tests/frame/indexing/test_where.py
+++ b/pandas/tests/frame/indexing/test_where.py
@@ -1017,3 +1017,15 @@ def test_where_producing_ea_cond_for_np_dtype():
{"a": Series([pd.NA, pd.NA, 2], dtype="Int64"), "b": [np.nan, 2, 3]}
)
tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "replacement", [0.001, True, "snake", None, datetime(2022, 5, 4)]
+)
+def test_where_int_overflow(replacement):
+ # GH 31687
+ df = DataFrame([[1.0, 2e25, "nine"], [np.nan, 0.1, None]])
+ result = df.where(pd.notnull(df), replacement)
+ expected = DataFrame([[1.0, 2e25, "nine"], [replacement, 0.1, replacement]])
+
+ tm.assert_frame_equal(result, expected)
| Added a test for `pd.where` to `pandas/tests/frame/indexing/test_where.py`.
- [x] closes #31687
- [x] tests added / passed
- [x] passes all pre-commit code checks
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49926 | 2022-11-27T06:46:26Z | 2022-11-28T09:08:18Z | 2022-11-28T09:08:18Z | 2022-11-29T01:14:06Z |
BLD: use nonvendor versioneer | diff --git a/.github/workflows/32-bit-linux.yml b/.github/workflows/32-bit-linux.yml
index cf8a0fe0da91c..806742c99abac 100644
--- a/.github/workflows/32-bit-linux.yml
+++ b/.github/workflows/32-bit-linux.yml
@@ -38,7 +38,8 @@ jobs:
/opt/python/cp38-cp38/bin/python -m venv ~/virtualenvs/pandas-dev && \
. ~/virtualenvs/pandas-dev/bin/activate && \
python -m pip install --no-deps -U pip wheel 'setuptools<60.0.0' && \
- pip install cython numpy python-dateutil pytz pytest pytest-xdist pytest-asyncio>=0.17 hypothesis && \
+ python -m pip install versioneer[toml] && \
+ python -m pip install cython numpy python-dateutil pytz pytest pytest-xdist pytest-asyncio>=0.17 hypothesis && \
python setup.py build_ext -q -j1 && \
python -m pip install --no-build-isolation --no-use-pep517 -e . && \
python -m pip list && \
diff --git a/.github/workflows/package-checks.yml b/.github/workflows/package-checks.yml
index 87f40270d8774..08fc3fe4c50a4 100644
--- a/.github/workflows/package-checks.yml
+++ b/.github/workflows/package-checks.yml
@@ -43,6 +43,7 @@ jobs:
- name: Install required dependencies
run: |
python -m pip install --upgrade pip setuptools wheel python-dateutil pytz numpy cython
+ python -m pip install versioneer[toml]
shell: bash -el {0}
- name: Pip install with extra
diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml
index 7c4b36dab109d..88f3d16a73a1a 100644
--- a/.github/workflows/python-dev.yml
+++ b/.github/workflows/python-dev.yml
@@ -75,6 +75,7 @@ jobs:
python -m pip install --upgrade pip setuptools wheel
python -m pip install -i https://pypi.anaconda.org/scipy-wheels-nightly/simple numpy
python -m pip install git+https://github.com/nedbat/coveragepy.git
+ python -m pip install versioneer[toml]
python -m pip install python-dateutil pytz cython hypothesis==6.52.1 pytest>=6.2.5 pytest-xdist pytest-cov pytest-asyncio>=0.17
python -m pip list
diff --git a/.github/workflows/sdist.yml b/.github/workflows/sdist.yml
index 9957fc72e9f51..c9ac218bff935 100644
--- a/.github/workflows/sdist.yml
+++ b/.github/workflows/sdist.yml
@@ -47,6 +47,7 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip setuptools wheel
+ python -m pip install versioneer[toml]
# GH 39416
pip install numpy
diff --git a/ci/deps/actions-310-numpydev.yaml b/ci/deps/actions-310-numpydev.yaml
index ef20c2aa889b9..44acae877bdf8 100644
--- a/ci/deps/actions-310-numpydev.yaml
+++ b/ci/deps/actions-310-numpydev.yaml
@@ -4,7 +4,10 @@ channels:
dependencies:
- python=3.10
- # tools
+ # build dependencies
+ - versioneer[toml]
+
+ # test dependencies
- pytest>=6.0
- pytest-cov
- pytest-xdist>=1.31
diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index 9ebc305a0cb0c..1f6f73f3c963f 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -4,8 +4,11 @@ channels:
dependencies:
- python=3.10
- # test dependencies
+ # build dependencies
+ - versioneer[toml]
- cython>=0.29.32
+
+ # test dependencies
- pytest>=6.0
- pytest-cov
- pytest-xdist>=1.31
diff --git a/ci/deps/actions-38-downstream_compat.yaml b/ci/deps/actions-38-downstream_compat.yaml
index b167d992a7c1a..1c59b9db9b1fa 100644
--- a/ci/deps/actions-38-downstream_compat.yaml
+++ b/ci/deps/actions-38-downstream_compat.yaml
@@ -5,8 +5,11 @@ channels:
dependencies:
- python=3.8
- # test dependencies
+ # build dependencies
+ - versioneer[toml]
- cython>=0.29.32
+
+ # test dependencies
- pytest>=6.0
- pytest-cov
- pytest-xdist>=1.31
diff --git a/ci/deps/actions-38-minimum_versions.yaml b/ci/deps/actions-38-minimum_versions.yaml
index 8e0ccd77b19a6..512aa13c6899a 100644
--- a/ci/deps/actions-38-minimum_versions.yaml
+++ b/ci/deps/actions-38-minimum_versions.yaml
@@ -6,8 +6,11 @@ channels:
dependencies:
- python=3.8.0
- # test dependencies
+ # build dependencies
+ - versioneer[toml]
- cython>=0.29.32
+
+ # test dependencies
- pytest>=6.0
- pytest-cov
- pytest-xdist>=1.31
diff --git a/ci/deps/actions-38.yaml b/ci/deps/actions-38.yaml
index 825b8aeebfc2f..48b9d18771afb 100644
--- a/ci/deps/actions-38.yaml
+++ b/ci/deps/actions-38.yaml
@@ -4,8 +4,11 @@ channels:
dependencies:
- python=3.8
- # test dependencies
+ # build dependencies
+ - versioneer[toml]
- cython>=0.29.32
+
+ # test dependencies
- pytest>=6.0
- pytest-cov
- pytest-xdist>=1.31
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index 1ee96878dbe34..59191ad107d12 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -4,8 +4,11 @@ channels:
dependencies:
- python=3.9
- # test dependencies
+ # build dependencies
+ - versioneer[toml]
- cython>=0.29.32
+
+ # test dependencies
- pytest>=6.0
- pytest-cov
- pytest-xdist>=1.31
diff --git a/ci/deps/actions-pypy-38.yaml b/ci/deps/actions-pypy-38.yaml
index e06b992acc191..17d39a4c53ac3 100644
--- a/ci/deps/actions-pypy-38.yaml
+++ b/ci/deps/actions-pypy-38.yaml
@@ -7,8 +7,11 @@ dependencies:
# with base pandas has been dealt with
- python=3.8[build=*_pypy] # TODO: use this once pypy3.8 is available
- # tools
+ # build dependencies
+ - versioneer[toml]
- cython>=0.29.32
+
+ # test dependencies
- pytest>=6.0
- pytest-cov
- pytest-asyncio
diff --git a/ci/deps/circle-38-arm64.yaml b/ci/deps/circle-38-arm64.yaml
index ae4a82d016131..63547d3521489 100644
--- a/ci/deps/circle-38-arm64.yaml
+++ b/ci/deps/circle-38-arm64.yaml
@@ -4,8 +4,11 @@ channels:
dependencies:
- python=3.8
- # test dependencies
+ # build dependencies
+ - versioneer[toml]
- cython>=0.29.32
+
+ # test dependencies
- pytest>=6.0
- pytest-cov
- pytest-xdist>=1.31
diff --git a/environment.yml b/environment.yml
index be7a57d615ed0..7cd8859825453 100644
--- a/environment.yml
+++ b/environment.yml
@@ -6,8 +6,11 @@ dependencies:
- python=3.8
- pip
- # test dependencies
+ # build dependencies
+ - versioneer[toml]
- cython=0.29.32
+
+ # test dependencies
- pytest>=6.0
- pytest-cov
- pytest-xdist>=1.31
diff --git a/pandas/_version.py b/pandas/_version.py
index 4877bdff3eb3a..6705b8505f7e2 100644
--- a/pandas/_version.py
+++ b/pandas/_version.py
@@ -1,20 +1,25 @@
-# pylint: disable=consider-using-f-string
# This file helps to compute a version number in source trees obtained from
-# git-archive tarball (such as those provided by GitHub's download-from-tag
+# git-archive tarball (such as those provided by GitHub's download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
-# This file is released into the public domain. Generated by
-# versioneer-0.19 (https://github.com/python-versioneer/python-versioneer)
+# This file is released into the public domain.
+# Generated by versioneer-0.28
+# https://github.com/python-versioneer/python-versioneer
"""Git implementation of _version.py."""
import errno
+import functools
import os
import re
import subprocess
import sys
+from typing import (
+ Callable,
+ Dict,
+)
def get_keywords():
@@ -52,7 +57,8 @@ class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
-HANDLERS = {}
+LONG_VERSION_PY: Dict[str, str] = {}
+HANDLERS: Dict[str, Dict[str, Callable]] = {}
def register_vcs_handler(vcs, method): # decorator
@@ -71,17 +77,26 @@ def decorate(f):
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
- p = None
- for c in commands:
- dispcmd = str([c] + args)
+ process = None
+
+ popen_kwargs = {}
+ if sys.platform == "win32":
+ # This hides the console window if pythonw.exe is used
+ startupinfo = subprocess.STARTUPINFO()
+ startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
+ popen_kwargs["startupinfo"] = startupinfo
+
+ for command in commands:
+ dispcmd = str([command] + args)
try:
# remember shell=False, so use git.cmd on windows, not just git
- p = subprocess.Popen(
- [c] + args,
+ process = subprocess.Popen(
+ [command] + args,
cwd=cwd,
env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr else None),
+ **popen_kwargs,
)
break
except OSError:
@@ -89,20 +104,20 @@ def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=
if e.errno == errno.ENOENT:
continue
if verbose:
- print("unable to run %s" % dispcmd)
+ print(f"unable to run {dispcmd}")
print(e)
return None, None
else:
if verbose:
print(f"unable to find command, tried {commands}")
return None, None
- stdout = p.communicate()[0].strip().decode()
- if p.returncode != 0:
+ stdout = process.communicate()[0].strip().decode()
+ if process.returncode != 0:
if verbose:
- print("unable to run %s (error)" % dispcmd)
- print("stdout was %s" % stdout)
- return None, p.returncode
- return stdout, p.returncode
+ print(f"unable to run {dispcmd} (error)")
+ print(f"stdout was {stdout}")
+ return None, process.returncode
+ return stdout, process.returncode
def versions_from_parentdir(parentdir_prefix, root, verbose):
@@ -114,7 +129,7 @@ def versions_from_parentdir(parentdir_prefix, root, verbose):
"""
rootdirs = []
- for i in range(3):
+ for _ in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {
@@ -124,14 +139,13 @@ def versions_from_parentdir(parentdir_prefix, root, verbose):
"error": None,
"date": None,
}
- else:
- rootdirs.append(root)
- root = os.path.dirname(root) # up a level
+ rootdirs.append(root)
+ root = os.path.dirname(root) # up a level
if verbose:
print(
- "Tried directories %s but none started with prefix %s"
- % (str(rootdirs), parentdir_prefix)
+ f"Tried directories {str(rootdirs)} \
+ but none started with prefix {parentdir_prefix}"
)
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
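The reworked loop above walks at most three levels up from the unpack directory, looking for a directory name that starts with `parentdir_prefix`. A minimal standalone sketch of that walk-up (the paths and prefix here are hypothetical examples, and the real function returns a full version dict rather than a string):

```python
import os

def version_from_parentdir(parentdir_prefix, root):
    # Simplified sketch of the walk-up: check the current directory
    # name, then move up a level, at most three times.
    for _ in range(3):
        dirname = os.path.basename(root)
        if dirname.startswith(parentdir_prefix):
            return dirname[len(parentdir_prefix):]
        root = os.path.dirname(root)  # up a level
    return None  # the real code raises NotThisMethod here

print(version_from_parentdir("pandas-", "/tmp/pandas-1.5.2/build/lib"))
```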
@@ -145,21 +159,20 @@ def git_get_keywords(versionfile_abs):
# _version.py.
keywords = {}
try:
- f = open(versionfile_abs)
- for line in f.readlines():
- if line.strip().startswith("git_refnames ="):
- mo = re.search(r'=\s*"(.*)"', line)
- if mo:
- keywords["refnames"] = mo.group(1)
- if line.strip().startswith("git_full ="):
- mo = re.search(r'=\s*"(.*)"', line)
- if mo:
- keywords["full"] = mo.group(1)
- if line.strip().startswith("git_date ="):
- mo = re.search(r'=\s*"(.*)"', line)
- if mo:
- keywords["date"] = mo.group(1)
- f.close()
+ with open(versionfile_abs) as fobj:
+ for line in fobj:
+ if line.strip().startswith("git_refnames ="):
+ mo = re.search(r'=\s*"(.*)"', line)
+ if mo:
+ keywords["refnames"] = mo.group(1)
+ if line.strip().startswith("git_full ="):
+ mo = re.search(r'=\s*"(.*)"', line)
+ if mo:
+ keywords["full"] = mo.group(1)
+ if line.strip().startswith("git_date ="):
+ mo = re.search(r'=\s*"(.*)"', line)
+ if mo:
+ keywords["date"] = mo.group(1)
except OSError:
pass
return keywords
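The `with open(...)` rewrite keeps the same regex-based extraction of the `git archive`-expanded keywords. For illustration, this is how one such line parses (the refnames string is an invented example):

```python
import re

# Hypothetical keyword line as git-archive would expand it in _version.py
line = 'git_refnames = " (HEAD -> main, tag: v1.5.2)"'

keywords = {}
if line.strip().startswith("git_refnames ="):
    mo = re.search(r'=\s*"(.*)"', line)
    if mo:
        keywords["refnames"] = mo.group(1)

print(keywords["refnames"])
```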
@@ -168,8 +181,8 @@ def git_get_keywords(versionfile_abs):
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
- if not keywords:
- raise NotThisMethod("no keywords at all, weird")
+ if "refnames" not in keywords:
+ raise NotThisMethod("Short version file found")
date = keywords.get("date")
if date is not None:
# Use only the last line. Previous lines may contain GPG signature
@@ -200,18 +213,23 @@ def git_versions_from_keywords(keywords, tag_prefix, verbose):
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
- # "stabilization", as well as "HEAD" and "main".
+ # "stabilization", as well as "HEAD" and "master".
tags = {r for r in refs if re.search(r"\d", r)}
if verbose:
- print("discarding '%s', no digits" % ",".join(refs - tags))
+ print(f"discarding '{','.join(refs - tags)}', no digits")
if verbose:
- print("likely tags: %s" % ",".join(sorted(tags)))
+ print(f"likely tags: {','.join(sorted(tags))}")
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix) :]
+ # Filter out refs that exactly match prefix or that don't start
+ # with a number once the prefix is stripped (mostly a concern
+ # when prefix is '')
+ if not re.match(r"\d", r):
+ continue
if verbose:
- print("picking %s" % r)
+ print(f"picking {r}")
return {
"version": r,
"full-revisionid": keywords["full"].strip(),
@@ -232,7 +250,7 @@ def git_versions_from_keywords(keywords, tag_prefix, verbose):
@register_vcs_handler("git", "pieces_from_vcs")
-def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
+def git_pieces_from_vcs(tag_prefix, root, verbose, runner=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
@@ -243,15 +261,22 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
- out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=True)
+ # GIT_DIR can interfere with correct operation of Versioneer.
+ # It may be intended to be passed to the Versioneer-versioned project,
+ # but that should not change where we get our version from.
+ env = os.environ.copy()
+ env.pop("GIT_DIR", None)
+ runner = functools.partial(runner, env=env)
+
+ _, rc = runner(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=not verbose)
if rc != 0:
if verbose:
- print("Directory %s not under git control" % root)
+ print(f"Directory {root} not under git control")
raise NotThisMethod("'git rev-parse --git-dir' returned error")
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
- describe_out, rc = run_command(
+ describe_out, rc = runner(
GITS,
[
"describe",
@@ -260,7 +285,7 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"--always",
"--long",
"--match",
- "%s*" % tag_prefix,
+ f"{tag_prefix}[[:digit:]]*",
],
cwd=root,
)
@@ -268,7 +293,7 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
- full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
+ full_out, rc = runner(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
@@ -278,6 +303,38 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
+ branch_name, rc = runner(GITS, ["rev-parse", "--abbrev-ref", "HEAD"], cwd=root)
+ # --abbrev-ref was added in git-1.6.3
+ if rc != 0 or branch_name is None:
+ raise NotThisMethod("'git rev-parse --abbrev-ref' returned error")
+ branch_name = branch_name.strip()
+
+ if branch_name == "HEAD":
+ # If we aren't exactly on a branch, pick a branch which represents
+ # the current commit. If all else fails, we are on a branchless
+ # commit.
+ branches, rc = runner(GITS, ["branch", "--contains"], cwd=root)
+ # --contains was added in git-1.5.4
+ if rc != 0 or branches is None:
+ raise NotThisMethod("'git branch --contains' returned error")
+ branches = branches.split("\n")
+
+ # Remove the first line if we're running detached
+ if "(" in branches[0]:
+ branches.pop(0)
+
+ # Strip off the leading "* " from the list of branches.
+ branches = [branch[2:] for branch in branches]
+ if "master" in branches:
+ branch_name = "master"
+ elif not branches:
+ branch_name = None
+ else:
+ # Pick the first branch that is returned. Good or bad.
+ branch_name = branches[0]
+
+ pieces["branch"] = branch_name
+
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
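When HEAD is detached, the new code falls back to parsing `git branch --contains` output to attribute the commit to a branch. A condensed sketch of that selection logic (the sample output below is invented):

```python
def pick_branch(branch_output):
    # Condensed version of the fallback in git_pieces_from_vcs
    branches = branch_output.split("\n")
    # Drop the "(HEAD detached at ...)" line when running detached
    if "(" in branches[0]:
        branches.pop(0)
    # Strip the leading "* " / "  " markers from each branch name
    branches = [branch[2:] for branch in branches]
    if "master" in branches:
        return "master"
    if not branches:
        return None  # branchless commit
    return branches[0]  # first branch returned, good or bad

print(pick_branch("* (HEAD detached at 1076c97)\n  main\n  feature-x"))
```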
@@ -295,7 +352,7 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", git_describe)
if not mo:
# unparsable. Maybe git-describe is misbehaving?
- pieces["error"] = "unable to parse git-describe output: '%s'" % describe_out
+ pieces["error"] = f"unable to parse git-describe output: '{describe_out}'"
return pieces
# tag
@@ -304,10 +361,9 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
if verbose:
fmt = "tag '%s' doesn't start with prefix '%s'"
print(fmt % (full_tag, tag_prefix))
- pieces["error"] = "tag '{}' doesn't start with prefix '{}'".format(
- full_tag,
- tag_prefix,
- )
+ pieces[
+ "error"
+ ] = f"tag '{full_tag}' doesn't start with prefix '{tag_prefix}'"
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix) :]
@@ -320,13 +376,11 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
else:
# HEX: no tags
pieces["closest-tag"] = None
- count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root)
- pieces["distance"] = int(count_out) # total number of commits
+ out, rc = runner(GITS, ["rev-list", "HEAD", "--left-right"], cwd=root)
+ pieces["distance"] = len(out.split()) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
- date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"], cwd=root)[
- 0
- ].strip()
+ date = runner(GITS, ["show", "-s", "--format=%ci", "HEAD"], cwd=root)[0].strip()
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
@@ -335,7 +389,7 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
return pieces
-def plus_or_dot(pieces) -> str:
+def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
@@ -355,30 +409,77 @@ def render_pep440(pieces):
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
- rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
+ rendered += f"{pieces['distance']}.g{pieces['short']}"
+ if pieces["dirty"]:
+ rendered += ".dirty"
+ else:
+ # exception #1
+ rendered = f"0+untagged.{pieces['distance']}.g{pieces['short']}"
+ if pieces["dirty"]:
+ rendered += ".dirty"
+ return rendered
+
+
+def render_pep440_branch(pieces):
+ """TAG[[.dev0]+DISTANCE.gHEX[.dirty]] .
+
+ The ".dev0" means not master branch. Note that .dev0 sorts backwards
+ (a feature branch will appear "older" than the master branch).
+
+ Exceptions:
+ 1: no tags. 0[.dev0]+untagged.DISTANCE.gHEX[.dirty]
+ """
+ if pieces["closest-tag"]:
+ rendered = pieces["closest-tag"]
+ if pieces["distance"] or pieces["dirty"]:
+ if pieces["branch"] != "master":
+ rendered += ".dev0"
+ rendered += plus_or_dot(pieces)
+ rendered += f"{pieces['distance']}.g{pieces['short']}"
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
- rendered = "0+untagged.%d.g%s" % (pieces["distance"], pieces["short"])
+ rendered = "0"
+ if pieces["branch"] != "master":
+ rendered += ".dev0"
+ rendered += f"+untagged.{pieces['distance']}.g{pieces['short']}"
if pieces["dirty"]:
rendered += ".dirty"
return rendered
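`render_pep440_branch` inserts `.dev0` for non-master branches, so feature-branch builds deliberately sort "older" than master builds of the same tag. A flattened sketch of the same rules, with `plus_or_dot` inlined and the `pieces` fields passed as plain arguments:

```python
def render_pep440_branch(closest_tag, distance, short, dirty, branch):
    # Flattened sketch of the new branch-aware renderer above
    if closest_tag:
        rendered = closest_tag
        if distance or dirty:
            if branch != "master":
                rendered += ".dev0"
            rendered += "." if "+" in closest_tag else "+"
            rendered += f"{distance}.g{short}"
            if dirty:
                rendered += ".dirty"
    else:
        rendered = "0"
        if branch != "master":
            rendered += ".dev0"
        rendered += f"+untagged.{distance}.g{short}"
        if dirty:
            rendered += ".dirty"
    return rendered

print(render_pep440_branch("1.5.2", 3, "1076c97", False, "feature-x"))
```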
+def pep440_split_post(ver):
+ """Split pep440 version string at the post-release segment.
+
+ Returns the release segments before the post-release and the
+ post-release version number (None if no post-release segment is present).
+ """
+ vc = str.split(ver, ".post")
+ return vc[0], int(vc[1] or 0) if len(vc) == 2 else None
+
+
def render_pep440_pre(pieces):
- """TAG[.post0.devDISTANCE] -- No -dirty.
+ """TAG[.postN.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post0.devDISTANCE
"""
if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
if pieces["distance"]:
- rendered += ".post0.dev%d" % pieces["distance"]
+ # update the post release segment
+ tag_version, post_version = pep440_split_post(pieces["closest-tag"])
+ rendered = tag_version
+ if post_version is not None:
+ rendered += f".post{post_version + 1}.dev{pieces['distance']}"
+ else:
+ rendered += f".post0.dev{pieces['distance']}"
+ else:
+ # no commits, use the tag as the version
+ rendered = pieces["closest-tag"]
else:
# exception #1
- rendered = "0.post0.dev%d" % pieces["distance"]
+ rendered = f"0.post0.dev{pieces['distance']}"
return rendered
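The new post-release handling means that building N commits past a `.postX` tag bumps the post number (`.post{X+1}.devN`) instead of appending a second, invalid post segment. A self-contained sketch of the pair of functions, with the `pieces` fields passed as plain arguments:

```python
def pep440_split_post(ver):
    # As in the diff: "1.5.2.post1" -> ("1.5.2", 1); the post number is
    # None when there is no .post segment at all.
    vc = str.split(ver, ".post")
    return vc[0], int(vc[1] or 0) if len(vc) == 2 else None

def render_pep440_pre(closest_tag, distance):
    # Flattened sketch of the updated renderer, including the
    # no-tags exception case
    if closest_tag:
        if distance:
            tag_version, post = pep440_split_post(closest_tag)
            if post is not None:
                # update the existing post-release segment
                return f"{tag_version}.post{post + 1}.dev{distance}"
            return f"{tag_version}.post0.dev{distance}"
        # no commits past the tag: use the tag as the version
        return closest_tag
    return f"0.post0.dev{distance}"

print(render_pep440_pre("1.5.2.post1", 4))
```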
@@ -395,17 +496,46 @@ def render_pep440_post(pieces):
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
- rendered += ".post%d" % pieces["distance"]
+ rendered += f".post{pieces['distance']}"
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
- rendered += "g%s" % pieces["short"]
+ rendered += f"g{pieces['short']}"
else:
# exception #1
- rendered = "0.post%d" % pieces["distance"]
+ rendered = f"0.post{pieces['distance']}"
if pieces["dirty"]:
rendered += ".dev0"
- rendered += "+g%s" % pieces["short"]
+ rendered += f"+g{pieces['short']}"
+ return rendered
+
+
+def render_pep440_post_branch(pieces):
+ """TAG[.postDISTANCE[.dev0]+gHEX[.dirty]] .
+
+ The ".dev0" means not master branch.
+
+ Exceptions:
+ 1: no tags. 0.postDISTANCE[.dev0]+gHEX[.dirty]
+ """
+ if pieces["closest-tag"]:
+ rendered = pieces["closest-tag"]
+ if pieces["distance"] or pieces["dirty"]:
+ rendered += f".post{pieces['distance']}"
+ if pieces["branch"] != "master":
+ rendered += ".dev0"
+ rendered += plus_or_dot(pieces)
+ rendered += f"g{pieces['short']}"
+ if pieces["dirty"]:
+ rendered += ".dirty"
+ else:
+ # exception #1
+ rendered = f"0.post{pieces['distance']}"
+ if pieces["branch"] != "master":
+ rendered += ".dev0"
+ rendered += f"+g{pieces['short']}"
+ if pieces["dirty"]:
+ rendered += ".dirty"
return rendered
@@ -420,12 +550,12 @@ def render_pep440_old(pieces):
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
- rendered += ".post%d" % pieces["distance"]
+ rendered += f"0.post{pieces['distance']}"
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
- rendered = "0.post%d" % pieces["distance"]
+ rendered = f"0.post{pieces['distance']}"
if pieces["dirty"]:
rendered += ".dev0"
return rendered
@@ -442,7 +572,7 @@ def render_git_describe(pieces):
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
- rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
+ rendered += f"-{pieces['distance']}-g{pieces['short']}"
else:
# exception #1
rendered = pieces["short"]
@@ -462,7 +592,7 @@ def render_git_describe_long(pieces):
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
- rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
+ rendered += f"-{pieces['distance']}-g{pieces['short']}"
else:
# exception #1
rendered = pieces["short"]
@@ -487,10 +617,14 @@ def render(pieces, style):
if style == "pep440":
rendered = render_pep440(pieces)
+ elif style == "pep440-branch":
+ rendered = render_pep440_branch(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
+ elif style == "pep440-post-branch":
+ rendered = render_pep440_post_branch(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
@@ -498,7 +632,7 @@ def render(pieces, style):
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
- raise ValueError("unknown style '%s'" % style)
+ raise ValueError(f"unknown style '{style}'")
return {
"version": rendered,
@@ -529,7 +663,7 @@ def get_versions():
# versionfile_source is the relative path from the top of the source
# tree (where the .git directory might live) to this file. Invert
# this to find the root from __file__.
- for i in cfg.versionfile_source.split("/"):
+ for _ in cfg.versionfile_source.split("/"):
root = os.path.dirname(root)
except NameError:
return {
diff --git a/pyproject.toml b/pyproject.toml
index 6ce05ce5d679e..f14bad44a889f 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -5,12 +5,19 @@ requires = [
"setuptools>=51.0.0",
"wheel",
"Cython>=0.29.32,<3", # Note: sync with setup.py, environment.yml and asv.conf.json
- "oldest-supported-numpy>=2022.8.16"
+ "oldest-supported-numpy>=2022.8.16",
+ "versioneer[toml]"
]
-# uncomment to enable pep517 after versioneer problem is fixed.
-# https://github.com/python-versioneer/python-versioneer/issues/193
# build-backend = "setuptools.build_meta"
+[tool.versioneer]
+VCS = "git"
+style = "pep440"
+versionfile_source = "pandas/_version.py"
+versionfile_build = "pandas/_version.py"
+tag_prefix = "v"
+parentdir_prefix = "pandas-"
+
[tool.cibuildwheel]
skip = "cp36-* cp37-* pp37-* *-manylinux_i686 *_ppc64le *_s390x *-musllinux*"
build-verbosity = "3"
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 5c90e5908ece8..78dddbe607084 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -2,6 +2,7 @@
# See that file for comments about the need/usage of each dependency.
pip
+versioneer[toml]
cython==0.29.32
pytest>=6.0
pytest-cov
diff --git a/setup.cfg b/setup.cfg
index 6de5bf2173a70..c3aaf7cf4e9ae 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -167,17 +167,6 @@ inplace = True
[options.packages.find]
include = pandas, pandas.*
-# See the docstring in versioneer.py for instructions. Note that you must
-# re-run 'versioneer.py setup' after changing this section, and commit the
-# resulting files.
-[versioneer]
-VCS = git
-style = pep440
-versionfile_source = pandas/_version.py
-versionfile_build = pandas/_version.py
-tag_prefix = v
-parentdir_prefix = pandas-
-
[flake8]
max-line-length = 88
ignore =
diff --git a/setup.py b/setup.py
index 02b91cbb0d9b4..f8fa048757289 100755
--- a/setup.py
+++ b/setup.py
@@ -23,7 +23,6 @@
setup,
)
from setuptools.command.build_ext import build_ext as _build_ext
-
import versioneer
cmdclass = versioneer.get_cmdclass()
diff --git a/versioneer.py b/versioneer.py
deleted file mode 100644
index 5dd66e4df6c36..0000000000000
--- a/versioneer.py
+++ /dev/null
@@ -1,1915 +0,0 @@
-# Version: 0.19
-# pylint: disable=consider-using-f-string
-
-"""The Versioneer - like a rocketeer, but for versions.
-
-The Versioneer
-==============
-
-* like a rocketeer, but for versions!
-* https://github.com/python-versioneer/python-versioneer
-* Brian Warner
-* License: Public Domain
-* Compatible with: Python 3.6, 3.7, 3.8, 3.9 and pypy3
-* [![Latest Version][pypi-image]][pypi-url]
-* [![Build Status][travis-image]][travis-url]
-
-This is a tool for managing a recorded version number in distutils-based
-python projects. The goal is to remove the tedious and error-prone "update
-the embedded version string" step from your release process. Making a new
-release should be as easy as recording a new tag in your version-control
-system, and maybe making new tarballs.
-
-
-## Quick Install
-
-* `pip install versioneer` to somewhere in your $PATH
-* add a `[versioneer]` section to your setup.cfg (see [Install](INSTALL.md))
-* run `versioneer install` in your source tree, commit the results
-* Verify version information with `python setup.py version`
-
-## Version Identifiers
-
-Source trees come from a variety of places:
-
-* a version-control system checkout (mostly used by developers)
-* a nightly tarball, produced by build automation
-* a snapshot tarball, produced by a web-based VCS browser, like GitHub's
- "tarball from tag" feature
-* a release tarball, produced by "setup.py sdist", distributed through PyPI
-
-Within each source tree, the version identifier (either a string or a number,
-this tool is format-agnostic) can come from a variety of places:
-
-* ask the VCS tool itself, e.g. "git describe" (for checkouts), which knows
- about recent "tags" and an absolute revision-id
-* the name of the directory into which the tarball was unpacked
-* an expanded VCS keyword ($Id$, etc)
-* a `_version.py` created by some earlier build step
-
-For released software, the version identifier is closely related to a VCS
-tag. Some projects use tag names that include more than just the version
-string (e.g. "myproject-1.2" instead of just "1.2"), in which case the tool
-needs to strip the tag prefix to extract the version identifier. For
-unreleased software (between tags), the version identifier should provide
-enough information to help developers recreate the same tree, while also
-giving them an idea of roughly how old the tree is (after version 1.2, before
-version 1.3). Many VCS systems can report a description that captures this,
-for example `git describe --tags --dirty --always` reports things like
-"0.7-1-g574ab98-dirty" to indicate that the checkout is one revision past the
-0.7 tag, has a unique revision id of "574ab98", and is "dirty" (it has
-uncommitted changes).
-
-The version identifier is used for multiple purposes:
-
-* to allow the module to self-identify its version: `myproject.__version__`
-* to choose a name and prefix for a 'setup.py sdist' tarball
-
-## Theory of Operation
-
-Versioneer works by adding a special `_version.py` file into your source
-tree, where your `__init__.py` can import it. This `_version.py` knows how to
-dynamically ask the VCS tool for version information at import time.
-
-`_version.py` also contains `$Revision$` markers, and the installation
-process marks `_version.py` to have this marker rewritten with a tag name
-during the `git archive` command. As a result, generated tarballs will
-contain enough information to get the proper version.
-
-To allow `setup.py` to compute a version too, a `versioneer.py` is added to
-the top level of your source tree, next to `setup.py` and the `setup.cfg`
-that configures it. This overrides several distutils/setuptools commands to
-compute the version when invoked, and changes `setup.py build` and `setup.py
-sdist` to replace `_version.py` with a small static file that contains just
-the generated version data.
-
-## Installation
-
-See [INSTALL.md](./INSTALL.md) for detailed installation instructions.
-
-## Version-String Flavors
-
-Code which uses Versioneer can learn about its version string at runtime by
-importing `_version` from your main `__init__.py` file and running the
-`get_versions()` function. From the "outside" (e.g. in `setup.py`), you can
-import the top-level `versioneer.py` and run `get_versions()`.
-
-Both functions return a dictionary with different flavors of version
-information:
-
-* `['version']`: A condensed version string, rendered using the selected
- style. This is the most commonly used value for the project's version
- string. The default "pep440" style yields strings like `0.11`,
- `0.11+2.g1076c97`, or `0.11+2.g1076c97.dirty`. See the "Styles" section
- below for alternative styles.
-
-* `['full-revisionid']`: detailed revision identifier. For Git, this is the
- full SHA1 commit id, e.g. "1076c978a8d3cfc70f408fe5974aa6c092c949ac".
-
-* `['date']`: Date and time of the latest `HEAD` commit. For Git, it is the
- commit date in ISO 8601 format. This will be None if the date is not
- available.
-
-* `['dirty']`: a boolean, True if the tree has uncommitted changes. Note that
- this is only accurate if run in a VCS checkout, otherwise it is likely to
- be False or None
-
-* `['error']`: if the version string could not be computed, this will be set
- to a string describing the problem, otherwise it will be None. It may be
- useful to throw an exception in setup.py if this is set, to avoid e.g.
- creating tarballs with a version string of "unknown".
-
-Some variants are more useful than others. Including `full-revisionid` in a
-bug report should allow developers to reconstruct the exact code being tested
-(or indicate the presence of local changes that should be shared with the
-developers). `version` is suitable for display in an "about" box or a CLI
-`--version` output: it can be easily compared against release notes and lists
-of bugs fixed in various releases.
-
-The installer adds the following text to your `__init__.py` to place a basic
-version in `YOURPROJECT.__version__`:
-
- from ._version import get_versions
- __version__ = get_versions()['version']
- del get_versions
-
-## Styles
-
-The setup.cfg `style=` configuration controls how the VCS information is
-rendered into a version string.
-
-The default style, "pep440", produces a PEP440-compliant string, equal to the
-un-prefixed tag name for actual releases, and containing an additional "local
-version" section with more detail for in-between builds. For Git, this is
-TAG[+DISTANCE.gHEX[.dirty]] , using information from `git describe --tags
---dirty --always`. For example "0.11+2.g1076c97.dirty" indicates that the
-tree is like the "1076c97" commit but has uncommitted changes (".dirty"), and
-that this commit is two revisions ("+2") beyond the "0.11" tag. For released
-software (exactly equal to a known tag), the identifier will only contain the
-stripped tag, e.g. "0.11".
-
-Other styles are available. See [details.md](details.md) in the Versioneer
-source tree for descriptions.
-
-## Debugging
-
-Versioneer tries to avoid fatal errors: if something goes wrong, it will tend
-to return a version of "0+unknown". To investigate the problem, run `setup.py
-version`, which will run the version-lookup code in a verbose mode, and will
-display the full contents of `get_versions()` (including the `error` string,
-which may help identify what went wrong).
-
-## Known Limitations
-
-Some situations are known to cause problems for Versioneer. This details the
-most significant ones. More can be found on GitHub
-[issues page](https://github.com/python-versioneer/python-versioneer/issues).
-
-### Subprojects
-
-Versioneer has limited support for source trees in which `setup.py` is not in
-the root directory (e.g. `setup.py` and `.git/` are *not* siblings). The are
-two common reasons why `setup.py` might not be in the root:
-
-* Source trees which contain multiple subprojects, such as
- [Buildbot](https://github.com/buildbot/buildbot), which contains both
- "master" and "slave" subprojects, each with their own `setup.py`,
- `setup.cfg`, and `tox.ini`. Projects like these produce multiple PyPI
- distributions (and upload multiple independently-installable tarballs).
-* Source trees whose main purpose is to contain a C library, but which also
- provide bindings to Python (and perhaps other languages) in subdirectories.
-
-Versioneer will look for `.git` in parent directories, and most operations
-should get the right version string. However `pip` and `setuptools` have bugs
-and implementation details which frequently cause `pip install .` from a
-subproject directory to fail to find a correct version string (so it usually
-defaults to `0+unknown`).
-
-`pip install --editable .` should work correctly. `setup.py install` might
-work too.
-
-Pip-8.1.1 is known to have this problem, but hopefully it will get fixed in
-some later version.
-
-[Bug #38](https://github.com/python-versioneer/python-versioneer/issues/38) is tracking
-this issue. The discussion in
-[PR #61](https://github.com/python-versioneer/python-versioneer/pull/61) describes the
-issue from the Versioneer side in more detail.
-[pip PR#3176](https://github.com/pypa/pip/pull/3176) and
-[pip PR#3615](https://github.com/pypa/pip/pull/3615) contain work to improve
-pip to let Versioneer work correctly.
-
-Versioneer-0.16 and earlier only looked for a `.git` directory next to the
-`setup.cfg`, so subprojects were completely unsupported with those releases.
-
-### Editable installs with setuptools <= 18.5
-
-`setup.py develop` and `pip install --editable .` allow you to install a
-project into a virtualenv once, then continue editing the source code (and
-test) without re-installing after every change.
-
-"Entry-point scripts" (`setup(entry_points={"console_scripts": ..})`) are a
-convenient way to specify executable scripts that should be installed along
-with the python package.
-
-These both work as expected when using modern setuptools. When using
-setuptools-18.5 or earlier, however, certain operations will cause
-`pkg_resources.DistributionNotFound` errors when running the entrypoint
-script, which must be resolved by re-installing the package. This happens
-when the install happens with one version, then the egg_info data is
-regenerated while a different version is checked out. Many setup.py commands
-cause egg_info to be rebuilt (including `sdist`, `wheel`, and installing into
-a different virtualenv), so this can be surprising.
-
-[Bug #83](https://github.com/python-versioneer/python-versioneer/issues/83) describes
-this one, but upgrading to a newer version of setuptools should probably
-resolve it.
-
-
-## Updating Versioneer
-
-To upgrade your project to a new release of Versioneer, do the following:
-
-* install the new Versioneer (`pip install -U versioneer` or equivalent)
-* edit `setup.cfg`, if necessary, to include any new configuration settings
- indicated by the release notes. See [UPGRADING](./UPGRADING.md) for details.
-* re-run `versioneer install` in your source tree, to replace
- `SRC/_version.py`
-* commit any changed files
-
-## Future Directions
-
-This tool is designed to make it easily extended to other version-control
-systems: all VCS-specific components are in separate directories like
-src/git/ . The top-level `versioneer.py` script is assembled from these
-components by running make-versioneer.py . In the future, make-versioneer.py
-will take a VCS name as an argument, and will construct a version of
-`versioneer.py` that is specific to the given VCS. It might also take the
-configuration arguments that are currently provided manually during
-installation by editing setup.py . Alternatively, it might go the other
-direction and include code from all supported VCS systems, reducing the
-number of intermediate scripts.
-
-## Similar projects
-
-* [setuptools_scm](https://github.com/pypa/setuptools_scm/) - a non-vendored build-time
- dependency
-* [minver](https://github.com/jbweston/miniver) - a lightweight reimplementation of
- versioneer
-
-## License
-
-To make Versioneer easier to embed, all its code is dedicated to the public
-domain. The `_version.py` that it creates is also in the public domain.
-Specifically, both are released under the Creative Commons "Public Domain
-Dedication" license (CC0-1.0), as described in
-https://creativecommons.org/publicdomain/zero/1.0/ .
-
-[pypi-image]: https://img.shields.io/pypi/v/versioneer.svg
-[pypi-url]: https://pypi.python.org/pypi/versioneer/
-[travis-image]:
-https://img.shields.io/travis/com/python-versioneer/python-versioneer.svg
-[travis-url]: https://travis-ci.com/github/python-versioneer/python-versioneer
-
-"""
-
-import configparser
-import errno
-import json
-import os
-import re
-import subprocess
-import sys
-
-
-class VersioneerConfig:
- """Container for Versioneer configuration parameters."""
-
-
-def get_root():
- """Get the project root directory.
-
- We require that all commands are run from the project root, i.e. the
- directory that contains setup.py, setup.cfg, and versioneer.py .
- """
- root = os.path.realpath(os.path.abspath(os.getcwd()))
- setup_py = os.path.join(root, "setup.py")
- versioneer_py = os.path.join(root, "versioneer.py")
- if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
- # allow 'python path/to/setup.py COMMAND'
- root = os.path.dirname(os.path.realpath(os.path.abspath(sys.argv[0])))
- setup_py = os.path.join(root, "setup.py")
- versioneer_py = os.path.join(root, "versioneer.py")
- if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
- err = (
-            "Versioneer was unable to find the project root directory. "
- "Versioneer requires setup.py to be executed from "
- "its immediate directory (like 'python setup.py COMMAND'), "
- "or in a way that lets it use sys.argv[0] to find the root "
- "(like 'python path/to/setup.py COMMAND')."
- )
- raise VersioneerBadRootError(err)
- try:
- # Certain runtime workflows (setup.py install/develop in a setuptools
- # tree) execute all dependencies in a single python process, so
- # "versioneer" may be imported multiple times, and python's shared
- # module-import table will cache the first one. So we can't use
- # os.path.dirname(__file__), as that will find whichever
- # versioneer.py was first imported, even in later projects.
- me = os.path.realpath(os.path.abspath(__file__))
- me_dir = os.path.normcase(os.path.splitext(me)[0])
- vsr_dir = os.path.normcase(os.path.splitext(versioneer_py)[0])
- if me_dir != vsr_dir:
- print(
- "Warning: build in %s is using versioneer.py from %s"
- % (os.path.dirname(me), versioneer_py)
- )
- except NameError:
- pass
- return root
-
-
-def get_config_from_root(root):
- """Read the project setup.cfg file to determine Versioneer config."""
- # This might raise EnvironmentError (if setup.cfg is missing), or
- # configparser.NoSectionError (if it lacks a [versioneer] section), or
- # configparser.NoOptionError (if it lacks "VCS="). See the docstring at
- # the top of versioneer.py for instructions on writing your setup.cfg .
- setup_cfg = os.path.join(root, "setup.cfg")
- parser = configparser.ConfigParser()
- with open(setup_cfg) as f:
- parser.read_file(f)
- VCS = parser.get("versioneer", "VCS") # mandatory
-
- def get(parser, name):
- if parser.has_option("versioneer", name):
- return parser.get("versioneer", name)
- return None
-
- cfg = VersioneerConfig()
- cfg.VCS = VCS
- cfg.style = get(parser, "style") or ""
- cfg.versionfile_source = get(parser, "versionfile_source")
- cfg.versionfile_build = get(parser, "versionfile_build")
- cfg.tag_prefix = get(parser, "tag_prefix")
- if cfg.tag_prefix in ("''", '""'):
- cfg.tag_prefix = ""
- cfg.parentdir_prefix = get(parser, "parentdir_prefix")
- cfg.verbose = get(parser, "verbose")
- return cfg
-
-
-class NotThisMethod(Exception):
- """Exception raised if a method is not valid for the current scenario."""
-
-
-# these dictionaries contain VCS-specific tools
-LONG_VERSION_PY = {}
-HANDLERS = {}
-
-
-def register_vcs_handler(vcs, method): # decorator
- """Create decorator to mark a method as the handler of a VCS."""
-
- def decorate(f):
- """Store f in HANDLERS[vcs][method]."""
- if vcs not in HANDLERS:
- HANDLERS[vcs] = {}
- HANDLERS[vcs][method] = f
- return f
-
- return decorate
-
-
-def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=None):
- """Call the given command(s)."""
- assert isinstance(commands, list)
- p = None
- for c in commands:
- dispcmd = str([c] + args)
- try:
- # remember shell=False, so use git.cmd on windows, not just git
- p = subprocess.Popen(
- [c] + args,
- cwd=cwd,
- env=env,
- stdout=subprocess.PIPE,
- stderr=(subprocess.PIPE if hide_stderr else None),
- )
- break
- except OSError:
- e = sys.exc_info()[1]
- if e.errno == errno.ENOENT:
- continue
- if verbose:
- print("unable to run %s" % dispcmd)
- print(e)
- return None, None
- else:
- if verbose:
- print(f"unable to find command, tried {commands}")
- return None, None
- stdout = p.communicate()[0].strip().decode()
- if p.returncode != 0:
- if verbose:
- print("unable to run %s (error)" % dispcmd)
- print("stdout was %s" % stdout)
- return None, p.returncode
- return stdout, p.returncode
-
-
-LONG_VERSION_PY[
- "git"
-] = r'''
-# pylint: disable=consider-using-f-string
-# This file helps to compute a version number in source trees obtained from
-# git-archive tarball (such as those provided by GitHub's download-from-tag
-# feature). Distribution tarballs (built by setup.py sdist) and build
-# directories (produced by setup.py build) will contain a much shorter file
-# that just contains the computed version number.
-
-# This file is released into the public domain. Generated by
-# versioneer-0.19 (https://github.com/python-versioneer/python-versioneer)
-
-"""Git implementation of _version.py."""
-
-import errno
-import os
-import re
-import subprocess
-import sys
-
-
-def get_keywords():
- """Get the keywords needed to look up the version information."""
- # these strings will be replaced by git during git-archive.
- # setup.py/versioneer.py will grep for the variable names, so they must
- # each be defined on a line of their own. _version.py will just call
- # get_keywords().
- git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s"
- git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s"
- git_date = "%(DOLLAR)sFormat:%%ci%(DOLLAR)s"
- keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}
- return keywords
-
-
-class VersioneerConfig:
- """Container for Versioneer configuration parameters."""
-
-
-def get_config():
- """Create, populate and return the VersioneerConfig() object."""
- # these strings are filled in when 'setup.py versioneer' creates
- # _version.py
- cfg = VersioneerConfig()
- cfg.VCS = "git"
- cfg.style = "%(STYLE)s"
- cfg.tag_prefix = "%(TAG_PREFIX)s"
- cfg.parentdir_prefix = "%(PARENTDIR_PREFIX)s"
- cfg.versionfile_source = "%(VERSIONFILE_SOURCE)s"
- cfg.verbose = False
- return cfg
-
-
-class NotThisMethod(Exception):
- """Exception raised if a method is not valid for the current scenario."""
-
-
-LONG_VERSION_PY = {}
-HANDLERS = {}
-
-
-def register_vcs_handler(vcs, method): # decorator
- """Create decorator to mark a method as the handler of a VCS."""
- def decorate(f):
- """Store f in HANDLERS[vcs][method]."""
- if vcs not in HANDLERS:
- HANDLERS[vcs] = {}
- HANDLERS[vcs][method] = f
- return f
- return decorate
-
-
-def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
- env=None):
- """Call the given command(s)."""
- assert isinstance(commands, list)
- p = None
- for c in commands:
- try:
- dispcmd = str([c] + args)
- # remember shell=False, so use git.cmd on windows, not just git
- p = subprocess.Popen([c] + args, cwd=cwd, env=env,
- stdout=subprocess.PIPE,
- stderr=(subprocess.PIPE if hide_stderr
- else None))
- break
- except EnvironmentError:
- e = sys.exc_info()[1]
- if e.errno == errno.ENOENT:
- continue
- if verbose:
- print("unable to run %%s" %% dispcmd)
- print(e)
- return None, None
- else:
- if verbose:
- print("unable to find command, tried %%s" %% (commands,))
- return None, None
- stdout = p.communicate()[0].strip().decode()
- if p.returncode != 0:
- if verbose:
- print("unable to run %%s (error)" %% dispcmd)
- print("stdout was %%s" %% stdout)
- return None, p.returncode
- return stdout, p.returncode
-
-
-def versions_from_parentdir(parentdir_prefix, root, verbose):
- """Try to determine the version from the parent directory name.
-
- Source tarballs conventionally unpack into a directory that includes both
- the project name and a version string. We will also support searching up
- two directory levels for an appropriately named parent directory
- """
- rootdirs = []
-
- for i in range(3):
- dirname = os.path.basename(root)
- if dirname.startswith(parentdir_prefix):
- return {"version": dirname[len(parentdir_prefix):],
- "full-revisionid": None,
- "dirty": False, "error": None, "date": None}
- else:
- rootdirs.append(root)
- root = os.path.dirname(root) # up a level
-
- if verbose:
- print("Tried directories %%s but none started with prefix %%s" %%
- (str(rootdirs), parentdir_prefix))
- raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
-
-
-@register_vcs_handler("git", "get_keywords")
-def git_get_keywords(versionfile_abs):
- """Extract version information from the given file."""
- # the code embedded in _version.py can just fetch the value of these
- # keywords. When used from setup.py, we don't want to import _version.py,
- # so we do it with a regexp instead. This function is not used from
- # _version.py.
- keywords = {}
- try:
- f = open(versionfile_abs, "r")
- for line in f.readlines():
- if line.strip().startswith("git_refnames ="):
- mo = re.search(r'=\s*"(.*)"', line)
- if mo:
- keywords["refnames"] = mo.group(1)
- if line.strip().startswith("git_full ="):
- mo = re.search(r'=\s*"(.*)"', line)
- if mo:
- keywords["full"] = mo.group(1)
- if line.strip().startswith("git_date ="):
- mo = re.search(r'=\s*"(.*)"', line)
- if mo:
- keywords["date"] = mo.group(1)
- f.close()
- except EnvironmentError:
- pass
- return keywords
-
-
-@register_vcs_handler("git", "keywords")
-def git_versions_from_keywords(keywords, tag_prefix, verbose):
- """Get version information from git keywords."""
- if not keywords:
- raise NotThisMethod("no keywords at all, weird")
- date = keywords.get("date")
- if date is not None:
- # Use only the last line. Previous lines may contain GPG signature
- # information.
- date = date.splitlines()[-1]
-
- # git-2.2.0 added "%%cI", which expands to an ISO-8601 -compliant
- # datestamp. However we prefer "%%ci" (which expands to an "ISO-8601
- # -like" string, which we must then edit to make compliant), because
- # it's been around since git-1.5.3, and it's too difficult to
- # discover which version we're using, or to work around using an
- # older one.
- date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
- refnames = keywords["refnames"].strip()
- if refnames.startswith("$Format"):
- if verbose:
- print("keywords are unexpanded, not using")
- raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
- refs = set([r.strip() for r in refnames.strip("()").split(",")])
- # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
- # just "foo-1.0". If we see a "tag: " prefix, prefer those.
- TAG = "tag: "
- tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
- if not tags:
- # Either we're using git < 1.8.3, or there really are no tags. We use
- # a heuristic: assume all version tags have a digit. The old git %%d
- # expansion behaves like git log --decorate=short and strips out the
- # refs/heads/ and refs/tags/ prefixes that would let us distinguish
- # between branches and tags. By ignoring refnames without digits, we
- # filter out many common branch names like "release" and
- # "stabilization", as well as "HEAD" and "master".
- tags = set([r for r in refs if re.search(r'\d', r)])
- if verbose:
- print("discarding '%%s', no digits" %% ",".join(refs - tags))
- if verbose:
- print("likely tags: %%s" %% ",".join(sorted(tags)))
- for ref in sorted(tags):
- # sorting will prefer e.g. "2.0" over "2.0rc1"
- if ref.startswith(tag_prefix):
- r = ref[len(tag_prefix):]
- if verbose:
- print("picking %%s" %% r)
- return {"version": r,
- "full-revisionid": keywords["full"].strip(),
- "dirty": False, "error": None,
- "date": date}
- # no suitable tags, so version is "0+unknown", but full hex is still there
- if verbose:
- print("no suitable tags, using unknown + full revision id")
- return {"version": "0+unknown",
- "full-revisionid": keywords["full"].strip(),
- "dirty": False, "error": "no suitable tags", "date": None}
-
-
-@register_vcs_handler("git", "pieces_from_vcs")
-def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
- """Get version from 'git describe' in the root of the source tree.
-
- This only gets called if the git-archive 'subst' keywords were *not*
- expanded, and _version.py hasn't already been rewritten with a short
- version string, meaning we're inside a checked out source tree.
- """
- GITS = ["git"]
- if sys.platform == "win32":
- GITS = ["git.cmd", "git.exe"]
-
- out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root,
- hide_stderr=True)
- if rc != 0:
- if verbose:
- print("Directory %%s not under git control" %% root)
- raise NotThisMethod("'git rev-parse --git-dir' returned error")
-
- # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
- # if there isn't one, this yields HEX[-dirty] (no NUM)
- describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty",
- "--always", "--long",
- "--match", "%%s*" %% tag_prefix],
- cwd=root)
- # --long was added in git-1.5.5
- if describe_out is None:
- raise NotThisMethod("'git describe' failed")
- describe_out = describe_out.strip()
- full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
- if full_out is None:
- raise NotThisMethod("'git rev-parse' failed")
- full_out = full_out.strip()
-
- pieces = {}
- pieces["long"] = full_out
- pieces["short"] = full_out[:7] # maybe improved later
- pieces["error"] = None
-
- # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
- # TAG might have hyphens.
- git_describe = describe_out
-
- # look for -dirty suffix
- dirty = git_describe.endswith("-dirty")
- pieces["dirty"] = dirty
- if dirty:
- git_describe = git_describe[:git_describe.rindex("-dirty")]
-
- # now we have TAG-NUM-gHEX or HEX
-
- if "-" in git_describe:
- # TAG-NUM-gHEX
- mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
- if not mo:
- # unparsable. Maybe git-describe is misbehaving?
- pieces["error"] = ("unable to parse git-describe output: '%%s'"
- %% describe_out)
- return pieces
-
- # tag
- full_tag = mo.group(1)
- if not full_tag.startswith(tag_prefix):
- if verbose:
- fmt = "tag '%%s' doesn't start with prefix '%%s'"
- print(fmt %% (full_tag, tag_prefix))
- pieces["error"] = ("tag '%%s' doesn't start with prefix '%%s'"
- %% (full_tag, tag_prefix))
- return pieces
- pieces["closest-tag"] = full_tag[len(tag_prefix):]
-
- # distance: number of commits since tag
- pieces["distance"] = int(mo.group(2))
-
- # commit: short hex revision ID
- pieces["short"] = mo.group(3)
-
- else:
- # HEX: no tags
- pieces["closest-tag"] = None
- count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"],
- cwd=root)
- pieces["distance"] = int(count_out) # total number of commits
-
- # commit date: see ISO-8601 comment in git_versions_from_keywords()
- date = run_command(GITS, ["show", "-s", "--format=%%ci", "HEAD"],
- cwd=root)[0].strip()
- # Use only the last line. Previous lines may contain GPG signature
- # information.
- date = date.splitlines()[-1]
- pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
-
- return pieces
-
-
-def plus_or_dot(pieces):
- """Return a + if we don't already have one, else return a ."""
- if "+" in pieces.get("closest-tag", ""):
- return "."
- return "+"
-
-
-def render_pep440(pieces):
- """Build up version string, with post-release "local version identifier".
-
- Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
- get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
-
- Exceptions:
- 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- if pieces["distance"] or pieces["dirty"]:
- rendered += plus_or_dot(pieces)
- rendered += "%%d.g%%s" %% (pieces["distance"], pieces["short"])
- if pieces["dirty"]:
- rendered += ".dirty"
- else:
- # exception #1
- rendered = "0+untagged.%%d.g%%s" %% (pieces["distance"],
- pieces["short"])
- if pieces["dirty"]:
- rendered += ".dirty"
- return rendered
-
-
-def render_pep440_pre(pieces):
- """TAG[.post0.devDISTANCE] -- No -dirty.
-
- Exceptions:
- 1: no tags. 0.post0.devDISTANCE
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- if pieces["distance"]:
- rendered += ".post0.dev%%d" %% pieces["distance"]
- else:
- # exception #1
- rendered = "0.post0.dev%%d" %% pieces["distance"]
- return rendered
-
-
-def render_pep440_post(pieces):
- """TAG[.postDISTANCE[.dev0]+gHEX] .
-
- The ".dev0" means dirty. Note that .dev0 sorts backwards
- (a dirty tree will appear "older" than the corresponding clean one),
- but you shouldn't be releasing software with -dirty anyways.
-
- Exceptions:
- 1: no tags. 0.postDISTANCE[.dev0]
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- if pieces["distance"] or pieces["dirty"]:
- rendered += ".post%%d" %% pieces["distance"]
- if pieces["dirty"]:
- rendered += ".dev0"
- rendered += plus_or_dot(pieces)
- rendered += "g%%s" %% pieces["short"]
- else:
- # exception #1
- rendered = "0.post%%d" %% pieces["distance"]
- if pieces["dirty"]:
- rendered += ".dev0"
- rendered += "+g%%s" %% pieces["short"]
- return rendered
-
-
-def render_pep440_old(pieces):
- """TAG[.postDISTANCE[.dev0]] .
-
- The ".dev0" means dirty.
-
- Exceptions:
- 1: no tags. 0.postDISTANCE[.dev0]
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- if pieces["distance"] or pieces["dirty"]:
- rendered += ".post%%d" %% pieces["distance"]
- if pieces["dirty"]:
- rendered += ".dev0"
- else:
- # exception #1
- rendered = "0.post%%d" %% pieces["distance"]
- if pieces["dirty"]:
- rendered += ".dev0"
- return rendered
-
-
-def render_git_describe(pieces):
- """TAG[-DISTANCE-gHEX][-dirty].
-
- Like 'git describe --tags --dirty --always'.
-
- Exceptions:
- 1: no tags. HEX[-dirty] (note: no 'g' prefix)
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- if pieces["distance"]:
- rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
- else:
- # exception #1
- rendered = pieces["short"]
- if pieces["dirty"]:
- rendered += "-dirty"
- return rendered
-
-
-def render_git_describe_long(pieces):
- """TAG-DISTANCE-gHEX[-dirty].
-
-    Like 'git describe --tags --dirty --always --long'.
- The distance/hash is unconditional.
-
- Exceptions:
- 1: no tags. HEX[-dirty] (note: no 'g' prefix)
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
- else:
- # exception #1
- rendered = pieces["short"]
- if pieces["dirty"]:
- rendered += "-dirty"
- return rendered
-
-
-def render(pieces, style):
- """Render the given version pieces into the requested style."""
- if pieces["error"]:
- return {"version": "unknown",
- "full-revisionid": pieces.get("long"),
- "dirty": None,
- "error": pieces["error"],
- "date": None}
-
- if not style or style == "default":
- style = "pep440" # the default
-
- if style == "pep440":
- rendered = render_pep440(pieces)
- elif style == "pep440-pre":
- rendered = render_pep440_pre(pieces)
- elif style == "pep440-post":
- rendered = render_pep440_post(pieces)
- elif style == "pep440-old":
- rendered = render_pep440_old(pieces)
- elif style == "git-describe":
- rendered = render_git_describe(pieces)
- elif style == "git-describe-long":
- rendered = render_git_describe_long(pieces)
- else:
- raise ValueError("unknown style '%%s'" %% style)
-
- return {"version": rendered, "full-revisionid": pieces["long"],
- "dirty": pieces["dirty"], "error": None,
- "date": pieces.get("date")}
-
-
-def get_versions():
- """Get version information or return default if unable to do so."""
- # I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have
- # __file__, we can work backwards from there to the root. Some
- # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
- # case we can only use expanded keywords.
-
- cfg = get_config()
- verbose = cfg.verbose
-
- try:
- return git_versions_from_keywords(get_keywords(), cfg.tag_prefix,
- verbose)
- except NotThisMethod:
- pass
-
- try:
- root = os.path.realpath(__file__)
- # versionfile_source is the relative path from the top of the source
- # tree (where the .git directory might live) to this file. Invert
- # this to find the root from __file__.
- for i in cfg.versionfile_source.split('/'):
- root = os.path.dirname(root)
- except NameError:
- return {"version": "0+unknown", "full-revisionid": None,
- "dirty": None,
- "error": "unable to find root of source tree",
- "date": None}
-
- try:
- pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)
- return render(pieces, cfg.style)
- except NotThisMethod:
- pass
-
- try:
- if cfg.parentdir_prefix:
- return versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
- except NotThisMethod:
- pass
-
- return {"version": "0+unknown", "full-revisionid": None,
- "dirty": None,
- "error": "unable to compute version", "date": None}
-'''
-
-
-@register_vcs_handler("git", "get_keywords")
-def git_get_keywords(versionfile_abs):
- """Extract version information from the given file."""
- # the code embedded in _version.py can just fetch the value of these
- # keywords. When used from setup.py, we don't want to import _version.py,
- # so we do it with a regexp instead. This function is not used from
- # _version.py.
- keywords = {}
- try:
- f = open(versionfile_abs)
- for line in f.readlines():
- if line.strip().startswith("git_refnames ="):
- mo = re.search(r'=\s*"(.*)"', line)
- if mo:
- keywords["refnames"] = mo.group(1)
- if line.strip().startswith("git_full ="):
- mo = re.search(r'=\s*"(.*)"', line)
- if mo:
- keywords["full"] = mo.group(1)
- if line.strip().startswith("git_date ="):
- mo = re.search(r'=\s*"(.*)"', line)
- if mo:
- keywords["date"] = mo.group(1)
- f.close()
- except OSError:
- pass
- return keywords
-
-
-@register_vcs_handler("git", "keywords")
-def git_versions_from_keywords(keywords, tag_prefix, verbose):
- """Get version information from git keywords."""
- if not keywords:
- raise NotThisMethod("no keywords at all, weird")
- date = keywords.get("date")
- if date is not None:
- # Use only the last line. Previous lines may contain GPG signature
- # information.
- date = date.splitlines()[-1]
-
- # git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant
- # datestamp. However we prefer "%ci" (which expands to an "ISO-8601
- # -like" string, which we must then edit to make compliant), because
- # it's been around since git-1.5.3, and it's too difficult to
- # discover which version we're using, or to work around using an
- # older one.
- date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
- refnames = keywords["refnames"].strip()
- if refnames.startswith("$Format"):
- if verbose:
- print("keywords are unexpanded, not using")
- raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
- refs = {r.strip() for r in refnames.strip("()").split(",")}
- # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
- # just "foo-1.0". If we see a "tag: " prefix, prefer those.
- TAG = "tag: "
- tags = {r[len(TAG) :] for r in refs if r.startswith(TAG)}
- if not tags:
- # Either we're using git < 1.8.3, or there really are no tags. We use
- # a heuristic: assume all version tags have a digit. The old git %d
- # expansion behaves like git log --decorate=short and strips out the
- # refs/heads/ and refs/tags/ prefixes that would let us distinguish
- # between branches and tags. By ignoring refnames without digits, we
- # filter out many common branch names like "release" and
- # "stabilization", as well as "HEAD" and "master".
- tags = {r for r in refs if re.search(r"\d", r)}
- if verbose:
- print("discarding '%s', no digits" % ",".join(refs - tags))
- if verbose:
- print("likely tags: %s" % ",".join(sorted(tags)))
- for ref in sorted(tags):
- # sorting will prefer e.g. "2.0" over "2.0rc1"
- if ref.startswith(tag_prefix):
- r = ref[len(tag_prefix) :]
- if verbose:
- print("picking %s" % r)
- return {
- "version": r,
- "full-revisionid": keywords["full"].strip(),
- "dirty": False,
- "error": None,
- "date": date,
- }
- # no suitable tags, so version is "0+unknown", but full hex is still there
- if verbose:
- print("no suitable tags, using unknown + full revision id")
- return {
- "version": "0+unknown",
- "full-revisionid": keywords["full"].strip(),
- "dirty": False,
- "error": "no suitable tags",
- "date": None,
- }
-
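The refnames handling above can be sketched in isolation. The sample `$Format:%d$` expansion below is hypothetical, but the string processing mirrors the function: split the decoration, prefer `tag: `-prefixed entries, and let plain string sorting prefer "2.0" over "2.0rc1".

```python
import re

# Hypothetical expanded $Format:%d$ value, as git-archive would substitute it.
raw = " (HEAD -> main, tag: v2.0, tag: v2.0rc1, origin/main)"

refnames = raw.strip()
refs = {r.strip() for r in refnames.strip("()").split(",")}

# git >= 1.8.3 marks tags with a "tag: " prefix; prefer those.
TAG = "tag: "
tags = {r[len(TAG):] for r in refs if r.startswith(TAG)}
if not tags:
    # heuristic fallback: version tags usually contain a digit
    tags = {r for r in refs if re.search(r"\d", r)}

# plain string sort prefers "v2.0" over "v2.0rc1"; strip the tag prefix
tag_prefix = "v"
picked = next(r[len(tag_prefix):] for r in sorted(tags) if r.startswith(tag_prefix))
print(picked)  # 2.0
```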
-
-@register_vcs_handler("git", "pieces_from_vcs")
-def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
- """Get version from 'git describe' in the root of the source tree.
-
- This only gets called if the git-archive 'subst' keywords were *not*
- expanded, and _version.py hasn't already been rewritten with a short
- version string, meaning we're inside a checked out source tree.
- """
- GITS = ["git"]
- if sys.platform == "win32":
- GITS = ["git.cmd", "git.exe"]
-
- out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=True)
- if rc != 0:
- if verbose:
- print("Directory %s not under git control" % root)
- raise NotThisMethod("'git rev-parse --git-dir' returned error")
-
- # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
- # if there isn't one, this yields HEX[-dirty] (no NUM)
- describe_out, rc = run_command(
- GITS,
- [
- "describe",
- "--tags",
- "--dirty",
- "--always",
- "--long",
- "--match",
- "%s*" % tag_prefix,
- ],
- cwd=root,
- )
- # --long was added in git-1.5.5
- if describe_out is None:
- raise NotThisMethod("'git describe' failed")
- describe_out = describe_out.strip()
- full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
- if full_out is None:
- raise NotThisMethod("'git rev-parse' failed")
- full_out = full_out.strip()
-
- pieces = {}
- pieces["long"] = full_out
- pieces["short"] = full_out[:7] # maybe improved later
- pieces["error"] = None
-
- # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
- # TAG might have hyphens.
- git_describe = describe_out
-
- # look for -dirty suffix
- dirty = git_describe.endswith("-dirty")
- pieces["dirty"] = dirty
- if dirty:
- git_describe = git_describe[: git_describe.rindex("-dirty")]
-
- # now we have TAG-NUM-gHEX or HEX
-
- if "-" in git_describe:
- # TAG-NUM-gHEX
- mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", git_describe)
- if not mo:
- # unparsable. Maybe git-describe is misbehaving?
- pieces["error"] = "unable to parse git-describe output: '%s'" % describe_out
- return pieces
-
- # tag
- full_tag = mo.group(1)
- if not full_tag.startswith(tag_prefix):
- if verbose:
- fmt = "tag '%s' doesn't start with prefix '%s'"
- print(fmt % (full_tag, tag_prefix))
- pieces["error"] = "tag '{}' doesn't start with prefix '{}'".format(
- full_tag,
- tag_prefix,
- )
- return pieces
- pieces["closest-tag"] = full_tag[len(tag_prefix) :]
-
- # distance: number of commits since tag
- pieces["distance"] = int(mo.group(2))
-
- # commit: short hex revision ID
- pieces["short"] = mo.group(3)
-
- else:
- # HEX: no tags
- pieces["closest-tag"] = None
- count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root)
- pieces["distance"] = int(count_out) # total number of commits
-
- # commit date: see ISO-8601 comment in git_versions_from_keywords()
- date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"], cwd=root)[
- 0
- ].strip()
- # Use only the last line. Previous lines may contain GPG signature
- # information.
- date = date.splitlines()[-1]
- pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
-
- return pieces
-
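As a standalone sketch of the `git describe` parsing above (the sample output string is made up; the `-dirty` stripping and the regex follow the function):

```python
import re

describe_out = "v1.2.3-4-gabc1234-dirty"  # hypothetical `git describe` output
tag_prefix = "v"

pieces = {}
pieces["dirty"] = describe_out.endswith("-dirty")
if pieces["dirty"]:
    describe_out = describe_out[: describe_out.rindex("-dirty")]

# TAG-NUM-gHEX (the tag itself may contain hyphens, hence the greedy group)
mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", describe_out)
pieces["closest-tag"] = mo.group(1)[len(tag_prefix):]
pieces["distance"] = int(mo.group(2))
pieces["short"] = mo.group(3)
print(pieces)
```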
-
-def do_vcs_install(manifest_in, versionfile_source, ipy):
- """Git-specific installation logic for Versioneer.
-
- For Git, this means creating/changing .gitattributes to mark _version.py
- for export-subst keyword substitution.
- """
- GITS = ["git"]
- if sys.platform == "win32":
- GITS = ["git.cmd", "git.exe"]
- files = [manifest_in, versionfile_source]
- if ipy:
- files.append(ipy)
- try:
- me = __file__
- if me.endswith(".pyc") or me.endswith(".pyo"):
- me = os.path.splitext(me)[0] + ".py"
- versioneer_file = os.path.relpath(me)
- except NameError:
- versioneer_file = "versioneer.py"
- files.append(versioneer_file)
- present = False
- try:
- f = open(".gitattributes")
- for line in f.readlines():
- if line.strip().startswith(versionfile_source):
- if "export-subst" in line.strip().split()[1:]:
- present = True
- f.close()
- except OSError:
- pass
- if not present:
- f = open(".gitattributes", "a+")
- f.write("%s export-subst\n" % versionfile_source)
- f.close()
- files.append(".gitattributes")
- run_command(GITS, ["add", "--"] + files)
-
-
-def versions_from_parentdir(parentdir_prefix, root, verbose):
- """Try to determine the version from the parent directory name.
-
- Source tarballs conventionally unpack into a directory that includes both
- the project name and a version string. We will also support searching up
- two directory levels for an appropriately named parent directory
- """
- rootdirs = []
-
- for i in range(3):
- dirname = os.path.basename(root)
- if dirname.startswith(parentdir_prefix):
- return {
- "version": dirname[len(parentdir_prefix) :],
- "full-revisionid": None,
- "dirty": False,
- "error": None,
- "date": None,
- }
- else:
- rootdirs.append(root)
- root = os.path.dirname(root) # up a level
-
- if verbose:
- print(
- "Tried directories %s but none started with prefix %s"
- % (str(rootdirs), parentdir_prefix)
- )
- raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
-
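The parent-directory fallback above amounts to checking the current directory name and walking up at most two levels; a minimal sketch (the paths are illustrative):

```python
import os

def version_from_parentdir(parentdir_prefix, root):
    # check root, its parent, and its grandparent for the prefix
    for _ in range(3):
        dirname = os.path.basename(root)
        if dirname.startswith(parentdir_prefix):
            return dirname[len(parentdir_prefix):]
        root = os.path.dirname(root)  # up a level
    return None

print(version_from_parentdir("pkg-", "/tmp/unpack/pkg-1.2.3/src"))  # 1.2.3
```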
-
-SHORT_VERSION_PY = """
-# This file was generated by 'versioneer.py' (0.19) from
-# revision-control system data, or from the parent directory name of an
-# unpacked source archive. Distribution tarballs contain a pre-generated copy
-# of this file.
-
-import json
-
-version_json = '''
-%s
-''' # END VERSION_JSON
-
-
-def get_versions():
- return json.loads(version_json)
-"""
-
-
-def versions_from_file(filename):
- """Try to determine the version from _version.py if present."""
- try:
- with open(filename) as f:
- contents = f.read()
- except OSError:
- raise NotThisMethod("unable to read _version.py")
- mo = re.search(
- r"version_json = '''\n(.*)''' # END VERSION_JSON", contents, re.M | re.S
- )
- if not mo:
- mo = re.search(
- r"version_json = '''\r\n(.*)''' # END VERSION_JSON", contents, re.M | re.S
- )
- if not mo:
- raise NotThisMethod("no version_json in _version.py")
- return json.loads(mo.group(1))
-
-
-def write_to_version_file(filename, versions):
- """Write the given version number to the given _version.py file."""
- os.unlink(filename)
- contents = json.dumps(versions, sort_keys=True, indent=1, separators=(",", ": "))
- with open(filename, "w") as f:
- f.write(SHORT_VERSION_PY % contents)
-
- print("set {} to '{}'".format(filename, versions["version"]))
-
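The reader and writer above are inverses; a round-trip sketch using the same template shape and regex (a trimmed stand-in for SHORT_VERSION_PY):

```python
import json
import re

# Minimal stand-in for the SHORT_VERSION_PY template above.
TEMPLATE = "version_json = '''\n%s\n''' # END VERSION_JSON\n"

versions = {"version": "1.2.3", "dirty": False}
contents = TEMPLATE % json.dumps(versions, sort_keys=True, indent=1)

# Recover the JSON payload with the same regex versions_from_file uses.
mo = re.search(r"version_json = '''\n(.*)''' # END VERSION_JSON",
               contents, re.M | re.S)
recovered = json.loads(mo.group(1))
print(recovered == versions)  # True
```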
-
-def plus_or_dot(pieces):
- """Return a + if we don't already have one, else return a ."""
- if "+" in pieces.get("closest-tag", ""):
- return "."
- return "+"
-
-
-def render_pep440(pieces):
- """Build up version string, with post-release "local version identifier".
-
- Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
- get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
-
- Exceptions:
- 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- if pieces["distance"] or pieces["dirty"]:
- rendered += plus_or_dot(pieces)
- rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
- if pieces["dirty"]:
- rendered += ".dirty"
- else:
- # exception #1
- rendered = "0+untagged.%d.g%s" % (pieces["distance"], pieces["short"])
- if pieces["dirty"]:
- rendered += ".dirty"
- return rendered
-
-
-def render_pep440_pre(pieces):
- """TAG[.post0.devDISTANCE] -- No -dirty.
-
- Exceptions:
- 1: no tags. 0.post0.devDISTANCE
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- if pieces["distance"]:
- rendered += ".post0.dev%d" % pieces["distance"]
- else:
- # exception #1
- rendered = "0.post0.dev%d" % pieces["distance"]
- return rendered
-
-
-def render_pep440_post(pieces):
- """TAG[.postDISTANCE[.dev0]+gHEX] .
-
- The ".dev0" means dirty. Note that .dev0 sorts backwards
- (a dirty tree will appear "older" than the corresponding clean one),
- but you shouldn't be releasing software with -dirty anyways.
-
- Exceptions:
- 1: no tags. 0.postDISTANCE[.dev0]
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- if pieces["distance"] or pieces["dirty"]:
- rendered += ".post%d" % pieces["distance"]
- if pieces["dirty"]:
- rendered += ".dev0"
- rendered += plus_or_dot(pieces)
- rendered += "g%s" % pieces["short"]
- else:
- # exception #1
- rendered = "0.post%d" % pieces["distance"]
- if pieces["dirty"]:
- rendered += ".dev0"
- rendered += "+g%s" % pieces["short"]
- return rendered
-
-
-def render_pep440_old(pieces):
- """TAG[.postDISTANCE[.dev0]] .
-
- The ".dev0" means dirty.
-
- Exceptions:
- 1: no tags. 0.postDISTANCE[.dev0]
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- if pieces["distance"] or pieces["dirty"]:
- rendered += ".post%d" % pieces["distance"]
- if pieces["dirty"]:
- rendered += ".dev0"
- else:
- # exception #1
- rendered = "0.post%d" % pieces["distance"]
- if pieces["dirty"]:
- rendered += ".dev0"
- return rendered
-
-
-def render_git_describe(pieces):
- """TAG[-DISTANCE-gHEX][-dirty].
-
- Like 'git describe --tags --dirty --always'.
-
- Exceptions:
- 1: no tags. HEX[-dirty] (note: no 'g' prefix)
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- if pieces["distance"]:
- rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
- else:
- # exception #1
- rendered = pieces["short"]
- if pieces["dirty"]:
- rendered += "-dirty"
- return rendered
-
-
-def render_git_describe_long(pieces):
- """TAG-DISTANCE-gHEX[-dirty].
-
- Like 'git describe --tags --dirty --always -long'.
- The distance/hash is unconditional.
-
- Exceptions:
- 1: no tags. HEX[-dirty] (note: no 'g' prefix)
- """
- if pieces["closest-tag"]:
- rendered = pieces["closest-tag"]
- rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
- else:
- # exception #1
- rendered = pieces["short"]
- if pieces["dirty"]:
- rendered += "-dirty"
- return rendered
-
-
-def render(pieces, style):
- """Render the given version pieces into the requested style."""
- if pieces["error"]:
- return {
- "version": "unknown",
- "full-revisionid": pieces.get("long"),
- "dirty": None,
- "error": pieces["error"],
- "date": None,
- }
-
- if not style or style == "default":
- style = "pep440" # the default
-
- if style == "pep440":
- rendered = render_pep440(pieces)
- elif style == "pep440-pre":
- rendered = render_pep440_pre(pieces)
- elif style == "pep440-post":
- rendered = render_pep440_post(pieces)
- elif style == "pep440-old":
- rendered = render_pep440_old(pieces)
- elif style == "git-describe":
- rendered = render_git_describe(pieces)
- elif style == "git-describe-long":
- rendered = render_git_describe_long(pieces)
- else:
- raise ValueError("unknown style '%s'" % style)
-
- return {
- "version": rendered,
- "full-revisionid": pieces["long"],
- "dirty": pieces["dirty"],
- "error": None,
- "date": pieces.get("date"),
- }
-
-
-class VersioneerBadRootError(Exception):
- """The project root directory is unknown or missing key files."""
-
-
-def get_versions(verbose=False):
- """Get the project version from whatever source is available.
-
- Returns dict with two keys: 'version' and 'full'.
- """
- if "versioneer" in sys.modules:
- # see the discussion in cmdclass.py:get_cmdclass()
- del sys.modules["versioneer"]
-
- root = get_root()
- cfg = get_config_from_root(root)
-
- assert cfg.VCS is not None, "please set [versioneer]VCS= in setup.cfg"
- handlers = HANDLERS.get(cfg.VCS)
- assert handlers, "unrecognized VCS '%s'" % cfg.VCS
- verbose = verbose or cfg.verbose
- assert (
- cfg.versionfile_source is not None
- ), "please set versioneer.versionfile_source"
- assert cfg.tag_prefix is not None, "please set versioneer.tag_prefix"
-
- versionfile_abs = os.path.join(root, cfg.versionfile_source)
-
- # extract version from first of: _version.py, VCS command (e.g. 'git
- # describe'), parentdir. This is meant to work for developers using a
- # source checkout, for users of a tarball created by 'setup.py sdist',
- # and for users of a tarball/zipball created by 'git archive' or GitHub's
- # download-from-tag feature or the equivalent in other VCSes.
-
- get_keywords_f = handlers.get("get_keywords")
- from_keywords_f = handlers.get("keywords")
- if get_keywords_f and from_keywords_f:
- try:
- keywords = get_keywords_f(versionfile_abs)
- ver = from_keywords_f(keywords, cfg.tag_prefix, verbose)
- if verbose:
- print("got version from expanded keyword %s" % ver)
- return ver
- except NotThisMethod:
- pass
-
- try:
- ver = versions_from_file(versionfile_abs)
- if verbose:
- print(f"got version from file {versionfile_abs} {ver}")
- return ver
- except NotThisMethod:
- pass
-
- from_vcs_f = handlers.get("pieces_from_vcs")
- if from_vcs_f:
- try:
- pieces = from_vcs_f(cfg.tag_prefix, root, verbose)
- ver = render(pieces, cfg.style)
- if verbose:
- print("got version from VCS %s" % ver)
- return ver
- except NotThisMethod:
- pass
-
- try:
- if cfg.parentdir_prefix:
- ver = versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
- if verbose:
- print("got version from parentdir %s" % ver)
- return ver
- except NotThisMethod:
- pass
-
- if verbose:
- print("unable to compute version")
-
- return {
- "version": "0+unknown",
- "full-revisionid": None,
- "dirty": None,
- "error": "unable to compute version",
- "date": None,
- }
-
-
-def get_version():
- """Get the short version string for this project."""
- return get_versions()["version"]
-
-
-def get_cmdclass(cmdclass=None):
- """Get the custom setuptools/distutils subclasses used by Versioneer.
-
- If the package uses a different cmdclass (e.g. one from numpy), it
- should be provide as an argument.
- """
- if "versioneer" in sys.modules:
- del sys.modules["versioneer"]
- # this fixes the "python setup.py develop" case (also 'install' and
- # 'easy_install .'), in which subdependencies of the main project are
- # built (using setup.py bdist_egg) in the same python process. Assume
- # a main project A and a dependency B, which use different versions
- # of Versioneer. A's setup.py imports A's Versioneer, leaving it in
- # sys.modules by the time B's setup.py is executed, causing B to run
- # with the wrong versioneer. Setuptools wraps the sub-dep builds in a
- # sandbox that restores sys.modules to its pre-build state, so the
- # parent is protected against the child's "import versioneer". By
- # removing ourselves from sys.modules here, before the child build
- # happens, we protect the child from the parent's versioneer too.
- # Also see https://github.com/python-versioneer/python-versioneer/issues/52
-
- cmds = {} if cmdclass is None else cmdclass.copy()
-
- # we add "version" to both distutils and setuptools
- from distutils.core import Command
-
- class cmd_version(Command):
- description = "report generated version string"
- user_options = []
- boolean_options = []
-
- def initialize_options(self):
- pass
-
- def finalize_options(self):
- pass
-
- def run(self):
- vers = get_versions(verbose=True)
- print("Version: %s" % vers["version"])
- print(" full-revisionid: %s" % vers.get("full-revisionid"))
- print(" dirty: %s" % vers.get("dirty"))
- print(" date: %s" % vers.get("date"))
- if vers["error"]:
- print(" error: %s" % vers["error"])
-
- cmds["version"] = cmd_version
-
- # we override "build_py" in both distutils and setuptools
- #
- # most invocation pathways end up running build_py:
- # distutils/build -> build_py
- # distutils/install -> distutils/build ->..
- # setuptools/bdist_wheel -> distutils/install ->..
- # setuptools/bdist_egg -> distutils/install_lib -> build_py
- # setuptools/install -> bdist_egg ->..
- # setuptools/develop -> ?
- # pip install:
- # copies source tree to a tempdir before running egg_info/etc
- # if .git isn't copied too, 'git describe' will fail
- # then does setup.py bdist_wheel, or sometimes setup.py install
- # setup.py egg_info -> ?
-
- # we override different "build_py" commands for both environments
- if "build_py" in cmds:
- _build_py = cmds["build_py"]
- elif "setuptools" in sys.modules:
- from setuptools.command.build_py import build_py as _build_py
- else:
- from distutils.command.build_py import build_py as _build_py
-
- class cmd_build_py(_build_py):
- def run(self):
- root = get_root()
- cfg = get_config_from_root(root)
- versions = get_versions()
- _build_py.run(self)
- # now locate _version.py in the new build/ directory and replace
- # it with an updated value
- if cfg.versionfile_build:
- target_versionfile = os.path.join(self.build_lib, cfg.versionfile_build)
- print("UPDATING %s" % target_versionfile)
- write_to_version_file(target_versionfile, versions)
-
- cmds["build_py"] = cmd_build_py
-
- if "setuptools" in sys.modules:
- from setuptools.command.build_ext import build_ext as _build_ext
- else:
- from distutils.command.build_ext import build_ext as _build_ext
-
- class cmd_build_ext(_build_ext):
- def run(self):
- root = get_root()
- cfg = get_config_from_root(root)
- versions = get_versions()
- _build_ext.run(self)
- if self.inplace:
- # build_ext --inplace will only build extensions in
- # build/lib<..> dir with no _version.py to write to.
- # As in place builds will already have a _version.py
- # in the module dir, we do not need to write one.
- return
- # now locate _version.py in the new build/ directory and replace
- # it with an updated value
- target_versionfile = os.path.join(self.build_lib, cfg.versionfile_source)
- print("UPDATING %s" % target_versionfile)
- write_to_version_file(target_versionfile, versions)
-
- cmds["build_ext"] = cmd_build_ext
-
- if "cx_Freeze" in sys.modules: # cx_freeze enabled?
- from cx_Freeze.dist import build_exe as _build_exe
-
- # nczeczulin reports that py2exe won't like the pep440-style string
- # as FILEVERSION, but it can be used for PRODUCTVERSION, e.g.
- # setup(console=[{
- # "version": versioneer.get_version().split("+", 1)[0], # FILEVERSION
- # "product_version": versioneer.get_version(),
- # ...
-
- class cmd_build_exe(_build_exe):
- def run(self):
- root = get_root()
- cfg = get_config_from_root(root)
- versions = get_versions()
- target_versionfile = cfg.versionfile_source
- print("UPDATING %s" % target_versionfile)
- write_to_version_file(target_versionfile, versions)
-
- _build_exe.run(self)
- os.unlink(target_versionfile)
- with open(cfg.versionfile_source, "w") as f:
- LONG = LONG_VERSION_PY[cfg.VCS]
- f.write(
- LONG
- % {
- "DOLLAR": "$",
- "STYLE": cfg.style,
- "TAG_PREFIX": cfg.tag_prefix,
- "PARENTDIR_PREFIX": cfg.parentdir_prefix,
- "VERSIONFILE_SOURCE": cfg.versionfile_source,
- }
- )
-
- cmds["build_exe"] = cmd_build_exe
- del cmds["build_py"]
-
- if "py2exe" in sys.modules: # py2exe enabled?
- from py2exe.distutils_buildexe import py2exe as _py2exe
-
- class cmd_py2exe(_py2exe):
- def run(self):
- root = get_root()
- cfg = get_config_from_root(root)
- versions = get_versions()
- target_versionfile = cfg.versionfile_source
- print("UPDATING %s" % target_versionfile)
- write_to_version_file(target_versionfile, versions)
-
- _py2exe.run(self)
- os.unlink(target_versionfile)
- with open(cfg.versionfile_source, "w") as f:
- LONG = LONG_VERSION_PY[cfg.VCS]
- f.write(
- LONG
- % {
- "DOLLAR": "$",
- "STYLE": cfg.style,
- "TAG_PREFIX": cfg.tag_prefix,
- "PARENTDIR_PREFIX": cfg.parentdir_prefix,
- "VERSIONFILE_SOURCE": cfg.versionfile_source,
- }
- )
-
- cmds["py2exe"] = cmd_py2exe
-
- # we override different "sdist" commands for both environments
- if "sdist" in cmds:
- _sdist = cmds["sdist"]
- elif "setuptools" in sys.modules:
- from setuptools.command.sdist import sdist as _sdist
- else:
- from distutils.command.sdist import sdist as _sdist
-
- class cmd_sdist(_sdist):
- def run(self):
- versions = get_versions()
- self._versioneer_generated_versions = versions
- # unless we update this, the command will keep using the old
- # version
- self.distribution.metadata.version = versions["version"]
- return _sdist.run(self)
-
- def make_release_tree(self, base_dir, files):
- root = get_root()
- cfg = get_config_from_root(root)
- _sdist.make_release_tree(self, base_dir, files)
- # now locate _version.py in the new base_dir directory
- # (remembering that it may be a hardlink) and replace it with an
- # updated value
- target_versionfile = os.path.join(base_dir, cfg.versionfile_source)
- print("UPDATING %s" % target_versionfile)
- write_to_version_file(
- target_versionfile, self._versioneer_generated_versions
- )
-
- cmds["sdist"] = cmd_sdist
-
- return cmds
-
-
-CONFIG_ERROR = """
-setup.cfg is missing the necessary Versioneer configuration. You need
-a section like:
-
- [versioneer]
- VCS = git
- style = pep440
- versionfile_source = src/myproject/_version.py
- versionfile_build = myproject/_version.py
- tag_prefix =
- parentdir_prefix = myproject-
-
-You will also need to edit your setup.py to use the results:
-
- import versioneer
- setup(version=versioneer.get_version(),
- cmdclass=versioneer.get_cmdclass(), ...)
-
-Please read the docstring in ./versioneer.py for configuration instructions,
-edit setup.cfg, and re-run the installer or 'python versioneer.py setup'.
-"""
-
-SAMPLE_CONFIG = """
-# See the docstring in versioneer.py for instructions. Note that you must
-# re-run 'versioneer.py setup' after changing this section, and commit the
-# resulting files.
-
-[versioneer]
-#VCS = git
-#style = pep440
-#versionfile_source =
-#versionfile_build =
-#tag_prefix =
-#parentdir_prefix =
-
-"""
-
-INIT_PY_SNIPPET = """
-from pandas._version import get_versions
-__version__ = get_versions()['version']
-del get_versions
-"""
-
-
-def do_setup():
- """Do main VCS-independent setup function for installing Versioneer."""
- root = get_root()
- try:
- cfg = get_config_from_root(root)
- except (OSError, configparser.NoSectionError, configparser.NoOptionError) as e:
- if isinstance(e, (EnvironmentError, configparser.NoSectionError)):
- print("Adding sample versioneer config to setup.cfg", file=sys.stderr)
- with open(os.path.join(root, "setup.cfg"), "a") as f:
- f.write(SAMPLE_CONFIG)
- print(CONFIG_ERROR, file=sys.stderr)
- return 1
-
- print(" creating %s" % cfg.versionfile_source)
- with open(cfg.versionfile_source, "w") as f:
- LONG = LONG_VERSION_PY[cfg.VCS]
- f.write(
- LONG
- % {
- "DOLLAR": "$",
- "STYLE": cfg.style,
- "TAG_PREFIX": cfg.tag_prefix,
- "PARENTDIR_PREFIX": cfg.parentdir_prefix,
- "VERSIONFILE_SOURCE": cfg.versionfile_source,
- }
- )
-
- ipy = os.path.join(os.path.dirname(cfg.versionfile_source), "__init__.py")
- if os.path.exists(ipy):
- try:
- with open(ipy) as f:
- old = f.read()
- except OSError:
- old = ""
- if INIT_PY_SNIPPET not in old:
- print(" appending to %s" % ipy)
- with open(ipy, "a") as f:
- f.write(INIT_PY_SNIPPET)
- else:
- print(" %s unmodified" % ipy)
- else:
- print(" %s doesn't exist, ok" % ipy)
- ipy = None
-
- # Make sure both the top-level "versioneer.py" and versionfile_source
- # (PKG/_version.py, used by runtime code) are in MANIFEST.in, so
- # they'll be copied into source distributions. Pip won't be able to
- # install the package without this.
- manifest_in = os.path.join(root, "MANIFEST.in")
- simple_includes = set()
- try:
- with open(manifest_in) as f:
- for line in f:
- if line.startswith("include "):
- for include in line.split()[1:]:
- simple_includes.add(include)
- except OSError:
- pass
- # That doesn't cover everything MANIFEST.in can do
- # (http://docs.python.org/2/distutils/sourcedist.html#commands), so
- # it might give some false negatives. Appending redundant 'include'
- # lines is safe, though.
- if "versioneer.py" not in simple_includes:
- print(" appending 'versioneer.py' to MANIFEST.in")
- with open(manifest_in, "a") as f:
- f.write("include versioneer.py\n")
- else:
- print(" 'versioneer.py' already in MANIFEST.in")
- if cfg.versionfile_source not in simple_includes:
- print(
- " appending versionfile_source ('%s') to MANIFEST.in"
- % cfg.versionfile_source
- )
- with open(manifest_in, "a") as f:
- f.write("include %s\n" % cfg.versionfile_source)
- else:
- print(" versionfile_source already in MANIFEST.in")
-
- # Make VCS-specific changes. For git, this means creating/changing
- # .gitattributes to mark _version.py for export-subst keyword
- # substitution.
- do_vcs_install(manifest_in, cfg.versionfile_source, ipy)
- return 0
-
-
-def scan_setup_py():
- """Validate the contents of setup.py against Versioneer's expectations."""
- found = set()
- setters = False
- errors = 0
- with open("setup.py") as f:
- for line in f.readlines():
- if "import versioneer" in line:
- found.add("import")
- if "versioneer.get_cmdclass()" in line:
- found.add("cmdclass")
- if "versioneer.get_version()" in line:
- found.add("get_version")
- if "versioneer.VCS" in line:
- setters = True
- if "versioneer.versionfile_source" in line:
- setters = True
- if len(found) != 3:
- print("")
- print("Your setup.py appears to be missing some important items")
- print("(but I might be wrong). Please make sure it has something")
- print("roughly like the following:")
- print("")
- print(" import versioneer")
- print(" setup( version=versioneer.get_version(),")
- print(" cmdclass=versioneer.get_cmdclass(), ...)")
- print("")
- errors += 1
- if setters:
- print("You should remove lines like 'versioneer.VCS = ' and")
- print("'versioneer.versionfile_source = ' . This configuration")
- print("now lives in setup.cfg, and should be removed from setup.py")
- print("")
- errors += 1
- return errors
-
-
-if __name__ == "__main__":
- cmd = sys.argv[1]
- if cmd == "setup":
- errors = do_setup()
- errors += scan_setup_py()
- if errors:
- sys.exit(1)
| This PR:
1. move versioneer config to pyproject.toml
2. use the non-vendored version of versioneer
After this, we could enable PEP 517. | https://api.github.com/repos/pandas-dev/pandas/pulls/49924 | 2022-11-27T01:11:21Z | 2022-11-28T20:14:11Z | 2022-11-28T20:14:11Z | 2023-04-03T23:20:05Z |
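The vendored versioneer code deleted in this PR's diff builds PEP 440 "local version identifier" strings of the form `TAG[+DISTANCE.gHEX[.dirty]]`. As a minimal sketch of that rendering scheme (using the same `pieces` keys as the deleted `render_pep440`, but simplified; not the library's exact code):

```python
# Sketch of the PEP 440 rendering scheme from the removed vendored
# versioneer: TAG[+DISTANCE.gHEX[.dirty]]. The `pieces` keys mirror
# the ones used in the diff above.
def render_pep440(pieces: dict) -> str:
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        if pieces["distance"] or pieces["dirty"]:
            # add '+' unless the tag already carries a local segment
            rendered += "." if "+" in rendered else "+"
            rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
            if pieces["dirty"]:
                rendered += ".dirty"
    else:
        # no tags reachable: 0+untagged.DISTANCE.gHEX[.dirty]
        rendered = "0+untagged.%d.g%s" % (pieces["distance"], pieces["short"])
        if pieces["dirty"]:
            rendered += ".dirty"
    return rendered

print(render_pep440(
    {"closest-tag": "1.5.0", "distance": 3, "short": "abc1234", "dirty": True}
))  # → 1.5.0+3.gabc1234.dirty
```

A clean build exactly on a tag renders as just the tag, which is why released wheels carry plain version strings.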
CLN: move codespell config to pyproject.toml | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 6b74dd057e865..cc6875589c691 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -26,6 +26,7 @@ repos:
hooks:
- id: codespell
types_or: [python, rst, markdown]
+ additional_dependencies: [tomli]
- repo: https://github.com/MarcoGorelli/cython-lint
rev: v0.2.1
hooks:
diff --git a/pyproject.toml b/pyproject.toml
index d3065ae7b129a..6ce05ce5d679e 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -334,3 +334,7 @@ exclude_lines = [
[tool.coverage.html]
directory = "coverage_html_report"
+
+[tool.codespell]
+ignore-words-list = "blocs, coo, hist, nd, sav, ser, recuse"
+ignore-regex = 'https://([\w/\.])+'
diff --git a/setup.cfg b/setup.cfg
index dbd7cce1874c8..6de5bf2173a70 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -283,7 +283,3 @@ exclude =
# work around issue of undefined variable warnings
# https://github.com/pandas-dev/pandas/pull/38837#issuecomment-752884156
doc/source/getting_started/comparison/includes/*.rst
-
-[codespell]
-ignore-words-list = blocs,coo,hist,nd,sav,ser,recuse
-ignore-regex = https://([\w/\.])+
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/49923 | 2022-11-27T00:37:03Z | 2022-11-27T08:45:49Z | 2022-11-27T08:45:49Z | 2022-11-27T08:45:50Z |
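The `ignore-regex` value moved into `[tool.codespell]` above tells the spell checker to skip URL-like tokens. A quick sketch of what that pattern matches (the regex itself is copied from the diff; the sample strings are made up):

```python
import re

# Pattern copied from the [tool.codespell] ignore-regex above: it
# matches an https:// URL made of word characters, slashes, and dots,
# so codespell does not spell-check such tokens.
URL_RE = re.compile(r"https://([\w/\.])+")

print(bool(URL_RE.search("see https://pandas.pydata.org/docs for details")))  # → True
print(bool(URL_RE.search("no url here")))                                     # → False
```

Note the companion pre-commit change: codespell needs `tomli` as an additional dependency to read its configuration out of `pyproject.toml` on older Pythons.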
API: ensure read_json closes file handle | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index f1ee96ddc3c16..dd46cd533bb80 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -349,6 +349,7 @@ Other API changes
- :func:`read_stata` with parameter ``index_col`` set to ``None`` (the default) will now set the index on the returned :class:`DataFrame` to a :class:`RangeIndex` instead of a :class:`Int64Index` (:issue:`49745`)
- Changed behavior of :class:`Index` constructor with an object-dtype ``numpy.ndarray`` containing all-``bool`` values or all-complex values, this will now retain object dtype, consistent with the :class:`Series` behavior (:issue:`49594`)
- Changed behavior of :meth:`DataFrame.shift` with ``axis=1``, an integer ``fill_value``, and homogeneous datetime-like dtype, this now fills new columns with integer dtypes instead of casting to datetimelike (:issue:`49842`)
+- Files are now closed when encountering an exception in :func:`read_json` (:issue:`49921`)
- :meth:`DataFrame.values`, :meth:`DataFrame.to_numpy`, :meth:`DataFrame.xs`, :meth:`DataFrame.reindex`, :meth:`DataFrame.fillna`, and :meth:`DataFrame.replace` no longer silently consolidate the underlying arrays; do ``df = df.copy()`` to ensure consolidation (:issue:`49356`)
-
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 3020731b77a3c..5f02822b68d6d 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -750,8 +750,7 @@ def read_json(
if chunksize:
return json_reader
-
- with json_reader:
+ else:
return json_reader.read()
@@ -896,20 +895,20 @@ def read(self) -> DataFrame | Series:
Read the whole JSON input into a pandas object.
"""
obj: DataFrame | Series
- if self.lines:
- if self.chunksize:
- obj = concat(self)
- elif self.nrows:
- lines = list(islice(self.data, self.nrows))
- lines_json = self._combine_lines(lines)
- obj = self._get_object_parser(lines_json)
+ with self:
+ if self.lines:
+ if self.chunksize:
+ obj = concat(self)
+ elif self.nrows:
+ lines = list(islice(self.data, self.nrows))
+ lines_json = self._combine_lines(lines)
+ obj = self._get_object_parser(lines_json)
+ else:
+ data = ensure_str(self.data)
+ data_lines = data.split("\n")
+ obj = self._get_object_parser(self._combine_lines(data_lines))
else:
- data = ensure_str(self.data)
- data_lines = data.split("\n")
- obj = self._get_object_parser(self._combine_lines(data_lines))
- else:
- obj = self._get_object_parser(self.data)
- self.close()
+ obj = self._get_object_parser(self.data)
return obj
def _get_object_parser(self, json) -> DataFrame | Series:
@@ -964,24 +963,27 @@ def __next__(self: JsonReader[Literal["frame", "series"]]) -> DataFrame | Series
...
def __next__(self) -> DataFrame | Series:
- if self.nrows:
- if self.nrows_seen >= self.nrows:
- self.close()
- raise StopIteration
+ if self.nrows and self.nrows_seen >= self.nrows:
+ self.close()
+ raise StopIteration
lines = list(islice(self.data, self.chunksize))
- if lines:
+ if not lines:
+ self.close()
+ raise StopIteration
+
+ try:
lines_json = self._combine_lines(lines)
obj = self._get_object_parser(lines_json)
# Make sure that the returned objects have the right index.
obj.index = range(self.nrows_seen, self.nrows_seen + len(obj))
self.nrows_seen += len(obj)
+ except Exception as ex:
+ self.close()
+ raise ex
- return obj
-
- self.close()
- raise StopIteration
+ return obj
def __enter__(self) -> JsonReader[FrameSeriesStrT]:
return self
| Use a context manager instead of closing manually with `self.close` to ensure the file always closes.
Not sure whether this should have a whatsnew entry and, if so, where. | https://api.github.com/repos/pandas-dev/pandas/pulls/49921 | 2022-11-26T23:40:51Z | 2022-12-02T00:31:10Z | 2022-12-02T00:31:10Z | 2022-12-05T09:27:37Z
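The diff above wraps `JsonReader.read()` in the reader's own context manager (`with self:`) so the file handle is released even when parsing raises. A minimal, hypothetical sketch of that pattern (a toy class, not pandas' actual `JsonReader`):

```python
class Reader:
    """Toy stand-in for JsonReader: closes its handle on exit."""

    def __init__(self, data: str) -> None:
        self.data = data
        self.closed = False

    def close(self) -> None:
        self.closed = True

    def __enter__(self) -> "Reader":
        return self

    def __exit__(self, *exc_info) -> None:
        # runs on both normal exit and exception, like a try/finally
        self.close()

    def read(self) -> str:
        # `with self:` guarantees close() runs even if parsing fails,
        # which is the behavior the PR adds to JsonReader.read().
        with self:
            if not self.data:
                raise ValueError("no data")
            return self.data.upper()


reader = Reader("")
try:
    reader.read()
except ValueError:
    pass
print(reader.closed)  # → True: handle closed despite the exception
```

The same idea shows up in the `__next__` hunk, which uses an explicit `try`/`except` with `self.close()` so chunked iteration also releases the handle on a parsing error.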
Json fix normalize | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 97ee96d8be25d..e657a98f6358f 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -717,6 +717,7 @@ I/O
- Bug in :func:`read_csv` for a single-line csv with fewer columns than ``names`` raised :class:`.errors.ParserError` with ``engine="c"`` (:issue:`47566`)
- Bug in :func:`DataFrame.to_string` with ``header=False`` that printed the index name on the same line as the first row of the data (:issue:`49230`)
- Fixed memory leak which stemmed from the initialization of the internal JSON module (:issue:`49222`)
+- Fixed issue where :func:`json_normalize` would incorrectly remove leading characters from column names that matched the ``sep`` argument (:issue:`49861`)
-
Period
diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
index 3791dba6e36e3..4b2d0d9beea3f 100644
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -7,6 +7,7 @@
defaultdict,
)
import copy
+import sys
from typing import (
Any,
DefaultDict,
@@ -148,13 +149,18 @@ def _normalise_json(
if isinstance(data, dict):
for key, value in data.items():
new_key = f"{key_string}{separator}{key}"
+
+ if not key_string:
+ if sys.version_info < (3, 9):
+ from pandas.util._str_methods import removeprefix
+
+ new_key = removeprefix(new_key, separator)
+ else:
+ new_key = new_key.removeprefix(separator)
+
_normalise_json(
data=value,
- # to avoid adding the separator to the start of every key
- # GH#43831 avoid adding key if key_string blank
- key_string=new_key
- if new_key[: len(separator)] != separator
- else new_key[len(separator) :],
+ key_string=new_key,
normalized_dict=normalized_dict,
separator=separator,
)
diff --git a/pandas/tests/io/json/test_normalize.py b/pandas/tests/io/json/test_normalize.py
index 986c0039715a6..86059c24b1e48 100644
--- a/pandas/tests/io/json/test_normalize.py
+++ b/pandas/tests/io/json/test_normalize.py
@@ -561,6 +561,14 @@ def generator_data():
tm.assert_frame_equal(result, expected)
+ def test_top_column_with_leading_underscore(self):
+ # 49861
+ data = {"_id": {"a1": 10, "l2": {"l3": 0}}, "gg": 4}
+ result = json_normalize(data, sep="_")
+ expected = DataFrame([[4, 10, 0]], columns=["gg", "_id_a1", "_id_l2_l3"])
+
+ tm.assert_frame_equal(result, expected)
+
class TestNestedToRecord:
def test_flat_stays_flat(self):
| closes #49861
| https://api.github.com/repos/pandas-dev/pandas/pulls/49920 | 2022-11-26T20:01:42Z | 2022-11-28T20:04:06Z | 2022-11-28T20:04:06Z | 2023-01-03T18:19:16Z |
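The fix above replaces slice-based prefix stripping with `str.removeprefix` (Python 3.9+; the diff adds a backport for older versions), applied only when `key_string` is empty. A small sketch of why the old logic lost leading characters (the helper names and sample keys here are made up for illustration):

```python
def old_key(key_string: str, sep: str, key: str) -> str:
    joined = f"{key_string}{sep}{key}"
    # old logic: strip sep whenever the joined key happens to start
    # with it -- even when that prefix belongs to a real column name
    return joined[len(sep):] if joined[: len(sep)] == sep else joined


def new_key(key_string: str, sep: str, key: str) -> str:
    joined = f"{key_string}{sep}{key}"
    # fixed logic: only drop the separator that was prepended for an
    # empty (top-level) key_string; requires Python 3.9+ removeprefix
    return joined.removeprefix(sep) if not key_string else joined


# Nested under a column whose name itself starts with the separator:
print(old_key("_id", "_", "a1"))  # → id_a1   -- leading "_" wrongly lost
print(new_key("_id", "_", "a1"))  # → _id_a1  -- preserved
print(new_key("", "_", "gg"))     # → gg      -- no spurious leading sep
```

This is exactly the case exercised by the new test: `{"_id": {"a1": 10, ...}}` must normalize to `_id_a1`, not `id_a1`.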