| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
ENH: Arithmetic with Timestamp-based intervals | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 55570341cf4e8..247f2a7515705 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -49,6 +49,26 @@ For example:
buffer = io.BytesIO()
data.to_csv(buffer, mode="w+b", encoding="utf-8", compression="gzip")
+Arithmetic with Timestamp and Timedelta-based Intervals
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Arithmetic can now be performed on :class:`Interval` objects whose left and right
+endpoints are :class:`Timestamp` or :class:`Timedelta` values, just as when the
+endpoints are numeric (:issue:`35908`).
+
+Arithmetic uses the standard operators (``+`` and ``-``), so the following
+now works:
+
+.. ipython:: python
+
+ interval = pd.Interval(pd.Timestamp("1900-01-01"), pd.Timestamp("1900-01-02"))
+ interval - pd.Timestamp("1900-01-01")
+
+This works whenever the endpoints are :class:`Timestamp` or :class:`Timedelta` objects.
+
+Note, however, that adding a :class:`Timestamp` to a Timestamp-based interval, and
+subtracting a :class:`Timestamp` from a Timedelta-based interval, both raise ``TypeError``.
+
.. _whatsnew_120.enhancements.other:
Other enhancements
diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 6867e8aba7411..393fecf1259bc 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -395,6 +395,8 @@ cdef class Interval(IntervalMixin):
isinstance(y, numbers.Number)
or PyDelta_Check(y)
or is_timedelta64_object(y)
+ or isinstance(y, _Timestamp)
+ or isinstance(y, _Timedelta)
):
return Interval(self.left + y, self.right + y, closed=self.closed)
elif (
@@ -413,6 +415,8 @@ cdef class Interval(IntervalMixin):
isinstance(y, numbers.Number)
or PyDelta_Check(y)
or is_timedelta64_object(y)
+ or isinstance(y, _Timestamp)
+ or isinstance(y, _Timedelta)
):
return Interval(self.left - y, self.right - y, closed=self.closed)
return NotImplemented
diff --git a/pandas/tests/scalar/interval/test_interval.py b/pandas/tests/scalar/interval/test_interval.py
index a0151bb9ac7bf..5fd3d34a84f20 100644
--- a/pandas/tests/scalar/interval/test_interval.py
+++ b/pandas/tests/scalar/interval/test_interval.py
@@ -184,6 +184,184 @@ def test_math_sub(self, closed):
with pytest.raises(TypeError, match=msg):
interval - "foo"
+ def test_math_sub_interval_timestamp_timestamp(self, closed):
+ # Tests for interval of timestamp - timestamp
+ interval = Interval(
+ Timestamp("1900-01-01"), Timestamp("1900-01-02"), closed=closed
+ )
+ expected = Interval(
+ Timedelta("0 days 00:00:00"), Timedelta("1 days 00:00:00"), closed=closed
+ )
+
+ result = interval - Timestamp("1900-01-01")
+ assert result == expected
+
+ expected = Interval(
+ interval.left - Timestamp("1900-01-01"),
+ interval.right - Timestamp("1900-01-01"),
+ closed=closed,
+ )
+ assert result == expected
+
+ result = interval
+ result -= Timestamp("1900-01-01")
+
+ expected = Interval(
+ Timedelta("0 days 00:00:00"), Timedelta("1 days 00:00:00"), closed=closed
+ )
+ assert result == expected
+
+ expected = Interval(
+ interval.left - Timestamp("1900-01-01"),
+ interval.right - Timestamp("1900-01-01"),
+ closed=closed,
+ )
+ assert result == expected
+
+ def test_math_sub_interval_timestamp_timedelta(self, closed):
+ # Tests for interval of timestamps - timedelta
+ interval = Interval(
+ Timestamp("1900-01-01"), Timestamp("1900-01-02"), closed=closed
+ )
+ expected = Interval(
+ Timestamp("1899-12-31"), Timestamp("1900-01-01"), closed=closed
+ )
+
+ result = interval - Timedelta("1 days 00:00:00")
+ assert result == expected
+
+ expected = Interval(
+ interval.left - Timedelta("1 days 00:00:00"),
+ interval.right - Timedelta("1 days 00:00:00"),
+ closed=closed,
+ )
+ assert result == expected
+
+ result = interval
+ result -= Timedelta("1 days 00:00:00")
+
+ expected = Interval(
+ Timestamp("1899-12-31"), Timestamp("1900-01-01"), closed=closed
+ )
+ assert result == expected
+
+ expected = Interval(
+ interval.left - Timedelta("1 days 00:00:00"),
+ interval.right - Timedelta("1 days 00:00:00"),
+ closed=closed,
+ )
+ assert result == expected
+
+ def test_math_add_interval_timestamp_timedelta(self, closed):
+ interval = Interval(
+ Timestamp("1900-01-01"), Timestamp("1900-01-02"), closed=closed
+ )
+ expected = Interval(
+ Timestamp("1900-01-02"), Timestamp("1900-01-03"), closed=closed
+ )
+
+ result = interval + Timedelta("1 days 00:00:00")
+ assert result == expected
+
+ result = interval
+ result += Timedelta("1 days 00:00:00")
+ assert result == expected
+
+ expected = Interval(
+ interval.left + Timedelta("1 days 00:00:00"),
+ interval.right + Timedelta("1 days 00:00:00"),
+ closed=closed,
+ )
+
+ result = interval + Timedelta("1 days 00:00:00")
+ assert result == expected
+
+ result = interval
+ result += Timedelta("1 days 00:00:00")
+ assert result == expected
+
+ def test_math_add_interval_timedelta_timedelta(self, closed):
+ interval = Interval(
+ Timedelta("1 days 00:00:00"), Timedelta("2 days 00:00:00"), closed=closed
+ )
+ expected = Interval(
+ Timedelta("4 days 01:00:00"), Timedelta("5 days 01:00:00"), closed=closed
+ )
+
+ result = interval + Timedelta("3 days 01:00:00")
+ assert result == expected
+
+ result = interval
+ result += Timedelta("3 days 01:00:00")
+ assert result == expected
+
+ expected = Interval(
+ interval.left + Timedelta("3 days 01:00:00"),
+ interval.right + Timedelta("3 days 01:00:00"),
+ closed=closed,
+ )
+
+ result = interval + Timedelta("3 days 01:00:00")
+ assert result == expected
+
+ result = interval
+ result += Timedelta("3 days 01:00:00")
+ assert result == expected
+
+ def test_math_sub_interval_timedelta_timedelta(self, closed):
+ interval = Interval(
+ Timedelta("1 days 00:00:00"), Timedelta("2 days 00:00:00"), closed=closed
+ )
+ expected = Interval(
+ Timedelta("-3 days +23:00:00"),
+ Timedelta("-2 days +23:00:00"),
+ closed=closed,
+ )
+
+ result = interval - Timedelta("3 days 01:00:00")
+ assert result == expected
+
+ result = interval
+ result -= Timedelta("3 days 01:00:00")
+ assert result == expected
+
+ expected = Interval(
+ interval.left - Timedelta("3 days 01:00:00"),
+ interval.right - Timedelta("3 days 01:00:00"),
+ closed=closed,
+ )
+
+ result = interval - Timedelta("3 days 01:00:00")
+ assert result == expected
+
+ result = interval
+ result -= Timedelta("3 days 01:00:00")
+ assert result == expected
+
+ def test_math_add_interval_timestamp_timestamp(self, closed):
+ interval = Interval(
+ Timestamp("1900-01-01"), Timestamp("1900-01-02"), closed=closed
+ )
+
+ msg = r"unsupported operand type\(s\) for \+"
+ with pytest.raises(TypeError, match=msg):
+ interval = interval + Timestamp("2002-01-08")
+
+ with pytest.raises(TypeError, match=msg):
+ interval += Timestamp("2002-01-08")
+
+ def test_math_sub_interval_timedelta_timestamp(self, closed):
+ interval = Interval(
+ Timedelta("1 days 00:00:00"), Timedelta("3 days 00:00:00"), closed=closed
+ )
+
+ msg = r"unsupported operand type\(s\) for \-"
+ with pytest.raises(TypeError, match=msg):
+ interval = interval - Timestamp("1900-01-01")
+
+ with pytest.raises(TypeError, match=msg):
+ interval -= Timestamp("1900-01-01")
+
def test_math_mult(self, closed):
interval = Interval(0, 1, closed=closed)
expected = Interval(0, 2, closed=closed)
| - [X] closes #35908
- [X] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/36001 | 2020-08-31T03:34:34Z | 2021-02-11T01:35:24Z | null | 2021-02-11T01:35:25Z |
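The endpoint-shifting semantics this PR describes can be sketched with a small pure-Python model. The `Interval` class below is a hypothetical stand-in for `pandas.Interval` (using `datetime`/`timedelta` in place of `Timestamp`/`Timedelta`); it models only the type-dispatch logic that the Cython change in `interval.pyx` adds, not the real implementation:

```python
from datetime import datetime, timedelta


class Interval:
    """Toy stand-in for pandas.Interval (illustration only, not the
    Cython implementation from the diff)."""

    def __init__(self, left, right):
        self.left, self.right = left, right

    def _shift(self, y, op):
        # Pre-PR, only numbers and timedeltas were accepted here; the PR
        # extends the check to Timestamp/Timedelta scalars (modeled by
        # datetime/timedelta).
        if isinstance(y, (int, float, timedelta, datetime)):
            return Interval(op(self.left, y), op(self.right, y))
        return NotImplemented

    def __add__(self, y):
        return self._shift(y, lambda a, b: a + b)

    def __sub__(self, y):
        return self._shift(y, lambda a, b: a - b)

    def __eq__(self, other):
        return (self.left, self.right) == (other.left, other.right)


iv = Interval(datetime(1900, 1, 1), datetime(1900, 1, 2))

# Subtracting a point in time turns datetime endpoints into durations.
assert iv - datetime(1900, 1, 1) == Interval(timedelta(0), timedelta(days=1))

# Shifting by a duration keeps datetime endpoints.
assert iv + timedelta(days=1) == Interval(datetime(1900, 1, 2), datetime(1900, 1, 3))

# The "illegal" case from the whatsnew note: adding a point in time to
# datetime endpoints fails because datetime + datetime is undefined.
try:
    iv + datetime(1900, 1, 3)
except TypeError:
    print("adding a timestamp to timestamp endpoints raises TypeError")
```

Note that in this model the "illegal" combinations do not need an explicit guard: they fail naturally because the underlying endpoint arithmetic (e.g. `datetime + datetime`) is itself undefined.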
ENH: vendor typing_extensions | diff --git a/pandas/_vendored/__init__.py b/pandas/_vendored/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/_vendored/typing_extensions.py b/pandas/_vendored/typing_extensions.py
new file mode 100644
index 0000000000000..53df8da175a56
--- /dev/null
+++ b/pandas/_vendored/typing_extensions.py
@@ -0,0 +1,2466 @@
+"""
+vendored copy of typing_extensions, copied from
+https://raw.githubusercontent.com/python/typing/master/typing_extensions/src_py3/typing_extensions.py
+
+on 2020-08-30.
+
+typing_extensions is distributed under the Python Software Foundation License.
+
+This is not a direct copy/paste of the original file. Changes are:
+ - this docstring
+ - ran `black`
+ - ran `isort`
+ - edited strings split by black to adhere to pandas style conventions
+ - AsyncContextManager is defined without `exec`
+ - python2-style super usages are updated
+ - replace foo[dot]__class__ with type(foo)
+ - Change a comment-syntax annotation in a docstring to newer syntax
+"""
+
+# These are used by Protocol implementation
+# We use internal typing helpers here, but this significantly reduces
+# code duplication. (Also this is only until Protocol is in typing.)
+import abc
+import collections
+import collections.abc as collections_abc
+import contextlib
+import operator
+import sys
+import typing
+from typing import Callable, Generic, Tuple, TypeVar
+
+# After PEP 560, internal typing API was substantially reworked.
+# This is especially important for Protocol class which uses internal APIs
+# quite extensively.
+PEP_560 = sys.version_info[:3] >= (3, 7, 0)
+
+if PEP_560:
+ GenericMeta = TypingMeta = type
+else:
+ from typing import GenericMeta, TypingMeta
+OLD_GENERICS = False
+try:
+ from typing import _next_in_mro, _type_check, _type_vars
+except ImportError:
+ OLD_GENERICS = True
+try:
+ from typing import _subs_tree # noqa
+
+ SUBS_TREE = True
+except ImportError:
+ SUBS_TREE = False
+try:
+ from typing import _tp_cache
+except ImportError:
+
+ def _tp_cache(x):
+ return x
+
+
+try:
+ from typing import _TypingEllipsis, _TypingEmpty
+except ImportError:
+
+ class _TypingEllipsis:
+ pass
+
+ class _TypingEmpty:
+ pass
+
+
+# The two functions below are copies of typing internal helpers.
+# They are needed by _ProtocolMeta
+
+
+def _no_slots_copy(dct):
+ dict_copy = dict(dct)
+ if "__slots__" in dict_copy:
+ for slot in dict_copy["__slots__"]:
+ dict_copy.pop(slot, None)
+ return dict_copy
+
+
+def _check_generic(cls, parameters):
+ if not cls.__parameters__:
+ raise TypeError("%s is not a generic class" % repr(cls))
+ alen = len(parameters)
+ elen = len(cls.__parameters__)
+ if alen != elen:
+ raise TypeError(
+ "Too %s parameters for %s; actual %s, expected %s"
+ % ("many" if alen > elen else "few", repr(cls), alen, elen)
+ )
+
+
+if hasattr(typing, "_generic_new"):
+ _generic_new = typing._generic_new
+else:
+ # Note: The '_generic_new(...)' function is used as a part of the
+ # process of creating a generic type and was added to the typing module
+ # as of Python 3.5.3.
+ #
+ # We've defined '_generic_new(...)' below to exactly match the behavior
+ # implemented in older versions of 'typing' bundled with Python 3.5.0 to
+ # 3.5.2. This helps eliminate redundancy when defining collection types
+ # like 'Deque' later.
+ #
+ # See https://github.com/python/typing/pull/308 for more details -- in
+ # particular, compare and contrast the definition of types like
+ # 'typing.List' before and after the merge.
+
+ def _generic_new(base_cls, cls, *args, **kwargs):
+ return base_cls.__new__(cls, *args, **kwargs)
+
+
+# See https://github.com/python/typing/pull/439
+if hasattr(typing, "_geqv"):
+ from typing import _geqv
+
+ _geqv_defined = True
+else:
+ _geqv = None
+ _geqv_defined = False
+
+if sys.version_info[:2] >= (3, 6):
+ import _collections_abc
+
+ _check_methods_in_mro = _collections_abc._check_methods
+else:
+
+ def _check_methods_in_mro(C, *methods):
+ mro = C.__mro__
+ for method in methods:
+ for B in mro:
+ if method in B.__dict__:
+ if B.__dict__[method] is None:
+ return NotImplemented
+ break
+ else:
+ return NotImplemented
+ return True
+
+
+# Please keep __all__ alphabetized within each category.
+__all__ = [
+ # Super-special typing primitives.
+ "ClassVar",
+ "Final",
+ "Type",
+ # ABCs (from collections.abc).
+ # The following are added depending on presence
+ # of their non-generic counterparts in stdlib:
+ # 'Awaitable',
+ # 'AsyncIterator',
+ # 'AsyncIterable',
+ # 'Coroutine',
+ # 'AsyncGenerator',
+ # 'AsyncContextManager',
+ # 'ChainMap',
+ # Concrete collection types.
+ "ContextManager",
+ "Counter",
+ "Deque",
+ "DefaultDict",
+ "TypedDict",
+ # Structural checks, a.k.a. protocols.
+ "SupportsIndex",
+ # One-off things.
+ "final",
+ "IntVar",
+ "Literal",
+ "NewType",
+ "overload",
+ "Text",
+ "TYPE_CHECKING",
+]
+
+# Annotated relies on substitution trees of pep 560. It will not work for
+# versions of typing older than 3.5.3
+HAVE_ANNOTATED = PEP_560 or SUBS_TREE
+
+if PEP_560:
+ __all__.extend(["get_args", "get_origin", "get_type_hints"])
+
+if HAVE_ANNOTATED:
+ __all__.append("Annotated")
+
+# Protocols are hard to backport to the original version of typing 3.5.0
+HAVE_PROTOCOLS = sys.version_info[:3] != (3, 5, 0)
+
+if HAVE_PROTOCOLS:
+ __all__.extend(["Protocol", "runtime", "runtime_checkable"])
+
+
+# TODO
+if hasattr(typing, "NoReturn"):
+ NoReturn = typing.NoReturn
+elif hasattr(typing, "_FinalTypingBase"):
+
+ class _NoReturn(typing._FinalTypingBase, _root=True):
+ """Special type indicating functions that never return.
+ Example::
+
+ from typing import NoReturn
+
+ def stop() -> NoReturn:
+ raise Exception('no way')
+
+ This type is invalid in other positions, e.g., ``List[NoReturn]``
+ will fail in static type checkers.
+ """
+
+ __slots__ = ()
+
+ def __instancecheck__(self, obj):
+ raise TypeError("NoReturn cannot be used with isinstance().")
+
+ def __subclasscheck__(self, cls):
+ raise TypeError("NoReturn cannot be used with issubclass().")
+
+ NoReturn = _NoReturn(_root=True)
+else:
+
+ class _NoReturnMeta(typing.TypingMeta):
+ """Metaclass for NoReturn"""
+
+ def __new__(cls, name, bases, namespace, _root=False):
+ return super().__new__(cls, name, bases, namespace, _root=_root)
+
+ def __instancecheck__(self, obj):
+ raise TypeError("NoReturn cannot be used with isinstance().")
+
+ def __subclasscheck__(self, cls):
+ raise TypeError("NoReturn cannot be used with issubclass().")
+
+ class NoReturn(typing.Final, metaclass=_NoReturnMeta, _root=True):
+ """Special type indicating functions that never return.
+ Example::
+
+ from typing import NoReturn
+
+ def stop() -> NoReturn:
+ raise Exception('no way')
+
+ This type is invalid in other positions, e.g., ``List[NoReturn]``
+ will fail in static type checkers.
+ """
+
+ __slots__ = ()
+
+
+# Some unconstrained type variables. These are used by the container types.
+# (These are not for export.)
+T = typing.TypeVar("T") # Any type.
+KT = typing.TypeVar("KT") # Key type.
+VT = typing.TypeVar("VT") # Value type.
+T_co = typing.TypeVar("T_co", covariant=True) # Any type covariant containers.
+V_co = typing.TypeVar("V_co", covariant=True) # Any type covariant containers.
+VT_co = typing.TypeVar("VT_co", covariant=True) # Value type covariant containers.
+T_contra = typing.TypeVar("T_contra", contravariant=True) # Ditto contravariant.
+
+
+if hasattr(typing, "ClassVar"):
+ ClassVar = typing.ClassVar
+elif hasattr(typing, "_FinalTypingBase"):
+
+ class _ClassVar(typing._FinalTypingBase, _root=True):
+ """Special type construct to mark class variables.
+
+ An annotation wrapped in ClassVar indicates that a given
+ attribute is intended to be used as a class variable and
+ should not be set on instances of that class. Usage::
+
+ class Starship:
+ stats: ClassVar[Dict[str, int]] = {} # class variable
+ damage: int = 10 # instance variable
+
+ ClassVar accepts only types and cannot be further subscribed.
+
+ Note that ClassVar is not a class itself, and should not
+ be used with isinstance() or issubclass().
+ """
+
+ __slots__ = ("__type__",)
+
+ def __init__(self, tp=None, **kwds):
+ self.__type__ = tp
+
+ def __getitem__(self, item):
+ cls = type(self)
+ if self.__type__ is None:
+ return cls(
+ typing._type_check(
+ item, "{} accepts only single type.".format(cls.__name__[1:])
+ ),
+ _root=True,
+ )
+ raise TypeError("{} cannot be further subscripted".format(cls.__name__[1:]))
+
+ def _eval_type(self, globalns, localns):
+ new_tp = typing._eval_type(self.__type__, globalns, localns)
+ if new_tp == self.__type__:
+ return self
+ return type(self)(new_tp, _root=True)
+
+ def __repr__(self):
+ r = super().__repr__()
+ if self.__type__ is not None:
+ r += "[{}]".format(typing._type_repr(self.__type__))
+ return r
+
+ def __hash__(self):
+ return hash((type(self).__name__, self.__type__))
+
+ def __eq__(self, other):
+ if not isinstance(other, _ClassVar):
+ return NotImplemented
+ if self.__type__ is not None:
+ return self.__type__ == other.__type__
+ return self is other
+
+ ClassVar = _ClassVar(_root=True)
+else:
+
+ class _ClassVarMeta(typing.TypingMeta):
+ """Metaclass for ClassVar"""
+
+ def __new__(cls, name, bases, namespace, tp=None, _root=False):
+ self = super().__new__(cls, name, bases, namespace, _root=_root)
+ if tp is not None:
+ self.__type__ = tp
+ return self
+
+ def __instancecheck__(self, obj):
+ raise TypeError("ClassVar cannot be used with isinstance().")
+
+ def __subclasscheck__(self, cls):
+ raise TypeError("ClassVar cannot be used with issubclass().")
+
+ def __getitem__(self, item):
+ cls = type(self)
+ if self.__type__ is not None:
+ raise TypeError(
+ "{} cannot be further subscripted".format(cls.__name__[1:])
+ )
+
+ param = typing._type_check(
+ item, "{} accepts only single type.".format(cls.__name__[1:])
+ )
+ return cls(
+ self.__name__, self.__bases__, dict(self.__dict__), tp=param, _root=True
+ )
+
+ def _eval_type(self, globalns, localns):
+ new_tp = typing._eval_type(self.__type__, globalns, localns)
+ if new_tp == self.__type__:
+ return self
+ return type(self)(
+ self.__name__,
+ self.__bases__,
+ dict(self.__dict__),
+ tp=self.__type__,
+ _root=True,
+ )
+
+ def __repr__(self):
+ r = super().__repr__()
+ if self.__type__ is not None:
+ r += "[{}]".format(typing._type_repr(self.__type__))
+ return r
+
+ def __hash__(self):
+ return hash((type(self).__name__, self.__type__))
+
+ def __eq__(self, other):
+ if not isinstance(other, ClassVar):
+ return NotImplemented
+ if self.__type__ is not None:
+ return self.__type__ == other.__type__
+ return self is other
+
+ class ClassVar(typing.Final, metaclass=_ClassVarMeta, _root=True):
+ """Special type construct to mark class variables.
+
+ An annotation wrapped in ClassVar indicates that a given
+ attribute is intended to be used as a class variable and
+ should not be set on instances of that class. Usage::
+
+ class Starship:
+ stats: ClassVar[Dict[str, int]] = {} # class variable
+ damage: int = 10 # instance variable
+
+ ClassVar accepts only types and cannot be further subscribed.
+
+ Note that ClassVar is not a class itself, and should not
+ be used with isinstance() or issubclass().
+ """
+
+ __type__ = None
+
+
+# On older versions of typing there is an internal class named "Final".
+if hasattr(typing, "Final") and sys.version_info[:2] >= (3, 7):
+ Final = typing.Final
+elif sys.version_info[:2] >= (3, 7):
+
+ class _FinalForm(typing._SpecialForm, _root=True):
+ def __repr__(self):
+ return "typing_extensions." + self._name
+
+ def __getitem__(self, parameters):
+ item = typing._type_check(
+ parameters, "{} accepts only single type".format(self._name)
+ )
+ return _GenericAlias(self, (item,))
+
+ Final = _FinalForm(
+ "Final",
+ doc="""A special typing construct to indicate that a name
+ cannot be re-assigned or overridden in a subclass.
+ For example:
+
+ MAX_SIZE: Final = 9000
+ MAX_SIZE += 1 # Error reported by type checker
+
+ class Connection:
+ TIMEOUT: Final[int] = 10
+ class FastConnector(Connection):
+ TIMEOUT = 1 # Error reported by type checker
+
+ There is no runtime checking of these properties.""",
+ )
+elif hasattr(typing, "_FinalTypingBase"):
+
+ class _Final(typing._FinalTypingBase, _root=True):
+ """A special typing construct to indicate that a name
+ cannot be re-assigned or overridden in a subclass.
+ For example:
+
+ MAX_SIZE: Final = 9000
+ MAX_SIZE += 1 # Error reported by type checker
+
+ class Connection:
+ TIMEOUT: Final[int] = 10
+ class FastConnector(Connection):
+ TIMEOUT = 1 # Error reported by type checker
+
+ There is no runtime checking of these properties.
+ """
+
+ __slots__ = ("__type__",)
+
+ def __init__(self, tp=None, **kwds):
+ self.__type__ = tp
+
+ def __getitem__(self, item):
+ cls = type(self)
+ if self.__type__ is None:
+ return cls(
+ typing._type_check(
+ item, "{} accepts only single type.".format(cls.__name__[1:])
+ ),
+ _root=True,
+ )
+ raise TypeError("{} cannot be further subscripted".format(cls.__name__[1:]))
+
+ def _eval_type(self, globalns, localns):
+ new_tp = typing._eval_type(self.__type__, globalns, localns)
+ if new_tp == self.__type__:
+ return self
+ return type(self)(new_tp, _root=True)
+
+ def __repr__(self):
+ r = super().__repr__()
+ if self.__type__ is not None:
+ r += "[{}]".format(typing._type_repr(self.__type__))
+ return r
+
+ def __hash__(self):
+ return hash((type(self).__name__, self.__type__))
+
+ def __eq__(self, other):
+ if not isinstance(other, _Final):
+ return NotImplemented
+ if self.__type__ is not None:
+ return self.__type__ == other.__type__
+ return self is other
+
+ Final = _Final(_root=True)
+else:
+
+ class _FinalMeta(typing.TypingMeta):
+ """Metaclass for Final"""
+
+ def __new__(cls, name, bases, namespace, tp=None, _root=False):
+ self = super().__new__(cls, name, bases, namespace, _root=_root)
+ if tp is not None:
+ self.__type__ = tp
+ return self
+
+ def __instancecheck__(self, obj):
+ raise TypeError("Final cannot be used with isinstance().")
+
+ def __subclasscheck__(self, cls):
+ raise TypeError("Final cannot be used with issubclass().")
+
+ def __getitem__(self, item):
+ cls = type(self)
+ if self.__type__ is not None:
+ raise TypeError(
+ "{} cannot be further subscripted".format(cls.__name__[1:])
+ )
+
+ param = typing._type_check(
+ item, "{} accepts only single type.".format(cls.__name__[1:])
+ )
+ return cls(
+ self.__name__, self.__bases__, dict(self.__dict__), tp=param, _root=True
+ )
+
+ def _eval_type(self, globalns, localns):
+ new_tp = typing._eval_type(self.__type__, globalns, localns)
+ if new_tp == self.__type__:
+ return self
+ return type(self)(
+ self.__name__,
+ self.__bases__,
+ dict(self.__dict__),
+ tp=self.__type__,
+ _root=True,
+ )
+
+ def __repr__(self):
+ r = super().__repr__()
+ if self.__type__ is not None:
+ r += "[{}]".format(typing._type_repr(self.__type__))
+ return r
+
+ def __hash__(self):
+ return hash((type(self).__name__, self.__type__))
+
+ def __eq__(self, other):
+ if not isinstance(other, Final):
+ return NotImplemented
+ if self.__type__ is not None:
+ return self.__type__ == other.__type__
+ return self is other
+
+ class Final(typing.Final, metaclass=_FinalMeta, _root=True):
+ """A special typing construct to indicate that a name
+ cannot be re-assigned or overridden in a subclass.
+ For example:
+
+ MAX_SIZE: Final = 9000
+ MAX_SIZE += 1 # Error reported by type checker
+
+ class Connection:
+ TIMEOUT: Final[int] = 10
+ class FastConnector(Connection):
+ TIMEOUT = 1 # Error reported by type checker
+
+ There is no runtime checking of these properties.
+ """
+
+ __type__ = None
+
+
+if hasattr(typing, "final"):
+ final = typing.final
+else:
+
+ def final(f):
+ """This decorator can be used to indicate to type checkers that
+ the decorated method cannot be overridden, and decorated class
+ cannot be subclassed. For example:
+
+ class Base:
+ @final
+ def done(self) -> None:
+ ...
+ class Sub(Base):
+ def done(self) -> None: # Error reported by type checker
+ ...
+ @final
+ class Leaf:
+ ...
+ class Other(Leaf): # Error reported by type checker
+ ...
+
+ There is no runtime checking of these properties.
+ """
+ return f
+
+
+def IntVar(name):
+ return TypeVar(name)
+
+
+if hasattr(typing, "Literal"):
+ Literal = typing.Literal
+elif sys.version_info[:2] >= (3, 7):
+
+ class _LiteralForm(typing._SpecialForm, _root=True):
+ def __repr__(self):
+ return "typing_extensions." + self._name
+
+ def __getitem__(self, parameters):
+ return _GenericAlias(self, parameters)
+
+ Literal = _LiteralForm(
+ "Literal",
+ doc="""A type that can be used to indicate to type checkers
+ that the corresponding value has a value literally equivalent
+ to the provided parameter. For example:
+
+ var: Literal[4] = 4
+
+ The type checker understands that 'var' is literally equal to
+ the value 4 and no other value.
+
+ Literal[...] cannot be subclassed. There is no runtime
+ checking verifying that the parameter is actually a value
+ instead of a type.""",
+ )
+elif hasattr(typing, "_FinalTypingBase"):
+
+ class _Literal(typing._FinalTypingBase, _root=True):
+ """A type that can be used to indicate to type checkers that the
+ corresponding value has a value literally equivalent to the
+ provided parameter. For example:
+
+ var: Literal[4] = 4
+
+ The type checker understands that 'var' is literally equal to the
+ value 4 and no other value.
+
+ Literal[...] cannot be subclassed. There is no runtime checking
+ verifying that the parameter is actually a value instead of a type.
+ """
+
+ __slots__ = ("__values__",)
+
+ def __init__(self, values=None, **kwds):
+ self.__values__ = values
+
+ def __getitem__(self, values):
+ cls = type(self)
+ if self.__values__ is None:
+ if not isinstance(values, tuple):
+ values = (values,)
+ return cls(values, _root=True)
+ raise TypeError("{} cannot be further subscripted".format(cls.__name__[1:]))
+
+ def _eval_type(self, globalns, localns):
+ return self
+
+ def __repr__(self):
+ r = super().__repr__()
+ if self.__values__ is not None:
+ r += "[{}]".format(", ".join(map(typing._type_repr, self.__values__)))
+ return r
+
+ def __hash__(self):
+ return hash((type(self).__name__, self.__values__))
+
+ def __eq__(self, other):
+ if not isinstance(other, _Literal):
+ return NotImplemented
+ if self.__values__ is not None:
+ return self.__values__ == other.__values__
+ return self is other
+
+ Literal = _Literal(_root=True)
+else:
+
+ class _LiteralMeta(typing.TypingMeta):
+ """Metaclass for Literal"""
+
+ def __new__(cls, name, bases, namespace, values=None, _root=False):
+ self = super().__new__(cls, name, bases, namespace, _root=_root)
+ if values is not None:
+ self.__values__ = values
+ return self
+
+ def __instancecheck__(self, obj):
+ raise TypeError("Literal cannot be used with isinstance().")
+
+ def __subclasscheck__(self, cls):
+ raise TypeError("Literal cannot be used with issubclass().")
+
+ def __getitem__(self, item):
+ cls = type(self)
+ if self.__values__ is not None:
+ raise TypeError(
+ "{} cannot be further subscripted".format(cls.__name__[1:])
+ )
+
+ if not isinstance(item, tuple):
+ item = (item,)
+ return cls(
+ self.__name__,
+ self.__bases__,
+ dict(self.__dict__),
+ values=item,
+ _root=True,
+ )
+
+ def _eval_type(self, globalns, localns):
+ return self
+
+ def __repr__(self):
+ r = super().__repr__()
+ if self.__values__ is not None:
+ r += "[{}]".format(", ".join(map(typing._type_repr, self.__values__)))
+ return r
+
+ def __hash__(self):
+ return hash((type(self).__name__, self.__values__))
+
+ def __eq__(self, other):
+ if not isinstance(other, Literal):
+ return NotImplemented
+ if self.__values__ is not None:
+ return self.__values__ == other.__values__
+ return self is other
+
+ class Literal(typing.Final, metaclass=_LiteralMeta, _root=True):
+ """A type that can be used to indicate to type checkers that the
+ corresponding value has a value literally equivalent to the
+ provided parameter. For example:
+
+ var: Literal[4] = 4
+
+ The type checker understands that 'var' is literally equal to the
+ value 4 and no other value.
+
+ Literal[...] cannot be subclassed. There is no runtime checking
+ verifying that the parameter is actually a value instead of a type.
+ """
+
+ __values__ = None
+
+
+def _overload_dummy(*args, **kwds):
+ """Helper for @overload to raise when called."""
+ raise NotImplementedError(
+ "You should not call an overloaded function. "
+ "A series of @overload-decorated functions "
+ "outside a stub module should always be followed "
+ "by an implementation that is not @overload-ed."
+ )
+
+
+def overload(func):
+ """Decorator for overloaded functions/methods.
+
+ In a stub file, place two or more stub definitions for the same
+ function in a row, each decorated with @overload. For example:
+
+ @overload
+ def utf8(value: None) -> None: ...
+ @overload
+ def utf8(value: bytes) -> bytes: ...
+ @overload
+ def utf8(value: str) -> bytes: ...
+
+ In a non-stub file (i.e. a regular .py file), do the same but
+ follow it with an implementation. The implementation should *not*
+ be decorated with @overload. For example:
+
+ @overload
+ def utf8(value: None) -> None: ...
+ @overload
+ def utf8(value: bytes) -> bytes: ...
+ @overload
+ def utf8(value: str) -> bytes: ...
+ def utf8(value):
+ # implementation goes here
+ """
+ return _overload_dummy
+
+
+# This is not a real generic class. Don't use outside annotations.
+if hasattr(typing, "Type"):
+ Type = typing.Type
+else:
+ # Internal type variable used for Type[].
+ CT_co = typing.TypeVar("CT_co", covariant=True, bound=type)
+
+ class Type(typing.Generic[CT_co], extra=type):
+ """A special construct usable to annotate class objects.
+
+ For example, suppose we have the following classes::
+
+ class User: ... # Abstract base for User classes
+ class BasicUser(User): ...
+ class ProUser(User): ...
+ class TeamUser(User): ...
+
+ And a function that takes a class argument that's a subclass of
+ User and returns an instance of the corresponding class::
+
+ U = TypeVar('U', bound=User)
+ def new_user(user_class: Type[U]) -> U:
+ user = user_class()
+ # (Here we could write the user object to a database)
+ return user
+ joe = new_user(BasicUser)
+
+ At this point the type checker knows that joe has type BasicUser.
+ """
+
+ __slots__ = ()
+
+
+# Various ABCs mimicking those in collections.abc.
+# A few are simply re-exported for completeness.
+
+
+def _define_guard(type_name):
+ """
+ Returns True if the given type isn't defined in typing but
+ is defined in collections_abc.
+
+ Adds the type to __all__ if the collection is found in either
+ typing or collections_abc.
+ """
+ if hasattr(typing, type_name):
+ __all__.append(type_name)
+ globals()[type_name] = getattr(typing, type_name)
+ return False
+ elif hasattr(collections_abc, type_name):
+ __all__.append(type_name)
+ return True
+ else:
+ return False
+
+
+class _ExtensionsGenericMeta(GenericMeta):
+ def __subclasscheck__(self, subclass):
+ """This mimics a more modern GenericMeta.__subclasscheck__() logic
+ (that does not have problems with recursion) to work around interactions
+ between collections, typing, and typing_extensions on older
+ versions of Python, see https://github.com/python/typing/issues/501.
+ """
+ if sys.version_info[:3] >= (3, 5, 3) or sys.version_info[:3] < (3, 5, 0):
+ if self.__origin__ is not None:
+ if sys._getframe(1).f_globals["__name__"] not in ["abc", "functools"]:
+ raise TypeError(
+ "Parameterized generics cannot be used with class "
+ "or instance checks"
+ )
+ return False
+ if not self.__extra__:
+ return super().__subclasscheck__(subclass)
+ res = self.__extra__.__subclasshook__(subclass)
+ if res is not NotImplemented:
+ return res
+ if self.__extra__ in subclass.__mro__:
+ return True
+ for scls in self.__extra__.__subclasses__():
+ if isinstance(scls, GenericMeta):
+ continue
+ if issubclass(subclass, scls):
+ return True
+ return False
+
+
+if _define_guard("Awaitable"):
+
+ class Awaitable(
+ typing.Generic[T_co],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections_abc.Awaitable,
+ ):
+ __slots__ = ()
+
+
+if _define_guard("Coroutine"):
+
+ class Coroutine(
+ Awaitable[V_co],
+ typing.Generic[T_co, T_contra, V_co],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections_abc.Coroutine,
+ ):
+ __slots__ = ()
+
+
+if _define_guard("AsyncIterable"):
+
+ class AsyncIterable(
+ typing.Generic[T_co],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections_abc.AsyncIterable,
+ ):
+ __slots__ = ()
+
+
+if _define_guard("AsyncIterator"):
+
+ class AsyncIterator(
+ AsyncIterable[T_co],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections_abc.AsyncIterator,
+ ):
+ __slots__ = ()
+
+
+if hasattr(typing, "Deque"):
+ Deque = typing.Deque
+elif _geqv_defined:
+
+ class Deque(
+ collections.deque,
+ typing.MutableSequence[T],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections.deque,
+ ):
+ __slots__ = ()
+
+ def __new__(cls, *args, **kwds):
+ if _geqv(cls, Deque):
+ return collections.deque(*args, **kwds)
+ return _generic_new(collections.deque, cls, *args, **kwds)
+
+
+else:
+
+ class Deque(
+ collections.deque,
+ typing.MutableSequence[T],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections.deque,
+ ):
+ __slots__ = ()
+
+ def __new__(cls, *args, **kwds):
+ if cls._gorg is Deque:
+ return collections.deque(*args, **kwds)
+ return _generic_new(collections.deque, cls, *args, **kwds)
+
+
+if hasattr(typing, "ContextManager"):
+ ContextManager = typing.ContextManager
+elif hasattr(contextlib, "AbstractContextManager"):
+
+ class ContextManager(
+ typing.Generic[T_co],
+ metaclass=_ExtensionsGenericMeta,
+ extra=contextlib.AbstractContextManager,
+ ):
+ __slots__ = ()
+
+
+else:
+
+ class ContextManager(typing.Generic[T_co]):
+ __slots__ = ()
+
+ def __enter__(self):
+ return self
+
+ @abc.abstractmethod
+ def __exit__(self, exc_type, exc_value, traceback):
+ return None
+
+ @classmethod
+ def __subclasshook__(cls, C):
+ if cls is ContextManager:
+ # In Python 3.6+, it is possible to set a method to None to
+ # explicitly indicate that the class does not implement an ABC
+ # (https://bugs.python.org/issue25958), but we do not support
+ # that pattern here because this fallback class is only used
+ # in Python 3.5 and earlier.
+ if any("__enter__" in B.__dict__ for B in C.__mro__) and any(
+ "__exit__" in B.__dict__ for B in C.__mro__
+ ):
+ return True
+ return NotImplemented
+
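The `__subclasshook__` above duck-types on `__enter__`/`__exit__`; on Pythons that have `contextlib.AbstractContextManager`, the same structural check can be observed directly (a small illustration, not part of the module):

```python
import contextlib


class Managed:
    """No explicit base class; only the two special methods matter."""

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        return None


# The ABC's subclass hook accepts any class providing both methods...
assert issubclass(Managed, contextlib.AbstractContextManager)
# ...and rejects classes that lack them.
assert not issubclass(int, contextlib.AbstractContextManager)
```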
+
+if hasattr(typing, "AsyncContextManager"):
+ AsyncContextManager = typing.AsyncContextManager
+ __all__.append("AsyncContextManager")
+elif hasattr(contextlib, "AbstractAsyncContextManager"):
+
+ class AsyncContextManager(
+ typing.Generic[T_co],
+ metaclass=_ExtensionsGenericMeta,
+ extra=contextlib.AbstractAsyncContextManager,
+ ):
+ __slots__ = ()
+
+ __all__.append("AsyncContextManager")
+
+else:
+
+ class AsyncContextManager(typing.Generic[T_co]):
+ __slots__ = ()
+
+ async def __aenter__(self):
+ return self
+
+ @abc.abstractmethod
+ async def __aexit__(self, exc_type, exc_value, traceback):
+ return None
+
+ @classmethod
+ def __subclasshook__(cls, C):
+ if cls is AsyncContextManager:
+ return _check_methods_in_mro(C, "__aenter__", "__aexit__")
+ return NotImplemented
+
+ __all__.append("AsyncContextManager")
+
+
+if hasattr(typing, "DefaultDict"):
+ DefaultDict = typing.DefaultDict
+elif _geqv_defined:
+
+ class DefaultDict(
+ collections.defaultdict,
+ typing.MutableMapping[KT, VT],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections.defaultdict,
+ ):
+
+ __slots__ = ()
+
+ def __new__(cls, *args, **kwds):
+ if _geqv(cls, DefaultDict):
+ return collections.defaultdict(*args, **kwds)
+ return _generic_new(collections.defaultdict, cls, *args, **kwds)
+
+
+else:
+
+ class DefaultDict(
+ collections.defaultdict,
+ typing.MutableMapping[KT, VT],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections.defaultdict,
+ ):
+
+ __slots__ = ()
+
+ def __new__(cls, *args, **kwds):
+ if cls._gorg is DefaultDict:
+ return collections.defaultdict(*args, **kwds)
+ return _generic_new(collections.defaultdict, cls, *args, **kwds)
+
+
+if hasattr(typing, "Counter"):
+ Counter = typing.Counter
+elif (3, 5, 0) <= sys.version_info[:3] <= (3, 5, 1):
+ assert _geqv_defined
+ _TInt = typing.TypeVar("_TInt")
+
+ class _CounterMeta(typing.GenericMeta):
+ """Metaclass for Counter"""
+
+ def __getitem__(self, item):
+ return super().__getitem__((item, int))
+
+ class Counter(
+ collections.Counter,
+ typing.Dict[T, int],
+ metaclass=_CounterMeta,
+ extra=collections.Counter,
+ ):
+
+ __slots__ = ()
+
+ def __new__(cls, *args, **kwds):
+ if _geqv(cls, Counter):
+ return collections.Counter(*args, **kwds)
+ return _generic_new(collections.Counter, cls, *args, **kwds)
+
+
+elif _geqv_defined:
+
+ class Counter(
+ collections.Counter,
+ typing.Dict[T, int],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections.Counter,
+ ):
+
+ __slots__ = ()
+
+ def __new__(cls, *args, **kwds):
+ if _geqv(cls, Counter):
+ return collections.Counter(*args, **kwds)
+ return _generic_new(collections.Counter, cls, *args, **kwds)
+
+
+else:
+
+ class Counter(
+ collections.Counter,
+ typing.Dict[T, int],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections.Counter,
+ ):
+
+ __slots__ = ()
+
+ def __new__(cls, *args, **kwds):
+ if cls._gorg is Counter:
+ return collections.Counter(*args, **kwds)
+ return _generic_new(collections.Counter, cls, *args, **kwds)
+
+
+if hasattr(typing, "ChainMap"):
+ ChainMap = typing.ChainMap
+ __all__.append("ChainMap")
+elif hasattr(collections, "ChainMap"):
+ # ChainMap only exists in 3.3+
+ if _geqv_defined:
+
+ class ChainMap(
+ collections.ChainMap,
+ typing.MutableMapping[KT, VT],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections.ChainMap,
+ ):
+
+ __slots__ = ()
+
+ def __new__(cls, *args, **kwds):
+ if _geqv(cls, ChainMap):
+ return collections.ChainMap(*args, **kwds)
+ return _generic_new(collections.ChainMap, cls, *args, **kwds)
+
+ else:
+
+ class ChainMap(
+ collections.ChainMap,
+ typing.MutableMapping[KT, VT],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections.ChainMap,
+ ):
+
+ __slots__ = ()
+
+ def __new__(cls, *args, **kwds):
+ if cls._gorg is ChainMap:
+ return collections.ChainMap(*args, **kwds)
+ return _generic_new(collections.ChainMap, cls, *args, **kwds)
+
+ __all__.append("ChainMap")
+
+
+if _define_guard("AsyncGenerator"):
+
+ class AsyncGenerator(
+ AsyncIterator[T_co],
+ typing.Generic[T_co, T_contra],
+ metaclass=_ExtensionsGenericMeta,
+ extra=collections_abc.AsyncGenerator,
+ ):
+ __slots__ = ()
+
+
+if hasattr(typing, "NewType"):
+ NewType = typing.NewType
+else:
+
+ def NewType(name, tp):
+ """NewType creates simple unique types with almost zero
+ runtime overhead. NewType(name, tp) is considered a subtype of tp
+ by static type checkers. At runtime, NewType(name, tp) returns
+ a dummy function that simply returns its argument. Usage::
+
+ UserId = NewType('UserId', int)
+
+ def name_by_id(user_id: UserId) -> str:
+ ...
+
+ UserId('user') # Fails type check
+
+ name_by_id(42) # Fails type check
+ name_by_id(UserId(42)) # OK
+
+ num: int = UserId(5) + 1
+ """
+
+ def new_type(x):
+ return x
+
+ new_type.__name__ = name
+ new_type.__supertype__ = tp
+ return new_type
+
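The fallback mirrors the runtime behavior of `typing.NewType`, which can be sketched with the stdlib version: the result is a plain identity callable carrying `__name__` and `__supertype__`.

```python
from typing import NewType

UserId = NewType("UserId", int)

# At runtime UserId simply returns its argument unchanged;
# the distinct "type" exists only for static checkers.
uid = UserId(42)
assert uid == 42 and type(uid) is int
assert UserId.__name__ == "UserId"
assert UserId.__supertype__ is int
```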
+
+if hasattr(typing, "Text"):
+ Text = typing.Text
+else:
+ Text = str
+
+
+if hasattr(typing, "TYPE_CHECKING"):
+ TYPE_CHECKING = typing.TYPE_CHECKING
+else:
+ # Constant that's True when type checking, but False here.
+ TYPE_CHECKING = False
+
+
+def _gorg(cls):
+ """This function exists for compatibility with old typing versions."""
+ assert isinstance(cls, GenericMeta)
+ if hasattr(cls, "_gorg"):
+ return cls._gorg
+ while cls.__origin__ is not None:
+ cls = cls.__origin__
+ return cls
+
+
+if OLD_GENERICS:
+
+ def _next_in_mro(cls): # noqa
+ """This function exists for compatibility with old typing versions."""
+ next_in_mro = object
+ for i, c in enumerate(cls.__mro__[:-1]):
+ if isinstance(c, GenericMeta) and _gorg(c) is Generic:
+ next_in_mro = cls.__mro__[i + 1]
+ return next_in_mro
+
+
+_PROTO_WHITELIST = [
+ "Callable",
+ "Awaitable",
+ "Iterable",
+ "Iterator",
+ "AsyncIterable",
+ "AsyncIterator",
+ "Hashable",
+ "Sized",
+ "Container",
+ "Collection",
+ "Reversible",
+ "ContextManager",
+ "AsyncContextManager",
+]
+
+
+def _get_protocol_attrs(cls):
+ attrs = set()
+ for base in cls.__mro__[:-1]: # without object
+ if base.__name__ in ("Protocol", "Generic"):
+ continue
+ annotations = getattr(base, "__annotations__", {})
+ for attr in list(base.__dict__.keys()) + list(annotations.keys()):
+ if not attr.startswith("_abc_") and attr not in (
+ "__abstractmethods__",
+ "__annotations__",
+ "__weakref__",
+ "_is_protocol",
+ "_is_runtime_protocol",
+ "__dict__",
+ "__args__",
+ "__slots__",
+ "__next_in_mro__",
+ "__parameters__",
+ "__origin__",
+ "__orig_bases__",
+ "__extra__",
+ "__tree_hash__",
+ "__doc__",
+ "__subclasshook__",
+ "__init__",
+ "__new__",
+ "__module__",
+ "_MutableMapping__marker",
+ "_gorg",
+ ):
+ attrs.add(attr)
+ return attrs
+
+
+def _is_callable_members_only(cls):
+ return all(callable(getattr(cls, attr, None)) for attr in _get_protocol_attrs(cls))
+
+
+if hasattr(typing, "Protocol"):
+ Protocol = typing.Protocol
+elif HAVE_PROTOCOLS and not PEP_560:
+
+ class _ProtocolMeta(GenericMeta):
+ """Internal metaclass for Protocol.
+
+ This exists so Protocol classes can be generic without deriving
+ from Generic.
+ """
+
+ if not OLD_GENERICS:
+
+ def __new__(
+ cls,
+ name,
+ bases,
+ namespace,
+ tvars=None,
+ args=None,
+ origin=None,
+ extra=None,
+ orig_bases=None,
+ ):
+ # This is just a version copied from GenericMeta.__new__ that
+ # includes "Protocol" special treatment. (Comments removed for brevity.)
+ assert extra is None # Protocols should not have extra
+ if tvars is not None:
+ assert origin is not None
+ assert all(isinstance(t, TypeVar) for t in tvars), tvars
+ else:
+ tvars = _type_vars(bases)
+ gvars = None
+ for base in bases:
+ if base is Generic:
+ raise TypeError("Cannot inherit from plain Generic")
+ if isinstance(base, GenericMeta) and base.__origin__ in (
+ Generic,
+ Protocol,
+ ):
+ if gvars is not None:
+ raise TypeError(
+ "Cannot inherit from Generic[...] or "
+ "Protocol[...] multiple times."
+ )
+ gvars = base.__parameters__
+ if gvars is None:
+ gvars = tvars
+ else:
+ tvarset = set(tvars)
+ gvarset = set(gvars)
+ if not tvarset <= gvarset:
+ raise TypeError(
+ "Some type variables (%s) "
+ "are not listed in %s[%s]"
+ % (
+ ", ".join(
+ str(t) for t in tvars if t not in gvarset
+ ),
+ "Generic"
+ if any(b.__origin__ is Generic for b in bases)
+ else "Protocol",
+ ", ".join(str(g) for g in gvars),
+ )
+ )
+ tvars = gvars
+
+ initial_bases = bases
+ if (
+ extra is not None
+ and type(extra) is abc.ABCMeta
+ and extra not in bases
+ ):
+ bases = (extra,) + bases
+ bases = tuple(
+ _gorg(b) if isinstance(b, GenericMeta) else b for b in bases
+ )
+ if any(isinstance(b, GenericMeta) and b is not Generic for b in bases):
+ bases = tuple(b for b in bases if b is not Generic)
+ namespace.update({"__origin__": origin, "__extra__": extra})
+ self = super().__new__(cls, name, bases, namespace, _root=True)
+ super().__setattr__("_gorg", self if not origin else _gorg(origin))
+ self.__parameters__ = tvars
+ self.__args__ = (
+ tuple(
+ ... if a is _TypingEllipsis else () if a is _TypingEmpty else a
+ for a in args
+ )
+ if args
+ else None
+ )
+ self.__next_in_mro__ = _next_in_mro(self)
+ if orig_bases is None:
+ self.__orig_bases__ = initial_bases
+ elif origin is not None:
+ self._abc_registry = origin._abc_registry
+ self._abc_cache = origin._abc_cache
+ if hasattr(self, "_subs_tree"):
+ self.__tree_hash__ = (
+ hash(self._subs_tree()) if origin else super().__hash__()
+ )
+ return self
+
+ def __init__(cls, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ if not cls.__dict__.get("_is_protocol", None):
+ cls._is_protocol = any(
+ b is Protocol
+ or isinstance(b, _ProtocolMeta)
+ and b.__origin__ is Protocol
+ for b in cls.__bases__
+ )
+ if cls._is_protocol:
+ for base in cls.__mro__[1:]:
+ if not (
+ base in (object, Generic)
+ or base.__module__ == "collections.abc"
+ and base.__name__ in _PROTO_WHITELIST
+ or isinstance(base, TypingMeta)
+ and base._is_protocol
+ or isinstance(base, GenericMeta)
+ and base.__origin__ is Generic
+ ):
+ raise TypeError(
+ "Protocols can only inherit from other "
+ "protocols, got %r" % base
+ )
+
+ def _no_init(self, *args, **kwargs):
+ if type(self)._is_protocol:
+ raise TypeError("Protocols cannot be instantiated")
+
+ cls.__init__ = _no_init
+
+ def _proto_hook(other):
+ if not cls.__dict__.get("_is_protocol", None):
+ return NotImplemented
+ if not isinstance(other, type):
+ # Same error as for issubclass(1, int)
+ raise TypeError("issubclass() arg 1 must be a class")
+ for attr in _get_protocol_attrs(cls):
+ for base in other.__mro__:
+ if attr in base.__dict__:
+ if base.__dict__[attr] is None:
+ return NotImplemented
+ break
+ annotations = getattr(base, "__annotations__", {})
+ if (
+ isinstance(annotations, typing.Mapping)
+ and attr in annotations
+ and isinstance(other, _ProtocolMeta)
+ and other._is_protocol
+ ):
+ break
+ else:
+ return NotImplemented
+ return True
+
+ if "__subclasshook__" not in cls.__dict__:
+ cls.__subclasshook__ = _proto_hook
+
+ def __instancecheck__(self, instance):
+ # We need this method for situations where attributes are
+ # assigned in __init__.
+ if (
+ not getattr(self, "_is_protocol", False)
+ or _is_callable_members_only(self)
+ ) and issubclass(type(instance), self):
+ return True
+ if self._is_protocol:
+ if all(
+ hasattr(instance, attr)
+ and (
+ not callable(getattr(self, attr, None))
+ or getattr(instance, attr) is not None
+ )
+ for attr in _get_protocol_attrs(self)
+ ):
+ return True
+ return super().__instancecheck__(instance)
+
+ def __subclasscheck__(self, cls):
+ if self.__origin__ is not None:
+ if sys._getframe(1).f_globals["__name__"] not in ["abc", "functools"]:
+ raise TypeError(
+ "Parameterized generics cannot be used with class "
+ "or instance checks"
+ )
+ return False
+ if self.__dict__.get("_is_protocol", None) and not self.__dict__.get(
+ "_is_runtime_protocol", None
+ ):
+ if sys._getframe(1).f_globals["__name__"] in [
+ "abc",
+ "functools",
+ "typing",
+ ]:
+ return False
+ raise TypeError(
+ "Instance and class checks can only be used with "
+ "@runtime protocols"
+ )
+ if self.__dict__.get(
+ "_is_runtime_protocol", None
+ ) and not _is_callable_members_only(self):
+ if sys._getframe(1).f_globals["__name__"] in [
+ "abc",
+ "functools",
+ "typing",
+ ]:
+ return super().__subclasscheck__(cls)
+ raise TypeError(
+ "Protocols with non-method members don't support issubclass()"
+ )
+ return super().__subclasscheck__(cls)
+
+ if not OLD_GENERICS:
+
+ @_tp_cache
+ def __getitem__(self, params):
+ # We also need to copy this from GenericMeta.__getitem__ to get
+ # special treatment of "Protocol". (Comments removed for brevity.)
+ if not isinstance(params, tuple):
+ params = (params,)
+ if not params and _gorg(self) is not Tuple:
+ raise TypeError(
+ "Parameter list to %s[...] cannot be empty" % self.__qualname__
+ )
+ msg = "Parameters to generic types must be types."
+ params = tuple(_type_check(p, msg) for p in params)
+ if self in (Generic, Protocol):
+ if not all(isinstance(p, TypeVar) for p in params):
+ raise TypeError(
+ "Parameters to %r[...] must all be type variables" % self
+ )
+ if len(set(params)) != len(params):
+ raise TypeError(
+ "Parameters to %r[...] must all be unique" % self
+ )
+ tvars = params
+ args = params
+ elif self in (Tuple, Callable):
+ tvars = _type_vars(params)
+ args = params
+ elif self.__origin__ in (Generic, Protocol):
+ raise TypeError(
+ "Cannot subscript already-subscripted %s" % repr(self)
+ )
+ else:
+ _check_generic(self, params)
+ tvars = _type_vars(params)
+ args = params
+
+ prepend = (self,) if self.__origin__ is None else ()
+ return type(self)(
+ self.__name__,
+ prepend + self.__bases__,
+ _no_slots_copy(self.__dict__),
+ tvars=tvars,
+ args=args,
+ origin=self,
+ extra=self.__extra__,
+ orig_bases=self.__orig_bases__,
+ )
+
+ class Protocol(metaclass=_ProtocolMeta):
+ """Base class for protocol classes. Protocol classes are defined as::
+
+ class Proto(Protocol):
+ def meth(self) -> int:
+ ...
+
+ Such classes are primarily used with static type checkers that recognize
+ structural subtyping (static duck-typing), for example::
+
+ class C:
+ def meth(self) -> int:
+ return 0
+
+ def func(x: Proto) -> int:
+ return x.meth()
+
+ func(C()) # Passes static type check
+
+ See PEP 544 for details. Protocol classes decorated with
+        @typing_extensions.runtime act as simple-minded runtime protocols that check
+        only the presence of given attributes, ignoring their type signatures.
+
+        Protocol classes can be generic; they are defined as::
+
+ class GenProto({bases}):
+ def meth(self) -> T:
+ ...
+ """
+
+ __slots__ = ()
+ _is_protocol = True
+
+ def __new__(cls, *args, **kwds):
+ if _gorg(cls) is Protocol:
+ raise TypeError(
+ "Type Protocol cannot be instantiated; "
+ "it can be used only as a base class"
+ )
+ if OLD_GENERICS:
+ return _generic_new(_next_in_mro(cls), cls, *args, **kwds)
+ return _generic_new(cls.__next_in_mro__, cls, *args, **kwds)
+
+ if Protocol.__doc__ is not None:
+ Protocol.__doc__ = Protocol.__doc__.format(
+ bases="Protocol, Generic[T]" if OLD_GENERICS else "Protocol[T]"
+ )
+
+
+elif PEP_560:
+ from typing import _collect_type_vars, _GenericAlias, _type_check # noqa
+
+ class _ProtocolMeta(abc.ABCMeta):
+ # This metaclass is a bit unfortunate and exists only because of the lack
+ # of __instancehook__.
+ def __instancecheck__(cls, instance):
+ # We need this method for situations where attributes are
+ # assigned in __init__.
+ if (
+ not getattr(cls, "_is_protocol", False)
+ or _is_callable_members_only(cls)
+ ) and issubclass(type(instance), cls):
+ return True
+ if cls._is_protocol:
+ if all(
+ hasattr(instance, attr)
+ and (
+ not callable(getattr(cls, attr, None))
+ or getattr(instance, attr) is not None
+ )
+ for attr in _get_protocol_attrs(cls)
+ ):
+ return True
+ return super().__instancecheck__(instance)
+
+ class Protocol(metaclass=_ProtocolMeta):
+ # There is quite a lot of overlapping code with typing.Generic.
+ # Unfortunately it is hard to avoid this while these live in two different
+ # modules. The duplicated code will be removed when Protocol is moved to typing.
+ """Base class for protocol classes. Protocol classes are defined as::
+
+ class Proto(Protocol):
+ def meth(self) -> int:
+ ...
+
+ Such classes are primarily used with static type checkers that recognize
+ structural subtyping (static duck-typing), for example::
+
+ class C:
+ def meth(self) -> int:
+ return 0
+
+ def func(x: Proto) -> int:
+ return x.meth()
+
+ func(C()) # Passes static type check
+
+ See PEP 544 for details. Protocol classes decorated with
+        @typing_extensions.runtime act as simple-minded runtime protocols that check
+        only the presence of given attributes, ignoring their type signatures.
+
+        Protocol classes can be generic; they are defined as::
+
+ class GenProto(Protocol[T]):
+ def meth(self) -> T:
+ ...
+ """
+ __slots__ = ()
+ _is_protocol = True
+
+ def __new__(cls, *args, **kwds):
+ if cls is Protocol:
+ raise TypeError(
+ "Type Protocol cannot be instantiated; "
+ "it can only be used as a base class"
+ )
+ return super().__new__(cls)
+
+ @_tp_cache
+ def __class_getitem__(cls, params):
+ if not isinstance(params, tuple):
+ params = (params,)
+ if not params and cls is not Tuple:
+ raise TypeError(
+ "Parameter list to {}[...] cannot be empty".format(cls.__qualname__)
+ )
+ msg = "Parameters to generic types must be types."
+ params = tuple(_type_check(p, msg) for p in params)
+ if cls is Protocol:
+ # Generic can only be subscripted with unique type variables.
+ if not all(isinstance(p, TypeVar) for p in params):
+ i = 0
+ while isinstance(params[i], TypeVar):
+ i += 1
+ raise TypeError(
+ "Parameters to Protocol[...] must all be type variables. "
+ "Parameter {} is {}".format(i + 1, params[i])
+ )
+ if len(set(params)) != len(params):
+ raise TypeError("Parameters to Protocol[...] must all be unique")
+ else:
+ # Subscripting a regular Generic subclass.
+ _check_generic(cls, params)
+ return _GenericAlias(cls, params)
+
+ def __init_subclass__(cls, *args, **kwargs):
+ tvars = []
+ if "__orig_bases__" in cls.__dict__:
+ error = Generic in cls.__orig_bases__
+ else:
+ error = Generic in cls.__bases__
+ if error:
+ raise TypeError("Cannot inherit from plain Generic")
+ if "__orig_bases__" in cls.__dict__:
+ tvars = _collect_type_vars(cls.__orig_bases__)
+ # Look for Generic[T1, ..., Tn] or Protocol[T1, ..., Tn].
+ # If found, tvars must be a subset of it.
+ # If not found, tvars is it.
+ # Also check for and reject plain Generic,
+ # and reject multiple Generic[...] and/or Protocol[...].
+ gvars = None
+ for base in cls.__orig_bases__:
+ if isinstance(base, _GenericAlias) and base.__origin__ in (
+ Generic,
+ Protocol,
+ ):
+ # for error messages
+ the_base = (
+ "Generic" if base.__origin__ is Generic else "Protocol"
+ )
+ if gvars is not None:
+ raise TypeError(
+ "Cannot inherit from Generic[...] "
+                            "and/or Protocol[...] multiple times."
+ )
+ gvars = base.__parameters__
+ if gvars is None:
+ gvars = tvars
+ else:
+ tvarset = set(tvars)
+ gvarset = set(gvars)
+ if not tvarset <= gvarset:
+ s_vars = ", ".join(str(t) for t in tvars if t not in gvarset)
+ s_args = ", ".join(str(g) for g in gvars)
+ raise TypeError(
+ "Some type variables ({}) are "
+ "not listed in {}[{}]".format(s_vars, the_base, s_args)
+ )
+ tvars = gvars
+ cls.__parameters__ = tuple(tvars)
+
+ # Determine if this is a protocol or a concrete subclass.
+ if not cls.__dict__.get("_is_protocol", None):
+ cls._is_protocol = any(b is Protocol for b in cls.__bases__)
+
+ # Set (or override) the protocol subclass hook.
+ def _proto_hook(other):
+ if not cls.__dict__.get("_is_protocol", None):
+ return NotImplemented
+ if not getattr(cls, "_is_runtime_protocol", False):
+ if sys._getframe(2).f_globals["__name__"] in ["abc", "functools"]:
+ return NotImplemented
+ raise TypeError(
+ "Instance and class checks can only be used with "
+ "@runtime protocols"
+ )
+ if not _is_callable_members_only(cls):
+ if sys._getframe(2).f_globals["__name__"] in ["abc", "functools"]:
+ return NotImplemented
+ raise TypeError(
+ "Protocols with non-method members "
+ "don't support issubclass()"
+ )
+ if not isinstance(other, type):
+ # Same error as for issubclass(1, int)
+ raise TypeError("issubclass() arg 1 must be a class")
+ for attr in _get_protocol_attrs(cls):
+ for base in other.__mro__:
+ if attr in base.__dict__:
+ if base.__dict__[attr] is None:
+ return NotImplemented
+ break
+ annotations = getattr(base, "__annotations__", {})
+ if (
+ isinstance(annotations, typing.Mapping)
+ and attr in annotations
+ and isinstance(other, _ProtocolMeta)
+ and other._is_protocol
+ ):
+ break
+ else:
+ return NotImplemented
+ return True
+
+ if "__subclasshook__" not in cls.__dict__:
+ cls.__subclasshook__ = _proto_hook
+
+ # We have nothing more to do for non-protocols.
+ if not cls._is_protocol:
+ return
+
+ # Check consistency of bases.
+ for base in cls.__bases__:
+ if not (
+ base in (object, Generic)
+ or base.__module__ == "collections.abc"
+ and base.__name__ in _PROTO_WHITELIST
+ or isinstance(base, _ProtocolMeta)
+ and base._is_protocol
+ ):
+ raise TypeError(
+ "Protocols can only inherit from other "
+ "protocols, got %r" % base
+ )
+
+ def _no_init(self, *args, **kwargs):
+ if type(self)._is_protocol:
+ raise TypeError("Protocols cannot be instantiated")
+
+ cls.__init__ = _no_init
+
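The `_no_init` swap above is what keeps bare protocol classes non-instantiable while still allowing concrete subclasses; `typing.Protocol` (in the stdlib since 3.8) shows the same behavior:

```python
from typing import Protocol


class HasMeth(Protocol):
    def meth(self) -> int: ...


# Instantiating the protocol itself raises TypeError...
raised = False
try:
    HasMeth()
except TypeError:
    raised = True
assert raised

# ...but a concrete implementation subclass works normally.
class Impl(HasMeth):
    def meth(self) -> int:
        return 1


assert Impl().meth() == 1
```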
+
+if hasattr(typing, "runtime_checkable"):
+ runtime_checkable = typing.runtime_checkable
+elif HAVE_PROTOCOLS:
+
+ def runtime_checkable(cls):
+ """Mark a protocol class as a runtime protocol, so that it
+ can be used with isinstance() and issubclass(). Raise TypeError
+ if applied to a non-protocol class.
+
+ This allows a simple-minded structural check very similar to the
+ one-offs in collections.abc such as Hashable.
+ """
+ if not isinstance(cls, _ProtocolMeta) or not cls._is_protocol:
+ raise TypeError(
+ "@runtime_checkable can be only applied to protocol classes, "
+ "got %r" % cls
+ )
+ cls._is_runtime_protocol = True
+ return cls
+
+
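As the docstring warns, the resulting check is purely structural: `isinstance()` only verifies that the named attributes exist. A short sketch with the stdlib equivalents (`typing.Protocol` and `runtime_checkable`, available since 3.8):

```python
import io
from typing import Protocol, runtime_checkable


@runtime_checkable
class SupportsClose(Protocol):
    def close(self) -> None: ...


# Attribute presence is all that is checked; signatures are ignored.
assert isinstance(io.StringIO(), SupportsClose)
assert not isinstance(42, SupportsClose)
```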
+if HAVE_PROTOCOLS:
+ # Exists for backwards compatibility.
+ runtime = runtime_checkable
+
+
+if hasattr(typing, "SupportsIndex"):
+ SupportsIndex = typing.SupportsIndex
+elif HAVE_PROTOCOLS:
+
+ @runtime_checkable
+ class SupportsIndex(Protocol):
+ __slots__ = ()
+
+ @abc.abstractmethod
+ def __index__(self) -> int:
+ pass
+
+
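Because `SupportsIndex` is decorated with `@runtime_checkable`, it supports `isinstance()` checks against anything defining `__index__` (sketched here with the stdlib version, available since 3.8):

```python
from typing import SupportsIndex


class Third:
    def __index__(self) -> int:
        return 3


assert isinstance(Third(), SupportsIndex)
assert isinstance(True, SupportsIndex)      # int (and bool) define __index__
assert not isinstance(2.5, SupportsIndex)   # float does not
```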
+if sys.version_info[:2] >= (3, 9):
+ # The standard library TypedDict in Python 3.8 does not store runtime information
+ # about which (if any) keys are optional. See https://bugs.python.org/issue38834
+ TypedDict = typing.TypedDict
+else:
+
+ def _check_fails(cls, other):
+ try:
+ if sys._getframe(1).f_globals["__name__"] not in [
+ "abc",
+ "functools",
+ "typing",
+ ]:
+ # Typed dicts are only for static structural subtyping.
+ raise TypeError("TypedDict does not support instance and class checks")
+ except (AttributeError, ValueError):
+ pass
+ return False
+
+ def _dict_new(*args, **kwargs):
+ if not args:
+ raise TypeError("TypedDict.__new__(): not enough arguments")
+        _, args = args[0], args[1:]  # allow the "cls" keyword to be passed
+ return dict(*args, **kwargs)
+
+ _dict_new.__text_signature__ = "($cls, _typename, _fields=None, /, **kwargs)"
+
+ def _typeddict_new(*args, total=True, **kwargs):
+ if not args:
+ raise TypeError("TypedDict.__new__(): not enough arguments")
+        _, args = args[0], args[1:]  # allow the "cls" keyword to be passed
+ if args:
+ typename, args = (
+ args[0],
+ args[1:],
+            )  # allow the "_typename" keyword to be passed
+ elif "_typename" in kwargs:
+ typename = kwargs.pop("_typename")
+ import warnings
+
+ warnings.warn(
+ "Passing '_typename' as keyword argument is deprecated",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ else:
+ raise TypeError(
+ "TypedDict.__new__() missing 1 required positional "
+ "argument: '_typename'"
+ )
+ if args:
+ try:
+                (fields,) = args  # allow the "_fields" keyword to be passed
+ except ValueError:
+ raise TypeError(
+ "TypedDict.__new__() takes from 2 to 3 "
+ "positional arguments but {} "
+ "were given".format(len(args) + 2)
+ )
+ elif "_fields" in kwargs and len(kwargs) == 1:
+ fields = kwargs.pop("_fields")
+ import warnings
+
+ warnings.warn(
+ "Passing '_fields' as keyword argument is deprecated",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ else:
+ fields = None
+
+ if fields is None:
+ fields = kwargs
+ elif kwargs:
+ raise TypeError(
+ "TypedDict takes either a dict or keyword arguments, but not both"
+ )
+
+ ns = {"__annotations__": dict(fields), "__total__": total}
+ try:
+ # Setting correct module is necessary to make typed dict classes pickleable.
+ ns["__module__"] = sys._getframe(1).f_globals.get("__name__", "__main__")
+ except (AttributeError, ValueError):
+ pass
+
+ return _TypedDictMeta(typename, (), ns)
+
+ _typeddict_new.__text_signature__ = (
+ "($cls, _typename, _fields=None, /, *, total=True, **kwargs)"
+ )
+
+ class _TypedDictMeta(type):
+ def __new__(cls, name, bases, ns, total=True):
+ # Create new typed dict class object.
+ # This method is called directly when TypedDict is subclassed,
+ # or via _typeddict_new when TypedDict is instantiated. This way
+ # TypedDict supports all three syntaxes described in its docstring.
+ # Subclasses and instances of TypedDict return actual dictionaries
+ # via _dict_new.
+ ns["__new__"] = _typeddict_new if name == "TypedDict" else _dict_new
+ tp_dict = super().__new__(cls, name, (dict,), ns)
+
+ annotations = {}
+ own_annotations = ns.get("__annotations__", {})
+ own_annotation_keys = set(own_annotations.keys())
+ msg = "TypedDict('Name', {f0: t0, f1: t1, ...}); each t must be a type"
+ own_annotations = {
+ n: typing._type_check(tp, msg) for n, tp in own_annotations.items()
+ }
+ required_keys = set()
+ optional_keys = set()
+
+ for base in bases:
+ annotations.update(base.__dict__.get("__annotations__", {}))
+ required_keys.update(base.__dict__.get("__required_keys__", ()))
+ optional_keys.update(base.__dict__.get("__optional_keys__", ()))
+
+ annotations.update(own_annotations)
+ if total:
+ required_keys.update(own_annotation_keys)
+ else:
+ optional_keys.update(own_annotation_keys)
+
+ tp_dict.__annotations__ = annotations
+ tp_dict.__required_keys__ = frozenset(required_keys)
+ tp_dict.__optional_keys__ = frozenset(optional_keys)
+ if not hasattr(tp_dict, "__total__"):
+ tp_dict.__total__ = total
+ return tp_dict
+
+ __instancecheck__ = __subclasscheck__ = _check_fails
+
+ TypedDict = _TypedDictMeta("TypedDict", (dict,), {})
+ TypedDict.__module__ = __name__
+    TypedDict.__doc__ = """A simple typed namespace. At runtime it is equivalent to a plain dict.
+
+ TypedDict creates a dictionary type that expects all of its
+ instances to have a certain set of keys, with each key
+ associated with a value of a consistent type. This expectation
+ is not checked at runtime but is only enforced by type checkers.
+ Usage::
+
+ class Point2D(TypedDict):
+ x: int
+ y: int
+ label: str
+
+ a: Point2D = {'x': 1, 'y': 2, 'label': 'good'} # OK
+ b: Point2D = {'z': 3, 'label': 'bad'} # Fails type check
+
+ assert Point2D(x=1, y=2, label='first') == dict(x=1, y=2, label='first')
+
+ The type info can be accessed via the Point2D.__annotations__ dict, and
+ the Point2D.__required_keys__ and Point2D.__optional_keys__ frozensets.
+ TypedDict supports two additional equivalent forms::
+
+ Point2D = TypedDict('Point2D', x=int, y=int, label=str)
+ Point2D = TypedDict('Point2D', {'x': int, 'y': int, 'label': str})
+
+    The class syntax is only supported in Python 3.6+, while the other two
+    syntax forms work for Python 2.7 and 3.2+.
+ """
+
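The runtime behavior described in the docstring can be sketched with the stdlib `TypedDict` (3.8+): instances are ordinary dicts, and no per-key checking happens at runtime.

```python
from typing import TypedDict


class Point2D(TypedDict):
    x: int
    y: int
    label: str


p = Point2D(x=1, y=2, label="first")
# At runtime a TypedDict instance is an ordinary dict...
assert type(p) is dict
assert p == {"x": 1, "y": 2, "label": "first"}
# ...and totality is recorded on the class.
assert Point2D.__total__ is True
```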
+
+# Python 3.9+ has PEP 593 (Annotated and modified get_type_hints)
+if hasattr(typing, "Annotated"):
+ Annotated = typing.Annotated
+ get_type_hints = typing.get_type_hints
+ # Not exported and not a public API, but needed for get_origin() and get_args()
+ # to work.
+ _AnnotatedAlias = typing._AnnotatedAlias
+elif PEP_560:
+
+ class _AnnotatedAlias(typing._GenericAlias, _root=True):
+ """Runtime representation of an annotated type.
+
+ At its core 'Annotated[t, dec1, dec2, ...]' is an alias for the type 't'
+        with extra annotations. The alias behaves like a normal typing alias:
+        instantiating it is the same as instantiating the underlying type, and
+        binding it to type variables works the same way.
+ """
+
+ def __init__(self, origin, metadata):
+ if isinstance(origin, _AnnotatedAlias):
+ metadata = origin.__metadata__ + metadata
+ origin = origin.__origin__
+ super().__init__(origin, origin)
+ self.__metadata__ = metadata
+
+ def copy_with(self, params):
+ assert len(params) == 1
+ new_type = params[0]
+ return _AnnotatedAlias(new_type, self.__metadata__)
+
+ def __repr__(self):
+ return "typing_extensions.Annotated[{}, {}]".format(
+ typing._type_repr(self.__origin__),
+ ", ".join(repr(a) for a in self.__metadata__),
+ )
+
+ def __reduce__(self):
+ return operator.getitem, (Annotated, (self.__origin__,) + self.__metadata__)
+
+ def __eq__(self, other):
+ if not isinstance(other, _AnnotatedAlias):
+ return NotImplemented
+ if self.__origin__ != other.__origin__:
+ return False
+ return self.__metadata__ == other.__metadata__
+
+ def __hash__(self):
+ return hash((self.__origin__, self.__metadata__))
+
+ class Annotated:
+ """Add context specific metadata to a type.
+
+ Example: Annotated[int, runtime_check.Unsigned] indicates to the
+ hypothetical runtime_check module that this type is an unsigned int.
+ Every other consumer of this type can ignore this metadata and treat
+ this type as int.
+
+        The first argument to Annotated must be a valid type (and will be in
+        the __origin__ field); the remaining arguments are kept as a tuple in
+        the __metadata__ field.
+
+ Details:
+
+ - It's an error to call `Annotated` with less than two arguments.
+ - Nested Annotated are flattened::
+
+ Annotated[Annotated[T, Ann1, Ann2], Ann3] == Annotated[T, Ann1, Ann2, Ann3]
+
+ - Instantiating an annotated type is equivalent to instantiating the
+ underlying type::
+
+ Annotated[C, Ann1](5) == C(5)
+
+ - Annotated can be used as a generic type alias::
+
+ Optimized = Annotated[T, runtime.Optimize()]
+ Optimized[int] == Annotated[int, runtime.Optimize()]
+
+ OptimizedList = Annotated[List[T], runtime.Optimize()]
+ OptimizedList[int] == Annotated[List[int], runtime.Optimize()]
+ """
+
+ __slots__ = ()
+
+ def __new__(cls, *args, **kwargs):
+ raise TypeError("Type Annotated cannot be instantiated.")
+
+ @_tp_cache
+ def __class_getitem__(cls, params):
+ if not isinstance(params, tuple) or len(params) < 2:
+ raise TypeError(
+ "Annotated[...] should be used "
+ "with at least two arguments (a type and an "
+ "annotation)."
+ )
+ msg = "Annotated[t, ...]: t must be a type."
+ origin = typing._type_check(params[0], msg)
+ metadata = tuple(params[1:])
+ return _AnnotatedAlias(origin, metadata)
+
+ def __init_subclass__(cls, *args, **kwargs):
+ raise TypeError("Cannot subclass {}.Annotated".format(cls.__module__))
+
+ def _strip_annotations(t):
+ """Strips the annotations from a given type.
+ """
+ if isinstance(t, _AnnotatedAlias):
+ return _strip_annotations(t.__origin__)
+ if isinstance(t, typing._GenericAlias):
+ stripped_args = tuple(_strip_annotations(a) for a in t.__args__)
+ if stripped_args == t.__args__:
+ return t
+ res = t.copy_with(stripped_args)
+ res._special = t._special
+ return res
+ return t
+
+ def get_type_hints(obj, globalns=None, localns=None, include_extras=False):
+ """Return type hints for an object.
+
+ This is often the same as obj.__annotations__, but it handles
+ forward references encoded as string literals, adds Optional[t] if a
+ default value equal to None is set and recursively replaces all
+ 'Annotated[T, ...]' with 'T' (unless 'include_extras=True').
+
+ The argument may be a module, class, method, or function. The annotations
+ are returned as a dictionary. For classes, annotations include also
+ inherited members.
+
+ TypeError is raised if the argument is not of a type that can contain
+ annotations, and an empty dictionary is returned if no annotations are
+ present.
+
+ BEWARE -- the behavior of globalns and localns is counterintuitive
+ (unless you are familiar with how eval and exec work). The
+ search order is locals first, then globals.
+
+ - If no dict arguments are passed, an attempt is made to use the
+ globals from obj (or the respective module's globals for classes),
+ and these are also used as the locals. If the object does not appear
+ to have globals, an empty dictionary is used.
+
+ - If one dict argument is passed, it is used for both globals and
+ locals.
+
+ - If two dict arguments are passed, they specify globals and
+ locals, respectively.
+ """
+ hint = typing.get_type_hints(obj, globalns=globalns, localns=localns)
+ if include_extras:
+ return hint
+ return {k: _strip_annotations(t) for k, t in hint.items()}
+
+
+elif HAVE_ANNOTATED:
+
+ def _is_dunder(name):
+ """Returns True if name is a __dunder_variable_name__."""
+ return len(name) > 4 and name.startswith("__") and name.endswith("__")
+
+ # Prior to Python 3.7 types did not have `copy_with`. A lot of the equality
+ # checks, argument expansion etc. are done on the _subs_tree. As a result we
+ # can't provide a get_type_hints function that strips out annotations.
+
+ class AnnotatedMeta(typing.GenericMeta):
+ """Metaclass for Annotated"""
+
+ def __new__(cls, name, bases, namespace, **kwargs):
+ if any(b is not object for b in bases):
+ raise TypeError("Cannot subclass " + str(Annotated))
+ return super().__new__(cls, name, bases, namespace, **kwargs)
+
+ @property
+ def __metadata__(self):
+ return self._subs_tree()[2]
+
+ def _tree_repr(self, tree):
+ cls, origin, metadata = tree
+ if not isinstance(origin, tuple):
+ tp_repr = typing._type_repr(origin)
+ else:
+ tp_repr = origin[0]._tree_repr(origin)
+ metadata_reprs = ", ".join(repr(arg) for arg in metadata)
+ return "%s[%s, %s]" % (cls, tp_repr, metadata_reprs)
+
+ def _subs_tree(self, tvars=None, args=None): # noqa
+ if self is Annotated:
+ return Annotated
+ res = super()._subs_tree(tvars=tvars, args=args)
+ # Flatten nested Annotated
+ if isinstance(res[1], tuple) and res[1][0] is Annotated:
+ sub_tp = res[1][1]
+ sub_annot = res[1][2]
+ return (Annotated, sub_tp, sub_annot + res[2])
+ return res
+
+ def _get_cons(self):
+ """Return the class used to create instance of this type."""
+ if self.__origin__ is None:
+ raise TypeError(
+ "Cannot get the underlying type of a "
+ "non-specialized Annotated type."
+ )
+ tree = self._subs_tree()
+ while isinstance(tree, tuple) and tree[0] is Annotated:
+ tree = tree[1]
+ if isinstance(tree, tuple):
+ return tree[0]
+ else:
+ return tree
+
+ @_tp_cache
+ def __getitem__(self, params):
+ if not isinstance(params, tuple):
+ params = (params,)
+ if self.__origin__ is not None: # specializing an instantiated type
+ return super().__getitem__(params)
+ elif not isinstance(params, tuple) or len(params) < 2:
+ raise TypeError(
+ "Annotated[...] should be instantiated "
+ "with at least two arguments (a type and an "
+ "annotation)."
+ )
+ else:
+ msg = "Annotated[t, ...]: t must be a type."
+ tp = typing._type_check(params[0], msg)
+ metadata = tuple(params[1:])
+ return type(self)(
+ self.__name__,
+ self.__bases__,
+ _no_slots_copy(self.__dict__),
+ tvars=_type_vars((tp,)),
+ # Metadata is a tuple so it won't be touched by _replace_args et al.
+ args=(tp, metadata),
+ origin=self,
+ )
+
+ def __call__(self, *args, **kwargs):
+ cons = self._get_cons()
+ result = cons(*args, **kwargs)
+ try:
+ result.__orig_class__ = self
+ except AttributeError:
+ pass
+ return result
+
+ def __getattr__(self, attr):
+ # For simplicity we just don't relay all dunder names
+ if self.__origin__ is not None and not _is_dunder(attr):
+ return getattr(self._get_cons(), attr)
+ raise AttributeError(attr)
+
+ def __setattr__(self, attr, value):
+ if _is_dunder(attr) or attr.startswith("_abc_"):
+ super().__setattr__(attr, value)
+ elif self.__origin__ is None:
+ raise AttributeError(attr)
+ else:
+ setattr(self._get_cons(), attr, value)
+
+ def __instancecheck__(self, obj):
+ raise TypeError("Annotated cannot be used with isinstance().")
+
+ def __subclasscheck__(self, cls):
+ raise TypeError("Annotated cannot be used with issubclass().")
+
+ class Annotated(metaclass=AnnotatedMeta):
+ """Add context specific metadata to a type.
+
+ Example: Annotated[int, runtime_check.Unsigned] indicates to the
+ hypothetical runtime_check module that this type is an unsigned int.
+ Every other consumer of this type can ignore this metadata and treat
+ this type as int.
+
+ The first argument to Annotated must be a valid type, the remaining
+ arguments are kept as a tuple in the __metadata__ field.
+
+ Details:
+
+ - It's an error to call `Annotated` with less than two arguments.
+ - Nested Annotated are flattened::
+
+ Annotated[Annotated[T, Ann1, Ann2], Ann3] == Annotated[T, Ann1, Ann2, Ann3]
+
+ - Instantiating an annotated type is equivalent to instantiating the
+ underlying type::
+
+ Annotated[C, Ann1](5) == C(5)
+
+ - Annotated can be used as a generic type alias::
+
+ Optimized = Annotated[T, runtime.Optimize()]
+ Optimized[int] == Annotated[int, runtime.Optimize()]
+
+ OptimizedList = Annotated[List[T], runtime.Optimize()]
+ OptimizedList[int] == Annotated[List[int], runtime.Optimize()]
+ """
+
+
+# Python 3.8 has get_origin() and get_args() but those implementations aren't
+# Annotated-aware, so we can't use those, only Python 3.9 versions will do.
+if sys.version_info[:2] >= (3, 9):
+ get_origin = typing.get_origin
+ get_args = typing.get_args
+elif PEP_560:
+ from typing import _GenericAlias # noqa
+
+ def get_origin(tp):
+ """Get the unsubscripted version of a type.
+
+ This supports generic types, Callable, Tuple, Union, Literal, Final, ClassVar
+ and Annotated. Return None for unsupported types. Examples::
+
+ get_origin(Literal[42]) is Literal
+ get_origin(int) is None
+ get_origin(ClassVar[int]) is ClassVar
+ get_origin(Generic) is Generic
+ get_origin(Generic[T]) is Generic
+ get_origin(Union[T, int]) is Union
+ get_origin(List[Tuple[T, T]][int]) == list
+ """
+ if isinstance(tp, _AnnotatedAlias):
+ return Annotated
+ if isinstance(tp, _GenericAlias):
+ return tp.__origin__
+ if tp is Generic:
+ return Generic
+ return None
+
+ def get_args(tp):
+ """Get type arguments with all substitutions performed.
+
+ For unions, basic simplifications used by Union constructor are performed.
+ Examples::
+ get_args(Dict[str, int]) == (str, int)
+ get_args(int) == ()
+ get_args(Union[int, Union[T, int], str][int]) == (int, str)
+ get_args(Union[int, Tuple[T, int]][str]) == (int, Tuple[str, int])
+ get_args(Callable[[], T][int]) == ([], int)
+ """
+ if isinstance(tp, _AnnotatedAlias):
+ return (tp.__origin__,) + tp.__metadata__
+ if isinstance(tp, _GenericAlias):
+ res = tp.__args__
+ if get_origin(tp) is collections.abc.Callable and res[0] is not Ellipsis:
+ res = (list(res[:-1]), res[-1])
+ return res
+ return ()
+
+
+if hasattr(typing, "TypeAlias"):
+ TypeAlias = typing.TypeAlias
+elif sys.version_info[:2] >= (3, 9):
+
+ class _TypeAliasForm(typing._SpecialForm, _root=True):
+ def __repr__(self):
+ return "typing_extensions." + self._name
+
+ @_TypeAliasForm
+ def TypeAlias(self, parameters):
+ """Special marker indicating that an assignment should
+ be recognized as a proper type alias definition by type
+ checkers.
+
+ For example::
+
+ Predicate: TypeAlias = Callable[..., bool]
+
+ It's invalid when used anywhere except as in the example above.
+ """
+ raise TypeError("{} is not subscriptable".format(self))
+
+
+elif sys.version_info[:2] >= (3, 7):
+
+ class _TypeAliasForm(typing._SpecialForm, _root=True):
+ def __repr__(self):
+ return "typing_extensions." + self._name
+
+ TypeAlias = _TypeAliasForm(
+ "TypeAlias",
+ doc="""Special marker indicating that an assignment should
+ be recognized as a proper type alias definition by type
+ checkers.
+
+ For example::
+
+ Predicate: TypeAlias = Callable[..., bool]
+
+ It's invalid when used anywhere except as in the example
+ above.""",
+ )
+
+elif hasattr(typing, "_FinalTypingBase"):
+
+ class _TypeAliasMeta(typing.TypingMeta):
+ """Metaclass for TypeAlias"""
+
+ def __repr__(self):
+ return "typing_extensions.TypeAlias"
+
+ class _TypeAliasBase(typing._FinalTypingBase, metaclass=_TypeAliasMeta, _root=True):
+ """Special marker indicating that an assignment should
+ be recognized as a proper type alias definition by type
+ checkers.
+
+ For example::
+
+ Predicate: TypeAlias = Callable[..., bool]
+
+ It's invalid when used anywhere except as in the example above.
+ """
+
+ __slots__ = ()
+
+ def __instancecheck__(self, obj):
+ raise TypeError("TypeAlias cannot be used with isinstance().")
+
+ def __subclasscheck__(self, cls):
+ raise TypeError("TypeAlias cannot be used with issubclass().")
+
+ def __repr__(self):
+ return "typing_extensions.TypeAlias"
+
+ TypeAlias = _TypeAliasBase(_root=True)
+else:
+
+ class _TypeAliasMeta(typing.TypingMeta):
+ """Metaclass for TypeAlias"""
+
+ def __instancecheck__(self, obj):
+ raise TypeError("TypeAlias cannot be used with isinstance().")
+
+ def __subclasscheck__(self, cls):
+ raise TypeError("TypeAlias cannot be used with issubclass().")
+
+ def __call__(self, *args, **kwargs):
+ raise TypeError("Cannot instantiate TypeAlias")
+
+ class TypeAlias(metaclass=_TypeAliasMeta, _root=True):
+ """Special marker indicating that an assignment should
+ be recognized as a proper type alias definition by type
+ checkers.
+
+ For example::
+
+ Predicate: TypeAlias = Callable[..., bool]
+
+ It's invalid when used anywhere except as in the example above.
+ """
+
+ __slots__ = ()
diff --git a/setup.cfg b/setup.cfg
index 2447a91f88f4e..29ae85f7985f7 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -68,6 +68,7 @@ omit =
*/tests/*
pandas/_typing.py
pandas/_version.py
+ pandas/_vendored/typing_extensions.py
plugins = Cython.Coverage
[coverage:report]
@@ -99,7 +100,7 @@ directory = coverage_html_report
# To be kept consistent with "Import Formatting" section in contributing.rst
[isort]
-known_pre_libs = pandas._config
+known_pre_libs = pandas._config,pandas._vendored
known_pre_core = pandas._libs,pandas._typing,pandas.util._*,pandas.compat,pandas.errors
known_dtypes = pandas.core.dtypes
known_post_core = pandas.tseries,pandas.io,pandas.plotting
@@ -113,7 +114,7 @@ combine_as_imports = True
line_length = 88
force_sort_within_sections = True
skip_glob = env,
-skip = pandas/__init__.py
+skip = pandas/__init__.py,pandas/_vendored/typing_extensions.py
[mypy]
ignore_missing_imports=True
@@ -124,6 +125,10 @@ warn_redundant_casts = True
warn_unused_ignores = True
show_error_codes = True
+[mypy-pandas._vendored.*]
+check_untyped_defs=False
+ignore_errors=True
+
[mypy-pandas.tests.*]
check_untyped_defs=False
| - [x] closes #34869
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Haven't figured out how to make isort and black ignore this file. | https://api.github.com/repos/pandas-dev/pandas/pulls/36000 | 2020-08-31T02:47:39Z | 2020-09-01T23:37:09Z | 2020-09-01T23:37:09Z | 2020-09-01T23:47:58Z |
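The vendored `Annotated` machinery in the diff above backports what `typing` itself gained in Python 3.9. A minimal sketch of the behavior it provides, run against the stdlib names on Python 3.9+ (the `scale` function is an illustrative example, not part of the PR):

```python
from typing import Annotated, get_args, get_origin, get_type_hints

def scale(x: Annotated[int, "unsigned"]) -> int:
    return x * 2

# By default, get_type_hints() strips annotations back to the bare type.
assert get_type_hints(scale)["x"] is int

# include_extras=True keeps the Annotated wrapper and its metadata tuple.
hinted = get_type_hints(scale, include_extras=True)["x"]
assert get_origin(hinted) is Annotated
assert get_args(hinted) == (int, "unsigned")

# Nested Annotated are flattened, as the docstring in the diff promises.
assert Annotated[Annotated[int, "a"], "b"] == Annotated[int, "a", "b"]
```

This is also why the comment in the diff notes that Python 3.8's `get_origin()`/`get_args()` won't do: they predate `Annotated` and don't special-case it.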
BUG: None in Float64Index raising TypeError, should return False | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index 9747a8ef3e71f..b907b8ac33516 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -29,7 +29,7 @@ Bug fixes
- Bug in :class:`Series` constructor raising a ``TypeError`` when constructing sparse datetime64 dtypes (:issue:`35762`)
- Bug in :meth:`DataFrame.apply` with ``result_type="reduce"`` returning with incorrect index (:issue:`35683`)
- Bug in :meth:`DateTimeIndex.format` and :meth:`PeriodIndex.format` with ``name=True`` setting the first item to ``"None"`` where it should be ``""`` (:issue:`35712`)
--
+- Bug in :meth:`Float64Index.__contains__` incorrectly raising ``TypeError`` instead of returning ``False`` (:issue:`35788`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index d6659cc1895b1..569562f5b5037 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -80,7 +80,11 @@ cdef class IndexEngine:
values = self._get_index_values()
self._check_type(val)
- loc = _bin_search(values, val) # .searchsorted(val, side='left')
+ try:
+ loc = _bin_search(values, val) # .searchsorted(val, side='left')
+ except TypeError:
+ # GH#35788 e.g. val=None with float64 values
+ raise KeyError(val)
if loc >= len(values):
raise KeyError(val)
if values[loc] != val:
diff --git a/pandas/tests/indexes/numeric/test_indexing.py b/pandas/tests/indexes/numeric/test_indexing.py
index 473e370c76f8b..508bd2f566507 100644
--- a/pandas/tests/indexes/numeric/test_indexing.py
+++ b/pandas/tests/indexes/numeric/test_indexing.py
@@ -228,6 +228,12 @@ def test_take_fill_value_ints(self, klass):
class TestContains:
+ @pytest.mark.parametrize("klass", [Float64Index, Int64Index, UInt64Index])
+ def test_contains_none(self, klass):
+ # GH#35788 should return False, not raise TypeError
+ index = klass([0, 1, 2, 3, 4])
+ assert None not in index
+
def test_contains_float64_nans(self):
index = Float64Index([1.0, 2.0, np.nan])
assert np.nan in index
| - [x] closes #35788
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35999 | 2020-08-30T23:24:59Z | 2020-09-01T01:20:41Z | 2020-09-01T01:20:41Z | 2020-09-01T15:03:03Z |
Issue35925 Remove trailing commas | diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 724558bd49ea2..274860b3fdb5c 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -1846,7 +1846,7 @@ def test_multilevel_index_loc_order(self, dim, keys, expected):
# GH 22797
# Try to respect order of keys given for MultiIndex.loc
kwargs = {dim: [["c", "a", "a", "b", "b"], [1, 1, 2, 1, 2]]}
- df = pd.DataFrame(np.arange(25).reshape(5, 5), **kwargs,)
+ df = pd.DataFrame(np.arange(25).reshape(5, 5), **kwargs)
exp_index = MultiIndex.from_arrays(expected)
if dim == "index":
res = df.loc[keys, :]
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 0d60e6e8a978f..c45e4508c6153 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -285,7 +285,7 @@ def test_nansum(self, skipna):
def test_nanmean(self, skipna):
self.check_funs(
- nanops.nanmean, np.mean, skipna, allow_obj=False, allow_date=False,
+ nanops.nanmean, np.mean, skipna, allow_obj=False, allow_date=False
)
def test_nanmean_overflow(self):
diff --git a/pandas/tests/window/moments/test_moments_consistency_rolling.py b/pandas/tests/window/moments/test_moments_consistency_rolling.py
index a3de8aa69f840..158b994cf03ae 100644
--- a/pandas/tests/window/moments/test_moments_consistency_rolling.py
+++ b/pandas/tests/window/moments/test_moments_consistency_rolling.py
@@ -95,7 +95,7 @@ def test_rolling_apply_consistency(
with warnings.catch_warnings():
warnings.filterwarnings(
- "ignore", message=".*(empty slice|0 for slice).*", category=RuntimeWarning,
+ "ignore", message=".*(empty slice|0 for slice).*", category=RuntimeWarning
)
# test consistency between rolling_xyz() and either (a)
# rolling_apply of Series.xyz(), or (b) rolling_apply of
@@ -107,7 +107,7 @@ def test_rolling_apply_consistency(
functions = no_nan_functions + base_functions
for (f, require_min_periods, name) in functions:
rolling_f = getattr(
- x.rolling(window=window, center=center, min_periods=min_periods), name,
+ x.rolling(window=window, center=center, min_periods=min_periods), name
)
if (
@@ -492,7 +492,7 @@ def test_moment_functions_zero_length_pairwise():
df2["a"] = df2["a"].astype("float64")
df1_expected = DataFrame(
- index=pd.MultiIndex.from_product([df1.index, df1.columns]), columns=Index([]),
+ index=pd.MultiIndex.from_product([df1.index, df1.columns]), columns=Index([])
)
df2_expected = DataFrame(
index=pd.MultiIndex.from_product(
@@ -635,7 +635,7 @@ def test_rolling_consistency(consistency_data, window, min_periods, center):
# with empty/0-length Series/DataFrames
with warnings.catch_warnings():
warnings.filterwarnings(
- "ignore", message=".*(empty slice|0 for slice).*", category=RuntimeWarning,
+ "ignore", message=".*(empty slice|0 for slice).*", category=RuntimeWarning
)
# test consistency between different rolling_* moments
diff --git a/pandas/tests/window/moments/test_moments_ewm.py b/pandas/tests/window/moments/test_moments_ewm.py
index 89d46a8bb6cb5..a83bfabc4a048 100644
--- a/pandas/tests/window/moments/test_moments_ewm.py
+++ b/pandas/tests/window/moments/test_moments_ewm.py
@@ -73,7 +73,7 @@ def simple_wma(s, w):
(s1, True, True, [(1.0 - alpha), np.nan, 1.0]),
(s1, False, False, [(1.0 - alpha) ** 2, np.nan, alpha]),
(s1, False, True, [(1.0 - alpha), np.nan, alpha]),
- (s2, True, False, [np.nan, (1.0 - alpha) ** 3, np.nan, np.nan, 1.0, np.nan],),
+ (s2, True, False, [np.nan, (1.0 - alpha) ** 3, np.nan, np.nan, 1.0, np.nan]),
(s2, True, True, [np.nan, (1.0 - alpha), np.nan, np.nan, 1.0, np.nan]),
(
s2,
@@ -95,7 +95,7 @@ def simple_wma(s, w):
alpha * ((1.0 - alpha) ** 2 + alpha),
],
),
- (s3, False, True, [(1.0 - alpha) ** 2, np.nan, (1.0 - alpha) * alpha, alpha],),
+ (s3, False, True, [(1.0 - alpha) ** 2, np.nan, (1.0 - alpha) * alpha, alpha]),
]:
expected = simple_wma(s, Series(w))
result = s.ewm(com=com, adjust=adjust, ignore_na=ignore_na).mean()
diff --git a/pandas/tests/window/moments/test_moments_rolling.py b/pandas/tests/window/moments/test_moments_rolling.py
index 81f020fe7de23..da256e80dff7e 100644
--- a/pandas/tests/window/moments/test_moments_rolling.py
+++ b/pandas/tests/window/moments/test_moments_rolling.py
@@ -150,14 +150,14 @@ def get_result(obj, window, min_periods=None, center=False):
series_xp = (
get_result(
- series.reindex(list(series.index) + s), window=25, min_periods=minp,
+ series.reindex(list(series.index) + s), window=25, min_periods=minp
)
.shift(-12)
.reindex(series.index)
)
frame_xp = (
get_result(
- frame.reindex(list(frame.index) + s), window=25, min_periods=minp,
+ frame.reindex(list(frame.index) + s), window=25, min_periods=minp
)
.shift(-12)
.reindex(frame.index)
@@ -169,14 +169,14 @@ def get_result(obj, window, min_periods=None, center=False):
else:
series_xp = (
get_result(
- series.reindex(list(series.index) + s), window=25, min_periods=0,
+ series.reindex(list(series.index) + s), window=25, min_periods=0
)
.shift(-12)
.reindex(series.index)
)
frame_xp = (
get_result(
- frame.reindex(list(frame.index) + s), window=25, min_periods=0,
+ frame.reindex(list(frame.index) + s), window=25, min_periods=0
)
.shift(-12)
.reindex(frame.index)
diff --git a/pandas/tests/window/test_base_indexer.py b/pandas/tests/window/test_base_indexer.py
index 2300d8dd5529b..ab73e075eed04 100644
--- a/pandas/tests/window/test_base_indexer.py
+++ b/pandas/tests/window/test_base_indexer.py
@@ -88,8 +88,8 @@ def get_window_bounds(self, num_values, min_periods, center, closed):
@pytest.mark.parametrize(
"func,np_func,expected,np_kwargs",
[
- ("count", len, [3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 2.0, np.nan], {},),
- ("min", np.min, [0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 6.0, 7.0, 8.0, np.nan], {},),
+ ("count", len, [3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 2.0, np.nan], {}),
+ ("min", np.min, [0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 6.0, 7.0, 8.0, np.nan], {}),
(
"max",
np.max,
@@ -204,7 +204,7 @@ def test_rolling_forward_skewness(constructor):
@pytest.mark.parametrize(
"func,expected",
[
- ("cov", [2.0, 2.0, 2.0, 97.0, 2.0, -93.0, 2.0, 2.0, np.nan, np.nan],),
+ ("cov", [2.0, 2.0, 2.0, 97.0, 2.0, -93.0, 2.0, 2.0, np.nan, np.nan]),
(
"corr",
[
diff --git a/pandas/tests/window/test_pairwise.py b/pandas/tests/window/test_pairwise.py
index e82d4b8cbf770..7425cc5df4c2f 100644
--- a/pandas/tests/window/test_pairwise.py
+++ b/pandas/tests/window/test_pairwise.py
@@ -195,7 +195,7 @@ def test_cov_mulittindex(self):
columns = MultiIndex.from_product([list("ab"), list("xy"), list("AB")])
index = range(3)
- df = DataFrame(np.arange(24).reshape(3, 8), index=index, columns=columns,)
+ df = DataFrame(np.arange(24).reshape(3, 8), index=index, columns=columns)
result = df.ewm(alpha=0.1).cov()
diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py
index 8d72e2cb92ca9..67b20fd2d6daa 100644
--- a/pandas/tests/window/test_rolling.py
+++ b/pandas/tests/window/test_rolling.py
@@ -73,7 +73,7 @@ def test_constructor_with_timedelta_window(window):
# GH 15440
n = 10
df = DataFrame(
- {"value": np.arange(n)}, index=pd.date_range("2015-12-24", periods=n, freq="D"),
+ {"value": np.arange(n)}, index=pd.date_range("2015-12-24", periods=n, freq="D")
)
expected_data = np.append([0.0, 1.0], np.arange(3.0, 27.0, 3))
@@ -92,7 +92,7 @@ def test_constructor_timedelta_window_and_minperiods(window, raw):
# GH 15305
n = 10
df = DataFrame(
- {"value": np.arange(n)}, index=pd.date_range("2017-08-08", periods=n, freq="D"),
+ {"value": np.arange(n)}, index=pd.date_range("2017-08-08", periods=n, freq="D")
)
expected = DataFrame(
{"value": np.append([np.NaN, 1.0], np.arange(3.0, 27.0, 3))},
@@ -153,7 +153,7 @@ def test_closed_one_entry(func):
def test_closed_one_entry_groupby(func):
# GH24718
ser = pd.DataFrame(
- data={"A": [1, 1, 2], "B": [3, 2, 1]}, index=pd.date_range("2000", periods=3),
+ data={"A": [1, 1, 2], "B": [3, 2, 1]}, index=pd.date_range("2000", periods=3)
)
result = getattr(
ser.groupby("A", sort=False)["B"].rolling("10D", closed="left"), func
@@ -182,7 +182,7 @@ def test_closed_one_entry_groupby(func):
def test_closed_min_max_datetime(input_dtype, func, closed, expected):
# see gh-21704
ser = pd.Series(
- data=np.arange(10).astype(input_dtype), index=pd.date_range("2000", periods=10),
+ data=np.arange(10).astype(input_dtype), index=pd.date_range("2000", periods=10)
)
result = getattr(ser.rolling("3D", closed=closed), func)()
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index f80ff1a53cd69..8ef6dac2862db 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -548,7 +548,7 @@ def is_superperiod(source, target) -> bool:
def _maybe_coerce_freq(code) -> str:
- """ we might need to coerce a code to a rule_code
+ """we might need to coerce a code to a rule_code
and uppercase it
Parameters
diff --git a/pandas/util/_test_decorators.py b/pandas/util/_test_decorators.py
index 0dad8c7397e37..ca7b99492bbf7 100644
--- a/pandas/util/_test_decorators.py
+++ b/pandas/util/_test_decorators.py
@@ -186,10 +186,10 @@ def skip_if_no(package: str, min_version: Optional[str] = None):
is_platform_windows(), reason="not used on win32"
)
skip_if_has_locale = pytest.mark.skipif(
- _skip_if_has_locale(), reason=f"Specific locale is set {locale.getlocale()[0]}",
+ _skip_if_has_locale(), reason=f"Specific locale is set {locale.getlocale()[0]}"
)
skip_if_not_us_locale = pytest.mark.skipif(
- _skip_if_not_us_locale(), reason=f"Specific locale is set {locale.getlocale()[0]}",
+ _skip_if_not_us_locale(), reason=f"Specific locale is set {locale.getlocale()[0]}"
)
skip_if_no_scipy = pytest.mark.skipif(
_skip_if_no_scipy(), reason="Missing SciPy requirement"
| #35925
Files edited:
- pandas/tests/test_multilevel.py
- pandas/tests/test_nanops.py
- pandas/tests/window/moments/test_moments_consistency_rolling.py
- pandas/tests/window/moments/test_moments_ewm.py
- pandas/tests/window/moments/test_moments_rolling.py
- pandas/tests/window/test_base_indexer.py
- pandas/tests/window/test_pairwise.py
- pandas/tests/window/test_rolling.py
- pandas/tseries/frequencies.py
- pandas/util/_test_decorators.py | https://api.github.com/repos/pandas-dev/pandas/pulls/35996 | 2020-08-30T17:56:06Z | 2020-08-31T09:59:18Z | 2020-08-31T09:59:18Z | 2020-08-31T09:59:26Z |
TYP: typing errors in _xlsxwriter.py #35994 | diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 3cd0d721bbdc6..ead36c95556b1 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -653,7 +653,6 @@ def __new__(cls, path, engine=None, **kwargs):
return object.__new__(cls)
# declare external properties you can count on
- book = None
curr_sheet = None
path = None
diff --git a/pandas/io/excel/_odswriter.py b/pandas/io/excel/_odswriter.py
index 72f3d81b1c662..f39391ae1fe7f 100644
--- a/pandas/io/excel/_odswriter.py
+++ b/pandas/io/excel/_odswriter.py
@@ -25,7 +25,7 @@ def __init__(
super().__init__(path, mode=mode, **engine_kwargs)
- self.book: OpenDocumentSpreadsheet = OpenDocumentSpreadsheet()
+ self.book = OpenDocumentSpreadsheet()
self._style_dict: Dict[str, str] = {}
def save(self) -> None:
diff --git a/pandas/io/excel/_xlsxwriter.py b/pandas/io/excel/_xlsxwriter.py
index 85a1bb031f457..bdbb006ae93dc 100644
--- a/pandas/io/excel/_xlsxwriter.py
+++ b/pandas/io/excel/_xlsxwriter.py
@@ -1,3 +1,5 @@
+from typing import Dict, List, Tuple
+
import pandas._libs.json as json
from pandas.io.excel._base import ExcelWriter
@@ -8,7 +10,7 @@ class _XlsxStyler:
# Map from openpyxl-oriented styles to flatter xlsxwriter representation
# Ordering necessary for both determinism and because some are keyed by
# prefixes of others.
- STYLE_MAPPING = {
+ STYLE_MAPPING: Dict[str, List[Tuple[Tuple[str, ...], str]]] = {
"font": [
(("name",), "font_name"),
(("sz",), "font_size"),
@@ -170,7 +172,7 @@ def __init__(
**engine_kwargs,
):
# Use the xlsxwriter module as the Excel writer.
- import xlsxwriter
+ from xlsxwriter import Workbook
if mode == "a":
raise ValueError("Append mode is not supported with xlsxwriter!")
@@ -184,7 +186,7 @@ def __init__(
**engine_kwargs,
)
- self.book = xlsxwriter.Workbook(path, **engine_kwargs)
+ self.book = Workbook(path, **engine_kwargs)
def save(self):
"""
| - [x] closes #35994
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/35995 | 2020-08-30T17:18:37Z | 2020-08-31T18:22:35Z | 2020-08-31T18:22:35Z | 2020-08-31T20:03:42Z |
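The `STYLE_MAPPING` annotation in the diff is the standard remedy when mypy infers an unhelpfully wide type for a large dict literal: spell the container type out explicitly. A small sketch of the same annotation shape, with illustrative values rather than the real xlsxwriter mapping:

```python
from typing import Dict, List, Tuple

# Keys are style groups; each value is a list of (attribute-path, flat-name)
# pairs, where the path is a str tuple of varying length -- hence
# Tuple[str, ...] rather than a fixed-arity tuple.
STYLE_MAPPING: Dict[str, List[Tuple[Tuple[str, ...], str]]] = {
    "font": [(("name",), "font_name"), (("sz",), "font_size")],
    "border": [(("top", "style"), "top")],
}

def flat_names(group: str) -> List[str]:
    # With the annotation in place, mypy knows each pair unpacks to
    # (Tuple[str, ...], str) instead of falling back to object.
    return [flat for _path, flat in STYLE_MAPPING.get(group, [])]

assert flat_names("font") == ["font_name", "font_size"]
assert flat_names("fill") == []
```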
Updating fork | Updating fork (this PR was opened in the wrong repo by mistake; apologies for that.) | https://api.github.com/repos/pandas-dev/pandas/pulls/35993 | 2020-08-30T16:44:16Z | 2020-08-30T16:44:36Z | null | 2020-08-30T16:47:58Z |
TYP: check_untyped_defs core.dtypes.cast | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index f71fd0d406c54..e66f513e347a9 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -75,7 +75,7 @@ def _new_DatetimeIndex(cls, d):
+ [
method
for method in DatetimeArray._datetimelike_methods
- if method not in ("tz_localize",)
+ if method not in ("tz_localize", "tz_convert")
],
DatetimeArray,
wrap=True,
@@ -228,6 +228,11 @@ class DatetimeIndex(DatetimeTimedeltaMixin):
# --------------------------------------------------------------------
# methods that dispatch to array and wrap result in DatetimeIndex
+ @doc(DatetimeArray.tz_convert)
+ def tz_convert(self, tz) -> "DatetimeIndex":
+ arr = self._data.tz_convert(tz)
+ return type(self)._simple_new(arr, name=self.name)
+
@doc(DatetimeArray.tz_localize)
def tz_localize(
self, tz, ambiguous="raise", nonexistent="raise"
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 3c1fe6bacefcf..8fcc5f74ea897 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -307,9 +307,7 @@ def _convert_listlike_datetimes(
if not isinstance(arg, (DatetimeArray, DatetimeIndex)):
return DatetimeIndex(arg, tz=tz, name=name)
if tz == "utc":
- # error: Item "DatetimeIndex" of "Union[DatetimeArray, DatetimeIndex]" has
- # no attribute "tz_convert"
- arg = arg.tz_convert(None).tz_localize(tz) # type: ignore[union-attr]
+ arg = arg.tz_convert(None).tz_localize(tz)
return arg
elif is_datetime64_ns_dtype(arg_dtype):
diff --git a/setup.cfg b/setup.cfg
index aa1535a171f0a..2ba22e5aad3c7 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -157,9 +157,6 @@ check_untyped_defs=False
[mypy-pandas.core.computation.scope]
check_untyped_defs=False
-[mypy-pandas.core.dtypes.cast]
-check_untyped_defs=False
-
[mypy-pandas.core.frame]
check_untyped_defs=False
| https://api.github.com/repos/pandas-dev/pandas/pulls/35992 | 2020-08-30T15:35:14Z | 2020-08-31T01:31:23Z | 2020-08-31T01:31:22Z | 2020-08-31T08:07:07Z | |
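The new `DatetimeIndex.tz_convert` in the diff follows the usual dispatch-and-wrap shape: call the backing array's method, then rebuild the index type around the result with `_simple_new`, preserving the name. A plain-Python sketch of that shape (the `Mini*` names are hypothetical stand-ins, not pandas API):

```python
class MiniArray:
    """Hypothetical stand-in for DatetimeArray."""

    def __init__(self, values):
        self.values = list(values)

    def tz_convert(self, tz):
        # Real timezone math elided; return a new array to mimic the
        # "array method returns a new array" API shape.
        return MiniArray(self.values)

class MiniIndex:
    """Hypothetical stand-in for DatetimeIndex."""

    def __init__(self, data, name=None):
        self._data = data
        self.name = name

    @classmethod
    def _simple_new(cls, arr, name=None):
        return cls(arr, name=name)

    def tz_convert(self, tz):
        # Dispatch to the array, then wrap the result back into the
        # index type, keeping the name -- as in the diff above.
        arr = self._data.tz_convert(tz)
        return type(self)._simple_new(arr, name=self.name)

idx = MiniIndex(MiniArray([1, 2, 3]), name="ts")
out = idx.tz_convert("UTC")
assert isinstance(out, MiniIndex) and out.name == "ts"
```

Writing the wrapper explicitly on the index (instead of auto-generating it via `_datetimelike_methods`) is what lets the method carry a concrete `-> "DatetimeIndex"` return annotation, removing the `type: ignore` in `pandas/core/tools/datetimes.py`.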
TYP: misc typing in core\indexes\base.py | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 95bd757f1994e..27f9b577203ac 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1772,13 +1772,13 @@ def from_records(
arrays = [data[k] for k in columns]
else:
arrays = []
- arr_columns = []
+ arr_columns_list = []
for k, v in data.items():
if k in columns:
- arr_columns.append(k)
+ arr_columns_list.append(k)
arrays.append(v)
- arrays, arr_columns = reorder_arrays(arrays, arr_columns, columns)
+ arrays, arr_columns = reorder_arrays(arrays, arr_columns_list, columns)
elif isinstance(data, (np.ndarray, DataFrame)):
arrays, columns = to_arrays(data, columns)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index a07c3328def54..48b02fc525cc1 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -10,6 +10,8 @@
Hashable,
List,
Optional,
+ Sequence,
+ TypeVar,
Union,
)
import warnings
@@ -22,7 +24,7 @@
from pandas._libs.tslibs import OutOfBoundsDatetime, Timestamp
from pandas._libs.tslibs.period import IncompatibleFrequency
from pandas._libs.tslibs.timezones import tz_compare
-from pandas._typing import DtypeObj, Label
+from pandas._typing import AnyArrayLike, Dtype, DtypeObj, Label
from pandas.compat import set_function_name
from pandas.compat.numpy import function as nv
from pandas.errors import InvalidIndexError
@@ -98,7 +100,7 @@
)
if TYPE_CHECKING:
- from pandas import Series
+ from pandas import RangeIndex, Series
__all__ = ["Index"]
@@ -188,6 +190,9 @@ def _new_Index(cls, d):
return cls.__new__(cls, **d)
+_IndexT = TypeVar("_IndexT", bound="Index")
+
+
class Index(IndexOpsMixin, PandasObject):
"""
Immutable ndarray implementing an ordered, sliceable set. The basic object
@@ -787,7 +792,13 @@ def repeat(self, repeats, axis=None):
# --------------------------------------------------------------------
# Copying Methods
- def copy(self, name=None, deep=False, dtype=None, names=None):
+ def copy(
+ self: _IndexT,
+ name: Optional[Label] = None,
+ deep: bool = False,
+ dtype: Optional[Dtype] = None,
+ names: Optional[Sequence[Label]] = None,
+ ) -> _IndexT:
"""
Make a copy of this object.
@@ -949,10 +960,9 @@ def _format_with_header(
# could have nans
mask = isna(values)
if mask.any():
- result = np.array(result)
- result[mask] = na_rep
- # error: "List[str]" has no attribute "tolist"
- result = result.tolist() # type: ignore[attr-defined]
+ result_arr = np.array(result)
+ result_arr[mask] = na_rep
+ result = result_arr.tolist()
else:
result = trim_front(format_array(values, None, justify="left"))
return header + result
@@ -4913,7 +4923,13 @@ def _get_string_slice(self, key: str_t, use_lhs: bool = True, use_rhs: bool = Tr
# overridden in DatetimeIndex, TimedeltaIndex and PeriodIndex
raise NotImplementedError
- def slice_indexer(self, start=None, end=None, step=None, kind=None):
+ def slice_indexer(
+ self,
+ start: Optional[Label] = None,
+ end: Optional[Label] = None,
+ step: Optional[int] = None,
+ kind: Optional[str_t] = None,
+ ) -> slice:
"""
Compute the slice indexer for input labels and step.
@@ -5513,7 +5529,9 @@ def ensure_index_from_sequences(sequences, names=None):
return MultiIndex.from_arrays(sequences, names=names)
-def ensure_index(index_like, copy: bool = False):
+def ensure_index(
+ index_like: Union[AnyArrayLike, Sequence], copy: bool = False
+) -> Index:
"""
Ensure that we have an index from some index-like object.
@@ -5549,7 +5567,18 @@ def ensure_index(index_like, copy: bool = False):
index_like = index_like.copy()
return index_like
if hasattr(index_like, "name"):
- return Index(index_like, name=index_like.name, copy=copy)
+ # https://github.com/python/mypy/issues/1424
+ # error: Item "ExtensionArray" of "Union[ExtensionArray,
+ # Sequence[Any]]" has no attribute "name" [union-attr]
+ # error: Item "Sequence[Any]" of "Union[ExtensionArray, Sequence[Any]]"
+ # has no attribute "name" [union-attr]
+ # error: "Sequence[Any]" has no attribute "name" [attr-defined]
+ # error: Item "Sequence[Any]" of "Union[Series, Sequence[Any]]" has no
+ # attribute "name" [union-attr]
+ # error: Item "Sequence[Any]" of "Union[Any, Sequence[Any]]" has no
+ # attribute "name" [union-attr]
+ name = index_like.name # type: ignore[union-attr, attr-defined]
+ return Index(index_like, name=name, copy=copy)
if is_iterator(index_like):
index_like = list(index_like)
@@ -5604,7 +5633,7 @@ def _validate_join_method(method: str):
raise ValueError(f"do not recognize join method {method}")
-def default_index(n):
+def default_index(n: int) -> "RangeIndex":
from pandas.core.indexes.range import RangeIndex
return RangeIndex(0, n, name=None)
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 5d309ef7cd515..08f9bd51de77b 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -1,7 +1,7 @@
""" define the IntervalIndex """
from operator import le, lt
import textwrap
-from typing import Any, List, Optional, Tuple, Union
+from typing import TYPE_CHECKING, Any, List, Optional, Tuple, Union, cast
import numpy as np
@@ -56,6 +56,9 @@
from pandas.core.indexes.timedeltas import TimedeltaIndex, timedelta_range
from pandas.core.ops import get_op_result_name
+if TYPE_CHECKING:
+ from pandas import CategoricalIndex
+
_VALID_CLOSED = {"left", "right", "both", "neither"}
_index_doc_kwargs = dict(ibase._index_doc_kwargs)
@@ -786,6 +789,7 @@ def get_indexer(
right_indexer = self.right.get_indexer(target_as_index.right)
indexer = np.where(left_indexer == right_indexer, left_indexer, -1)
elif is_categorical_dtype(target_as_index.dtype):
+ target_as_index = cast("CategoricalIndex", target_as_index)
# get an indexer for unique categories then propagate to codes via take_1d
categories_indexer = self.get_indexer(target_as_index.categories)
indexer = take_1d(categories_indexer, target_as_index.codes, fill_value=-1)
| https://api.github.com/repos/pandas-dev/pandas/pulls/35991 | 2020-08-30T15:33:36Z | 2020-09-02T18:44:40Z | 2020-09-02T18:44:40Z | 2020-09-02T18:49:29Z | |
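The `_IndexT = TypeVar("_IndexT", bound="Index")` device added in this diff lets `Index.copy` be typed so that copying a subclass is inferred to return that same subclass. A standalone sketch of the pattern (toy class names, a free function instead of the `self: _IndexT` method annotation used in the diff):

```python
from typing import TypeVar


class Index:
    """Toy stand-in for an index class (illustrative, not pandas internals)."""

    def __init__(self, data) -> None:
        self.data = list(data)


# Bound TypeVar: any type accepted here must be Index or a subclass of it.
IndexT = TypeVar("IndexT", bound=Index)


def copy_index(idx: IndexT) -> IndexT:
    # Because the parameter and the return type share one TypeVar, a type
    # checker infers that copying a subclass yields that same subclass,
    # not the base Index.
    return type(idx)(idx.data)


class RangeLike(Index):
    pass


copied = copy_index(RangeLike([0, 1, 2]))
assert type(copied) is RangeLike
assert copied.data == [0, 1, 2]
```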
TYP: misc typing fixes for pandas\core\frame.py | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 95bd757f1994e..e14f757e02159 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1014,7 +1014,7 @@ def iterrows(self) -> Iterable[Tuple[Label, Series]]:
s = klass(v, index=columns, name=k)
yield k, s
- def itertuples(self, index=True, name="Pandas"):
+ def itertuples(self, index: bool = True, name: Optional[str] = "Pandas"):
"""
Iterate over DataFrame rows as namedtuples.
@@ -1088,7 +1088,11 @@ def itertuples(self, index=True, name="Pandas"):
arrays.extend(self.iloc[:, k] for k in range(len(self.columns)))
if name is not None:
- itertuple = collections.namedtuple(name, fields, rename=True)
+ # https://github.com/python/mypy/issues/9046
+ # error: namedtuple() expects a string literal as the first argument
+ itertuple = collections.namedtuple( # type: ignore[misc]
+ name, fields, rename=True
+ )
return map(itertuple._make, zip(*arrays))
# fallback to regular tuples
@@ -4591,7 +4595,7 @@ def set_index(
frame = self.copy()
arrays = []
- names = []
+ names: List[Label] = []
if append:
names = list(self.index.names)
if isinstance(self.index, MultiIndex):
| pandas\core\frame.py:1091: error: namedtuple() expects a string literal as the first argument [misc]
pandas\core\frame.py:4594: error: Need type annotation for 'names' (hint: "names: List[<type>] = ...") [var-annotated] | https://api.github.com/repos/pandas-dev/pandas/pulls/35990 | 2020-08-30T15:29:57Z | 2020-08-31T15:06:35Z | 2020-08-31T15:06:35Z | 2020-08-31T16:23:51Z |
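The `# type: ignore[misc]` in this diff works around mypy rejecting `namedtuple()` when the class name is not a string literal (python/mypy#9046); at runtime a dynamic name is perfectly valid, which is why the behavior is left unchanged. A small sketch of that runtime behavior (illustrative names, not pandas code):

```python
import collections


def make_row_class(name, fields):
    # mypy flags namedtuple() with a non-literal first argument, but the
    # call itself is fine at runtime; rename=True also sanitizes any
    # field names that are not valid identifiers.
    return collections.namedtuple(name, fields, rename=True)


Row = make_row_class("Pandas", ["Index", "a", "b"])
row = Row._make([0, 1.5, "x"])
assert row.a == 1.5
assert row._fields == ("Index", "a", "b")
```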
TST: Verify operators with IntegerArray and list-likes (22606) | diff --git a/pandas/tests/arithmetic/conftest.py b/pandas/tests/arithmetic/conftest.py
index 6286711ac6113..31e9a4c4bc44e 100644
--- a/pandas/tests/arithmetic/conftest.py
+++ b/pandas/tests/arithmetic/conftest.py
@@ -2,6 +2,7 @@
import pytest
import pandas as pd
+import pandas._testing as tm
# ------------------------------------------------------------------
# Helper Functions
@@ -239,5 +240,14 @@ def box_with_array(request):
return request.param
+@pytest.fixture(params=[pd.Index, pd.Series, tm.to_array, np.array, list], ids=id_func)
+def box_1d_array(request):
+ """
+ Fixture to test behavior for Index, Series, tm.to_array, numpy Array and list
+ classes
+ """
+ return request.param
+
+
# alias so we can use the same fixture for multiple parameters in a test
box_with_array2 = box_with_array
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index df98b43e11f4a..4472088fc6d14 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -11,11 +11,19 @@
import pytest
import pandas as pd
-from pandas import Index, Series, Timedelta, TimedeltaIndex
+from pandas import Index, Int64Index, Series, Timedelta, TimedeltaIndex, array
import pandas._testing as tm
from pandas.core import ops
+@pytest.fixture(params=[Index, Series, tm.to_array])
+def box_pandas_1d_array(request):
+ """
+ Fixture to test behavior for Index, Series and tm.to_array classes
+ """
+ return request.param
+
+
def adjust_negative_zero(zero, expected):
"""
Helper to adjust the expected result if we are dividing by -0.0
@@ -1340,3 +1348,33 @@ def test_dataframe_div_silenced():
)
with tm.assert_produces_warning(None):
pdf1.div(pdf2, fill_value=0)
+
+
+@pytest.mark.parametrize(
+ "data, expected_data",
+ [([0, 1, 2], [0, 2, 4])],
+)
+def test_integer_array_add_list_like(
+ box_pandas_1d_array, box_1d_array, data, expected_data
+):
+ # GH22606 Verify operators with IntegerArray and list-likes
+ arr = array(data, dtype="Int64")
+ container = box_pandas_1d_array(arr)
+ left = container + box_1d_array(data)
+ right = box_1d_array(data) + container
+
+ if Series == box_pandas_1d_array:
+ assert_function = tm.assert_series_equal
+ expected = Series(expected_data, dtype="Int64")
+ elif Series == box_1d_array:
+ assert_function = tm.assert_series_equal
+ expected = Series(expected_data, dtype="object")
+ elif Index in (box_pandas_1d_array, box_1d_array):
+ assert_function = tm.assert_index_equal
+ expected = Int64Index(expected_data)
+ else:
+ assert_function = tm.assert_numpy_array_equal
+ expected = np.array(expected_data, dtype="object")
+
+ assert_function(left, expected)
+ assert_function(right, expected)
| - [x] closes #22606
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35987 | 2020-08-30T12:48:40Z | 2020-10-07T03:15:05Z | 2020-10-07T03:15:05Z | 2020-10-07T03:15:09Z |
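The behavior this test exercises, in minimal form: arithmetic between a nullable-integer container and a plain list works elementwise and, when the pandas object carries the `Int64` dtype, the result keeps it (illustrative usage mirroring the test's `Series`/`list` case):

```python
import pandas as pd

data = [0, 1, 2]
arr = pd.array(data, dtype="Int64")

# A nullable-integer Series plus a plain list stays nullable Int64 ...
left = pd.Series(arr) + data
assert str(left.dtype) == "Int64"
assert left.tolist() == [0, 2, 4]

# ... and the operation also works with the Series on the right.
right = data + pd.Series(arr)
assert right.tolist() == [0, 2, 4]
```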
DOC: Add Notes about difference to numpy behaviour for ddof in std() GH35985 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 486bea7cd1b47..f488195dcaa86 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -10706,7 +10706,12 @@ def _doc_parms(cls):
Returns
-------
-%(name1)s or %(name2)s (if level specified)\n"""
+%(name1)s or %(name2)s (if level specified)
+
+Notes
+-----
+To have the same behaviour as `numpy.std`, use `ddof=0` (instead of the
+default `ddof=1`)\n"""
_bool_doc = """
%(desc)s
| - [x] closes #35985
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35986 | 2020-08-30T11:51:51Z | 2020-09-03T16:58:37Z | 2020-09-03T16:58:36Z | 2020-09-03T16:58:49Z |
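A quick illustration of the difference the added note documents: pandas defaults to the sample statistic (`ddof=1`) while numpy defaults to the population statistic (`ddof=0`), so the defaults disagree and `ddof=0` reconciles them:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# The defaults disagree: pandas uses ddof=1, numpy uses ddof=0.
assert not np.isclose(s.std(), np.std(s.to_numpy()))

# Passing ddof=0 reproduces numpy's result, as the added note says.
assert np.isclose(s.std(ddof=0), np.std(s.to_numpy()))
```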
ENH: Add axis argument to DataFrame.corr | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 55570341cf4e8..0db4a9e26246d 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -54,7 +54,7 @@ For example:
Other enhancements
^^^^^^^^^^^^^^^^^^
- :class:`Index` with object dtype supports division and multiplication (:issue:`34160`)
--
+- :meth:`DataFrame.corr` now allows an ``axis`` argument, set to 0 by default (correlation among columns) (:issue:`35002`)
-
.. _whatsnew_120.api_breaking.python:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 606bd4cc3b52d..0d663af052afa 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -20,6 +20,7 @@
TYPE_CHECKING,
Any,
AnyStr,
+ Callable,
Dict,
FrozenSet,
Hashable,
@@ -5787,7 +5788,7 @@ def nsmallest(self, n, columns, keep="first") -> "DataFrame":
population GDP alpha-2
Tuvalu 11300 38 TV
Anguilla 11300 311 AI
- Iceland 337000 17036 IS
+ Iceland 337000 17036 IS
When using ``keep='last'``, ties are resolved in reverse order:
@@ -8116,9 +8117,14 @@ def _series_round(s, decimals):
# ----------------------------------------------------------------------
# Statistical methods, etc.
- def corr(self, method="pearson", min_periods=1) -> "DataFrame":
+ def corr(
+ self,
+ method: Union[str, Callable[[np.ndarray, np.ndarray], np.float64]] = "pearson",
+ min_periods: Optional[int] = 1,
+ axis: Union[str, int] = 0,
+ ) -> "DataFrame":
"""
- Compute pairwise correlation of columns, excluding NA/null values.
+ Compute pairwise correlation of rows or columns, excluding NA/null values.
Parameters
----------
@@ -8140,6 +8146,12 @@ def corr(self, method="pearson", min_periods=1) -> "DataFrame":
to have a valid result. Currently only available for Pearson
and Spearman correlation.
+ axis : {0 or 'index', 1 or 'columns'}, default 0
+ The axis to use. 0 or 'index' to compute column-wise, 1 or 'columns' for
+ row-wise.
+
+ .. versionadded:: 1.2.0
+
Returns
-------
DataFrame
@@ -8162,12 +8174,22 @@ def corr(self, method="pearson", min_periods=1) -> "DataFrame":
dogs cats
dogs 1.0 0.3
cats 0.3 1.0
+ >>> df.corr(method=histogram_intersection, axis=1)
+ 0 1 2 3
+ 0 1.0 0.3 0.2 0.3
+ 1 0.3 1.0 0.0 0.1
+ 2 0.2 0.0 1.0 0.2
+ 3 0.3 0.1 0.2 1.0
"""
numeric_df = self._get_numeric_data()
- cols = numeric_df.columns
+ axis = numeric_df._get_axis_number(axis)
+ cols = numeric_df._get_agg_axis(axis)
idx = cols.copy()
mat = numeric_df.to_numpy(dtype=float, na_value=np.nan, copy=False)
+ if axis == 1:
+ mat = mat.transpose()
+
if method == "pearson":
correl = libalgos.nancorr(mat, minp=min_periods)
elif method == "spearman":
diff --git a/pandas/tests/frame/methods/test_cov_corr.py b/pandas/tests/frame/methods/test_cov_corr.py
index d3548b639572d..92e309320f430 100644
--- a/pandas/tests/frame/methods/test_cov_corr.py
+++ b/pandas/tests/frame/methods/test_cov_corr.py
@@ -175,6 +175,15 @@ def test_corr_int(self):
df3.cov()
df3.corr()
+ @td.skip_if_no_scipy
+ @pytest.mark.parametrize("meth", ["pearson", "spearman", "kendall"])
+ def test_corr_axes(self, meth):
+ # https://github.com/pandas-dev/pandas/issues/35002
+ df = pd.DataFrame(np.random.normal(size=(10, 4)))
+ expected = df.T.corr(meth, axis=0)
+ result = df.corr(meth, axis=1)
+ tm.assert_frame_equal(result, expected)
+
@td.skip_if_no_scipy
@pytest.mark.parametrize(
"nullable_column", [pd.array([1, 2, 3]), pd.array([1, 2, None])]
| - [x] closes #35002
- [x] tests added / passed
| https://api.github.com/repos/pandas-dev/pandas/pulls/35984 | 2020-08-30T07:12:04Z | 2020-10-29T04:13:02Z | null | 2020-10-29T04:13:02Z |
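Note that this PR was closed without being merged (`merged_at` is null), so the proposed `axis=` keyword is not part of released `DataFrame.corr`. The row-wise correlation it proposes can be computed today by transposing first, which is exactly the equivalence the PR's own test asserts:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.default_rng(0).normal(size=(10, 4)))

# Row-wise correlation via transpose: the 10 rows of `df` become the
# columns of `df.T`, so the result correlates rows with rows.
rowwise = df.T.corr()
assert rowwise.shape == (10, 10)
assert np.allclose(np.diag(rowwise.to_numpy()), 1.0)
```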
CLN: window/rolling.py | diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 39fcfcbe2bff6..05cc996178051 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -377,23 +377,13 @@ def _prep_values(self, values: Optional[np.ndarray] = None) -> np.ndarray:
return values
- def _wrap_result(self, result, block=None, obj=None):
+ def _wrap_result(self, result: np.ndarray) -> "Series":
"""
- Wrap a single result.
+ Wrap a single 1D result.
"""
- if obj is None:
- obj = self._selected_obj
- index = obj.index
+ obj = self._selected_obj
- if isinstance(result, np.ndarray):
-
- if result.ndim == 1:
- from pandas import Series
-
- return Series(result, index, name=obj.name)
-
- return type(obj)(result, index=index, columns=block.columns)
- return result
+ return obj._constructor(result, obj.index, name=obj.name)
def _wrap_results(self, results, obj, skipped: List[int]) -> FrameOrSeriesUnion:
"""
@@ -454,7 +444,7 @@ def _insert_on_column(self, result: "DataFrame", obj: "DataFrame"):
# insert at the end
result[name] = extra_col
- def _center_window(self, result, window) -> np.ndarray:
+ def _center_window(self, result: np.ndarray, window) -> np.ndarray:
"""
Center the result in the window.
"""
@@ -513,7 +503,6 @@ def _apply_series(self, homogeneous_func: Callable[..., ArrayLike]) -> "Series":
Series version of _apply_blockwise
"""
_, obj = self._create_blocks(self._selected_obj)
- values = obj.values
try:
values = self._prep_values(obj.values)
@@ -535,7 +524,7 @@ def _apply_blockwise(
# This isn't quite blockwise, since `blocks` is actually a collection
# of homogenenous DataFrames.
- blocks, obj = self._create_blocks(self._selected_obj)
+ _, obj = self._create_blocks(self._selected_obj)
mgr = obj._mgr
def hfunc(bvalues: ArrayLike) -> ArrayLike:
| - [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/35982 | 2020-08-30T03:24:03Z | 2020-08-31T20:42:06Z | 2020-08-31T20:42:06Z | 2020-08-31T22:11:38Z |
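The simplified `_wrap_result` boils down to rebuilding a Series from a 1D ndarray with the caller's index and name via the `_constructor` hook. A sketch of that pattern (note `_constructor` is a pandas-internal attribute, used here only to illustrate what the cleanup relies on):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0], index=["a", "b", "c"], name="x")

# Rebuild a Series of the caller's type from a raw 1D result, keeping
# the original index and name, as the cleaned-up _wrap_result does.
result = s._constructor(s.to_numpy() * 2, index=s.index, name=s.name)

assert isinstance(result, pd.Series)
assert result.name == "x"
assert result.tolist() == [2.0, 4.0, 6.0]
```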
DOC clean up doc/source/getting_started/overview.rst | diff --git a/doc/source/getting_started/overview.rst b/doc/source/getting_started/overview.rst
index d8a40c5406dee..032ba73a7293d 100644
--- a/doc/source/getting_started/overview.rst
+++ b/doc/source/getting_started/overview.rst
@@ -9,9 +9,9 @@ Package overview
**pandas** is a `Python <https://www.python.org>`__ package providing fast,
flexible, and expressive data structures designed to make working with
"relational" or "labeled" data both easy and intuitive. It aims to be the
-fundamental high-level building block for doing practical, **real world** data
+fundamental high-level building block for doing practical, **real-world** data
analysis in Python. Additionally, it has the broader goal of becoming **the
-most powerful and flexible open source data analysis / manipulation tool
+most powerful and flexible open source data analysis/manipulation tool
available in any language**. It is already well on its way toward this goal.
pandas is well suited for many different kinds of data:
@@ -21,7 +21,7 @@ pandas is well suited for many different kinds of data:
- Ordered and unordered (not necessarily fixed-frequency) time series data.
- Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
column labels
- - Any other form of observational / statistical data sets. The data actually
+ - Any other form of observational / statistical data sets. The data
need not be labeled at all to be placed into a pandas data structure
The two primary data structures of pandas, :class:`Series` (1-dimensional)
@@ -57,7 +57,7 @@ Here are just a few of the things that pandas does well:
Excel files, databases, and saving / loading data from the ultrafast **HDF5
format**
- **Time series**-specific functionality: date range generation and frequency
- conversion, moving window statistics, date shifting and lagging.
+ conversion, moving window statistics, date shifting, and lagging.
Many of these principles are here to address the shortcomings frequently
experienced using other languages / scientific research environments. For data
@@ -101,12 +101,12 @@ fashion.
Also, we would like sensible default behaviors for the common API functions
which take into account the typical orientation of time series and
-cross-sectional data sets. When using ndarrays to store 2- and 3-dimensional
+cross-sectional data sets. When using the N-dimensional array (ndarrays) to store 2- and 3-dimensional
data, a burden is placed on the user to consider the orientation of the data
set when writing functions; axes are considered more or less equivalent (except
when C- or Fortran-contiguousness matters for performance). In pandas, the axes
are intended to lend more semantic meaning to the data; i.e., for a particular
-data set there is likely to be a "right" way to orient the data. The goal,
+data set, there is likely to be a "right" way to orient the data. The goal,
then, is to reduce the amount of mental effort required to code up data
transformations in downstream functions.
@@ -148,8 +148,8 @@ pandas possible. Thanks to `all of our contributors <https://github.com/pandas-d
If you're interested in contributing, please visit the :ref:`contributing guide <contributing>`.
pandas is a `NumFOCUS <https://www.numfocus.org/open-source-projects/>`__ sponsored project.
-This will help ensure the success of development of pandas as a world-class open-source
-project, and makes it possible to `donate <https://pandas.pydata.org/donate.html>`__ to the project.
+This will help ensure the success of the development of pandas as a world-class open-source
+project and makes it possible to `donate <https://pandas.pydata.org/donate.html>`__ to the project.
Project governance
------------------
| - [x] closes #35980
| https://api.github.com/repos/pandas-dev/pandas/pulls/35981 | 2020-08-29T22:26:24Z | 2020-08-31T18:24:05Z | 2020-08-31T18:24:05Z | 2020-08-31T18:24:15Z |
BUG: Respect errors="ignore" during extension astype | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index b8f6d0e52d058..944f6f268e867 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -33,6 +33,7 @@ Bug fixes
- Bug in :meth:`DataFrame.eval` with ``object`` dtype column binary operations (:issue:`35794`)
- Bug in :class:`Series` constructor raising a ``TypeError`` when constructing sparse datetime64 dtypes (:issue:`35762`)
- Bug in :meth:`DataFrame.apply` with ``result_type="reduce"`` returning with incorrect index (:issue:`35683`)
+- Bug in :meth:`Series.astype` and :meth:`DataFrame.astype` not respecting the ``errors`` argument when set to ``"ignore"`` for extension dtypes (:issue:`35471`)
- Bug in :meth:`DateTimeIndex.format` and :meth:`PeriodIndex.format` with ``name=True`` setting the first item to ``"None"`` where it should be ``""`` (:issue:`35712`)
- Bug in :meth:`Float64Index.__contains__` incorrectly raising ``TypeError`` instead of returning ``False`` (:issue:`35788`)
- Bug in :class:`Series` constructor incorrectly raising a ``TypeError`` when passed an ordered set (:issue:`36044`)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 9f4e535dc787d..263c7c2b6940a 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -581,8 +581,13 @@ def astype(self, dtype, copy: bool = False, errors: str = "raise"):
# force the copy here
if self.is_extension:
- # TODO: Should we try/except this astype?
- values = self.values.astype(dtype)
+ try:
+ values = self.values.astype(dtype)
+ except (ValueError, TypeError):
+ if errors == "ignore":
+ values = self.values
+ else:
+ raise
else:
if issubclass(dtype.type, str):
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index b0fd0496ea81e..d3f256259b15f 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -8,6 +8,7 @@
CategoricalDtype,
DataFrame,
DatetimeTZDtype,
+ Interval,
IntervalDtype,
NaT,
Series,
@@ -565,3 +566,24 @@ def test_astype_empty_dtype_dict(self):
result = df.astype(dict())
tm.assert_frame_equal(result, df)
assert result is not df
+
+ @pytest.mark.parametrize(
+ "df",
+ [
+ DataFrame(Series(["x", "y", "z"], dtype="string")),
+ DataFrame(Series(["x", "y", "z"], dtype="category")),
+ DataFrame(Series(3 * [Timestamp("2020-01-01", tz="UTC")])),
+ DataFrame(Series(3 * [Interval(0, 1)])),
+ ],
+ )
+ @pytest.mark.parametrize("errors", ["raise", "ignore"])
+ def test_astype_ignores_errors_for_extension_dtypes(self, df, errors):
+ # https://github.com/pandas-dev/pandas/issues/35471
+ if errors == "ignore":
+ expected = df
+ result = df.astype(float, errors=errors)
+ tm.assert_frame_equal(result, expected)
+ else:
+ msg = "(Cannot cast)|(could not convert)"
+ with pytest.raises((ValueError, TypeError), match=msg):
+ df.astype(float, errors=errors)
diff --git a/pandas/tests/series/methods/test_astype.py b/pandas/tests/series/methods/test_astype.py
index 9fdc4179de2e1..b9d90a9fc63dd 100644
--- a/pandas/tests/series/methods/test_astype.py
+++ b/pandas/tests/series/methods/test_astype.py
@@ -1,4 +1,6 @@
-from pandas import Series, date_range
+import pytest
+
+from pandas import Interval, Series, Timestamp, date_range
import pandas._testing as tm
@@ -23,3 +25,24 @@ def test_astype_dt64tz_to_str(self):
dtype=object,
)
tm.assert_series_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "values",
+ [
+ Series(["x", "y", "z"], dtype="string"),
+ Series(["x", "y", "z"], dtype="category"),
+ Series(3 * [Timestamp("2020-01-01", tz="UTC")]),
+ Series(3 * [Interval(0, 1)]),
+ ],
+ )
+ @pytest.mark.parametrize("errors", ["raise", "ignore"])
+ def test_astype_ignores_errors_for_extension_dtypes(self, values, errors):
+ # https://github.com/pandas-dev/pandas/issues/35471
+ if errors == "ignore":
+ expected = values
+ result = values.astype(float, errors="ignore")
+ tm.assert_series_equal(result, expected)
+ else:
+ msg = "(Cannot cast)|(could not convert)"
+ with pytest.raises((ValueError, TypeError), match=msg):
+ values.astype(float, errors=errors)
| - [x] closes https://github.com/pandas-dev/pandas/issues/35471
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35979 | 2020-08-29T21:24:13Z | 2020-09-06T16:59:44Z | 2020-09-06T16:59:44Z | 2020-09-27T03:27:54Z |
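A minimal sketch of the fixed behavior, using one of the extension dtypes from the PR's tests: with the default `errors="raise"` a failed cast propagates, while `errors="ignore"` now returns the values unchanged for extension dtypes too:

```python
import pandas as pd

s = pd.Series(["x", "y", "z"], dtype="category")

# The default errors="raise" propagates the failed cast ...
try:
    s.astype(float)
    raised = False
except (TypeError, ValueError):
    raised = True
assert raised

# ... while errors="ignore" returns the original values instead of raising.
result = s.astype(float, errors="ignore")
assert result.equals(s)
```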
TYP: annotate plotting._matplotlib.converter | diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py
index 214a67690d695..3db7c38eced65 100644
--- a/pandas/plotting/_matplotlib/converter.py
+++ b/pandas/plotting/_matplotlib/converter.py
@@ -2,7 +2,7 @@
import datetime as pydt
from datetime import datetime, timedelta, tzinfo
import functools
-from typing import Optional, Tuple
+from typing import Any, List, Optional, Tuple
from dateutil.relativedelta import relativedelta
import matplotlib.dates as dates
@@ -144,7 +144,7 @@ def convert(value, unit, axis):
return value
@staticmethod
- def axisinfo(unit, axis):
+ def axisinfo(unit, axis) -> Optional[units.AxisInfo]:
if unit != "time":
return None
@@ -294,7 +294,7 @@ def try_parse(values):
return values
@staticmethod
- def axisinfo(unit, axis):
+ def axisinfo(unit: Optional[tzinfo], axis) -> units.AxisInfo:
"""
Return the :class:`~matplotlib.units.AxisInfo` for *unit*.
@@ -473,7 +473,7 @@ def _get_default_annual_spacing(nyears) -> Tuple[int, int]:
return (min_spacing, maj_spacing)
-def period_break(dates, period):
+def period_break(dates: PeriodIndex, period: str) -> np.ndarray:
"""
Returns the indices where the given period changes.
@@ -489,7 +489,7 @@ def period_break(dates, period):
return np.nonzero(current - previous)[0]
-def has_level_label(label_flags, vmin):
+def has_level_label(label_flags: np.ndarray, vmin: float) -> bool:
"""
Returns true if the ``label_flags`` indicate there is at least one label
for this level.
@@ -984,18 +984,24 @@ class TimeSeries_DateFormatter(Formatter):
----------
freq : {int, string}
Valid frequency specifier.
- minor_locator : {False, True}
+ minor_locator : bool, default False
Whether the current formatter should apply to minor ticks (True) or
major ticks (False).
- dynamic_mode : {True, False}
+ dynamic_mode : bool, default True
Whether the formatter works in dynamic mode or not.
"""
- def __init__(self, freq, minor_locator=False, dynamic_mode=True, plot_obj=None):
+ def __init__(
+ self,
+ freq,
+ minor_locator: bool = False,
+ dynamic_mode: bool = True,
+ plot_obj=None,
+ ):
freq = to_offset(freq)
self.format = None
self.freq = freq
- self.locs = []
+ self.locs: List[Any] = [] # unused, for matplotlib compat
self.formatdict = None
self.isminor = minor_locator
self.isdynamic = dynamic_mode
| https://api.github.com/repos/pandas-dev/pandas/pulls/35978 | 2020-08-29T18:01:24Z | 2020-08-30T13:23:45Z | 2020-08-30T13:23:45Z | 2020-08-30T15:06:17Z | |
ENH: Optimize nrows in read_excel | diff --git a/asv_bench/benchmarks/io/excel.py b/asv_bench/benchmarks/io/excel.py
index 80af2cff41769..1eaccb9f2d897 100644
--- a/asv_bench/benchmarks/io/excel.py
+++ b/asv_bench/benchmarks/io/excel.py
@@ -11,7 +11,7 @@
def _generate_dataframe():
- N = 2000
+ N = 20000
C = 5
df = DataFrame(
np.random.randn(N, C),
@@ -69,5 +69,9 @@ def time_read_excel(self, engine):
fname = self.fname_odf if engine == "odf" else self.fname_excel
read_excel(fname, engine=engine)
+ def time_read_excel_nrows(self, engine):
+ fname = self.fname_odf if engine == "odf" else self.fname_excel
+ read_excel(fname, engine=engine, nrows=1)
+
from ..pandas_vb_common import setup # noqa: F401 isort:skip
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index dbc88d0b371e8..e28ecc16fcb7b 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -207,6 +207,7 @@ Performance improvements
- Performance improvements when creating Series with dtype `str` or :class:`StringDtype` from array with many string elements (:issue:`36304`, :issue:`36317`)
- Performance improvement in :meth:`GroupBy.agg` with the ``numba`` engine (:issue:`35759`)
+- Performance improvement in `read_excel` for when ``nrows`` is much smaller than the length of the file (:issue:`33281`).
- Performance improvements when creating :meth:`pd.Series.map` from a huge dictionary (:issue:`34717`)
- Performance improvement in :meth:`GroupBy.transform` with the ``numba`` engine (:issue:`36240`)
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 65e95fd321772..e80072fad8896 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -3,12 +3,12 @@
from io import BufferedIOBase, BytesIO, RawIOBase
import os
from textwrap import fill
-from typing import Any, Mapping, Union
+from typing import Any, List, Mapping, Optional, Union
from pandas._config import config
from pandas._libs.parsers import STR_NA_VALUES
-from pandas._typing import StorageOptions
+from pandas._typing import Scalar, StorageOptions
from pandas.errors import EmptyDataError
from pandas.util._decorators import Appender, deprecate_nonkeyword_arguments
@@ -394,7 +394,14 @@ def get_sheet_by_index(self, index):
pass
@abc.abstractmethod
- def get_sheet_data(self, sheet, convert_float):
+ def get_sheet_data(
+ self,
+ sheet,
+ convert_float: bool,
+ header_nrows: int,
+ skiprows_nrows: int,
+ nrows: Optional[int],
+ ) -> List[List[Scalar]]:
pass
def parse(
@@ -450,7 +457,22 @@ def parse(
else: # assume an integer if not a string
sheet = self.get_sheet_by_index(asheetname)
- data = self.get_sheet_data(sheet, convert_float)
+ if isinstance(header, int):
+ header_nrows = header
+ elif header is None:
+ header_nrows = 0
+ else:
+ header_nrows = max(header)
+ if isinstance(skiprows, int):
+ skiprows_nrows = skiprows
+ elif skiprows is None:
+ skiprows_nrows = 0
+ else:
+ skiprows_nrows = len(skiprows)
+
+ data = self.get_sheet_data(
+ sheet, convert_float, header_nrows, skiprows_nrows, nrows
+ )
usecols = maybe_convert_usecols(usecols)
if not data:
diff --git a/pandas/io/excel/_odfreader.py b/pandas/io/excel/_odfreader.py
index ffb599cdfaaf8..6b3bf4f1375ad 100644
--- a/pandas/io/excel/_odfreader.py
+++ b/pandas/io/excel/_odfreader.py
@@ -1,4 +1,4 @@
-from typing import List, cast
+from typing import List, Optional, cast
import numpy as np
@@ -71,7 +71,14 @@ def get_sheet_by_name(self, name: str):
raise ValueError(f"sheet {name} not found")
- def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:
+ def get_sheet_data(
+ self,
+ sheet,
+ convert_float: bool,
+ header_nrows: int,
+ skiprows_nrows: int,
+ nrows: Optional[int],
+ ) -> List[List[Scalar]]:
"""
Parse an ODF Table into a list of lists
"""
@@ -87,6 +94,8 @@ def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:
table: List[List[Scalar]] = []
+ if isinstance(nrows, int):
+ sheet_rows = sheet_rows[: header_nrows + skiprows_nrows + nrows + 1]
for i, sheet_row in enumerate(sheet_rows):
sheet_cells = [x for x in sheet_row.childNodes if x.qname in cell_names]
empty_cells = 0
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
index a5cadf4d93389..bc7b168eeaaa2 100644
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -508,7 +508,14 @@ def _convert_cell(self, cell, convert_float: bool) -> Scalar:
return cell.value
- def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:
+ def get_sheet_data(
+ self,
+ sheet,
+ convert_float: bool,
+ header_nrows: int,
+ skiprows_nrows: int,
+ nrows: Optional[int],
+ ) -> List[List[Scalar]]:
data: List[List[Scalar]] = []
for row in sheet.rows:
data.append([self._convert_cell(cell, convert_float) for cell in row])
diff --git a/pandas/io/excel/_pyxlsb.py b/pandas/io/excel/_pyxlsb.py
index ac94f4dd3df74..cf3dcebdff6eb 100644
--- a/pandas/io/excel/_pyxlsb.py
+++ b/pandas/io/excel/_pyxlsb.py
@@ -1,4 +1,4 @@
-from typing import List
+from typing import List, Optional
from pandas._typing import FilePathOrBuffer, Scalar, StorageOptions
from pandas.compat._optional import import_optional_dependency
@@ -68,7 +68,14 @@ def _convert_cell(self, cell, convert_float: bool) -> Scalar:
return cell.v
- def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:
+ def get_sheet_data(
+ self,
+ sheet,
+ convert_float: bool,
+ header_nrows: int,
+ skiprows_nrows: int,
+ nrows: Optional[int],
+ ) -> List[List[Scalar]]:
return [
[self._convert_cell(c, convert_float) for c in r]
for r in sheet.rows(sparse=False)
diff --git a/pandas/io/excel/_xlrd.py b/pandas/io/excel/_xlrd.py
index dfd5dde0329ae..e5d0d66f9570a 100644
--- a/pandas/io/excel/_xlrd.py
+++ b/pandas/io/excel/_xlrd.py
@@ -1,8 +1,9 @@
from datetime import time
+from typing import List, Optional
import numpy as np
-from pandas._typing import StorageOptions
+from pandas._typing import Scalar, StorageOptions
from pandas.compat._optional import import_optional_dependency
from pandas.io.excel._base import BaseExcelReader
@@ -49,7 +50,14 @@ def get_sheet_by_name(self, name):
def get_sheet_by_index(self, index):
return self.book.sheet_by_index(index)
- def get_sheet_data(self, sheet, convert_float):
+ def get_sheet_data(
+ self,
+ sheet,
+ convert_float: bool,
+ header_nrows: int,
+ skiprows_nrows: int,
+ nrows: Optional[int],
+ ) -> List[List[Scalar]]:
from xlrd import (
XL_CELL_BOOLEAN,
XL_CELL_DATE,
@@ -98,9 +106,14 @@ def _parse_cell(cell_contents, cell_typ):
cell_contents = val
return cell_contents
- data = []
+ data: List[List[Scalar]] = []
- for i in range(sheet.nrows):
+ sheet_nrows = sheet.nrows
+
+ if isinstance(nrows, int):
+ sheet_nrows = min(header_nrows + skiprows_nrows + nrows + 1, sheet_nrows)
+
+ for i in range(sheet_nrows):
row = [
_parse_cell(value, typ)
for value, typ in zip(sheet.row_values(i), sheet.row_types(i))
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index 431a50477fccc..b312f67349658 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -1153,5 +1153,21 @@ def test_read_datetime_multiindex(self, engine, read_ext):
],
)
expected = pd.DataFrame([], columns=expected_column_index)
+ tm.assert_frame_equal(expected, actual)
+ @pytest.mark.parametrize(
+ "header, skiprows", [(1, 2), (0, 3), (1, [0, 1]), ([2], 1)]
+ )
+ @td.check_file_leaks
+ def test_header_skiprows_nrows(self, engine, read_ext, header, skiprows):
+ # GH 32727
+ data = pd.read_excel("test1" + read_ext, engine=engine)
+ expected = (
+ DataFrame(data.iloc[3:6])
+ .reset_index(drop=True)
+ .rename(columns=data.iloc[2].rename(None))
+ )
+ actual = pd.read_excel(
+ "test1" + read_ext, engine=engine, header=header, skiprows=skiprows, nrows=3
+ )
tm.assert_frame_equal(expected, actual)
| - [ ] closes #32727
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
based on #33281
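The row-limiting logic this PR threads through each engine's `get_sheet_data` boils down to one computation; a minimal sketch (the names mirror the diff, but this standalone helper is illustrative, not part of the PR):

```python
from typing import Optional


def capped_row_count(
    sheet_nrows: int,
    header_nrows: int,
    skiprows_nrows: int,
    nrows: Optional[int],
) -> int:
    """Number of sheet rows that actually need to be parsed."""
    if isinstance(nrows, int):
        # Only the header rows, the skipped rows, and the requested data
        # rows (plus one row of slack) need to be read from the sheet.
        return min(header_nrows + skiprows_nrows + nrows + 1, sheet_nrows)
    # Without ``nrows`` the whole sheet is parsed, as before.
    return sheet_nrows
```

This is why the `odf` timing drops from ~6s to ~4.6s in the `time_read_excel_nrows` benchmark: the reader stops materialising rows past the cap instead of parsing the entire sheet.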
----
output of asv benchmarks:
```
(pandas-dev) marco@marco-Predator-PH315-52:~/pandas-dev/asv_bench$ asv continuous -f 1.1 upstream/master optimise-nrows-excel -b excel.ReadExcel
· Creating environments..................................................................................................................................
· Discovering benchmarks
·· Uninstalling from conda-py3.8-Cython0.29.16-jinja2-matplotlib-numba-numexpr-numpy-odfpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt
·· Building d0a8a687 <optimise-nrows-excel> for conda-py3.8-Cython0.29.16-jinja2-matplotlib-numba-numexpr-numpy-odfpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt....................................
·· Installing d0a8a687 <optimise-nrows-excel> into conda-py3.8-Cython0.29.16-jinja2-matplotlib-numba-numexpr-numpy-odfpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt..
· Running 4 total benchmarks (2 commits * 1 environments * 2 benchmarks)
[ 0.00%] · For pandas commit c413df6d <master> (round 1/2):
[ 0.00%] ·· Building for conda-py3.8-Cython0.29.16-jinja2-matplotlib-numba-numexpr-numpy-odfpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt....................................
[ 0.00%] ·· Benchmarking conda-py3.8-Cython0.29.16-jinja2-matplotlib-numba-numexpr-numpy-odfpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt
[ 12.50%] ··· Setting up io.excel:62 ok
[ 12.50%] ··· Running (io.excel.ReadExcel.time_read_excel--)..
[ 25.00%] · For pandas commit d0a8a687 <optimise-nrows-excel> (round 1/2):
[ 25.00%] ·· Building for conda-py3.8-Cython0.29.16-jinja2-matplotlib-numba-numexpr-numpy-odfpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt..
[ 25.00%] ·· Benchmarking conda-py3.8-Cython0.29.16-jinja2-matplotlib-numba-numexpr-numpy-odfpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt
[ 37.50%] ··· Setting up io.excel:62 ok
[ 37.50%] ··· Running (io.excel.ReadExcel.time_read_excel--)..
[ 50.00%] · For pandas commit d0a8a687 <optimise-nrows-excel> (round 2/2):
[ 50.00%] ·· Benchmarking conda-py3.8-Cython0.29.16-jinja2-matplotlib-numba-numexpr-numpy-odfpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt
[ 62.50%] ··· Setting up io.excel:62 ok
[ 62.50%] ··· io.excel.ReadExcel.time_read_excel ok
[ 62.50%] ··· ========== ============
engine
---------- ------------
xlrd 953±6ms
openpyxl 1.66±0.03s
odf 6.02±0.02s
========== ============
[ 75.00%] ··· io.excel.ReadExcel.time_read_excel_nrows ok
[ 75.00%] ··· ========== ============
engine
---------- ------------
xlrd 878±20ms
openpyxl 1.67±0.02s
odf 4.58±0.04s
========== ============
[ 75.00%] · For pandas commit c413df6d <master> (round 2/2):
[ 75.00%] ·· Building for conda-py3.8-Cython0.29.16-jinja2-matplotlib-numba-numexpr-numpy-odfpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt..
[ 75.00%] ·· Benchmarking conda-py3.8-Cython0.29.16-jinja2-matplotlib-numba-numexpr-numpy-odfpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt
[ 87.50%] ··· Setting up io.excel:62 ok
[ 87.50%] ··· io.excel.ReadExcel.time_read_excel ok
[ 87.50%] ··· ========== ============
engine
---------- ------------
xlrd 941±5ms
openpyxl 1.69±0.02s
odf 6.15±0.04s
========== ============
[100.00%] ··· io.excel.ReadExcel.time_read_excel_nrows ok
[100.00%] ··· ========== ============
engine
---------- ------------
xlrd 971±20ms
openpyxl 1.69±0.01s
odf 6.07±0.03s
========== ============
before after ratio
[c413df6d] [d0a8a687]
<master> <optimise-nrows-excel>
- 971±20ms 878±20ms 0.90 io.excel.ReadExcel.time_read_excel_nrows('xlrd')
SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY.
PERFORMANCE INCREASED.
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/35974 | 2020-08-29T10:15:23Z | 2020-09-21T21:47:15Z | 2020-09-21T21:47:15Z | 2020-09-22T17:00:12Z |
ENH: implement timeszones support for read_json(orient='table') and astype() from 'object' | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 46675c336c6a3..4d7c1479bd744 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -217,6 +217,7 @@ Other enhancements
- ``Styler`` now allows direct CSS class name addition to individual data cells (:issue:`36159`)
- :meth:`Rolling.mean()` and :meth:`Rolling.sum()` use Kahan summation to calculate the mean to avoid numerical problems (:issue:`10319`, :issue:`11645`, :issue:`13254`, :issue:`32761`, :issue:`36031`)
- :meth:`DatetimeIndex.searchsorted`, :meth:`TimedeltaIndex.searchsorted`, :meth:`PeriodIndex.searchsorted`, and :meth:`Series.searchsorted` with datetimelike dtypes will now try to cast string arguments (listlike and scalar) to the matching datetimelike type (:issue:`36346`)
+-
- Added methods :meth:`IntegerArray.prod`, :meth:`IntegerArray.min`, and :meth:`IntegerArray.max` (:issue:`33790`)
- Where possible :meth:`RangeIndex.difference` and :meth:`RangeIndex.symmetric_difference` will return :class:`RangeIndex` instead of :class:`Int64Index` (:issue:`36564`)
- Added :meth:`Rolling.sem()` and :meth:`Expanding.sem()` to compute the standard error of mean (:issue:`26476`).
@@ -388,6 +389,8 @@ Datetimelike
- Bug in :class:`DatetimeIndex.shift` incorrectly raising when shifting empty indexes (:issue:`14811`)
- :class:`Timestamp` and :class:`DatetimeIndex` comparisons between timezone-aware and timezone-naive objects now follow the standard library ``datetime`` behavior, returning ``True``/``False`` for ``!=``/``==`` and raising for inequality comparisons (:issue:`28507`)
- Bug in :meth:`DatetimeIndex.equals` and :meth:`TimedeltaIndex.equals` incorrectly considering ``int64`` indexes as equal (:issue:`36744`)
+- :meth:`to_json` and :meth:`read_json` now implement timezone parsing with ``orient='table'`` (:issue:`35973`).
+- :meth:`astype` now attempts to convert to ``datetime64[ns, tz]`` directly from ``object``, inferring the timezone from the string (:issue:`35973`).
- Bug in :meth:`TimedeltaIndex.sum` and :meth:`Series.sum` with ``timedelta64`` dtype on an empty index or series returning ``NaT`` instead of ``Timedelta(0)`` (:issue:`31751`)
Timedelta
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index a1050f4271e05..2b3cd2b51884c 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -1970,7 +1970,13 @@ def sequence_to_dt64ns(
data, inferred_tz = objects_to_datetime64ns(
data, dayfirst=dayfirst, yearfirst=yearfirst
)
- tz = _maybe_infer_tz(tz, inferred_tz)
+ if tz and inferred_tz:
+ # two timezones: convert to intended from base UTC repr
+ data = tzconversion.tz_convert_from_utc(data.view("i8"), tz)
+ data = data.view(DT64NS_DTYPE)
+ elif inferred_tz:
+ tz = inferred_tz
+
data_dtype = data.dtype
# `data` may have originally been a Categorical[datetime64[ns, tz]],
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 288bc0adc5162..088e81b184192 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -262,7 +262,9 @@ def __init__(
# NotImplemented on a column MultiIndex
if obj.ndim == 2 and isinstance(obj.columns, MultiIndex):
- raise NotImplementedError("orient='table' is not supported for MultiIndex")
+ raise NotImplementedError(
+ "orient='table' is not supported for MultiIndex columns"
+ )
# TODO: Do this timedelta properly in objToJSON.c See GH #15137
if (
diff --git a/pandas/io/json/_table_schema.py b/pandas/io/json/_table_schema.py
index 2b4c86b3c4406..0499a35296490 100644
--- a/pandas/io/json/_table_schema.py
+++ b/pandas/io/json/_table_schema.py
@@ -323,10 +323,6 @@ def parse_table_schema(json, precise_float):
for field in table["schema"]["fields"]
}
- # Cannot directly use as_type with timezone data on object; raise for now
- if any(str(x).startswith("datetime64[ns, ") for x in dtypes.values()):
- raise NotImplementedError('table="orient" can not yet read timezone data')
-
# No ISO constructor for Timedelta as of yet, so need to raise
if "timedelta64" in dtypes.values():
raise NotImplementedError(
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index d3f256259b15f..f05c90f37ea8a 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -587,3 +587,27 @@ def test_astype_ignores_errors_for_extension_dtypes(self, df, errors):
msg = "(Cannot cast)|(could not convert)"
with pytest.raises((ValueError, TypeError), match=msg):
df.astype(float, errors=errors)
+
+ def test_astype_tz_conversion(self):
+ # GH 35973
+ val = {"tz": date_range("2020-08-30", freq="d", periods=2, tz="Europe/London")}
+ df = DataFrame(val)
+ result = df.astype({"tz": "datetime64[ns, Europe/Berlin]"})
+
+ expected = df
+ expected["tz"] = expected["tz"].dt.tz_convert("Europe/Berlin")
+ tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize("tz", ["UTC", "Europe/Berlin"])
+ def test_astype_tz_object_conversion(self, tz):
+ # GH 35973
+ val = {"tz": date_range("2020-08-30", freq="d", periods=2, tz="Europe/London")}
+ expected = DataFrame(val)
+
+ # convert expected to object dtype from other tz str (independently tested)
+ result = expected.astype({"tz": f"datetime64[ns, {tz}]"})
+ result = result.astype({"tz": "object"})
+
+ # do real test: object dtype to a specified tz, different from construction tz.
+ result = result.astype({"tz": "datetime64[ns, Europe/London]"})
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index 6e35b224ef4c3..dba4b9214e50c 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -676,6 +676,11 @@ class TestTableOrientReader:
{"floats": [1.0, 2.0, 3.0, 4.0]},
{"floats": [1.1, 2.2, 3.3, 4.4]},
{"bools": [True, False, False, True]},
+ {
+ "timezones": pd.date_range(
+ "2016-01-01", freq="d", periods=4, tz="US/Central"
+ ) # added in # GH 35973
+ },
],
)
@pytest.mark.skipif(sys.version_info[:3] == (3, 7, 0), reason="GH-35309")
@@ -686,22 +691,59 @@ def test_read_json_table_orient(self, index_nm, vals, recwarn):
tm.assert_frame_equal(df, result)
@pytest.mark.parametrize("index_nm", [None, "idx", "index"])
+ @pytest.mark.parametrize(
+ "vals",
+ [{"timedeltas": pd.timedelta_range("1H", periods=4, freq="T")}],
+ )
+ def test_read_json_table_orient_raises(self, index_nm, vals, recwarn):
+ df = DataFrame(vals, index=pd.Index(range(4), name=index_nm))
+ out = df.to_json(orient="table")
+ with pytest.raises(NotImplementedError, match="can not yet read "):
+ pd.read_json(out, orient="table")
+
+ @pytest.mark.parametrize(
+ "idx",
+ [
+ pd.Index(range(4)),
+ pd.Index(
+ pd.date_range(
+ "2020-08-30",
+ freq="d",
+ periods=4,
+ ),
+ freq=None,
+ ),
+ pd.Index(
+ pd.date_range("2020-08-30", freq="d", periods=4, tz="US/Central"),
+ freq=None,
+ ),
+ pd.MultiIndex.from_product(
+ [
+ pd.date_range("2020-08-30", freq="d", periods=2, tz="US/Central"),
+ ["x", "y"],
+ ],
+ ),
+ ],
+ )
@pytest.mark.parametrize(
"vals",
[
- {"timedeltas": pd.timedelta_range("1H", periods=4, freq="T")},
+ {"floats": [1.1, 2.2, 3.3, 4.4]},
+ {"dates": pd.date_range("2020-08-30", freq="d", periods=4)},
{
"timezones": pd.date_range(
- "2016-01-01", freq="d", periods=4, tz="US/Central"
+ "2020-08-30", freq="d", periods=4, tz="Europe/London"
)
},
],
)
- def test_read_json_table_orient_raises(self, index_nm, vals, recwarn):
- df = DataFrame(vals, index=pd.Index(range(4), name=index_nm))
+ @pytest.mark.skipif(sys.version_info[:3] == (3, 7, 0), reason="GH-35309")
+ def test_read_json_table_timezones_orient(self, idx, vals, recwarn):
+ # GH 35973
+ df = DataFrame(vals, index=idx)
out = df.to_json(orient="table")
- with pytest.raises(NotImplementedError, match="can not yet read "):
- pd.read_json(out, orient="table")
+ result = pd.read_json(out, orient="table")
+ tm.assert_frame_equal(df, result)
def test_comprehensive(self):
df = DataFrame(
| - [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Currently, timezone-aware data raises a `NotImplementedError` when read back with `read_json(orient='table')`.
This PR addresses what I believe is a fairly common request (numerous workarounds and questions exist on Stack Overflow).
It allows reconstituting DataFrames from JSON with timezone-aware columns, a timezone-aware Index, or a MultiIndex containing timezones, in any combination.
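A minimal round trip showing the behaviour this PR enables (assuming a pandas build that includes the change; this mirrors the new `test_read_json_table_timezones_orient` test):

```python
import io

import pandas as pd

# A timezone-aware column; before this change, reading it back with
# orient="table" raised NotImplementedError.
df = pd.DataFrame(
    {"tz": pd.date_range("2020-08-30", freq="D", periods=4, tz="Europe/London")}
)

out = df.to_json(orient="table")
result = pd.read_json(io.StringIO(out), orient="table")

# The Table Schema records the timezone, so the round trip preserves
# the datetime64[ns, Europe/London] dtype and the original instants.
```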
Comma cleanup | diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 52a1e3aae9058..c807e7eb9c4d3 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -86,11 +86,7 @@ def wrapper(x):
result0 = f(axis=0, skipna=False)
result1 = f(axis=1, skipna=False)
tm.assert_series_equal(
- result0,
- frame.apply(wrapper),
- check_dtype=check_dtype,
- rtol=rtol,
- atol=atol,
+ result0, frame.apply(wrapper), check_dtype=check_dtype, rtol=rtol, atol=atol
)
# HACK: win32
tm.assert_series_equal(
@@ -116,7 +112,15 @@ def wrapper(x):
if opname in ["sum", "prod"]:
expected = frame.apply(skipna_wrapper, axis=1)
tm.assert_series_equal(
- result1, expected, check_dtype=False, rtol=rtol, atol=atol,
+<<<<<<< HEAD
+ result1, expected, check_dtype=False, rtol=rtol, atol=atol
+=======
+ result1,
+ expected,
+ check_dtype=False,
+ rtol=rtol,
+ atol=atol,
+>>>>>>> c34ed0ebf1599a6ea21cf94846e4c7a8bb72a298
)
# check dtypes
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index c8f5b2b0f6364..ac1100236d2f0 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -932,7 +932,13 @@ def test_constructor_mrecarray(self):
# from GH3479
assert_fr_equal = functools.partial(
- tm.assert_frame_equal, check_index_type=True, check_column_type=True,
+<<<<<<< HEAD
+ tm.assert_frame_equal, check_index_type=True, check_column_type=True
+=======
+ tm.assert_frame_equal,
+ check_index_type=True,
+ check_column_type=True,
+>>>>>>> c34ed0ebf1599a6ea21cf94846e4c7a8bb72a298
)
arrays = [
("float", np.array([1.5, 2.0])),
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index 6a8f1e7c1aca2..703cb4412017b 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -417,7 +417,9 @@ def test_unstack_mixed_type_name_in_multiindex(
result = df.unstack(unstack_idx)
expected = pd.DataFrame(
- expected_values, columns=expected_columns, index=expected_index,
+ expected_values,
+ columns=expected_columns,
+ index=expected_index,
)
tm.assert_frame_equal(result, expected)
@@ -807,7 +809,12 @@ def test_unstack_multi_level_cols(self):
[["B", "C"], ["B", "D"]], names=["c1", "c2"]
),
index=pd.MultiIndex.from_tuples(
- [[10, 20, 30], [10, 20, 40]], names=["i1", "i2", "i3"],
+<<<<<<< HEAD
+ [[10, 20, 30], [10, 20, 40]], names=["i1", "i2", "i3"]
+=======
+ [[10, 20, 30], [10, 20, 40]],
+ names=["i1", "i2", "i3"],
+>>>>>>> c34ed0ebf1599a6ea21cf94846e4c7a8bb72a298
),
)
assert df.unstack(["i2", "i1"]).columns.names[-2:] == ["i2", "i1"]
diff --git a/pandas/tests/generic/test_finalize.py b/pandas/tests/generic/test_finalize.py
index 4d0f1a326225d..4ea1e07912892 100644
--- a/pandas/tests/generic/test_finalize.py
+++ b/pandas/tests/generic/test_finalize.py
@@ -123,7 +123,11 @@
(pd.DataFrame, frame_data, operator.methodcaller("sort_index")),
(pd.DataFrame, frame_data, operator.methodcaller("nlargest", 1, "A")),
(pd.DataFrame, frame_data, operator.methodcaller("nsmallest", 1, "A")),
- (pd.DataFrame, frame_mi_data, operator.methodcaller("swaplevel"),),
+ (
+ pd.DataFrame,
+ frame_mi_data,
+ operator.methodcaller("swaplevel"),
+ ),
pytest.param(
(
pd.DataFrame,
@@ -178,7 +182,11 @@
marks=not_implemented_mark,
),
pytest.param(
- (pd.DataFrame, frame_mi_data, operator.methodcaller("unstack"),),
+ (
+ pd.DataFrame,
+ frame_mi_data,
+ operator.methodcaller("unstack"),
+ ),
marks=not_implemented_mark,
),
pytest.param(
@@ -317,7 +325,7 @@
marks=not_implemented_mark,
),
pytest.param(
- (pd.Series, ([1, 2],), operator.methodcaller("squeeze")),
+ (pd.Series, ([1, 2],), operator.methodcaller("squeeze"))
# marks=not_implemented_mark,
),
(pd.Series, ([1, 2],), operator.methodcaller("rename_axis", index="a")),
@@ -733,9 +741,14 @@ def test_timedelta_property(attr):
assert result.attrs == {"a": 1}
+<<<<<<< HEAD
+@pytest.mark.parametrize("method", [operator.methodcaller("total_seconds")])
+=======
@pytest.mark.parametrize(
- "method", [operator.methodcaller("total_seconds")],
+ "method",
+ [operator.methodcaller("total_seconds")],
)
+>>>>>>> c34ed0ebf1599a6ea21cf94846e4c7a8bb72a298
@not_implemented_mark
def test_timedelta_methods(method):
s = pd.Series(pd.timedelta_range("2000", periods=4))
diff --git a/pandas/tests/generic/test_to_xarray.py b/pandas/tests/generic/test_to_xarray.py
index ab56a752f7e90..a85d7ddc1ea53 100644
--- a/pandas/tests/generic/test_to_xarray.py
+++ b/pandas/tests/generic/test_to_xarray.py
@@ -47,9 +47,7 @@ def test_to_xarray_index_types(self, index):
expected = df.copy()
expected["f"] = expected["f"].astype(object)
expected.columns.name = None
- tm.assert_frame_equal(
- result.to_dataframe(), expected,
- )
+ tm.assert_frame_equal(result.to_dataframe(), expected)
@td.skip_if_no("xarray", min_version="0.7.0")
def test_to_xarray(self):
diff --git a/pandas/tests/groupby/aggregate/test_numba.py b/pandas/tests/groupby/aggregate/test_numba.py
index 29e65e938f6f9..4fe78a481565f 100644
--- a/pandas/tests/groupby/aggregate/test_numba.py
+++ b/pandas/tests/groupby/aggregate/test_numba.py
@@ -57,7 +57,12 @@ def func_numba(values, index):
func_numba = numba.jit(func_numba)
data = DataFrame(
- {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1],
+<<<<<<< HEAD
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1]
+=======
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]},
+ columns=[0, 1],
+>>>>>>> c34ed0ebf1599a6ea21cf94846e4c7a8bb72a298
)
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
grouped = data.groupby(0)
@@ -90,7 +95,12 @@ def func_2(values, index):
func_2 = numba.jit(func_2)
data = DataFrame(
- {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1],
+<<<<<<< HEAD
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1]
+=======
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]},
+ columns=[0, 1],
+>>>>>>> c34ed0ebf1599a6ea21cf94846e4c7a8bb72a298
)
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
grouped = data.groupby(0)
@@ -121,7 +131,12 @@ def func_1(values, index):
return np.mean(values) - 3.4
data = DataFrame(
- {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1],
+<<<<<<< HEAD
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1]
+=======
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]},
+ columns=[0, 1],
+>>>>>>> c34ed0ebf1599a6ea21cf94846e4c7a8bb72a298
)
grouped = data.groupby(0)
expected = grouped.agg(func_1, engine="numba")
@@ -142,7 +157,12 @@ def func_1(values, index):
)
def test_multifunc_notimplimented(agg_func):
data = DataFrame(
- {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1],
+<<<<<<< HEAD
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1]
+=======
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]},
+ columns=[0, 1],
+>>>>>>> c34ed0ebf1599a6ea21cf94846e4c7a8bb72a298
)
grouped = data.groupby(0)
with pytest.raises(NotImplementedError, match="Numba engine can"):
diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py
index a1dcb28a32c6c..3183305fe2933 100644
--- a/pandas/tests/groupby/test_apply.py
+++ b/pandas/tests/groupby/test_apply.py
@@ -946,9 +946,7 @@ def fct(group):
tm.assert_series_equal(result, expected)
-@pytest.mark.parametrize(
- "function", [lambda gr: gr.index, lambda gr: gr.index + 1 - 1],
-)
+@pytest.mark.parametrize("function", [lambda gr: gr.index, lambda gr: gr.index + 1 - 1])
def test_apply_function_index_return(function):
# GH: 22541
df = pd.DataFrame([1, 2, 2, 2, 1, 2, 3, 1, 3, 1], columns=["id"])
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 13a32e285e70a..711daf7fe415d 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -17,7 +17,7 @@
def cartesian_product_for_groupers(result, args, names, fill_value=np.NaN):
- """ Reindex to a cartesian production for the groupers,
+ """Reindex to a cartesian production for the groupers,
preserving the nature (Categorical) of each grouper
"""
@@ -1449,7 +1449,7 @@ def test_groupby_agg_categorical_columns(func, expected_values):
result = df.groupby("groups").agg(func)
expected = pd.DataFrame(
- {"value": expected_values}, index=pd.Index([0, 1, 2], name="groups"),
+ {"value": expected_values}, index=pd.Index([0, 1, 2], name="groups")
)
tm.assert_frame_equal(result, expected)
| Comma cleanup for #35925 | https://api.github.com/repos/pandas-dev/pandas/pulls/35971 | 2020-08-29T04:38:16Z | 2020-09-01T02:47:14Z | null | 2020-09-01T02:47:23Z |
Comma cleanup for Issue #35925 | diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 52a1e3aae9058..aa20887dc9549 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -18,7 +18,7 @@
isna,
notna,
to_datetime,
- to_timedelta,
+ to_timedelta
)
import pandas._testing as tm
import pandas.core.algorithms as algorithms
@@ -34,7 +34,7 @@ def assert_stat_op_calc(
check_dates=False,
rtol=1e-5,
atol=1e-8,
- skipna_alternative=None,
+ skipna_alternative=None
):
"""
Check that operator opname works as advertised on frame
@@ -90,7 +90,7 @@ def wrapper(x):
frame.apply(wrapper),
check_dtype=check_dtype,
rtol=rtol,
- atol=atol,
+ atol=atol
)
# HACK: win32
tm.assert_series_equal(
@@ -98,7 +98,7 @@ def wrapper(x):
frame.apply(wrapper, axis=1),
check_dtype=False,
rtol=rtol,
- atol=atol,
+ atol=atol
)
else:
skipna_wrapper = alternative
@@ -110,13 +110,13 @@ def wrapper(x):
frame.apply(skipna_wrapper),
check_dtype=check_dtype,
rtol=rtol,
- atol=atol,
+ atol=atol
)
if opname in ["sum", "prod"]:
expected = frame.apply(skipna_wrapper, axis=1)
tm.assert_series_equal(
- result1, expected, check_dtype=False, rtol=rtol, atol=atol,
+ result1, expected, check_dtype=False, rtol=rtol, atol=atol
)
# check dtypes
@@ -333,7 +333,7 @@ def kurt(x):
float_frame_with_na,
has_skipna=False,
check_dtype=False,
- check_dates=True,
+ check_dates=True
)
# GH#32571 check_less_precise is needed on apparently-random
@@ -344,7 +344,7 @@ def kurt(x):
np.sum,
mixed_float_frame.astype("float32"),
check_dtype=False,
- rtol=1e-3,
+ rtol=1e-3
)
assert_stat_op_calc(
@@ -366,7 +366,7 @@ def kurt(x):
float_frame_with_na,
has_skipna=False,
check_dtype=False,
- check_dates=True,
+ check_dates=True
)
try:
@@ -399,14 +399,14 @@ def test_stat_operators_attempt_obj_array(self, method):
"a": [
-0.00049987540199591344,
-0.0016467257772919831,
- 0.00067695870775883013,
+ 0.00067695870775883013
],
"b": [-0, -0, 0.0],
"c": [
0.00031111847529610595,
0.0014902627951905339,
- -0.00094099200035979691,
- ],
+ -0.00094099200035979691
+ ]
}
df1 = DataFrame(data, index=["foo", "bar", "baz"], dtype="O")
@@ -427,7 +427,7 @@ def test_mixed_ops(self, op):
{
"int": [1, 2, 3, 4],
"float": [1.0, 2.0, 3.0, 4.0],
- "str": ["a", "b", "c", "d"],
+ "str": ["a", "b", "c", "d"]
}
)
@@ -444,7 +444,7 @@ def test_reduce_mixed_frame(self):
{
"bool_data": [True, True, False, False, False],
"int_data": [10, 20, 30, 40, 50],
- "string_data": ["a", "b", "c", "d", "e"],
+ "string_data": ["a", "b", "c", "d", "e"]
}
)
df.reindex(columns=["bool_data", "int_data", "string_data"])
@@ -500,7 +500,7 @@ def test_mean_mixed_string_decimal(self):
{"A": 2, "B": None, "C": Decimal("572.00")},
{"A": 4, "B": None, "C": Decimal("609.00")},
{"A": 3, "B": None, "C": Decimal("820.00")},
- {"A": 5, "B": None, "C": Decimal("1223.00")},
+ {"A": 5, "B": None, "C": Decimal("1223.00")}
]
df = pd.DataFrame(d)
@@ -570,7 +570,7 @@ def test_sem(self, datetime_frame):
def test_kurt(self):
index = MultiIndex(
levels=[["bar"], ["one", "two", "three"], [0, 1]],
- codes=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]],
+ codes=[[0, 0, 0, 0, 0, 0], [0, 1, 2, 0, 1, 2], [0, 1, 0, 1, 0, 1]]
)
df = DataFrame(np.random.randn(6, 3), index=index)
@@ -592,8 +592,8 @@ def test_kurt(self):
"D": ["a"],
"E": Categorical(["a"], categories=["a"]),
"F": to_datetime(["2000-1-2"]),
- "G": to_timedelta(["1 days"]),
- },
+ "G": to_timedelta(["1 days"])
+ }
),
(
False,
@@ -604,8 +604,8 @@ def test_kurt(self):
"D": np.array([np.nan], dtype=object),
"E": Categorical([np.nan], categories=["a"]),
"F": [pd.NaT],
- "G": to_timedelta([pd.NaT]),
- },
+ "G": to_timedelta([pd.NaT])
+ }
),
(
True,
@@ -616,8 +616,8 @@ def test_kurt(self):
"K": Categorical(["a", np.nan, np.nan, np.nan], categories=["a"]),
"L": to_datetime(["2000-1-2", "NaT", "NaT", "NaT"]),
"M": to_timedelta(["1 days", "nan", "nan", "nan"]),
- "N": [0, 1, 2, 3],
- },
+ "N": [0, 1, 2, 3]
+ }
),
(
False,
@@ -628,10 +628,10 @@ def test_kurt(self):
"K": Categorical([np.nan, "a", np.nan, np.nan], categories=["a"]),
"L": to_datetime(["NaT", "2000-1-2", "NaT", "NaT"]),
"M": to_timedelta(["nan", "1 days", "nan", "nan"]),
- "N": [0, 1, 2, 3],
- },
- ),
- ],
+ "N": [0, 1, 2, 3]
+ }
+ )
+ ]
)
def test_mode_dropna(self, dropna, expected):
@@ -650,7 +650,7 @@ def test_mode_dropna(self, dropna, expected):
"K": Categorical(["a", np.nan, "a", np.nan]),
"L": to_datetime(["2000-1-2", "2000-1-2", "NaT", "NaT"]),
"M": to_timedelta(["1 days", "nan", "1 days", "nan"]),
- "N": np.arange(4, dtype="int64"),
+ "N": np.arange(4, dtype="int64")
}
)
@@ -676,7 +676,7 @@ def test_operators_timedelta64(self):
dict(
A=date_range("2012-1-1", periods=3, freq="D"),
B=date_range("2012-1-2", periods=3, freq="D"),
- C=Timestamp("20120101") - timedelta(minutes=5, seconds=5),
+ C=Timestamp("20120101") - timedelta(minutes=5, seconds=5)
)
)
@@ -721,9 +721,9 @@ def test_operators_timedelta64(self):
"foo",
1,
1.0,
- Timestamp("20130101"),
+ Timestamp("20130101")
],
- index=mixed.columns,
+ index=mixed.columns
)
tm.assert_series_equal(result, expected)
@@ -747,7 +747,7 @@ def test_operators_timedelta64(self):
df = DataFrame(
{
"time": date_range("20130102", periods=5),
- "time2": date_range("20130105", periods=5),
+ "time2": date_range("20130105", periods=5)
}
)
df["off1"] = df["time2"] - df["time"]
@@ -871,7 +871,7 @@ def test_mean_datetimelike(self):
"A": np.arange(3),
"B": pd.date_range("2016-01-01", periods=3),
"C": pd.timedelta_range("1D", periods=3),
- "D": pd.period_range("2016", periods=3, freq="A"),
+ "D": pd.period_range("2016", periods=3, freq="A")
}
)
result = df.mean(numeric_only=True)
@@ -889,7 +889,7 @@ def test_mean_datetimelike_numeric_only_false(self):
{
"A": np.arange(3),
"B": pd.date_range("2016-01-01", periods=3),
- "C": pd.timedelta_range("1D", periods=3),
+ "C": pd.timedelta_range("1D", periods=3)
}
)
@@ -974,9 +974,9 @@ def test_any_all_extra(self):
{
"A": [True, False, False],
"B": [True, True, False],
- "C": [True, True, True],
+ "C": [True, True, True]
},
- index=["a", "b", "c"],
+ index=["a", "b", "c"]
)
result = df[["A", "B"]].any(1)
expected = Series([True, True, False], index=["a", "b", "c"])
@@ -1010,7 +1010,7 @@ def test_any_datetime(self):
pd.Timestamp("1960-02-15"),
pd.Timestamp("1960-02-16"),
pd.NaT,
- pd.NaT,
+ pd.NaT
]
df = DataFrame({"A": float_data, "B": datetime_data})
@@ -1034,7 +1034,7 @@ def test_any_all_bool_only(self):
"col1": [1, 2, 3],
"col2": [4, 5, 6],
"col3": [None, None, None],
- "col4": [False, False, True],
+ "col4": [False, False, True]
}
)
@@ -1125,9 +1125,9 @@ def test_any_all_bool_only(self):
},
True,
# In 1.13.3 and 1.14 np.all(df) returns a Timedelta here
- marks=[td.skip_if_np_lt("1.15")],
- ),
- ],
+ marks=[td.skip_if_np_lt("1.15")]
+ )
+ ]
)
def test_any_all_np_func(self, func, data, expected):
# GH 19976
@@ -1155,7 +1155,7 @@ def test_any_all_level_axis_none_raises(self, method):
{"A": 1},
index=MultiIndex.from_product(
[["A", "B"], ["a", "b"]], names=["out", "in"]
- ),
+ )
)
xpr = "Must specify 'axis' when aggregating by level."
with pytest.raises(ValueError, match=xpr):
@@ -1293,7 +1293,7 @@ def test_min_max_dt64_api_consistency_empty_df(self):
@pytest.mark.parametrize(
"initial",
- ["2018-10-08 13:36:45+00:00", "2018-10-08 13:36:45+03:00"], # Non-UTC timezone
+ ["2018-10-08 13:36:45+00:00", "2018-10-08 13:36:45+03:00"] # Non-UTC timezone
)
@pytest.mark.parametrize("method", ["min", "max"])
def test_preserve_timezone(self, initial: str, method):
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index c8f5b2b0f6364..5a63108684436 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -30,7 +30,7 @@
Timedelta,
Timestamp,
date_range,
- isna,
+ isna
)
import pandas._testing as tm
from pandas.arrays import IntervalArray, PeriodArray, SparseArray
@@ -45,7 +45,7 @@
"int8",
"int16",
"int32",
- "int64",
+ "int64"
]
@@ -78,8 +78,8 @@ def test_series_with_name_not_matching_column(self):
lambda: DataFrame(data=()),
lambda: DataFrame(data=[]),
lambda: DataFrame(data=(_ for _ in [])),
- lambda: DataFrame(data=range(0)),
- ],
+ lambda: DataFrame(data=range(0))
+ ]
)
def test_empty_constructor(self, constructor):
expected = DataFrame()
@@ -93,8 +93,8 @@ def test_empty_constructor(self, constructor):
[
([[]], RangeIndex(1), RangeIndex(0)),
([[], []], RangeIndex(2), RangeIndex(0)),
- ([(_ for _ in [])], RangeIndex(1), RangeIndex(0)),
- ],
+ ([(_ for _ in [])], RangeIndex(1), RangeIndex(0))
+ ]
)
def test_emptylike_constructor(self, emptylike, expected_index, expected_columns):
expected = DataFrame(index=expected_index, columns=expected_columns)
@@ -259,7 +259,7 @@ def test_constructor_overflow_int64(self):
(2685045978526272070, 23),
(8921811264899370420, 45),
(17019687244989530680, 270),
- (9930107427299601010, 273),
+ (9930107427299601010, 273)
]
dtype = [("uid", "u8"), ("score", "u8")]
data = np.zeros((len(data_scores),), dtype=dtype)
@@ -275,8 +275,8 @@ def test_constructor_overflow_int64(self):
[2 ** 64 + 1],
np.array([-(2 ** 63) - 4], dtype=object),
np.array([-(2 ** 64) - 1]),
- [-(2 ** 65) - 2],
- ],
+ [-(2 ** 65) - 2]
+ ]
)
def test_constructor_int_overflow(self, values):
# see gh-18584
@@ -312,13 +312,13 @@ def test_constructor_dict(self):
exp = Series(
np.concatenate([[np.nan] * 5, datetime_series_short.values]),
index=datetime_series.index,
- name="col2",
+ name="col2"
)
tm.assert_series_equal(exp, frame["col2"])
frame = DataFrame(
{"col1": datetime_series, "col2": datetime_series_short},
- columns=["col2", "col3", "col4"],
+ columns=["col2", "col3", "col4"]
)
assert len(frame) == len(datetime_series_short)
@@ -453,7 +453,7 @@ def test_constructor_2d_index(self):
expected = DataFrame(
[1, 1],
index=pd.Int64Index([1, 2], dtype="int64"),
- columns=MultiIndex(levels=[[1]], codes=[[0]]),
+ columns=MultiIndex(levels=[[1]], codes=[[0]])
)
tm.assert_frame_equal(df, expected)
@@ -461,7 +461,7 @@ def test_constructor_2d_index(self):
expected = DataFrame(
[1, 1],
index=MultiIndex(levels=[[1, 2]], codes=[[0, 1]]),
- columns=MultiIndex(levels=[[1]], codes=[[0]]),
+ columns=MultiIndex(levels=[[1]], codes=[[0]])
)
tm.assert_frame_equal(df, expected)
@@ -482,7 +482,7 @@ def test_constructor_error_msgs(self):
DataFrame(
np.arange(12).reshape((4, 3)),
columns=["foo", "bar", "baz"],
- index=date_range("2000-01-01", periods=3),
+ index=date_range("2000-01-01", periods=3)
)
arr = np.array([[4, 5, 6]])
@@ -548,7 +548,7 @@ def test_constructor_dict_block(self):
expected = np.array([[4.0, 3.0, 2.0, 1.0]])
df = DataFrame(
{"d": [4.0], "c": [3.0], "b": [2.0], "a": [1.0]},
- columns=["d", "c", "b", "a"],
+ columns=["d", "c", "b", "a"]
)
tm.assert_numpy_array_equal(df.values, expected)
@@ -568,7 +568,7 @@ def test_constructor_dict_cast(self):
# can't cast to float
test_data = {
"A": dict(zip(range(20), tm.makeStringIndex(20))),
- "B": dict(zip(range(15), np.random.randn(15))),
+ "B": dict(zip(range(15), np.random.randn(15)))
}
frame = DataFrame(test_data, dtype=float)
assert len(frame) == 20
@@ -622,13 +622,13 @@ def check(result, expected):
check_dtype=True,
check_index_type=True,
check_column_type=True,
- check_names=True,
+ check_names=True
)
d = {
("a", "a"): {("i", "i"): 0, ("i", "j"): 1, ("j", "i"): 2},
("b", "a"): {("i", "i"): 6, ("i", "j"): 5, ("j", "i"): 4},
- ("b", "c"): {("i", "i"): 7, ("i", "j"): 8, ("j", "i"): 9},
+ ("b", "c"): {("i", "i"): 7, ("i", "j"): 8, ("j", "i"): 9}
}
_d = sorted(d.items())
df = DataFrame(d)
@@ -664,9 +664,9 @@ def create_data(constructor):
{0: 0, 1: None, 2: None, 3: None},
{0: None, 1: 2, 2: None, 3: None},
{0: None, 1: None, 2: 4, 3: None},
- {0: None, 1: None, 2: None, 3: 6},
+ {0: None, 1: None, 2: None, 3: 6}
],
- index=[Timestamp(dt) for dt in dates_as_str],
+ index=[Timestamp(dt) for dt in dates_as_str]
)
result_datetime64 = DataFrame(data_datetime64)
@@ -692,7 +692,7 @@ def create_data(constructor):
{0: 0, 1: None, 2: None, 3: None},
{0: None, 1: 2, 2: None, 3: None},
{0: None, 1: None, 2: 4, 3: None},
- {0: None, 1: None, 2: None, 3: 6},
+ {0: None, 1: None, 2: None, 3: 6}
],
index=[Timedelta(td, "D") for td in td_as_int],
)
@@ -724,9 +724,9 @@ def test_constructor_period_dict(self):
(Interval(left=0, right=5), IntervalDtype("int64")),
(
Timestamp("2011-01-01", tz="US/Eastern"),
- DatetimeTZDtype(tz="US/Eastern"),
- ),
- ],
+ DatetimeTZDtype(tz="US/Eastern")
+ )
+ ]
)
def test_constructor_extension_scalar_data(self, data, dtype):
# GH 34832
@@ -903,7 +903,7 @@ def test_constructor_maskedarray_hardened(self):
{"A": [np.nan, np.nan], "B": [np.nan, np.nan]},
columns=["A", "B"],
index=[1, 2],
- dtype=float,
+ dtype=float
)
tm.assert_frame_equal(result, expected)
# Check case where mask is hard but no data are masked
@@ -913,7 +913,7 @@ def test_constructor_maskedarray_hardened(self):
{"A": [1.0, 1.0], "B": [1.0, 1.0]},
columns=["A", "B"],
index=[1, 2],
- dtype=float,
+ dtype=float
)
tm.assert_frame_equal(result, expected)
@@ -932,12 +932,12 @@ def test_constructor_mrecarray(self):
# from GH3479
assert_fr_equal = functools.partial(
- tm.assert_frame_equal, check_index_type=True, check_column_type=True,
+ tm.assert_frame_equal, check_index_type=True, check_column_type=True
)
arrays = [
("float", np.array([1.5, 2.0])),
("int", np.array([1, 2])),
- ("str", np.array(["abc", "def"])),
+ ("str", np.array(["abc", "def"]))
]
for name, arr in arrays[:]:
arrays.append(
@@ -979,8 +979,8 @@ def test_constructor_corner_shape(self):
(None, None, ["a", "b"], "int64", np.dtype("int64")),
(None, list(range(10)), ["a", "b"], int, np.dtype("float64")),
({}, None, ["foo", "bar"], None, np.object_),
- ({"b": 1}, list(range(10)), list("abc"), int, np.dtype("float64")),
- ],
+ ({"b": 1}, list(range(10)), list("abc"), int, np.dtype("float64"))
+ ]
)
def test_constructor_dtype(self, data, index, columns, dtype, expected):
df = DataFrame(data, index, columns, dtype)
@@ -1043,7 +1043,7 @@ def test_constructor_more(self, float_frame):
# int cast
dm = DataFrame(
{"A": np.ones(10, dtype=int), "B": np.ones(10, dtype=np.float64)},
- index=np.arange(10),
+ index=np.arange(10)
)
assert len(dm.columns) == 2
@@ -1192,7 +1192,7 @@ def test_constructor_list_of_odicts(self):
OrderedDict([["a", 1.5], ["d", 6]]),
OrderedDict(),
OrderedDict([["a", 1.5], ["b", 3], ["c", 4]]),
- OrderedDict([["b", 3], ["c", 4], ["d", 6]]),
+ OrderedDict([["b", 3], ["c", 4], ["d", 6]])
]
result = DataFrame(data)
@@ -1256,7 +1256,7 @@ def test_constructor_ordered_dict_conflicting_orders(self):
def test_constructor_list_of_series(self):
data = [
OrderedDict([["a", 1.5], ["b", 3.0], ["c", 4.0]]),
- OrderedDict([["a", 1.5], ["b", 3.0], ["c", 6.0]]),
+ OrderedDict([["a", 1.5], ["b", 3.0], ["c", 6.0]])
]
sdict = OrderedDict(zip(["x", "y"], data))
idx = Index(["a", "b", "c"])
@@ -1264,7 +1264,7 @@ def test_constructor_list_of_series(self):
# all named
data2 = [
Series([1.5, 3, 4], idx, dtype="O", name="x"),
- Series([1.5, 3, 6], idx, name="y"),
+ Series([1.5, 3, 6], idx, name="y")
]
result = DataFrame(data2)
expected = DataFrame.from_dict(sdict, orient="index")
@@ -1273,7 +1273,7 @@ def test_constructor_list_of_series(self):
# some unnamed
data2 = [
Series([1.5, 3, 4], idx, dtype="O", name="x"),
- Series([1.5, 3, 6], idx),
+ Series([1.5, 3, 6], idx)
]
result = DataFrame(data2)
@@ -1288,7 +1288,7 @@ def test_constructor_list_of_series(self):
OrderedDict([["a", 1.5], ["d", 6]]),
OrderedDict(),
OrderedDict([["a", 1.5], ["b", 3], ["c", 4]]),
- OrderedDict([["b", 3], ["c", 4], ["d", 6]]),
+ OrderedDict([["b", 3], ["c", 4], ["d", 6]])
]
data = [
create_series_with_explicit_dtype(d, dtype_if_empty=object) for d in data
@@ -1308,7 +1308,7 @@ def test_constructor_list_of_series(self):
data = [
OrderedDict([["a", 1.5], ["b", 3.0], ["c", 4.0]]),
- OrderedDict([["a", 1.5], ["b", 3.0], ["c", 6.0]]),
+ OrderedDict([["a", 1.5], ["b", 3.0], ["c", 6.0]])
]
sdict = OrderedDict(zip(range(len(data)), data))
@@ -1324,7 +1324,7 @@ def test_constructor_list_of_series_aligned_index(self):
expected = DataFrame(
{"b": [0, 1, 2], "a": [0, 1, 2], "c": [0, 1, 2]},
columns=["b", "a", "c"],
- index=["0", "1", "2"],
+ index=["0", "1", "2"]
)
tm.assert_frame_equal(result, expected)
@@ -1389,8 +1389,8 @@ def test_constructor_mixed_type_rows(self):
(((), ()), [[], []]),
(([], []), [[], []]),
(([1], [2]), [[1], [2]]), # GH 32776
- (([1, 2, 3], [4, 5, 6]), [[1, 2, 3], [4, 5, 6]]),
- ],
+ (([1, 2, 3], [4, 5, 6]), [[1, 2, 3], [4, 5, 6]])
+ ]
)
def test_constructor_tuple(self, tuples, lists):
# GH 25691
@@ -1461,7 +1461,7 @@ def test_constructor_list_of_dict_order(self):
data = [
{"First": 1, "Second": 4, "Third": 7, "Fourth": 10},
{"Second": 5, "First": 2, "Fourth": 11, "Third": 8},
- {"Second": 6, "First": 3, "Fourth": 12, "Third": 9, "YYY": 14, "XXX": 13},
+ {"Second": 6, "First": 3, "Fourth": 12, "Third": 9, "YYY": 14, "XXX": 13}
]
expected = DataFrame(
{
@@ -1470,7 +1470,7 @@ def test_constructor_list_of_dict_order(self):
"Third": [7, 8, 9],
"Fourth": [10, 11, 12],
"YYY": [None, None, 14],
- "XXX": [None, None, 13],
+ "XXX": [None, None, 13]
}
)
result = DataFrame(data)
@@ -1494,7 +1494,7 @@ def test_constructor_from_ordered_dict(self):
[
("one", OrderedDict([("col_a", "foo1"), ("col_b", "bar1")])),
("two", OrderedDict([("col_a", "foo2"), ("col_b", "bar2")])),
- ("three", OrderedDict([("col_a", "foo3"), ("col_b", "bar3")])),
+ ("three", OrderedDict([("col_a", "foo3"), ("col_b", "bar3")]))
]
)
expected = DataFrame.from_dict(a, orient="columns").T
@@ -1508,7 +1508,7 @@ def test_from_dict_columns_parameter(self):
result = DataFrame.from_dict(
OrderedDict([("A", [1, 2]), ("B", [4, 5])]),
orient="index",
- columns=["one", "two"],
+ columns=["one", "two"]
)
expected = DataFrame([[1, 2], [4, 5]], index=["A", "B"], columns=["one", "two"])
tm.assert_frame_equal(result, expected)
@@ -1518,7 +1518,7 @@ def test_from_dict_columns_parameter(self):
DataFrame.from_dict(
dict([("A", [1, 2]), ("B", [4, 5])]),
orient="columns",
- columns=["one", "two"],
+ columns=["one", "two"]
)
with pytest.raises(ValueError, match=msg):
DataFrame.from_dict(
@@ -1531,8 +1531,8 @@ def test_from_dict_columns_parameter(self):
({}, [], "index"),
([{("a",): 1}, {("a",): 2}], [("a",)], "columns"),
([OrderedDict([(("a",), 1), (("b",), 2)])], [("a",), ("b",)], "columns"),
- ([{("a", "b"): 1}], [("a", "b")], "columns"),
- ],
+ ([{("a", "b"): 1}], [("a", "b")], "columns")
+ ]
)
def test_constructor_from_dict_tuples(self, data_dict, keys, orient):
# GH 16769
@@ -1624,15 +1624,15 @@ def test_constructor_Series_differently_indexed(self):
("idx1", "idx2", None, None),
("idx1", "idx1", "idx2", None),
("idx1", "idx2", "idx3", None),
- (None, None, None, None),
- ],
+ (None, None, None, None)
+ ]
)
def test_constructor_index_names(self, name_in1, name_in2, name_in3, name_out):
# GH13475
indices = [
pd.Index(["a", "b", "c"], name=name_in1),
pd.Index(["b", "c", "d"], name=name_in2),
- pd.Index(["c", "d", "e"], name=name_in3),
+ pd.Index(["c", "d", "e"], name=name_in3)
]
series = {
c: pd.Series([0, 1, 2], index=i) for i, c in zip(indices, ["x", "y", "z"])
@@ -1644,9 +1644,9 @@ def test_constructor_index_names(self, name_in1, name_in2, name_in3, name_out):
{
"x": [0, 1, 2, np.nan, np.nan],
"y": [np.nan, 0, 1, 2, np.nan],
- "z": [np.nan, np.nan, 0, 1, 2],
+ "z": [np.nan, np.nan, 0, 1, 2]
},
- index=exp_ind,
+ index=exp_ind
)
tm.assert_frame_equal(result, expected)
@@ -1727,8 +1727,8 @@ def test_constructor_single_value(self):
DataFrame(
np.array([["a", "a"], ["a", "a"]], dtype=object),
index=[1, 2],
- columns=["a", "c"],
- ),
+ columns=["a", "c"]
+ )
)
msg = "DataFrame constructor not properly called!"
@@ -1754,16 +1754,16 @@ def test_constructor_with_datetimes(self):
"B": "foo",
"C": "bar",
"D": Timestamp("20010101"),
- "E": datetime(2001, 1, 2, 0, 0),
+ "E": datetime(2001, 1, 2, 0, 0)
},
- index=np.arange(10),
+ index=np.arange(10)
)
result = df.dtypes
expected = Series(
[np.dtype("int64")]
+ [np.dtype(objectname)] * 2
+ [np.dtype(datetime64name)] * 2,
- index=list("ABCDE"),
+ index=list("ABCDE")
)
tm.assert_series_equal(result, expected)
@@ -1775,9 +1775,9 @@ def test_constructor_with_datetimes(self):
"b": 2,
"c": "foo",
floatname: np.array(1.0, dtype=floatname),
- intname: np.array(1, dtype=intname),
+ intname: np.array(1, dtype=intname)
},
- index=np.arange(10),
+ index=np.arange(10)
)
result = df.dtypes
expected = Series(
@@ -1786,7 +1786,7 @@ def test_constructor_with_datetimes(self):
+ [np.dtype("object")]
+ [np.dtype("float64")]
+ [np.dtype(intname)],
- index=["a", "b", "c", floatname, intname],
+ index=["a", "b", "c", floatname, intname]
)
tm.assert_series_equal(result, expected)
@@ -1797,9 +1797,9 @@ def test_constructor_with_datetimes(self):
"b": 2,
"c": "foo",
floatname: np.array([1.0] * 10, dtype=floatname),
- intname: np.array([1] * 10, dtype=intname),
+ intname: np.array([1] * 10, dtype=intname)
},
- index=np.arange(10),
+ index=np.arange(10)
)
result = df.dtypes
expected = Series(
@@ -1808,7 +1808,7 @@ def test_constructor_with_datetimes(self):
+ [np.dtype("object")]
+ [np.dtype("float64")]
+ [np.dtype(intname)],
- index=["a", "b", "c", floatname, intname],
+ index=["a", "b", "c", floatname, intname]
)
tm.assert_series_equal(result, expected)
@@ -1827,7 +1827,7 @@ def test_constructor_with_datetimes(self):
result = df.dtypes
expected = Series(
[np.dtype("datetime64[ns]"), np.dtype("object")],
- index=["datetimes", "dates"],
+ index=["datetimes", "dates"]
)
tm.assert_series_equal(result, expected)
@@ -1890,8 +1890,8 @@ def test_constructor_with_datetimes(self):
[[None], [np.datetime64("NaT")]],
[[None], [pd.NaT]],
[[pd.NaT], [np.datetime64("NaT")]],
- [[pd.NaT], [None]],
- ],
+ [[pd.NaT], [None]]
+ ]
)
def test_constructor_datetimes_with_nulls(self, arr):
# gh-15869, GH#11220
@@ -1942,7 +1942,7 @@ def test_constructor_for_list_with_dtypes(self):
"b": [1.2, 2.3, 5.1, 6.3],
"c": list("abcd"),
"d": [datetime(2000, 1, 1) for i in range(4)],
- "e": [1.0, 2, 4.0, 7],
+ "e": [1.0, 2, 4.0, 7]
}
)
result = df.dtypes
@@ -1952,9 +1952,9 @@ def test_constructor_for_list_with_dtypes(self):
np.dtype("float64"),
np.dtype("object"),
np.dtype("datetime64[ns]"),
- np.dtype("float64"),
+ np.dtype("float64")
],
- index=list("abcde"),
+ index=list("abcde")
)
tm.assert_series_equal(result, expected)
@@ -2059,9 +2059,9 @@ def test_constructor_categorical(self):
expected = DataFrame(
{
0: Series(list("abc"), dtype="category"),
- 1: Series(list("abd"), dtype="category"),
+ 1: Series(list("abd"), dtype="category")
},
- columns=[0, 1],
+ columns=[0, 1]
)
tm.assert_frame_equal(df, expected)
@@ -2150,8 +2150,8 @@ def test_from_records_iterator(self):
("x", np.float64),
("u", np.float32),
("y", np.int64),
- ("z", np.int32),
- ],
+ ("z", np.int32)
+ ]
)
df = DataFrame.from_records(iter(arr), nrows=2)
xp = DataFrame(
@@ -2159,7 +2159,7 @@ def test_from_records_iterator(self):
"x": np.array([1.0, 3.0], dtype=np.float64),
"u": np.array([1.0, 3.0], dtype=np.float32),
"y": np.array([2, 4], dtype=np.int64),
- "z": np.array([2, 4], dtype=np.int32),
+ "z": np.array([2, 4], dtype=np.int32)
}
)
tm.assert_frame_equal(df.reindex_like(xp), xp)
@@ -2237,7 +2237,7 @@ def create_dict(order_id):
return {
"order_id": order_id,
"quantity": np.random.randint(1, 10),
- "price": np.random.randint(1, 10),
+ "price": np.random.randint(1, 10)
}
documents = [create_dict(i) for i in range(10)]
@@ -2319,7 +2319,7 @@ def test_from_records_empty_with_nonempty_fields_gh3682(self):
+ tm.COMPLEX_DTYPES
+ tm.DATETIME64_DTYPES
+ tm.TIMEDELTA64_DTYPES
- + tm.BOOL_DTYPES,
+ + tm.BOOL_DTYPES
)
def test_check_dtype_empty_numeric_column(self, dtype):
# GH24386: Ensure dtypes are set correctly for an empty DataFrame.
@@ -2377,7 +2377,7 @@ def test_from_records_sequencelike(self):
"D": np.array([True, False] * 3, dtype=bool),
"E": np.array(np.random.randn(6), dtype=np.float32),
"E1": np.array(np.random.randn(6), dtype=np.float32),
- "F": np.array(np.arange(6), dtype=np.int32),
+ "F": np.array(np.arange(6), dtype=np.int32)
}
)
@@ -2458,7 +2458,7 @@ def test_from_records_dictlike(self):
"D": np.array([True, False] * 3, dtype=bool),
"E": np.array(np.random.randn(6), dtype=np.float32),
"E1": np.array(np.random.randn(6), dtype=np.float32),
- "F": np.array(np.arange(6), dtype=np.int32),
+ "F": np.array(np.arange(6), dtype=np.int32)
}
)
@@ -2597,8 +2597,8 @@ class List(list):
Categorical(list("aabbc")),
SparseArray([1, np.nan, np.nan, np.nan]),
IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)]),
- PeriodArray(pd.period_range(start="1/1/2017", end="1/1/2018", freq="M")),
- ],
+ PeriodArray(pd.period_range(start="1/1/2017", end="1/1/2018", freq="M"))
+ ]
)
def test_constructor_with_extension_array(self, extension_arr):
# GH11363
@@ -2621,14 +2621,14 @@ def test_construct_with_two_categoricalindex_series(self):
)
s2 = Series(
[2, 152, 2, 242, 150],
- index=pd.CategoricalIndex(["f", "female", "m", "male", "unknown"]),
+ index=pd.CategoricalIndex(["f", "female", "m", "male", "unknown"])
)
result = DataFrame([s1, s2])
expected = DataFrame(
np.array(
[[np.nan, 39.0, np.nan, 6.0, 4.0], [2.0, 152.0, 2.0, 242.0, 150.0]]
),
- columns=["f", "female", "m", "male", "unknown"],
+ columns=["f", "female", "m", "male", "unknown"]
)
tm.assert_frame_equal(result, expected)
@@ -2724,7 +2724,7 @@ def test_frame_timeseries_column(self):
"timestamps": [
Timestamp("20130101T10:00:00", tz="US/Eastern"),
Timestamp("20130101T10:01:00", tz="US/Eastern"),
- Timestamp("20130101T10:02:00", tz="US/Eastern"),
+ Timestamp("20130101T10:02:00", tz="US/Eastern")
]
}
)
@@ -2735,13 +2735,13 @@ def test_nested_dict_construction(self):
columns = ["Nevada", "Ohio"]
pop = {
"Nevada": {2001: 2.4, 2002: 2.9},
- "Ohio": {2000: 1.5, 2001: 1.7, 2002: 3.6},
+ "Ohio": {2000: 1.5, 2001: 1.7, 2002: 3.6}
}
result = DataFrame(pop, index=[2001, 2002, 2003], columns=columns)
expected = DataFrame(
[(2.4, 1.7), (2.9, 3.6), (np.nan, np.nan)],
columns=columns,
- index=Index([2001, 2002, 2003]),
+ index=Index([2001, 2002, 2003])
)
tm.assert_frame_equal(result, expected)
@@ -2761,27 +2761,27 @@ def test_from_tzaware_mixed_object_array(self):
[
Timestamp("2013-01-01 00:00:00"),
Timestamp("2013-01-02 00:00:00"),
- Timestamp("2013-01-03 00:00:00"),
+ Timestamp("2013-01-03 00:00:00")
],
[
Timestamp("2013-01-01 00:00:00-0500", tz="US/Eastern"),
pd.NaT,
- Timestamp("2013-01-03 00:00:00-0500", tz="US/Eastern"),
+ Timestamp("2013-01-03 00:00:00-0500", tz="US/Eastern")
],
[
Timestamp("2013-01-01 00:00:00+0100", tz="CET"),
pd.NaT,
- Timestamp("2013-01-03 00:00:00+0100", tz="CET"),
- ],
+ Timestamp("2013-01-03 00:00:00+0100", tz="CET")
+ ]
],
- dtype=object,
+ dtype=object
).T
res = DataFrame(arr, columns=["A", "B", "C"])
expected_dtypes = [
"datetime64[ns]",
"datetime64[ns, US/Eastern]",
- "datetime64[ns, CET]",
+ "datetime64[ns, CET]"
]
assert (res.dtypes == expected_dtypes).all()
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index 6a8f1e7c1aca2..d344ff55d84c6 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -14,7 +14,7 @@ def test_pivot(self):
data = {
"index": ["A", "B", "C", "C", "B", "A"],
"columns": ["One", "One", "One", "Two", "Two", "Two"],
- "values": [1.0, 2.0, 3.0, 3.0, 2.0, 1.0],
+ "values": [1.0, 2.0, 3.0, 3.0, 2.0, 1.0]
}
frame = DataFrame(data)
@@ -23,7 +23,7 @@ def test_pivot(self):
expected = DataFrame(
{
"One": {"A": 1.0, "B": 2.0, "C": 3.0},
- "Two": {"A": 1.0, "B": 2.0, "C": 3.0},
+ "Two": {"A": 1.0, "B": 2.0, "C": 3.0}
}
)
@@ -44,7 +44,7 @@ def test_pivot_duplicates(self):
{
"a": ["bar", "bar", "foo", "foo", "foo"],
"b": ["one", "two", "one", "one", "two"],
- "c": [1.0, 2.0, 3.0, 3.0, 4.0],
+ "c": [1.0, 2.0, 3.0, 3.0, 4.0]
}
)
with pytest.raises(ValueError, match="duplicate entries"):
@@ -68,7 +68,7 @@ def test_pivot_index_none(self):
data = {
"index": ["A", "B", "C", "C", "B", "A"],
"columns": ["One", "One", "One", "Two", "Two", "Two"],
- "values": [1.0, 2.0, 3.0, 3.0, 2.0, 1.0],
+ "values": [1.0, 2.0, 3.0, 3.0, 2.0, 1.0]
}
frame = DataFrame(data).set_index("index")
@@ -76,7 +76,7 @@ def test_pivot_index_none(self):
expected = DataFrame(
{
"One": {"A": 1.0, "B": 2.0, "C": 3.0},
- "Two": {"A": 1.0, "B": 2.0, "C": 3.0},
+ "Two": {"A": 1.0, "B": 2.0, "C": 3.0}
}
)
@@ -247,14 +247,14 @@ def test_unstack_fill_frame_datetime(self):
result = data.unstack()
expected = DataFrame(
{"a": [dv[0], pd.NaT, dv[3]], "b": [dv[1], dv[2], pd.NaT]},
- index=["x", "y", "z"],
+ index=["x", "y", "z"]
)
tm.assert_frame_equal(result, expected)
result = data.unstack(fill_value=dv[0])
expected = DataFrame(
{"a": [dv[0], dv[0], dv[3]], "b": [dv[1], dv[2], dv[0]]},
- index=["x", "y", "z"],
+ index=["x", "y", "z"]
)
tm.assert_frame_equal(result, expected)
@@ -270,14 +270,14 @@ def test_unstack_fill_frame_timedelta(self):
result = data.unstack()
expected = DataFrame(
{"a": [td[0], pd.NaT, td[3]], "b": [td[1], td[2], pd.NaT]},
- index=["x", "y", "z"],
+ index=["x", "y", "z"]
)
tm.assert_frame_equal(result, expected)
result = data.unstack(fill_value=td[1])
expected = DataFrame(
{"a": [td[0], td[1], td[3]], "b": [td[1], td[2], td[1]]},
- index=["x", "y", "z"],
+ index=["x", "y", "z"]
)
tm.assert_frame_equal(result, expected)
@@ -288,7 +288,7 @@ def test_unstack_fill_frame_period(self):
Period("2012-01"),
Period("2012-02"),
Period("2012-03"),
- Period("2012-04"),
+ Period("2012-04")
]
data = Series(periods)
data.index = MultiIndex.from_tuples(
@@ -298,7 +298,7 @@ def test_unstack_fill_frame_period(self):
result = data.unstack()
expected = DataFrame(
{"a": [periods[0], None, periods[3]], "b": [periods[1], periods[2], None]},
- index=["x", "y", "z"],
+ index=["x", "y", "z"]
)
tm.assert_frame_equal(result, expected)
@@ -306,9 +306,9 @@ def test_unstack_fill_frame_period(self):
expected = DataFrame(
{
"a": [periods[0], periods[1], periods[3]],
- "b": [periods[1], periods[2], periods[1]],
+ "b": [periods[1], periods[2], periods[1]]
},
- index=["x", "y", "z"],
+ index=["x", "y", "z"]
)
tm.assert_frame_equal(result, expected)
@@ -325,9 +325,9 @@ def test_unstack_fill_frame_categorical(self):
expected = DataFrame(
{
"a": pd.Categorical(list("axa"), categories=list("abc")),
- "b": pd.Categorical(list("bcx"), categories=list("abc")),
+ "b": pd.Categorical(list("bcx"), categories=list("abc"))
},
- index=list("xyz"),
+ index=list("xyz")
)
tm.assert_frame_equal(result, expected)
@@ -341,9 +341,9 @@ def test_unstack_fill_frame_categorical(self):
expected = DataFrame(
{
"a": pd.Categorical(list("aca"), categories=list("abc")),
- "b": pd.Categorical(list("bcc"), categories=list("abc")),
+ "b": pd.Categorical(list("bcc"), categories=list("abc"))
},
- index=list("xyz"),
+ index=list("xyz")
)
tm.assert_frame_equal(result, expected)
@@ -364,11 +364,11 @@ def test_unstack_tuplename_in_multiindex(self):
("d", "c"),
("e", "a"),
("e", "b"),
- ("e", "c"),
+ ("e", "c")
],
- names=[None, ("A", "a")],
+ names=[None, ("A", "a")]
),
- index=pd.Index([1, 2, 3], name=("B", "b")),
+ index=pd.Index([1, 2, 3], name=("B", "b"))
)
tm.assert_frame_equal(result, expected)
@@ -383,8 +383,8 @@ def test_unstack_tuplename_in_multiindex(self):
),
pd.MultiIndex.from_tuples(
[("d", "a"), ("d", "b"), ("e", "a"), ("e", "b")],
- names=[None, ("A", "a")],
- ),
+ names=[None, ("A", "a")]
+ )
),
(
(("A", "a"), "B"),
@@ -399,12 +399,12 @@ def test_unstack_tuplename_in_multiindex(self):
("e", "a", 1),
("e", "a", 2),
("e", "b", 1),
- ("e", "b", 2),
+ ("e", "b", 2)
],
- names=[None, ("A", "a"), "B"],
- ),
- ),
- ],
+ names=[None, ("A", "a"), "B"]
+ )
+ )
+ ]
)
def test_unstack_mixed_type_name_in_multiindex(
self, unstack_idx, expected_values, expected_index, expected_columns
@@ -417,7 +417,7 @@ def test_unstack_mixed_type_name_in_multiindex(
result = df.unstack(unstack_idx)
expected = pd.DataFrame(
- expected_values, columns=expected_columns, index=expected_index,
+ expected_values, columns=expected_columns, index=expected_index
)
tm.assert_frame_equal(result, expected)
@@ -435,7 +435,7 @@ def test_unstack_preserve_dtypes(self):
E=pd.Series([1.0, 50.0, 100.0]).astype("float32"),
F=pd.Series([3.0, 4.0, 5.0]).astype("float64"),
G=False,
- H=pd.Series([1, 200, 923442], dtype="int8"),
+ H=pd.Series([1, 200, 923442], dtype="int8")
)
)
@@ -486,9 +486,9 @@ def test_stack_mixed_levels(self):
("A", "cat", "long"),
("B", "cat", "long"),
("A", "dog", "short"),
- ("B", "dog", "short"),
+ ("B", "dog", "short")
],
- names=["exp", "animal", "hair_length"],
+ names=["exp", "animal", "hair_length"]
)
df = DataFrame(np.random.randn(4, 4), columns=columns)
@@ -530,9 +530,9 @@ def test_stack_int_level_names(self):
("A", "cat", "long"),
("B", "cat", "long"),
("A", "dog", "short"),
- ("B", "dog", "short"),
+ ("B", "dog", "short")
],
- names=["exp", "animal", "hair_length"],
+ names=["exp", "animal", "hair_length"]
)
df = DataFrame(np.random.randn(4, 4), columns=columns)
@@ -569,13 +569,13 @@ def test_unstack_bool(self):
df = DataFrame(
[False, False],
index=MultiIndex.from_arrays([["a", "b"], ["c", "l"]]),
- columns=["col"],
+ columns=["col"]
)
rs = df.unstack()
xp = DataFrame(
np.array([[False, np.nan], [np.nan, False]], dtype=object),
index=["a", "b"],
- columns=MultiIndex.from_arrays([["col", "col"], ["c", "l"]]),
+ columns=MultiIndex.from_arrays([["col", "col"], ["c", "l"]])
)
tm.assert_frame_equal(rs, xp)
@@ -584,7 +584,7 @@ def test_unstack_level_binding(self):
mi = pd.MultiIndex(
levels=[["foo", "bar"], ["one", "two"], ["a", "b"]],
codes=[[0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 1, 0]],
- names=["first", "second", "third"],
+ names=["first", "second", "third"]
)
s = pd.Series(0, index=mi)
result = s.unstack([1, 2]).stack(0)
@@ -592,7 +592,7 @@ def test_unstack_level_binding(self):
expected_mi = pd.MultiIndex(
levels=[["foo", "bar"], ["one", "two"]],
codes=[[0, 0, 1, 1], [0, 1, 0, 1]],
- names=["first", "second"],
+ names=["first", "second"]
)
expected = pd.DataFrame(
@@ -600,7 +600,7 @@ def test_unstack_level_binding(self):
[[np.nan, 0], [0, np.nan], [np.nan, 0], [0, np.nan]], dtype=np.float64
),
index=expected_mi,
- columns=pd.Index(["a", "b"], name="third"),
+ columns=pd.Index(["a", "b"], name="third")
)
tm.assert_frame_equal(result, expected)
@@ -620,7 +620,7 @@ def test_unstack_to_series(self, float_frame):
midx = MultiIndex(
levels=[["x", "y"], ["a", "b", "c"]],
- codes=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]],
+ codes=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]]
)
expected = Series([1, 2, np.NaN, 3, 4, np.NaN], index=midx)
@@ -650,7 +650,7 @@ def test_unstack_dtypes(self):
[np.dtype("int64")] * 4,
index=pd.MultiIndex.from_arrays(
[["C", "C", "D", "D"], [1, 2, 1, 2]], names=(None, "B")
- ),
+ )
)
tm.assert_series_equal(result, expected)
@@ -663,7 +663,7 @@ def test_unstack_dtypes(self):
[np.dtype("float64")] * 2 + [np.dtype("int64")] * 2,
index=pd.MultiIndex.from_arrays(
[["C", "C", "D", "D"], [1, 2, 1, 2]], names=(None, "B")
- ),
+ )
)
tm.assert_series_equal(result, expected)
df2["D"] = "foo"
@@ -673,14 +673,14 @@ def test_unstack_dtypes(self):
[np.dtype("float64")] * 2 + [np.dtype("object")] * 2,
index=pd.MultiIndex.from_arrays(
[["C", "C", "D", "D"], [1, 2, 1, 2]], names=(None, "B")
- ),
+ )
)
tm.assert_series_equal(result, expected)
# GH7405
for c, d in (
(np.zeros(5), np.zeros(5)),
- (np.arange(5, dtype="f8"), np.arange(5, 10, dtype="f8")),
+ (np.arange(5, dtype="f8"), np.arange(5, 10, dtype="f8"))
):
df = DataFrame(
@@ -688,7 +688,7 @@ def test_unstack_dtypes(self):
"A": ["a"] * 5,
"C": c,
"D": d,
- "B": pd.date_range("2012-01-01", periods=5),
+ "B": pd.date_range("2012-01-01", periods=5)
}
)
@@ -747,7 +747,7 @@ def test_unstack_unused_levels(self):
cases = (
(0, [13, 16, 6, 9, 2, 5, 8, 11], [np.nan, "a", 2], [np.nan, 5, 1]),
- (1, [8, 11, 1, 4, 12, 15, 13, 16], [np.nan, 5, 1], [np.nan, "a", 2]),
+ (1, [8, 11, 1, 4, 12, 15, 13, 16], [np.nan, 5, 1], [np.nan, "a", 2])
)
for level, idces, col_level, idx_level in cases:
result = df.unstack(level=level)
@@ -785,17 +785,17 @@ def test_unstack_long_index(self):
columns=pd.MultiIndex.from_tuples([[0]], names=["c1"]),
index=pd.MultiIndex.from_tuples(
[[0, 0, 1, 0, 0, 0, 1]],
- names=["i1", "i2", "i3", "i4", "i5", "i6", "i7"],
- ),
+ names=["i1", "i2", "i3", "i4", "i5", "i6", "i7"]
+ )
)
result = df.unstack(["i2", "i3", "i4", "i5", "i6", "i7"])
expected = pd.DataFrame(
[[1]],
columns=pd.MultiIndex.from_tuples(
[[0, 0, 1, 0, 0, 0, 1]],
- names=["c1", "i2", "i3", "i4", "i5", "i6", "i7"],
+ names=["c1", "i2", "i3", "i4", "i5", "i6", "i7"]
),
- index=pd.Index([0], name="i1"),
+ index=pd.Index([0], name="i1")
)
tm.assert_frame_equal(result, expected)
@@ -807,8 +807,8 @@ def test_unstack_multi_level_cols(self):
[["B", "C"], ["B", "D"]], names=["c1", "c2"]
),
index=pd.MultiIndex.from_tuples(
- [[10, 20, 30], [10, 20, 40]], names=["i1", "i2", "i3"],
- ),
+ [[10, 20, 30], [10, 20, 40]], names=["i1", "i2", "i3"]
+ )
)
assert df.unstack(["i2", "i1"]).columns.names[-2:] == ["i2", "i1"]
@@ -822,10 +822,10 @@ def test_unstack_multi_level_rows_and_cols(self):
["m1", "P3", 222],
["m1", "A5", 111],
["m2", "P3", 222],
- ["m2", "A5", 111],
+ ["m2", "A5", 111]
],
- names=["i1", "i2", "i3"],
- ),
+ names=["i1", "i2", "i3"]
+ )
)
result = df.unstack(["i3", "i2"])
expected = df.unstack(["i3"]).unstack(["i2"])
@@ -849,7 +849,7 @@ def verify(df):
{
"jim": ["a", "b", np.nan, "d"],
"joe": ["w", "x", "y", "z"],
- "jolie": ["a.w", "b.x", " .y", "d.z"],
+ "jolie": ["a.w", "b.x", " .y", "d.z"]
}
)
@@ -899,14 +899,14 @@ def verify(df):
14,
53,
60,
- 51,
- ],
+ 51
+ ]
}
)
df["4th"], df["5th"] = (
df.apply(lambda r: ".".join(map(cast, r)), axis=1),
- df.apply(lambda r: ".".join(map(cast, r.iloc[::-1])), axis=1),
+ df.apply(lambda r: ".".join(map(cast, r.iloc[::-1])), axis=1)
)
for idx in itertools.permutations(["1st", "2nd", "3rd"]):
@@ -924,7 +924,7 @@ def verify(df):
vals = [
[3, 0, 1, 2, np.nan, np.nan, np.nan, np.nan],
- [np.nan, np.nan, np.nan, np.nan, 4, 5, 6, 7],
+ [np.nan, np.nan, np.nan, np.nan, 4, 5, 6, 7]
]
vals = list(map(list, zip(*vals)))
idx = Index([np.nan, 0, 1, 2, 4, 5, 6, 7], name="B")
@@ -966,7 +966,7 @@ def verify(df):
{
"A": list("aaaaabbbbb"),
"B": (date_range("2012-01-01", periods=5).tolist() * 2),
- "C": np.arange(10),
+ "C": np.arange(10)
}
)
@@ -978,7 +978,7 @@ def verify(df):
cols = MultiIndex(
levels=[["C"], date_range("2012-01-01", periods=5)],
codes=[[0, 0, 0, 0, 0, 0], [-1, 0, 1, 2, 3, 4]],
- names=[None, "B"],
+ names=[None, "B"]
)
right = DataFrame(vals, columns=cols, index=idx)
@@ -991,31 +991,31 @@ def verify(df):
["Pb", 7.07e-06, np.nan, 680585148],
["Sn", 2.3614e-05, 0.0133, 680607017],
["Ag", 0.0, 0.0133, 680607017],
- ["Hg", -0.00015, 0.0133, 680607017],
+ ["Hg", -0.00015, 0.0133, 680607017]
]
df = DataFrame(
vals,
columns=["agent", "change", "dosage", "s_id"],
- index=[17263, 17264, 17265, 17266, 17267, 17268],
+ index=[17263, 17264, 17265, 17266, 17267, 17268]
)
left = df.copy().set_index(["s_id", "dosage", "agent"]).unstack()
vals = [
[np.nan, np.nan, 7.07e-06, np.nan, 0.0],
- [0.0, -0.00015, np.nan, 2.3614e-05, np.nan],
+ [0.0, -0.00015, np.nan, 2.3614e-05, np.nan]
]
idx = MultiIndex(
levels=[[680585148, 680607017], [0.0133]],
codes=[[0, 1], [-1, 0]],
- names=["s_id", "dosage"],
+ names=["s_id", "dosage"]
)
cols = MultiIndex(
levels=[["change"], ["Ag", "Hg", "Pb", "Sn", "U"]],
codes=[[0, 0, 0, 0, 0], [0, 1, 2, 3, 4]],
- names=[None, "agent"],
+ names=[None, "agent"]
)
right = DataFrame(vals, columns=cols, index=idx)
@@ -1030,7 +1030,7 @@ def verify(df):
"1st": [1, 2, 1, 2, 1, 2],
"2nd": pd.date_range("2014-02-01", periods=6, freq="D"),
"jim": 100 + np.arange(6),
- "joe": (np.random.randn(6) * 10).round(2),
+ "joe": (np.random.randn(6) * 10).round(2)
}
)
@@ -1062,7 +1062,7 @@ def test_stack_partial_multiIndex(self):
def _test_stack_with_multiindex(multiindex):
df = DataFrame(
np.arange(3 * len(multiindex)).reshape(3, len(multiindex)),
- columns=multiindex,
+ columns=multiindex
)
for level in (-1, 0, 1, [0, 1], [1, 0]):
result = df.stack(level=level, dropna=False)
@@ -1088,7 +1088,7 @@ def _test_stack_with_multiindex(multiindex):
full_multiindex = MultiIndex.from_tuples(
[("B", "x"), ("B", "z"), ("A", "y"), ("C", "x"), ("C", "u")],
- names=["Upper", "Lower"],
+ names=["Upper", "Lower"]
)
for multiindex_columns in (
[0, 1, 2, 3, 4],
@@ -1102,7 +1102,7 @@ def _test_stack_with_multiindex(multiindex):
[0, 3],
[0],
[2],
- [4],
+ [4]
):
_test_stack_with_multiindex(full_multiindex[multiindex_columns])
if len(multiindex_columns) > 1:
@@ -1116,10 +1116,10 @@ def _test_stack_with_multiindex(multiindex):
index=MultiIndex(
levels=[[0, 1], ["u", "x", "y", "z"]],
codes=[[0, 0, 1, 1], [1, 3, 1, 3]],
- names=[None, "Lower"],
+ names=[None, "Lower"]
),
columns=Index(["B", "C"], name="Upper"),
- dtype=df.dtypes[0],
+ dtype=df.dtypes[0]
)
tm.assert_frame_equal(result, expected)
@@ -1154,8 +1154,8 @@ def test_stack_preserve_categorical_dtype_values(self):
[
([0, 0, 1, 1], pd.MultiIndex.from_product([[1, 2], ["a", "b"]])),
([0, 0, 2, 3], pd.MultiIndex.from_product([[1, 2], ["a", "b"]])),
- ([0, 1, 2, 3], pd.MultiIndex.from_product([[1, 2], ["a", "b"]])),
- ],
+ ([0, 1, 2, 3], pd.MultiIndex.from_product([[1, 2], ["a", "b"]]))
+ ]
)
def test_stack_multi_columns_non_unique_index(self, index, columns):
# GH-28301
@@ -1178,9 +1178,9 @@ def test_unstack_mixed_extension_types(self, level):
df = pd.DataFrame(
{
"A": pd.core.arrays.integer_array([0, 1, None]),
- "B": pd.Categorical(["a", "a", "b"]),
+ "B": pd.Categorical(["a", "a", "b"])
},
- index=index,
+ index=index
)
result = df.unstack(level=level)
@@ -1203,7 +1203,7 @@ def test_unstack_swaplevel_sortlevel(self, level):
[[3, 1, 2, 0]],
columns=pd.MultiIndex.from_tuples(
[("c", "A"), ("c", "B"), ("d", "A"), ("d", "B")], names=["baz", "foo"]
- ),
+ )
)
expected.index.name = "bar"
@@ -1240,9 +1240,9 @@ def test_unstack_timezone_aware_values():
"timestamp": [pd.Timestamp("2017-08-27 01:00:00.709949+0000", tz="UTC")],
"a": ["a"],
"b": ["b"],
- "c": ["c"],
+ "c": ["c"]
},
- columns=["timestamp", "a", "b", "c"],
+ columns=["timestamp", "a", "b", "c"]
)
result = df.set_index(["a", "b"]).unstack()
expected = pd.DataFrame(
@@ -1251,8 +1251,8 @@ def test_unstack_timezone_aware_values():
columns=pd.MultiIndex(
levels=[["timestamp", "c"], ["b"]],
codes=[[0, 1], [0, 0]],
- names=[None, "b"],
- ),
+ names=[None, "b"]
+ )
)
tm.assert_frame_equal(result, expected)
@@ -1268,7 +1268,7 @@ def test_stack_timezone_aware_values():
ts,
index=pd.MultiIndex(
levels=[["a", "b", "c"], ["A"]], codes=[[0, 1, 2], [0, 0, 0]]
- ),
+ )
)
tm.assert_series_equal(result, expected)
@@ -1281,7 +1281,7 @@ def test_unstacking_multi_index_df():
"score": [9.5, 8],
"employed": [False, True],
"kids": [0, 0],
- "gender": ["female", "male"],
+ "gender": ["female", "male"]
}
)
df = df.set_index(["name", "employed", "kids", "gender"])
@@ -1296,9 +1296,9 @@ def test_unstacking_multi_index_df():
("score", "female", False, 0),
("score", "female", True, 0),
("score", "male", False, 0),
- ("score", "male", True, 0),
+ ("score", "male", True, 0)
],
- names=[None, "gender", "employed", "kids"],
- ),
+ names=[None, "gender", "employed", "kids"]
+ )
)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/generic/test_finalize.py b/pandas/tests/generic/test_finalize.py
index 4d0f1a326225d..6e6d25a67c50f 100644
--- a/pandas/tests/generic/test_finalize.py
+++ b/pandas/tests/generic/test_finalize.py
@@ -33,14 +33,14 @@
(
pd.Series,
(np.array([0], dtype="float64")),
- operator.methodcaller("view", "int64"),
+ operator.methodcaller("view", "int64")
),
(pd.Series, ([0],), operator.methodcaller("take", [])),
(pd.Series, ([0],), operator.methodcaller("__getitem__", [True])),
(pd.Series, ([0],), operator.methodcaller("repeat", 2)),
pytest.param(
(pd.Series, ([0],), operator.methodcaller("reset_index")),
- marks=pytest.mark.xfail,
+ marks=pytest.mark.xfail
),
(pd.Series, ([0],), operator.methodcaller("reset_index", drop=True)),
pytest.param(
@@ -69,25 +69,25 @@
(
pd.Series,
([0], pd.period_range("2000", periods=1)),
- operator.methodcaller("to_timestamp"),
+ operator.methodcaller("to_timestamp")
),
(
pd.Series,
([0], pd.date_range("2000", periods=1)),
- operator.methodcaller("to_period"),
+ operator.methodcaller("to_period")
),
pytest.param(
(
pd.DataFrame,
frame_data,
- operator.methodcaller("dot", pd.DataFrame(index=["A"])),
+ operator.methodcaller("dot", pd.DataFrame(index=["A"]))
),
- marks=pytest.mark.xfail(reason="Implement binary finalize"),
+ marks=pytest.mark.xfail(reason="Implement binary finalize")
),
(pd.DataFrame, frame_data, operator.methodcaller("transpose")),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("__getitem__", "A")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
(pd.DataFrame, frame_data, operator.methodcaller("__getitem__", ["A"])),
(pd.DataFrame, frame_data, operator.methodcaller("__getitem__", np.array([True]))),
@@ -95,7 +95,7 @@
(pd.DataFrame, frame_data, operator.methodcaller("query", "A == 1")),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("eval", "A + 1")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
(pd.DataFrame, frame_data, operator.methodcaller("select_dtypes", include="int")),
(pd.DataFrame, frame_data, operator.methodcaller("assign", b=1)),
@@ -117,7 +117,7 @@
(pd.DataFrame, frame_data, operator.methodcaller("drop_duplicates")),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("duplicated")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
(pd.DataFrame, frame_data, operator.methodcaller("sort_values", by="A")),
(pd.DataFrame, frame_data, operator.methodcaller("sort_index")),
@@ -128,193 +128,193 @@
(
pd.DataFrame,
frame_data,
- operator.methodcaller("add", pd.DataFrame(*frame_data)),
+ operator.methodcaller("add", pd.DataFrame(*frame_data))
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
# TODO: div, mul, etc.
pytest.param(
(
pd.DataFrame,
frame_data,
- operator.methodcaller("combine", pd.DataFrame(*frame_data), operator.add),
+ operator.methodcaller("combine", pd.DataFrame(*frame_data), operator.add)
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(
pd.DataFrame,
frame_data,
- operator.methodcaller("combine_first", pd.DataFrame(*frame_data)),
+ operator.methodcaller("combine_first", pd.DataFrame(*frame_data))
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(
pd.DataFrame,
frame_data,
- operator.methodcaller("update", pd.DataFrame(*frame_data)),
+ operator.methodcaller("update", pd.DataFrame(*frame_data))
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("pivot", columns="A")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(
pd.DataFrame,
{"A": [1], "B": [1]},
- operator.methodcaller("pivot_table", columns="A"),
+ operator.methodcaller("pivot_table", columns="A")
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("stack")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("explode", "A")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_mi_data, operator.methodcaller("unstack"),),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(
pd.DataFrame,
({"A": ["a", "b", "c"], "B": [1, 3, 5], "C": [2, 4, 6]},),
- operator.methodcaller("melt", id_vars=["A"], value_vars=["B"]),
+ operator.methodcaller("melt", id_vars=["A"], value_vars=["B"])
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("diff")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("applymap", lambda x: x)),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(
pd.DataFrame,
frame_data,
- operator.methodcaller("append", pd.DataFrame({"A": [1]})),
+ operator.methodcaller("append", pd.DataFrame({"A": [1]}))
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(
pd.DataFrame,
frame_data,
- operator.methodcaller("append", pd.DataFrame({"B": [1]})),
+ operator.methodcaller("append", pd.DataFrame({"B": [1]}))
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(
pd.DataFrame,
frame_data,
- operator.methodcaller("merge", pd.DataFrame({"A": [1]})),
+ operator.methodcaller("merge", pd.DataFrame({"A": [1]}))
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("round", 2)),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("corr")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("cov")),
marks=[
not_implemented_mark,
- pytest.mark.filterwarnings("ignore::RuntimeWarning"),
- ],
+ pytest.mark.filterwarnings("ignore::RuntimeWarning")
+ ]
),
pytest.param(
(
pd.DataFrame,
frame_data,
- operator.methodcaller("corrwith", pd.DataFrame(*frame_data)),
+ operator.methodcaller("corrwith", pd.DataFrame(*frame_data))
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("count")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_mi_data, operator.methodcaller("count", level="A")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("nunique")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("idxmin")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("idxmax")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("mode")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("quantile")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("quantile", q=[0.25, 0.75])),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("quantile")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
(
pd.DataFrame,
({"A": [1]}, [pd.Period("2000", "D")]),
- operator.methodcaller("to_timestamp"),
+ operator.methodcaller("to_timestamp")
),
(
pd.DataFrame,
({"A": [1]}, [pd.Timestamp("2000")]),
- operator.methodcaller("to_period", freq="D"),
+ operator.methodcaller("to_period", freq="D")
),
pytest.param(
(pd.DataFrame, frame_mi_data, operator.methodcaller("isin", [1])),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_mi_data, operator.methodcaller("isin", pd.Series([1]))),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(
pd.DataFrame,
frame_mi_data,
- operator.methodcaller("isin", pd.DataFrame({"A": [1]})),
+ operator.methodcaller("isin", pd.DataFrame({"A": [1]}))
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
(pd.DataFrame, frame_data, operator.methodcaller("swapaxes", 0, 1)),
(pd.DataFrame, frame_mi_data, operator.methodcaller("droplevel", "A")),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("pop", "A")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("squeeze")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.Series, ([1, 2],), operator.methodcaller("squeeze")),
@@ -338,17 +338,17 @@
(pd.Series, (1, mi), operator.methodcaller("xs", "a")),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("get", "A")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
(
pd.DataFrame,
frame_data,
- operator.methodcaller("reindex_like", pd.DataFrame({"A": [1, 2, 3]})),
+ operator.methodcaller("reindex_like", pd.DataFrame({"A": [1, 2, 3]}))
),
(
pd.Series,
frame_data,
- operator.methodcaller("reindex_like", pd.Series([0, 1, 2])),
+ operator.methodcaller("reindex_like", pd.Series([0, 1, 2]))
),
(pd.DataFrame, frame_data, operator.methodcaller("add_prefix", "_")),
(pd.DataFrame, frame_data, operator.methodcaller("add_suffix", "_")),
@@ -369,12 +369,12 @@
(
pd.DataFrame,
({"A": np.array([1, 2], dtype=object)},),
- operator.methodcaller("infer_objects"),
+ operator.methodcaller("infer_objects")
),
(pd.Series, ([1, 2],), operator.methodcaller("convert_dtypes")),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("convert_dtypes")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
(pd.Series, ([1, None, 3],), operator.methodcaller("interpolate")),
(pd.DataFrame, ({"A": [1, None, 3]},), operator.methodcaller("interpolate")),
@@ -383,52 +383,52 @@
(
pd.Series,
(1, pd.date_range("2000", periods=4)),
- operator.methodcaller("asfreq", "H"),
+ operator.methodcaller("asfreq", "H")
),
(
pd.DataFrame,
({"A": [1, 1, 1, 1]}, pd.date_range("2000", periods=4)),
- operator.methodcaller("asfreq", "H"),
+ operator.methodcaller("asfreq", "H")
),
(
pd.Series,
(1, pd.date_range("2000", periods=4)),
- operator.methodcaller("at_time", "12:00"),
+ operator.methodcaller("at_time", "12:00")
),
(
pd.DataFrame,
({"A": [1, 1, 1, 1]}, pd.date_range("2000", periods=4)),
- operator.methodcaller("at_time", "12:00"),
+ operator.methodcaller("at_time", "12:00")
),
(
pd.Series,
(1, pd.date_range("2000", periods=4)),
- operator.methodcaller("between_time", "12:00", "13:00"),
+ operator.methodcaller("between_time", "12:00", "13:00")
),
(
pd.DataFrame,
({"A": [1, 1, 1, 1]}, pd.date_range("2000", periods=4)),
- operator.methodcaller("between_time", "12:00", "13:00"),
+ operator.methodcaller("between_time", "12:00", "13:00")
),
(
pd.Series,
(1, pd.date_range("2000", periods=4)),
- operator.methodcaller("first", "3D"),
+ operator.methodcaller("first", "3D")
),
(
pd.DataFrame,
({"A": [1, 1, 1, 1]}, pd.date_range("2000", periods=4)),
- operator.methodcaller("first", "3D"),
+ operator.methodcaller("first", "3D")
),
(
pd.Series,
(1, pd.date_range("2000", periods=4)),
- operator.methodcaller("last", "3D"),
+ operator.methodcaller("last", "3D")
),
(
pd.DataFrame,
({"A": [1, 1, 1, 1]}, pd.date_range("2000", periods=4)),
- operator.methodcaller("last", "3D"),
+ operator.methodcaller("last", "3D")
),
(pd.Series, ([1, 2],), operator.methodcaller("rank")),
(pd.DataFrame, frame_data, operator.methodcaller("rank")),
@@ -442,66 +442,66 @@
(
pd.Series,
(1, pd.date_range("2000", periods=4)),
- operator.methodcaller("tshift"),
+ operator.methodcaller("tshift")
),
- marks=pytest.mark.filterwarnings("ignore::FutureWarning"),
+ marks=pytest.mark.filterwarnings("ignore::FutureWarning")
),
pytest.param(
(
pd.DataFrame,
({"A": [1, 1, 1, 1]}, pd.date_range("2000", periods=4)),
- operator.methodcaller("tshift"),
+ operator.methodcaller("tshift")
),
- marks=pytest.mark.filterwarnings("ignore::FutureWarning"),
+ marks=pytest.mark.filterwarnings("ignore::FutureWarning")
),
(pd.Series, ([1, 2],), operator.methodcaller("truncate", before=0)),
(pd.DataFrame, frame_data, operator.methodcaller("truncate", before=0)),
(
pd.Series,
(1, pd.date_range("2000", periods=4, tz="UTC")),
- operator.methodcaller("tz_convert", "CET"),
+ operator.methodcaller("tz_convert", "CET")
),
(
pd.DataFrame,
({"A": [1, 1, 1, 1]}, pd.date_range("2000", periods=4, tz="UTC")),
- operator.methodcaller("tz_convert", "CET"),
+ operator.methodcaller("tz_convert", "CET")
),
(
pd.Series,
(1, pd.date_range("2000", periods=4)),
- operator.methodcaller("tz_localize", "CET"),
+ operator.methodcaller("tz_localize", "CET")
),
(
pd.DataFrame,
({"A": [1, 1, 1, 1]}, pd.date_range("2000", periods=4)),
- operator.methodcaller("tz_localize", "CET"),
+ operator.methodcaller("tz_localize", "CET")
),
pytest.param(
(pd.Series, ([1, 2],), operator.methodcaller("describe")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("describe")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
(pd.Series, ([1, 2],), operator.methodcaller("pct_change")),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("pct_change")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
(pd.Series, ([1],), operator.methodcaller("transform", lambda x: x - x.min())),
pytest.param(
(
pd.DataFrame,
frame_mi_data,
- operator.methodcaller("transform", lambda x: x - x.min()),
+ operator.methodcaller("transform", lambda x: x - x.min())
),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
(pd.Series, ([1],), operator.methodcaller("apply", lambda x: x)),
pytest.param(
(pd.DataFrame, frame_mi_data, operator.methodcaller("apply", lambda x: x)),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
# Cumulative reductions
(pd.Series, ([1],), operator.methodcaller("cumsum")),
@@ -509,19 +509,19 @@
# Reductions
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("any")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("sum")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("std")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
pytest.param(
(pd.DataFrame, frame_data, operator.methodcaller("mean")),
- marks=not_implemented_mark,
+ marks=not_implemented_mark
),
]
@@ -568,8 +568,8 @@ def test_finalize_called(ndframe_method):
(pd.Series([1]), pd.Series([1])),
(pd.DataFrame({"A": [1]}), pd.DataFrame({"A": [1]})),
(pd.Series([1]), pd.DataFrame({"A": [1]})),
- (pd.DataFrame({"A": [1]}), pd.Series([1])),
- ],
+ (pd.DataFrame({"A": [1]}), pd.Series([1]))
+ ]
)
def test_binops(args, annotate, all_arithmetic_functions):
# This generates 326 tests... Is that needed?
@@ -600,7 +600,7 @@ def test_binops(args, annotate, all_arithmetic_functions):
operator.methodcaller("casefold"),
pytest.param(
operator.methodcaller("cat", ["a"]),
- marks=pytest.mark.xfail(reason="finalize not called."),
+ marks=pytest.mark.xfail(reason="finalize not called.")
),
operator.methodcaller("contains", "a"),
operator.methodcaller("count", "a"),
@@ -608,11 +608,11 @@ def test_binops(args, annotate, all_arithmetic_functions):
operator.methodcaller("endswith", "a"),
pytest.param(
operator.methodcaller("extract", r"(\w)(\d)"),
- marks=pytest.mark.xfail(reason="finalize not called."),
+ marks=pytest.mark.xfail(reason="finalize not called.")
),
pytest.param(
operator.methodcaller("extract", r"(\w)(\d)"),
- marks=pytest.mark.xfail(reason="finalize not called."),
+ marks=pytest.mark.xfail(reason="finalize not called.")
),
operator.methodcaller("find", "a"),
operator.methodcaller("findall", "a"),
@@ -651,9 +651,9 @@ def test_binops(args, annotate, all_arithmetic_functions):
operator.methodcaller("istitle"),
operator.methodcaller("isnumeric"),
operator.methodcaller("isdecimal"),
- operator.methodcaller("get_dummies"),
+ operator.methodcaller("get_dummies")
],
- ids=idfn,
+ ids=idfn
)
@not_implemented_mark
def test_string_method(method):
@@ -674,9 +674,9 @@ def test_string_method(method):
operator.methodcaller("floor", "H"),
operator.methodcaller("ceil", "H"),
operator.methodcaller("month_name"),
- operator.methodcaller("day_name"),
+ operator.methodcaller("day_name")
],
- ids=idfn,
+ ids=idfn
)
@not_implemented_mark
def test_datetime_method(method):
@@ -711,8 +711,8 @@ def test_datetime_method(method):
"is_year_end",
"is_leap_year",
"daysinmonth",
- "days_in_month",
- ],
+ "days_in_month"
+ ]
)
@not_implemented_mark
def test_datetime_property(attr):
@@ -734,7 +734,7 @@ def test_timedelta_property(attr):
@pytest.mark.parametrize(
- "method", [operator.methodcaller("total_seconds")],
+ "method", [operator.methodcaller("total_seconds")]
)
@not_implemented_mark
def test_timedelta_methods(method):
@@ -755,8 +755,8 @@ def test_timedelta_methods(method):
operator.methodcaller("remove_unused_categories"),
operator.methodcaller("rename_categories", {"a": "A", "b": "B"}),
operator.methodcaller("reorder_categories", ["b", "a"]),
- operator.methodcaller("set_categories", ["A", "B"]),
- ],
+ operator.methodcaller("set_categories", ["A", "B"])
+ ]
)
@not_implemented_mark
def test_categorical_accessor(method):
@@ -780,8 +780,8 @@ def test_categorical_accessor(method):
lambda x: x.agg("sum"),
lambda x: x.agg(["sum", "count"]),
lambda x: x.transform(lambda y: y),
- lambda x: x.apply(lambda y: y),
- ],
+ lambda x: x.apply(lambda y: y)
+ ]
)
@not_implemented_mark
def test_groupby(obj, method):
diff --git a/pandas/tests/generic/test_to_xarray.py b/pandas/tests/generic/test_to_xarray.py
index ab56a752f7e90..88bf10eec1d2c 100644
--- a/pandas/tests/generic/test_to_xarray.py
+++ b/pandas/tests/generic/test_to_xarray.py
@@ -27,7 +27,7 @@ def test_to_xarray_index_types(self, index):
"e": [True, False, True],
"f": pd.Categorical(list("abc")),
"g": pd.date_range("20130101", periods=3),
- "h": pd.date_range("20130101", periods=3, tz="US/Eastern"),
+ "h": pd.date_range("20130101", periods=3, tz="US/Eastern")
}
)
@@ -48,7 +48,7 @@ def test_to_xarray_index_types(self, index):
expected["f"] = expected["f"].astype(object)
expected.columns.name = None
tm.assert_frame_equal(
- result.to_dataframe(), expected,
+ result.to_dataframe(), expected
)
@td.skip_if_no("xarray", min_version="0.7.0")
@@ -64,7 +64,7 @@ def test_to_xarray(self):
"e": [True, False, True],
"f": pd.Categorical(list("abc")),
"g": pd.date_range("20130101", periods=3),
- "h": pd.date_range("20130101", periods=3, tz="US/Eastern"),
+ "h": pd.date_range("20130101", periods=3, tz="US/Eastern")
}
)
diff --git a/pandas/tests/groupby/aggregate/test_numba.py b/pandas/tests/groupby/aggregate/test_numba.py
index 29e65e938f6f9..e8c2a91916b6c 100644
--- a/pandas/tests/groupby/aggregate/test_numba.py
+++ b/pandas/tests/groupby/aggregate/test_numba.py
@@ -16,7 +16,7 @@ def incorrect_function(x):
data = DataFrame(
{"key": ["a", "a", "b", "b", "a"], "data": [1.0, 2.0, 3.0, 4.0, 5.0]},
- columns=["key", "data"],
+ columns=["key", "data"]
)
with pytest.raises(NumbaUtilError, match="The first 2"):
data.groupby("key").agg(incorrect_function, engine="numba")
@@ -32,7 +32,7 @@ def incorrect_function(x, **kwargs):
data = DataFrame(
{"key": ["a", "a", "b", "b", "a"], "data": [1.0, 2.0, 3.0, 4.0, 5.0]},
- columns=["key", "data"],
+ columns=["key", "data"]
)
with pytest.raises(NumbaUtilError, match="numba does not support"):
data.groupby("key").agg(incorrect_function, engine="numba", a=1)
@@ -57,7 +57,7 @@ def func_numba(values, index):
func_numba = numba.jit(func_numba)
data = DataFrame(
- {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1],
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1]
)
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
grouped = data.groupby(0)
@@ -90,7 +90,7 @@ def func_2(values, index):
func_2 = numba.jit(func_2)
data = DataFrame(
- {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1],
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1]
)
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
grouped = data.groupby(0)
@@ -121,7 +121,7 @@ def func_1(values, index):
return np.mean(values) - 3.4
data = DataFrame(
- {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1],
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1]
)
grouped = data.groupby(0)
expected = grouped.agg(func_1, engine="numba")
@@ -137,12 +137,12 @@ def func_1(values, index):
["min", "max"],
"min",
{"B": ["min", "max"], "C": "sum"},
- NamedAgg(column="B", aggfunc="min"),
- ],
+ NamedAgg(column="B", aggfunc="min")
+ ]
)
def test_multifunc_notimplimented(agg_func):
data = DataFrame(
- {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1],
+ {0: ["a", "a", "b", "b", "a"], 1: [1.0, 2.0, 3.0, 4.0, 5.0]}, columns=[0, 1]
)
grouped = data.groupby(0)
with pytest.raises(NotImplementedError, match="Numba engine can"):
| - [x] contributes to #35925
- [x] tests added / passed
Tested with:
`./ci/code_checks.sh`
Files edited:
pandas/tests/frame/test_analytics.py,
pandas/tests/frame/test_constructors.py,
pandas/tests/frame/test_reshape.py,
pandas/tests/generic/test_finalize.py,
pandas/tests/generic/test_to_xarray.py,
pandas/tests/groupby/aggregate/test_numba.py
| https://api.github.com/repos/pandas-dev/pandas/pulls/35970 | 2020-08-29T02:40:36Z | 2020-08-29T03:31:50Z | null | 2020-08-29T03:31:50Z |
TYP: annotate plotting._matplotlib.tools | diff --git a/pandas/plotting/_matplotlib/tools.py b/pandas/plotting/_matplotlib/tools.py
index 26b25597ce1a6..4d643ffb734e4 100644
--- a/pandas/plotting/_matplotlib/tools.py
+++ b/pandas/plotting/_matplotlib/tools.py
@@ -1,6 +1,6 @@
# being a bit too dynamic
from math import ceil
-from typing import TYPE_CHECKING, Tuple
+from typing import TYPE_CHECKING, Iterable, List, Sequence, Tuple, Union
import warnings
import matplotlib.table
@@ -15,10 +15,13 @@
from pandas.plotting._matplotlib import compat
if TYPE_CHECKING:
+ from matplotlib.axes import Axes
+ from matplotlib.axis import Axis
+ from matplotlib.lines import Line2D # noqa:F401
from matplotlib.table import Table
-def format_date_labels(ax, rot):
+def format_date_labels(ax: "Axes", rot):
# mini version of autofmt_xdate
for label in ax.get_xticklabels():
label.set_ha("right")
@@ -278,7 +281,7 @@ def _subplots(
return fig, axes
-def _remove_labels_from_axis(axis):
+def _remove_labels_from_axis(axis: "Axis"):
for t in axis.get_majorticklabels():
t.set_visible(False)
@@ -294,7 +297,15 @@ def _remove_labels_from_axis(axis):
axis.get_label().set_visible(False)
-def _handle_shared_axes(axarr, nplots, naxes, nrows, ncols, sharex, sharey):
+def _handle_shared_axes(
+ axarr: Iterable["Axes"],
+ nplots: int,
+ naxes: int,
+ nrows: int,
+ ncols: int,
+ sharex: bool,
+ sharey: bool,
+):
if nplots > 1:
if compat._mpl_ge_3_2_0():
row_num = lambda x: x.get_subplotspec().rowspan.start
@@ -340,7 +351,7 @@ def _handle_shared_axes(axarr, nplots, naxes, nrows, ncols, sharex, sharey):
_remove_labels_from_axis(ax.yaxis)
-def _flatten(axes):
+def _flatten(axes: Union["Axes", Sequence["Axes"]]) -> Sequence["Axes"]:
if not is_list_like(axes):
return np.array([axes])
elif isinstance(axes, (np.ndarray, ABCIndexClass)):
@@ -348,7 +359,13 @@ def _flatten(axes):
return np.array(axes)
-def _set_ticks_props(axes, xlabelsize=None, xrot=None, ylabelsize=None, yrot=None):
+def _set_ticks_props(
+ axes: Union["Axes", Sequence["Axes"]],
+ xlabelsize=None,
+ xrot=None,
+ ylabelsize=None,
+ yrot=None,
+):
import matplotlib.pyplot as plt
for ax in _flatten(axes):
@@ -363,7 +380,7 @@ def _set_ticks_props(axes, xlabelsize=None, xrot=None, ylabelsize=None, yrot=Non
return axes
-def _get_all_lines(ax):
+def _get_all_lines(ax: "Axes") -> List["Line2D"]:
lines = ax.get_lines()
if hasattr(ax, "right_ax"):
@@ -375,7 +392,7 @@ def _get_all_lines(ax):
return lines
-def _get_xlim(lines) -> Tuple[float, float]:
+def _get_xlim(lines: Iterable["Line2D"]) -> Tuple[float, float]:
left, right = np.inf, -np.inf
for l in lines:
x = l.get_xdata(orig=False)
| Same idea as #35960, focused on clarifying Axis vs Axes | https://api.github.com/repos/pandas-dev/pandas/pulls/35968 | 2020-08-28T22:30:05Z | 2020-08-31T10:15:05Z | 2020-08-31T10:15:05Z | 2020-08-31T14:45:45Z |
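The annotations in the diff above all lean on the `typing.TYPE_CHECKING` guard: matplotlib types such as `Axes` and `Line2D` are imported only for static analysis and referenced as string literals, so the module never pays the runtime import cost. A minimal sketch of that pattern (the function body here is illustrative, loosely mirroring `format_date_labels` from the diff; it is not the exact pandas implementation):

```python
from typing import TYPE_CHECKING

# TYPE_CHECKING is False at runtime, so the matplotlib import below is
# only resolved by static type checkers (mypy, pyright), never executed.
if TYPE_CHECKING:
    from matplotlib.axes import Axes  # noqa: F401


def format_date_labels(ax: "Axes", rot: float) -> None:
    # Illustrative body: right-align and rotate the x tick labels,
    # in the spirit of the function annotated in the PR.
    for label in ax.get_xticklabels():
        label.set_ha("right")
        label.set_rotation(rot)


# Because "Axes" is a string annotation, it stays a plain string at
# runtime -- the guarded import is safe even without matplotlib installed:
runtime_annotation = format_date_labels.__annotations__["ax"]
```

This is why the PR can distinguish `Axes` (a plot area) from `Axis` (a single x- or y-axis object) in signatures without adding any hard matplotlib dependency at import time.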
Deprecate groupby/pivot observed=False default | diff --git a/doc/source/user_guide/10min.rst b/doc/source/user_guide/10min.rst
index cf548ba5d1133..81c33b53e21a8 100644
--- a/doc/source/user_guide/10min.rst
+++ b/doc/source/user_guide/10min.rst
@@ -702,11 +702,11 @@ Sorting is per order in the categories, not lexical order.
df.sort_values(by="grade")
-Grouping by a categorical column also shows empty categories.
+Grouping by a categorical column can also show empty categories, using the observed keyword.
.. ipython:: python
- df.groupby("grade").size()
+ df.groupby("grade", observed=False).size()
Plotting
diff --git a/doc/source/user_guide/advanced.rst b/doc/source/user_guide/advanced.rst
index 2cd48ac7adb0e..f952bd9150ce5 100644
--- a/doc/source/user_guide/advanced.rst
+++ b/doc/source/user_guide/advanced.rst
@@ -809,8 +809,8 @@ Groupby operations on the index will preserve the index nature as well.
.. ipython:: python
- df2.groupby(level=0).sum()
- df2.groupby(level=0).sum().index
+ df2.groupby(level=0, observed=False).sum()
+ df2.groupby(level=0, observed=False).sum().index
Reindexing operations will return a resulting index based on the type of the passed
indexer. Passing a list will return a plain-old ``Index``; indexing with
diff --git a/doc/source/user_guide/categorical.rst b/doc/source/user_guide/categorical.rst
index 5c43de05fb5b9..0221bc4101b63 100644
--- a/doc/source/user_guide/categorical.rst
+++ b/doc/source/user_guide/categorical.rst
@@ -622,7 +622,7 @@ even if some categories are not present in the data:
s = pd.Series(pd.Categorical(["a", "b", "c", "c"], categories=["c", "a", "b", "d"]))
s.value_counts()
-``DataFrame`` methods like :meth:`DataFrame.sum` also show "unused" categories.
+``DataFrame`` methods like :meth:`DataFrame.sum` also show "unused" categories:
.. ipython:: python
@@ -635,7 +635,8 @@ even if some categories are not present in the data:
)
df.sum(axis=1, level=1)
-Groupby will also show "unused" categories:
+Groupby will also show "unused" categories by default, though this behavior
+is deprecated. In a future release, users must specify a value for ``observed``:
.. ipython:: python
@@ -643,7 +644,7 @@ Groupby will also show "unused" categories:
["a", "b", "b", "b", "c", "c", "c"], categories=["a", "b", "c", "d"]
)
df = pd.DataFrame({"cats": cats, "values": [1, 2, 2, 2, 3, 4, 5]})
- df.groupby("cats").mean()
+ df.groupby("cats", observed=False).mean()
cats2 = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])
df2 = pd.DataFrame(
@@ -653,7 +654,7 @@ Groupby will also show "unused" categories:
"values": [1, 2, 3, 4],
}
)
- df2.groupby(["cats", "B"]).mean()
+ df2.groupby(["cats", "B"], observed=False).mean()
Pivot tables:
@@ -662,7 +663,7 @@ Pivot tables:
raw_cat = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])
df = pd.DataFrame({"A": raw_cat, "B": ["c", "d", "c", "d"], "values": [1, 2, 3, 4]})
- pd.pivot_table(df, values="values", index=["A", "B"])
+ pd.pivot_table(df, values="values", index=["A", "B"], observed=False)
Data munging
------------
diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index d6081155b58db..b6f30beae1dbb 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -1269,7 +1269,7 @@ can be used as group keys. If so, the order of the levels will be preserved:
factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
- data.groupby(factor).mean()
+ data.groupby(factor, observed=True).mean()
.. _groupby.specify:
diff --git a/doc/source/whatsnew/v0.19.0.rst b/doc/source/whatsnew/v0.19.0.rst
index 340e1ce9ee1ef..cec8e44806250 100644
--- a/doc/source/whatsnew/v0.19.0.rst
+++ b/doc/source/whatsnew/v0.19.0.rst
@@ -1131,6 +1131,7 @@ An analogous change has been made to ``MultiIndex.from_product``.
As a consequence, ``groupby`` and ``set_index`` also preserve categorical dtypes in indexes
.. ipython:: python
+ :okwarning:
df = pd.DataFrame({"A": [0, 1], "B": [10, 11], "C": cat})
df_grouped = df.groupby(by=["A", "C"]).first()
diff --git a/doc/source/whatsnew/v0.20.0.rst b/doc/source/whatsnew/v0.20.0.rst
index 2cb8e13e9a18a..dbd77aab4ff3d 100644
--- a/doc/source/whatsnew/v0.20.0.rst
+++ b/doc/source/whatsnew/v0.20.0.rst
@@ -291,6 +291,7 @@ In previous versions, ``.groupby(..., sort=False)`` would fail with a ``ValueErr
**New behavior**:
.. ipython:: python
+ :okwarning:
df[df.chromosomes != '1'].groupby('chromosomes', sort=False).sum()
diff --git a/doc/source/whatsnew/v0.22.0.rst b/doc/source/whatsnew/v0.22.0.rst
index ec9769c22e76b..d8672be0bc711 100644
--- a/doc/source/whatsnew/v0.22.0.rst
+++ b/doc/source/whatsnew/v0.22.0.rst
@@ -118,6 +118,7 @@ instead of ``NaN``.
*pandas 0.22*
.. ipython:: python
+ :okwarning:
grouper = pd.Categorical(["a", "a"], categories=["a", "b"])
pd.Series([1, 2]).groupby(grouper).sum()
@@ -126,6 +127,7 @@ To restore the 0.21 behavior of returning ``NaN`` for unobserved groups,
use ``min_count>=1``.
.. ipython:: python
+ :okwarning:
pd.Series([1, 2]).groupby(grouper).sum(min_count=1)
diff --git a/doc/source/whatsnew/v0.23.0.rst b/doc/source/whatsnew/v0.23.0.rst
index f4caea9d363eb..a763803d6fa3b 100644
--- a/doc/source/whatsnew/v0.23.0.rst
+++ b/doc/source/whatsnew/v0.23.0.rst
@@ -288,6 +288,7 @@ For pivoting operations, this behavior is *already* controlled by the ``dropna``
df
.. ipython:: python
+ :okwarning:
pd.pivot_table(df, values='values', index=['A', 'B'],
dropna=True)
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 8dbc6728dccfe..ce6e2a1395868 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -522,6 +522,7 @@ Deprecations
- Deprecated :meth:`Index.asi8` for :class:`Index` subclasses other than :class:`.DatetimeIndex`, :class:`.TimedeltaIndex`, and :class:`PeriodIndex` (:issue:`37877`)
- The ``inplace`` parameter of :meth:`Categorical.remove_unused_categories` is deprecated and will be removed in a future version (:issue:`37643`)
- The ``null_counts`` parameter of :meth:`DataFrame.info` is deprecated and replaced by ``show_counts``. It will be removed in a future version (:issue:`37999`)
+- Deprecated default keyword argument of ``observed=False`` in :meth:`DataFrame.groupby` and :meth:`DataFrame.pivot_table` (:issue:`17594`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 5f149f10b05d3..53f72abd8d93f 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5677,7 +5677,7 @@ def value_counts(
if subset is None:
subset = self.columns.tolist()
- counts = self.groupby(subset).grouper.size()
+ counts = self.groupby(subset, observed=True).grouper.size()
if sort:
counts = counts.sort_values(ascending=ascending)
@@ -6698,7 +6698,7 @@ def groupby(
sort: bool = True,
group_keys: bool = True,
squeeze: bool = no_default,
- observed: bool = False,
+ observed: Optional[bool] = None,
dropna: bool = True,
) -> DataFrameGroupBy:
from pandas.core.groupby.generic import DataFrameGroupBy
@@ -7029,7 +7029,7 @@ def pivot_table(
margins=False,
dropna=True,
margins_name="All",
- observed=False,
+ observed=None,
) -> DataFrame:
from pandas.core.reshape.pivot import pivot_table
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 4a9e020a0fe46..61cdc6b98d919 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -87,10 +87,15 @@
from pandas.core.dtypes.missing import isna, notna
import pandas as pd
-from pandas.core import arraylike, indexing, missing, nanops
-import pandas.core.algorithms as algos
+from pandas.core import (
+ algorithms as algos,
+ arraylike,
+ common as com,
+ indexing,
+ missing,
+ nanops,
+)
from pandas.core.base import PandasObject, SelectionMixin
-import pandas.core.common as com
from pandas.core.construction import create_series_with_explicit_dtype
from pandas.core.flags import Flags
from pandas.core.indexes import base as ibase
@@ -10545,7 +10550,8 @@ def pct_change(
def _agg_by_level(self, name, axis=0, level=0, skipna=True, **kwargs):
if axis is None:
raise ValueError("Must specify 'axis' when aggregating by level.")
- grouped = self.groupby(level=level, axis=axis, sort=False)
+ # see pr-35967 for discussion about the observed keyword
+ grouped = self.groupby(level=level, axis=axis, sort=False, observed=False)
if hasattr(grouped, name) and skipna:
return getattr(grouped, name)(**kwargs)
axis = self._get_axis_number(axis)
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 798c0742f03e5..98d26ccb34a00 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -526,7 +526,7 @@ def __init__(
sort: bool = True,
group_keys: bool = True,
squeeze: bool = False,
- observed: bool = False,
+ observed: Optional[bool] = None,
mutated: bool = False,
dropna: bool = True,
):
@@ -3016,7 +3016,7 @@ def get_groupby(
sort: bool = True,
group_keys: bool = True,
squeeze: bool = False,
- observed: bool = False,
+ observed: Optional[bool] = None,
mutated: bool = False,
dropna: bool = True,
) -> GroupBy:
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index e8af9da30a298..23b562301aeb1 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -2,6 +2,7 @@
Provide user facing operators for doing the split part of the
split-apply-combine paradigm.
"""
+import textwrap
from typing import Dict, Hashable, List, Optional, Set, Tuple
import warnings
@@ -31,6 +32,18 @@
from pandas.io.formats.printing import pprint_thing
+_observed_msg = textwrap.dedent(
+ """\
+Grouping by a categorical but 'observed' was not specified.
+Using 'observed=False', but in a future version of pandas
+not specifying 'observed' will raise an error. Pass
+'observed=True' or 'observed=False' to silence this warning.
+
+See the `groupby` documentation for more information on the
+observed keyword.
+"""
+)
+
class Grouper:
"""
@@ -432,7 +445,7 @@ def __init__(
name=None,
level=None,
sort: bool = True,
- observed: bool = False,
+ observed: Optional[bool] = None,
in_axis: bool = False,
dropna: bool = True,
):
@@ -495,6 +508,10 @@ def __init__(
# a passed Categorical
elif is_categorical_dtype(self.grouper):
+ if observed is None:
+ warnings.warn(_observed_msg, FutureWarning)
+ observed = False
+
self.grouper, self.all_grouper = recode_for_groupby(
self.grouper, self.sort, observed
)
@@ -631,7 +648,7 @@ def get_grouper(
axis: int = 0,
level=None,
sort: bool = True,
- observed: bool = False,
+ observed: Optional[bool] = None,
mutated: bool = False,
validate: bool = True,
dropna: bool = True,
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 52ffb1567cb2d..c9ffc9a69281b 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -493,7 +493,12 @@ def _format_duplicate_message(self):
duplicates = self[self.duplicated(keep="first")].unique()
assert len(duplicates)
- out = Series(np.arange(len(self))).groupby(self).agg(list)[duplicates]
+ # see pr-35967 about the observed keyword
+ out = (
+ Series(np.arange(len(self)))
+ .groupby(self, observed=False)
+ .agg(list)[duplicates]
+ )
if self.nlevels == 1:
out = out.rename_axis("label")
return out.to_frame(name="positions")
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 2c6cdb846221f..94d8b50cf5597 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -109,13 +109,15 @@ def _groupby_and_merge(by, on, left: "DataFrame", right: "DataFrame", merge_piec
if not isinstance(by, (list, tuple)):
by = [by]
- lby = left.groupby(by, sort=False)
+ # see pr-35967 for discussion about observed=False
+ # this is the previous default behavior if the group is a categorical
+ lby = left.groupby(by, sort=False, observed=False)
rby: Optional[groupby.DataFrameGroupBy] = None
# if we can groupby the rhs
# then we can get vastly better perf
if all(item in right.columns for item in by):
- rby = right.groupby(by, sort=False)
+ rby = right.groupby(by, sort=False, observed=False)
for key, lhs in lby:
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 40496a5b8671b..19a56b1651197 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -46,7 +46,7 @@ def pivot_table(
margins=False,
dropna=True,
margins_name="All",
- observed=False,
+ observed=None,
) -> "DataFrame":
index = _convert_by(index)
columns = _convert_by(columns)
@@ -612,6 +612,8 @@ def crosstab(
margins=margins,
margins_name=margins_name,
dropna=dropna,
+ # the below is only here to silence the FutureWarning
+ observed=False,
**kwargs,
)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index b20cf8eed9a2e..b51e2a42293d0 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1674,7 +1674,7 @@ def groupby(
sort: bool = True,
group_keys: bool = True,
squeeze: bool = no_default,
- observed: bool = False,
+ observed: Optional[bool] = None,
dropna: bool = True,
) -> "SeriesGroupBy":
from pandas.core.groupby.generic import SeriesGroupBy
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index 3aeb3b664b27f..92e52a3d174dd 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -119,6 +119,17 @@
This only applies if any of the groupers are Categoricals.
If True: only show observed values for categorical groupers.
If False: show all values for categorical groupers.
+
+ The current default of ``observed=False`` is deprecated. In
+ the future this will be a required keyword in the presence
+ of a categorical grouper and a failure to specify a value will
+ result in an error.
+
+ Explicitly pass ``observed=True`` to silence the warning and
+ show only the observed values.
+ Explicitly pass ``observed=False`` to silence the warning and
+ show groups for all values, whether observed or not.
+
dropna : bool, default True
If True, and if group keys contain NA values, NA values together
with row/column will be dropped.
diff --git a/pandas/plotting/_matplotlib/boxplot.py b/pandas/plotting/_matplotlib/boxplot.py
index 7122a38db9d0a..82bf1af5da297 100644
--- a/pandas/plotting/_matplotlib/boxplot.py
+++ b/pandas/plotting/_matplotlib/boxplot.py
@@ -195,7 +195,7 @@ def _grouped_plot_by_column(
return_type=None,
**kwargs,
):
- grouped = data.groupby(by)
+ grouped = data.groupby(by, observed=False)
if columns is None:
if not isinstance(by, (list, tuple)):
by = [by]
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 073918eda3deb..cd3757f6a5ecf 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -13,8 +13,7 @@
from pandas.core.dtypes.common import is_integer_dtype
import pandas as pd
-from pandas import DataFrame, Index, MultiIndex, Series, concat
-import pandas._testing as tm
+from pandas import DataFrame, Index, MultiIndex, Series, _testing as tm, concat
from pandas.core.base import SpecificationError
from pandas.core.groupby.grouper import Grouping
@@ -1074,7 +1073,7 @@ def test_groupby_single_agg_cat_cols(grp_col_dict, exp_data):
input_df = input_df.astype({"cat": "category", "cat_ord": "category"})
input_df["cat_ord"] = input_df["cat_ord"].cat.as_ordered()
- result_df = input_df.groupby("cat").agg(grp_col_dict)
+ result_df = input_df.groupby("cat", observed=False).agg(grp_col_dict)
# create expected dataframe
cat_index = pd.CategoricalIndex(
@@ -1108,7 +1107,7 @@ def test_groupby_combined_aggs_cat_cols(grp_col_dict, exp_data):
input_df = input_df.astype({"cat": "category", "cat_ord": "category"})
input_df["cat_ord"] = input_df["cat_ord"].cat.as_ordered()
- result_df = input_df.groupby("cat").agg(grp_col_dict)
+ result_df = input_df.groupby("cat", observed=False).agg(grp_col_dict)
# create expected dataframe
cat_index = pd.CategoricalIndex(
diff --git a/pandas/tests/groupby/aggregate/test_cython.py b/pandas/tests/groupby/aggregate/test_cython.py
index c907391917ca8..6e96605418731 100644
--- a/pandas/tests/groupby/aggregate/test_cython.py
+++ b/pandas/tests/groupby/aggregate/test_cython.py
@@ -1,13 +1,20 @@
"""
test cython .agg behavior
"""
-
import numpy as np
import pytest
import pandas as pd
-from pandas import DataFrame, Index, NaT, Series, Timedelta, Timestamp, bdate_range
-import pandas._testing as tm
+from pandas import (
+ DataFrame,
+ Index,
+ NaT,
+ Series,
+ Timedelta,
+ Timestamp,
+ _testing as tm,
+ bdate_range,
+)
from pandas.core.groupby.groupby import DataError
@@ -175,6 +182,7 @@ def test__cython_agg_general(op, targop):
("max", np.max),
],
)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_cython_agg_empty_buckets(op, targop, observed):
df = DataFrame([11, 12, 13])
grps = range(0, 55, 5)
@@ -189,6 +197,7 @@ def test_cython_agg_empty_buckets(op, targop, observed):
tm.assert_frame_equal(result, expected)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_cython_agg_empty_buckets_nanops(observed):
# GH-18869 can't call nanops on empty groups, so hardcode expected
# for these
diff --git a/pandas/tests/groupby/aggregate/test_other.py b/pandas/tests/groupby/aggregate/test_other.py
index 5d0f6d6262899..5138f5de21a4c 100644
--- a/pandas/tests/groupby/aggregate/test_other.py
+++ b/pandas/tests/groupby/aggregate/test_other.py
@@ -1,7 +1,6 @@
"""
test all other .agg behavior
"""
-
import datetime as dt
from functools import partial
@@ -15,10 +14,10 @@
MultiIndex,
PeriodIndex,
Series,
+ _testing as tm,
date_range,
period_range,
)
-import pandas._testing as tm
from pandas.core.base import SpecificationError
from pandas.io.formats.printing import pprint_thing
@@ -555,6 +554,7 @@ def test_agg_structs_series(structure, expected):
tm.assert_series_equal(result, expected)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_agg_category_nansum(observed):
categories = ["a", "b", "c"]
df = DataFrame(
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 8cf77ca6335f4..a1b3f7fe2e463 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -11,9 +11,9 @@
Index,
MultiIndex,
Series,
+ _testing as tm,
qcut,
)
-import pandas._testing as tm
def cartesian_product_for_groupers(result, args, names, fill_value=np.NaN):
@@ -212,6 +212,7 @@ def f(x):
tm.assert_index_equal((desc_result.stack().index.get_level_values(1)), exp)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_level_get_group(observed):
# GH15155
df = DataFrame(
@@ -276,6 +277,7 @@ def test_apply(ordered):
tm.assert_series_equal(result, expected)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_observed(observed):
# multiple groupers, don't re-expand the output space
# of the grouper
@@ -384,11 +386,13 @@ def test_observed(observed):
tm.assert_frame_equal(result, expected)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_observed_codes_remap(observed):
d = {"C1": [3, 3, 4, 5], "C2": [1, 2, 3, 4], "C3": [10, 100, 200, 34]}
df = DataFrame(d)
values = pd.cut(df["C1"], [1, 2, 3, 6])
values.name = "cat"
+
groups_double_key = df.groupby([values, "C2"], observed=observed)
idx = MultiIndex.from_arrays([values, [1, 2, 3, 4]], names=["cat", "C2"])
@@ -423,12 +427,14 @@ def test_observed_perf():
assert result.index.levels[2].nunique() == df.other_id.nunique()
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_observed_groups(observed):
# gh-20583
# test that we have the appropriate groups
cat = Categorical(["a", "c", "a"], categories=["a", "b", "c"])
df = DataFrame({"cat": cat, "vals": [1, 2, 3]})
+
g = df.groupby("cat", observed=observed)
result = g.groups
@@ -444,6 +450,7 @@ def test_observed_groups(observed):
tm.assert_dict_equal(result, expected)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_observed_groups_with_nan(observed):
# GH 24740
df = DataFrame(
@@ -480,6 +487,7 @@ def test_observed_nth():
tm.assert_series_equal(result, expected)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_dataframe_categorical_with_nan(observed):
# GH 21151
s1 = Categorical([np.nan, "a", np.nan, "a"], categories=["a", "b", "c"])
@@ -503,6 +511,7 @@ def test_dataframe_categorical_with_nan(observed):
@pytest.mark.parametrize("ordered", [True, False])
@pytest.mark.parametrize("observed", [True, False])
@pytest.mark.parametrize("sort", [True, False])
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_dataframe_categorical_ordered_observed_sort(ordered, observed, sort):
# GH 25871: Fix groupby sorting on ordered Categoricals
# GH 25167: Groupby with observed=True doesn't sort
@@ -1062,7 +1071,7 @@ def test_groupby_multiindex_categorical_datetime():
"values": np.arange(9),
}
)
- result = df.groupby(["key1", "key2"]).mean()
+ result = df.groupby(["key1", "key2"], observed=False).mean()
idx = MultiIndex.from_product(
[
@@ -1167,6 +1176,7 @@ def test_seriesgroupby_observed_true(df_cat, operation, kwargs):
@pytest.mark.parametrize("operation", ["agg", "apply"])
@pytest.mark.parametrize("observed", [False, None])
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_seriesgroupby_observed_false_or_none(df_cat, observed, operation):
# GH 24880
index, _ = MultiIndex.from_product(
@@ -1231,6 +1241,7 @@ def test_seriesgroupby_observed_false_or_none(df_cat, observed, operation):
),
],
)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_seriesgroupby_observed_apply_dict(df_cat, observed, index, data):
# GH 24880
expected = Series(data=data, index=index, name="C")
@@ -1242,12 +1253,13 @@ def test_seriesgroupby_observed_apply_dict(df_cat, observed, index, data):
def test_groupby_categorical_series_dataframe_consistent(df_cat):
# GH 20416
- expected = df_cat.groupby(["A", "B"])["C"].mean()
- result = df_cat.groupby(["A", "B"]).mean()["C"]
+ expected = df_cat.groupby(["A", "B"], observed=False)["C"].mean()
+ result = df_cat.groupby(["A", "B"], observed=False).mean()["C"]
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("code", [([1, 0, 0]), ([0, 0, 0])])
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_groupby_categorical_axis_1(code):
# GH 13420
df = DataFrame({"a": [1, 2, 3, 4], "b": [-1, -2, -3, -4], "c": [5, 6, 7, 8]})
@@ -1257,6 +1269,7 @@ def test_groupby_categorical_axis_1(code):
tm.assert_frame_equal(result, expected)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_groupby_cat_preserves_structure(observed, ordered):
# GH 28787
df = DataFrame(
@@ -1285,6 +1298,7 @@ def test_get_nonexistent_category():
)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_series_groupby_on_2_categoricals_unobserved(reduction_func, observed, request):
# GH 17605
if reduction_func == "ngroup":
@@ -1384,6 +1398,7 @@ def test_dataframe_groupby_on_2_categoricals_when_observed_is_true(reduction_fun
@pytest.mark.parametrize("observed", [False, None])
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_dataframe_groupby_on_2_categoricals_when_observed_is_false(
reduction_func, observed, request
):
@@ -1417,6 +1432,7 @@ def test_dataframe_groupby_on_2_categoricals_when_observed_is_false(
assert (res.loc[unobserved_cats] == expected).all().all()
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_series_groupby_categorical_aggregation_getitem():
# GH 8870
d = {"foo": [10, 8, 4, 1], "bar": [10, 20, 30, 40], "baz": ["d", "c", "d", "c"]}
@@ -1472,6 +1488,7 @@ def test_groupy_first_returned_categorical_instead_of_dataframe(func):
tm.assert_series_equal(result, expected)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_read_only_category_no_sort():
# GH33410
cats = np.array([1, 2])
@@ -1480,10 +1497,12 @@ def test_read_only_category_no_sort():
{"a": [1, 3, 5, 7], "b": Categorical([1, 1, 2, 2], categories=Index(cats))}
)
expected = DataFrame(data={"a": [2, 6]}, index=CategoricalIndex([1, 2], name="b"))
+
result = df.groupby("b", sort=False).mean()
tm.assert_frame_equal(result, expected)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_sorted_missing_category_values():
# GH 28597
df = DataFrame(
@@ -1631,6 +1650,7 @@ def test_categorical_transform():
@pytest.mark.parametrize("func", ["first", "last"])
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_series_groupby_first_on_categorical_col_grouped_on_2_categoricals(
func: str, observed: bool
):
@@ -1656,6 +1676,7 @@ def test_series_groupby_first_on_categorical_col_grouped_on_2_categoricals(
@pytest.mark.parametrize("func", ["first", "last"])
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_df_groupby_first_on_categorical_col_grouped_on_2_categoricals(
func: str, observed: bool
):
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index 12e570490487d..cc0c6c61e7e56 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -7,9 +7,17 @@
from pandas.errors import UnsupportedFunctionCall
import pandas as pd
-from pandas import DataFrame, Index, MultiIndex, Series, Timestamp, date_range, isna
-import pandas._testing as tm
-import pandas.core.nanops as nanops
+from pandas import (
+ DataFrame,
+ Index,
+ MultiIndex,
+ Series,
+ Timestamp,
+ _testing as tm,
+ date_range,
+ isna,
+)
+from pandas.core import nanops as nanops
from pandas.util import _test_decorators as td
@@ -410,6 +418,7 @@ def test_cython_median():
tm.assert_frame_equal(rs, xp)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_median_empty_bins(observed):
df = DataFrame(np.random.randint(0, 44, 500))
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 7c179a79513fa..a96789a7c80ce 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -15,10 +15,10 @@
MultiIndex,
Series,
Timestamp,
+ _testing as tm,
date_range,
read_csv,
)
-import pandas._testing as tm
from pandas.core.base import SpecificationError
import pandas.core.common as com
@@ -2012,7 +2012,7 @@ def test_dup_labels_output_shape(groupby_func, idx):
pytest.skip("Not applicable")
df = DataFrame([[1, 1]], columns=idx)
- grp_by = df.groupby([0])
+ grp_by = df.groupby([0], observed=False)
args = []
if groupby_func in {"fillna", "nth"}:
diff --git a/pandas/tests/groupby/test_groupby_subclass.py b/pandas/tests/groupby/test_groupby_subclass.py
index d268d87708552..574a42fb7224e 100644
--- a/pandas/tests/groupby/test_groupby_subclass.py
+++ b/pandas/tests/groupby/test_groupby_subclass.py
@@ -3,8 +3,7 @@
import numpy as np
import pytest
-from pandas import DataFrame, Series
-import pandas._testing as tm
+from pandas import DataFrame, Series, _testing as tm
@pytest.mark.parametrize(
@@ -21,7 +20,7 @@ def test_groupby_preserves_subclass(obj, groupby_func):
if isinstance(obj, Series) and groupby_func in {"corrwith"}:
pytest.skip("Not applicable")
- grouped = obj.groupby(np.arange(0, 10))
+ grouped = obj.groupby(np.arange(0, 10), observed=False)
# Groups should preserve subclass type
assert isinstance(grouped.get_group(0), type(obj))
diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py
index 1d2208592a06d..979b01371247f 100644
--- a/pandas/tests/groupby/test_grouping.py
+++ b/pandas/tests/groupby/test_grouping.py
@@ -1,5 +1,4 @@
""" test where we are determining what we are grouping, or getting groups """
-
import numpy as np
import pytest
@@ -11,9 +10,9 @@
MultiIndex,
Series,
Timestamp,
+ _testing as tm,
date_range,
)
-import pandas._testing as tm
from pandas.core.groupby.grouper import Grouping
# selection
@@ -311,6 +310,7 @@ def test_groupby_levels_and_columns(self):
by_columns.columns = by_columns.columns.astype(np.int64)
tm.assert_frame_equal(by_levels, by_columns)
+ @pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_groupby_categorical_index_and_columns(self, observed):
# GH18432, adapted for GH25871
columns = ["A", "B", "A", "B"]
@@ -702,6 +702,29 @@ def test_groupby_multiindex_level_empty(self):
)
tm.assert_frame_equal(result, expected)
+ def test_default_observed_deprecated(self):
+ # pr-35967
+ df = DataFrame([["A", 1, 1], ["A", 2, 1], ["B", 1, 1]], columns=["x", "y", "z"])
+ df.x = df.x.astype("category")
+ df.y = df.y.astype("category")
+
+ with tm.assert_produces_warning(
+ expected_warning=FutureWarning, check_stacklevel=False
+ ):
+ df.groupby(["x", "y"])
+
+ with tm.assert_produces_warning(None) as any_warnings:
+ df.groupby(["x", "y"], observed=True)
+ df.groupby(["x", "y"], observed=False)
+ assert len(any_warnings) == 0
+
+ cat = pd.Categorical(["A", "B", "C"], categories=["A", "B", "C", "D"])
+ s = Series(cat)
+ with tm.assert_produces_warning(
+ expected_warning=FutureWarning, check_stacklevel=False
+ ):
+ s.groupby(cat)
+
# get_group
# --------------------------------
@@ -755,6 +778,7 @@ def test_get_group(self):
with pytest.raises(ValueError, match=msg):
g.get_group(("foo", "bar", "baz"))
+ @pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_get_group_empty_bins(self, observed):
d = DataFrame([3, 1, 7, 6])
diff --git a/pandas/tests/groupby/test_size.py b/pandas/tests/groupby/test_size.py
index ba27e5a24ba00..cb724d46bc0d1 100644
--- a/pandas/tests/groupby/test_size.py
+++ b/pandas/tests/groupby/test_size.py
@@ -1,8 +1,7 @@
import numpy as np
import pytest
-from pandas import DataFrame, Index, PeriodIndex, Series
-import pandas._testing as tm
+from pandas import DataFrame, Index, PeriodIndex, Series, _testing as tm
@pytest.mark.parametrize("by", ["A", "B", ["A", "B"]])
@@ -50,7 +49,7 @@ def test_size_period_index():
def test_size_on_categorical(as_index):
df = DataFrame([[1, 1], [2, 2]], columns=["A", "B"])
df["A"] = df["A"].astype("category")
- result = df.groupby(["A", "B"], as_index=as_index).size()
+ result = df.groupby(["A", "B"], as_index=as_index, observed=False).size()
expected = DataFrame(
[[1, 1, 1], [1, 2, 0], [2, 1, 0], [2, 2, 1]], columns=["A", "B", "size"]
diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py
index 8acd051fbc643..71e182f34bb0a 100644
--- a/pandas/tests/groupby/transform/test_transform.py
+++ b/pandas/tests/groupby/transform/test_transform.py
@@ -13,10 +13,10 @@
MultiIndex,
Series,
Timestamp,
+ _testing as tm,
concat,
date_range,
)
-import pandas._testing as tm
from pandas.core.groupby.groupby import DataError
@@ -994,7 +994,7 @@ def test_transform_absent_categories(func):
x_cats = range(2)
y = [1]
df = DataFrame({"x": Categorical(x_vals, x_cats), "y": y})
- result = getattr(df.y.groupby(df.x), func)()
+ result = getattr(df.y.groupby(df.x, observed=False), func)()
expected = df.y
tm.assert_series_equal(result, expected)
@@ -1153,6 +1153,7 @@ def test_transform_lambda_indexing():
tm.assert_frame_equal(result, expected)
+@pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_categorical_and_not_categorical_key(observed):
# Checks that groupby-transform, when grouping by both a categorical
# and a non-categorical key, doesn't try to expand the output to include
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index f9b2a02920841..11fef6f271672 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -12,10 +12,10 @@
Index,
MultiIndex,
Series,
+ _testing as tm,
concat,
date_range,
)
-import pandas._testing as tm
from pandas.api.types import CategoricalDtype as CDT
from pandas.core.reshape.pivot import pivot_table
@@ -108,6 +108,7 @@ def test_pivot_table(self, observed):
expected = self.data.groupby(index + [columns])["D"].agg(np.mean).unstack()
tm.assert_frame_equal(table, expected)
+ @pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_pivot_table_categorical_observed_equal(self, observed):
# issue #24923
df = DataFrame(
@@ -193,7 +194,9 @@ def test_pivot_table_categorical(self):
["c", "d", "c", "d"], categories=["c", "d", "y"], ordered=True
)
df = DataFrame({"A": cat1, "B": cat2, "values": [1, 2, 3, 4]})
- result = pd.pivot_table(df, values="values", index=["A", "B"], dropna=True)
+ result = pd.pivot_table(
+ df, values="values", index=["A", "B"], dropna=True, observed=False
+ )
exp_index = MultiIndex.from_arrays([cat1, cat2], names=["A", "B"])
expected = DataFrame({"values": [1, 2, 3, 4]}, index=exp_index)
@@ -212,7 +215,9 @@ def test_pivot_table_dropna_categoricals(self, dropna):
)
df["A"] = df["A"].astype(CDT(categories, ordered=False))
- result = df.pivot_table(index="B", columns="A", values="C", dropna=dropna)
+ result = df.pivot_table(
+ index="B", columns="A", values="C", dropna=dropna, observed=False
+ )
expected_columns = Series(["a", "b", "c"], name="A")
expected_columns = expected_columns.astype(CDT(categories, ordered=False))
expected_index = Series([1, 2, 3], name="B")
@@ -240,7 +245,7 @@ def test_pivot_with_non_observable_dropna(self, dropna):
}
)
- result = df.pivot_table(index="A", values="B", dropna=dropna)
+ result = df.pivot_table(index="A", values="B", dropna=dropna, observed=False)
expected = DataFrame(
{"B": [2, 3]},
index=Index(
@@ -265,7 +270,7 @@ def test_pivot_with_non_observable_dropna(self, dropna):
}
)
- result = df.pivot_table(index="A", values="B", dropna=dropna)
+ result = df.pivot_table(index="A", values="B", dropna=dropna, observed=False)
expected = DataFrame(
{"B": [2, 3, 0]},
index=Index(
@@ -281,7 +286,7 @@ def test_pivot_with_non_observable_dropna(self, dropna):
def test_pivot_with_interval_index(self, interval_values, dropna):
# GH 25814
df = DataFrame({"A": interval_values, "B": 1})
- result = df.pivot_table(index="A", values="B", dropna=dropna)
+ result = df.pivot_table(index="A", values="B", dropna=dropna, observed=False)
expected = DataFrame({"B": 1}, index=Index(interval_values.unique(), name="A"))
tm.assert_frame_equal(result, expected)
@@ -299,7 +304,13 @@ def test_pivot_with_interval_index_margins(self):
)
pivot_tab = pd.pivot_table(
- df, index="C", columns="B", values="A", aggfunc="sum", margins=True
+ df,
+ index="C",
+ columns="B",
+ values="A",
+ aggfunc="sum",
+ margins=True,
+ observed=False,
)
result = pivot_tab["All"]
@@ -1752,6 +1763,7 @@ def test_margins_casted_to_float(self, observed):
)
tm.assert_frame_equal(result, expected)
+ @pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_pivot_with_categorical(self, observed, ordered):
# gh-21370
idx = [np.nan, "low", "high", "low", np.nan]
@@ -1787,6 +1799,7 @@ def test_pivot_with_categorical(self, observed, ordered):
tm.assert_frame_equal(result, expected)
+ @pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_categorical_aggfunc(self, observed):
# GH 9534
df = DataFrame(
@@ -1807,6 +1820,7 @@ def test_categorical_aggfunc(self, observed):
)
tm.assert_frame_equal(result, expected)
+ @pytest.mark.filterwarnings("ignore:Grouping by a categorical:FutureWarning")
def test_categorical_pivot_index_ordering(self, observed):
# GH 8731
df = DataFrame(
@@ -2058,6 +2072,13 @@ def agg(arr):
with pytest.raises(KeyError, match="notpresent"):
foo.pivot_table("notpresent", "X", "Y", aggfunc=agg)
+ def test_pivot_table_observed_deprecated_default(self):
+ # pr-35967
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # make sure we actually have a category to warn on
+ self.data.A = self.data.A.astype("category")
+ self.data.pivot_table(values="D", index=["A", "B"], columns=["C"])
+
class TestPivot:
def test_pivot(self):
| - [x] Relates to #17594, Closes #30552
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Had a relatively small 70k-row data frame that I was trying to do a groupby sum on blow up on me today, and this was the reason. I had something like zip codes and cities as categoricals and expected a SQL-like groupby, but instead got the cartesian product of 'cities' and 'zips'. Sounds like there was some previous desire to explore a new default.
Didn't try to do any wild stuff to keep up with the stacklevel depending on where this was called from. | https://api.github.com/repos/pandas-dev/pandas/pulls/35967 | 2020-08-28T21:52:59Z | 2021-07-11T20:27:45Z | null | 2021-07-11T20:27:45Z |
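The cartesian-product behavior described above can be reproduced with a minimal sketch — the city/zip column names and values here are invented for illustration, not taken from the reporter's data:

```python
import pandas as pd

# Two categorical columns whose category sets are larger than the
# values actually present, mimicking the zip/city case above.
df = pd.DataFrame(
    {
        "city": pd.Categorical(["NYC", "LA"], categories=["NYC", "LA", "SF"]),
        "zip": pd.Categorical(
            ["10001", "90001"], categories=["10001", "90001", "94101"]
        ),
        "sales": [1, 2],
    }
)

# observed=False: the result index is the cartesian product of all
# category combinations (3 cities x 3 zips = 9 rows), most of them empty.
cartesian = df.groupby(["city", "zip"], observed=False)["sales"].sum()

# observed=True: only the combinations that actually occur (2 rows),
# i.e. the SQL-like behavior the comment expected.
sql_like = df.groupby(["city", "zip"], observed=True)["sales"].sum()
```

With many categories per column the `observed=False` result grows multiplicatively, which is what made the 70k-row frame blow up.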
BUG: instantiation using a dict with a period scalar | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index ff9e803b4990a..7f1b0c88c83e1 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -333,7 +333,7 @@ Sparse
ExtensionArray
^^^^^^^^^^^^^^
--
+- Fixed bug where a :class:`DataFrame` column set to a scalar extension type via a dict instantiation was considered an object type rather than the extension type (:issue:`35965`)
-
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index 3812c306b8eb4..0993328aef8de 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -472,7 +472,7 @@ def sanitize_array(
# figure out the dtype from the value (upcast if necessary)
if dtype is None:
- dtype, value = infer_dtype_from_scalar(value)
+ dtype, value = infer_dtype_from_scalar(value, pandas_dtype=True)
else:
# need to possibly convert the value here
value = maybe_cast_to_datetime(value, dtype)
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 7c5aafcbbc7e9..e87e944672eea 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -709,7 +709,6 @@ def infer_dtype_from_scalar(val, pandas_dtype: bool = False) -> Tuple[DtypeObj,
elif pandas_dtype:
if lib.is_period(val):
dtype = PeriodDtype(freq=val.freq)
- val = val.ordinal
elif lib.is_interval(val):
subtype = infer_dtype_from_scalar(val.left, pandas_dtype=True)[0]
dtype = IntervalDtype(subtype=subtype)
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 419ff81a2a478..7aada1e6eda48 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -612,6 +612,8 @@ def _maybe_convert_i8(self, key):
if scalar:
# Timestamp/Timedelta
key_dtype, key_i8 = infer_dtype_from_scalar(key, pandas_dtype=True)
+ if lib.is_period(key):
+ key_i8 = key.ordinal
else:
# DatetimeIndex/TimedeltaIndex
key_dtype, key_i8 = key.dtype, Index(key.asi8)
diff --git a/pandas/tests/dtypes/cast/test_infer_dtype.py b/pandas/tests/dtypes/cast/test_infer_dtype.py
index 70d38aad951cc..157adacbdfdf7 100644
--- a/pandas/tests/dtypes/cast/test_infer_dtype.py
+++ b/pandas/tests/dtypes/cast/test_infer_dtype.py
@@ -84,13 +84,11 @@ def test_infer_dtype_from_period(freq, pandas_dtype):
if pandas_dtype:
exp_dtype = f"period[{freq}]"
- exp_val = p.ordinal
else:
exp_dtype = np.object_
- exp_val = p
assert dtype == exp_dtype
- assert val == exp_val
+ assert val == p
@pytest.mark.parametrize(
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 0d1004809f7f1..eb334e811c5a4 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -717,6 +717,24 @@ def test_constructor_period_dict(self):
assert df["a"].dtype == a.dtype
assert df["b"].dtype == b.dtype
+ @pytest.mark.parametrize(
+ "data,dtype",
+ [
+ (pd.Period("2012-01", freq="M"), "period[M]"),
+ (pd.Period("2012-02-01", freq="D"), "period[D]"),
+ (Interval(left=0, right=5), IntervalDtype("int64")),
+ (Interval(left=0.1, right=0.5), IntervalDtype("float64")),
+ ],
+ )
+ def test_constructor_period_dict_scalar(self, data, dtype):
+ # scalar periods
+ df = DataFrame({"a": data}, index=[0])
+ assert df["a"].dtype == dtype
+
+ expected = DataFrame(index=[0], columns=["a"], data=data)
+
+ tm.assert_frame_equal(df, expected)
+
@pytest.mark.parametrize(
"data,dtype",
[
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index ce078059479b4..0fb8c5955a2e7 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -8,16 +8,23 @@
from pandas._libs import iNaT, lib
from pandas.core.dtypes.common import is_categorical_dtype, is_datetime64tz_dtype
-from pandas.core.dtypes.dtypes import CategoricalDtype
+from pandas.core.dtypes.dtypes import (
+ CategoricalDtype,
+ DatetimeTZDtype,
+ IntervalDtype,
+ PeriodDtype,
+)
import pandas as pd
from pandas import (
Categorical,
DataFrame,
Index,
+ Interval,
IntervalIndex,
MultiIndex,
NaT,
+ Period,
Series,
Timestamp,
date_range,
@@ -1075,6 +1082,26 @@ def test_constructor_dict_order(self):
expected = Series([1, 0, 2], index=list("bac"))
tm.assert_series_equal(result, expected)
+ @pytest.mark.parametrize(
+ "data,dtype",
+ [
+ (Period("2020-01"), PeriodDtype("M")),
+ (Interval(left=0, right=5), IntervalDtype("int64")),
+ (
+ Timestamp("2011-01-01", tz="US/Eastern"),
+ DatetimeTZDtype(tz="US/Eastern"),
+ ),
+ ],
+ )
+ def test_constructor_dict_extension(self, data, dtype):
+ d = {"a": data}
+ result = Series(d, index=["a"])
+ expected = Series(data, index=["a"], dtype=dtype)
+
+ assert result.dtype == dtype
+
+ tm.assert_series_equal(result, expected)
+
@pytest.mark.parametrize("value", [2, np.nan, None, float("nan")])
def test_constructor_dict_nan_key(self, value):
# GH 18480
| - [x] closes #35965
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Fixes the bug discussed in [issue 35965](https://github.com/pandas-dev/pandas/issues/35965) where `pd.DataFrame({'a': pd.Period('2020-01')})` created `a` as an object column instead of a `period[M]` column.
Changing the functionality of `infer_dtype_from_scalar` isn't necessarily required here, but the fact that `infer_dtype_from_scalar` would return the `period.ordinal` value seems inconsistent with the behavior for other dtypes in this function. Additionally, that functionality was only used in a single place within the code (`interval.py`), which I fixed accordingly. | https://api.github.com/repos/pandas-dev/pandas/pulls/35966 | 2020-08-28T21:15:57Z | 2020-09-11T13:03:04Z | 2020-09-11T13:03:03Z | 2020-10-10T15:07:02Z |
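The fixed behavior can be checked directly — a minimal sketch of the dict-of-scalar construction this PR addresses (on pandas >= 1.2 the columns get the proper extension dtypes rather than object):

```python
import pandas as pd

# A scalar Period passed via a dict is broadcast along the index and
# should produce a period-dtype column, not object.
df = pd.DataFrame({"a": pd.Period("2020-01", freq="M")}, index=[0])
period_dtype = str(df["a"].dtype)  # "period[M]"

# The same applies to a scalar Interval.
df2 = pd.DataFrame({"b": pd.Interval(left=0, right=5)}, index=[0])
interval_dtype = str(df2["b"].dtype)  # an IntervalDtype, e.g. "interval[int64]"
```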
BUG/CLN: Decouple Series/DataFrame.transform | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index bce6a735b7b07..8864469eaf858 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -342,6 +342,7 @@ Other
^^^^^
- Bug in :meth:`DataFrame.replace` and :meth:`Series.replace` incorrectly raising ``AssertionError`` instead of ``ValueError`` when invalid parameter combinations are passed (:issue:`36045`)
- Bug in :meth:`DataFrame.replace` and :meth:`Series.replace` with numeric values and string ``to_replace`` (:issue:`34789`)
+- Bug in :meth:`Series.transform` would give incorrect results or raise when the argument ``func`` was a dictionary (:issue:`35811`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/aggregation.py b/pandas/core/aggregation.py
index 7ca68d8289bd5..8b74fe01d0dc0 100644
--- a/pandas/core/aggregation.py
+++ b/pandas/core/aggregation.py
@@ -18,9 +18,10 @@
Union,
)
-from pandas._typing import AggFuncType, FrameOrSeries, Label
+from pandas._typing import AggFuncType, Axis, FrameOrSeries, Label
from pandas.core.dtypes.common import is_dict_like, is_list_like
+from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries
from pandas.core.base import SpecificationError
import pandas.core.common as com
@@ -384,3 +385,98 @@ def validate_func_kwargs(
if not columns:
raise TypeError(no_arg_message)
return columns, func
+
+
+def transform(
+ obj: FrameOrSeries, func: AggFuncType, axis: Axis, *args, **kwargs,
+) -> FrameOrSeries:
+ """
+ Transform a DataFrame or Series
+
+ Parameters
+ ----------
+ obj : DataFrame or Series
+ Object to compute the transform on.
+ func : string, function, list, or dictionary
+ Function(s) to compute the transform with.
+ axis : {0 or 'index', 1 or 'columns'}
+ Axis along which the function is applied:
+
+ * 0 or 'index': apply function to each column.
+ * 1 or 'columns': apply function to each row.
+
+ Returns
+ -------
+ DataFrame or Series
+ Result of applying ``func`` along the given axis of the
+ Series or DataFrame.
+
+ Raises
+ ------
+ ValueError
+ If the transform function fails or does not transform.
+ """
+ from pandas.core.reshape.concat import concat
+
+ is_series = obj.ndim == 1
+
+ if obj._get_axis_number(axis) == 1:
+ assert not is_series
+ return transform(obj.T, func, 0, *args, **kwargs).T
+
+ if isinstance(func, list):
+ if is_series:
+ func = {com.get_callable_name(v) or v: v for v in func}
+ else:
+ func = {col: func for col in obj}
+
+ if isinstance(func, dict):
+ if not is_series:
+ cols = sorted(set(func.keys()) - set(obj.columns))
+ if len(cols) > 0:
+ raise SpecificationError(f"Column(s) {cols} do not exist")
+
+ if any(isinstance(v, dict) for v in func.values()):
+ # GH 15931 - deprecation of renaming keys
+ raise SpecificationError("nested renamer is not supported")
+
+ results = {}
+ for name, how in func.items():
+ colg = obj._gotitem(name, ndim=1)
+ try:
+ results[name] = transform(colg, how, 0, *args, **kwargs)
+ except Exception as e:
+ if str(e) == "Function did not transform":
+ raise e
+
+ # combine results
+ if len(results) == 0:
+ raise ValueError("Transform function failed")
+ return concat(results, axis=1)
+
+ # func is either str or callable
+ try:
+ if isinstance(func, str):
+ result = obj._try_aggregate_string_function(func, *args, **kwargs)
+ else:
+ f = obj._get_cython_func(func)
+ if f and not args and not kwargs:
+ result = getattr(obj, f)()
+ else:
+ try:
+ result = obj.apply(func, args=args, **kwargs)
+ except Exception:
+ result = func(obj, *args, **kwargs)
+ except Exception:
+ raise ValueError("Transform function failed")
+
+ # Functions that transform may return empty Series/DataFrame
+ # when the dtype is not appropriate
+ if isinstance(result, (ABCSeries, ABCDataFrame)) and result.empty:
+ raise ValueError("Transform function failed")
+ if not isinstance(result, (ABCSeries, ABCDataFrame)) or not result.index.equals(
+ obj.index
+ ):
+ raise ValueError("Function did not transform")
+
+ return result
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 1926803d8f04b..a688302b99724 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -4,7 +4,7 @@
import builtins
import textwrap
-from typing import Any, Dict, FrozenSet, List, Optional, Union
+from typing import Any, Callable, Dict, FrozenSet, List, Optional, Union
import numpy as np
@@ -560,7 +560,7 @@ def _aggregate_multiple_funcs(self, arg, _axis):
) from err
return result
- def _get_cython_func(self, arg: str) -> Optional[str]:
+ def _get_cython_func(self, arg: Callable) -> Optional[str]:
"""
if we define an internal function for this argument, return it
"""
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b03593ad8afe1..1e5360f39a75e 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -45,6 +45,7 @@
from pandas._libs import algos as libalgos, lib, properties
from pandas._libs.lib import no_default
from pandas._typing import (
+ AggFuncType,
ArrayLike,
Axes,
Axis,
@@ -116,7 +117,7 @@
from pandas.core import algorithms, common as com, nanops, ops
from pandas.core.accessor import CachedAccessor
-from pandas.core.aggregation import reconstruct_func, relabel_result
+from pandas.core.aggregation import reconstruct_func, relabel_result, transform
from pandas.core.arrays import Categorical, ExtensionArray
from pandas.core.arrays.datetimelike import DatetimeLikeArrayMixin as DatetimeLikeArray
from pandas.core.arrays.sparse import SparseFrameAccessor
@@ -7461,15 +7462,16 @@ def _aggregate(self, arg, axis=0, *args, **kwargs):
agg = aggregate
@doc(
- NDFrame.transform,
+ _shared_docs["transform"],
klass=_shared_doc_kwargs["klass"],
axis=_shared_doc_kwargs["axis"],
)
- def transform(self, func, axis=0, *args, **kwargs) -> DataFrame:
- axis = self._get_axis_number(axis)
- if axis == 1:
- return self.T.transform(func, *args, **kwargs).T
- return super().transform(func, *args, **kwargs)
+ def transform(
+ self, func: AggFuncType, axis: Axis = 0, *args, **kwargs
+ ) -> DataFrame:
+ result = transform(self, func, axis, *args, **kwargs)
+ assert isinstance(result, DataFrame)
+ return result
def apply(self, func, axis=0, raw=False, result_type=None, args=(), **kwds):
"""
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index fffd2e068ebcf..9ed9db801d0a8 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -10648,80 +10648,6 @@ def ewm(
times=times,
)
- @doc(klass=_shared_doc_kwargs["klass"], axis="")
- def transform(self, func, *args, **kwargs):
- """
- Call ``func`` on self producing a {klass} with transformed values.
-
- Produced {klass} will have same axis length as self.
-
- Parameters
- ----------
- func : function, str, list or dict
- Function to use for transforming the data. If a function, must either
- work when passed a {klass} or when passed to {klass}.apply.
-
- Accepted combinations are:
-
- - function
- - string function name
- - list of functions and/or function names, e.g. ``[np.exp, 'sqrt']``
- - dict of axis labels -> functions, function names or list of such.
- {axis}
- *args
- Positional arguments to pass to `func`.
- **kwargs
- Keyword arguments to pass to `func`.
-
- Returns
- -------
- {klass}
- A {klass} that must have the same length as self.
-
- Raises
- ------
- ValueError : If the returned {klass} has a different length than self.
-
- See Also
- --------
- {klass}.agg : Only perform aggregating type operations.
- {klass}.apply : Invoke function on a {klass}.
-
- Examples
- --------
- >>> df = pd.DataFrame({{'A': range(3), 'B': range(1, 4)}})
- >>> df
- A B
- 0 0 1
- 1 1 2
- 2 2 3
- >>> df.transform(lambda x: x + 1)
- A B
- 0 1 2
- 1 2 3
- 2 3 4
-
- Even though the resulting {klass} must have the same length as the
- input {klass}, it is possible to provide several input functions:
-
- >>> s = pd.Series(range(3))
- >>> s
- 0 0
- 1 1
- 2 2
- dtype: int64
- >>> s.transform([np.sqrt, np.exp])
- sqrt exp
- 0 0.000000 1.000000
- 1 1.000000 2.718282
- 2 1.414214 7.389056
- """
- result = self.agg(func, *args, **kwargs)
- if is_scalar(result) or len(result) != len(self):
- raise ValueError("transforms cannot produce aggregated results")
-
- return result
-
# ----------------------------------------------------------------------
# Misc methods
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 6cbd93135a2ca..632b93cdcf24b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -25,6 +25,7 @@
from pandas._libs import lib, properties, reshape, tslibs
from pandas._libs.lib import no_default
from pandas._typing import (
+ AggFuncType,
ArrayLike,
Axis,
DtypeObj,
@@ -89,6 +90,7 @@
from pandas.core.indexes.timedeltas import TimedeltaIndex
from pandas.core.indexing import check_bool_indexer
from pandas.core.internals import SingleBlockManager
+from pandas.core.shared_docs import _shared_docs
from pandas.core.sorting import ensure_key_mapped
from pandas.core.strings import StringMethods
from pandas.core.tools.datetimes import to_datetime
@@ -4081,14 +4083,16 @@ def aggregate(self, func=None, axis=0, *args, **kwargs):
agg = aggregate
@doc(
- NDFrame.transform,
+ _shared_docs["transform"],
klass=_shared_doc_kwargs["klass"],
axis=_shared_doc_kwargs["axis"],
)
- def transform(self, func, axis=0, *args, **kwargs):
- # Validate the axis parameter
- self._get_axis_number(axis)
- return super().transform(func, *args, **kwargs)
+ def transform(
+ self, func: AggFuncType, axis: Axis = 0, *args, **kwargs
+ ) -> FrameOrSeriesUnion:
+ from pandas.core.aggregation import transform
+
+ return transform(self, func, axis, *args, **kwargs)
def apply(self, func, convert_dtype=True, args=(), **kwds):
"""
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index 0aaccb47efc44..244ee3aa298db 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -257,3 +257,72 @@
1 b B E 3
2 c B E 5
"""
+
+_shared_docs[
+ "transform"
+] = """\
+Call ``func`` on self producing a {klass} with transformed values.
+
+Produced {klass} will have same axis length as self.
+
+Parameters
+----------
+func : function, str, list or dict
+ Function to use for transforming the data. If a function, must either
+ work when passed a {klass} or when passed to {klass}.apply.
+
+ Accepted combinations are:
+
+ - function
+ - string function name
+ - list of functions and/or function names, e.g. ``[np.exp, 'sqrt']``
+ - dict of axis labels -> functions, function names or list of such.
+{axis}
+*args
+ Positional arguments to pass to `func`.
+**kwargs
+ Keyword arguments to pass to `func`.
+
+Returns
+-------
+{klass}
+ A {klass} that must have the same length as self.
+
+Raises
+------
+ValueError : If the returned {klass} has a different length than self.
+
+See Also
+--------
+{klass}.agg : Only perform aggregating type operations.
+{klass}.apply : Invoke function on a {klass}.
+
+Examples
+--------
+>>> df = pd.DataFrame({{'A': range(3), 'B': range(1, 4)}})
+>>> df
+ A B
+0 0 1
+1 1 2
+2 2 3
+>>> df.transform(lambda x: x + 1)
+ A B
+0 1 2
+1 2 3
+2 3 4
+
+Even though the resulting {klass} must have the same length as the
+input {klass}, it is possible to provide several input functions:
+
+>>> s = pd.Series(range(3))
+>>> s
+0 0
+1 1
+2 2
+dtype: int64
+>>> s.transform([np.sqrt, np.exp])
+ sqrt exp
+0 0.000000 1.000000
+1 1.000000 2.718282
+2 1.414214 7.389056
+"""
diff --git a/pandas/tests/frame/apply/test_frame_transform.py b/pandas/tests/frame/apply/test_frame_transform.py
index 3a345215482ed..346e60954fc13 100644
--- a/pandas/tests/frame/apply/test_frame_transform.py
+++ b/pandas/tests/frame/apply/test_frame_transform.py
@@ -1,72 +1,203 @@
import operator
+import re
import numpy as np
import pytest
-import pandas as pd
+from pandas import DataFrame, MultiIndex
import pandas._testing as tm
+from pandas.core.base import SpecificationError
+from pandas.core.groupby.base import transformation_kernels
from pandas.tests.frame.common import zip_frames
-def test_agg_transform(axis, float_frame):
- other_axis = 1 if axis in {0, "index"} else 0
+def test_transform_ufunc(axis, float_frame):
+ # GH 35964
+ with np.errstate(all="ignore"):
+ f_sqrt = np.sqrt(float_frame)
+ result = float_frame.transform(np.sqrt, axis=axis)
+ expected = f_sqrt
+ tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("op", transformation_kernels)
+def test_transform_groupby_kernel(axis, float_frame, op):
+ # GH 35964
+ if op == "cumcount":
+ pytest.xfail("DataFrame.cumcount does not exist")
+ if op == "tshift":
+ pytest.xfail("Only works on time index and is deprecated")
+ if axis == 1 or axis == "columns":
+ pytest.xfail("GH 36308: groupby.transform with axis=1 is broken")
+
+ args = [0.0] if op == "fillna" else []
+ if axis == 0 or axis == "index":
+ ones = np.ones(float_frame.shape[0])
+ else:
+ ones = np.ones(float_frame.shape[1])
+ expected = float_frame.groupby(ones, axis=axis).transform(op, *args)
+ result = float_frame.transform(op, axis, *args)
+ tm.assert_frame_equal(result, expected)
+
+@pytest.mark.parametrize(
+ "ops, names", [([np.sqrt], ["sqrt"]), ([np.abs, np.sqrt], ["absolute", "sqrt"])]
+)
+def test_transform_list(axis, float_frame, ops, names):
+ # GH 35964
+ other_axis = 1 if axis in {0, "index"} else 0
with np.errstate(all="ignore"):
+ expected = zip_frames([op(float_frame) for op in ops], axis=other_axis)
+ if axis in {0, "index"}:
+ expected.columns = MultiIndex.from_product([float_frame.columns, names])
+ else:
+ expected.index = MultiIndex.from_product([float_frame.index, names])
+ result = float_frame.transform(ops, axis=axis)
+ tm.assert_frame_equal(result, expected)
- f_abs = np.abs(float_frame)
- f_sqrt = np.sqrt(float_frame)
- # ufunc
- result = float_frame.transform(np.sqrt, axis=axis)
- expected = f_sqrt.copy()
- tm.assert_frame_equal(result, expected)
-
- result = float_frame.transform(np.sqrt, axis=axis)
- tm.assert_frame_equal(result, expected)
-
- # list-like
- expected = f_sqrt.copy()
- if axis in {0, "index"}:
- expected.columns = pd.MultiIndex.from_product(
- [float_frame.columns, ["sqrt"]]
- )
- else:
- expected.index = pd.MultiIndex.from_product([float_frame.index, ["sqrt"]])
- result = float_frame.transform([np.sqrt], axis=axis)
- tm.assert_frame_equal(result, expected)
-
- # multiple items in list
- # these are in the order as if we are applying both
- # functions per series and then concatting
- expected = zip_frames([f_abs, f_sqrt], axis=other_axis)
- if axis in {0, "index"}:
- expected.columns = pd.MultiIndex.from_product(
- [float_frame.columns, ["absolute", "sqrt"]]
- )
- else:
- expected.index = pd.MultiIndex.from_product(
- [float_frame.index, ["absolute", "sqrt"]]
- )
- result = float_frame.transform([np.abs, "sqrt"], axis=axis)
- tm.assert_frame_equal(result, expected)
+def test_transform_dict(axis, float_frame):
+ # GH 35964
+ if axis == 0 or axis == "index":
+ e = float_frame.columns[0]
+ expected = float_frame[[e]].transform(np.abs)
+ else:
+ e = float_frame.index[0]
+ expected = float_frame.iloc[[0]].transform(np.abs)
+ result = float_frame.transform({e: np.abs}, axis=axis)
+ tm.assert_frame_equal(result, expected)
-def test_transform_and_agg_err(axis, float_frame):
- # cannot both transform and agg
- msg = "transforms cannot produce aggregated results"
- with pytest.raises(ValueError, match=msg):
- float_frame.transform(["max", "min"], axis=axis)
+@pytest.mark.parametrize("use_apply", [True, False])
+def test_transform_udf(axis, float_frame, use_apply):
+ # GH 35964
+ # transform uses UDF either via apply or passing the entire DataFrame
+ def func(x):
+ # transform is using apply iff x is not a DataFrame
+ if use_apply == isinstance(x, DataFrame):
+ # Force transform to fallback
+ raise ValueError
+ return x + 1
- msg = "cannot combine transform and aggregation operations"
- with pytest.raises(ValueError, match=msg):
- with np.errstate(all="ignore"):
- float_frame.transform(["max", "sqrt"], axis=axis)
+ result = float_frame.transform(func, axis=axis)
+ expected = float_frame + 1
+ tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("method", ["abs", "shift", "pct_change", "cumsum", "rank"])
def test_transform_method_name(method):
# GH 19760
- df = pd.DataFrame({"A": [-1, 2]})
+ df = DataFrame({"A": [-1, 2]})
result = df.transform(method)
expected = operator.methodcaller(method)(df)
tm.assert_frame_equal(result, expected)
+
+
+def test_transform_and_agg_err(axis, float_frame):
+ # GH 35964
+ # cannot both transform and agg
+ msg = "Function did not transform"
+ with pytest.raises(ValueError, match=msg):
+ float_frame.transform(["max", "min"], axis=axis)
+
+ msg = "Function did not transform"
+ with pytest.raises(ValueError, match=msg):
+ float_frame.transform(["max", "sqrt"], axis=axis)
+
+
+def test_agg_dict_nested_renaming_depr():
+ df = DataFrame({"A": range(5), "B": 5})
+
+ # nested renaming
+ msg = r"nested renamer is not supported"
+ with pytest.raises(SpecificationError, match=msg):
+ # mypy identifies the argument as an invalid type
+ df.transform({"A": {"foo": "min"}, "B": {"bar": "max"}})
+
+
+def test_transform_reducer_raises(all_reductions):
+ # GH 35964
+ op = all_reductions
+ df = DataFrame({"A": [1, 2, 3]})
+ msg = "Function did not transform"
+ with pytest.raises(ValueError, match=msg):
+ df.transform(op)
+ with pytest.raises(ValueError, match=msg):
+ df.transform([op])
+ with pytest.raises(ValueError, match=msg):
+ df.transform({"A": op})
+ with pytest.raises(ValueError, match=msg):
+ df.transform({"A": [op]})
+
+
+# mypy doesn't allow adding lists of different types
+# https://github.com/python/mypy/issues/5492
+@pytest.mark.parametrize("op", [*transformation_kernels, lambda x: x + 1])
+def test_transform_bad_dtype(op):
+ # GH 35964
+ df = DataFrame({"A": 3 * [object]}) # DataFrame that will fail on most transforms
+ if op in ("backfill", "shift", "pad", "bfill", "ffill"):
+ pytest.xfail("Transform function works on any datatype")
+ msg = "Transform function failed"
+ with pytest.raises(ValueError, match=msg):
+ df.transform(op)
+ with pytest.raises(ValueError, match=msg):
+ df.transform([op])
+ with pytest.raises(ValueError, match=msg):
+ df.transform({"A": op})
+ with pytest.raises(ValueError, match=msg):
+ df.transform({"A": [op]})
+
+
+@pytest.mark.parametrize("op", transformation_kernels)
+def test_transform_partial_failure(op):
+ # GH 35964
+ wont_fail = ["ffill", "bfill", "fillna", "pad", "backfill", "shift"]
+ if op in wont_fail:
+ pytest.xfail("Transform kernel is successful on all dtypes")
+ if op == "cumcount":
+ pytest.xfail("transform('cumcount') not implemented")
+ if op == "tshift":
+ pytest.xfail("Only works on time index; deprecated")
+
+ # Using object makes most transform kernels fail
+ df = DataFrame({"A": 3 * [object], "B": [1, 2, 3]})
+
+ expected = df[["B"]].transform([op])
+ result = df.transform([op])
+ tm.assert_equal(result, expected)
+
+ expected = df[["B"]].transform({"B": op})
+ result = df.transform({"B": op})
+ tm.assert_equal(result, expected)
+
+ expected = df[["B"]].transform({"B": [op]})
+ result = df.transform({"B": [op]})
+ tm.assert_equal(result, expected)
+
+
+@pytest.mark.parametrize("use_apply", [True, False])
+def test_transform_passes_args(use_apply):
+ # GH 35964
+ # transform uses UDF either via apply or passing the entire DataFrame
+ expected_args = [1, 2]
+ expected_kwargs = {"c": 3}
+
+ def f(x, a, b, c):
+ # transform is using apply iff x is not a DataFrame
+ if use_apply == isinstance(x, DataFrame):
+ # Force transform to fallback
+ raise ValueError
+ assert [a, b] == expected_args
+ assert c == expected_kwargs["c"]
+ return x
+
+ DataFrame([1]).transform(f, 0, *expected_args, **expected_kwargs)
+
+
+def test_transform_missing_columns(axis):
+ # GH 35964
+ df = DataFrame({"A": [1, 2], "B": [3, 4]})
+ match = re.escape("Column(s) ['C'] do not exist")
+ with pytest.raises(SpecificationError, match=match):
+ df.transform({"C": "cumsum"})
diff --git a/pandas/tests/series/apply/test_series_apply.py b/pandas/tests/series/apply/test_series_apply.py
index b948317f32062..827f466e23106 100644
--- a/pandas/tests/series/apply/test_series_apply.py
+++ b/pandas/tests/series/apply/test_series_apply.py
@@ -209,8 +209,8 @@ def test_transform(self, string_series):
f_abs = np.abs(string_series)
# ufunc
- expected = f_sqrt.copy()
result = string_series.apply(np.sqrt)
+ expected = f_sqrt.copy()
tm.assert_series_equal(result, expected)
# list-like
@@ -219,6 +219,9 @@ def test_transform(self, string_series):
expected.columns = ["sqrt"]
tm.assert_frame_equal(result, expected)
+ result = string_series.apply(["sqrt"])
+ tm.assert_frame_equal(result, expected)
+
# multiple items in list
# these are in the order as if we are applying both functions per
# series and then concatting
diff --git a/pandas/tests/series/apply/test_series_transform.py b/pandas/tests/series/apply/test_series_transform.py
index 8bc3d2dc4d0db..0842674da2a7d 100644
--- a/pandas/tests/series/apply/test_series_transform.py
+++ b/pandas/tests/series/apply/test_series_transform.py
@@ -1,50 +1,90 @@
import numpy as np
import pytest
-import pandas as pd
+from pandas import DataFrame, Series, concat
import pandas._testing as tm
+from pandas.core.base import SpecificationError
+from pandas.core.groupby.base import transformation_kernels
-def test_transform(string_series):
- # transforming functions
-
+def test_transform_ufunc(string_series):
+ # GH 35964
with np.errstate(all="ignore"):
f_sqrt = np.sqrt(string_series)
- f_abs = np.abs(string_series)
- # ufunc
- result = string_series.transform(np.sqrt)
- expected = f_sqrt.copy()
- tm.assert_series_equal(result, expected)
+ # ufunc
+ result = string_series.transform(np.sqrt)
+ expected = f_sqrt.copy()
+ tm.assert_series_equal(result, expected)
- # list-like
- result = string_series.transform([np.sqrt])
- expected = f_sqrt.to_frame().copy()
- expected.columns = ["sqrt"]
- tm.assert_frame_equal(result, expected)
- result = string_series.transform([np.sqrt])
- tm.assert_frame_equal(result, expected)
+@pytest.mark.parametrize("op", transformation_kernels)
+def test_transform_groupby_kernel(string_series, op):
+ # GH 35964
+ if op == "cumcount":
+ pytest.xfail("Series.cumcount does not exist")
+ if op == "tshift":
+ pytest.xfail("Only works on time index and is deprecated")
+
+ args = [0.0] if op == "fillna" else []
+ ones = np.ones(string_series.shape[0])
+ expected = string_series.groupby(ones).transform(op, *args)
+ result = string_series.transform(op, 0, *args)
+ tm.assert_series_equal(result, expected)
- result = string_series.transform(["sqrt"])
- tm.assert_frame_equal(result, expected)
- # multiple items in list
- # these are in the order as if we are applying both functions per
- # series and then concatting
- expected = pd.concat([f_sqrt, f_abs], axis=1)
- result = string_series.transform(["sqrt", "abs"])
- expected.columns = ["sqrt", "abs"]
+@pytest.mark.parametrize(
+ "ops, names", [([np.sqrt], ["sqrt"]), ([np.abs, np.sqrt], ["absolute", "sqrt"])]
+)
+def test_transform_list(string_series, ops, names):
+ # GH 35964
+ with np.errstate(all="ignore"):
+ expected = concat([op(string_series) for op in ops], axis=1)
+ expected.columns = names
+ result = string_series.transform(ops)
tm.assert_frame_equal(result, expected)
-def test_transform_and_agg_error(string_series):
+def test_transform_dict(string_series):
+ # GH 35964
+ with np.errstate(all="ignore"):
+ expected = concat([np.sqrt(string_series), np.abs(string_series)], axis=1)
+ expected.columns = ["foo", "bar"]
+ result = string_series.transform({"foo": np.sqrt, "bar": np.abs})
+ tm.assert_frame_equal(result, expected)
+
+
+def test_transform_udf(axis, string_series):
+ # GH 35964
+ # via apply
+ def func(x):
+ if isinstance(x, Series):
+ raise ValueError
+ return x + 1
+
+ result = string_series.transform(func)
+ expected = string_series + 1
+ tm.assert_series_equal(result, expected)
+
+ # via map Series -> Series
+ def func(x):
+ if not isinstance(x, Series):
+ raise ValueError
+ return x + 1
+
+ result = string_series.transform(func)
+ expected = string_series + 1
+ tm.assert_series_equal(result, expected)
+
+
+def test_transform_wont_agg(string_series):
+ # GH 35964
# we are trying to transform with an aggregator
- msg = "transforms cannot produce aggregated results"
+ msg = "Function did not transform"
with pytest.raises(ValueError, match=msg):
string_series.transform(["min", "max"])
- msg = "cannot combine transform and aggregation operations"
+ msg = "Function did not transform"
with pytest.raises(ValueError, match=msg):
with np.errstate(all="ignore"):
string_series.transform(["sqrt", "max"])
@@ -52,8 +92,74 @@ def test_transform_and_agg_error(string_series):
def test_transform_none_to_type():
# GH34377
- df = pd.DataFrame({"a": [None]})
-
- msg = "DataFrame constructor called with incompatible data and dtype"
- with pytest.raises(TypeError, match=msg):
+ df = DataFrame({"a": [None]})
+ msg = "Transform function failed"
+ with pytest.raises(ValueError, match=msg):
df.transform({"a": int})
+
+
+def test_transform_reducer_raises(all_reductions):
+ # GH 35964
+ op = all_reductions
+ s = Series([1, 2, 3])
+ msg = "Function did not transform"
+ with pytest.raises(ValueError, match=msg):
+ s.transform(op)
+ with pytest.raises(ValueError, match=msg):
+ s.transform([op])
+ with pytest.raises(ValueError, match=msg):
+ s.transform({"A": op})
+ with pytest.raises(ValueError, match=msg):
+ s.transform({"A": [op]})
+
+
+# mypy doesn't allow adding lists of different types
+# https://github.com/python/mypy/issues/5492
+@pytest.mark.parametrize("op", [*transformation_kernels, lambda x: x + 1])
+def test_transform_bad_dtype(op):
+ # GH 35964
+ s = Series(3 * [object]) # Series that will fail on most transforms
+ if op in ("backfill", "shift", "pad", "bfill", "ffill"):
+ pytest.xfail("Transform function works on any datatype")
+ msg = "Transform function failed"
+ with pytest.raises(ValueError, match=msg):
+ s.transform(op)
+ with pytest.raises(ValueError, match=msg):
+ s.transform([op])
+ with pytest.raises(ValueError, match=msg):
+ s.transform({"A": op})
+ with pytest.raises(ValueError, match=msg):
+ s.transform({"A": [op]})
+
+
+@pytest.mark.parametrize("use_apply", [True, False])
+def test_transform_passes_args(use_apply):
+ # GH 35964
+ # transform uses UDF either via apply or passing the entire Series
+ expected_args = [1, 2]
+ expected_kwargs = {"c": 3}
+
+ def f(x, a, b, c):
+ # transform is using apply iff x is not a Series
+ if use_apply == isinstance(x, Series):
+ # Force transform to fallback
+ raise ValueError
+ assert [a, b] == expected_args
+ assert c == expected_kwargs["c"]
+ return x
+
+ Series([1]).transform(f, 0, *expected_args, **expected_kwargs)
+
+
+def test_transform_axis_1_raises():
+ # GH 35964
+ msg = "No axis named 1 for object type Series"
+ with pytest.raises(ValueError, match=msg):
+ Series([1]).transform("sum", axis=1)
+
+
+def test_transform_nested_renamer():
+ # GH 35964
+ match = "nested renamer is not supported"
+ with pytest.raises(SpecificationError, match=match):
+ Series([1]).transform({"A": {"B": ["sum"]}})
| - [x] closes #35811
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
First step toward #35725. Currently `transform` just calls `aggregate`, and so if we are to forbid `aggregate` from transforming, these need to be decoupled. Other than the bugfix (#35811), the only behavioral change is in the error messages.
Assuming the bugfix #35811 is the correct behavior, docs/whatsnew also needs to be updated.
I wasn't sure if tests should be marked with #35725 or perhaps this PR #. Any guidance here? | https://api.github.com/repos/pandas-dev/pandas/pulls/35964 | 2020-08-28T20:33:13Z | 2020-09-12T21:36:52Z | 2020-09-12T21:36:52Z | 2020-12-03T21:44:40Z |
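The tests in the diff above all hinge on one new check: a function passed to `transform` must return a result of the same length as its input, otherwise `ValueError("Function did not transform")` is raised instead of silently returning an aggregated result. A minimal, pandas-free sketch of that check (the names `transform`/`values` here are illustrative, not pandas' actual implementation):

```python
# Simplified sketch of the validation exercised by test_transform_wont_agg
# and test_transform_reducer_raises above; this is NOT pandas' real code.
def transform(values, func):
    result = func(values)
    # Reducers like min/max return a scalar (or a shorter sequence);
    # reject anything whose length differs from the input.
    try:
        same_length = len(result) == len(values)
    except TypeError:  # scalar result, e.g. from min or sum
        same_length = False
    if not same_length:
        raise ValueError("Function did not transform")
    return result


data = [1, 2, 3]
assert transform(data, lambda xs: [x + 1 for x in xs]) == [2, 3, 4]
try:
    transform(data, min)  # a reducer is rejected
except ValueError as err:
    assert str(err) == "Function did not transform"
```

In the real PR this check lives behind `Series.transform`/`DataFrame.transform`, which additionally fall back to applying the function element-wise (via `apply`) when calling it on the whole object raises, as `test_transform_udf` shows.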
TYP: misc cleanup in core\generic.py | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index fea3efedb6abb..dd7b02d98ad42 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -387,7 +387,7 @@ def _get_block_manager_axis(cls, axis: Axis) -> int:
return m - axis
return axis
- def _get_axis_resolvers(self, axis: str) -> Dict[str, ABCSeries]:
+ def _get_axis_resolvers(self, axis: str) -> Dict[str, Union["Series", MultiIndex]]:
# index or columns
axis_index = getattr(self, axis)
d = dict()
@@ -417,10 +417,10 @@ def _get_axis_resolvers(self, axis: str) -> Dict[str, ABCSeries]:
d[axis] = dindex
return d
- def _get_index_resolvers(self) -> Dict[str, ABCSeries]:
+ def _get_index_resolvers(self) -> Dict[str, Union["Series", MultiIndex]]:
from pandas.core.computation.parsing import clean_column_name
- d: Dict[str, ABCSeries] = {}
+ d: Dict[str, Union["Series", MultiIndex]] = {}
for axis_name in self._AXIS_ORDERS:
d.update(self._get_axis_resolvers(axis_name))
@@ -4703,14 +4703,15 @@ def filter(
return self.reindex(**{name: [r for r in items if r in labels]})
elif like:
- def f(x):
+ def f(x) -> bool:
+ assert like is not None # needed for mypy
return like in ensure_str(x)
values = labels.map(f)
return self.loc(axis=axis)[values]
elif regex:
- def f(x):
+ def f(x) -> bool:
return matcher.search(ensure_str(x)) is not None
matcher = re.compile(regex)
@@ -6556,7 +6557,10 @@ def replace(
regex = True
items = list(to_replace.items())
- keys, values = zip(*items) if items else ([], [])
+ if items:
+ keys, values = zip(*items)
+ else:
+ keys, values = ([], [])
are_mappings = [is_dict_like(v) for v in values]
| pandas\core\generic.py:4707: error: Unsupported operand types for in ("Optional[str]" and "str") [operator]
pandas\core\generic.py:6559: error: 'builtins.object' object is not iterable [misc]
| https://api.github.com/repos/pandas-dev/pandas/pulls/35963 | 2020-08-28T19:43:38Z | 2020-08-29T23:57:01Z | 2020-08-29T23:57:01Z | 2020-08-30T11:17:04Z |
TYP: annotate plotting based on _get_axe_freq | diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index b490e07e43753..4d23a5e5fc249 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -1,5 +1,5 @@
import re
-from typing import List, Optional
+from typing import TYPE_CHECKING, List, Optional
import warnings
from matplotlib.artist import Artist
@@ -43,6 +43,9 @@
table,
)
+if TYPE_CHECKING:
+ from matplotlib.axes import Axes
+
class MPLPlot:
"""
@@ -1147,7 +1150,7 @@ def _plot(cls, ax, x, y, style=None, column_num=None, stacking_id=None, **kwds):
return lines
@classmethod
- def _ts_plot(cls, ax, x, data, style=None, **kwds):
+ def _ts_plot(cls, ax: "Axes", x, data, style=None, **kwds):
from pandas.plotting._matplotlib.timeseries import (
_decorate_axes,
_maybe_resample,
diff --git a/pandas/plotting/_matplotlib/timeseries.py b/pandas/plotting/_matplotlib/timeseries.py
index 193602e1baf4a..fd89a093d25a4 100644
--- a/pandas/plotting/_matplotlib/timeseries.py
+++ b/pandas/plotting/_matplotlib/timeseries.py
@@ -24,14 +24,15 @@
from pandas.tseries.frequencies import get_period_alias, is_subperiod, is_superperiod
if TYPE_CHECKING:
- from pandas import Index, Series # noqa:F401
+ from matplotlib.axes import Axes
+ from pandas import Index, Series # noqa:F401
# ---------------------------------------------------------------------
# Plotting functions and monkey patches
-def _maybe_resample(series: "Series", ax, kwargs):
+def _maybe_resample(series: "Series", ax: "Axes", kwargs):
# resample against axes freq if necessary
freq, ax_freq = _get_freq(ax, series)
@@ -74,7 +75,7 @@ def _is_sup(f1: str, f2: str) -> bool:
)
-def _upsample_others(ax, freq, kwargs):
+def _upsample_others(ax: "Axes", freq, kwargs):
legend = ax.get_legend()
lines, labels = _replot_ax(ax, freq, kwargs)
_replot_ax(ax, freq, kwargs)
@@ -97,7 +98,7 @@ def _upsample_others(ax, freq, kwargs):
ax.legend(lines, labels, loc="best", title=title)
-def _replot_ax(ax, freq, kwargs):
+def _replot_ax(ax: "Axes", freq, kwargs):
data = getattr(ax, "_plot_data", None)
# clear current axes and data
@@ -127,7 +128,7 @@ def _replot_ax(ax, freq, kwargs):
return lines, labels
-def _decorate_axes(ax, freq, kwargs):
+def _decorate_axes(ax: "Axes", freq, kwargs):
"""Initialize axes for time-series plotting"""
if not hasattr(ax, "_plot_data"):
ax._plot_data = []
@@ -143,7 +144,7 @@ def _decorate_axes(ax, freq, kwargs):
ax.date_axis_info = None
-def _get_ax_freq(ax):
+def _get_ax_freq(ax: "Axes"):
"""
Get the freq attribute of the ax object if set.
Also checks shared axes (eg when using secondary yaxis, sharex=True
@@ -174,7 +175,7 @@ def _get_period_alias(freq) -> Optional[str]:
return freq
-def _get_freq(ax, series: "Series"):
+def _get_freq(ax: "Axes", series: "Series"):
# get frequency from data
freq = getattr(series.index, "freq", None)
if freq is None:
@@ -192,7 +193,7 @@ def _get_freq(ax, series: "Series"):
return freq, ax_freq
-def _use_dynamic_x(ax, data: "FrameOrSeriesUnion") -> bool:
+def _use_dynamic_x(ax: "Axes", data: FrameOrSeriesUnion) -> bool:
freq = _get_index_freq(data.index)
ax_freq = _get_ax_freq(ax)
@@ -234,7 +235,7 @@ def _get_index_freq(index: "Index") -> Optional[BaseOffset]:
return freq
-def _maybe_convert_index(ax, data):
+def _maybe_convert_index(ax: "Axes", data):
# tsplot converts automatically, but don't want to convert index
# over and over for DataFrames
if isinstance(data.index, (ABCDatetimeIndex, ABCPeriodIndex)):
@@ -264,7 +265,7 @@ def _maybe_convert_index(ax, data):
# Do we need the rest for convenience?
-def _format_coord(freq, t, y):
+def _format_coord(freq, t, y) -> str:
time_period = Period(ordinal=int(t), freq=freq)
return f"t = {time_period} y = {y:8f}"
In some places in plotting `ax` is an Axes object and in others it's an Axis object. The current goal is to pin these down.
In `timeseries._get_ax_freq` we call `ax.get_shared_x_axes()`, which is an Axes method that does not exist on Axis. This annotates that usage, along with all the other places where we can infer Axes from it. | https://api.github.com/repos/pandas-dev/pandas/pulls/35960 | 2020-08-28T18:20:48Z | 2020-08-30T11:59:01Z | 2020-08-30T11:59:01Z | 2020-08-30T15:06:52Z |
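The diff above relies on the standard `TYPE_CHECKING` idiom: importing `matplotlib.axes.Axes` only for the type checker and quoting it in annotations, so matplotlib is not imported at module load time. A minimal sketch of the idiom, using a stdlib class (`Decimal` here is just a stand-in for `Axes`):

```python
from typing import TYPE_CHECKING

# TYPE_CHECKING is False at runtime, so this block is only evaluated by
# static type checkers; it avoids a hard runtime import (and potential
# import cycles) while still letting annotations name the type.
if TYPE_CHECKING:
    from decimal import Decimal  # stand-in for matplotlib.axes.Axes


def describe(value: "Decimal") -> str:
    # The quoted (string) annotation is never resolved at runtime,
    # so calling this never needs the guarded import.
    return f"value = {value}"


assert TYPE_CHECKING is False
assert describe(1) == "value = 1"
```

This is why the diff writes `ax: "Axes"` as a string rather than a bare name.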
Issue35925 remove more trailing commas | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 555024ad75f5e..dbc105be3c62b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -962,12 +962,12 @@ def _get_values_tuple(self, key):
# If key is contained, would have returned by now
indexer, new_index = self.index.get_loc_level(key)
return self._constructor(self._values[indexer], index=new_index).__finalize__(
- self,
+ self
)
def _get_values(self, indexer):
try:
- return self._constructor(self._mgr.get_slice(indexer)).__finalize__(self,)
+ return self._constructor(self._mgr.get_slice(indexer)).__finalize__(self)
except ValueError:
# mpl compat if we look up e.g. ser[:, np.newaxis];
# see tests.series.timeseries.test_mpl_compat_hack
diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py
index c57c434dd3040..1913b51a68c15 100644
--- a/pandas/core/window/ewm.py
+++ b/pandas/core/window/ewm.py
@@ -362,7 +362,7 @@ def var(self, bias: bool = False, *args, **kwargs):
def f(arg):
return window_aggregations.ewmcov(
- arg, arg, self.com, self.adjust, self.ignore_na, self.min_periods, bias,
+ arg, arg, self.com, self.adjust, self.ignore_na, self.min_periods, bias
)
return self._apply(f)
@@ -458,7 +458,7 @@ def _get_corr(X, Y):
def _cov(x, y):
return window_aggregations.ewmcov(
- x, y, self.com, self.adjust, self.ignore_na, self.min_periods, 1,
+ x, y, self.com, self.adjust, self.ignore_na, self.min_periods, 1
)
x_values = X._prep_values()
diff --git a/pandas/core/window/indexers.py b/pandas/core/window/indexers.py
index 7c76a8e2a0b22..a21521f4ce8bb 100644
--- a/pandas/core/window/indexers.py
+++ b/pandas/core/window/indexers.py
@@ -40,7 +40,7 @@ class BaseIndexer:
"""Base class for window bounds calculations."""
def __init__(
- self, index_array: Optional[np.ndarray] = None, window_size: int = 0, **kwargs,
+ self, index_array: Optional[np.ndarray] = None, window_size: int = 0, **kwargs
):
"""
Parameters
@@ -105,7 +105,7 @@ def get_window_bounds(
) -> Tuple[np.ndarray, np.ndarray]:
return calculate_variable_window_bounds(
- num_values, self.window_size, min_periods, center, closed, self.index_array,
+ num_values, self.window_size, min_periods, center, closed, self.index_array
)
@@ -316,7 +316,7 @@ def get_window_bounds(
# Cannot use groupby_indicies as they might not be monotonic with the object
# we're rolling over
window_indicies = np.arange(
- window_indicies_start, window_indicies_start + len(indices),
+ window_indicies_start, window_indicies_start + len(indices)
)
window_indicies_start += len(indices)
# Extend as we'll be slicing window like [start, end)
diff --git a/pandas/core/window/numba_.py b/pandas/core/window/numba_.py
index 5d35ec7457ab0..aec294c3c84c2 100644
--- a/pandas/core/window/numba_.py
+++ b/pandas/core/window/numba_.py
@@ -57,7 +57,7 @@ def generate_numba_apply_func(
@numba.jit(nopython=nopython, nogil=nogil, parallel=parallel)
def roll_apply(
- values: np.ndarray, begin: np.ndarray, end: np.ndarray, minimum_periods: int,
+ values: np.ndarray, begin: np.ndarray, end: np.ndarray, minimum_periods: int
) -> np.ndarray:
result = np.empty(len(begin))
for i in loop_range(len(result)):
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index baabdf0fca29a..39fcfcbe2bff6 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -2117,7 +2117,7 @@ def count(self):
@Substitution(name="rolling")
@Appender(_shared_docs["apply"])
def apply(
- self, func, raw=False, engine=None, engine_kwargs=None, args=None, kwargs=None,
+ self, func, raw=False, engine=None, engine_kwargs=None, args=None, kwargs=None
):
return super().apply(
func,
diff --git a/pandas/io/formats/css.py b/pandas/io/formats/css.py
index b40d2a57b8106..4d6f03489725f 100644
--- a/pandas/io/formats/css.py
+++ b/pandas/io/formats/css.py
@@ -20,9 +20,7 @@ def expand(self, prop, value: str):
try:
mapping = self.SIDE_SHORTHANDS[len(tokens)]
except KeyError:
- warnings.warn(
- f'Could not expand "{prop}: {value}"', CSSWarning,
- )
+ warnings.warn(f'Could not expand "{prop}: {value}"', CSSWarning)
return
for key, idx in zip(self.SIDES, mapping):
yield prop_fmt.format(key), tokens[idx]
@@ -117,10 +115,7 @@ def __call__(self, declarations_str, inherited=None):
props[prop] = self.size_to_pt(
props[prop], em_pt=font_size, conversions=self.BORDER_WIDTH_RATIOS
)
- for prop in [
- f"margin-{side}",
- f"padding-{side}",
- ]:
+ for prop in [f"margin-{side}", f"padding-{side}"]:
if prop in props:
# TODO: support %
props[prop] = self.size_to_pt(
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 81990b3d505e1..461ef6823918e 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -80,7 +80,7 @@
FloatFormatType = Union[str, Callable, "EngFormatter"]
ColspaceType = Mapping[Label, Union[str, int]]
ColspaceArgType = Union[
- str, int, Sequence[Union[str, int]], Mapping[Label, Union[str, int]],
+ str, int, Sequence[Union[str, int]], Mapping[Label, Union[str, int]]
]
common_docstring = """
@@ -741,7 +741,7 @@ def _to_str_columns(self) -> List[List[str]]:
for i, c in enumerate(frame):
fmt_values = self._format_col(i)
fmt_values = _make_fixed_width(
- fmt_values, self.justify, minimum=col_space.get(c, 0), adj=self.adj,
+ fmt_values, self.justify, minimum=col_space.get(c, 0), adj=self.adj
)
stringified.append(fmt_values)
else:
@@ -1069,7 +1069,7 @@ def _get_formatted_index(self, frame: "DataFrame") -> List[str]:
fmt_index = [
tuple(
_make_fixed_width(
- list(x), justify="left", minimum=col_space.get("", 0), adj=self.adj,
+ list(x), justify="left", minimum=col_space.get("", 0), adj=self.adj
)
)
for x in fmt_index
diff --git a/pandas/io/orc.py b/pandas/io/orc.py
index ea79efd0579e5..b556732e4d116 100644
--- a/pandas/io/orc.py
+++ b/pandas/io/orc.py
@@ -12,7 +12,7 @@
def read_orc(
- path: FilePathOrBuffer, columns: Optional[List[str]] = None, **kwargs,
+ path: FilePathOrBuffer, columns: Optional[List[str]] = None, **kwargs
) -> "DataFrame":
"""
Load an ORC object from the file path, returning a DataFrame.
| xref #35925
| https://api.github.com/repos/pandas-dev/pandas/pulls/35959 | 2020-08-28T17:55:58Z | 2020-08-28T18:39:49Z | 2020-08-28T18:39:49Z | 2020-08-28T18:40:00Z |
TYP: Remove NDFrame._add_series_or_dataframe_operations | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 606bd4cc3b52d..95bd757f1994e 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -9306,7 +9306,6 @@ def _AXIS_NAMES(self) -> Dict[int, str]:
DataFrame._add_numeric_operations()
-DataFrame._add_series_or_dataframe_operations()
ops.add_flex_arithmetic_methods(DataFrame)
ops.add_special_arithmetic_methods(DataFrame)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index fea3efedb6abb..8bdf0861175b2 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6,7 +6,6 @@
import operator
import pickle
import re
-from textwrap import dedent
from typing import (
TYPE_CHECKING,
Any,
@@ -101,17 +100,22 @@
from pandas.core.missing import find_valid_index
from pandas.core.ops import _align_method_FRAME
from pandas.core.shared_docs import _shared_docs
+from pandas.core.window import Expanding, ExponentialMovingWindow, Rolling, Window
from pandas.io.formats import format as fmt
from pandas.io.formats.format import DataFrameFormatter, format_percentiles
from pandas.io.formats.printing import pprint_thing
if TYPE_CHECKING:
+ from pandas._libs.tslibs import BaseOffset
+
from pandas.core.resample import Resampler
from pandas.core.series import Series # noqa: F401
+ from pandas.core.window.indexers import BaseIndexer
# goal is to be able to define the docs close to function, while still being
# able to share
+_shared_docs = {**_shared_docs}
_shared_doc_kwargs = dict(
axes="keywords for axes",
klass="Series/DataFrame",
@@ -5127,51 +5131,6 @@ def pipe(self, func, *args, **kwargs):
"""
return com.pipe(self, func, *args, **kwargs)
- _shared_docs["aggregate"] = dedent(
- """
- Aggregate using one or more operations over the specified axis.
- {versionadded}
- Parameters
- ----------
- func : function, str, list or dict
- Function to use for aggregating the data. If a function, must either
- work when passed a {klass} or when passed to {klass}.apply.
-
- Accepted combinations are:
-
- - function
- - string function name
- - list of functions and/or function names, e.g. ``[np.sum, 'mean']``
- - dict of axis labels -> functions, function names or list of such.
- {axis}
- *args
- Positional arguments to pass to `func`.
- **kwargs
- Keyword arguments to pass to `func`.
-
- Returns
- -------
- scalar, Series or DataFrame
-
- The return can be:
-
- * scalar : when Series.agg is called with single function
- * Series : when DataFrame.agg is called with a single function
- * DataFrame : when DataFrame.agg is called with several functions
-
- Return scalar, Series or DataFrame.
- {see_also}
- Notes
- -----
- `agg` is an alias for `aggregate`. Use the alias.
-
- In pandas, agg, as most operations just ignores the missing values,
- and returns the operation only considering the values that are present.
-
- A passed user-defined-function will be passed a Series for evaluation.
- {examples}"""
- )
-
# ----------------------------------------------------------------------
# Attribute access
@@ -7448,77 +7407,6 @@ def clip(
return result
- _shared_docs[
- "groupby"
- ] = """
- Group %(klass)s using a mapper or by a Series of columns.
-
- A groupby operation involves some combination of splitting the
- object, applying a function, and combining the results. This can be
- used to group large amounts of data and compute operations on these
- groups.
-
- Parameters
- ----------
- by : mapping, function, label, or list of labels
- Used to determine the groups for the groupby.
- If ``by`` is a function, it's called on each value of the object's
- index. If a dict or Series is passed, the Series or dict VALUES
- will be used to determine the groups (the Series' values are first
- aligned; see ``.align()`` method). If an ndarray is passed, the
- values are used as-is determine the groups. A label or list of
- labels may be passed to group by the columns in ``self``. Notice
- that a tuple is interpreted as a (single) key.
- axis : {0 or 'index', 1 or 'columns'}, default 0
- Split along rows (0) or columns (1).
- level : int, level name, or sequence of such, default None
- If the axis is a MultiIndex (hierarchical), group by a particular
- level or levels.
- as_index : bool, default True
- For aggregated output, return object with group labels as the
- index. Only relevant for DataFrame input. as_index=False is
- effectively "SQL-style" grouped output.
- sort : bool, default True
- Sort group keys. Get better performance by turning this off.
- Note this does not influence the order of observations within each
- group. Groupby preserves the order of rows within each group.
- group_keys : bool, default True
- When calling apply, add group keys to index to identify pieces.
- squeeze : bool, default False
- Reduce the dimensionality of the return type if possible,
- otherwise return a consistent type.
-
- .. deprecated:: 1.1.0
-
- observed : bool, default False
- This only applies if any of the groupers are Categoricals.
- If True: only show observed values for categorical groupers.
- If False: show all values for categorical groupers.
-
- .. versionadded:: 0.23.0
- dropna : bool, default True
- If True, and if group keys contain NA values, NA values together
- with row/column will be dropped.
- If False, NA values will also be treated as the key in groups
-
- .. versionadded:: 1.1.0
-
- Returns
- -------
- %(klass)sGroupBy
- Returns a groupby object that contains information about the groups.
-
- See Also
- --------
- resample : Convenience method for frequency conversion and resampling
- of time series.
-
- Notes
- -----
- See the `user guide
- <https://pandas.pydata.org/pandas-docs/stable/groupby.html>`_ for more.
- """
-
def asfreq(
self: FrameOrSeries,
freq,
@@ -8427,35 +8315,6 @@ def ranker(data):
return ranker(data)
- _shared_docs[
- "compare"
- ] = """
- Compare to another %(klass)s and show the differences.
-
- .. versionadded:: 1.1.0
-
- Parameters
- ----------
- other : %(klass)s
- Object to compare with.
-
- align_axis : {0 or 'index', 1 or 'columns'}, default 1
- Determine which axis to align the comparison on.
-
- * 0, or 'index' : Resulting differences are stacked vertically
- with rows drawn alternately from self and other.
- * 1, or 'columns' : Resulting differences are aligned horizontally
- with columns drawn alternately from self and other.
-
- keep_shape : bool, default False
- If true, all rows and columns are kept.
- Otherwise, only the ones with different values are kept.
-
- keep_equal : bool, default False
- If true, the result keeps values that are equal.
- Otherwise, equal values are shown as NaNs.
- """
-
@Appender(_shared_docs["compare"] % _shared_doc_kwargs)
def compare(
self,
@@ -10585,45 +10444,21 @@ def mad(self, axis=None, skipna=None, level=None):
examples=_min_examples,
)
- @classmethod
- def _add_series_or_dataframe_operations(cls):
- """
- Add the series or dataframe only operations to the cls; evaluate
- the doc strings again.
- """
- from pandas.core.window import (
- Expanding,
- ExponentialMovingWindow,
- Rolling,
- Window,
- )
-
- @doc(Rolling)
- def rolling(
- self,
- window,
- min_periods=None,
- center=False,
- win_type=None,
- on=None,
- axis=0,
- closed=None,
- ):
- axis = self._get_axis_number(axis)
-
- if win_type is not None:
- return Window(
- self,
- window=window,
- min_periods=min_periods,
- center=center,
- win_type=win_type,
- on=on,
- axis=axis,
- closed=closed,
- )
+ @doc(Rolling)
+ def rolling(
+ self,
+ window: "Union[int, timedelta, BaseOffset, BaseIndexer]",
+ min_periods: Optional[int] = None,
+ center: bool_t = False,
+ win_type: Optional[str] = None,
+ on: Optional[str] = None,
+ axis: Axis = 0,
+ closed: Optional[str] = None,
+ ):
+ axis = self._get_axis_number(axis)
- return Rolling(
+ if win_type is not None:
+ return Window(
self,
window=window,
min_periods=min_periods,
@@ -10634,53 +10469,59 @@ def rolling(
closed=closed,
)
- cls.rolling = rolling
-
- @doc(Expanding)
- def expanding(self, min_periods=1, center=None, axis=0):
- axis = self._get_axis_number(axis)
- if center is not None:
- warnings.warn(
- "The `center` argument on `expanding` "
- "will be removed in the future",
- FutureWarning,
- stacklevel=2,
- )
- else:
- center = False
+ return Rolling(
+ self,
+ window=window,
+ min_periods=min_periods,
+ center=center,
+ win_type=win_type,
+ on=on,
+ axis=axis,
+ closed=closed,
+ )
- return Expanding(self, min_periods=min_periods, center=center, axis=axis)
+ @doc(Expanding)
+ def expanding(
+ self, min_periods: int = 1, center: Optional[bool_t] = None, axis: Axis = 0
+ ) -> Expanding:
+ axis = self._get_axis_number(axis)
+ if center is not None:
+ warnings.warn(
+ "The `center` argument on `expanding` will be removed in the future",
+ FutureWarning,
+ stacklevel=2,
+ )
+ else:
+ center = False
- cls.expanding = expanding
+ return Expanding(self, min_periods=min_periods, center=center, axis=axis)
- @doc(ExponentialMovingWindow)
- def ewm(
+ @doc(ExponentialMovingWindow)
+ def ewm(
+ self,
+ com: Optional[float] = None,
+ span: Optional[float] = None,
+ halflife: Optional[Union[float, TimedeltaConvertibleTypes]] = None,
+ alpha: Optional[float] = None,
+ min_periods: int = 0,
+ adjust: bool_t = True,
+ ignore_na: bool_t = False,
+ axis: Axis = 0,
+ times: Optional[Union[str, np.ndarray, FrameOrSeries]] = None,
+ ) -> ExponentialMovingWindow:
+ axis = self._get_axis_number(axis)
+ return ExponentialMovingWindow(
self,
- com=None,
- span=None,
- halflife=None,
- alpha=None,
- min_periods=0,
- adjust=True,
- ignore_na=False,
- axis=0,
- times=None,
- ):
- axis = self._get_axis_number(axis)
- return ExponentialMovingWindow(
- self,
- com=com,
- span=span,
- halflife=halflife,
- alpha=alpha,
- min_periods=min_periods,
- adjust=adjust,
- ignore_na=ignore_na,
- axis=axis,
- times=times,
- )
-
- cls.ewm = ewm
+ com=com,
+ span=span,
+ halflife=halflife,
+ alpha=alpha,
+ min_periods=min_periods,
+ adjust=adjust,
+ ignore_na=ignore_na,
+ axis=axis,
+ times=times,
+ )
@doc(klass=_shared_doc_kwargs["klass"], axis="")
def transform(self, func, *args, **kwargs):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 555024ad75f5e..a852529e9b517 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -5000,7 +5000,6 @@ def to_period(self, freq=None, copy=True) -> "Series":
Series._add_numeric_operations()
-Series._add_series_or_dataframe_operations()
# Add arithmetic!
ops.add_flex_arithmetic_methods(Series)
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index b81942f062b19..0aaccb47efc44 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -2,117 +2,258 @@
_shared_docs: Dict[str, str] = dict()
+_shared_docs[
+ "aggregate"
+] = """\
+Aggregate using one or more operations over the specified axis.
+{versionadded}
+Parameters
+----------
+func : function, str, list or dict
+ Function to use for aggregating the data. If a function, must either
+ work when passed a {klass} or when passed to {klass}.apply.
+
+ Accepted combinations are:
+
+ - function
+ - string function name
+ - list of functions and/or function names, e.g. ``[np.sum, 'mean']``
+ - dict of axis labels -> functions, function names or list of such.
+{axis}
+*args
+ Positional arguments to pass to `func`.
+**kwargs
+ Keyword arguments to pass to `func`.
+
+Returns
+-------
+scalar, Series or DataFrame
+
+ The return can be:
+
+ * scalar : when Series.agg is called with single function
+ * Series : when DataFrame.agg is called with a single function
+ * DataFrame : when DataFrame.agg is called with several functions
+
+ Return scalar, Series or DataFrame.
+{see_also}
+Notes
+-----
+`agg` is an alias for `aggregate`. Use the alias.
+
+A passed user-defined-function will be passed a Series for evaluation.
+{examples}"""
+
+_shared_docs[
+ "compare"
+] = """\
+Compare to another %(klass)s and show the differences.
+
+.. versionadded:: 1.1.0
+
+Parameters
+----------
+other : %(klass)s
+ Object to compare with.
+
+align_axis : {0 or 'index', 1 or 'columns'}, default 1
+ Determine which axis to align the comparison on.
+
+ * 0, or 'index' : Resulting differences are stacked vertically
+ with rows drawn alternately from self and other.
+ * 1, or 'columns' : Resulting differences are aligned horizontally
+ with columns drawn alternately from self and other.
+
+keep_shape : bool, default False
+ If true, all rows and columns are kept.
+ Otherwise, only the ones with different values are kept.
+
+keep_equal : bool, default False
+ If true, the result keeps values that are equal.
+ Otherwise, equal values are shown as NaNs.
+"""
+
+_shared_docs[
+ "groupby"
+] = """\
+Group %(klass)s using a mapper or by a Series of columns.
+
+A groupby operation involves some combination of splitting the
+object, applying a function, and combining the results. This can be
+used to group large amounts of data and compute operations on these
+groups.
+
+Parameters
+----------
+by : mapping, function, label, or list of labels
+ Used to determine the groups for the groupby.
+ If ``by`` is a function, it's called on each value of the object's
+ index. If a dict or Series is passed, the Series or dict VALUES
+ will be used to determine the groups (the Series' values are first
+ aligned; see ``.align()`` method). If an ndarray is passed, the
+ values are used as-is determine the groups. A label or list of
+ labels may be passed to group by the columns in ``self``. Notice
+ that a tuple is interpreted as a (single) key.
+axis : {0 or 'index', 1 or 'columns'}, default 0
+ Split along rows (0) or columns (1).
+level : int, level name, or sequence of such, default None
+ If the axis is a MultiIndex (hierarchical), group by a particular
+ level or levels.
+as_index : bool, default True
+ For aggregated output, return object with group labels as the
+ index. Only relevant for DataFrame input. as_index=False is
+ effectively "SQL-style" grouped output.
+sort : bool, default True
+ Sort group keys. Get better performance by turning this off.
+ Note this does not influence the order of observations within each
+ group. Groupby preserves the order of rows within each group.
+group_keys : bool, default True
+ When calling apply, add group keys to index to identify pieces.
+squeeze : bool, default False
+ Reduce the dimensionality of the return type if possible,
+ otherwise return a consistent type.
+
+ .. deprecated:: 1.1.0
+
+observed : bool, default False
+ This only applies if any of the groupers are Categoricals.
+ If True: only show observed values for categorical groupers.
+ If False: show all values for categorical groupers.
+
+ .. versionadded:: 0.23.0
+dropna : bool, default True
+ If True, and if group keys contain NA values, NA values together
+ with row/column will be dropped.
+ If False, NA values will also be treated as the key in groups
+
+ .. versionadded:: 1.1.0
+
+Returns
+-------
+%(klass)sGroupBy
+ Returns a groupby object that contains information about the groups.
+
+See Also
+--------
+resample : Convenience method for frequency conversion and resampling
+ of time series.
+
+Notes
+-----
+See the `user guide
+<https://pandas.pydata.org/pandas-docs/stable/groupby.html>`_ for more.
+"""
_shared_docs[
"melt"
-] = """
- Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
-
- This function is useful to massage a DataFrame into a format where one
- or more columns are identifier variables (`id_vars`), while all other
- columns, considered measured variables (`value_vars`), are "unpivoted" to
- the row axis, leaving just two non-identifier columns, 'variable' and
- 'value'.
- %(versionadded)s
- Parameters
- ----------
- id_vars : tuple, list, or ndarray, optional
- Column(s) to use as identifier variables.
- value_vars : tuple, list, or ndarray, optional
- Column(s) to unpivot. If not specified, uses all columns that
- are not set as `id_vars`.
- var_name : scalar
- Name to use for the 'variable' column. If None it uses
- ``frame.columns.name`` or 'variable'.
- value_name : scalar, default 'value'
- Name to use for the 'value' column.
- col_level : int or str, optional
- If columns are a MultiIndex then use this level to melt.
- ignore_index : bool, default True
- If True, original index is ignored. If False, the original index is retained.
- Index labels will be repeated as necessary.
-
- .. versionadded:: 1.1.0
-
- Returns
- -------
- DataFrame
- Unpivoted DataFrame.
-
- See Also
- --------
- %(other)s : Identical method.
- pivot_table : Create a spreadsheet-style pivot table as a DataFrame.
- DataFrame.pivot : Return reshaped DataFrame organized
- by given index / column values.
- DataFrame.explode : Explode a DataFrame from list-like
- columns to long format.
-
- Examples
- --------
- >>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
- ... 'B': {0: 1, 1: 3, 2: 5},
- ... 'C': {0: 2, 1: 4, 2: 6}})
- >>> df
- A B C
- 0 a 1 2
- 1 b 3 4
- 2 c 5 6
-
- >>> %(caller)sid_vars=['A'], value_vars=['B'])
- A variable value
- 0 a B 1
- 1 b B 3
- 2 c B 5
-
- >>> %(caller)sid_vars=['A'], value_vars=['B', 'C'])
- A variable value
- 0 a B 1
- 1 b B 3
- 2 c B 5
- 3 a C 2
- 4 b C 4
- 5 c C 6
-
- The names of 'variable' and 'value' columns can be customized:
-
- >>> %(caller)sid_vars=['A'], value_vars=['B'],
- ... var_name='myVarname', value_name='myValname')
- A myVarname myValname
- 0 a B 1
- 1 b B 3
- 2 c B 5
-
- Original index values can be kept around:
-
- >>> %(caller)sid_vars=['A'], value_vars=['B', 'C'], ignore_index=False)
- A variable value
- 0 a B 1
- 1 b B 3
- 2 c B 5
- 0 a C 2
- 1 b C 4
- 2 c C 6
-
- If you have multi-index columns:
-
- >>> df.columns = [list('ABC'), list('DEF')]
- >>> df
- A B C
- D E F
- 0 a 1 2
- 1 b 3 4
- 2 c 5 6
-
- >>> %(caller)scol_level=0, id_vars=['A'], value_vars=['B'])
- A variable value
- 0 a B 1
- 1 b B 3
- 2 c B 5
-
- >>> %(caller)sid_vars=[('A', 'D')], value_vars=[('B', 'E')])
- (A, D) variable_0 variable_1 value
- 0 a B E 1
- 1 b B E 3
- 2 c B E 5
- """
+] = """\
+Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
+
+This function is useful to massage a DataFrame into a format where one
+or more columns are identifier variables (`id_vars`), while all other
+columns, considered measured variables (`value_vars`), are "unpivoted" to
+the row axis, leaving just two non-identifier columns, 'variable' and
+'value'.
+%(versionadded)s
+Parameters
+----------
+id_vars : tuple, list, or ndarray, optional
+ Column(s) to use as identifier variables.
+value_vars : tuple, list, or ndarray, optional
+ Column(s) to unpivot. If not specified, uses all columns that
+ are not set as `id_vars`.
+var_name : scalar
+ Name to use for the 'variable' column. If None it uses
+ ``frame.columns.name`` or 'variable'.
+value_name : scalar, default 'value'
+ Name to use for the 'value' column.
+col_level : int or str, optional
+ If columns are a MultiIndex then use this level to melt.
+ignore_index : bool, default True
+ If True, original index is ignored. If False, the original index is retained.
+ Index labels will be repeated as necessary.
+
+ .. versionadded:: 1.1.0
+
+Returns
+-------
+DataFrame
+ Unpivoted DataFrame.
+
+See Also
+--------
+%(other)s : Identical method.
+pivot_table : Create a spreadsheet-style pivot table as a DataFrame.
+DataFrame.pivot : Return reshaped DataFrame organized
+ by given index / column values.
+DataFrame.explode : Explode a DataFrame from list-like
+ columns to long format.
+
+Examples
+--------
+>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
+... 'B': {0: 1, 1: 3, 2: 5},
+... 'C': {0: 2, 1: 4, 2: 6}})
+>>> df
+ A B C
+0 a 1 2
+1 b 3 4
+2 c 5 6
+
+>>> %(caller)sid_vars=['A'], value_vars=['B'])
+ A variable value
+0 a B 1
+1 b B 3
+2 c B 5
+
+>>> %(caller)sid_vars=['A'], value_vars=['B', 'C'])
+ A variable value
+0 a B 1
+1 b B 3
+2 c B 5
+3 a C 2
+4 b C 4
+5 c C 6
+
+The names of 'variable' and 'value' columns can be customized:
+
+>>> %(caller)sid_vars=['A'], value_vars=['B'],
+... var_name='myVarname', value_name='myValname')
+ A myVarname myValname
+0 a B 1
+1 b B 3
+2 c B 5
+
+Original index values can be kept around:
+
+>>> %(caller)sid_vars=['A'], value_vars=['B', 'C'], ignore_index=False)
+ A variable value
+0 a B 1
+1 b B 3
+2 c B 5
+0 a C 2
+1 b C 4
+2 c C 6
+
+If you have multi-index columns:
+
+>>> df.columns = [list('ABC'), list('DEF')]
+>>> df
+ A B C
+ D E F
+0 a 1 2
+1 b 3 4
+2 c 5 6
+
+>>> %(caller)scol_level=0, id_vars=['A'], value_vars=['B'])
+ A variable value
+0 a B 1
+1 b B 3
+2 c B 5
+
+>>> %(caller)sid_vars=[('A', 'D')], value_vars=[('B', 'E')])
+ (A, D) variable_0 variable_1 value
+0 a B E 1
+1 b B E 3
+2 c B E 5
+"""
diff --git a/pandas/core/window/common.py b/pandas/core/window/common.py
index 51a067427e867..2f3058db4493b 100644
--- a/pandas/core/window/common.py
+++ b/pandas/core/window/common.py
@@ -7,9 +7,9 @@
from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries
-from pandas.core.generic import _shared_docs
from pandas.core.groupby.base import GroupByMixin
from pandas.core.indexes.api import MultiIndex
+from pandas.core.shared_docs import _shared_docs
_shared_docs = dict(**_shared_docs)
_doc_template = """
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index baabdf0fca29a..f5e3587ed02d5 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -22,7 +22,7 @@
from pandas._libs.tslibs import BaseOffset, to_offset
import pandas._libs.window.aggregations as window_aggregations
-from pandas._typing import ArrayLike, Axis, FrameOrSeriesUnion, Label
+from pandas._typing import ArrayLike, Axis, FrameOrSeries, FrameOrSeriesUnion, Label
from pandas.compat._optional import import_optional_dependency
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution, cache_readonly, doc
@@ -159,7 +159,7 @@ class _Window(PandasObject, ShallowMixin, SelectionMixin):
def __init__(
self,
- obj: FrameOrSeriesUnion,
+ obj: FrameOrSeries,
window=None,
min_periods: Optional[int] = None,
center: bool = False,
| Refactoring ``NDFrame._add series or dataframe`` class method helps with typing. | https://api.github.com/repos/pandas-dev/pandas/pulls/35957 | 2020-08-28T17:04:32Z | 2020-08-30T12:00:25Z | 2020-08-30T12:00:25Z | 2020-08-30T12:54:09Z |
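The diff above moves the ``melt`` docstring into a ``_shared_docs`` template keyed by old-style ``%`` placeholders (``%(caller)s``, ``%(versionadded)s``, ``%(other)s``). A minimal standalone sketch of how such a template is rendered (the real pandas code substitutes via its doc decorators; this only illustrates the placeholder mechanics):

```python
# Minimal sketch of the %-style docstring templating used by _shared_docs.
# Names mirror the diff, but this is not the actual pandas substitution
# machinery (which goes through the Appender/Substitution/doc decorators).
_shared_docs = {}
_shared_docs["melt"] = """\
Unpivot a DataFrame from wide to long format.
%(versionadded)s
Examples
--------
>>> %(caller)sid_vars=['A'], value_vars=['B'])
"""

# Render once for the top-level function and once for the method variant:
func_doc = _shared_docs["melt"] % {"versionadded": "", "caller": "pd.melt(df, "}
method_doc = _shared_docs["melt"] % {"versionadded": "", "caller": "df.melt("}

print(">>> pd.melt(df, id_vars=['A'], value_vars=['B'])" in func_doc)  # True
print(">>> df.melt(id_vars=['A'], value_vars=['B'])" in method_doc)    # True
```

Keeping one template and varying only ``%(caller)s`` is what lets ``pd.melt`` and ``DataFrame.melt`` share identical example output without duplicating the docstring.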
Issue35925 remove trailing commas | diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index 2c0d4931a7bf2..99a586f056b12 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -29,7 +29,7 @@
def concatenate_block_managers(
- mgrs_indexers, axes, concat_axis: int, copy: bool,
+ mgrs_indexers, axes, concat_axis: int, copy: bool
) -> BlockManager:
"""
Concatenate block managers into one.
@@ -76,7 +76,7 @@ def concatenate_block_managers(
b = make_block(values, placement=placement, ndim=blk.ndim)
else:
b = make_block(
- _concatenate_join_units(join_units, concat_axis, copy=copy,),
+ _concatenate_join_units(join_units, concat_axis, copy=copy),
placement=placement,
)
blocks.append(b)
@@ -339,7 +339,7 @@ def _concatenate_join_units(join_units, concat_axis, copy):
# 2D to put it a non-EA Block
concat_values = np.atleast_2d(concat_values)
else:
- concat_values = concat_compat(to_concat, axis=concat_axis,)
+ concat_values = concat_compat(to_concat, axis=concat_axis)
return concat_values
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index a5372b14d210f..67ff3b9456ccf 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -491,7 +491,7 @@ def get_axe(block, qs, axes):
values = values.take(indexer)
return SingleBlockManager(
- make_block(values, ndim=1, placement=np.arange(len(values))), axes[0],
+ make_block(values, ndim=1, placement=np.arange(len(values))), axes[0]
)
def isna(self, func) -> "BlockManager":
@@ -519,9 +519,7 @@ def where(
def setitem(self, indexer, value) -> "BlockManager":
return self.apply("setitem", indexer=indexer, value=value)
- def putmask(
- self, mask, new, align: bool = True, axis: int = 0,
- ):
+ def putmask(self, mask, new, align: bool = True, axis: int = 0):
transpose = self.ndim == 2
if align:
@@ -1923,7 +1921,7 @@ def _compare_or_regex_search(
"""
def _check_comparison_types(
- result: Union[ArrayLike, bool], a: ArrayLike, b: Union[Scalar, Pattern],
+ result: Union[ArrayLike, bool], a: ArrayLike, b: Union[Scalar, Pattern]
):
"""
Raises an error if the two arrays (a,b) cannot be compared.
diff --git a/pandas/core/internals/ops.py b/pandas/core/internals/ops.py
index ae4892c720d5b..05f5f9a00ae1b 100644
--- a/pandas/core/internals/ops.py
+++ b/pandas/core/internals/ops.py
@@ -11,7 +11,7 @@
BlockPairInfo = namedtuple(
- "BlockPairInfo", ["lvals", "rvals", "locs", "left_ea", "right_ea", "rblk"],
+ "BlockPairInfo", ["lvals", "rvals", "locs", "left_ea", "right_ea", "rblk"]
)
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index e7e28798d84a2..e3f16a3ef4f90 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -1329,7 +1329,7 @@ def _zero_out_fperr(arg):
@disallow("M8", "m8")
def nancorr(
- a: np.ndarray, b: np.ndarray, method="pearson", min_periods: Optional[int] = None,
+ a: np.ndarray, b: np.ndarray, method="pearson", min_periods: Optional[int] = None
):
"""
a, b: ndarrays
diff --git a/pandas/core/ops/docstrings.py b/pandas/core/ops/docstrings.py
index 4ace873f029ae..99c2fefc97ae7 100644
--- a/pandas/core/ops/docstrings.py
+++ b/pandas/core/ops/docstrings.py
@@ -31,7 +31,7 @@ def _make_flex_doc(op_name, typ):
base_doc = _flex_doc_SERIES
if op_desc["reverse"]:
base_doc += _see_also_reverse_SERIES.format(
- reverse=op_desc["reverse"], see_also_desc=op_desc["see_also_desc"],
+ reverse=op_desc["reverse"], see_also_desc=op_desc["see_also_desc"]
)
doc_no_examples = base_doc.format(
desc=op_desc["desc"],
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 9e8fb643791f2..299b68c6e71e0 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -500,7 +500,7 @@ def get_result(self):
mgrs_indexers.append((obj._mgr, indexers))
new_data = concatenate_block_managers(
- mgrs_indexers, self.new_axes, concat_axis=self.bm_axis, copy=self.copy,
+ mgrs_indexers, self.new_axes, concat_axis=self.bm_axis, copy=self.copy
)
if not self.copy:
new_data._consolidate_inplace()
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 64a9e2dbf6d99..969ac56e41860 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -239,7 +239,7 @@ def _add_margins(
elif values:
marginal_result_set = _generate_marginal_results(
- table, data, values, rows, cols, aggfunc, observed, margins_name,
+ table, data, values, rows, cols, aggfunc, observed, margins_name
)
if not isinstance(marginal_result_set, tuple):
return marginal_result_set
@@ -308,7 +308,7 @@ def _compute_grand_margin(data, values, aggfunc, margins_name: str = "All"):
def _generate_marginal_results(
- table, data, values, rows, cols, aggfunc, observed, margins_name: str = "All",
+ table, data, values, rows, cols, aggfunc, observed, margins_name: str = "All"
):
if len(cols) > 0:
# need to "interleave" the margins
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index 391313fbb5283..e81dd8f0c735c 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -81,9 +81,7 @@ class _Unstacker:
unstacked : DataFrame
"""
- def __init__(
- self, index: MultiIndex, level=-1, constructor=None,
- ):
+ def __init__(self, index: MultiIndex, level=-1, constructor=None):
if constructor is None:
constructor = DataFrame
@@ -422,7 +420,7 @@ def unstack(obj, level, fill_value=None):
if is_extension_array_dtype(obj.dtype):
return _unstack_extension_series(obj, level, fill_value)
unstacker = _Unstacker(
- obj.index, level=level, constructor=obj._constructor_expanddim,
+ obj.index, level=level, constructor=obj._constructor_expanddim
)
return unstacker.get_result(
obj.values, value_columns=None, fill_value=fill_value
@@ -436,7 +434,7 @@ def _unstack_frame(obj, level, fill_value=None):
return obj._constructor(mgr)
else:
return _Unstacker(
- obj.index, level=level, constructor=obj._constructor,
+ obj.index, level=level, constructor=obj._constructor
).get_result(obj._values, value_columns=obj.columns, fill_value=fill_value)
| xref #35925 | https://api.github.com/repos/pandas-dev/pandas/pulls/35956 | 2020-08-28T16:42:01Z | 2020-08-28T18:06:09Z | 2020-08-28T18:06:09Z | 2020-08-28T18:06:28Z |
TYP: misc cleanup in core\groupby\generic.py | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 82e629d184b19..3172fb4e0e853 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -9,7 +9,6 @@
import copy
from functools import partial
from textwrap import dedent
-import typing
from typing import (
TYPE_CHECKING,
Any,
@@ -22,6 +21,7 @@
Optional,
Sequence,
Type,
+ TypeVar,
Union,
)
import warnings
@@ -92,7 +92,7 @@
# TODO: validate types on ScalarResult and move to _typing
# Blocked from using by https://github.com/python/mypy/issues/1484
# See note at _mangle_lambda_list
-ScalarResult = typing.TypeVar("ScalarResult")
+ScalarResult = TypeVar("ScalarResult")
def generate_property(name: str, klass: Type[FrameOrSeries]):
@@ -606,8 +606,8 @@ def filter(self, func, dropna=True, *args, **kwargs):
wrapper = lambda x: func(x, *args, **kwargs)
# Interpret np.nan as False.
- def true_and_notna(x, *args, **kwargs) -> bool:
- b = wrapper(x, *args, **kwargs)
+ def true_and_notna(x) -> bool:
+ b = wrapper(x)
return b and notna(b)
try:
@@ -1210,7 +1210,7 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
# TODO: Remove when default dtype of empty Series is object
kwargs = first_not_none._construct_axes_dict()
backup = create_series_with_explicit_dtype(
- **kwargs, dtype_if_empty=object
+ dtype_if_empty=object, **kwargs
)
values = [x if (x is not None) else backup for x in values]
| pandas\core\groupby\generic.py:610: error: Too many arguments [call-arg]
pandas\core\groupby\generic.py:1212: error: "create_series_with_explicit_dtype" gets multiple values for keyword argument "dtype_if_empty" [misc] | https://api.github.com/repos/pandas-dev/pandas/pulls/35955 | 2020-08-28T15:40:41Z | 2020-08-28T17:00:58Z | 2020-08-28T17:00:58Z | 2020-08-28T17:58:18Z |
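The second mypy error above flags a potential duplicate keyword: passing ``dtype_if_empty=object`` after unpacking ``**kwargs`` fails if the dict ever carries that key, which is why the fix moves the explicit keyword in front of the unpacking. A standalone illustration of the runtime hazard (the function below is a made-up stand-in for ``create_series_with_explicit_dtype``):

```python
def create_series(dtype_if_empty=object, **kwargs):
    """Made-up stand-in for create_series_with_explicit_dtype."""
    return dtype_if_empty, kwargs

axes = {"index": [0, 1, 2]}
# Fine while the unpacked dict does not contain the key:
result = create_series(dtype_if_empty=object, **axes)

# If the dict *does* carry the key, an explicit duplicate raises at
# runtime -- the situation mypy flags statically on the old call order:
dup = {"dtype_if_empty": float}
try:
    create_series(**dup, dtype_if_empty=object)
    raised = False
except TypeError:
    raised = True
print(raised)  # True
```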
TYP: misc typing cleanups for #32911 | diff --git a/pandas/io/excel/_odswriter.py b/pandas/io/excel/_odswriter.py
index 0131240f99cf6..72f3d81b1c662 100644
--- a/pandas/io/excel/_odswriter.py
+++ b/pandas/io/excel/_odswriter.py
@@ -42,7 +42,7 @@ def write_cells(
sheet_name: Optional[str] = None,
startrow: int = 0,
startcol: int = 0,
- freeze_panes: Optional[List] = None,
+ freeze_panes: Optional[Tuple[int, int]] = None,
) -> None:
"""
Write the frame cells using odf
@@ -215,14 +215,17 @@ def _process_style(self, style: Dict[str, Any]) -> str:
self.book.styles.addElement(odf_style)
return name
- def _create_freeze_panes(self, sheet_name: str, freeze_panes: List[int]) -> None:
- """Create freeze panes in the sheet
+ def _create_freeze_panes(
+ self, sheet_name: str, freeze_panes: Tuple[int, int]
+ ) -> None:
+ """
+ Create freeze panes in the sheet.
Parameters
----------
sheet_name : str
Name of the spreadsheet
- freeze_panes : list
+ freeze_panes : tuple of (int, int)
Freeze pane location x and y
"""
from odf.config import (
| pandas\io\excel\_odswriter.py:39:5: error: Argument 5 of "write_cells" is incompatible with supertype "ExcelWriter"; supertype defines the argument type as "Optional[Tuple[int, int]]" [override]
pandas\io\excel\_odswriter.py:62:35: error: Argument 1 to "_validate_freeze_panes" has incompatible type "Optional[List[Any]]"; expected "Optional[Tuple[int, int]]" [arg-type] | https://api.github.com/repos/pandas-dev/pandas/pulls/35954 | 2020-08-28T15:26:28Z | 2020-08-29T11:39:36Z | 2020-08-29T11:39:35Z | 2020-08-29T18:23:23Z |
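The tightened ``Optional[Tuple[int, int]]`` annotation matches how ``freeze_panes`` is used: a pair of integers naming the row and column where the frozen pane ends. A minimal sketch of such a check — not the exact pandas ``_validate_freeze_panes`` implementation, just the shape the new annotation encodes:

```python
from typing import Optional, Tuple


def validate_freeze_panes(freeze_panes: Optional[Tuple[int, int]]) -> bool:
    """Return True if freeze_panes is a usable (row, column) pair."""
    if freeze_panes is None:
        return False  # nothing to freeze
    if len(freeze_panes) == 2 and all(
        isinstance(item, int) for item in freeze_panes
    ):
        return True
    raise ValueError(
        "freeze_panes must be of form (row, column), "
        "where row and column are integers"
    )


print(validate_freeze_panes((1, 0)))  # True: freeze the header row
print(validate_freeze_panes(None))    # False
```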
TYP: misc typing cleanups for #29116 | diff --git a/pandas/core/aggregation.py b/pandas/core/aggregation.py
index e2374b81ca13b..7ca68d8289bd5 100644
--- a/pandas/core/aggregation.py
+++ b/pandas/core/aggregation.py
@@ -10,6 +10,7 @@
Callable,
DefaultDict,
Dict,
+ Iterable,
List,
Optional,
Sequence,
@@ -17,14 +18,14 @@
Union,
)
-from pandas._typing import AggFuncType, Label
+from pandas._typing import AggFuncType, FrameOrSeries, Label
from pandas.core.dtypes.common import is_dict_like, is_list_like
from pandas.core.base import SpecificationError
import pandas.core.common as com
from pandas.core.indexes.api import Index
-from pandas.core.series import FrameOrSeriesUnion, Series
+from pandas.core.series import Series
def reconstruct_func(
@@ -276,12 +277,13 @@ def maybe_mangle_lambdas(agg_spec: Any) -> Any:
def relabel_result(
- result: FrameOrSeriesUnion,
+ result: FrameOrSeries,
func: Dict[str, List[Union[Callable, str]]],
- columns: Tuple,
- order: List[int],
+ columns: Iterable[Label],
+ order: Iterable[int],
) -> Dict[Label, Series]:
- """Internal function to reorder result if relabelling is True for
+ """
+ Internal function to reorder result if relabelling is True for
dataframe.agg, and return the reordered result in dict.
Parameters:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 606bd4cc3b52d..fe6fb97012fac 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7415,6 +7415,12 @@ def aggregate(self, func=None, axis=0, *args, **kwargs):
if relabeling:
# This is to keep the order to columns occurrence unchanged, and also
# keep the order of new columns occurrence unchanged
+
+ # For the return values of reconstruct_func, if relabeling is
+ # False, columns and order will be None.
+ assert columns is not None
+ assert order is not None
+
result_in_dict = relabel_result(result, func, columns, order)
result = DataFrame(result_in_dict, index=columns)
| pandas\core\frame.py:7429:59: error: Argument 3 to "relabel_result" has incompatible type "Optional[List[str]]"; expected "Tuple[Any, ...]" [arg-type]
pandas\core\frame.py:7429:68: error: Argument 4 to "relabel_result" has incompatible type "Optional[List[int]]"; expected "List[int]" [arg-type] | https://api.github.com/repos/pandas-dev/pandas/pulls/35953 | 2020-08-28T15:20:55Z | 2020-08-30T17:50:38Z | 2020-08-30T17:50:38Z | 2020-08-30T18:41:14Z |
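The added ``assert ... is not None`` lines are the standard way to narrow ``Optional`` types for mypy: after the assert, the checker treats the variable as non-None, so passing it to a function with a concrete parameter type no longer errors. A self-contained illustration of the same pattern (hypothetical names, mirroring ``reconstruct_func`` returning None unless relabeling):

```python
from typing import List, Optional, Tuple


def reconstruct(relabeling: bool) -> Tuple[Optional[List[str]], Optional[List[int]]]:
    # Mirrors reconstruct_func: both values are None unless relabeling.
    if relabeling:
        return ["min", "max"], [0, 1]
    return None, None


def reorder(columns: List[str], order: List[int]) -> List[str]:
    # Requires concrete lists; Optional arguments would not type-check.
    return [columns[i] for i in order]


relabeling = True
columns, order = reconstruct(relabeling)
result: List[str] = []
if relabeling:
    # mypy still sees Optional here -- it cannot know relabeling=True
    # implies non-None -- so the asserts narrow the types explicitly.
    assert columns is not None
    assert order is not None
    result = reorder(columns, order)
print(result)  # ['min', 'max']
```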
BUG: Fix DataFrame.groupby().apply() for NaN groups with dropna=False | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index e65daa439a225..aa3255e673797 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -214,7 +214,8 @@ Performance improvements
Bug fixes
~~~~~~~~~
-
+- Bug in :meth:`DataFrameGroupBy.apply` raising an error with ``np.nan`` group(s) when ``dropna=False`` (:issue:`35889`)
+-
Categorical
^^^^^^^^^^^
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 299b68c6e71e0..9b94dae8556f6 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -11,6 +11,7 @@
from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries
+from pandas.core.dtypes.missing import isna
from pandas.core.arrays.categorical import (
factorize_from_iterable,
@@ -624,10 +625,11 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None) -> MultiInde
for hlevel, level in zip(zipped, levels):
to_concat = []
for key, index in zip(hlevel, indexes):
- mask = level == key
+ # Find matching codes; treat matching NaN values as equal.
+ mask = (isna(level) & isna(key)) | (level == key)
if not mask.any():
raise ValueError(f"Key {key} not in level {level}")
- i = np.nonzero(level == key)[0][0]
+ i = np.nonzero(mask)[0][0]
to_concat.append(np.repeat(i, len(index)))
codes_list.append(np.concatenate(to_concat))
diff --git a/pandas/tests/groupby/test_groupby_dropna.py b/pandas/tests/groupby/test_groupby_dropna.py
index d1501111cb22b..66db06eeebdfb 100644
--- a/pandas/tests/groupby/test_groupby_dropna.py
+++ b/pandas/tests/groupby/test_groupby_dropna.py
@@ -274,3 +274,56 @@ def test_groupby_dropna_datetime_like_data(
expected = pd.DataFrame({"values": values}, index=pd.Index(indexes, name="dt"))
tm.assert_frame_equal(grouped, expected)
+
+
+@pytest.mark.parametrize(
+ "dropna, data, selected_data, levels",
+ [
+ pytest.param(
+ False,
+ {"groups": ["a", "a", "b", np.nan], "values": [10, 10, 20, 30]},
+ {"values": [0, 1, 0, 0]},
+ ["a", "b", np.nan],
+ id="dropna_false_has_nan",
+ ),
+ pytest.param(
+ True,
+ {"groups": ["a", "a", "b", np.nan], "values": [10, 10, 20, 30]},
+ {"values": [0, 1, 0]},
+ None,
+ id="dropna_true_has_nan",
+ ),
+ pytest.param(
+ # no nan in "groups"; dropna=True|False should be same.
+ False,
+ {"groups": ["a", "a", "b", "c"], "values": [10, 10, 20, 30]},
+ {"values": [0, 1, 0, 0]},
+ None,
+ id="dropna_false_no_nan",
+ ),
+ pytest.param(
+ # no nan in "groups"; dropna=True|False should be same.
+ True,
+ {"groups": ["a", "a", "b", "c"], "values": [10, 10, 20, 30]},
+ {"values": [0, 1, 0, 0]},
+ None,
+ id="dropna_true_no_nan",
+ ),
+ ],
+)
+def test_groupby_apply_with_dropna_for_multi_index(dropna, data, selected_data, levels):
+ # GH 35889
+
+ df = pd.DataFrame(data)
+ gb = df.groupby("groups", dropna=dropna)
+ result = gb.apply(lambda grp: pd.DataFrame({"values": range(len(grp))}))
+
+ mi_tuples = tuple(zip(data["groups"], selected_data["values"]))
+ mi = pd.MultiIndex.from_tuples(mi_tuples, names=["groups", None])
+ # By default, the MultiIndex ``from_*`` constructors drop NA from the
+ # levels, so we need to add NA back to the level manually afterwards.
+ if not dropna and levels:
+ mi = mi.set_levels(levels, level="groups")
+
+ expected = pd.DataFrame(selected_data, index=mi)
+ tm.assert_frame_equal(result, expected)
| - [X] closes #35889
- [X] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry in `v1.1.2.rst`
| https://api.github.com/repos/pandas-dev/pandas/pulls/35951 | 2020-08-28T09:33:10Z | 2020-09-05T03:15:05Z | 2020-09-05T03:15:04Z | 2020-09-05T03:15:11Z |
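The core of this fix is the NaN-aware mask: plain ``level == key`` can never match a NaN group because ``NaN == NaN`` is False, so the lookup in ``_make_concat_multiindex`` raised ``Key ... not in level`` for groups kept by ``dropna=False``. A dependency-free sketch of the same idea:

```python
import math


def is_na(value) -> bool:
    """NaN is the only common value that is not equal to itself."""
    return isinstance(value, float) and math.isnan(value)


def key_mask(level, key):
    # Mirrors the fix: (isna(level) & isna(key)) | (level == key),
    # i.e. two NaNs count as a match instead of relying on ``==``.
    return [(is_na(item) and is_na(key)) or item == key for item in level]


level = ["a", "b", float("nan")]
print(key_mask(level, "a"))           # [True, False, False]
print(key_mask(level, float("nan")))  # [False, False, True]
# Naive equality finds no match at all for the NaN group:
print([item == float("nan") for item in level])  # [False, False, False]
```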
CLN remove unnecessary trailing commas to get ready for new version of black: generic -> blocks | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 2afa56b50c3c7..82e629d184b19 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -221,9 +221,7 @@ def _selection_name(self):
def apply(self, func, *args, **kwargs):
return super().apply(func, *args, **kwargs)
- @doc(
- _agg_template, examples=_agg_examples_doc, klass="Series",
- )
+ @doc(_agg_template, examples=_agg_examples_doc, klass="Series")
def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs):
if maybe_use_numba(engine):
@@ -935,9 +933,7 @@ class DataFrameGroupBy(GroupBy[DataFrame]):
See :ref:`groupby.aggregate.named` for more."""
)
- @doc(
- _agg_template, examples=_agg_examples_doc, klass="DataFrame",
- )
+ @doc(_agg_template, examples=_agg_examples_doc, klass="DataFrame")
def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs):
if maybe_use_numba(engine):
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index f96b488fb8d0d..a91366af61d0d 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1077,7 +1077,7 @@ def _aggregate_with_numba(self, data, func, *args, engine_kwargs=None, **kwargs)
tuple(args), kwargs, func, engine_kwargs
)
result = numba_agg_func(
- sorted_data, sorted_index, starts, ends, len(group_keys), len(data.columns),
+ sorted_data, sorted_index, starts, ends, len(group_keys), len(data.columns)
)
if cache_key not in NUMBA_FUNC_CACHE:
NUMBA_FUNC_CACHE[cache_key] = numba_agg_func
@@ -1595,8 +1595,7 @@ def max(self, numeric_only: bool = False, min_count: int = -1):
def first(self, numeric_only: bool = False, min_count: int = -1):
def first_compat(obj: FrameOrSeries, axis: int = 0):
def first(x: Series):
- """Helper function for first item that isn't NA.
- """
+ """Helper function for first item that isn't NA."""
x = x.array[notna(x.array)]
if len(x) == 0:
return np.nan
@@ -1620,8 +1619,7 @@ def first(x: Series):
def last(self, numeric_only: bool = False, min_count: int = -1):
def last_compat(obj: FrameOrSeries, axis: int = 0):
def last(x: Series):
- """Helper function for last item that isn't NA.
- """
+ """Helper function for last item that isn't NA."""
x = x.array[notna(x.array)]
if len(x) == 0:
return np.nan
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index c6171a55359fe..290680f380f5f 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -583,7 +583,7 @@ def transform(self, values, how: str, axis: int = 0, **kwargs):
return self._cython_operation("transform", values, how, axis, **kwargs)
def _aggregate(
- self, result, counts, values, comp_ids, agg_func, min_count: int = -1,
+ self, result, counts, values, comp_ids, agg_func, min_count: int = -1
):
if agg_func is libgroupby.group_nth:
# different signature from the others
@@ -603,9 +603,7 @@ def _transform(
return result
- def agg_series(
- self, obj: Series, func: F, *args, **kwargs,
- ):
+ def agg_series(self, obj: Series, func: F, *args, **kwargs):
# Caller is responsible for checking ngroups != 0
assert self.ngroups != 0
@@ -653,9 +651,7 @@ def _aggregate_series_fast(self, obj: Series, func: F):
result, counts = grouper.get_result()
return result, counts
- def _aggregate_series_pure_python(
- self, obj: Series, func: F, *args, **kwargs,
- ):
+ def _aggregate_series_pure_python(self, obj: Series, func: F, *args, **kwargs):
group_index, _, ngroups = self.group_info
counts = np.zeros(ngroups, dtype=int)
@@ -841,9 +837,7 @@ def groupings(self) -> "List[grouper.Grouping]":
for lvl, name in zip(self.levels, self.names)
]
- def agg_series(
- self, obj: Series, func: F, *args, **kwargs,
- ):
+ def agg_series(self, obj: Series, func: F, *args, **kwargs):
# Caller is responsible for checking ngroups != 0
assert self.ngroups != 0
assert len(self.bins) > 0 # otherwise we'd get IndexError in get_result
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 0e8d7c1b866b8..efe1a853a9a76 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -81,9 +81,7 @@ def wrapper(left, right):
DatetimeLikeArrayMixin,
cache=True,
)
-@inherit_names(
- ["mean", "asi8", "freq", "freqstr", "_box_func"], DatetimeLikeArrayMixin,
-)
+@inherit_names(["mean", "asi8", "freq", "freqstr", "_box_func"], DatetimeLikeArrayMixin)
class DatetimeIndexOpsMixin(ExtensionIndex):
"""
Common ops mixin to support a unified interface datetimelike Index.
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 9281f8017761d..5d309ef7cd515 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -182,10 +182,10 @@ def func(intvidx_self, other, sort=False):
)
@inherit_names(["set_closed", "to_tuples"], IntervalArray, wrap=True)
@inherit_names(
- ["__array__", "overlaps", "contains", "left", "right", "length"], IntervalArray,
+ ["__array__", "overlaps", "contains", "left", "right", "length"], IntervalArray
)
@inherit_names(
- ["is_non_overlapping_monotonic", "mid", "closed"], IntervalArray, cache=True,
+ ["is_non_overlapping_monotonic", "mid", "closed"], IntervalArray, cache=True
)
class IntervalIndex(IntervalMixin, ExtensionIndex):
_typ = "intervalindex"
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index 731907993d08f..80bb9f10fadd9 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -436,7 +436,7 @@ def isin(self, values, level=None):
def _is_compatible_with_other(self, other) -> bool:
return super()._is_compatible_with_other(other) or all(
isinstance(
- obj, (ABCInt64Index, ABCFloat64Index, ABCUInt64Index, ABCRangeIndex),
+ obj, (ABCInt64Index, ABCFloat64Index, ABCUInt64Index, ABCRangeIndex)
)
for obj in [self, other]
)
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index b85e2d3947cb1..f1457a9aac62b 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -82,7 +82,7 @@ class RangeIndex(Int64Index):
# Constructors
def __new__(
- cls, start=None, stop=None, step=None, dtype=None, copy=False, name=None,
+ cls, start=None, stop=None, step=None, dtype=None, copy=False, name=None
):
cls._validate_dtype(dtype)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index c62be4f767f00..a38b47a4c2a25 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -724,7 +724,7 @@ def replace(
# _can_hold_element checks have reduced this back to the
# scalar case and we can avoid a costly object cast
return self.replace(
- to_replace[0], value, inplace=inplace, regex=regex, convert=convert,
+ to_replace[0], value, inplace=inplace, regex=regex, convert=convert
)
# GH 22083, TypeError or ValueError occurred within error handling
@@ -905,7 +905,7 @@ def setitem(self, indexer, value):
return block
def putmask(
- self, mask, new, inplace: bool = False, axis: int = 0, transpose: bool = False,
+ self, mask, new, inplace: bool = False, axis: int = 0, transpose: bool = False
) -> List["Block"]:
"""
putmask the data to the block; it is possible that we may create a
@@ -1292,7 +1292,7 @@ def shift(self, periods: int, axis: int = 0, fill_value=None):
return [self.make_block(new_values)]
def where(
- self, other, cond, errors="raise", try_cast: bool = False, axis: int = 0,
+ self, other, cond, errors="raise", try_cast: bool = False, axis: int = 0
) -> List["Block"]:
"""
evaluate the block; return result block(s) from the result
@@ -1366,7 +1366,7 @@ def where_func(cond, values, other):
# we are explicitly ignoring errors
block = self.coerce_to_target_dtype(other)
blocks = block.where(
- orig_other, cond, errors=errors, try_cast=try_cast, axis=axis,
+ orig_other, cond, errors=errors, try_cast=try_cast, axis=axis
)
return self._maybe_downcast(blocks, "infer")
@@ -1605,7 +1605,7 @@ def set(self, locs, values):
self.values = values
def putmask(
- self, mask, new, inplace: bool = False, axis: int = 0, transpose: bool = False,
+ self, mask, new, inplace: bool = False, axis: int = 0, transpose: bool = False
) -> List["Block"]:
"""
See Block.putmask.__doc__
@@ -1816,7 +1816,7 @@ def diff(self, n: int, axis: int = 1) -> List["Block"]:
return super().diff(n, axis)
def shift(
- self, periods: int, axis: int = 0, fill_value: Any = None,
+ self, periods: int, axis: int = 0, fill_value: Any = None
) -> List["ExtensionBlock"]:
"""
Shift the block by `periods`.
@@ -1833,7 +1833,7 @@ def shift(
]
def where(
- self, other, cond, errors="raise", try_cast: bool = False, axis: int = 0,
+ self, other, cond, errors="raise", try_cast: bool = False, axis: int = 0
) -> List["Block"]:
cond = _extract_bool_array(cond)
@@ -1945,7 +1945,7 @@ def _can_hold_element(self, element: Any) -> bool:
)
def to_native_types(
- self, na_rep="", float_format=None, decimal=".", quoting=None, **kwargs,
+ self, na_rep="", float_format=None, decimal=".", quoting=None, **kwargs
):
""" convert to our native types format """
values = self.values
@@ -2369,7 +2369,7 @@ def replace(self, to_replace, value, inplace=False, regex=False, convert=True):
if not np.can_cast(to_replace_values, bool):
return self
return super().replace(
- to_replace, value, inplace=inplace, regex=regex, convert=convert,
+ to_replace, value, inplace=inplace, regex=regex, convert=convert
)
@@ -2453,18 +2453,18 @@ def replace(self, to_replace, value, inplace=False, regex=False, convert=True):
if not either_list and is_re(to_replace):
return self._replace_single(
- to_replace, value, inplace=inplace, regex=True, convert=convert,
+ to_replace, value, inplace=inplace, regex=True, convert=convert
)
elif not (either_list or regex):
return super().replace(
- to_replace, value, inplace=inplace, regex=regex, convert=convert,
+ to_replace, value, inplace=inplace, regex=regex, convert=convert
)
elif both_lists:
for to_rep, v in zip(to_replace, value):
result_blocks = []
for b in blocks:
result = b._replace_single(
- to_rep, v, inplace=inplace, regex=regex, convert=convert,
+ to_rep, v, inplace=inplace, regex=regex, convert=convert
)
result_blocks = _extend_blocks(result, result_blocks)
blocks = result_blocks
@@ -2475,18 +2475,18 @@ def replace(self, to_replace, value, inplace=False, regex=False, convert=True):
result_blocks = []
for b in blocks:
result = b._replace_single(
- to_rep, value, inplace=inplace, regex=regex, convert=convert,
+ to_rep, value, inplace=inplace, regex=regex, convert=convert
)
result_blocks = _extend_blocks(result, result_blocks)
blocks = result_blocks
return result_blocks
return self._replace_single(
- to_replace, value, inplace=inplace, convert=convert, regex=regex,
+ to_replace, value, inplace=inplace, convert=convert, regex=regex
)
def _replace_single(
- self, to_replace, value, inplace=False, regex=False, convert=True, mask=None,
+ self, to_replace, value, inplace=False, regex=False, convert=True, mask=None
):
"""
Replace elements by the given value.
| xref #35925 | https://api.github.com/repos/pandas-dev/pandas/pulls/35950 | 2020-08-28T08:00:48Z | 2020-08-28T09:27:17Z | 2020-08-28T09:27:17Z | 2020-08-28T16:50:43Z |
CLN remove unnecessary trailing commas to get ready for new version of black: _testing -> generic | diff --git a/pandas/_testing.py b/pandas/_testing.py
index ef6232fa6d575..b402b040d9268 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -939,7 +939,7 @@ def assert_categorical_equal(
if check_category_order:
assert_index_equal(left.categories, right.categories, obj=f"{obj}.categories")
assert_numpy_array_equal(
- left.codes, right.codes, check_dtype=check_dtype, obj=f"{obj}.codes",
+ left.codes, right.codes, check_dtype=check_dtype, obj=f"{obj}.codes"
)
else:
try:
@@ -948,9 +948,7 @@ def assert_categorical_equal(
except TypeError:
# e.g. '<' not supported between instances of 'int' and 'str'
lc, rc = left.categories, right.categories
- assert_index_equal(
- lc, rc, obj=f"{obj}.categories",
- )
+ assert_index_equal(lc, rc, obj=f"{obj}.categories")
assert_index_equal(
left.categories.take(left.codes),
right.categories.take(right.codes),
@@ -1092,7 +1090,7 @@ def _raise(left, right, err_msg):
if err_msg is None:
if left.shape != right.shape:
raise_assert_detail(
- obj, f"{obj} shapes are different", left.shape, right.shape,
+ obj, f"{obj} shapes are different", left.shape, right.shape
)
diff = 0
@@ -1559,7 +1557,7 @@ def assert_frame_equal(
# shape comparison
if left.shape != right.shape:
raise_assert_detail(
- obj, f"{obj} shape mismatch", f"{repr(left.shape)}", f"{repr(right.shape)}",
+ obj, f"{obj} shape mismatch", f"{repr(left.shape)}", f"{repr(right.shape)}"
)
if check_like:
@@ -2884,7 +2882,7 @@ def convert_rows_list_to_csv_str(rows_list: List[str]):
return expected
-def external_error_raised(expected_exception: Type[Exception],) -> ContextManager:
+def external_error_raised(expected_exception: Type[Exception]) -> ContextManager:
"""
Helper function to mark pytest.raises that have an external error message.
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index befde7c355818..2a6e983eff3ee 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -462,7 +462,7 @@ def isin(comps: AnyArrayLike, values: AnyArrayLike) -> np.ndarray:
def _factorize_array(
- values, na_sentinel: int = -1, size_hint=None, na_value=None, mask=None,
+ values, na_sentinel: int = -1, size_hint=None, na_value=None, mask=None
) -> Tuple[np.ndarray, np.ndarray]:
"""
Factorize an array-like to codes and uniques.
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index 832d09b062265..2976747d66dfa 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -40,7 +40,7 @@ def take(
fill_value = self._validate_fill_value(fill_value)
new_data = take(
- self._ndarray, indices, allow_fill=allow_fill, fill_value=fill_value,
+ self._ndarray, indices, allow_fill=allow_fill, fill_value=fill_value
)
return self._from_backing_data(new_data)
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index a28b341669918..27b1afdb438cb 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1505,7 +1505,7 @@ def argsort(self, ascending=True, kind="quicksort", **kwargs):
return super().argsort(ascending=ascending, kind=kind, **kwargs)
def sort_values(
- self, inplace: bool = False, ascending: bool = True, na_position: str = "last",
+ self, inplace: bool = False, ascending: bool = True, na_position: str = "last"
):
"""
Sort the Categorical by category value returning a new
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index 57df067c7b16e..d83ff91a1315f 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -138,7 +138,7 @@ def __from_arrow__(
return IntegerArray._concat_same_type(results)
-def integer_array(values, dtype=None, copy: bool = False,) -> "IntegerArray":
+def integer_array(values, dtype=None, copy: bool = False) -> "IntegerArray":
"""
Infer and return an integer array of the values.
@@ -182,7 +182,7 @@ def safe_cast(values, dtype, copy: bool):
def coerce_to_array(
- values, dtype, mask=None, copy: bool = False,
+ values, dtype, mask=None, copy: bool = False
) -> Tuple[np.ndarray, np.ndarray]:
"""
Coerce the input values array to numpy arrays with a mask
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 235840d6d201e..1237dea5c1a64 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -126,7 +126,7 @@ def __invert__(self: BaseMaskedArrayT) -> BaseMaskedArrayT:
return type(self)(~self._data, self._mask)
def to_numpy(
- self, dtype=None, copy: bool = False, na_value: Scalar = lib.no_default,
+ self, dtype=None, copy: bool = False, na_value: Scalar = lib.no_default
) -> np.ndarray:
"""
Convert to a NumPy Array.
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 05f901518d82f..23a4a70734c81 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -280,7 +280,7 @@ def isna(self) -> np.ndarray:
return isna(self._ndarray)
def fillna(
- self, value=None, method: Optional[str] = None, limit: Optional[int] = None,
+ self, value=None, method: Optional[str] = None, limit: Optional[int] = None
) -> "PandasArray":
# TODO(_values_for_fillna): remove this
value, method = validate_fillna_kwargs(value, method)
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index ddaf6d39f1837..cc39ffb5d1203 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -634,7 +634,7 @@ def _sub_period_array(self, other):
return new_values
def _addsub_int_array(
- self, other: np.ndarray, op: Callable[[Any, Any], Any],
+ self, other: np.ndarray, op: Callable[[Any, Any], Any]
) -> "PeriodArray":
"""
Add or subtract array of integers; equivalent to applying
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index e8c9f28e50084..f145e76046bee 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -514,9 +514,7 @@ def sanitize_array(
return subarr
-def _try_cast(
- arr, dtype: Optional[DtypeObj], copy: bool, raise_cast_failure: bool,
-):
+def _try_cast(arr, dtype: Optional[DtypeObj], copy: bool, raise_cast_failure: bool):
"""
Convert input to numpy ndarray and optionally cast to a given dtype.
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 286da6e1de9d5..fea3efedb6abb 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -315,17 +315,13 @@ def _data(self):
@property
def _AXIS_NUMBERS(self) -> Dict[str, int]:
""".. deprecated:: 1.1.0"""
- warnings.warn(
- "_AXIS_NUMBERS has been deprecated.", FutureWarning, stacklevel=3,
- )
+ warnings.warn("_AXIS_NUMBERS has been deprecated.", FutureWarning, stacklevel=3)
return {"index": 0}
@property
def _AXIS_NAMES(self) -> Dict[int, str]:
""".. deprecated:: 1.1.0"""
- warnings.warn(
- "_AXIS_NAMES has been deprecated.", FutureWarning, stacklevel=3,
- )
+ warnings.warn("_AXIS_NAMES has been deprecated.", FutureWarning, stacklevel=3)
return {0: "index"}
def _construct_axes_dict(self, axes=None, **kwargs):
@@ -5128,7 +5124,7 @@ def pipe(self, func, *args, **kwargs):
... .pipe(g, arg1=a)
... .pipe((func, 'arg2'), arg1=a, arg3=c)
... ) # doctest: +SKIP
- """
+ """
return com.pipe(self, func, *args, **kwargs)
_shared_docs["aggregate"] = dedent(
@@ -5630,7 +5626,7 @@ def astype(
else:
# else, only a single dtype is given
- new_data = self._mgr.astype(dtype=dtype, copy=copy, errors=errors,)
+ new_data = self._mgr.astype(dtype=dtype, copy=copy, errors=errors)
return self._constructor(new_data).__finalize__(self, method="astype")
# GH 33113: handle empty frame or series
@@ -6520,7 +6516,7 @@ def replace(
3 b
4 b
dtype: object
- """
+ """
if not (
is_scalar(to_replace)
or is_re_compilable(to_replace)
@@ -7772,7 +7768,7 @@ def between_time(
raise TypeError("Index must be DatetimeIndex")
indexer = index.indexer_between_time(
- start_time, end_time, include_start=include_start, include_end=include_end,
+ start_time, end_time, include_start=include_start, include_end=include_end
)
return self._take_with_is_copy(indexer, axis=axis)
@@ -8939,7 +8935,7 @@ def _where(
self._check_inplace_setting(other)
new_data = self._mgr.putmask(
- mask=cond, new=other, align=align, axis=block_axis,
+ mask=cond, new=other, align=align, axis=block_axis
)
result = self._constructor(new_data)
return self._update_inplace(result)
| xref #35925
| https://api.github.com/repos/pandas-dev/pandas/pulls/35949 | 2020-08-28T06:59:47Z | 2020-08-28T09:26:44Z | 2020-08-28T09:26:44Z | 2020-08-28T16:50:38Z |
paste_windows() wrong arg for c_wchar_p | diff --git a/pandas/io/clipboard/__init__.py b/pandas/io/clipboard/__init__.py
index d16955a98b62f..a4d4d10ae7a8b 100644
--- a/pandas/io/clipboard/__init__.py
+++ b/pandas/io/clipboard/__init__.py
@@ -468,7 +468,11 @@ def paste_windows():
# (Also, it may return a handle to an empty buffer,
# but technically that's not empty)
return ""
- return c_wchar_p(handle).value
+ locked_handle = safeGlobalLock(handle)
+ text = c_wchar_p(locked_handle).value
+ safeGlobalUnlock(handle)
+ return text
+
return copy_windows, paste_windows
| paste_windows() previously passed the handle returned by safeGetClipboardData(CF_UNICODETEXT) directly to c_wchar_p; it should pass safeGlobalLock(handle) instead, as copy_windows(text) already does correctly.
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35947 | 2020-08-28T06:16:24Z | 2021-01-01T22:07:33Z | null | 2021-01-01T22:07:33Z |
CLN: resolve UserWarning in `pandas/plotting/_matplotlib/core.py` #35945 | diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index 2d64e1b051444..2d519f56738b1 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -1226,8 +1226,8 @@ def get_label(i):
if self._need_to_set_index:
xticks = ax.get_xticks()
xticklabels = [get_label(x) for x in xticks]
- ax.set_xticklabels(xticklabels)
ax.xaxis.set_major_locator(FixedLocator(xticks))
+ ax.set_xticklabels(xticklabels)
condition = (
not self._use_dynamic_x()
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index ee43e5d7072fe..9ab697cb57690 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -2796,10 +2796,12 @@ def test_table(self):
_check_plot_works(df.plot, table=True)
_check_plot_works(df.plot, table=df)
- ax = df.plot()
- assert len(ax.tables) == 0
- plotting.table(ax, df.T)
- assert len(ax.tables) == 1
+ # GH 35945 UserWarning
+ with tm.assert_produces_warning(None):
+ ax = df.plot()
+ assert len(ax.tables) == 0
+ plotting.table(ax, df.T)
+ assert len(ax.tables) == 1
def test_errorbar_scatter(self):
df = DataFrame(np.random.randn(5, 2), index=range(5), columns=["x", "y"])
| - [x] closes #35945
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/35946 | 2020-08-28T05:40:08Z | 2020-09-05T03:33:50Z | 2020-09-05T03:33:50Z | 2020-10-02T11:56:55Z |
Version Number correction in to_json table | diff --git a/pandas/io/json/_table_schema.py b/pandas/io/json/_table_schema.py
index 84146a5d732e1..612f1194a65d9 100644
--- a/pandas/io/json/_table_schema.py
+++ b/pandas/io/json/_table_schema.py
@@ -23,6 +23,7 @@
from pandas.core.dtypes.dtypes import CategoricalDtype
from pandas import DataFrame
+from pandas import __version__ as __v__
import pandas.core.common as com
if TYPE_CHECKING:
@@ -274,7 +275,7 @@ def build_table_schema(
schema["primaryKey"] = primary_key
if version:
- schema["pandas_version"] = "0.20.0"
+ schema["pandas_version"] = __v__
return schema
| Added a line of code to obtain the installed pandas version and display it in the table schema JSON when to_json(orient='table') is used.
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35944 | 2020-08-28T05:21:36Z | 2020-08-29T18:44:13Z | null | 2020-08-29T18:45:42Z |
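The patch above replaces the hard-coded `"0.20.0"` string with the running pandas version. The effect can be sketched without pandas (the `build_table_schema` stand-in below only mirrors the shape of the real function and is an assumption, not the actual implementation):

```python
__version__ = "1.2.0"  # stand-in for pandas.__version__

def build_table_schema(fields, primary_key=None, version=True):
    # Mirrors the shape of the real schema dict; only the version line changed.
    schema = {"fields": fields}
    if primary_key is not None:
        schema["primaryKey"] = primary_key
    if version:
        # before the patch this was the hard-coded string "0.20.0"
        schema["pandas_version"] = __version__
    return schema
```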
CLN: resolve DeprecationWarning in `pandas/_testing.py` #35942 | diff --git a/pandas/_testing.py b/pandas/_testing.py
index ef6232fa6d575..97047e0632087 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -787,7 +787,11 @@ def _get_ilevel_values(index, level):
# skip exact index checking when `check_categorical` is False
if check_exact and check_categorical:
if not left.equals(right):
- diff = np.sum((left.values != right.values).astype(int)) * 100.0 / len(left)
+ diff = (
+ np.sum((np.not_equal(left.values, right.values)).astype(int))
+ * 100.0
+ / len(left)
+ )
msg = f"{obj} values are different ({np.round(diff, 5)} %)"
raise_assert_detail(obj, msg, left, right)
else:
| - [x] closes #35942
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/35943 | 2020-08-28T04:20:47Z | 2020-10-05T00:15:54Z | null | 2020-10-05T00:16:32Z |
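The patched expression in the diff above computes the percentage of mismatching positions; `np.not_equal` avoids the elementwise-comparison `DeprecationWarning` that a bare `!=` can trigger on object arrays. The arithmetic itself is simple (a pure-Python sketch of the same formula, without numpy):

```python
def mismatch_percentage(left, right):
    # Same arithmetic as the patched assert_index_equal diff:
    # count of unequal positions * 100.0 / len(left)
    assert len(left) == len(right)
    unequal = sum(1 for a, b in zip(left, right) if a != b)
    return unequal * 100.0 / len(left)
```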
DOC: complement the documentation for pandas.DataFrame.agg #35912 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 606bd4cc3b52d..b1e7c8a51f52c 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7376,6 +7376,15 @@ def _gotitem(
min 1.0 2.0
sum 12.0 NaN
+ Aggregate different functions over the columns and rename the index of the resulting
+ DataFrame.
+
+ >>> df.agg(x=('A', max), y=('B', 'min'), z=('C', np.mean))
+ A B C
+ x 7.0 NaN NaN
+ y NaN 2.0 NaN
+ z NaN NaN 6.0
+
Aggregate over the columns.
>>> df.agg("mean", axis="columns")
| - [x] closes #35912
- [x] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
<img width="676" alt="Screenshot 2020-08-28 at 11 59 23" src="https://user-images.githubusercontent.com/21543236/91519967-a71dd280-e926-11ea-87f5-e647fe650168.png">
| https://api.github.com/repos/pandas-dev/pandas/pulls/35941 | 2020-08-28T04:06:05Z | 2020-08-31T18:40:13Z | 2020-08-31T18:40:12Z | 2020-08-31T18:40:16Z |
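The doc example added above shows named aggregation: each keyword maps an output row label to a (column, function) pair, and every cell outside that pair comes back missing. The semantics can be sketched without pandas (a loose model only; `None` plays the role of NaN):

```python
def named_agg(data, **kwargs):
    # data: dict of column name -> list of values
    # kwargs: row_label=(column, func), as in df.agg(x=('A', max), ...)
    columns = list(data)
    result = {}
    for row_label, (column, func) in kwargs.items():
        row = {c: None for c in columns}   # None stands in for NaN
        row[column] = func(data[column])
        result[row_label] = row
    return result
```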
TYP: annotations in core.groupby | diff --git a/pandas/core/groupby/categorical.py b/pandas/core/groupby/categorical.py
index db734bb2f0c07..4d5acf527a867 100644
--- a/pandas/core/groupby/categorical.py
+++ b/pandas/core/groupby/categorical.py
@@ -1,3 +1,5 @@
+from typing import Optional, Tuple
+
import numpy as np
from pandas.core.algorithms import unique1d
@@ -6,9 +8,12 @@
CategoricalDtype,
recode_for_categories,
)
+from pandas.core.indexes.api import CategoricalIndex
-def recode_for_groupby(c: Categorical, sort: bool, observed: bool):
+def recode_for_groupby(
+ c: Categorical, sort: bool, observed: bool
+) -> Tuple[Categorical, Optional[Categorical]]:
"""
Code the categories to ensure we can groupby for categoricals.
@@ -73,7 +78,9 @@ def recode_for_groupby(c: Categorical, sort: bool, observed: bool):
return c.reorder_categories(cat.categories), None
-def recode_from_groupby(c: Categorical, sort: bool, ci):
+def recode_from_groupby(
+ c: Categorical, sort: bool, ci: CategoricalIndex
+) -> CategoricalIndex:
"""
Reverse the codes_to_groupby to account for sort / observed.
@@ -91,7 +98,8 @@ def recode_from_groupby(c: Categorical, sort: bool, ci):
"""
# we re-order to the original category orderings
if sort:
- return ci.set_categories(c.categories)
+ return ci.set_categories(c.categories) # type: ignore [attr-defined]
# we are not sorting, so add unobserved to the end
- return ci.add_categories(c.categories[~c.categories.isin(ci.categories)])
+ new_cats = c.categories[~c.categories.isin(ci.categories)]
+ return ci.add_categories(new_cats) # type: ignore [attr-defined]
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 3172fb4e0e853..e39464628ccaa 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -23,6 +23,7 @@
Type,
TypeVar,
Union,
+ cast,
)
import warnings
@@ -83,7 +84,7 @@
from pandas.plotting import boxplot_frame_groupby
if TYPE_CHECKING:
- from pandas.core.internals import Block
+ from pandas.core.internals import Block # noqa:F401
NamedAgg = namedtuple("NamedAgg", ["column", "aggfunc"])
@@ -1591,7 +1592,7 @@ def _gotitem(self, key, ndim: int, subset=None):
Parameters
----------
key : string / list of selections
- ndim : 1,2
+ ndim : {1, 2}
requested ndim of result
subset : object, default None
subset to act on
@@ -1617,7 +1618,7 @@ def _gotitem(self, key, ndim: int, subset=None):
raise AssertionError("invalid ndim for _gotitem")
- def _wrap_frame_output(self, result, obj) -> DataFrame:
+ def _wrap_frame_output(self, result, obj: DataFrame) -> DataFrame:
result_index = self.grouper.levels[0]
if self.axis == 0:
@@ -1634,20 +1635,14 @@ def _get_data_to_aggregate(self) -> BlockManager:
else:
return obj._mgr
- def _insert_inaxis_grouper_inplace(self, result):
+ def _insert_inaxis_grouper_inplace(self, result: DataFrame) -> None:
# zip in reverse so we can always insert at loc 0
- izip = zip(
- *map(
- reversed,
- (
- self.grouper.names,
- self.grouper.get_group_levels(),
- [grp.in_axis for grp in self.grouper.groupings],
- ),
- )
- )
columns = result.columns
- for name, lev, in_axis in izip:
+ for name, lev, in_axis in zip(
+ reversed(self.grouper.names),
+ reversed(self.grouper.get_group_levels()),
+ reversed([grp.in_axis for grp in self.grouper.groupings]),
+ ):
# GH #28549
# When using .apply(-), name will be in columns already
if in_axis and name not in columns:
@@ -1712,7 +1707,7 @@ def _wrap_transformed_output(
return result
- def _wrap_agged_blocks(self, blocks: "Sequence[Block]", items: Index) -> DataFrame:
+ def _wrap_agged_blocks(self, blocks: Sequence["Block"], items: Index) -> DataFrame:
if not self.as_index:
index = np.arange(blocks[0].values.shape[-1])
mgr = BlockManager(blocks, axes=[items, index])
@@ -1739,7 +1734,7 @@ def _iterate_column_groupbys(self):
exclusions=self.exclusions,
)
- def _apply_to_column_groupbys(self, func):
+ def _apply_to_column_groupbys(self, func) -> DataFrame:
from pandas.core.reshape.concat import concat
return concat(
@@ -1748,7 +1743,7 @@ def _apply_to_column_groupbys(self, func):
axis=1,
)
- def count(self):
+ def count(self) -> DataFrame:
"""
Compute count of group, excluding missing values.
@@ -1778,7 +1773,7 @@ def count(self):
return self._reindex_output(result, fill_value=0)
- def nunique(self, dropna: bool = True):
+ def nunique(self, dropna: bool = True) -> DataFrame:
"""
Return DataFrame with counts of unique elements in each position.
@@ -1844,6 +1839,7 @@ def nunique(self, dropna: bool = True):
],
axis=1,
)
+ results = cast(DataFrame, results)
if axis_number == 1:
results = results.T
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index a91366af61d0d..651af2d314251 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -459,7 +459,7 @@ def f(self):
@contextmanager
-def _group_selection_context(groupby):
+def _group_selection_context(groupby: "_GroupBy"):
"""
Set / reset the _group_selection_context.
"""
@@ -489,7 +489,7 @@ def __init__(
keys: Optional[_KeysArgType] = None,
axis: int = 0,
level=None,
- grouper: "Optional[ops.BaseGrouper]" = None,
+ grouper: Optional["ops.BaseGrouper"] = None,
exclusions=None,
selection=None,
as_index: bool = True,
@@ -734,7 +734,7 @@ def pipe(self, func, *args, **kwargs):
plot = property(GroupByPlot)
- def _make_wrapper(self, name):
+ def _make_wrapper(self, name: str) -> Callable:
assert name in self._apply_allowlist
with _group_selection_context(self):
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index 8239a792c65dd..18970ea0544e4 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -568,7 +568,9 @@ def codes(self) -> np.ndarray:
@cache_readonly
def result_index(self) -> Index:
if self.all_grouper is not None:
- return recode_from_groupby(self.all_grouper, self.sort, self.group_index)
+ group_idx = self.group_index
+ assert isinstance(group_idx, CategoricalIndex) # set in __init__
+ return recode_from_groupby(self.all_grouper, self.sort, group_idx)
return self.group_index
@property
@@ -607,7 +609,7 @@ def get_grouper(
mutated: bool = False,
validate: bool = True,
dropna: bool = True,
-) -> "Tuple[ops.BaseGrouper, List[Hashable], FrameOrSeries]":
+) -> Tuple["ops.BaseGrouper", List[Hashable], FrameOrSeries]:
"""
Create and return a BaseGrouper, which is an internal
mapping of how to create the grouper indexers.
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 290680f380f5f..4dd5b7f30e7f0 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -82,7 +82,7 @@ class BaseGrouper:
def __init__(
self,
axis: Index,
- groupings: "Sequence[grouper.Grouping]",
+ groupings: Sequence["grouper.Grouping"],
sort: bool = True,
group_keys: bool = True,
mutated: bool = False,
| I'm still seeing a couple of mypy complaints, suggestions @simonjayhawkins ? | https://api.github.com/repos/pandas-dev/pandas/pulls/35939 | 2020-08-28T01:34:17Z | 2020-08-31T10:16:16Z | 2020-08-31T10:16:16Z | 2020-08-31T14:44:24Z |
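Two typing idioms recur in the diff above: quoting only the forward-referenced name (`Optional["ops.BaseGrouper"]` rather than the whole expression) so the surrounding generics still get checked, and `typing.cast` to narrow a value mypy cannot infer. Both in miniature (the `Fraction` import is an arbitrary stand-in for a TYPE_CHECKING-only import):

```python
from typing import TYPE_CHECKING, Optional, Tuple, cast

if TYPE_CHECKING:
    # Imported only for type checking, mirroring the diff's
    # `from pandas.core.internals import Block  # noqa:F401` pattern.
    from fractions import Fraction  # noqa: F401

def split(value: object) -> Tuple[int, Optional["Fraction"]]:
    # cast() narrows `value` for the type checker; it is a no-op at runtime.
    number = cast(int, value)
    return number, None
```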
REGR: Fix comparison broadcasting over array of Intervals | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index b4c196f548147..c6917d1b50619 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -17,6 +17,7 @@ Fixed regressions
- Regression in :meth:`DatetimeIndex.intersection` incorrectly raising ``AssertionError`` when intersecting against a list (:issue:`35876`)
- Fix regression in updating a column inplace (e.g. using ``df['col'].fillna(.., inplace=True)``) (:issue:`35731`)
- Performance regression for :meth:`RangeIndex.format` (:issue:`35712`)
+- Regression in :meth:`DataFrame.replace` where a ``TypeError`` would be raised when attempting to replace elements of type :class:`Interval` (:issue:`35931`)
-
diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 6867e8aba7411..40bd5ad8f5a1f 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -358,6 +358,11 @@ cdef class Interval(IntervalMixin):
self_tuple = (self.left, self.right, self.closed)
other_tuple = (other.left, other.right, other.closed)
return PyObject_RichCompare(self_tuple, other_tuple, op)
+ elif util.is_array(other):
+ return np.array(
+ [PyObject_RichCompare(self, x, op) for x in other],
+ dtype=bool,
+ )
return NotImplemented
diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py
index 8603bff0587b6..83dfd42ae2a6e 100644
--- a/pandas/tests/frame/methods/test_replace.py
+++ b/pandas/tests/frame/methods/test_replace.py
@@ -1581,3 +1581,10 @@ def test_replace_with_compiled_regex(self):
result = df.replace({regex: "z"}, regex=True)
expected = pd.DataFrame(["z", "b", "c"])
tm.assert_frame_equal(result, expected)
+
+ def test_replace_intervals(self):
+ # https://github.com/pandas-dev/pandas/issues/35931
+ df = pd.DataFrame({"a": [pd.Interval(0, 1), pd.Interval(0, 1)]})
+ result = df.replace({"a": {pd.Interval(0, 1): "x"}})
+ expected = pd.DataFrame({"a": ["x", "x"]})
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/scalar/interval/test_arithmetic.py b/pandas/tests/scalar/interval/test_arithmetic.py
index 5252f1a4d5a24..b4c2b448e252a 100644
--- a/pandas/tests/scalar/interval/test_arithmetic.py
+++ b/pandas/tests/scalar/interval/test_arithmetic.py
@@ -45,3 +45,15 @@ def test_numeric_interval_add_timedelta_raises(interval, delta):
with pytest.raises((TypeError, ValueError), match=msg):
delta + interval
+
+
+@pytest.mark.parametrize("klass", [timedelta, np.timedelta64, Timedelta])
+def test_timdelta_add_timestamp_interval(klass):
+ delta = klass(0)
+ expected = Interval(Timestamp("2020-01-01"), Timestamp("2020-02-01"))
+
+ result = delta + expected
+ assert result == expected
+
+ result = expected + delta
+ assert result == expected
diff --git a/pandas/tests/scalar/interval/test_interval.py b/pandas/tests/scalar/interval/test_interval.py
index a0151bb9ac7bf..8ad9a2c7a9c70 100644
--- a/pandas/tests/scalar/interval/test_interval.py
+++ b/pandas/tests/scalar/interval/test_interval.py
@@ -2,6 +2,7 @@
import pytest
from pandas import Interval, Period, Timedelta, Timestamp
+import pandas._testing as tm
import pandas.core.common as com
@@ -267,3 +268,11 @@ def test_constructor_errors_tz(self, tz_left, tz_right):
msg = "left and right must have the same time zone"
with pytest.raises(error, match=msg):
Interval(left, right)
+
+ def test_equality_comparison_broadcasts_over_array(self):
+ # https://github.com/pandas-dev/pandas/issues/35931
+ interval = Interval(0, 1)
+ arr = np.array([interval, interval])
+ result = interval == arr
+ expected = np.array([True, True])
+ tm.assert_numpy_array_equal(result, expected)
| - [x] closes https://github.com/pandas-dev/pandas/issues/35931
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35938 | 2020-08-28T01:21:44Z | 2020-08-31T22:32:34Z | 2020-08-31T22:32:34Z | 2020-09-01T14:55:45Z |
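The Cython change above makes `Interval.__eq__` broadcast elementwise when the other operand is an array, instead of falling through to `NotImplemented`. The same pattern in plain Python (a simplified `Interval` stand-in, not the real class; a list stands in for `np.ndarray`):

```python
class Interval:
    # Simplified stand-in: only left/right/closed and broadcasting __eq__.
    def __init__(self, left, right, closed="right"):
        self.left, self.right, self.closed = left, right, closed

    def __eq__(self, other):
        if isinstance(other, Interval):
            return (self.left, self.right, self.closed) == (
                other.left, other.right, other.closed
            )
        if isinstance(other, (list, tuple)):  # np.ndarray in the real code
            # broadcast: compare self against each element
            return [self == x for x in other]
        return NotImplemented
```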
BUG: BlockSlider not clearing index._cache | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 407e8ba029ada..fca7e7d209031 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -256,6 +256,7 @@ Groupby/resample/rolling
- Bug in :meth:`DataFrameGroupBy.apply` where a non-nuisance grouping column would be dropped from the output columns if another groupby method was called before ``.apply()`` (:issue:`34656`)
- Bug in :meth:`DataFrameGroupby.apply` would drop a :class:`CategoricalIndex` when grouped on. (:issue:`35792`)
- Bug when subsetting columns on a :class:`~pandas.core.groupby.DataFrameGroupBy` (e.g. ``df.groupby('a')[['b']])``) would reset the attributes ``axis``, ``dropna``, ``group_keys``, ``level``, ``mutated``, ``sort``, and ``squeeze`` to their default values. (:issue:`9959`)
+- Bug in :meth:`DataFrameGroupby.tshift` failing to raise ``ValueError`` when a frequency cannot be inferred for the index of a group (:issue:`35937`)
-
Reshaping
diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index 7b36bc8baf891..8161b5c5c2b11 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -53,6 +53,7 @@ cdef class _BaseGrouper:
# to a 1-d ndarray like datetime / timedelta / period.
object.__setattr__(cached_ityp, '_index_data', islider.buf)
cached_ityp._engine.clear_mapping()
+ cached_ityp._cache.clear() # e.g. inferred_freq must go
object.__setattr__(cached_typ._mgr._block, 'values', vslider.buf)
object.__setattr__(cached_typ._mgr._block, 'mgr_locs',
slice(len(vslider.buf)))
@@ -71,6 +72,7 @@ cdef class _BaseGrouper:
object res
cached_ityp._engine.clear_mapping()
+ cached_ityp._cache.clear() # e.g. inferred_freq must go
res = self.f(cached_typ)
res = _extract_result(res)
if not initialized:
@@ -455,6 +457,7 @@ cdef class BlockSlider:
object.__setattr__(self.index, '_index_data', self.idx_slider.buf)
self.index._engine.clear_mapping()
+ self.index._cache.clear() # e.g. inferred_freq must go
cdef reset(self):
cdef:
diff --git a/pandas/tests/groupby/test_allowlist.py b/pandas/tests/groupby/test_allowlist.py
index 0fd66cc047017..4a735fc7bb686 100644
--- a/pandas/tests/groupby/test_allowlist.py
+++ b/pandas/tests/groupby/test_allowlist.py
@@ -369,7 +369,6 @@ def test_groupby_selection_with_methods(df):
"ffill",
"bfill",
"pct_change",
- "tshift",
]
for m in methods:
@@ -379,6 +378,11 @@ def test_groupby_selection_with_methods(df):
# should always be frames!
tm.assert_frame_equal(res, exp)
+ # check that the index cache is cleared
+ with pytest.raises(ValueError, match="Freq was not set in the index"):
+ # GH#35937
+ g.tshift()
+
# methods which aren't just .foo()
tm.assert_frame_equal(g.fillna(0), g_exp.fillna(0))
tm.assert_frame_equal(g.dtypes, g_exp.dtypes)
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35937 | 2020-08-27T23:33:20Z | 2020-09-02T03:18:17Z | 2020-09-02T03:18:17Z | 2020-09-02T17:00:22Z |
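The bug pattern here is general: BlockSlider mutates the index buffer in place, so any value cached from the old buffer (like `inferred_freq`) goes stale unless the cache is cleared on every move. In miniature (the classes below are illustrative stand-ins, not pandas internals):

```python
class CachedIndex:
    def __init__(self, data):
        self._data = data
        self._cache = {}

    @property
    def total(self):
        # cached derived value, playing the role of Index.inferred_freq
        if "total" not in self._cache:
            self._cache["total"] = sum(self._data)
        return self._cache["total"]

class Slider:
    def __init__(self, index):
        self.index = index

    def move(self, new_data):
        # mutate the buffer in place, as BlockSlider does...
        self.index._data = new_data
        # ...and clear the cache, which is the fix from this PR
        self.index._cache.clear()
```

Without the `_cache.clear()` line, `total` would keep returning the value computed from the old buffer.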
REGR: Fix inplace updates on column to set correct values | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index 9747a8ef3e71f..b4c196f548147 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -15,6 +15,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- Regression in :meth:`DatetimeIndex.intersection` incorrectly raising ``AssertionError`` when intersecting against a list (:issue:`35876`)
+- Fix regression in updating a column inplace (e.g. using ``df['col'].fillna(.., inplace=True)``) (:issue:`35731`)
- Performance regression for :meth:`RangeIndex.format` (:issue:`35712`)
-
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index a5372b14d210f..31f753eb9d75b 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1027,6 +1027,7 @@ def iset(self, loc: Union[int, slice, np.ndarray], value):
Set new item in-place. Does not consolidate. Adds new Block if not
contained in the current set of items
"""
+ value = extract_array(value, extract_numpy=True)
# FIXME: refactor, clearly separate broadcasting & zip-like assignment
# can prob also fix the various if tests for sparse/categorical
if self._blklocs is None and self.ndim > 1:
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index b9219f9f833de..bbfaacae1b444 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -348,6 +348,12 @@ def test_fillna_frame(self, data_missing):
# Non-scalar "scalar" values.
super().test_fillna_frame(data_missing)
+ @pytest.mark.skip("Invalid test")
+ def test_fillna_fill_other(self, data):
+ # inplace update doesn't work correctly with patched extension arrays
+ # extract_array returns PandasArray, while dtype is a numpy dtype
+ super().test_fillna_fill_other(data_missing)
+
class TestReshaping(BaseNumPyTests, base.BaseReshapingTests):
@pytest.mark.skip("Incorrect parent test")
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index 8ecd9066ceff0..00cfa6265934f 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -644,3 +644,17 @@ def test_to_dict_of_blocks_item_cache():
assert df.loc[0, "b"] == "foo"
assert df["b"] is ser
+
+
+def test_update_inplace_sets_valid_block_values():
+ # https://github.com/pandas-dev/pandas/issues/33457
+ df = pd.DataFrame({"a": pd.Series([1, 2, None], dtype="category")})
+
+ # inplace update of a single column
+ df["a"].fillna(1, inplace=True)
+
+ # check we havent put a Series into any block.values
+ assert isinstance(df._mgr.blocks[0].values, pd.Categorical)
+
+ # smoketest for OP bug from GH#35731
+ assert df.isnull().sum().sum() == 0
| Closes #35731 | https://api.github.com/repos/pandas-dev/pandas/pulls/35936 | 2020-08-27T20:11:50Z | 2020-08-31T12:36:13Z | 2020-08-31T12:36:12Z | 2020-08-31T13:09:24Z |
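The one-line fix above unwraps a Series-like value to its underlying array before it is stored, so `block.values` never ends up holding a Series. A simplified sketch of that invariant (`Column`, `extract_array`, and `Blocks` are stand-ins for the real pandas objects):

```python
class Column:
    # Stand-in for a Series: a wrapper around raw values.
    def __init__(self, values):
        self.values = list(values)

def extract_array(value):
    # Unwrap to the raw values, like pandas' extract_array(..., extract_numpy=True).
    return value.values if isinstance(value, Column) else value

class Blocks:
    def __init__(self):
        self._values = {}

    def iset(self, loc, value):
        value = extract_array(value)  # the fix: unwrap before storing
        assert not isinstance(value, Column), "block must hold raw values"
        self._values[loc] = value
```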
TYP: annotations in pandas.plotting | diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py
index 8f2080658e63e..214a67690d695 100644
--- a/pandas/plotting/_matplotlib/converter.py
+++ b/pandas/plotting/_matplotlib/converter.py
@@ -1,7 +1,8 @@
import contextlib
import datetime as pydt
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, tzinfo
import functools
+from typing import Optional, Tuple
from dateutil.relativedelta import relativedelta
import matplotlib.dates as dates
@@ -152,7 +153,7 @@ def axisinfo(unit, axis):
return units.AxisInfo(majloc=majloc, majfmt=majfmt, label="time")
@staticmethod
- def default_units(x, axis):
+ def default_units(x, axis) -> str:
return "time"
@@ -421,7 +422,7 @@ def autoscale(self):
return self.nonsingular(vmin, vmax)
-def _from_ordinal(x, tz=None):
+def _from_ordinal(x, tz: Optional[tzinfo] = None) -> datetime:
ix = int(x)
dt = datetime.fromordinal(ix)
remainder = float(x) - ix
@@ -450,7 +451,7 @@ def _from_ordinal(x, tz=None):
# -------------------------------------------------------------------------
-def _get_default_annual_spacing(nyears):
+def _get_default_annual_spacing(nyears) -> Tuple[int, int]:
"""
Returns a default spacing between consecutive ticks for annual data.
"""
diff --git a/pandas/plotting/_matplotlib/timeseries.py b/pandas/plotting/_matplotlib/timeseries.py
index eef4276f0ed09..193602e1baf4a 100644
--- a/pandas/plotting/_matplotlib/timeseries.py
+++ b/pandas/plotting/_matplotlib/timeseries.py
@@ -62,13 +62,13 @@ def _maybe_resample(series: "Series", ax, kwargs):
return freq, series
-def _is_sub(f1, f2):
+def _is_sub(f1: str, f2: str) -> bool:
return (f1.startswith("W") and is_subperiod("D", f2)) or (
f2.startswith("W") and is_subperiod(f1, "D")
)
-def _is_sup(f1, f2):
+def _is_sup(f1: str, f2: str) -> bool:
return (f1.startswith("W") and is_superperiod("D", f2)) or (
f2.startswith("W") and is_superperiod(f1, "D")
)
diff --git a/pandas/plotting/_matplotlib/tools.py b/pandas/plotting/_matplotlib/tools.py
index caf2f27de9276..26b25597ce1a6 100644
--- a/pandas/plotting/_matplotlib/tools.py
+++ b/pandas/plotting/_matplotlib/tools.py
@@ -1,16 +1,22 @@
# being a bit too dynamic
from math import ceil
+from typing import TYPE_CHECKING, Tuple
import warnings
import matplotlib.table
import matplotlib.ticker as ticker
import numpy as np
+from pandas._typing import FrameOrSeries
+
from pandas.core.dtypes.common import is_list_like
from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
from pandas.plotting._matplotlib import compat
+if TYPE_CHECKING:
+ from matplotlib.table import Table
+
def format_date_labels(ax, rot):
# mini version of autofmt_xdate
@@ -21,7 +27,7 @@ def format_date_labels(ax, rot):
fig.subplots_adjust(bottom=0.2)
-def table(ax, data, rowLabels=None, colLabels=None, **kwargs):
+def table(ax, data: FrameOrSeries, rowLabels=None, colLabels=None, **kwargs) -> "Table":
if isinstance(data, ABCSeries):
data = data.to_frame()
elif isinstance(data, ABCDataFrame):
@@ -43,7 +49,7 @@ def table(ax, data, rowLabels=None, colLabels=None, **kwargs):
return table
-def _get_layout(nplots, layout=None, layout_type="box"):
+def _get_layout(nplots: int, layout=None, layout_type: str = "box") -> Tuple[int, int]:
if layout is not None:
if not isinstance(layout, (tuple, list)) or len(layout) != 2:
raise ValueError("Layout must be a tuple of (rows, columns)")
@@ -92,14 +98,14 @@ def _get_layout(nplots, layout=None, layout_type="box"):
def _subplots(
- naxes=None,
- sharex=False,
- sharey=False,
- squeeze=True,
+ naxes: int,
+ sharex: bool = False,
+ sharey: bool = False,
+ squeeze: bool = True,
subplot_kw=None,
ax=None,
layout=None,
- layout_type="box",
+ layout_type: str = "box",
**fig_kw,
):
"""
@@ -369,7 +375,7 @@ def _get_all_lines(ax):
return lines
-def _get_xlim(lines):
+def _get_xlim(lines) -> Tuple[float, float]:
left, right = np.inf, -np.inf
for l in lines:
x = l.get_xdata(orig=False)
| https://api.github.com/repos/pandas-dev/pandas/pulls/35935 | 2020-08-27T18:06:46Z | 2020-08-28T09:07:06Z | 2020-08-28T09:07:06Z | 2020-08-28T17:09:57Z | |
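The annotations above follow one pattern: give private helpers concrete parameter and return types (`Optional[tzinfo]`, `Tuple[int, int]`) so mypy can check their callers. A simplified, hypothetical stand-in in that style — this is not the actual `_get_layout` logic from `pandas/plotting/_matplotlib/tools.py`, just an illustration of the annotation shape:

```python
from math import ceil
from typing import Optional, Tuple


def get_layout(nplots: int, layout: Optional[Tuple[int, int]] = None) -> Tuple[int, int]:
    """Return a (rows, columns) grid big enough to hold ``nplots`` axes."""
    if layout is not None:
        if len(layout) != 2:
            raise ValueError("Layout must be a tuple of (rows, columns)")
        return layout
    # default to two columns once there is more than one plot
    ncols = 2 if nplots > 1 else 1
    return ceil(nplots / ncols), ncols


print(get_layout(5))          # (3, 2)
print(get_layout(1))          # (1, 1)
print(get_layout(4, (4, 1)))  # (4, 1)
```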
TYP: Annotations | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index befde7c355818..8501726c7d76d 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -10,7 +10,7 @@
import numpy as np
from pandas._libs import Timestamp, algos, hashtable as htable, iNaT, lib
-from pandas._typing import AnyArrayLike, ArrayLike, DtypeObj
+from pandas._typing import AnyArrayLike, ArrayLike, DtypeObj, FrameOrSeriesUnion
from pandas.util._decorators import doc
from pandas.core.dtypes.cast import (
@@ -58,7 +58,7 @@
from pandas.core.indexers import validate_indices
if TYPE_CHECKING:
- from pandas import Series
+ from pandas import DataFrame, Series
_shared_docs: Dict[str, str] = {}
@@ -1101,6 +1101,9 @@ def __init__(self, obj, n: int, keep: str):
if self.keep not in ("first", "last", "all"):
raise ValueError('keep must be either "first", "last" or "all"')
+ def compute(self, method: str) -> FrameOrSeriesUnion:
+ raise NotImplementedError
+
def nlargest(self):
return self.compute("nlargest")
@@ -1133,7 +1136,7 @@ class SelectNSeries(SelectN):
nordered : Series
"""
- def compute(self, method):
+ def compute(self, method: str) -> "Series":
n = self.n
dtype = self.obj.dtype
@@ -1207,7 +1210,7 @@ def __init__(self, obj, n: int, keep: str, columns):
columns = list(columns)
self.columns = columns
- def compute(self, method):
+ def compute(self, method: str) -> "DataFrame":
from pandas import Int64Index
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index d85647edc3b81..8193d65b3b30c 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -1167,6 +1167,10 @@ class ExtensionOpsMixin:
with NumPy arrays.
"""
+ @classmethod
+ def _create_arithmetic_method(cls, op):
+ raise AbstractMethodError(cls)
+
@classmethod
def _add_arithmetic_ops(cls):
cls.__add__ = cls._create_arithmetic_method(operator.add)
@@ -1186,6 +1190,10 @@ def _add_arithmetic_ops(cls):
cls.__divmod__ = cls._create_arithmetic_method(divmod)
cls.__rdivmod__ = cls._create_arithmetic_method(ops.rdivmod)
+ @classmethod
+ def _create_comparison_method(cls, op):
+ raise AbstractMethodError(cls)
+
@classmethod
def _add_comparison_ops(cls):
cls.__eq__ = cls._create_comparison_method(operator.eq)
@@ -1195,6 +1203,10 @@ def _add_comparison_ops(cls):
cls.__le__ = cls._create_comparison_method(operator.le)
cls.__ge__ = cls._create_comparison_method(operator.ge)
+ @classmethod
+ def _create_logical_method(cls, op):
+ raise AbstractMethodError(cls)
+
@classmethod
def _add_logical_ops(cls):
cls.__and__ = cls._create_logical_method(operator.and_)
diff --git a/pandas/core/groupby/base.py b/pandas/core/groupby/base.py
index e71b2f94c8014..999873e7b81e4 100644
--- a/pandas/core/groupby/base.py
+++ b/pandas/core/groupby/base.py
@@ -4,17 +4,22 @@
SeriesGroupBy and the DataFrameGroupBy objects.
"""
import collections
+from typing import List
from pandas.core.dtypes.common import is_list_like, is_scalar
+from pandas.core.base import PandasObject
+
OutputKey = collections.namedtuple("OutputKey", ["label", "position"])
-class GroupByMixin:
+class GroupByMixin(PandasObject):
"""
Provide the groupby facilities to the mixed object.
"""
+ _attributes: List[str]
+
def _gotitem(self, key, ndim, subset=None):
"""
Sub-classes to define. Return a sliced object.
@@ -22,7 +27,7 @@ def _gotitem(self, key, ndim, subset=None):
Parameters
----------
key : string / list of selections
- ndim : 1,2
+ ndim : {1, 2}
requested ndim of result
subset : object, default None
subset to act on
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index b1e5d5627e3f6..a07c3328def54 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3541,10 +3541,7 @@ def _join_multi(self, other, how, return_indexers=True):
if not overlap:
raise ValueError("cannot join with no overlapping index names")
- self_is_mi = isinstance(self, ABCMultiIndex)
- other_is_mi = isinstance(other, ABCMultiIndex)
-
- if self_is_mi and other_is_mi:
+ if isinstance(self, MultiIndex) and isinstance(other, MultiIndex):
# Drop the non-matching levels from left and right respectively
ldrop_names = list(self_names - overlap)
@@ -3590,7 +3587,7 @@ def _join_multi(self, other, how, return_indexers=True):
# Case where only one index is multi
# make the indices into mi's that match
flip_order = False
- if self_is_mi:
+ if isinstance(self, MultiIndex):
self, other = other, self
flip_order = True
# flip if join method is right or left
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 0e8d7c1b866b8..c5f9b0783d91b 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -1,7 +1,7 @@
"""
Base and utility classes for tseries type pandas objects.
"""
-from datetime import datetime
+from datetime import datetime, tzinfo
from typing import Any, List, Optional, TypeVar, Union, cast
import numpy as np
@@ -632,6 +632,8 @@ class DatetimeTimedeltaMixin(DatetimeIndexOpsMixin, Int64Index):
but not PeriodIndex
"""
+ tz: Optional[tzinfo]
+
# Compat for frequency inference, see GH#23789
_is_monotonic_increasing = Index.is_monotonic_increasing
_is_monotonic_decreasing = Index.is_monotonic_decreasing
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index 731907993d08f..c3eb0496a1bc5 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -45,6 +45,8 @@ class NumericIndex(Index):
This is an abstract class.
"""
+ _default_dtype: np.dtype
+
_is_numeric_dtype = True
def __new__(cls, data=None, dtype=None, copy=False, name=None):
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index c62be4f767f00..94b62300e0af5 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1382,7 +1382,7 @@ def where_func(cond, values, other):
cond = cond.swapaxes(axis, 0)
mask = np.array([cond[i].all() for i in range(cond.shape[0])], dtype=bool)
- result_blocks = []
+ result_blocks: List["Block"] = []
for m in [mask, ~mask]:
if m.any():
taken = result.take(m.nonzero()[0], axis=axis)
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index a5372b14d210f..ad79317aee1ef 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -334,7 +334,7 @@ def reduce(self: T, func) -> T:
# If 2D, we assume that we're operating column-wise
assert self.ndim == 2
- res_blocks = []
+ res_blocks: List[Block] = []
for blk in self.blocks:
nbs = blk.reduce(func)
res_blocks.extend(nbs)
@@ -730,7 +730,7 @@ def _combine(self, blocks: List[Block], copy: bool = True) -> "BlockManager":
indexer = np.sort(np.concatenate([b.mgr_locs.as_array for b in blocks]))
inv_indexer = lib.get_reverse_indexer(indexer, self.shape[0])
- new_blocks = []
+ new_blocks: List[Block] = []
for b in blocks:
b = b.copy(deep=copy)
b.mgr_locs = inv_indexer[b.mgr_locs.indexer]
| https://api.github.com/repos/pandas-dev/pandas/pulls/35933 | 2020-08-27T17:34:52Z | 2020-08-30T12:01:51Z | 2020-08-30T12:01:51Z | 2020-08-30T15:07:35Z | |
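The `_create_*_method` stubs added to `ExtensionOpsMixin` exist so type checkers know every subclass has those classmethods before `_add_arithmetic_ops` references them. A self-contained sketch of the same pattern — pandas raises `AbstractMethodError` in the stub; plain `NotImplementedError` is used here only to keep the example dependency-free:

```python
import operator


class OpsMixin:
    @classmethod
    def _create_arithmetic_method(cls, op):
        # Stub so type checkers know the method exists on every subclass.
        raise NotImplementedError

    @classmethod
    def _add_arithmetic_ops(cls):
        cls.__add__ = cls._create_arithmetic_method(operator.add)
        cls.__sub__ = cls._create_arithmetic_method(operator.sub)


class ListArray(OpsMixin):
    def __init__(self, data):
        self.data = list(data)

    @classmethod
    def _create_arithmetic_method(cls, op):
        def method(self, other):
            return cls(op(a, b) for a, b in zip(self.data, other.data))
        return method


ListArray._add_arithmetic_ops()
print((ListArray([1, 2]) + ListArray([10, 20])).data)  # [11, 22]
```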
CLN remove unnecessary trailing commas from aggregation | diff --git a/pandas/core/aggregation.py b/pandas/core/aggregation.py
index 891048ae82dfd..e2374b81ca13b 100644
--- a/pandas/core/aggregation.py
+++ b/pandas/core/aggregation.py
@@ -28,10 +28,8 @@
def reconstruct_func(
- func: Optional[AggFuncType], **kwargs,
-) -> Tuple[
- bool, Optional[AggFuncType], Optional[List[str]], Optional[List[int]],
-]:
+ func: Optional[AggFuncType], **kwargs
+) -> Tuple[bool, Optional[AggFuncType], Optional[List[str]], Optional[List[int]]]:
"""
This is the internal function to reconstruct func given if there is relabeling
or not and also normalize the keyword to get new order of columns.
| xref #35925
| https://api.github.com/repos/pandas-dev/pandas/pulls/35930 | 2020-08-27T16:22:26Z | 2020-08-28T09:21:59Z | 2020-08-28T09:21:59Z | 2020-08-28T16:50:33Z |
CLN remove unnecessary trailing commas | diff --git a/pandas/core/aggregation.py b/pandas/core/aggregation.py
index 891048ae82dfd..e2374b81ca13b 100644
--- a/pandas/core/aggregation.py
+++ b/pandas/core/aggregation.py
@@ -28,10 +28,8 @@
def reconstruct_func(
- func: Optional[AggFuncType], **kwargs,
-) -> Tuple[
- bool, Optional[AggFuncType], Optional[List[str]], Optional[List[int]],
-]:
+ func: Optional[AggFuncType], **kwargs
+) -> Tuple[bool, Optional[AggFuncType], Optional[List[str]], Optional[List[int]]]:
"""
This is the internal function to reconstruct func given if there is relabeling
or not and also normalize the keyword to get new order of columns.
| xref #35927 | https://api.github.com/repos/pandas-dev/pandas/pulls/35929 | 2020-08-27T16:20:07Z | 2020-08-27T16:33:09Z | null | 2020-10-10T14:14:44Z |
Inconsistencies between python/cython groupby.agg behavior | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index ffd756bed43b6..8530d30af06a7 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -75,7 +75,14 @@
group_selection_context,
)
from pandas.core.groupby.numba_ import generate_numba_func, split_for_numba
-from pandas.core.indexes.api import Index, MultiIndex, all_indexes_same
+from pandas.core.indexes.api import (
+ DatetimeIndex,
+ Index,
+ MultiIndex,
+ PeriodIndex,
+ TimedeltaIndex,
+ all_indexes_same,
+)
import pandas.core.indexes.base as ibase
from pandas.core.internals import BlockManager
from pandas.core.series import Series
@@ -257,17 +264,27 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs)
if self.grouper.nkeys > 1:
return self._python_agg_general(func, *args, **kwargs)
- try:
- return self._python_agg_general(func, *args, **kwargs)
- except (ValueError, KeyError):
- # TODO: KeyError is raised in _python_agg_general,
- # see see test_groupby.test_basic
- result = self._aggregate_named(func, *args, **kwargs)
+ if isinstance(
+ self._selected_obj.index, (DatetimeIndex, TimedeltaIndex, PeriodIndex)
+ ):
+ # using _python_agg_general would end up incorrectly patching
+ # _index_data in reduction.pyx
+ result = self._aggregate_maybe_named(func, *args, **kwargs)
+ else:
+ try:
+ return self._python_agg_general(func, *args, **kwargs)
+ except (ValueError, KeyError):
+ # TODO: KeyError is raised in _python_agg_general,
+ # see see test_groupby.test_basic
+ result = self._aggregate_maybe_named(func, *args, **kwargs)
+
+ index = self.grouper.result_index
+ assert index.name == self.grouper.names[0]
- index = Index(sorted(result), name=self.grouper.names[0])
ret = create_series_with_explicit_dtype(
result, index=index, dtype_if_empty=object
)
+ ret.name = self._selected_obj.name # test_metadata_propagation_indiv
if not self.as_index: # pragma: no cover
print("Warning, ignoring as_index=True")
@@ -470,14 +487,34 @@ def _get_index() -> Index:
)
return self._reindex_output(result)
- def _aggregate_named(self, func, *args, **kwargs):
+ def _aggregate_maybe_named(self, func, *args, **kwargs):
+ """
+ Try the named-aggregator first, then unnamed, which better matches
+ what libreduction does.
+ """
+ try:
+ return self._aggregate_named(func, *args, named=True, **kwargs)
+ except KeyError:
+ return self._aggregate_named(func, *args, named=False, **kwargs)
+
+ def _aggregate_named(self, func, *args, named: bool = True, **kwargs):
result = {}
- for name, group in self:
- group.name = name
+ for name, group in self: # TODO: could we have duplicate names?
+ if named:
+ group.name = name
+
output = func(group, *args, **kwargs)
if isinstance(output, (Series, Index, np.ndarray)):
- raise ValueError("Must produce aggregated value")
+ if (
+ isinstance(output, Series)
+ and len(output) == 1
+ and name in output.index
+ ):
+ # FIXME: kludge for test_resampler_grouper.test_apply
+ output = output.iloc[0]
+ else:
+ raise ValueError("Must produce aggregated value")
result[name] = output
return result
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index e9525f03368fa..054d6165b31aa 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -45,7 +45,7 @@
from pandas.core.frame import DataFrame
from pandas.core.generic import NDFrame
from pandas.core.groupby import base, grouper
-from pandas.core.indexes.api import Index, MultiIndex, ensure_index
+from pandas.core.indexes.api import Index, MultiIndex, RangeIndex, ensure_index
from pandas.core.series import Series
from pandas.core.sorting import (
compress_group_index,
@@ -616,8 +616,10 @@ def agg_series(self, obj: Series, func: F):
# TODO: can we get a performant workaround for EAs backed by ndarray?
return self._aggregate_series_pure_python(obj, func)
- elif obj.index._has_complex_internals:
+ elif obj.index._has_complex_internals or isinstance(obj.index, RangeIndex):
# Preempt TypeError in _aggregate_series_fast
+ # exclude RangeIndex because patching it in libreduction would
+ # silently be incorrect
return self._aggregate_series_pure_python(obj, func)
try:
diff --git a/pandas/tests/resample/test_base.py b/pandas/tests/resample/test_base.py
index 28d33ebb23c20..5827b1f456bd7 100644
--- a/pandas/tests/resample/test_base.py
+++ b/pandas/tests/resample/test_base.py
@@ -195,14 +195,17 @@ def test_resample_empty_dtypes(index, dtype, resample_method):
@all_ts
-def test_apply_to_empty_series(empty_series_dti):
+@pytest.mark.parametrize("freq", ["M", "D", "H"])
+def test_apply_to_empty_series(empty_series_dti, freq):
# GH 14313
s = empty_series_dti
- for freq in ["M", "D", "H"]:
- result = s.resample(freq).apply(lambda x: 1)
- expected = s.resample(freq).apply(np.sum)
- tm.assert_series_equal(result, expected, check_dtype=False)
+ result = s.resample(freq).apply(lambda x: 1)
+ expected = s.resample(freq).apply(np.sum)
+
+ assert result.index.dtype == expected.index.dtype
+
+ tm.assert_series_equal(result, expected, check_dtype=False)
@all_ts
| This is pretty ugly, but is tentatively sufficient to make #34997 pass.
The upshot is that we have two problems:
1) in libreduction setting `setattr(cached_ityp, '_index_data', islider.buf)` silently does the wrong thing for EA-backed indexes
2) when we go through the non-libreduction path, we do things slightly differently, which requires more patches to get tests passing.
cc @WillAyd
| https://api.github.com/repos/pandas-dev/pandas/pulls/35928 | 2020-08-27T16:08:57Z | 2020-09-17T20:56:40Z | null | 2020-09-17T20:56:49Z |
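The branch above reroutes `SeriesGroupBy.aggregate` for series backed by `DatetimeIndex`/`TimedeltaIndex`/`PeriodIndex` through the pure-Python path, so libreduction never patches `_index_data` on an EA-backed index. The user-visible result should be unchanged; a small sketch of the kind of call that exercises this path (a plain callable on a datetime-indexed series):

```python
import pandas as pd

ser = pd.Series(
    [1.0, 2.0, 3.0, 4.0],
    index=pd.to_datetime(
        ["2020-01-01", "2020-01-01", "2020-01-02", "2020-01-02"]
    ),
)
# a plain callable goes through the python agg path
result = ser.groupby(level=0).agg(lambda g: g.max() - g.min())
print(result.tolist())  # [1.0, 1.0]
```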
CLN remove unnecessary trailing commas | diff --git a/pandas/core/aggregation.py b/pandas/core/aggregation.py
index 891048ae82dfd..e2374b81ca13b 100644
--- a/pandas/core/aggregation.py
+++ b/pandas/core/aggregation.py
@@ -28,10 +28,8 @@
def reconstruct_func(
- func: Optional[AggFuncType], **kwargs,
-) -> Tuple[
- bool, Optional[AggFuncType], Optional[List[str]], Optional[List[int]],
-]:
+ func: Optional[AggFuncType], **kwargs
+) -> Tuple[bool, Optional[AggFuncType], Optional[List[str]], Optional[List[int]]]:
"""
This is the internal function to reconstruct func given if there is relabeling
or not and also normalize the keyword to get new order of columns.
| xref #35925 | https://api.github.com/repos/pandas-dev/pandas/pulls/35927 | 2020-08-27T16:06:48Z | 2020-08-27T16:18:47Z | null | 2020-08-27T16:18:53Z |
remove unnecessary trailing commas | diff --git a/pandas/core/aggregation.py b/pandas/core/aggregation.py
index 891048ae82dfd..e2374b81ca13b 100644
--- a/pandas/core/aggregation.py
+++ b/pandas/core/aggregation.py
@@ -28,10 +28,8 @@
def reconstruct_func(
- func: Optional[AggFuncType], **kwargs,
-) -> Tuple[
- bool, Optional[AggFuncType], Optional[List[str]], Optional[List[int]],
-]:
+ func: Optional[AggFuncType], **kwargs
+) -> Tuple[bool, Optional[AggFuncType], Optional[List[str]], Optional[List[int]]]:
"""
This is the internal function to reconstruct func given if there is relabeling
or not and also normalize the keyword to get new order of columns.
| xref #35925 | https://api.github.com/repos/pandas-dev/pandas/pulls/35926 | 2020-08-27T16:04:04Z | 2020-08-27T16:05:29Z | null | 2020-08-27T16:05:37Z |
REF: use BlockManager.apply for DataFrameGroupBy.count | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 2afa56b50c3c7..039f52e6f5b8d 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -76,7 +76,7 @@
from pandas.core.groupby.numba_ import generate_numba_func, split_for_numba
from pandas.core.indexes.api import Index, MultiIndex, all_indexes_same
import pandas.core.indexes.base as ibase
-from pandas.core.internals import BlockManager, make_block
+from pandas.core.internals import BlockManager
from pandas.core.series import Series
from pandas.core.util.numba_ import NUMBA_FUNC_CACHE, maybe_use_numba
@@ -1765,20 +1765,24 @@ def count(self):
ids, _, ngroups = self.grouper.group_info
mask = ids != -1
- # TODO(2DEA): reshape would not be necessary with 2D EAs
- vals = ((mask & ~isna(blk.values).reshape(blk.shape)) for blk in data.blocks)
- locs = (blk.mgr_locs for blk in data.blocks)
+ def hfunc(bvalues: ArrayLike) -> ArrayLike:
+ # TODO(2DEA): reshape would not be necessary with 2D EAs
+ if bvalues.ndim == 1:
+ # EA
+ masked = mask & ~isna(bvalues).reshape(1, -1)
+ else:
+ masked = mask & ~isna(bvalues)
- counted = (
- lib.count_level_2d(x, labels=ids, max_bin=ngroups, axis=1) for x in vals
- )
- blocks = [make_block(val, placement=loc) for val, loc in zip(counted, locs)]
+ counted = lib.count_level_2d(masked, labels=ids, max_bin=ngroups, axis=1)
+ return counted
+
+ new_mgr = data.apply(hfunc)
# If we are grouping on categoricals we want unobserved categories to
# return zero, rather than the default of NaN which the reindexing in
# _wrap_agged_blocks() returns. GH 35028
with com.temp_setattr(self, "observed", True):
- result = self._wrap_agged_blocks(blocks, items=data.items)
+ result = self._wrap_agged_blocks(new_mgr.blocks, items=data.items)
return self._reindex_output(result, fill_value=0)
| https://api.github.com/repos/pandas-dev/pandas/pulls/35924 | 2020-08-27T14:43:01Z | 2020-09-04T20:47:13Z | 2020-09-04T20:47:13Z | 2020-09-04T20:50:50Z | |
Backport PR #35794: BUG: issubclass check with dtype instead of type,… | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index 5c4e770c7b33c..a87e06678faad 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -24,7 +24,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
-
+- Bug in :meth:`DataFrame.eval` with ``object`` dtype column binary operations (:issue:`35794`)
- Bug in :class:`Series` constructor raising a ``TypeError`` when constructing sparse datetime64 dtypes (:issue:`35762`)
- Bug in :meth:`DataFrame.apply` with ``result_type="reduce"`` returning with incorrect index (:issue:`35683`)
- Bug in :meth:`DateTimeIndex.format` and :meth:`PeriodIndex.format` with ``name=True`` setting the first item to ``"None"`` where it should be ``""`` (:issue:`35712`)
diff --git a/pandas/core/computation/ops.py b/pandas/core/computation/ops.py
index bc9ff7c44b689..e55df1e1d8155 100644
--- a/pandas/core/computation/ops.py
+++ b/pandas/core/computation/ops.py
@@ -481,13 +481,21 @@ def stringify(value):
self.lhs.update(v)
def _disallow_scalar_only_bool_ops(self):
+ rhs = self.rhs
+ lhs = self.lhs
+
+ # GH#24883 unwrap dtype if necessary to ensure we have a type object
+ rhs_rt = rhs.return_type
+ rhs_rt = getattr(rhs_rt, "type", rhs_rt)
+ lhs_rt = lhs.return_type
+ lhs_rt = getattr(lhs_rt, "type", lhs_rt)
if (
- (self.lhs.is_scalar or self.rhs.is_scalar)
+ (lhs.is_scalar or rhs.is_scalar)
and self.op in _bool_ops_dict
and (
not (
- issubclass(self.rhs.return_type, (bool, np.bool_))
- and issubclass(self.lhs.return_type, (bool, np.bool_))
+ issubclass(rhs_rt, (bool, np.bool_))
+ and issubclass(lhs_rt, (bool, np.bool_))
)
)
):
diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
index 628b955a1de92..56d178daee7fd 100644
--- a/pandas/tests/frame/test_query_eval.py
+++ b/pandas/tests/frame/test_query_eval.py
@@ -160,6 +160,13 @@ def test_eval_resolvers_as_list(self):
assert df.eval("a + b", resolvers=[dict1, dict2]) == dict1["a"] + dict2["b"]
assert pd.eval("a + b", resolvers=[dict1, dict2]) == dict1["a"] + dict2["b"]
+ def test_eval_object_dtype_binop(self):
+ # GH#24883
+ df = pd.DataFrame({"a1": ["Y", "N"]})
+ res = df.eval("c = ((a1 == 'Y') & True)")
+ expected = pd.DataFrame({"a1": ["Y", "N"], "c": [True, False]})
+ tm.assert_frame_equal(res, expected)
+
class TestDataFrameQueryWithMultiIndex:
def test_query_with_named_multiindex(self, parser, engine):
| xref #35794 | https://api.github.com/repos/pandas-dev/pandas/pulls/35919 | 2020-08-27T09:18:35Z | 2020-08-27T10:37:07Z | 2020-08-27T10:37:07Z | 2020-08-27T10:37:16Z |
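The fix backported above unwraps a numpy ``dtype`` to its scalar ``type`` before the ``issubclass`` check in ``_disallow_scalar_only_bool_ops``, so ``eval`` no longer raises on an ``object``-dtype comparison combined with a boolean scalar. The new test boils down to:

```python
import pandas as pd

df = pd.DataFrame({"a1": ["Y", "N"]})
# before GH#24883 was fixed, this raised a TypeError from issubclass(dtype, ...)
res = df.eval("c = ((a1 == 'Y') & True)")
print(res["c"].tolist())  # [True, False]
```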
REF: window/test_dtypes.py with pytest idioms | diff --git a/pandas/tests/window/conftest.py b/pandas/tests/window/conftest.py
index eb8252d5731be..7f03fa2a5ea0d 100644
--- a/pandas/tests/window/conftest.py
+++ b/pandas/tests/window/conftest.py
@@ -308,3 +308,34 @@ def which(request):
def halflife_with_times(request):
"""Halflife argument for EWM when times is specified."""
return request.param
+
+
+@pytest.fixture(
+ params=[
+ "object",
+ "category",
+ "int8",
+ "int16",
+ "int32",
+ "int64",
+ "uint8",
+ "uint16",
+ "uint32",
+ "uint64",
+ "float16",
+ "float32",
+ "float64",
+ "m8[ns]",
+ "M8[ns]",
+ pytest.param(
+ "datetime64[ns, UTC]",
+ marks=pytest.mark.skip(
+ "direct creation of extension dtype datetime64[ns, UTC] "
+ "is not supported ATM"
+ ),
+ ),
+ ]
+)
+def dtypes(request):
+ """Dtypes for window tests"""
+ return request.param
diff --git a/pandas/tests/window/test_dtypes.py b/pandas/tests/window/test_dtypes.py
index 0aa5bf019ff5e..245b48b351684 100644
--- a/pandas/tests/window/test_dtypes.py
+++ b/pandas/tests/window/test_dtypes.py
@@ -1,5 +1,3 @@
-from itertools import product
-
import numpy as np
import pytest
@@ -10,234 +8,95 @@
# gh-12373 : rolling functions error on float32 data
# make sure rolling functions works for different dtypes
#
-# NOTE that these are yielded tests and so _create_data
-# is explicitly called.
-#
# further note that we are only checking rolling for fully dtype
# compliance (though both expanding and ewm inherit)
-class Dtype:
- window = 2
-
- funcs = {
- "count": lambda v: v.count(),
- "max": lambda v: v.max(),
- "min": lambda v: v.min(),
- "sum": lambda v: v.sum(),
- "mean": lambda v: v.mean(),
- "std": lambda v: v.std(),
- "var": lambda v: v.var(),
- "median": lambda v: v.median(),
- }
-
- def get_expects(self):
- expects = {
- "sr1": {
- "count": Series([1, 2, 2, 2, 2], dtype="float64"),
- "max": Series([np.nan, 1, 2, 3, 4], dtype="float64"),
- "min": Series([np.nan, 0, 1, 2, 3], dtype="float64"),
- "sum": Series([np.nan, 1, 3, 5, 7], dtype="float64"),
- "mean": Series([np.nan, 0.5, 1.5, 2.5, 3.5], dtype="float64"),
- "std": Series([np.nan] + [np.sqrt(0.5)] * 4, dtype="float64"),
- "var": Series([np.nan, 0.5, 0.5, 0.5, 0.5], dtype="float64"),
- "median": Series([np.nan, 0.5, 1.5, 2.5, 3.5], dtype="float64"),
+def get_dtype(dtype, coerce_int=None):
+ if coerce_int is False and "int" in dtype:
+ return None
+ if dtype != "category":
+ return np.dtype(dtype)
+ return dtype
+
+
+@pytest.mark.parametrize(
+ "method, data, expected_data, coerce_int",
+ [
+ ("count", np.arange(5), [1, 2, 2, 2, 2], True),
+ ("count", np.arange(10, 0, -2), [1, 2, 2, 2, 2], True),
+ ("count", [0, 1, 2, np.nan, 4], [1, 2, 2, 1, 1], False),
+ ("max", np.arange(5), [np.nan, 1, 2, 3, 4], True),
+ ("max", np.arange(10, 0, -2), [np.nan, 10, 8, 6, 4], True),
+ ("max", [0, 1, 2, np.nan, 4], [np.nan, 1, 2, np.nan, np.nan], False),
+ ("min", np.arange(5), [np.nan, 0, 1, 2, 3], True),
+ ("min", np.arange(10, 0, -2), [np.nan, 8, 6, 4, 2], True),
+ ("min", [0, 1, 2, np.nan, 4], [np.nan, 0, 1, np.nan, np.nan], False),
+ ("sum", np.arange(5), [np.nan, 1, 3, 5, 7], True),
+ ("sum", np.arange(10, 0, -2), [np.nan, 18, 14, 10, 6], True),
+ ("sum", [0, 1, 2, np.nan, 4], [np.nan, 1, 3, np.nan, np.nan], False),
+ ("mean", np.arange(5), [np.nan, 0.5, 1.5, 2.5, 3.5], True),
+ ("mean", np.arange(10, 0, -2), [np.nan, 9, 7, 5, 3], True),
+ ("mean", [0, 1, 2, np.nan, 4], [np.nan, 0.5, 1.5, np.nan, np.nan], False),
+ ("std", np.arange(5), [np.nan] + [np.sqrt(0.5)] * 4, True),
+ ("std", np.arange(10, 0, -2), [np.nan] + [np.sqrt(2)] * 4, True),
+ (
+ "std",
+ [0, 1, 2, np.nan, 4],
+ [np.nan] + [np.sqrt(0.5)] * 2 + [np.nan] * 2,
+ False,
+ ),
+ ("var", np.arange(5), [np.nan, 0.5, 0.5, 0.5, 0.5], True),
+ ("var", np.arange(10, 0, -2), [np.nan, 2, 2, 2, 2], True),
+ ("var", [0, 1, 2, np.nan, 4], [np.nan, 0.5, 0.5, np.nan, np.nan], False),
+ ("median", np.arange(5), [np.nan, 0.5, 1.5, 2.5, 3.5], True),
+ ("median", np.arange(10, 0, -2), [np.nan, 9, 7, 5, 3], True),
+ ("median", [0, 1, 2, np.nan, 4], [np.nan, 0.5, 1.5, np.nan, np.nan], False),
+ ],
+)
+def test_series_dtypes(method, data, expected_data, coerce_int, dtypes):
+ s = Series(data, dtype=get_dtype(dtypes, coerce_int=coerce_int))
+ if dtypes in ("m8[ns]", "M8[ns]") and method != "count":
+ msg = "No numeric types to aggregate"
+ with pytest.raises(DataError, match=msg):
+ getattr(s.rolling(2), method)()
+ else:
+ result = getattr(s.rolling(2), method)()
+ expected = Series(expected_data, dtype="float64")
+ tm.assert_almost_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "method, expected_data",
+ [
+ ("count", {0: Series([1, 2, 2, 2, 2]), 1: Series([1, 2, 2, 2, 2])}),
+ ("max", {0: Series([np.nan, 2, 4, 6, 8]), 1: Series([np.nan, 3, 5, 7, 9])}),
+ ("min", {0: Series([np.nan, 0, 2, 4, 6]), 1: Series([np.nan, 1, 3, 5, 7])}),
+ (
+ "sum",
+ {0: Series([np.nan, 2, 6, 10, 14]), 1: Series([np.nan, 4, 8, 12, 16])},
+ ),
+ ("mean", {0: Series([np.nan, 1, 3, 5, 7]), 1: Series([np.nan, 2, 4, 6, 8])}),
+ (
+ "std",
+ {
+ 0: Series([np.nan] + [np.sqrt(2)] * 4),
+ 1: Series([np.nan] + [np.sqrt(2)] * 4),
},
- "sr2": {
- "count": Series([1, 2, 2, 2, 2], dtype="float64"),
- "max": Series([np.nan, 10, 8, 6, 4], dtype="float64"),
- "min": Series([np.nan, 8, 6, 4, 2], dtype="float64"),
- "sum": Series([np.nan, 18, 14, 10, 6], dtype="float64"),
- "mean": Series([np.nan, 9, 7, 5, 3], dtype="float64"),
- "std": Series([np.nan] + [np.sqrt(2)] * 4, dtype="float64"),
- "var": Series([np.nan, 2, 2, 2, 2], dtype="float64"),
- "median": Series([np.nan, 9, 7, 5, 3], dtype="float64"),
- },
- "sr3": {
- "count": Series([1, 2, 2, 1, 1], dtype="float64"),
- "max": Series([np.nan, 1, 2, np.nan, np.nan], dtype="float64"),
- "min": Series([np.nan, 0, 1, np.nan, np.nan], dtype="float64"),
- "sum": Series([np.nan, 1, 3, np.nan, np.nan], dtype="float64"),
- "mean": Series([np.nan, 0.5, 1.5, np.nan, np.nan], dtype="float64"),
- "std": Series(
- [np.nan] + [np.sqrt(0.5)] * 2 + [np.nan] * 2, dtype="float64"
- ),
- "var": Series([np.nan, 0.5, 0.5, np.nan, np.nan], dtype="float64"),
- "median": Series([np.nan, 0.5, 1.5, np.nan, np.nan], dtype="float64"),
- },
- "df": {
- "count": DataFrame(
- {0: Series([1, 2, 2, 2, 2]), 1: Series([1, 2, 2, 2, 2])},
- dtype="float64",
- ),
- "max": DataFrame(
- {0: Series([np.nan, 2, 4, 6, 8]), 1: Series([np.nan, 3, 5, 7, 9])},
- dtype="float64",
- ),
- "min": DataFrame(
- {0: Series([np.nan, 0, 2, 4, 6]), 1: Series([np.nan, 1, 3, 5, 7])},
- dtype="float64",
- ),
- "sum": DataFrame(
- {
- 0: Series([np.nan, 2, 6, 10, 14]),
- 1: Series([np.nan, 4, 8, 12, 16]),
- },
- dtype="float64",
- ),
- "mean": DataFrame(
- {0: Series([np.nan, 1, 3, 5, 7]), 1: Series([np.nan, 2, 4, 6, 8])},
- dtype="float64",
- ),
- "std": DataFrame(
- {
- 0: Series([np.nan] + [np.sqrt(2)] * 4),
- 1: Series([np.nan] + [np.sqrt(2)] * 4),
- },
- dtype="float64",
- ),
- "var": DataFrame(
- {0: Series([np.nan, 2, 2, 2, 2]), 1: Series([np.nan, 2, 2, 2, 2])},
- dtype="float64",
- ),
- "median": DataFrame(
- {0: Series([np.nan, 1, 3, 5, 7]), 1: Series([np.nan, 2, 4, 6, 8])},
- dtype="float64",
- ),
- },
- }
- return expects
-
- def _create_dtype_data(self, dtype):
- sr1 = Series(np.arange(5), dtype=dtype)
- sr2 = Series(np.arange(10, 0, -2), dtype=dtype)
- sr3 = sr1.copy()
- sr3[3] = np.NaN
- df = DataFrame(np.arange(10).reshape((5, 2)), dtype=dtype)
-
- data = {"sr1": sr1, "sr2": sr2, "sr3": sr3, "df": df}
-
- return data
-
- def _create_data(self):
- self.data = self._create_dtype_data(self.dtype)
- self.expects = self.get_expects()
-
- def test_dtypes(self):
- self._create_data()
- for f_name, d_name in product(self.funcs.keys(), self.data.keys()):
-
- f = self.funcs[f_name]
- d = self.data[d_name]
- exp = self.expects[d_name][f_name]
- self.check_dtypes(f, f_name, d, d_name, exp)
-
- def check_dtypes(self, f, f_name, d, d_name, exp):
- roll = d.rolling(window=self.window)
- result = f(roll)
- tm.assert_almost_equal(result, exp)
-
-
-class TestDtype_object(Dtype):
- dtype = object
-
-
-class Dtype_integer(Dtype):
- pass
-
-
-class TestDtype_int8(Dtype_integer):
- dtype = np.int8
-
-
-class TestDtype_int16(Dtype_integer):
- dtype = np.int16
-
-
-class TestDtype_int32(Dtype_integer):
- dtype = np.int32
-
-
-class TestDtype_int64(Dtype_integer):
- dtype = np.int64
-
-
-class Dtype_uinteger(Dtype):
- pass
-
-
-class TestDtype_uint8(Dtype_uinteger):
- dtype = np.uint8
-
-
-class TestDtype_uint16(Dtype_uinteger):
- dtype = np.uint16
-
-
-class TestDtype_uint32(Dtype_uinteger):
- dtype = np.uint32
-
-
-class TestDtype_uint64(Dtype_uinteger):
- dtype = np.uint64
-
-
-class Dtype_float(Dtype):
- pass
-
-
-class TestDtype_float16(Dtype_float):
- dtype = np.float16
-
-
-class TestDtype_float32(Dtype_float):
- dtype = np.float32
-
-
-class TestDtype_float64(Dtype_float):
- dtype = np.float64
-
-
-class TestDtype_category(Dtype):
- dtype = "category"
- include_df = False
-
- def _create_dtype_data(self, dtype):
- sr1 = Series(range(5), dtype=dtype)
- sr2 = Series(range(10, 0, -2), dtype=dtype)
-
- data = {"sr1": sr1, "sr2": sr2}
-
- return data
-
-
-class DatetimeLike(Dtype):
- def check_dtypes(self, f, f_name, d, d_name, exp):
-
- roll = d.rolling(window=self.window)
- if f_name == "count":
- result = f(roll)
- tm.assert_almost_equal(result, exp)
-
- else:
- msg = "No numeric types to aggregate"
- with pytest.raises(DataError, match=msg):
- f(roll)
-
-
-class TestDtype_timedelta(DatetimeLike):
- dtype = np.dtype("m8[ns]")
-
-
-class TestDtype_datetime(DatetimeLike):
- dtype = np.dtype("M8[ns]")
-
-
-class TestDtype_datetime64UTC(DatetimeLike):
- dtype = "datetime64[ns, UTC]"
-
- def _create_data(self):
- pytest.skip(
- "direct creation of extension dtype "
- "datetime64[ns, UTC] is not supported ATM"
- )
+ ),
+ ("var", {0: Series([np.nan, 2, 2, 2, 2]), 1: Series([np.nan, 2, 2, 2, 2])}),
+ ("median", {0: Series([np.nan, 1, 3, 5, 7]), 1: Series([np.nan, 2, 4, 6, 8])}),
+ ],
+)
+def test_dataframe_dtypes(method, expected_data, dtypes):
+ if dtypes == "category":
+ pytest.skip("Category dataframe testing not implemented.")
+ df = DataFrame(np.arange(10).reshape((5, 2)), dtype=get_dtype(dtypes))
+ if dtypes in ("m8[ns]", "M8[ns]") and method != "count":
+ msg = "No numeric types to aggregate"
+ with pytest.raises(DataError, match=msg):
+ getattr(df.rolling(2), method)()
+ else:
+ result = getattr(df.rolling(2), method)()
+ expected = DataFrame(expected_data, dtype="float64")
+ tm.assert_frame_equal(result, expected)
| - [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/35918 | 2020-08-27T07:19:08Z | 2020-09-05T19:40:07Z | 2020-09-05T19:40:07Z | 2020-09-06T04:05:06Z |
"Backport PR #35838 on branch 1.1.x" | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index d60119f28c053..5c4e770c7b33c 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -24,6 +24,8 @@ Fixed regressions
Bug fixes
~~~~~~~~~
+
+- Bug in :class:`Series` constructor raising a ``TypeError`` when constructing sparse datetime64 dtypes (:issue:`35762`)
- Bug in :meth:`DataFrame.apply` with ``result_type="reduce"`` returning with incorrect index (:issue:`35683`)
+- Bug in :meth:`DateTimeIndex.format` and :meth:`PeriodIndex.format` with ``name=True`` setting the first item to ``"None"`` where it should be ``""`` (:issue:`35712`)
-
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index 47f10f1f65f4a..e8c9f28e50084 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -35,6 +35,7 @@
is_iterator,
is_list_like,
is_object_dtype,
+ is_sparse,
is_timedelta64_ns_dtype,
)
from pandas.core.dtypes.generic import (
@@ -535,9 +536,10 @@ def _try_cast(
if maybe_castable(arr) and not copy and dtype is None:
return arr
- if isinstance(dtype, ExtensionDtype) and dtype.kind != "M":
+ if isinstance(dtype, ExtensionDtype) and (dtype.kind != "M" or is_sparse(dtype)):
# create an extension array from its dtype
- # DatetimeTZ case needs to go through maybe_cast_to_datetime
+ # DatetimeTZ case needs to go through maybe_cast_to_datetime but
+ # SparseDtype does not
array_type = dtype.construct_array_type()._from_sequence
subarr = array_type(arr, dtype=dtype, copy=copy)
return subarr
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 2697f42eb05a4..e6b4cb598989b 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -50,6 +50,7 @@
is_numeric_dtype,
is_object_dtype,
is_scalar,
+ is_sparse,
is_string_dtype,
is_timedelta64_dtype,
is_timedelta64_ns_dtype,
@@ -1323,7 +1324,9 @@ def maybe_cast_to_datetime(value, dtype, errors: str = "raise"):
f"Please pass in '{dtype.name}[ns]' instead."
)
- if is_datetime64 and not is_dtype_equal(dtype, DT64NS_DTYPE):
+ if is_datetime64 and not is_dtype_equal(
+ getattr(dtype, "subtype", dtype), DT64NS_DTYPE
+ ):
# pandas supports dtype whose granularity is less than [ns]
# e.g., [ps], [fs], [as]
@@ -1355,7 +1358,7 @@ def maybe_cast_to_datetime(value, dtype, errors: str = "raise"):
if is_scalar(value):
if value == iNaT or isna(value):
value = iNaT
- else:
+ elif not is_sparse(value):
value = np.array(value, copy=False)
# have a scalar array-like (e.g. NaT)
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 1dd410ad02ee0..bcf7039ec9039 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -1449,3 +1449,18 @@ def test_constructor_datetimelike_scalar_to_string_dtype(self):
result = Series("M", index=[1, 2, 3], dtype="string")
expected = pd.Series(["M", "M", "M"], index=[1, 2, 3], dtype="string")
tm.assert_series_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "values",
+ [
+ [np.datetime64("2012-01-01"), np.datetime64("2013-01-01")],
+ ["2012-01-01", "2013-01-01"],
+ ],
+ )
+ def test_constructor_sparse_datetime64(self, values):
+ # https://github.com/pandas-dev/pandas/issues/35762
+ dtype = pd.SparseDtype("datetime64[ns]")
+ result = pd.Series(values, dtype=dtype)
+ arr = pd.arrays.SparseArray(values, dtype=dtype)
+ expected = pd.Series(arr)
+ tm.assert_series_equal(result, expected)
| xref #35838 | https://api.github.com/repos/pandas-dev/pandas/pulls/35915 | 2020-08-27T02:36:25Z | 2020-08-27T09:22:20Z | 2020-08-27T09:22:20Z | 2020-08-27T17:11:02Z |
Make MultiIndex.get_loc raise for unhashable type | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index d1a66256454ca..0fa5dd30f8cd9 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -17,6 +17,7 @@ Fixed regressions
- Regression in :meth:`DatetimeIndex.intersection` incorrectly raising ``AssertionError`` when intersecting against a list (:issue:`35876`)
- Fix regression in updating a column inplace (e.g. using ``df['col'].fillna(.., inplace=True)``) (:issue:`35731`)
- Performance regression for :meth:`RangeIndex.format` (:issue:`35712`)
+- Regression where :meth:`MultiIndex.get_loc` would return a slice spanning the full index when passed an empty list (:issue:`35878`)
- Fix regression in invalid cache after an indexing operation; this can manifest when setting which does not update the data (:issue:`35521`)
- Regression in :meth:`DataFrame.replace` where a ``TypeError`` would be raised when attempting to replace elements of type :class:`Interval` (:issue:`35931`)
- Fix regression in pickle roundtrip of the ``closed`` attribute of :class:`IntervalIndex` (:issue:`35658`)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index f66b009e6d505..080ece8547479 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2725,6 +2725,8 @@ def get_loc(self, key, method=None):
"currently supported for MultiIndex"
)
+ hash(key)
+
def _maybe_to_slice(loc):
"""convert integer indexer to boolean mask or slice if possible"""
if not isinstance(loc, np.ndarray) or loc.dtype != "int64":
@@ -2739,8 +2741,7 @@ def _maybe_to_slice(loc):
mask[loc] = True
return mask
- if not isinstance(key, (tuple, list)):
- # not including list here breaks some indexing, xref #30892
+ if not isinstance(key, tuple):
loc = self._get_level_indexer(key, level=0)
return _maybe_to_slice(loc)
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index d27487dfb8aaa..e4549dfb3e68d 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -2111,7 +2111,7 @@ def test_type_error_multiindex(self):
)
dg = df.pivot_table(index="i", columns="c", values=["x", "y"])
- with pytest.raises(TypeError, match="is an invalid key"):
+ with pytest.raises(TypeError, match="unhashable type"):
dg[:, 0]
index = Index(range(2), name="i")
diff --git a/pandas/tests/indexing/multiindex/test_multiindex.py b/pandas/tests/indexing/multiindex/test_multiindex.py
index 5e5fcd3db88d8..4565d79c632de 100644
--- a/pandas/tests/indexing/multiindex/test_multiindex.py
+++ b/pandas/tests/indexing/multiindex/test_multiindex.py
@@ -1,4 +1,5 @@
import numpy as np
+import pytest
import pandas._libs.index as _index
from pandas.errors import PerformanceWarning
@@ -83,3 +84,10 @@ def test_nested_tuples_duplicates(self):
df3 = df.copy(deep=True)
df3.loc[[(dti[0], "a")], "c2"] = 1.0
tm.assert_frame_equal(df3, expected)
+
+ def test_multiindex_get_loc_list_raises(self):
+ # https://github.com/pandas-dev/pandas/issues/35878
+ idx = pd.MultiIndex.from_tuples([("a", 1), ("b", 2)])
+ msg = "unhashable type"
+ with pytest.raises(TypeError, match=msg):
+ idx.get_loc([])
diff --git a/pandas/tests/series/indexing/test_setitem.py b/pandas/tests/series/indexing/test_setitem.py
index 3463de25ad91b..593d1c78a19e2 100644
--- a/pandas/tests/series/indexing/test_setitem.py
+++ b/pandas/tests/series/indexing/test_setitem.py
@@ -1,6 +1,7 @@
import numpy as np
-from pandas import NaT, Series, date_range
+from pandas import MultiIndex, NaT, Series, date_range
+import pandas.testing as tm
class TestSetitemDT64Values:
@@ -17,3 +18,11 @@ def test_setitem_none_nan(self):
series[5:7] = np.nan
assert series[6] is NaT
+
+ def test_setitem_multiindex_empty_slice(self):
+ # https://github.com/pandas-dev/pandas/issues/35878
+ idx = MultiIndex.from_tuples([("a", 1), ("b", 2)])
+ result = Series([1, 2], index=idx)
+ expected = result.copy()
+ result.loc[[]] = 0
+ tm.assert_series_equal(result, expected)
| - [x] closes #35878
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35914 | 2020-08-27T02:25:38Z | 2020-09-05T21:18:52Z | 2020-09-05T21:18:51Z | 2020-09-06T13:01:22Z |
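The diff for #35914 above adds a bare `hash(key)` call near the top of `MultiIndex.get_loc` so unhashable keys fail fast. The guard can be sketched independently of pandas (the function body below is illustrative, not the real lookup):

```python
def get_loc(key):
    # Guard in the spirit of the PR above: an unhashable key (e.g. an
    # empty list) raises TypeError immediately, instead of falling
    # through to a level lookup that could return a slice spanning
    # the whole index.
    hash(key)
    # ... the real positional lookup would continue here ...
    return "found"


print(get_loc(("a", 1)))  # tuples are hashable -> lookup proceeds
try:
    get_loc([])  # lists are not hashable
except TypeError as exc:
    print(type(exc).__name__)
```

Calling `hash()` for its side effect is a common, cheap way to validate key types in Python, since any object usable as a dict key must be hashable.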
TYP: annotate tseries.holiday | diff --git a/pandas/tseries/holiday.py b/pandas/tseries/holiday.py
index 8ab37f787bd10..d8a3040919e7b 100644
--- a/pandas/tseries/holiday.py
+++ b/pandas/tseries/holiday.py
@@ -12,7 +12,7 @@
from pandas.tseries.offsets import Day, Easter
-def next_monday(dt):
+def next_monday(dt: datetime) -> datetime:
"""
If holiday falls on Saturday, use following Monday instead;
if holiday falls on Sunday, use Monday instead
@@ -24,7 +24,7 @@ def next_monday(dt):
return dt
-def next_monday_or_tuesday(dt):
+def next_monday_or_tuesday(dt: datetime) -> datetime:
"""
For second holiday of two adjacent ones!
If holiday falls on Saturday, use following Monday instead;
@@ -39,7 +39,7 @@ def next_monday_or_tuesday(dt):
return dt
-def previous_friday(dt):
+def previous_friday(dt: datetime) -> datetime:
"""
If holiday falls on Saturday or Sunday, use previous Friday instead.
"""
@@ -50,7 +50,7 @@ def previous_friday(dt):
return dt
-def sunday_to_monday(dt):
+def sunday_to_monday(dt: datetime) -> datetime:
"""
If holiday falls on Sunday, use day thereafter (Monday) instead.
"""
@@ -59,7 +59,7 @@ def sunday_to_monday(dt):
return dt
-def weekend_to_monday(dt):
+def weekend_to_monday(dt: datetime) -> datetime:
"""
If holiday falls on Sunday or Saturday,
use day thereafter (Monday) instead.
@@ -72,7 +72,7 @@ def weekend_to_monday(dt):
return dt
-def nearest_workday(dt):
+def nearest_workday(dt: datetime) -> datetime:
"""
If holiday falls on Saturday, use day before (Friday) instead;
if holiday falls on Sunday, use day thereafter (Monday) instead.
@@ -84,7 +84,7 @@ def nearest_workday(dt):
return dt
-def next_workday(dt):
+def next_workday(dt: datetime) -> datetime:
"""
returns next weekday used for observances
"""
@@ -95,7 +95,7 @@ def next_workday(dt):
return dt
-def previous_workday(dt):
+def previous_workday(dt: datetime) -> datetime:
"""
returns previous weekday used for observances
"""
@@ -106,14 +106,14 @@ def previous_workday(dt):
return dt
-def before_nearest_workday(dt):
+def before_nearest_workday(dt: datetime) -> datetime:
"""
returns previous workday after nearest workday
"""
return previous_workday(nearest_workday(dt))
-def after_nearest_workday(dt):
+def after_nearest_workday(dt: datetime) -> datetime:
"""
returns next workday after nearest workday
needed for Boxing day or multiple holidays in a series
@@ -428,9 +428,11 @@ def holidays(self, start=None, end=None, return_name=False):
# If we don't have a cache or the dates are outside the prior cache, we
# get them again
if self._cache is None or start < self._cache[0] or end > self._cache[1]:
- holidays = [rule.dates(start, end, return_name=True) for rule in self.rules]
- if holidays:
- holidays = concat(holidays)
+ pre_holidays = [
+ rule.dates(start, end, return_name=True) for rule in self.rules
+ ]
+ if pre_holidays:
+ holidays = concat(pre_holidays)
else:
holidays = Series(index=DatetimeIndex([]), dtype=object)
diff --git a/setup.cfg b/setup.cfg
index e4c0b3dcf37ef..aa1535a171f0a 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -276,6 +276,3 @@ check_untyped_defs=False
[mypy-pandas.plotting._matplotlib.misc]
check_untyped_defs=False
-
-[mypy-pandas.tseries.holiday]
-check_untyped_defs=False
| cc @simonjayhawkins | https://api.github.com/repos/pandas-dev/pandas/pulls/35913 | 2020-08-27T02:08:26Z | 2020-08-27T18:25:03Z | 2020-08-27T18:25:03Z | 2020-08-27T18:50:01Z |
CI: Attempt to unpin pytest-xdist | diff --git a/ci/deps/azure-windows-37.yaml b/ci/deps/azure-windows-37.yaml
index 4894129915722..1d15ca41c0f8e 100644
--- a/ci/deps/azure-windows-37.yaml
+++ b/ci/deps/azure-windows-37.yaml
@@ -8,7 +8,7 @@ dependencies:
# tools
- cython>=0.29.16
- pytest>=5.0.1
- - pytest-xdist>=1.21,<2.0.0 # GH 35737
+ - pytest-xdist>=1.21
- hypothesis>=3.58.0
- pytest-azurepipelines
diff --git a/ci/deps/azure-windows-38.yaml b/ci/deps/azure-windows-38.yaml
index 2853e12b28e35..23bede5eb26f1 100644
--- a/ci/deps/azure-windows-38.yaml
+++ b/ci/deps/azure-windows-38.yaml
@@ -8,7 +8,7 @@ dependencies:
# tools
- cython>=0.29.16
- pytest>=5.0.1
- - pytest-xdist>=1.21,<2.0.0 # GH 35737
+ - pytest-xdist>=1.21
- hypothesis>=3.58.0
- pytest-azurepipelines
| - [x] closes #35756
2.1.0 was released yesterday: https://pypi.org/project/pytest-xdist/#history | https://api.github.com/repos/pandas-dev/pandas/pulls/35910 | 2020-08-26T19:48:33Z | 2020-08-27T16:06:26Z | 2020-08-27T16:06:26Z | 2020-08-27T16:06:38Z |
Truncate columns list to match tr_frame for correct dict formatters lookup | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 6dd011c588702..1fd2ebb69cad2 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -750,6 +750,7 @@ I/O
- :meth:`DataFrame.to_csv` was re-opening file-like handles that also implement ``os.PathLike`` (:issue:`38125`)
- Bug in the conversion of a sliced ``pyarrow.Table`` with missing values to a DataFrame (:issue:`38525`)
- Bug in :func:`read_sql_table` raising a ``sqlalchemy.exc.OperationalError`` when column names contained a percentage sign (:issue:`37517`)
+- :class:`DataFrameFormatter` was using the full list of columns when computing formatters for truncated displays (:issue:`35907`)
Period
^^^^^^
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 3514fbc8c6293..f1d1b82924b8e 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -840,7 +840,7 @@ def _get_formatter(self, i: Union[str, int]) -> Optional[Callable]:
return None
else:
if is_integer(i) and i not in self.columns:
- i = self.columns[i]
+ i = self.tr_frame.columns[i]
return self.formatters.get(i, None)
def _get_formatted_column_labels(self, frame: DataFrame) -> List[List[str]]:
diff --git a/pandas/tests/io/formats/test_to_string.py b/pandas/tests/io/formats/test_to_string.py
index 551734f343dfa..c7dcd18da2946 100644
--- a/pandas/tests/io/formats/test_to_string.py
+++ b/pandas/tests/io/formats/test_to_string.py
@@ -167,6 +167,35 @@ def test_to_string_with_formatters():
assert result == result2
+@pytest.mark.parametrize(
+ "formatters",
+ [
+ {
+ "int": lambda x: f"[1] {x}",
+ "float": lambda x: f"[2] {x}",
+ "object": lambda x: f"[3] {x}",
+ },
+ [lambda x: f"[1] {x}", lambda x: f"[2] {x}", lambda x: f"[3] {x}"],
+ ],
+)
+def test_to_string_with_truncated_formatters(formatters):
+ df = DataFrame(
+ {
+ "int": [1, 2, 3],
+ "float": [1.0, 2.0, 3.0],
+ "object": [(1, 2), True, False],
+ },
+ columns=["int", "float", "object"],
+ )
+ result = df.to_string(formatters=formatters, max_cols=2)
+ assert result == (
+ " int ... object\n"
+ "0 [1] 1 ... [3] (1, 2)\n"
+ "1 [1] 2 ... [3] True\n"
+ "2 [1] 3 ... [3] False"
+ )
+
+
def test_to_string_with_datetime64_monthformatter():
months = [datetime(2016, 1, 1), datetime(2016, 2, 2)]
x = DataFrame({"months": months})
| When using a dictionary of formatters while a truncated DataFrame is being displayed, the column list used for looking up the formatter in the dict is incorrect. The columns attribute of the DataFrameFormatter needs to be truncated as well for the lookup to be correct.
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35907 | 2020-08-26T15:03:49Z | 2021-11-09T00:24:55Z | null | 2021-11-09T00:24:56Z |
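The lookup mismatch described in the PR body above can be sketched without pandas. This is a minimal illustration of the full-vs-truncated column idea from the diff; the helper name and data are hypothetical:

```python
# With max_cols=2 a three-column frame is rendered as "int ... object",
# so positional index 1 in the rendered (truncated) frame means "object".
full_columns = ["int", "float", "object"]
tr_columns = ["int", "object"]  # truncated display keeps the edges

formatters = {
    "int": lambda x: f"[1] {x}",
    "float": lambda x: f"[2] {x}",
    "object": lambda x: f"[3] {x}",
}


def get_formatter(i, columns):
    # integer position -> column name -> formatter, mirroring
    # DataFrameFormatter._get_formatter
    name = columns[i] if isinstance(i, int) else i
    return formatters.get(name)


# Buggy lookup: position 1 of the *full* list resolves to "float",
# so the wrong formatter is applied to the rendered "object" column.
print(get_formatter(1, full_columns)(True))   # [2] True
# Fixed lookup: position 1 of the *truncated* list resolves to "object".
print(get_formatter(1, tr_columns)(True))     # [3] True
```

This mirrors the one-line fix in the diff, which swaps `self.columns[i]` for `self.tr_frame.columns[i]`.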
TST/API: test column indexing copy/view semantics | diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index 4fae01ec710fd..476f7900dc745 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -706,6 +706,15 @@ def test_iloc_setitem_categorical_updates_inplace(self):
expected = pd.Categorical(["C", "B", "A"])
tm.assert_categorical_equal(cat, expected)
+ # __setitem__ on the other hand does not work in-place
+ cat = pd.Categorical(["A", "B", "C"])
+ df = pd.DataFrame({1: cat, 2: [1, 2, 3]})
+
+ df[1] = cat[::-1]
+
+ expected = pd.Categorical(["A", "B", "C"])
+ tm.assert_categorical_equal(cat, expected)
+
def test_iloc_with_boolean_operation(self):
# GH 20627
result = DataFrame([[0, 1], [2, 3], [4, 5], [6, np.nan]])
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 5b7f013d5de31..d7b9afb552c9c 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -1110,3 +1110,77 @@ def test_setitem_categorical():
{"h": pd.Categorical(["m", "n"]).reorder_categories(["n", "m"])}
)
tm.assert_frame_equal(df, expected)
+
+
+def test_setitem_EA_column_update():
+ # https://github.com/pandas-dev/pandas/issues/33457
+
+ df = pd.DataFrame(
+ {
+ "int": [1, 2, 3],
+ "int2": [3, 4, 5],
+ "float": [0.1, 0.2, 0.3],
+ "EA": pd.array([1, 2, None], dtype="Int64"),
+ }
+ )
+ original_arr = df.EA.array
+
+ # overwrite column with new array
+ df["EA"] = pd.array([1, 2, 3], dtype="Int64")
+ # ensure original array was not modified
+ assert original_arr is not df.EA.array
+ expected = pd.array([1, 2, None], dtype="Int64")
+ tm.assert_extension_array_equal(original_arr, expected)
+
+
+def test_getitem_EA_no_copy():
+ # ensure we don't copy the EA when taking a subset
+
+ df = pd.DataFrame(
+ {
+ "int": [1, 2, 3],
+ "int2": [3, 4, 5],
+ "float": [0.1, 0.2, 0.3],
+ "EA": pd.array([1, 2, None], dtype="Int64"),
+ }
+ )
+ original_arr = df.EA.array
+ subset = df[["int", "EA"]]
+ assert subset.EA.array is original_arr
+ # check that they view the same data by modifying
+ df["EA"].array[0] = 10
+ expected = pd.array([10, 2, None], dtype="Int64")
+ tm.assert_extension_array_equal(subset["EA"].array, expected)
+
+ # TODO this way it doesn't modify subset - is this expected?
+ # df.iloc[0, 3] = 10
+ # expected = pd.array([10, 2, None], dtype="Int64")
+ # tm.assert_extension_array_equal(subset['EA'].array, expected)
+
+
+def test_getitem_column_view():
+ # test that getting a single column is a view on the data
+
+ df = pd.DataFrame(
+ {
+ "int": [1, 2, 3],
+ "int2": [3, 4, 5],
+ "float": [0.1, 0.2, 0.3],
+ "EA": pd.array([1, 2, None], dtype="Int64"),
+ }
+ )
+
+ # getitem with ExtensionArray
+ original_arr = df._mgr.blocks[2].values
+ col = df["EA"]
+ assert col.array is original_arr
+ col[0] = 10
+ expected = pd.array([10, 2, None], dtype="Int64")
+ tm.assert_extension_array_equal(df["EA"].array, expected)
+
+ # getitem from consolidated block
+ col = df["int"]
+ with pd.option_context("chained_assignment", "warn"):
+ with tm.assert_produces_warning(pd.core.common.SettingWithCopyWarning):
+ col[0] = 10
+ tm.assert_equal(df["int"], pd.Series([10, 2, 3], name="int"))
| This adds some tests for current behaviour related to the discussions in #33457 and #35417 (and also adds the tests from https://github.com/pandas-dev/pandas/pull/35266, which never got merged). I think it is good to have some more explicit tests of current behaviour, while we are considering changing things.
Note: some semantics I am testing are certainly debatable, but I was experimenting a bit with what is happening on current master. | https://api.github.com/repos/pandas-dev/pandas/pulls/35906 | 2020-08-26T12:10:57Z | 2021-06-16T13:52:32Z | null | 2021-06-16T13:52:32Z |
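For readers unfamiliar with the copy/view distinction that the tests in #35906 probe, the two behaviours can be shown with plain Python lists. This is a loose analogy only, not pandas code:

```python
# View-like: two names bound to the same object, so mutation through
# one is visible through the other (analogous to df["col"] returning
# a Series that shares the underlying block's data).
data = [1, 2, 3]
alias = data
alias[0] = 10
print(data)  # [10, 2, 3] -- the change is visible through `data`

# Copy-like: slicing creates a new object, so mutation does not
# propagate back (analogous to operations that copy column data,
# such as overwriting a column with a new array).
snapshot = data[:]
snapshot[0] = 99
print(data)  # still [10, 2, 3]
```

The tests above check, case by case, which of these two behaviours each pandas indexing operation currently follows.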
Backport PR #35777: BUG: DataFrame.apply with result_type=reduce incorrect index | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index 97bd4dccdcd84..748937deb5a9b 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -24,7 +24,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
-
+- Bug in :meth:`DataFrame.apply` with ``result_type="reduce"`` returning with incorrect index (:issue:`35683`)
-
-
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 6d44cf917a07a..99a9e1377563c 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -340,7 +340,10 @@ def wrap_results_for_axis(
if self.result_type == "reduce":
# e.g. test_apply_dict GH#8735
- return self.obj._constructor_sliced(results)
+ res = self.obj._constructor_sliced(results)
+ res.index = res_index
+ return res
+
elif self.result_type is None and all(
isinstance(x, dict) for x in results.values()
):
diff --git a/pandas/tests/frame/apply/test_frame_apply.py b/pandas/tests/frame/apply/test_frame_apply.py
index 538978358c8e7..5a1e448beb40f 100644
--- a/pandas/tests/frame/apply/test_frame_apply.py
+++ b/pandas/tests/frame/apply/test_frame_apply.py
@@ -1541,3 +1541,12 @@ def func(row):
tm.assert_frame_equal(result, expected)
tm.assert_frame_equal(df, result)
+
+
+def test_apply_empty_list_reduce():
+ # GH#35683 get columns correct
+ df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]], columns=["a", "b"])
+
+ result = df.apply(lambda x: [], result_type="reduce")
+ expected = pd.Series({"a": [], "b": []}, dtype=object)
+ tm.assert_series_equal(result, expected)
| #35777 | https://api.github.com/repos/pandas-dev/pandas/pulls/35905 | 2020-08-26T11:39:27Z | 2020-08-26T13:32:27Z | 2020-08-26T13:32:27Z | 2020-08-26T13:32:36Z |
Backport PR #35712: PERF: RangeIndex.format performance | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 3cd920158f774..0f0f009307c75 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -540,7 +540,7 @@ with :attr:`numpy.nan` in the case of an empty :class:`DataFrame` (:issue:`26397
.. ipython:: python
- df.describe()
+ df.describe()
``__str__`` methods now call ``__repr__`` rather than vice versa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index 748937deb5a9b..d60119f28c053 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -15,7 +15,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- Regression in :meth:`DatetimeIndex.intersection` incorrectly raising ``AssertionError`` when intersecting against a list (:issue:`35876`)
--
+- Performance regression for :meth:`RangeIndex.format` (:issue:`35712`)
-
.. ---------------------------------------------------------------------------
@@ -25,7 +25,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
- Bug in :meth:`DataFrame.apply` with ``result_type="reduce"`` returning with incorrect index (:issue:`35683`)
--
+- Bug in :meth:`DateTimeIndex.format` and :meth:`PeriodIndex.format` with ``name=True`` setting the first item to ``"None"`` where it should be ``""`` (:issue:`35712`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 1be381e38b157..32bbdf425acab 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -924,7 +924,9 @@ def format(
return self._format_with_header(header, na_rep=na_rep)
- def _format_with_header(self, header, na_rep="NaN") -> List[str_t]:
+ def _format_with_header(
+ self, header: List[str_t], na_rep: str_t = "NaN"
+ ) -> List[str_t]:
from pandas.io.formats.format import format_array
values = self._values
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 74b235655e345..8af6ee555306a 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -347,7 +347,7 @@ def _format_attrs(self):
attrs.append(("length", len(self)))
return attrs
- def _format_with_header(self, header, na_rep="NaN") -> List[str]:
+ def _format_with_header(self, header: List[str], na_rep: str = "NaN") -> List[str]:
from pandas.io.formats.printing import pprint_thing
result = [
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index ab0b3a394446d..9b57a25f1b0e9 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -350,15 +350,20 @@ def format(
"""
header = []
if name:
- fmt_name = ibase.pprint_thing(self.name, escape_chars=("\t", "\r", "\n"))
- header.append(fmt_name)
+ header.append(
+ ibase.pprint_thing(self.name, escape_chars=("\t", "\r", "\n"))
+ if self.name is not None
+ else ""
+ )
if formatter is not None:
return header + list(self.map(formatter))
return self._format_with_header(header, na_rep=na_rep, date_format=date_format)
- def _format_with_header(self, header, na_rep="NaT", date_format=None) -> List[str]:
+ def _format_with_header(
+ self, header: List[str], na_rep: str = "NaT", date_format: Optional[str] = None
+ ) -> List[str]:
return header + list(
self._format_native_types(na_rep=na_rep, date_format=date_format)
)
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 9548ebbd9c3b2..446e57d58a779 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -948,7 +948,7 @@ def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
# Rendering Methods
# __repr__ associated methods are based on MultiIndex
- def _format_with_header(self, header, na_rep="NaN") -> List[str]:
+ def _format_with_header(self, header: List[str], na_rep: str = "NaN") -> List[str]:
return header + list(self._format_native_types(na_rep=na_rep))
def _format_native_types(self, na_rep="NaN", quoting=None, **kwargs):
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index eee610681087d..dcc0bdd86a98b 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -1,7 +1,7 @@
from datetime import timedelta
import operator
from sys import getsizeof
-from typing import Any, Optional
+from typing import Any, List, Optional
import warnings
import numpy as np
@@ -195,6 +195,15 @@ def _format_data(self, name=None):
# we are formatting thru the attributes
return None
+ def _format_with_header(self, header: List[str], na_rep: str = "NaN") -> List[str]:
+ if not len(self._range):
+ return header
+ first_val_str = str(self._range[0])
+ last_val_str = str(self._range[-1])
+ max_length = max(len(first_val_str), len(last_val_str))
+
+ return header + [f"{x:<{max_length}}" for x in self._range]
+
# --------------------------------------------------------------------
_deprecation_message = (
"RangeIndex.{} is deprecated and will be "
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 3b41c4bfacf73..5f82203d92dc3 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -1,5 +1,5 @@
import gc
-from typing import Optional, Type
+from typing import Type
import numpy as np
import pytest
@@ -33,7 +33,7 @@
class Base:
""" base class for index sub-class tests """
- _holder: Optional[Type[Index]] = None
+ _holder: Type[Index]
_compat_props = ["shape", "ndim", "size", "nbytes"]
def create_index(self) -> Index:
@@ -648,6 +648,12 @@ def test_format(self):
expected = [str(x) for x in idx]
assert idx.format() == expected
+ def test_format_empty(self):
+ # GH35712
+ empty_idx = self._holder([])
+ assert empty_idx.format() == []
+ assert empty_idx.format(name=True) == [""]
+
def test_hasnans_isnans(self, index):
# GH 11343, added tests for hasnans / isnans
if isinstance(index, MultiIndex):
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 15a88ab3819ce..085d41aaa5b76 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -536,6 +536,12 @@ def test_contains_raise_error_if_period_index_is_in_multi_index(self, msg, key):
with pytest.raises(KeyError, match=msg):
df.loc[key]
+ def test_format_empty(self):
+ # GH35712
+ empty_idx = self._holder([], freq="A")
+ assert empty_idx.format() == []
+ assert empty_idx.format(name=True) == [""]
+
def test_maybe_convert_timedelta():
pi = PeriodIndex(["2000", "2001"], freq="D")
diff --git a/pandas/tests/indexes/ranges/test_range.py b/pandas/tests/indexes/ranges/test_range.py
index 5b6f9cb358b7d..3bd3f6cc09db7 100644
--- a/pandas/tests/indexes/ranges/test_range.py
+++ b/pandas/tests/indexes/ranges/test_range.py
@@ -166,8 +166,14 @@ def test_cached_data(self):
idx.any()
assert idx._cached_data is None
+ idx.format()
+ assert idx._cache == {}
+
df = pd.DataFrame({"a": range(10)}, index=idx)
+ str(df)
+ assert idx._cache == {}
+
df.loc[50]
assert idx._cached_data is None
@@ -506,3 +512,9 @@ def test_engineless_lookup(self):
idx.get_loc("a")
assert "_engine" not in idx._cache
+
+ def test_format_empty(self):
+ # GH35712
+ empty_idx = self._holder(0)
+ assert empty_idx.format() == []
+ assert empty_idx.format(name=True) == [""]
| #35712 | https://api.github.com/repos/pandas-dev/pandas/pulls/35904 | 2020-08-26T11:32:06Z | 2020-08-26T14:40:22Z | 2020-08-26T14:40:22Z | 2020-08-26T14:40:27Z |
REF: use BlockManager.apply for cython_agg_blocks, apply_blockwise | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index a92e3af0764a7..537feace59fcb 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1035,8 +1035,6 @@ def _cython_agg_blocks(
if numeric_only:
data = data.get_numeric_data(copy=False)
- agg_blocks: List["Block"] = []
-
no_result = object()
def cast_agg_result(result, values: ArrayLike, how: str) -> ArrayLike:
@@ -1118,23 +1116,14 @@ def blk_func(bvalues: ArrayLike) -> ArrayLike:
res_values = cast_agg_result(result, bvalues, how)
return res_values
- for i, block in enumerate(data.blocks):
- try:
- nbs = block.apply(blk_func)
- except (NotImplementedError, TypeError):
- # TypeError -> we may have an exception in trying to aggregate
- # continue and exclude the block
- # NotImplementedError -> "ohlc" with wrong dtype
- pass
- else:
- agg_blocks.extend(nbs)
+ # TypeError -> we may have an exception in trying to aggregate
+ # continue and exclude the block
+ # NotImplementedError -> "ohlc" with wrong dtype
+ new_mgr = data.apply(blk_func, ignore_failures=True)
- if not agg_blocks:
+ if not len(new_mgr):
raise DataError("No numeric types to aggregate")
- # reset the locs in the blocks to correspond to our
- # current ordering
- new_mgr = data._combine(agg_blocks)
return new_mgr
def _aggregate_frame(self, func, *args, **kwargs) -> DataFrame:
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 389252e7ef0f2..2e3098d94afcb 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -350,7 +350,13 @@ def operate_blockwise(self, other: "BlockManager", array_op) -> "BlockManager":
"""
return operate_blockwise(self, other, array_op)
- def apply(self: T, f, align_keys=None, **kwargs) -> T:
+ def apply(
+ self: T,
+ f,
+ align_keys: Optional[List[str]] = None,
+ ignore_failures: bool = False,
+ **kwargs,
+ ) -> T:
"""
Iterate over the blocks, collect and create a new BlockManager.
@@ -358,6 +364,10 @@ def apply(self: T, f, align_keys=None, **kwargs) -> T:
----------
f : str or callable
Name of the Block method to apply.
+ align_keys: List[str] or None, default None
+ ignore_failures: bool, default False
+ **kwargs
+ Keywords to pass to `f`
Returns
-------
@@ -387,12 +397,20 @@ def apply(self: T, f, align_keys=None, **kwargs) -> T:
# otherwise we have an ndarray
kwargs[k] = obj[b.mgr_locs.indexer]
- if callable(f):
- applied = b.apply(f, **kwargs)
- else:
- applied = getattr(b, f)(**kwargs)
+ try:
+ if callable(f):
+ applied = b.apply(f, **kwargs)
+ else:
+ applied = getattr(b, f)(**kwargs)
+ except (TypeError, NotImplementedError):
+ if not ignore_failures:
+ raise
+ continue
result_blocks = _extend_blocks(applied, result_blocks)
+ if ignore_failures:
+ return self._combine(result_blocks)
+
if len(result_blocks) == 0:
return self.make_empty(self.axes)
@@ -704,7 +722,7 @@ def get_numeric_data(self, copy: bool = False) -> "BlockManager":
self._consolidate_inplace()
return self._combine([b for b in self.blocks if b.is_numeric], copy)
- def _combine(self, blocks: List[Block], copy: bool = True) -> "BlockManager":
+ def _combine(self: T, blocks: List[Block], copy: bool = True) -> T:
""" return a new manager with the blocks """
if len(blocks) == 0:
return self.make_empty()
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index a3f60c0bc5098..558c0eeb0ea65 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -489,8 +489,6 @@ def _apply_blockwise(
if self._selected_obj.ndim == 1:
return self._apply_series(homogeneous_func)
- # This isn't quite blockwise, since `blocks` is actually a collection
- # of homogenenous DataFrames.
_, obj = self._create_blocks(self._selected_obj)
mgr = obj._mgr
@@ -500,25 +498,14 @@ def hfunc(bvalues: ArrayLike) -> ArrayLike:
res_values = homogeneous_func(values)
return getattr(res_values, "T", res_values)
- skipped: List[int] = []
- res_blocks: List["Block"] = []
- for i, blk in enumerate(mgr.blocks):
- try:
- nbs = blk.apply(hfunc)
-
- except (TypeError, NotImplementedError):
- skipped.append(i)
- continue
-
- res_blocks.extend(nbs)
+ new_mgr = mgr.apply(hfunc, ignore_failures=True)
+ out = obj._constructor(new_mgr)
- if not len(res_blocks) and skipped:
+ if out.shape[1] == 0 and obj.shape[1] > 0:
raise DataError("No numeric types to aggregate")
- elif not len(res_blocks):
+ elif out.shape[1] == 0:
return obj.astype("float64")
- new_mgr = mgr._combine(res_blocks)
- out = obj._constructor(new_mgr)
self._insert_on_column(out, obj)
return out
| - [x] closes #34714
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35900 | 2020-08-26T00:30:11Z | 2020-09-03T02:56:34Z | 2020-09-03T02:56:33Z | 2020-09-03T02:57:42Z |
REF: handle axis=None case inside DataFrame.any/all to simplify _reduce | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 606bd4cc3b52d..31611f441ceea 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -8598,14 +8598,11 @@ def _reduce(
cols = self.columns[~dtype_is_dt]
self = self[cols]
- if axis is None and filter_type == "bool":
- labels = None
- constructor = None
- else:
- # TODO: Make other agg func handle axis=None properly
- axis = self._get_axis_number(axis)
- labels = self._get_agg_axis(axis)
- constructor = self._constructor
+ # TODO: Make other agg func handle axis=None properly
+ axis = self._get_axis_number(axis)
+ labels = self._get_agg_axis(axis)
+ constructor = self._constructor
+ assert axis in [0, 1]
def func(values):
if is_extension_array_dtype(values.dtype):
@@ -8613,7 +8610,7 @@ def func(values):
else:
return op(values, axis=axis, skipna=skipna, **kwds)
- def _get_data(axis_matters):
+ def _get_data(axis_matters: bool) -> "DataFrame":
if filter_type is None:
data = self._get_numeric_data()
elif filter_type == "bool":
@@ -8630,7 +8627,7 @@ def _get_data(axis_matters):
raise NotImplementedError(msg)
return data
- if numeric_only is not None and axis in [0, 1]:
+ if numeric_only is not None:
df = self
if numeric_only is True:
df = _get_data(axis_matters=True)
@@ -8656,6 +8653,8 @@ def blk_func(values):
out[:] = coerce_to_dtypes(out.values, df.dtypes)
return out
+ assert numeric_only is None
+
if not self._is_homogeneous_type or self._mgr.any_extension_types:
# try to avoid self.values call
@@ -8683,40 +8682,24 @@ def blk_func(values):
result = result.iloc[0].rename(None)
return result
- if numeric_only is None:
- data = self
- values = data.values
-
- try:
- result = func(values)
-
- except TypeError:
- # e.g. in nanops trying to convert strs to float
+ data = self
+ values = data.values
- # TODO: why doesnt axis matter here?
- data = _get_data(axis_matters=False)
- labels = data._get_agg_axis(axis)
+ try:
+ result = func(values)
- values = data.values
- with np.errstate(all="ignore"):
- result = func(values)
+ except TypeError:
+ # e.g. in nanops trying to convert strs to float
- else:
- if numeric_only:
- data = _get_data(axis_matters=True)
- labels = data._get_agg_axis(axis)
+ # TODO: why doesnt axis matter here?
+ data = _get_data(axis_matters=False)
+ labels = data._get_agg_axis(axis)
- values = data.values
- else:
- data = self
- values = data.values
- result = func(values)
+ values = data.values
+ with np.errstate(all="ignore"):
+ result = func(values)
- if filter_type == "bool" and is_object_dtype(values) and axis is None:
- # work around https://github.com/numpy/numpy/issues/10489
- # TODO: can we de-duplicate parts of this with the next blocK?
- result = np.bool_(result)
- elif hasattr(result, "dtype") and is_object_dtype(result.dtype):
+ if is_object_dtype(result.dtype):
try:
if filter_type is None:
result = result.astype(np.float64)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 286da6e1de9d5..e55d5dcb001b4 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -11658,6 +11658,14 @@ def logical_func(self, axis=0, bool_only=None, skipna=True, level=None, **kwargs
"Option bool_only is not implemented with option level."
)
return self._agg_by_level(name, axis=axis, level=level, skipna=skipna)
+
+ if self.ndim > 1 and axis is None:
+ # Reduce along one dimension then the other, to simplify DataFrame._reduce
+ res = logical_func(
+ self, axis=0, bool_only=bool_only, skipna=skipna, **kwargs
+ )
+ return logical_func(res, skipna=skipna, **kwargs)
+
return self._reduce(
func,
name=name,
| Between this, #35881, and the PR coming after 35881, we'll be able to simplify _reduce quite a bit. | https://api.github.com/repos/pandas-dev/pandas/pulls/35899 | 2020-08-25T23:27:39Z | 2020-09-02T01:56:40Z | 2020-09-02T01:56:40Z | 2020-09-02T02:30:58Z |
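The `generic.py` hunk above handles `axis=None` for `any`/`all` by reducing along axis 0 first and then reducing the 1-D result again. With nested lists standing in for a 2-D frame, the two-pass idea looks like this (a sketch, not the pandas code path):

```python
def all_axis_none(rows):
    # first pass: reduce along axis 0, giving one boolean per column
    per_column = [all(col) for col in zip(*rows)]
    # second pass: reduce the 1-D intermediate down to a scalar
    return all(per_column)
```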
CI: docker 32-bit linux build #32709 | diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 113ad3e338952..b1091ea7f60e4 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -26,3 +26,28 @@ jobs:
parameters:
name: Windows
vmImage: vs2017-win2016
+
+- job: py37_32bit
+ pool:
+ vmImage: ubuntu-18.04
+
+ steps:
+ - script: |
+ docker pull quay.io/pypa/manylinux2014_i686
+ docker run -v $(pwd):/pandas quay.io/pypa/manylinux2014_i686 \
+ /bin/bash -xc "cd pandas && \
+ /opt/python/cp37-cp37m/bin/python -m venv ~/virtualenvs/pandas-dev && \
+ . ~/virtualenvs/pandas-dev/bin/activate && \
+ python -m pip install --no-deps -U pip wheel setuptools && \
+ pip install cython numpy python-dateutil pytz pytest pytest-xdist hypothesis pytest-azurepipelines && \
+ python setup.py build_ext -q -i -j2 && \
+ python -m pip install --no-build-isolation -e . && \
+ pytest -m 'not slow and not network and not clipboard' pandas --junitxml=test-data.xml"
+ displayName: 'Run 32-bit manylinux2014 Docker Build / Tests'
+
+ - task: PublishTestResults@2
+ condition: succeededOrFailed()
+ inputs:
+ testResultsFiles: '**/test-*.xml'
+ failTaskOnFailedTests: true
+ testRunTitle: 'Publish test results for Python 3.7-32 bit full Linux'
diff --git a/pandas/tests/arrays/floating/test_function.py b/pandas/tests/arrays/floating/test_function.py
index 2767d93741d4c..baf60a363ad29 100644
--- a/pandas/tests/arrays/floating/test_function.py
+++ b/pandas/tests/arrays/floating/test_function.py
@@ -1,6 +1,8 @@
import numpy as np
import pytest
+from pandas.compat import IS64
+
import pandas as pd
import pandas._testing as tm
@@ -71,6 +73,7 @@ def test_ufunc_reduce_raises(values):
np.add.reduce(a)
+@pytest.mark.skipif(not IS64, reason="GH 36579: fail on 32-bit system")
@pytest.mark.parametrize(
"pandasmethname, kwargs",
[
diff --git a/pandas/tests/base/test_misc.py b/pandas/tests/base/test_misc.py
index f7952c81cfd61..6a9d58021a4d9 100644
--- a/pandas/tests/base/test_misc.py
+++ b/pandas/tests/base/test_misc.py
@@ -3,7 +3,7 @@
import numpy as np
import pytest
-from pandas.compat import PYPY
+from pandas.compat import IS64, PYPY
from pandas.core.dtypes.common import (
is_categorical_dtype,
@@ -128,7 +128,10 @@ def test_memory_usage(index_or_series_obj):
)
if len(obj) == 0:
- expected = 0 if isinstance(obj, Index) else 80
+ if isinstance(obj, Index):
+ expected = 0
+ else:
+ expected = 80 if IS64 else 48
assert res_deep == res == expected
elif is_object or is_categorical:
# only deep will pick them up
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 2e51fca71e139..b57fa2540add9 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -1268,8 +1268,8 @@ def test_groupby_nat_exclude():
assert grouped.ngroups == 2
expected = {
- Timestamp("2013-01-01 00:00:00"): np.array([1, 7], dtype=np.int64),
- Timestamp("2013-02-01 00:00:00"): np.array([3, 5], dtype=np.int64),
+ Timestamp("2013-01-01 00:00:00"): np.array([1, 7], dtype=np.intp),
+ Timestamp("2013-02-01 00:00:00"): np.array([3, 5], dtype=np.intp),
}
for k in grouped.indices:
diff --git a/pandas/tests/io/formats/test_info.py b/pandas/tests/io/formats/test_info.py
index 418d05a6b8752..8c2155aec7248 100644
--- a/pandas/tests/io/formats/test_info.py
+++ b/pandas/tests/io/formats/test_info.py
@@ -7,7 +7,7 @@
import numpy as np
import pytest
-from pandas.compat import PYPY
+from pandas.compat import IS64, PYPY
from pandas import (
CategoricalIndex,
@@ -475,6 +475,7 @@ def test_info_categorical():
df.info(buf=buf)
+@pytest.mark.xfail(not IS64, reason="GH 36579: fail on 32-bit system")
def test_info_int_columns():
# GH#37245
df = DataFrame({1: [1, 2], 2: [2, 3]}, index=["A", "B"])
diff --git a/pandas/tests/io/parser/test_c_parser_only.py b/pandas/tests/io/parser/test_c_parser_only.py
index ae63b6af3a8b6..eee111dd4579c 100644
--- a/pandas/tests/io/parser/test_c_parser_only.py
+++ b/pandas/tests/io/parser/test_c_parser_only.py
@@ -13,6 +13,7 @@
import numpy as np
import pytest
+from pandas.compat import IS64
from pandas.errors import ParserError
import pandas.util._test_decorators as td
@@ -717,7 +718,10 @@ def test_float_precision_options(c_parser_only):
df3 = parser.read_csv(StringIO(s), float_precision="legacy")
- assert not df.iloc[0, 0] == df3.iloc[0, 0]
+ if IS64:
+ assert not df.iloc[0, 0] == df3.iloc[0, 0]
+ else:
+ assert df.iloc[0, 0] == df3.iloc[0, 0]
msg = "Unrecognized float_precision option: junk"
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 92128def4540a..642e6a691463e 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -4,6 +4,8 @@
import numpy as np
import pytest
+from pandas.compat import IS64
+
import pandas as pd
from pandas import (
Categorical,
@@ -2104,6 +2106,7 @@ def test_pivot_duplicates(self):
with pytest.raises(ValueError, match="duplicate entries"):
data.pivot("a", "b", "c")
+ @pytest.mark.xfail(not IS64, reason="GH 36579: fail on 32-bit system")
def test_pivot_empty(self):
df = DataFrame(columns=["a", "b", "c"])
result = df.pivot("a", "b", "c")
| - [x] closes #32709
| https://api.github.com/repos/pandas-dev/pandas/pulls/35898 | 2020-08-25T22:09:26Z | 2020-10-28T03:15:11Z | 2020-10-28T03:15:11Z | 2022-11-18T02:21:03Z |
CI: Mark s3 tests parallel safe | diff --git a/pandas/tests/io/conftest.py b/pandas/tests/io/conftest.py
index 518f31d73efa9..193baa8c3ed74 100644
--- a/pandas/tests/io/conftest.py
+++ b/pandas/tests/io/conftest.py
@@ -34,12 +34,13 @@ def feather_file(datapath):
@pytest.fixture
-def s3so():
- return dict(client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"})
+def s3so(worker_id):
+ worker_id = "5" if worker_id == "master" else worker_id.lstrip("gw")
+ return dict(client_kwargs={"endpoint_url": f"http://127.0.0.1:555{worker_id}/"})
-@pytest.fixture(scope="module")
-def s3_base():
+@pytest.fixture(scope="session")
+def s3_base(worker_id):
"""
Fixture for mocking S3 interaction.
@@ -61,11 +62,13 @@ def s3_base():
# Launching moto in server mode, i.e., as a separate process
# with an S3 endpoint on localhost
- endpoint_uri = "http://127.0.0.1:5555/"
+ worker_id = "5" if worker_id == "master" else worker_id.lstrip("gw")
+ endpoint_port = f"555{worker_id}"
+ endpoint_uri = f"http://127.0.0.1:{endpoint_port}/"
# pipe to null to avoid logging in terminal
proc = subprocess.Popen(
- shlex.split("moto_server s3 -p 5555"), stdout=subprocess.DEVNULL
+ shlex.split(f"moto_server s3 -p {endpoint_port}"), stdout=subprocess.DEVNULL
)
timeout = 5
@@ -79,7 +82,7 @@ def s3_base():
pass
timeout -= 0.1
time.sleep(0.1)
- yield
+ yield endpoint_uri
proc.terminate()
proc.wait()
@@ -119,9 +122,8 @@ def add_tips_files(bucket_name):
cli.put_object(Bucket=bucket_name, Key=s3_key, Body=f)
bucket = "pandas-test"
- endpoint_uri = "http://127.0.0.1:5555/"
- conn = boto3.resource("s3", endpoint_url=endpoint_uri)
- cli = boto3.client("s3", endpoint_url=endpoint_uri)
+ conn = boto3.resource("s3", endpoint_url=s3_base)
+ cli = boto3.client("s3", endpoint_url=s3_base)
try:
cli.create_bucket(Bucket=bucket)
@@ -143,7 +145,7 @@ def add_tips_files(bucket_name):
s3fs.S3FileSystem.clear_instance_cache()
yield conn
- s3 = s3fs.S3FileSystem(client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"})
+ s3 = s3fs.S3FileSystem(client_kwargs={"endpoint_url": s3_base})
try:
s3.rm(bucket, recursive=True)
diff --git a/pandas/tests/io/json/test_compression.py b/pandas/tests/io/json/test_compression.py
index 5bb205842269e..c0e3220454bf1 100644
--- a/pandas/tests/io/json/test_compression.py
+++ b/pandas/tests/io/json/test_compression.py
@@ -34,7 +34,7 @@ def test_read_zipped_json(datapath):
@td.skip_if_not_us_locale
-def test_with_s3_url(compression, s3_resource):
+def test_with_s3_url(compression, s3_resource, s3so):
# Bucket "pandas-test" created in tests/io/conftest.py
df = pd.read_json('{"a": [1, 2, 3], "b": [4, 5, 6]}')
@@ -45,9 +45,7 @@ def test_with_s3_url(compression, s3_resource):
s3_resource.Bucket("pandas-test").put_object(Key="test-1", Body=f)
roundtripped_df = pd.read_json(
- "s3://pandas-test/test-1",
- compression=compression,
- storage_options=dict(client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"}),
+ "s3://pandas-test/test-1", compression=compression, storage_options=s3so,
)
tm.assert_frame_equal(df, roundtripped_df)
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 64a666079876f..2022abbaee323 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -1702,17 +1702,14 @@ def test_json_multiindex(self, dataframe, expected):
result = series.to_json(orient="index")
assert result == expected
- def test_to_s3(self, s3_resource):
+ def test_to_s3(self, s3_resource, s3so):
import time
# GH 28375
mock_bucket_name, target_file = "pandas-test", "test.json"
df = DataFrame({"x": [1, 2, 3], "y": [2, 4, 6]})
df.to_json(
- f"s3://{mock_bucket_name}/{target_file}",
- storage_options=dict(
- client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"}
- ),
+ f"s3://{mock_bucket_name}/{target_file}", storage_options=s3so,
)
timeout = 5
while True:
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 4e0c16c71a6a8..15f9837176315 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -158,10 +158,6 @@ def check_round_trip(
"""
write_kwargs = write_kwargs or {"compression": None}
read_kwargs = read_kwargs or {}
- if isinstance(path, str) and "s3://" in path:
- s3so = dict(client_kwargs={"endpoint_url": "http://127.0.0.1:5555/"})
- read_kwargs["storage_options"] = s3so
- write_kwargs["storage_options"] = s3so
if expected is None:
expected = df
@@ -555,15 +551,24 @@ def test_s3_roundtrip_explicit_fs(self, df_compat, s3_resource, pa, s3so):
write_kwargs=kw,
)
- def test_s3_roundtrip(self, df_compat, s3_resource, pa):
+ def test_s3_roundtrip(self, df_compat, s3_resource, pa, s3so):
if LooseVersion(pyarrow.__version__) <= LooseVersion("0.17.0"):
pytest.skip()
# GH #19134
- check_round_trip(df_compat, pa, path="s3://pandas-test/pyarrow.parquet")
+ s3so = dict(storage_options=s3so)
+ check_round_trip(
+ df_compat,
+ pa,
+ path="s3://pandas-test/pyarrow.parquet",
+ read_kwargs=s3so,
+ write_kwargs=s3so,
+ )
@td.skip_if_no("s3fs")
@pytest.mark.parametrize("partition_col", [["A"], []])
- def test_s3_roundtrip_for_dir(self, df_compat, s3_resource, pa, partition_col):
+ def test_s3_roundtrip_for_dir(
+ self, df_compat, s3_resource, pa, partition_col, s3so
+ ):
# GH #26388
expected_df = df_compat.copy()
@@ -587,7 +592,10 @@ def test_s3_roundtrip_for_dir(self, df_compat, s3_resource, pa, partition_col):
pa,
expected=expected_df,
path="s3://pandas-test/parquet_dir",
- write_kwargs={"partition_cols": partition_col, "compression": None},
+ read_kwargs=dict(storage_options=s3so),
+ write_kwargs=dict(
+ partition_cols=partition_col, compression=None, storage_options=s3so
+ ),
check_like=True,
repeat=1,
)
@@ -761,9 +769,15 @@ def test_filter_row_groups(self, fp):
result = read_parquet(path, fp, filters=[("a", "==", 0)])
assert len(result) == 1
- def test_s3_roundtrip(self, df_compat, s3_resource, fp):
+ def test_s3_roundtrip(self, df_compat, s3_resource, fp, s3so):
# GH #19134
- check_round_trip(df_compat, fp, path="s3://pandas-test/fastparquet.parquet")
+ check_round_trip(
+ df_compat,
+ fp,
+ path="s3://pandas-test/fastparquet.parquet",
+ read_kwargs=dict(storage_options=s3so),
+ write_kwargs=dict(compression=None, storage_options=s3so),
+ )
def test_partition_cols_supported(self, fp, df_full):
# GH #23283
| Closes https://github.com/pandas-dev/pandas/issues/35856
I think we need to update the pytest pattern though, so this should
fail. | https://api.github.com/repos/pandas-dev/pandas/pulls/35895 | 2020-08-25T14:49:51Z | 2020-08-26T02:21:41Z | 2020-08-26T02:21:41Z | 2020-09-10T16:30:56Z |
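The fixtures above make the moto S3 server parallel-safe by deriving a distinct port from the pytest-xdist worker id ("master" with no xdist, otherwise "gw0", "gw1", ...). The mapping can be sketched on its own (note it assumes at most ten workers, since only the final digit of the port varies):

```python
def moto_endpoint(worker_id: str) -> str:
    """Map a pytest-xdist worker id to a unique local moto endpoint."""
    # "master" keeps the historical port 5555; worker "gwN" gets 555N
    digit = "5" if worker_id == "master" else worker_id.lstrip("gw")
    return f"http://127.0.0.1:555{digit}/"
```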
DOC: avoid StorageOptions type alias in docstrings | diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index aaef71910c9ab..3cd0d721bbdc6 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -200,13 +200,13 @@
Duplicate columns will be specified as 'X', 'X.1', ...'X.N', rather than
'X'...'X'. Passing in False will cause data to be overwritten if there
are duplicate names in the columns.
-storage_options : StorageOptions
+storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc., if using a URL that will
be parsed by ``fsspec``, e.g., starting "s3://", "gcs://". An error
will be raised if providing this argument with a local path or
a file-like buffer. See the fsspec and backend storage implementation
- docs for the set of allowed keys and values
+ docs for the set of allowed keys and values.
.. versionadded:: 1.2.0
diff --git a/pandas/io/excel/_odfreader.py b/pandas/io/excel/_odfreader.py
index a6cd8f524503b..6cbca59aed97e 100644
--- a/pandas/io/excel/_odfreader.py
+++ b/pandas/io/excel/_odfreader.py
@@ -18,7 +18,7 @@ class _ODFReader(_BaseExcelReader):
----------
filepath_or_buffer : string, path to be parsed or
an open readable stream.
- storage_options : StorageOptions
+ storage_options : dict, optional
passed to fsspec for appropriate URLs (see ``get_filepath_or_buffer``)
"""
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
index 73239190604db..c2730536af8a3 100644
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -479,7 +479,7 @@ def __init__(
----------
filepath_or_buffer : string, path object or Workbook
Object to be parsed.
- storage_options : StorageOptions
+ storage_options : dict, optional
passed to fsspec for appropriate URLs (see ``get_filepath_or_buffer``)
"""
import_optional_dependency("openpyxl")
diff --git a/pandas/io/excel/_pyxlsb.py b/pandas/io/excel/_pyxlsb.py
index c0e281ff6c2da..c15a52abe4d53 100644
--- a/pandas/io/excel/_pyxlsb.py
+++ b/pandas/io/excel/_pyxlsb.py
@@ -19,7 +19,7 @@ def __init__(
----------
filepath_or_buffer : str, path object, or Workbook
Object to be parsed.
- storage_options : StorageOptions
+ storage_options : dict, optional
passed to fsspec for appropriate URLs (see ``get_filepath_or_buffer``)
"""
import_optional_dependency("pyxlsb")
diff --git a/pandas/io/excel/_xlrd.py b/pandas/io/excel/_xlrd.py
index ff1b3c8bdb964..a7fb519af61c6 100644
--- a/pandas/io/excel/_xlrd.py
+++ b/pandas/io/excel/_xlrd.py
@@ -17,7 +17,7 @@ def __init__(self, filepath_or_buffer, storage_options: StorageOptions = None):
----------
filepath_or_buffer : string, path object or Workbook
Object to be parsed.
- storage_options : StorageOptions
+ storage_options : dict, optional
passed to fsspec for appropriate URLs (see ``get_filepath_or_buffer``)
"""
err_msg = "Install xlrd >= 1.0.0 for Excel support"
diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
index 2d86fa44f22a4..fb606b5ec8aef 100644
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -16,14 +16,13 @@ def to_feather(df: DataFrame, path, storage_options: StorageOptions = None, **kw
----------
df : DataFrame
path : string file path, or file-like object
-
storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc., if using a URL that will
be parsed by ``fsspec``, e.g., starting "s3://", "gcs://". An error
will be raised if providing this argument with a local path or
a file-like buffer. See the fsspec and backend storage implementation
- docs for the set of allowed keys and values
+ docs for the set of allowed keys and values.
.. versionadded:: 1.2.0
@@ -106,6 +105,15 @@ def read_feather(
Whether to parallelize reading using multiple threads.
.. versionadded:: 0.24.0
+ storage_options : dict, optional
+ Extra options that make sense for a particular storage connection, e.g.
+ host, port, username, password, etc., if using a URL that will
+ be parsed by ``fsspec``, e.g., starting "s3://", "gcs://". An error
+ will be raised if providing this argument with a local path or
+ a file-like buffer. See the fsspec and backend storage implementation
+ docs for the set of allowed keys and values.
+
+ .. versionadded:: 1.2.0
Returns
-------
| Small follow-up on https://github.com/pandas-dev/pandas/pull/35655, replacing the "StorageOptions" with a plain "dict" in the docstrings ("StorageOptions" is not something known to users, in the type annotations it will expand but not in the docstrings)
(cc @martindurant) | https://api.github.com/repos/pandas-dev/pandas/pulls/35894 | 2020-08-25T14:42:06Z | 2020-08-25T20:12:32Z | 2020-08-25T20:12:32Z | 2020-08-25T20:12:35Z |
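As the reworded docstrings above say, `storage_options` is just a plain dict of options forwarded to the fsspec backend. For an s3fs-backed URL with a custom endpoint it takes a shape like this (the endpoint value is illustrative):

```python
# keys are interpreted by the filesystem implementation (here: s3fs)
storage_options = {"client_kwargs": {"endpoint_url": "http://127.0.0.1:5555/"}}

# it would be passed straight through, e.g.:
# pd.read_json("s3://bucket/key.json", storage_options=storage_options)
```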
Fix interpolate limit area and limit direction with pad | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index adc1806523d6e..2df1db260eeb5 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -225,6 +225,7 @@ Missing
^^^^^^^
- Bug in :meth:`SeriesGroupBy.transform` now correctly handles missing values for `dropna=False` (:issue:`35014`)
+- Bug in :meth:`Series.interpolate` where kwargs ``limit_area`` and ``limit_direction`` had no effect when using methods ``pad`` and ``backfill`` (:issue:`31048`)
-
MultiIndex
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index c62be4f767f00..9dcfa807ebc6f 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1127,6 +1127,7 @@ def interpolate(
axis=axis,
inplace=inplace,
limit=limit,
+ limit_area=limit_area,
fill_value=fill_value,
coerce=coerce,
downcast=downcast,
@@ -1155,6 +1156,7 @@ def _interpolate_with_fill(
axis: int = 0,
inplace: bool = False,
limit: Optional[int] = None,
+ limit_area=None,
fill_value: Optional[Any] = None,
coerce: bool = False,
downcast: Optional[str] = None,
@@ -1176,16 +1178,17 @@ def _interpolate_with_fill(
# We only get here for non-ExtensionBlock
fill_value = convert_scalar_for_putitemlike(fill_value, self.values.dtype)
- values = missing.interpolate_2d(
+ interp_values = missing.interpolate_2d(
values,
method=method,
axis=axis,
limit=limit,
fill_value=fill_value,
+ limit_area=limit_area,
dtype=self.dtype,
)
- blocks = [self.make_block_same_class(values, ndim=self.ndim)]
+ blocks = [self.make_block_same_class(interp_values, ndim=self.ndim)]
return self._maybe_downcast(blocks, downcast)
def _interpolate(
diff --git a/pandas/core/missing.py b/pandas/core/missing.py
index 7802c5cbdbfb3..b1a0e9d711af0 100644
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -2,11 +2,12 @@
Routines for filling missing data.
"""
-from typing import Any, List, Optional, Set, Union
+from typing import Any, List, Optional
import numpy as np
from pandas._libs import algos, lib
+from pandas._typing import ArrayLike, Dtype, Hashable
from pandas.compat._optional import import_optional_dependency
from pandas.core.dtypes.cast import infer_dtype_from_array
@@ -230,41 +231,12 @@ def interpolate_1d(
# default limit is unlimited GH #16282
limit = algos._validate_limit(nobs=None, limit=limit)
- # These are sets of index pointers to invalid values... i.e. {0, 1, etc...
- all_nans = set(np.flatnonzero(invalid))
- start_nans = set(range(find_valid_index(yvalues, "first")))
- end_nans = set(range(1 + find_valid_index(yvalues, "last"), len(valid)))
- mid_nans = all_nans - start_nans - end_nans
-
- # Like the sets above, preserve_nans contains indices of invalid values,
- # but in this case, it is the final set of indices that need to be
- # preserved as NaN after the interpolation.
-
- # For example if limit_direction='forward' then preserve_nans will
- # contain indices of NaNs at the beginning of the series, and NaNs that
- # are more than'limit' away from the prior non-NaN.
-
- # set preserve_nans based on direction using _interp_limit
- preserve_nans: Union[List, Set]
- if limit_direction == "forward":
- preserve_nans = start_nans | set(_interp_limit(invalid, limit, 0))
- elif limit_direction == "backward":
- preserve_nans = end_nans | set(_interp_limit(invalid, 0, limit))
- else:
- # both directions... just use _interp_limit
- preserve_nans = set(_interp_limit(invalid, limit, limit))
-
- # if limit_area is set, add either mid or outside indices
- # to preserve_nans GH #16284
- if limit_area == "inside":
- # preserve NaNs on the outside
- preserve_nans |= start_nans | end_nans
- elif limit_area == "outside":
- # preserve NaNs on the inside
- preserve_nans |= mid_nans
-
- # sort preserve_nans and covert to list
- preserve_nans = sorted(preserve_nans)
+ preserve_nans = _derive_indices_of_nans_to_preserve(
+ yvalues=yvalues,
+ limit=limit,
+ limit_area=limit_area,
+ limit_direction=limit_direction,
+ )
yvalues = getattr(yvalues, "values", yvalues)
result = yvalues.copy()
@@ -307,6 +279,73 @@ def interpolate_1d(
return result
+def _derive_indices_of_nans_to_preserve(
+ yvalues: ArrayLike,
+ limit: Optional[int] = None,
+ limit_area: Optional[str] = None,
+ limit_direction: Optional[str] = None,
+) -> List[int]:
+ """
+ Derive the indices of NaNs that shall be preserved after interpolation.
+ This function is called by `interpolate_1d` and takes the arguments with
+ the same name from there. In `interpolate_1d`, after performing the
+ interpolation, the list of indices of NaNs to preserve is used to put
+ NaNs in the desired locations.
+
+ Parameters
+ ----------
+ yvalues: ArrayLike
+ 1-d array of values of the initial Series or DataFrame
+ limit: int
+ limit_area: str
+ limit_direction: str
+
+ Returns
+ -------
+ preserve_nans: list of int
+ List of index pointers to where NaNs should be preserved in `yvalues`
+ """
+
+ invalid = isna(yvalues)
+ valid = ~invalid
+
+ # These are sets of index pointers to invalid values... i.e. {0, 1, etc...
+ all_nans = set(np.flatnonzero(invalid))
+ start_nans = set(range(find_valid_index(yvalues, "first")))
+ end_nans = set(range(1 + find_valid_index(yvalues, "last"), len(valid)))
+ mid_nans = all_nans - start_nans - end_nans
+
+ # Like the sets above, preserve_nans contains indices of invalid values,
+ # but in this case, it is the final set of indices that need to be
+ # preserved as NaN after the interpolation.
+
+ # For example if limit_direction='forward' then preserve_nans will
+ # contain indices of NaNs at the beginning of the series, and NaNs that
+ # are more than'limit' away from the prior non-NaN.
+
+ # set preserve_nans based on direction using _interp_limit
+ if limit_direction == "forward":
+ preserve_nans = start_nans | set(_interp_limit(invalid, limit, 0))
+ elif limit_direction == "backward":
+ preserve_nans = end_nans | set(_interp_limit(invalid, 0, limit))
+ else:
+ # both directions... just use _interp_limit
+ preserve_nans = set(_interp_limit(invalid, limit, limit))
+
+ # if limit_area is set, add either mid or outside indices
+ # to preserve_nans GH #16284
+ if limit_area == "inside":
+ # preserve NaNs on the outside
+ preserve_nans |= start_nans | end_nans
+ elif limit_area == "outside":
+ # preserve NaNs on the inside
+ preserve_nans |= mid_nans
+
+ # sort preserve_nans and convert to list
+ preserve_nans_sorted = sorted(preserve_nans)
+ return preserve_nans_sorted
+
+
def _interpolate_scipy_wrapper(
x, y, new_x, method, fill_value=None, bounds_error=False, order=None, **kwargs
):
@@ -542,45 +581,127 @@ def _cubicspline_interpolate(xi, yi, x, axis=0, bc_type="not-a-knot", extrapolat
return P(x)
-def interpolate_2d(
- values, method="pad", axis=0, limit=None, fill_value=None, dtype=None
+def interpolate_1d_fill(
+ values,
+ method: str = "pad",
+ limit: Optional[int] = None,
+ limit_area: Optional[str] = None,
+ fill_value: Optional[Hashable] = None,
+ dtype: Optional[Dtype] = None,
):
"""
- Perform an actual interpolation of values, values will be make 2-d if
- needed fills inplace, returns the result.
+    This is a 1D-version of `interpolate_2d`, which is used for methods `pad`
+ and `backfill` when interpolating. This 1D-version is necessary to be
+ able to handle kwarg `limit_area` via the function
+    `_derive_indices_of_nans_to_preserve`. It is used the same way as the
+ 1D-interpolation functions which are based on scipy-interpolation, i.e.
+ via np.apply_along_axis.
"""
+ if method == "pad":
+ limit_direction = "forward"
+ elif method == "backfill":
+ limit_direction = "backward"
+ else:
+ raise ValueError("`method` must be either 'pad' or 'backfill'.")
+
orig_values = values
- transf = (lambda x: x) if axis == 0 else (lambda x: x.T)
+ yvalues = values
- # reshape a 1 dim if needed
- ndim = values.ndim
- if values.ndim == 1:
- if axis != 0: # pragma: no cover
- raise AssertionError("cannot interpolate on a ndim == 1 with axis != 0")
- values = values.reshape(tuple((1,) + values.shape))
+ if values.ndim > 1:
+ raise AssertionError("This only works with 1D data.")
if fill_value is None:
mask = None
else: # todo create faster fill func without masking
- mask = mask_missing(transf(values), fill_value)
+ mask = mask_missing(values, fill_value)
+
+ preserve_nans = _derive_indices_of_nans_to_preserve(
+ yvalues=yvalues,
+ limit=limit,
+ limit_area=limit_area,
+ limit_direction=limit_direction,
+ )
method = clean_fill_method(method)
if method == "pad":
- values = transf(pad_2d(transf(values), limit=limit, mask=mask, dtype=dtype))
+ values = pad_1d(values, limit=limit, mask=mask, dtype=dtype)
else:
- values = transf(
- backfill_2d(transf(values), limit=limit, mask=mask, dtype=dtype)
- )
-
- # reshape back
- if ndim == 1:
- values = values[0]
+ values = backfill_1d(values, limit=limit, mask=mask, dtype=dtype)
if orig_values.dtype.kind == "M":
# convert float back to datetime64
values = values.astype(orig_values.dtype)
+ values[preserve_nans] = fill_value
+ return values
+
+
+def interpolate_2d(
+ values,
+ method="pad",
+ axis=0,
+ limit=None,
+ fill_value=None,
+ limit_area=None,
+ dtype=None,
+):
+ """
+    Perform an actual interpolation of values. Values will be made 2-d if
+    needed; fills inplace and returns the result.
+ """
+ orig_values = values
+
+ # We have to distinguish two cases:
+ # 1. When kwarg `limit_area` is used: It is not
+ # supported by `pad_2d` and `backfill_2d`. Using this kwarg only
+ # works by applying the fill along a certain axis.
+ # 2. All other cases.
+ if limit_area is not None:
+
+ def func(x):
+ return interpolate_1d_fill(
+ x,
+ method=method,
+ limit=limit,
+ limit_area=limit_area,
+ fill_value=fill_value,
+ dtype=dtype,
+ )
+
+ # Beware that this also changes the input array `values`!
+ values = np.apply_along_axis(func, axis, values)
+ else:
+ transf = (lambda x: x) if axis == 0 else (lambda x: x.T)
+
+ # reshape a 1 dim if needed
+ ndim = values.ndim
+ if values.ndim == 1:
+ if axis != 0: # pragma: no cover
+ raise AssertionError("cannot interpolate on a ndim == 1 with axis != 0")
+ values = values.reshape(tuple((1,) + values.shape))
+
+ if fill_value is None:
+ mask = None
+ else: # todo create faster fill func without masking
+ mask = mask_missing(transf(values), fill_value)
+
+ method = clean_fill_method(method)
+ if method == "pad":
+ values = transf(pad_2d(transf(values), limit=limit, mask=mask, dtype=dtype))
+ else:
+ values = transf(
+ backfill_2d(transf(values), limit=limit, mask=mask, dtype=dtype)
+ )
+
+ # reshape back
+ if ndim == 1:
+ values = values[0]
+
+ if orig_values.dtype.kind == "M":
+ # convert float back to datetime64
+ values = values.astype(orig_values.dtype)
+
return values
diff --git a/pandas/tests/series/methods/test_interpolate.py b/pandas/tests/series/methods/test_interpolate.py
index c4b10e0ccdc3e..509362839632a 100644
--- a/pandas/tests/series/methods/test_interpolate.py
+++ b/pandas/tests/series/methods/test_interpolate.py
@@ -429,6 +429,54 @@ def test_interp_limit_area(self):
with pytest.raises(ValueError, match=msg):
s.interpolate(method="linear", limit_area="abc")
+ def test_interp_limit_area_with_pad(self):
+ # Test for issue #26796
+ s = Series([np.nan, np.nan, 3, np.nan, np.nan, np.nan, 7, np.nan, np.nan])
+
+ expected = Series([np.nan, np.nan, 3.0, 3.0, 3.0, 3.0, 7.0, np.nan, np.nan])
+ result = s.interpolate(method="pad", limit_area="inside")
+ tm.assert_series_equal(result, expected)
+
+ expected = Series(
+ [np.nan, np.nan, 3.0, 3.0, np.nan, np.nan, 7.0, np.nan, np.nan]
+ )
+ result = s.interpolate(method="pad", limit_area="inside", limit=1)
+ tm.assert_series_equal(result, expected)
+
+ expected = Series([np.nan, np.nan, 3.0, np.nan, np.nan, np.nan, 7.0, 7.0, 7.0])
+ result = s.interpolate(method="pad", limit_area="outside")
+ tm.assert_series_equal(result, expected)
+
+ expected = Series(
+ [np.nan, np.nan, 3.0, np.nan, np.nan, np.nan, 7.0, 7.0, np.nan]
+ )
+ result = s.interpolate(method="pad", limit_area="outside", limit=1)
+ tm.assert_series_equal(result, expected)
+
+ def test_interp_limit_area_with_backfill(self):
+ # Test for issue #26796
+ s = Series([np.nan, np.nan, 3, np.nan, np.nan, np.nan, 7, np.nan, np.nan])
+
+ expected = Series([np.nan, np.nan, 3.0, 7.0, 7.0, 7.0, 7.0, np.nan, np.nan])
+ result = s.interpolate(method="bfill", limit_area="inside")
+ tm.assert_series_equal(result, expected)
+
+ expected = Series(
+ [np.nan, np.nan, 3.0, np.nan, np.nan, 7.0, 7.0, np.nan, np.nan]
+ )
+ result = s.interpolate(method="bfill", limit_area="inside", limit=1)
+ tm.assert_series_equal(result, expected)
+
+ expected = Series([3.0, 3.0, 3.0, np.nan, np.nan, np.nan, 7.0, np.nan, np.nan])
+ result = s.interpolate(method="bfill", limit_area="outside")
+ tm.assert_series_equal(result, expected)
+
+ expected = Series(
+ [np.nan, 3.0, 3.0, np.nan, np.nan, np.nan, 7.0, np.nan, np.nan]
+ )
+ result = s.interpolate(method="bfill", limit_area="outside", limit=1)
+ tm.assert_series_equal(result, expected)
+
@pytest.mark.parametrize(
"method, limit_direction, expected",
[
| fork of #31048
@jreback I don't seem to be able to push updates to #31048, so could close and carry on here | https://api.github.com/repos/pandas-dev/pandas/pulls/35893 | 2020-08-25T14:35:26Z | 2020-09-15T09:47:59Z | null | 2020-09-15T11:34:43Z |
Backport PR #35814: TST: Fix test_parquet failures for pyarrow 1.0 | diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 82157f3d722a9..306b2a7849586 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -557,13 +557,23 @@ def test_s3_roundtrip(self, df_compat, s3_resource, pa):
@pytest.mark.parametrize("partition_col", [["A"], []])
def test_s3_roundtrip_for_dir(self, df_compat, s3_resource, pa, partition_col):
# GH #26388
- # https://github.com/apache/arrow/blob/master/python/pyarrow/tests/test_parquet.py#L2716
- # As per pyarrow partitioned columns become 'categorical' dtypes
- # and are added to back of dataframe on read
-
expected_df = df_compat.copy()
- if partition_col:
- expected_df[partition_col] = expected_df[partition_col].astype("category")
+
+ # GH #35791
+ # read_table uses the new Arrow Datasets API since pyarrow 1.0.0
+        # Previously, pyarrow partitioned columns became 'category' dtypes and
+        # were added to the back of the dataframe on read. In the new API, the
+        # category dtype is only used if the partition field is a string.
+ legacy_read_table = LooseVersion(pyarrow.__version__) < LooseVersion("1.0.0")
+ if partition_col and legacy_read_table:
+ partition_col_type = "category"
+ else:
+ partition_col_type = "int32"
+
+ expected_df[partition_col] = expected_df[partition_col].astype(
+ partition_col_type
+ )
+
check_round_trip(
df_compat,
pa,
| Backport https://github.com/pandas-dev/pandas/pull/35814 | https://api.github.com/repos/pandas-dev/pandas/pulls/35887 | 2020-08-25T07:07:52Z | 2020-08-25T10:04:02Z | 2020-08-25T10:04:02Z | 2020-08-25T15:18:11Z |
DOC: Fix documentation for pandas.Series.transform #35870 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 9f36405bf6428..286da6e1de9d5 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -10703,7 +10703,7 @@ def transform(self, func, *args, **kwargs):
- function
- string function name
- - list of functions and/or function names, e.g. ``[np.exp. 'sqrt']``
+ - list of functions and/or function names, e.g. ``[np.exp, 'sqrt']``
- dict of axis labels -> functions, function names or list of such.
{axis}
*args
| - [x] closes #35870
- [x] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35885 | 2020-08-25T03:46:11Z | 2020-08-25T05:37:04Z | 2020-08-25T05:37:04Z | 2020-08-27T13:41:27Z |
REF: reuse _combine instead of reset_dropped_locs | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 1198baab12ac1..70a8379de64e9 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -21,7 +21,6 @@
Mapping,
Optional,
Sequence,
- Tuple,
Type,
Union,
)
@@ -1025,16 +1024,14 @@ def _iterate_slices(self) -> Iterable[Series]:
def _cython_agg_general(
self, how: str, alt=None, numeric_only: bool = True, min_count: int = -1
) -> DataFrame:
- agg_blocks, agg_items = self._cython_agg_blocks(
+ agg_mgr = self._cython_agg_blocks(
how, alt=alt, numeric_only=numeric_only, min_count=min_count
)
- return self._wrap_agged_blocks(agg_blocks, items=agg_items)
+ return self._wrap_agged_blocks(agg_mgr.blocks, items=agg_mgr.items)
def _cython_agg_blocks(
self, how: str, alt=None, numeric_only: bool = True, min_count: int = -1
- ) -> "Tuple[List[Block], Index]":
- # TODO: the actual managing of mgr_locs is a PITA
- # here, it should happen via BlockManager.combine
+ ) -> BlockManager:
data: BlockManager = self._get_data_to_aggregate()
@@ -1124,7 +1121,6 @@ def blk_func(bvalues: ArrayLike) -> ArrayLike:
res_values = cast_agg_result(result, bvalues, how)
return res_values
- skipped: List[int] = []
for i, block in enumerate(data.blocks):
try:
nbs = block.apply(blk_func)
@@ -1132,7 +1128,7 @@ def blk_func(bvalues: ArrayLike) -> ArrayLike:
# TypeError -> we may have an exception in trying to aggregate
# continue and exclude the block
# NotImplementedError -> "ohlc" with wrong dtype
- skipped.append(i)
+ pass
else:
agg_blocks.extend(nbs)
@@ -1141,9 +1137,8 @@ def blk_func(bvalues: ArrayLike) -> ArrayLike:
# reset the locs in the blocks to correspond to our
# current ordering
- agg_items = data.reset_dropped_locs(agg_blocks, skipped)
-
- return agg_blocks, agg_items
+ new_mgr = data._combine(agg_blocks)
+ return new_mgr
def _aggregate_frame(self, func, *args, **kwargs) -> DataFrame:
if self.grouper.nkeys != 1:
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 297ad3077ef1d..6f16254c56ec4 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1491,38 +1491,6 @@ def unstack(self, unstacker, fill_value) -> "BlockManager":
bm = BlockManager(new_blocks, [new_columns, new_index])
return bm
- def reset_dropped_locs(self, blocks: List[Block], skipped: List[int]) -> Index:
- """
- Decrement the mgr_locs of the given blocks with `skipped` removed.
-
- Notes
- -----
- Alters each block's mgr_locs inplace.
- """
- ncols = len(self)
-
- new_locs = [blk.mgr_locs.as_array for blk in blocks]
- indexer = np.concatenate(new_locs)
-
- new_items = self.items.take(np.sort(indexer))
-
- if skipped:
- # we need to adjust the indexer to account for the
- # items we have removed
- deleted_items = [self.blocks[i].mgr_locs.as_array for i in skipped]
- deleted = np.concatenate(deleted_items)
- ai = np.arange(ncols)
- mask = np.zeros(ncols)
- mask[deleted] = 1
- indexer = (ai - mask.cumsum())[indexer]
-
- offset = 0
- for blk in blocks:
- loc = len(blk.mgr_locs)
- blk.mgr_locs = indexer[offset : (offset + loc)]
- offset += loc
- return new_items
-
class SingleBlockManager(BlockManager):
""" manage a single block with """
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index a70247d9f7f9c..baabdf0fca29a 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -561,8 +561,7 @@ def hfunc(bvalues: ArrayLike) -> ArrayLike:
elif not len(res_blocks):
return obj.astype("float64")
- new_cols = mgr.reset_dropped_locs(res_blocks, skipped)
- new_mgr = type(mgr).from_blocks(res_blocks, [new_cols, obj.index])
+ new_mgr = mgr._combine(res_blocks)
out = obj._constructor(new_mgr)
self._insert_on_column(out, obj)
return out
| https://api.github.com/repos/pandas-dev/pandas/pulls/35884 | 2020-08-25T03:40:01Z | 2020-08-25T12:59:42Z | 2020-08-25T12:59:42Z | 2020-08-25T15:14:10Z | |
REF: use BlockManager.apply for Rolling.count | diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 04509a40b98df..246bf8e6f71b7 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -22,7 +22,7 @@
from pandas._libs.tslibs import BaseOffset, to_offset
import pandas._libs.window.aggregations as window_aggregations
-from pandas._typing import ArrayLike, Axis, FrameOrSeries, FrameOrSeriesUnion, Label
+from pandas._typing import ArrayLike, Axis, FrameOrSeries, FrameOrSeriesUnion
from pandas.compat._optional import import_optional_dependency
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution, cache_readonly, doc
@@ -44,6 +44,7 @@
ABCSeries,
ABCTimedeltaIndex,
)
+from pandas.core.dtypes.missing import notna
from pandas.core.base import DataError, PandasObject, SelectionMixin, ShallowMixin
import pandas.core.common as com
@@ -395,40 +396,6 @@ def _wrap_result(self, result, block=None, obj=None):
return type(obj)(result, index=index, columns=block.columns)
return result
- def _wrap_results(self, results, obj, skipped: List[int]) -> FrameOrSeriesUnion:
- """
- Wrap the results.
-
- Parameters
- ----------
- results : list of ndarrays
- obj : conformed data (may be resampled)
- skipped: List[int]
- Indices of blocks that are skipped.
- """
- from pandas import Series, concat
-
- if obj.ndim == 1:
- if not results:
- raise DataError("No numeric types to aggregate")
- assert len(results) == 1
- return Series(results[0], index=obj.index, name=obj.name)
-
- exclude: List[Label] = []
- orig_blocks = list(obj._to_dict_of_blocks(copy=False).values())
- for i in skipped:
- exclude.extend(orig_blocks[i].columns)
-
- columns = [c for c in self._selected_obj.columns if c not in exclude]
- if not columns and not len(results) and exclude:
- raise DataError("No numeric types to aggregate")
- elif not len(results):
- return obj.astype("float64")
-
- df = concat(results, axis=1).reindex(columns=columns, copy=False)
- self._insert_on_column(df, obj)
- return df
-
def _insert_on_column(self, result: "DataFrame", obj: "DataFrame"):
# if we have an 'on' column we want to put it back into
# the results in the same location
@@ -1325,21 +1292,29 @@ def count(self):
# implementations shouldn't end up here
assert not isinstance(self.window, BaseIndexer)
- blocks, obj = self._create_blocks(self._selected_obj)
- results = []
- for b in blocks:
- result = b.notna().astype(int)
+ _, obj = self._create_blocks(self._selected_obj)
+
+ def hfunc(values: np.ndarray) -> np.ndarray:
+ result = notna(values)
+ result = result.astype(int)
+ frame = type(obj)(result.T)
result = self._constructor(
- result,
+ frame,
window=self._get_window(),
min_periods=self.min_periods or 0,
center=self.center,
axis=self.axis,
closed=self.closed,
).sum()
- results.append(result)
+ return result.values.T
- return self._wrap_results(results, obj, skipped=[])
+ new_mgr = obj._mgr.apply(hfunc)
+ out = obj._constructor(new_mgr)
+ if obj.ndim == 1:
+ out.name = obj.name
+ else:
+ self._insert_on_column(out, obj)
+ return out
_shared_docs["apply"] = dedent(
r"""
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35883 | 2020-08-25T02:12:19Z | 2020-08-31T18:28:25Z | 2020-08-31T18:28:25Z | 2020-08-31T19:21:05Z |
BUG: item_cache invalidation in get_numeric_data | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index ac9fe9d2fca26..8cf79500c0384 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -33,6 +33,7 @@ Bug fixes
- Bug in :meth:`DateTimeIndex.format` and :meth:`PeriodIndex.format` with ``name=True`` setting the first item to ``"None"`` where it should be ``""`` (:issue:`35712`)
- Bug in :meth:`Float64Index.__contains__` incorrectly raising ``TypeError`` instead of returning ``False`` (:issue:`35788`)
- Bug in :class:`DataFrame` indexing returning an incorrect :class:`Series` in some cases when the series has been altered and a cache not invalidated (:issue:`33675`)
+- Bug in :meth:`DataFrame.corr` causing subsequent indexing lookups to be incorrect (:issue:`35882`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 2e3098d94afcb..f4dba46cb965c 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -719,7 +719,6 @@ def get_numeric_data(self, copy: bool = False) -> "BlockManager":
copy : bool, default False
Whether to copy the blocks
"""
- self._consolidate_inplace()
return self._combine([b for b in self.blocks if b.is_numeric], copy)
def _combine(self: T, blocks: List[Block], copy: bool = True) -> T:
diff --git a/pandas/tests/frame/methods/test_cov_corr.py b/pandas/tests/frame/methods/test_cov_corr.py
index d3548b639572d..f307acd8c2178 100644
--- a/pandas/tests/frame/methods/test_cov_corr.py
+++ b/pandas/tests/frame/methods/test_cov_corr.py
@@ -191,6 +191,23 @@ def test_corr_nullable_integer(self, nullable_column, other_column, method):
expected = pd.DataFrame(np.ones((2, 2)), columns=["a", "b"], index=["a", "b"])
tm.assert_frame_equal(result, expected)
+ def test_corr_item_cache(self):
+ # Check that corr does not lead to incorrect entries in item_cache
+
+ df = pd.DataFrame({"A": range(10)})
+ df["B"] = range(10)[::-1]
+
+ ser = df["A"] # populate item_cache
+ assert len(df._mgr.blocks) == 2
+
+ _ = df.corr()
+
+        # Check that corr didn't break the link between ser and df
+ ser.values[0] = 99
+ assert df.loc[0, "A"] == 99
+ assert df["A"] is ser
+ assert df.values[0, 0] == 99
+
class TestDataFrameCorrWith:
def test_corrwith(self, datetime_frame):
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35882 | 2020-08-25T01:13:32Z | 2020-09-05T19:55:37Z | 2020-09-05T19:55:37Z | 2020-09-07T11:33:18Z |
REF: ignore_failures in BlockManager.reduce | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1f9987d9d3f5b..8efe2fc090fc5 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -8595,6 +8595,7 @@ def _reduce(
cols = self.columns[~dtype_is_dt]
self = self[cols]
+ any_object = self.dtypes.apply(is_object_dtype).any()
# TODO: Make other agg func handle axis=None properly GH#21597
axis = self._get_axis_number(axis)
labels = self._get_agg_axis(axis)
@@ -8621,7 +8622,17 @@ def _get_data() -> DataFrame:
data = self._get_bool_data()
return data
- if numeric_only is not None:
+ if numeric_only is not None or (
+ numeric_only is None
+ and axis == 0
+ and not any_object
+ and not self._mgr.any_extension_types
+ ):
+ # For numeric_only non-None and axis non-None, we know
+ # which blocks to use and no try/except is needed.
+            # For numeric_only=None, only the case with axis==0 and no object
+            # dtypes is unambiguous and can be handled with BlockManager.reduce.
+            # For the case with EAs, see GH#35881.
df = self
if numeric_only is True:
df = _get_data()
@@ -8629,14 +8640,18 @@ def _get_data() -> DataFrame:
df = df.T
axis = 0
+ ignore_failures = numeric_only is None
+
# After possibly _get_data and transposing, we are now in the
# simple case where we can use BlockManager.reduce
- res = df._mgr.reduce(blk_func)
- out = df._constructor(res).iloc[0].rename(None)
+ res, indexer = df._mgr.reduce(blk_func, ignore_failures=ignore_failures)
+ out = df._constructor(res).iloc[0]
if out_dtype is not None:
out = out.astype(out_dtype)
if axis == 0 and is_object_dtype(out.dtype):
- out[:] = coerce_to_dtypes(out.values, df.dtypes)
+ # GH#35865 careful to cast explicitly to object
+ nvs = coerce_to_dtypes(out.values, df.dtypes.iloc[np.sort(indexer)])
+ out[:] = np.array(nvs, dtype=object)
return out
assert numeric_only is None
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 09f276be7d64a..9b6c4b664285e 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -348,12 +348,18 @@ def apply(self, func, **kwargs) -> List["Block"]:
return self._split_op_result(result)
- def reduce(self, func) -> List["Block"]:
+ def reduce(self, func, ignore_failures: bool = False) -> List["Block"]:
# We will apply the function and reshape the result into a single-row
# Block with the same mgr_locs; squeezing will be done at a higher level
assert self.ndim == 2
- result = func(self.values)
+ try:
+ result = func(self.values)
+ except (TypeError, NotImplementedError):
+ if ignore_failures:
+ return []
+ raise
+
if np.ndim(result) == 0:
# TODO(EA2D): special case not needed with 2D EAs
res_values = np.array([[result]])
@@ -2454,6 +2460,34 @@ def is_bool(self):
"""
return lib.is_bool_array(self.values.ravel("K"))
+ def reduce(self, func, ignore_failures: bool = False) -> List[Block]:
+ """
+ For object-dtype, we operate column-wise.
+ """
+ assert self.ndim == 2
+
+ values = self.values
+ if len(values) > 1:
+ # split_and_operate expects func with signature (mask, values, inplace)
+ def mask_func(mask, values, inplace):
+ if values.ndim == 1:
+ values = values.reshape(1, -1)
+ return func(values)
+
+ return self.split_and_operate(None, mask_func, False)
+
+ try:
+ res = func(values)
+ except TypeError:
+ if not ignore_failures:
+ raise
+ return []
+
+ assert isinstance(res, np.ndarray)
+ assert res.ndim == 1
+ res = res.reshape(1, -1)
+ return [self.make_block_same_class(res)]
+
def convert(
self,
copy: bool = True,
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index f2480adce89b4..7f5e99c3348b7 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -2,6 +2,7 @@
import itertools
from typing import (
Any,
+ Callable,
DefaultDict,
Dict,
List,
@@ -324,18 +325,44 @@ def _verify_integrity(self) -> None:
f"tot_items: {tot_items}"
)
- def reduce(self: T, func) -> T:
+ def reduce(
+ self: T, func: Callable, ignore_failures: bool = False
+ ) -> Tuple[T, np.ndarray]:
+ """
+ Apply reduction function blockwise, returning a single-row BlockManager.
+
+ Parameters
+ ----------
+ func : reduction function
+ ignore_failures : bool, default False
+ Whether to drop blocks where func raises TypeError.
+
+ Returns
+ -------
+ BlockManager
+ np.ndarray
+ Indexer of mgr_locs that are retained.
+ """
# If 2D, we assume that we're operating column-wise
assert self.ndim == 2
res_blocks: List[Block] = []
for blk in self.blocks:
- nbs = blk.reduce(func)
+ nbs = blk.reduce(func, ignore_failures)
res_blocks.extend(nbs)
- index = Index([0]) # placeholder
- new_mgr = BlockManager.from_blocks(res_blocks, [self.items, index])
- return new_mgr
+ index = Index([None]) # placeholder
+ if ignore_failures:
+ if res_blocks:
+ indexer = np.concatenate([blk.mgr_locs.as_array for blk in res_blocks])
+ new_mgr = self._combine(res_blocks, copy=False, index=index)
+ else:
+ indexer = []
+ new_mgr = type(self).from_blocks([], [Index([]), index])
+ else:
+ indexer = np.arange(self.shape[0])
+ new_mgr = type(self).from_blocks(res_blocks, [self.items, index])
+ return new_mgr, indexer
def operate_blockwise(self, other: "BlockManager", array_op) -> "BlockManager":
"""
@@ -700,7 +727,9 @@ def get_numeric_data(self, copy: bool = False) -> "BlockManager":
"""
return self._combine([b for b in self.blocks if b.is_numeric], copy)
- def _combine(self: T, blocks: List[Block], copy: bool = True) -> T:
+ def _combine(
+ self: T, blocks: List[Block], copy: bool = True, index: Optional[Index] = None
+ ) -> T:
""" return a new manager with the blocks """
if len(blocks) == 0:
return self.make_empty()
@@ -716,6 +745,8 @@ def _combine(self: T, blocks: List[Block], copy: bool = True) -> T:
new_blocks.append(b)
axes = list(self.axes)
+ if index is not None:
+ axes[-1] = index
axes[0] = self.items.take(indexer)
return type(self).from_blocks(new_blocks, axes)
| Moving towards collecting all of the ignore_failures code in one place.
The case where we have object dtypes is kept separate in this PR and will be handled in the next pass. | https://api.github.com/repos/pandas-dev/pandas/pulls/35881 | 2020-08-25T00:54:54Z | 2020-10-10T18:36:03Z | 2020-10-10T18:36:03Z | 2020-10-14T13:04:24Z |
Backport PR #35877 on branch 1.1.x (REGR: DatetimeIndex.intersection incorrectly raising AssertionError) | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index 81acd567027e5..97bd4dccdcd84 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -14,7 +14,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
-
+- Regression in :meth:`DatetimeIndex.intersection` incorrectly raising ``AssertionError`` when intersecting against a list (:issue:`35876`)
-
-
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 15a7e25238983..ab0b3a394446d 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -700,16 +700,16 @@ def intersection(self, other, sort=False):
if result.freq is None:
# TODO: no tests rely on this; needed?
result = result._with_freq("infer")
- assert result.name == res_name
+ result.name = res_name
return result
elif not self._can_fast_intersect(other):
result = Index.intersection(self, other, sort=sort)
- assert result.name == res_name
# We need to invalidate the freq because Index.intersection
# uses _shallow_copy on a view of self._data, which will preserve
# self.freq if we're not careful.
result = result._with_freq(None)._with_freq("infer")
+ result.name = res_name
return result
# to make our life easier, "sort" the two ranges
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index 6670b079ddd29..f19e78323ab23 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -470,6 +470,13 @@ def test_intersection_bug(self):
tm.assert_index_equal(result, b)
assert result.freq == b.freq
+ def test_intersection_list(self):
+ # GH#35876
+ values = [pd.Timestamp("2020-01-01"), pd.Timestamp("2020-02-01")]
+ idx = pd.DatetimeIndex(values, name="a")
+ res = idx.intersection(values)
+ tm.assert_index_equal(res, idx)
+
def test_month_range_union_tz_pytz(self, sort):
from pytz import timezone
| Backport PR #35877: REGR: DatetimeIndex.intersection incorrectly raising AssertionError | https://api.github.com/repos/pandas-dev/pandas/pulls/35879 | 2020-08-24T23:46:22Z | 2020-08-25T06:27:56Z | 2020-08-25T06:27:55Z | 2020-08-25T06:27:56Z |
REGR: DatetimeIndex.intersection incorrectly raising AssertionError | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index c1b73c60be92b..af61354470a71 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -14,7 +14,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
-
+- Regression in :meth:`DatetimeIndex.intersection` incorrectly raising ``AssertionError`` when intersecting against a list (:issue:`35876`)
-
-
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 6d9d75a69e91d..9d00f50a65a06 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -704,16 +704,16 @@ def intersection(self, other, sort=False):
if result.freq is None:
# TODO: no tests rely on this; needed?
result = result._with_freq("infer")
- assert result.name == res_name
+ result.name = res_name
return result
elif not self._can_fast_intersect(other):
result = Index.intersection(self, other, sort=sort)
- assert result.name == res_name
# We need to invalidate the freq because Index.intersection
# uses _shallow_copy on a view of self._data, which will preserve
# self.freq if we're not careful.
result = result._with_freq(None)._with_freq("infer")
+ result.name = res_name
return result
# to make our life easier, "sort" the two ranges
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index 6670b079ddd29..f19e78323ab23 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -470,6 +470,13 @@ def test_intersection_bug(self):
tm.assert_index_equal(result, b)
assert result.freq == b.freq
+ def test_intersection_list(self):
+ # GH#35876
+ values = [pd.Timestamp("2020-01-01"), pd.Timestamp("2020-02-01")]
+ idx = pd.DatetimeIndex(values, name="a")
+ res = idx.intersection(values)
+ tm.assert_index_equal(res, idx)
+
def test_month_range_union_tz_pytz(self, sort):
from pytz import timezone
| - [x] closes #35876
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
@dsaxton this PR's test and whatsnew note reference the underlying issue you identified in #35876, but not the OP issue. Do you think we should add a test/note for that directly? | https://api.github.com/repos/pandas-dev/pandas/pulls/35877 | 2020-08-24T21:39:04Z | 2020-08-24T23:45:46Z | 2020-08-24T23:45:46Z | 2020-08-25T00:06:31Z |
COMPAT: Ensure rolling indexers return intp during take operations | diff --git a/pandas/core/window/indexers.py b/pandas/core/window/indexers.py
index 7cbe34cdebf9f..7c76a8e2a0b22 100644
--- a/pandas/core/window/indexers.py
+++ b/pandas/core/window/indexers.py
@@ -7,6 +7,8 @@
from pandas._libs.window.indexers import calculate_variable_window_bounds
from pandas.util._decorators import Appender
+from pandas.core.dtypes.common import ensure_platform_int
+
from pandas.tseries.offsets import Nano
get_window_bounds_doc = """
@@ -296,9 +298,9 @@ def get_window_bounds(
start_arrays = []
end_arrays = []
window_indicies_start = 0
- for key, indicies in self.groupby_indicies.items():
+ for key, indices in self.groupby_indicies.items():
if self.index_array is not None:
- index_array = self.index_array.take(indicies)
+ index_array = self.index_array.take(ensure_platform_int(indices))
else:
index_array = self.index_array
indexer = self.rolling_indexer(
@@ -307,22 +309,22 @@ def get_window_bounds(
**self.indexer_kwargs,
)
start, end = indexer.get_window_bounds(
- len(indicies), min_periods, center, closed
+ len(indices), min_periods, center, closed
)
start = start.astype(np.int64)
end = end.astype(np.int64)
# Cannot use groupby_indicies as they might not be monotonic with the object
# we're rolling over
window_indicies = np.arange(
- window_indicies_start, window_indicies_start + len(indicies),
+ window_indicies_start, window_indicies_start + len(indices),
)
- window_indicies_start += len(indicies)
+ window_indicies_start += len(indices)
# Extend as we'll be slicing window like [start, end)
window_indicies = np.append(
window_indicies, [window_indicies[-1] + 1]
).astype(np.int64)
- start_arrays.append(window_indicies.take(start))
- end_arrays.append(window_indicies.take(end))
+ start_arrays.append(window_indicies.take(ensure_platform_int(start)))
+ end_arrays.append(window_indicies.take(ensure_platform_int(end)))
start = np.concatenate(start_arrays)
end = np.concatenate(end_arrays)
# GH 35552: Need to adjust start and end based on the nans appended to values
diff --git a/pandas/tests/resample/test_resampler_grouper.py b/pandas/tests/resample/test_resampler_grouper.py
index f18aaa5e86829..73bf7dafac254 100644
--- a/pandas/tests/resample/test_resampler_grouper.py
+++ b/pandas/tests/resample/test_resampler_grouper.py
@@ -7,7 +7,7 @@
from pandas.util._test_decorators import async_mark
import pandas as pd
-from pandas import DataFrame, Series, Timestamp, compat
+from pandas import DataFrame, Series, Timestamp
import pandas._testing as tm
from pandas.core.indexes.datetimes import date_range
@@ -319,7 +319,6 @@ def test_resample_groupby_with_label():
tm.assert_frame_equal(result, expected)
-@pytest.mark.xfail(not compat.IS64, reason="GH-35148")
def test_consistency_with_window():
# consistent return values with window
diff --git a/pandas/tests/window/test_api.py b/pandas/tests/window/test_api.py
index 28e27791cad35..2c3d8b4608806 100644
--- a/pandas/tests/window/test_api.py
+++ b/pandas/tests/window/test_api.py
@@ -6,7 +6,7 @@
import pandas.util._test_decorators as td
import pandas as pd
-from pandas import DataFrame, Index, Series, Timestamp, compat, concat
+from pandas import DataFrame, Index, Series, Timestamp, concat
import pandas._testing as tm
from pandas.core.base import SpecificationError
@@ -277,7 +277,7 @@ def test_preserve_metadata():
@pytest.mark.parametrize(
"func,window_size,expected_vals",
[
- pytest.param(
+ (
"rolling",
2,
[
@@ -289,7 +289,6 @@ def test_preserve_metadata():
[35.0, 40.0, 60.0, 40.0],
[60.0, 80.0, 85.0, 80],
],
- marks=pytest.mark.xfail(not compat.IS64, reason="GH-35294"),
),
(
"expanding",
diff --git a/pandas/tests/window/test_apply.py b/pandas/tests/window/test_apply.py
index 2aaf6af103e98..bc38634da8941 100644
--- a/pandas/tests/window/test_apply.py
+++ b/pandas/tests/window/test_apply.py
@@ -4,7 +4,7 @@
from pandas.errors import NumbaUtilError
import pandas.util._test_decorators as td
-from pandas import DataFrame, Index, MultiIndex, Series, Timestamp, compat, date_range
+from pandas import DataFrame, Index, MultiIndex, Series, Timestamp, date_range
import pandas._testing as tm
@@ -142,7 +142,6 @@ def test_invalid_kwargs_nopython():
@pytest.mark.parametrize("args_kwargs", [[None, {"par": 10}], [(10,), None]])
-@pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_rolling_apply_args_kwargs(args_kwargs):
# GH 33433
def foo(x, par):
diff --git a/pandas/tests/window/test_grouper.py b/pandas/tests/window/test_grouper.py
index d0a62374d0888..170bf100b3891 100644
--- a/pandas/tests/window/test_grouper.py
+++ b/pandas/tests/window/test_grouper.py
@@ -2,7 +2,7 @@
import pytest
import pandas as pd
-from pandas import DataFrame, Series, compat
+from pandas import DataFrame, Series
import pandas._testing as tm
from pandas.core.groupby.groupby import get_groupby
@@ -23,7 +23,6 @@ def test_mutated(self):
g = get_groupby(self.frame, by="A", mutated=True)
assert g.mutated
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_getitem(self):
g = self.frame.groupby("A")
g_mutated = get_groupby(self.frame, by="A", mutated=True)
@@ -56,7 +55,6 @@ def test_getitem_multiple(self):
result = r.B.count()
tm.assert_series_equal(result, expected)
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_rolling(self):
g = self.frame.groupby("A")
r = g.rolling(window=4)
@@ -74,7 +72,6 @@ def test_rolling(self):
@pytest.mark.parametrize(
"interpolation", ["linear", "lower", "higher", "midpoint", "nearest"]
)
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_rolling_quantile(self, interpolation):
g = self.frame.groupby("A")
r = g.rolling(window=4)
@@ -105,7 +102,6 @@ def func(x):
expected = g.apply(func)
tm.assert_series_equal(result, expected)
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_rolling_apply(self, raw):
g = self.frame.groupby("A")
r = g.rolling(window=4)
@@ -115,7 +111,6 @@ def test_rolling_apply(self, raw):
expected = g.apply(lambda x: x.rolling(4).apply(lambda y: y.sum(), raw=raw))
tm.assert_frame_equal(result, expected)
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_rolling_apply_mutability(self):
# GH 14013
df = pd.DataFrame({"A": ["foo"] * 3 + ["bar"] * 3, "B": [1] * 6})
@@ -197,7 +192,6 @@ def test_expanding_apply(self, raw):
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("expected_value,raw_value", [[1.0, True], [0.0, False]])
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_rolling(self, expected_value, raw_value):
# GH 31754
@@ -215,7 +209,6 @@ def foo(x):
)
tm.assert_series_equal(result, expected)
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_rolling_center_center(self):
# GH 35552
series = Series(range(1, 6))
@@ -281,7 +274,6 @@ def test_groupby_rolling_center_center(self):
)
tm.assert_frame_equal(result, expected)
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_subselect_rolling(self):
# GH 35486
df = DataFrame(
@@ -307,7 +299,6 @@ def test_groupby_subselect_rolling(self):
)
tm.assert_series_equal(result, expected)
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_rolling_custom_indexer(self):
# GH 35557
class SimpleIndexer(pd.api.indexers.BaseIndexer):
@@ -331,7 +322,6 @@ def get_window_bounds(
expected = df.groupby(df.index).rolling(window=3, min_periods=1).sum()
tm.assert_frame_equal(result, expected)
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_rolling_subset_with_closed(self):
# GH 35549
df = pd.DataFrame(
@@ -356,7 +346,6 @@ def test_groupby_rolling_subset_with_closed(self):
)
tm.assert_series_equal(result, expected)
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_subset_rolling_subset_with_closed(self):
# GH 35549
df = pd.DataFrame(
diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py
index bea239a245a4f..8d72e2cb92ca9 100644
--- a/pandas/tests/window/test_rolling.py
+++ b/pandas/tests/window/test_rolling.py
@@ -7,7 +7,7 @@
import pandas.util._test_decorators as td
import pandas as pd
-from pandas import DataFrame, Series, compat, date_range
+from pandas import DataFrame, Series, date_range
import pandas._testing as tm
from pandas.core.window import Rolling
@@ -150,7 +150,6 @@ def test_closed_one_entry(func):
@pytest.mark.parametrize("func", ["min", "max"])
-@pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_closed_one_entry_groupby(func):
# GH24718
ser = pd.DataFrame(
@@ -683,7 +682,6 @@ def test_iter_rolling_datetime(expected, expected_index, window):
),
],
)
-@pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_rolling_positional_argument(grouping, _index, raw):
# GH 34605
diff --git a/pandas/tests/window/test_timeseries_window.py b/pandas/tests/window/test_timeseries_window.py
index 90f919d5565b0..8aa4d7103e48a 100644
--- a/pandas/tests/window/test_timeseries_window.py
+++ b/pandas/tests/window/test_timeseries_window.py
@@ -7,7 +7,6 @@
MultiIndex,
Series,
Timestamp,
- compat,
date_range,
to_datetime,
)
@@ -657,7 +656,6 @@ def agg_by_day(x):
tm.assert_frame_equal(result, expected)
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_groupby_monotonic(self):
# GH 15130
@@ -687,7 +685,6 @@ def test_groupby_monotonic(self):
result = df.groupby("name").rolling("180D", on="date")["amount"].sum()
tm.assert_series_equal(result, expected)
- @pytest.mark.xfail(not compat.IS64, reason="GH-35294")
def test_non_monotonic(self):
# GH 13966 (similar to #15130, closed by #15175)
| - [x] closes #35294
- [x] closes #35148
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Tested locally with @TomAugspurger's docker container and the tests passed: https://github.com/pandas-dev/pandas/pull/35228#issuecomment-658833239
| https://api.github.com/repos/pandas-dev/pandas/pulls/35875 | 2020-08-24T17:55:45Z | 2020-08-24T23:42:02Z | 2020-08-24T23:42:02Z | 2020-10-28T15:55:27Z |
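The fix above wraps indices in `ensure_platform_int` before `take`, because `ndarray.take` wants platform-native (`np.intp`) indices and int64 indices can fail on 32-bit builds. A minimal standalone sketch of the helper (the real one lives in `pandas.core.dtypes.common`):

```python
import numpy as np

def ensure_platform_int(values: np.ndarray) -> np.ndarray:
    # Sketch of the pandas helper: cast integer indices to the
    # platform integer type (int32 on 32-bit, int64 on 64-bit) so
    # that ndarray.take accepts them everywhere.
    if values.dtype != np.intp:
        return values.astype(np.intp)
    return values

arr = np.array([10, 20, 30, 40])
indices = np.array([0, 2], dtype=np.int64)
result = arr.take(ensure_platform_int(indices))
print(result.tolist())  # [10, 30]
```

On 64-bit platforms the cast is a no-op, which is why the bug only surfaced under the `not compat.IS64` xfails the PR removes.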
BUG: to_dict_of_blocks failing to invalidate item_cache | diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index f05d4cf1c4be6..33aaf26698540 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -909,12 +909,7 @@ def to_dict(self, copy: bool = True):
Returns
-------
values : a dict of dtype -> BlockManager
-
- Notes
- -----
- This consolidates based on str(dtype)
"""
- self._consolidate_inplace()
bd: Dict[str, List[Block]] = {}
for b in self.blocks:
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index c9fec3215d57f..8ecd9066ceff0 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -626,3 +626,21 @@ def test_add_column_with_pandas_array(self):
assert type(df["c"]._mgr.blocks[0]) == ObjectBlock
assert type(df2["c"]._mgr.blocks[0]) == ObjectBlock
tm.assert_frame_equal(df, df2)
+
+
+def test_to_dict_of_blocks_item_cache():
+ # Calling to_dict_of_blocks should not poison item_cache
+ df = pd.DataFrame({"a": [1, 2, 3, 4], "b": ["a", "b", "c", "d"]})
+ df["c"] = pd.arrays.PandasArray(np.array([1, 2, None, 3], dtype=object))
+ mgr = df._mgr
+ assert len(mgr.blocks) == 3 # i.e. not consolidated
+
+ ser = df["b"] # populates item_cache["b"]
+
+ df._to_dict_of_blocks()
+
+ # Check that to_dict_of_blocks didn't break the link between ser and df
+ ser.values[0] = "foo"
+ assert df.loc[0, "b"] == "foo"
+
+ assert df["b"] is ser
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/35874 | 2020-08-24T16:15:02Z | 2020-08-25T12:58:44Z | 2020-08-25T12:58:44Z | 2020-08-25T15:13:21Z |
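The bug above is an instance of a general hazard: rebuilding backing storage (here, block consolidation inside `to_dict`) without invalidating an item cache leaves callers holding stale views. A hypothetical cache class (not pandas internals) illustrating the pattern:

```python
# Hypothetical Manager sketch: any rebuild of the underlying storage
# must clear the item cache, or previously returned objects silently
# detach from the parent.
class Manager:
    def __init__(self, data):
        self.data = {k: list(v) for k, v in data.items()}
        self._item_cache = {}

    def get(self, key):
        # serve cached objects so repeated lookups return the same view
        if key not in self._item_cache:
            self._item_cache[key] = self.data[key]
        return self._item_cache[key]

    def consolidate(self):
        # rebuild storage; clearing the cache keeps get() consistent
        self.data = {k: list(v) for k, v in self.data.items()}
        self._item_cache.clear()

mgr = Manager({"b": ["a", "b"]})
view = mgr.get("b")
mgr.consolidate()
print(mgr.get("b") is view)  # False: the old view is now stale
```

The PR resolves the pandas case the other way around: it drops the unnecessary `_consolidate_inplace()` call from `to_dict`, so the storage is never rebuilt and the cached `Series` stays linked to the frame.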
REF: simplify latex formatting | diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 3dc4290953360..bfe8ed8ddafd0 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -938,17 +938,18 @@ def to_latex(
"""
from pandas.io.formats.latex import LatexFormatter
- return LatexFormatter(
+ latex_formatter = LatexFormatter(
self,
- column_format=column_format,
longtable=longtable,
+ column_format=column_format,
multicolumn=multicolumn,
multicolumn_format=multicolumn_format,
multirow=multirow,
caption=caption,
label=label,
position=position,
- ).get_result(buf=buf, encoding=encoding)
+ )
+ return latex_formatter.get_result(buf=buf, encoding=encoding)
def _format_col(self, i: int) -> List[str]:
frame = self.tr_frame
diff --git a/pandas/io/formats/latex.py b/pandas/io/formats/latex.py
index 715b8bbdf5672..8080d953da308 100644
--- a/pandas/io/formats/latex.py
+++ b/pandas/io/formats/latex.py
@@ -1,7 +1,8 @@
"""
Module for formatting output data in Latex.
"""
-from typing import IO, List, Optional, Tuple
+from abc import ABC, abstractmethod
+from typing import IO, Iterator, List, Optional, Type
import numpy as np
@@ -10,56 +11,95 @@
from pandas.io.formats.format import DataFrameFormatter, TableFormatter
-class LatexFormatter(TableFormatter):
- """
- Used to render a DataFrame to a LaTeX tabular/longtable environment output.
+class RowStringConverter(ABC):
+ r"""Converter for dataframe rows into LaTeX strings.
Parameters
----------
formatter : `DataFrameFormatter`
- column_format : str, default None
- The columns format as specified in `LaTeX table format
- <https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g 'rcl' for 3 columns
- longtable : boolean, default False
- Use a longtable environment instead of tabular.
+ Instance of `DataFrameFormatter`.
+ multicolumn: bool, optional
+ Whether to use \multicolumn macro.
+ multicolumn_format: str, optional
+ Multicolumn format.
+ multirow: bool, optional
+ Whether to use \multirow macro.
- See Also
- --------
- HTMLFormatter
"""
def __init__(
self,
formatter: DataFrameFormatter,
- column_format: Optional[str] = None,
- longtable: bool = False,
multicolumn: bool = False,
multicolumn_format: Optional[str] = None,
multirow: bool = False,
- caption: Optional[str] = None,
- label: Optional[str] = None,
- position: Optional[str] = None,
):
self.fmt = formatter
self.frame = self.fmt.frame
- self.bold_rows = self.fmt.bold_rows
- self.column_format = column_format
- self.longtable = longtable
self.multicolumn = multicolumn
self.multicolumn_format = multicolumn_format
self.multirow = multirow
- self.caption = caption
- self.label = label
- self.escape = self.fmt.escape
- self.position = position
- self._table_float = any(p is not None for p in (caption, label, position))
+ self.clinebuf: List[List[int]] = []
+ self.strcols = self._get_strcols()
+ self.strrows: List[List[str]] = (
+ list(zip(*self.strcols)) # type: ignore[arg-type]
+ )
+
+ def get_strrow(self, row_num: int) -> str:
+ """Get string representation of the row."""
+ row = self.strrows[row_num]
+
+ is_multicol = (
+ row_num < self.column_levels and self.fmt.header and self.multicolumn
+ )
+
+ is_multirow = (
+ row_num >= self.header_levels
+ and self.fmt.index
+ and self.multirow
+ and self.index_levels > 1
+ )
+
+ is_cline_maybe_required = is_multirow and row_num < len(self.strrows) - 1
+
+ crow = self._preprocess_row(row)
+
+ if is_multicol:
+ crow = self._format_multicolumn(crow)
+ if is_multirow:
+ crow = self._format_multirow(crow, row_num)
+
+ lst = []
+ lst.append(" & ".join(crow))
+ lst.append(" \\\\")
+ if is_cline_maybe_required:
+ cline = self._compose_cline(row_num, len(self.strcols))
+ lst.append(cline)
+ return "".join(lst)
+
+ @property
+ def _header_row_num(self) -> int:
+ """Number of rows in header."""
+ return self.header_levels if self.fmt.header else 0
+
+ @property
+ def index_levels(self) -> int:
+ """Integer number of levels in index."""
+ return self.frame.index.nlevels
+
+ @property
+ def column_levels(self) -> int:
+ return self.frame.columns.nlevels
+
+ @property
+ def header_levels(self) -> int:
+ nlevels = self.column_levels
+ if self.fmt.has_index_names and self.fmt.show_index_names:
+ nlevels += 1
+ return nlevels
- def write_result(self, buf: IO[str]) -> None:
- """
- Render a DataFrame to a LaTeX tabular, longtable, or table/tabular
- environment output.
- """
- # string representation of the columns
+ def _get_strcols(self) -> List[List[str]]:
+ """String representation of the columns."""
if len(self.frame.columns) == 0 or len(self.frame.index) == 0:
info_line = (
f"Empty {type(self.frame).__name__}\n"
@@ -70,12 +110,6 @@ def write_result(self, buf: IO[str]) -> None:
else:
strcols = self.fmt._to_str_columns()
- def get_col_type(dtype):
- if issubclass(dtype.type, np.number):
- return "r"
- else:
- return "l"
-
# reestablish the MultiIndex that has been joined by _to_str_column
if self.fmt.index and isinstance(self.frame.index, ABCMultiIndex):
out = self.frame.index.format(
@@ -107,89 +141,19 @@ def pad_empties(x):
# Get rid of old multiindex column and add new ones
strcols = out + strcols[1:]
+ return strcols
- if self.column_format is None:
- dtypes = self.frame.dtypes._values
- column_format = "".join(map(get_col_type, dtypes))
- if self.fmt.index:
- index_format = "l" * self.frame.index.nlevels
- column_format = index_format + column_format
- elif not isinstance(self.column_format, str): # pragma: no cover
- raise AssertionError(
- f"column_format must be str or unicode, not {type(column_format)}"
- )
+ def _preprocess_row(self, row: List[str]) -> List[str]:
+ """Preprocess elements of the row."""
+ if self.fmt.escape:
+ crow = _escape_symbols(row)
else:
- column_format = self.column_format
-
- self._write_tabular_begin(buf, column_format)
-
- buf.write("\\toprule\n")
+ crow = [x if x else "{}" for x in row]
+ if self.fmt.bold_rows and self.fmt.index:
+ crow = _convert_to_bold(crow, self.index_levels)
+ return crow
- ilevels = self.frame.index.nlevels
- clevels = self.frame.columns.nlevels
- nlevels = clevels
- if self.fmt.has_index_names and self.fmt.show_index_names:
- nlevels += 1
- strrows = list(zip(*strcols))
- self.clinebuf: List[List[int]] = []
-
- for i, row in enumerate(strrows):
- if i == nlevels and self.fmt.header:
- buf.write("\\midrule\n") # End of header
- if self.longtable:
- buf.write("\\endhead\n")
- buf.write("\\midrule\n")
- buf.write(
- f"\\multicolumn{{{len(row)}}}{{r}}"
- "{{Continued on next page}} \\\\\n"
- )
- buf.write("\\midrule\n")
- buf.write("\\endfoot\n\n")
- buf.write("\\bottomrule\n")
- buf.write("\\endlastfoot\n")
- if self.escape:
- # escape backslashes first
- crow = [
- (
- x.replace("\\", "\\textbackslash ")
- .replace("_", "\\_")
- .replace("%", "\\%")
- .replace("$", "\\$")
- .replace("#", "\\#")
- .replace("{", "\\{")
- .replace("}", "\\}")
- .replace("~", "\\textasciitilde ")
- .replace("^", "\\textasciicircum ")
- .replace("&", "\\&")
- if (x and x != "{}")
- else "{}"
- )
- for x in row
- ]
- else:
- crow = [x if x else "{}" for x in row]
- if self.bold_rows and self.fmt.index:
- # bold row labels
- crow = [
- f"\\textbf{{{x}}}"
- if j < ilevels and x.strip() not in ["", "{}"]
- else x
- for j, x in enumerate(crow)
- ]
- if i < clevels and self.fmt.header and self.multicolumn:
- # sum up columns to multicolumns
- crow = self._format_multicolumn(crow, ilevels)
- if i >= nlevels and self.fmt.index and self.multirow and ilevels > 1:
- # sum up rows to multirows
- crow = self._format_multirow(crow, ilevels, i, strrows)
- buf.write(" & ".join(crow))
- buf.write(" \\\\\n")
- if self.multirow and i < len(strrows) - 1:
- self._print_cline(buf, i, len(strcols))
-
- self._write_tabular_end(buf)
-
- def _format_multicolumn(self, row: List[str], ilevels: int) -> List[str]:
+ def _format_multicolumn(self, row: List[str]) -> List[str]:
r"""
Combine columns belonging to a group to a single multicolumn entry
according to self.multicolumn_format
@@ -199,7 +163,7 @@ def _format_multicolumn(self, row: List[str], ilevels: int) -> List[str]:
will become
\multicolumn{3}{l}{a} & b & \multicolumn{2}{l}{c}
"""
- row2 = list(row[:ilevels])
+ row2 = row[: self.index_levels]
ncol = 1
coltext = ""
@@ -214,7 +178,7 @@ def append_col():
else:
row2.append(coltext)
- for c in row[ilevels:]:
+ for c in row[self.index_levels :]:
# if next col has text, write the previous
if c.strip():
if coltext:
@@ -229,9 +193,7 @@ def append_col():
append_col()
return row2
- def _format_multirow(
- self, row: List[str], ilevels: int, i: int, rows: List[Tuple[str, ...]]
- ) -> List[str]:
+ def _format_multirow(self, row: List[str], i: int) -> List[str]:
r"""
Check following rows, whether row should be a multirow
@@ -241,10 +203,10 @@ def _format_multirow(
b & 0 & \cline{1-2}
b & 0 &
"""
- for j in range(ilevels):
+ for j in range(self.index_levels):
if row[j].strip():
nrow = 1
- for r in rows[i + 1 :]:
+ for r in self.strrows[i + 1 :]:
if not r[j].strip():
nrow += 1
else:
@@ -256,88 +218,524 @@ def _format_multirow(
self.clinebuf.append([i + nrow - 1, j + 1])
return row
- def _print_cline(self, buf: IO[str], i: int, icol: int) -> None:
+ def _compose_cline(self, i: int, icol: int) -> str:
"""
- Print clines after multirow-blocks are finished.
+ Create clines after multirow-blocks are finished.
"""
+ lst = []
for cl in self.clinebuf:
if cl[0] == i:
- buf.write(f"\\cline{{{cl[1]:d}-{icol:d}}}\n")
- # remove entries that have been written to buffer
- self.clinebuf = [x for x in self.clinebuf if x[0] != i]
+ lst.append(f"\n\\cline{{{cl[1]:d}-{icol:d}}}")
+ # remove entries that have been written to buffer
+ self.clinebuf = [x for x in self.clinebuf if x[0] != i]
+ return "".join(lst)
+
+
+class RowStringIterator(RowStringConverter):
+ """Iterator over rows of the header or the body of the table."""
+
+ @abstractmethod
+ def __iter__(self) -> Iterator[str]:
+ """Iterate over LaTeX string representations of rows."""
+
+
+class RowHeaderIterator(RowStringIterator):
+ """Iterator for the table header rows."""
+
+ def __iter__(self) -> Iterator[str]:
+ for row_num in range(len(self.strrows)):
+ if row_num < self._header_row_num:
+ yield self.get_strrow(row_num)
+
+
+class RowBodyIterator(RowStringIterator):
+ """Iterator for the table body rows."""
+
+ def __iter__(self) -> Iterator[str]:
+ for row_num in range(len(self.strrows)):
+ if row_num >= self._header_row_num:
+ yield self.get_strrow(row_num)
- def _write_tabular_begin(self, buf, column_format: str):
- """
- Write the beginning of a tabular environment or
- nested table/tabular environments including caption and label.
+
+class TableBuilderAbstract(ABC):
+ """
+ Abstract table builder producing string representation of LaTeX table.
+
+ Parameters
+ ----------
+ formatter : `DataFrameFormatter`
+ Instance of `DataFrameFormatter`.
+ column_format: str, optional
+ Column format, for example, 'rcl' for three columns.
+ multicolumn: bool, optional
+ Use multicolumn to enhance MultiIndex columns.
+ multicolumn_format: str, optional
+ The alignment for multicolumns, similar to column_format.
+ multirow: bool, optional
+ Use multirow to enhance MultiIndex rows.
+ caption: str, optional
+ Table caption.
+ label: str, optional
+ LaTeX label.
+ position: str, optional
+ Float placement specifier, for example, 'htb'.
+ """
+
+ def __init__(
+ self,
+ formatter: DataFrameFormatter,
+ column_format: Optional[str] = None,
+ multicolumn: bool = False,
+ multicolumn_format: Optional[str] = None,
+ multirow: bool = False,
+ caption: Optional[str] = None,
+ label: Optional[str] = None,
+ position: Optional[str] = None,
+ ):
+ self.fmt = formatter
+ self.column_format = column_format
+ self.multicolumn = multicolumn
+ self.multicolumn_format = multicolumn_format
+ self.multirow = multirow
+ self.caption = caption
+ self.label = label
+ self.position = position
+
+ def get_result(self) -> str:
+ """String representation of LaTeX table."""
+ elements = [
+ self.env_begin,
+ self.top_separator,
+ self.header,
+ self.middle_separator,
+ self.env_body,
+ self.bottom_separator,
+ self.env_end,
+ ]
+ result = "\n".join([item for item in elements if item])
+ trailing_newline = "\n"
+ result += trailing_newline
+ return result
+
+ @property
+ @abstractmethod
+ def env_begin(self) -> str:
+ """Beginning of the environment."""
+
+ @property
+ @abstractmethod
+ def top_separator(self) -> str:
+ """Top level separator."""
+
+ @property
+ @abstractmethod
+ def header(self) -> str:
+ """Header lines."""
+
+ @property
+ @abstractmethod
+ def middle_separator(self) -> str:
+ """Middle level separator."""
+
+ @property
+ @abstractmethod
+ def env_body(self) -> str:
+ """Environment body."""
+
+ @property
+ @abstractmethod
+ def bottom_separator(self) -> str:
+ """Bottom level separator."""
+
+ @property
+ @abstractmethod
+ def env_end(self) -> str:
+ """End of the environment."""
+
+
+class GenericTableBuilder(TableBuilderAbstract):
+ """Table builder producing string representation of LaTeX table."""
+
+ @property
+ def header(self) -> str:
+ iterator = self._create_row_iterator(over="header")
+ return "\n".join(list(iterator))
+
+ @property
+ def top_separator(self) -> str:
+ return "\\toprule"
+
+ @property
+ def middle_separator(self) -> str:
+ return "\\midrule" if self._is_separator_required() else ""
+
+ @property
+ def env_body(self) -> str:
+ iterator = self._create_row_iterator(over="body")
+ return "\n".join(list(iterator))
+
+ def _is_separator_required(self) -> bool:
+ return bool(self.header and self.env_body)
+
+ @property
+ def _position_macro(self) -> str:
+ r"""Position macro, extracted from self.position, like [h]."""
+ return f"[{self.position}]" if self.position else ""
+
+ @property
+ def _caption_macro(self) -> str:
+ r"""Caption macro, extracted from self.caption, like \caption{cap}."""
+ return f"\\caption{{{self.caption}}}" if self.caption else ""
+
+ @property
+ def _label_macro(self) -> str:
+ r"""Label macro, extracted from self.label, like \label{ref}."""
+ return f"\\label{{{self.label}}}" if self.label else ""
+
+ def _create_row_iterator(self, over: str) -> RowStringIterator:
+ """Create iterator over header or body of the table.
Parameters
----------
- buf : string or file handle
- File path or object. If not specified, the result is returned as
- a string.
- column_format : str
- The columns format as specified in `LaTeX table format
- <https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g 'rcl'
- for 3 columns
+ over : {'body', 'header'}
+ Over what to iterate.
+
+ Returns
+ -------
+ RowStringIterator
+ Iterator over body or header.
"""
- if self._table_float:
- # then write output in a nested table/tabular or longtable environment
- if self.caption is None:
- caption_ = ""
- else:
- caption_ = f"\n\\caption{{{self.caption}}}"
+ iterator_kind = self._select_iterator(over)
+ return iterator_kind(
+ formatter=self.fmt,
+ multicolumn=self.multicolumn,
+ multicolumn_format=self.multicolumn_format,
+ multirow=self.multirow,
+ )
+
+ def _select_iterator(self, over: str) -> Type[RowStringIterator]:
+ """Select proper iterator over table rows."""
+ if over == "header":
+ return RowHeaderIterator
+ elif over == "body":
+ return RowBodyIterator
+ else:
+ msg = f"'over' must be either 'header' or 'body', but {over} was provided"
+ raise ValueError(msg)
+
+
+class LongTableBuilder(GenericTableBuilder):
+ """Concrete table builder for longtable.
+
+ >>> from pandas import DataFrame
+ >>> from pandas.io.formats import format as fmt
+ >>> df = DataFrame({"a": [1, 2], "b": ["b1", "b2"]})
+ >>> formatter = fmt.DataFrameFormatter(df)
+ >>> builder = LongTableBuilder(formatter, caption='caption', label='lab',
+ ... column_format='lrl')
+ >>> table = builder.get_result()
+ >>> print(table)
+ \\begin{longtable}{lrl}
+ \\caption{caption}
+ \\label{lab}\\\\
+ \\toprule
+ {} & a & b \\\\
+ \\midrule
+ \\endhead
+ \\midrule
+ \\multicolumn{3}{r}{{Continued on next page}} \\\\
+ \\midrule
+ \\endfoot
+ <BLANKLINE>
+ \\bottomrule
+ \\endlastfoot
+ 0 & 1 & b1 \\\\
+ 1 & 2 & b2 \\\\
+ \\end{longtable}
+ <BLANKLINE>
+ """
- if self.label is None:
- label_ = ""
- else:
- label_ = f"\n\\label{{{self.label}}}"
+ @property
+ def env_begin(self) -> str:
+ first_row = (
+ f"\\begin{{longtable}}{self._position_macro}{{{self.column_format}}}"
+ )
+ elements = [first_row, f"{self._caption_and_label()}"]
+ return "\n".join([item for item in elements if item])
+
+ def _caption_and_label(self) -> str:
+ if self.caption or self.label:
+ double_backslash = "\\\\"
+ elements = [f"{self._caption_macro}", f"{self._label_macro}"]
+ caption_and_label = "\n".join([item for item in elements if item])
+ caption_and_label += double_backslash
+ return caption_and_label
+ else:
+ return ""
+
+ @property
+ def middle_separator(self) -> str:
+ iterator = self._create_row_iterator(over="header")
+ elements = [
+ "\\midrule",
+ "\\endhead",
+ "\\midrule",
+ f"\\multicolumn{{{len(iterator.strcols)}}}{{r}}"
+ "{{Continued on next page}} \\\\",
+ "\\midrule",
+ "\\endfoot\n",
+ "\\bottomrule",
+ "\\endlastfoot",
+ ]
+ if self._is_separator_required():
+ return "\n".join(elements)
+ return ""
+
+ @property
+ def bottom_separator(self) -> str:
+ return ""
+
+ @property
+ def env_end(self) -> str:
+ return "\\end{longtable}"
+
+
+class RegularTableBuilder(GenericTableBuilder):
+ """Concrete table builder for regular table.
+
+ >>> from pandas import DataFrame
+ >>> from pandas.io.formats import format as fmt
+ >>> df = DataFrame({"a": [1, 2], "b": ["b1", "b2"]})
+ >>> formatter = fmt.DataFrameFormatter(df)
+ >>> builder = RegularTableBuilder(formatter, caption='caption', label='lab',
+ ... column_format='lrc')
+ >>> table = builder.get_result()
+ >>> print(table)
+ \\begin{table}
+ \\centering
+ \\caption{caption}
+ \\label{lab}
+ \\begin{tabular}{lrc}
+ \\toprule
+ {} & a & b \\\\
+ \\midrule
+ 0 & 1 & b1 \\\\
+ 1 & 2 & b2 \\\\
+ \\bottomrule
+ \\end{tabular}
+ \\end{table}
+ <BLANKLINE>
+ """
- if self.position is None:
- position_ = ""
- else:
- position_ = f"[{self.position}]"
+ @property
+ def env_begin(self) -> str:
+ elements = [
+ f"\\begin{{table}}{self._position_macro}",
+ "\\centering",
+ f"{self._caption_macro}",
+ f"{self._label_macro}",
+ f"\\begin{{tabular}}{{{self.column_format}}}",
+ ]
+ return "\n".join([item for item in elements if item])
+
+ @property
+ def bottom_separator(self) -> str:
+ return "\\bottomrule"
+
+ @property
+ def env_end(self) -> str:
+ return "\n".join(["\\end{tabular}", "\\end{table}"])
+
+
+class TabularBuilder(GenericTableBuilder):
+ """Concrete table builder for tabular environment.
+
+ >>> from pandas import DataFrame
+ >>> from pandas.io.formats import format as fmt
+ >>> df = DataFrame({"a": [1, 2], "b": ["b1", "b2"]})
+ >>> formatter = fmt.DataFrameFormatter(df)
+ >>> builder = TabularBuilder(formatter, column_format='lrc')
+ >>> table = builder.get_result()
+ >>> print(table)
+ \\begin{tabular}{lrc}
+ \\toprule
+ {} & a & b \\\\
+ \\midrule
+ 0 & 1 & b1 \\\\
+ 1 & 2 & b2 \\\\
+ \\bottomrule
+ \\end{tabular}
+ <BLANKLINE>
+ """
- if self.longtable:
- table_ = f"\\begin{{longtable}}{position_}{{{column_format}}}"
- tabular_ = "\n"
- else:
- table_ = f"\\begin{{table}}{position_}\n\\centering"
- tabular_ = f"\n\\begin{{tabular}}{{{column_format}}}\n"
-
- if self.longtable and (self.caption is not None or self.label is not None):
- # a double-backslash is required at the end of the line
- # as discussed here:
- # https://tex.stackexchange.com/questions/219138
- backlash_ = "\\\\"
- else:
- backlash_ = ""
- buf.write(f"{table_}{caption_}{label_}{backlash_}{tabular_}")
- else:
- if self.longtable:
- tabletype_ = "longtable"
- else:
- tabletype_ = "tabular"
- buf.write(f"\\begin{{{tabletype_}}}{{{column_format}}}\n")
+ @property
+ def env_begin(self) -> str:
+ return f"\\begin{{tabular}}{{{self.column_format}}}"
+
+ @property
+ def bottom_separator(self) -> str:
+ return "\\bottomrule"
+
+ @property
+ def env_end(self) -> str:
+ return "\\end{tabular}"
+
+
+class LatexFormatter(TableFormatter):
+ """
+ Used to render a DataFrame to a LaTeX tabular/longtable environment output.
+
+ Parameters
+ ----------
+ formatter : `DataFrameFormatter`
+ column_format : str, default None
+ The columns format as specified in `LaTeX table format
+ <https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g 'rcl' for 3 columns
+
+ See Also
+ --------
+ HTMLFormatter
+ """
+
+ def __init__(
+ self,
+ formatter: DataFrameFormatter,
+ longtable: bool = False,
+ column_format: Optional[str] = None,
+ multicolumn: bool = False,
+ multicolumn_format: Optional[str] = None,
+ multirow: bool = False,
+ caption: Optional[str] = None,
+ label: Optional[str] = None,
+ position: Optional[str] = None,
+ ):
+ self.fmt = formatter
+ self.frame = self.fmt.frame
+ self.longtable = longtable
+ self.column_format = column_format # type: ignore[assignment]
+ self.multicolumn = multicolumn
+ self.multicolumn_format = multicolumn_format
+ self.multirow = multirow
+ self.caption = caption
+ self.label = label
+ self.position = position
- def _write_tabular_end(self, buf):
+ def write_result(self, buf: IO[str]) -> None:
"""
- Write the end of a tabular environment or nested table/tabular
- environment.
+ Render a DataFrame to a LaTeX tabular, longtable, or table/tabular
+ environment output.
+ """
+ table_string = self.builder.get_result()
+ buf.write(table_string)
- Parameters
- ----------
- buf : string or file handle
- File path or object. If not specified, the result is returned as
- a string.
+ @property
+ def builder(self) -> TableBuilderAbstract:
+ """Concrete table builder.
+ Returns
+ -------
+ TableBuilder
"""
+ builder = self._select_builder()
+ return builder(
+ formatter=self.fmt,
+ column_format=self.column_format,
+ multicolumn=self.multicolumn,
+ multicolumn_format=self.multicolumn_format,
+ multirow=self.multirow,
+ caption=self.caption,
+ label=self.label,
+ position=self.position,
+ )
+
+ def _select_builder(self) -> Type[TableBuilderAbstract]:
+ """Select proper table builder."""
if self.longtable:
- buf.write("\\end{longtable}\n")
+ return LongTableBuilder
+ if any([self.caption, self.label, self.position]):
+ return RegularTableBuilder
+ return TabularBuilder
+
+ @property
+ def column_format(self) -> str:
+ """Column format."""
+ return self._column_format
+
+ @column_format.setter
+ def column_format(self, input_column_format: Optional[str]) -> None:
+ """Setter for column format."""
+ if input_column_format is None:
+ self._column_format = (
+ self._get_index_format() + self._get_column_format_based_on_dtypes()
+ )
+ elif not isinstance(input_column_format, str):
+ raise ValueError(
+ f"column_format must be str or unicode, "
+ f"not {type(input_column_format)}"
+ )
else:
- buf.write("\\bottomrule\n")
- buf.write("\\end{tabular}\n")
- if self._table_float:
- buf.write("\\end{table}\n")
- else:
- pass
+ self._column_format = input_column_format
+
+ def _get_column_format_based_on_dtypes(self) -> str:
+ """Get column format based on data type.
+
+ Right alignment for numbers and left - for strings.
+ """
+
+ def get_col_type(dtype):
+ if issubclass(dtype.type, np.number):
+ return "r"
+ return "l"
+
+ dtypes = self.frame.dtypes._values
+ return "".join(map(get_col_type, dtypes))
+
+ def _get_index_format(self) -> str:
+ """Get index column format."""
+ return "l" * self.frame.index.nlevels if self.fmt.index else ""
+
+
+def _escape_symbols(row: List[str]) -> List[str]:
+ """Carry out string replacements for special symbols.
+
+ Parameters
+ ----------
+ row : list
+ List of string, that may contain special symbols.
+
+ Returns
+ -------
+ list
+ list of strings with the special symbols replaced.
+ """
+ return [
+ (
+ x.replace("\\", "\\textbackslash ")
+ .replace("_", "\\_")
+ .replace("%", "\\%")
+ .replace("$", "\\$")
+ .replace("#", "\\#")
+ .replace("{", "\\{")
+ .replace("}", "\\}")
+ .replace("~", "\\textasciitilde ")
+ .replace("^", "\\textasciicircum ")
+ .replace("&", "\\&")
+ if (x and x != "{}")
+ else "{}"
+ )
+ for x in row
+ ]
+
+
+def _convert_to_bold(crow: List[str], ilevels: int) -> List[str]:
+ """Convert elements in ``crow`` to bold."""
+ return [
+ f"\\textbf{{{x}}}" if j < ilevels and x.strip() not in ["", "{}"] else x
+ for j, x in enumerate(crow)
+ ]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index 96a9ed2b86cf4..a2cb8f52dfd5b 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -7,6 +7,14 @@
from pandas import DataFrame, Series
import pandas._testing as tm
+from pandas.io.formats.format import DataFrameFormatter
+from pandas.io.formats.latex import (
+ RegularTableBuilder,
+ RowBodyIterator,
+ RowHeaderIterator,
+ RowStringConverter,
+)
+
class TestToLatex:
def test_to_latex_filename(self, float_frame):
@@ -60,6 +68,16 @@ def test_to_latex(self, float_frame):
assert withoutindex_result == withoutindex_expected
+ @pytest.mark.parametrize(
+ "bad_column_format",
+ [5, 1.2, ["l", "r"], ("r", "c"), {"r", "c", "l"}, dict(a="r", b="l")],
+ )
+ def test_to_latex_bad_column_format(self, bad_column_format):
+ df = DataFrame({"a": [1, 2], "b": ["b1", "b2"]})
+ msg = r"column_format must be str or unicode"
+ with pytest.raises(ValueError, match=msg):
+ df.to_latex(column_format=bad_column_format)
+
def test_to_latex_format(self, float_frame):
# GH Bug #9402
float_frame.to_latex(column_format="ccc")
@@ -930,3 +948,87 @@ def test_to_latex_multindex_header(self):
\end{tabular}
"""
assert observed == expected
+
+
+class TestTableBuilder:
+ @pytest.fixture
+ def dataframe(self):
+ return DataFrame({"a": [1, 2], "b": ["b1", "b2"]})
+
+ @pytest.fixture
+ def table_builder(self, dataframe):
+ return RegularTableBuilder(formatter=DataFrameFormatter(dataframe))
+
+ def test_create_row_iterator(self, table_builder):
+ iterator = table_builder._create_row_iterator(over="header")
+ assert isinstance(iterator, RowHeaderIterator)
+
+ def test_create_body_iterator(self, table_builder):
+ iterator = table_builder._create_row_iterator(over="body")
+ assert isinstance(iterator, RowBodyIterator)
+
+ def test_create_body_wrong_kwarg_raises(self, table_builder):
+ with pytest.raises(ValueError, match="must be either 'header' or 'body'"):
+ table_builder._create_row_iterator(over="SOMETHING BAD")
+
+
+class TestRowStringConverter:
+ @pytest.mark.parametrize(
+ "row_num, expected",
+ [
+ (0, r"{} & Design & ratio & xy \\"),
+ (1, r"0 & 1 & 4 & 10 \\"),
+ (2, r"1 & 2 & 5 & 11 \\"),
+ ],
+ )
+ def test_get_strrow_normal_without_escape(self, row_num, expected):
+ df = DataFrame({r"Design": [1, 2, 3], r"ratio": [4, 5, 6], r"xy": [10, 11, 12]})
+ row_string_converter = RowStringConverter(
+ formatter=DataFrameFormatter(df, escape=True),
+ )
+ assert row_string_converter.get_strrow(row_num=row_num) == expected
+
+ @pytest.mark.parametrize(
+ "row_num, expected",
+ [
+ (0, r"{} & Design \# & ratio, \% & x\&y \\"),
+ (1, r"0 & 1 & 4 & 10 \\"),
+ (2, r"1 & 2 & 5 & 11 \\"),
+ ],
+ )
+ def test_get_strrow_normal_with_escape(self, row_num, expected):
+ df = DataFrame(
+ {r"Design #": [1, 2, 3], r"ratio, %": [4, 5, 6], r"x&y": [10, 11, 12]}
+ )
+ row_string_converter = RowStringConverter(
+ formatter=DataFrameFormatter(df, escape=True),
+ )
+ assert row_string_converter.get_strrow(row_num=row_num) == expected
+
+ @pytest.mark.parametrize(
+ "row_num, expected",
+ [
+ (0, r"{} & \multicolumn{2}{r}{c1} & \multicolumn{2}{r}{c2} & c3 \\"),
+ (1, r"{} & 0 & 1 & 0 & 1 & 0 \\"),
+ (2, r"0 & 0 & 5 & 0 & 5 & 0 \\"),
+ ],
+ )
+ def test_get_strrow_multindex_multicolumn(self, row_num, expected):
+ df = DataFrame(
+ {
+ ("c1", 0): {x: x for x in range(5)},
+ ("c1", 1): {x: x + 5 for x in range(5)},
+ ("c2", 0): {x: x for x in range(5)},
+ ("c2", 1): {x: x + 5 for x in range(5)},
+ ("c3", 0): {x: x for x in range(5)},
+ }
+ )
+
+ row_string_converter = RowStringConverter(
+ formatter=DataFrameFormatter(df),
+ multicolumn=True,
+ multicolumn_format="r",
+ multirow=True,
+ )
+
+ assert row_string_converter.get_strrow(row_num=row_num) == expected
| - [x] closes https://github.com/pandas-dev/pandas/issues/35790
- [x] tests added ``pandas/tests/io/formats/test_to_latex.py``
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Refactor ``to_latex`` using polymorphism and the builder design pattern.
Polymorphism
--------------
Previously, complicated logic for either longtable or regular table output
was spread across multiple methods.
This PR implements various builders:
- ``RegularTableBuilder``
- ``LongTableBuilder``
- ``TabularBuilder``
Selection of the appropriate builder is carried out in ``LatexFormatter``,
depending on the kwargs provided.
Each builder must implement construction of all the table's components:
- Beginning and end of the environment,
- Separators (top, middle, bottom),
- Header,
- Body.
This allows one to eliminate complex logic scattered across multiple methods.
Separate Concerns
------------------
Row formatting into strings (handled by a separate ``RowStringConverter`` class)
is now carried out independently of the table builder.
Two classes derived from ``RowStringConverter`` iterate over the header or body rows of the table:
- ``RowHeaderIterator``
- ``RowBodyIterator``
Extract Methods
----------------
Multiple methods were extracted to improve code readability.
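The builder selection described above can be sketched in plain Python (the stub classes below are placeholders for the real builders, which live in ``pandas/io/formats/latex.py``):

```python
class LongTableBuilder: ...     # placeholder stub
class RegularTableBuilder: ...  # placeholder stub
class TabularBuilder: ...       # placeholder stub


def select_builder(longtable, caption=None, label=None, position=None):
    """Mirror of LatexFormatter._select_builder: longtable wins, then a
    table/tabular wrapper if any float-related kwarg is set, else plain
    tabular."""
    if longtable:
        return LongTableBuilder
    if any([caption, label, position]):
        return RegularTableBuilder
    return TabularBuilder
```

For example, passing only ``caption`` selects ``RegularTableBuilder``, while no kwargs at all falls through to ``TabularBuilder``.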
| https://api.github.com/repos/pandas-dev/pandas/pulls/35872 | 2020-08-24T14:32:13Z | 2020-09-07T18:58:51Z | 2020-09-07T18:58:51Z | 2020-11-06T15:34:44Z |
REF: implement Block.reduce for DataFrame._reduce | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 837bd35414773..606bd4cc3b52d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -8647,13 +8647,11 @@ def blk_func(values):
return op(values, axis=1, skipna=skipna, **kwds)
# After possibly _get_data and transposing, we are now in the
- # simple case where we can use BlockManager._reduce
+ # simple case where we can use BlockManager.reduce
res = df._mgr.reduce(blk_func)
- assert isinstance(res, dict)
- if len(res):
- assert len(res) == max(list(res.keys())) + 1, res.keys()
- out = df._constructor_sliced(res, index=range(len(res)), dtype=out_dtype)
- out.index = df.columns
+ out = df._constructor(res,).iloc[0].rename(None)
+ if out_dtype is not None:
+ out = out.astype(out_dtype)
if axis == 0 and is_object_dtype(out.dtype):
out[:] = coerce_to_dtypes(out.values, df.dtypes)
return out
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f3286b3c20965..c62be4f767f00 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -346,6 +346,21 @@ def apply(self, func, **kwargs) -> List["Block"]:
return self._split_op_result(result)
+ def reduce(self, func) -> List["Block"]:
+ # We will apply the function and reshape the result into a single-row
+ # Block with the same mgr_locs; squeezing will be done at a higher level
+ assert self.ndim == 2
+
+ result = func(self.values)
+ if np.ndim(result) == 0:
+ # TODO(EA2D): special case not needed with 2D EAs
+ res_values = np.array([[result]])
+ else:
+ res_values = result.reshape(-1, 1)
+
+ nb = self.make_block(res_values)
+ return [nb]
+
def _split_op_result(self, result) -> List["Block"]:
# See also: split_and_operate
if is_extension_array_dtype(result) and result.ndim > 1:
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index f05d4cf1c4be6..297ad3077ef1d 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -330,31 +330,18 @@ def _verify_integrity(self) -> None:
f"tot_items: {tot_items}"
)
- def reduce(self, func):
+ def reduce(self: T, func) -> T:
# If 2D, we assume that we're operating column-wise
- if self.ndim == 1:
- # we'll be returning a scalar
- blk = self.blocks[0]
- return func(blk.values)
+ assert self.ndim == 2
- res = {}
+ res_blocks = []
for blk in self.blocks:
- bres = func(blk.values)
-
- if np.ndim(bres) == 0:
- # EA
- assert blk.shape[0] == 1
- new_res = zip(blk.mgr_locs.as_array, [bres])
- else:
- assert bres.ndim == 1, bres.shape
- assert blk.shape[0] == len(bres), (blk.shape, bres.shape)
- new_res = zip(blk.mgr_locs.as_array, bres)
-
- nr = dict(new_res)
- assert not any(key in res for key in nr)
- res.update(nr)
+ nbs = blk.reduce(func)
+ res_blocks.extend(nbs)
- return res
+ index = Index([0]) # placeholder
+ new_mgr = BlockManager.from_blocks(res_blocks, [self.items, index])
+ return new_mgr
def operate_blockwise(self, other: "BlockManager", array_op) -> "BlockManager":
"""
| This lets us avoid reconstructing results from a dict and makes it feasible to use the same block-skipping code for DataFrame._reduce that we use for cython_agg_blocks and apply_blockwise. | https://api.github.com/repos/pandas-dev/pandas/pulls/35867 | 2020-08-24T02:57:40Z | 2020-08-24T23:35:41Z | 2020-08-24T23:35:41Z | 2020-08-25T00:08:39Z |
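The reshaping performed by the new ``Block.reduce`` can be modeled in pure Python, with nested lists standing in for 2D ndarrays (``reduce_block`` is a hypothetical name used only for illustration):

```python
def reduce_block(reduction):
    """Model of the reshaping in Block.reduce: a scalar reduction becomes
    a 1x1 block (the np.ndim(result) == 0 branch), while a length-n
    vector is reshaped into an n x 1 column (the result.reshape(-1, 1)
    branch), so every block yields a single-row-aligned result."""
    if not isinstance(reduction, list):   # scalar case
        return [[reduction]]
    return [[x] for x in reduction]       # vector case
```

Squeezing these single-row blocks back into a Series then happens at a higher level, as the comment in the diff notes.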
CLN/PERF: delay evaluation of get_day_of_month | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 7f0314d737619..161e5f4e54f51 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -3705,7 +3705,7 @@ cdef inline void _shift_months(const int64_t[:] dtindex,
"""See shift_months.__doc__"""
cdef:
Py_ssize_t i
- int months_to_roll, compare_day
+ int months_to_roll
npy_datetimestruct dts
for i in range(count):
@@ -3715,10 +3715,8 @@ cdef inline void _shift_months(const int64_t[:] dtindex,
dt64_to_dtstruct(dtindex[i], &dts)
months_to_roll = months
- compare_day = get_day_of_month(&dts, day_opt)
- months_to_roll = roll_convention(dts.day, months_to_roll,
- compare_day)
+ months_to_roll = _roll_qtrday(&dts, months_to_roll, 0, day_opt)
dts.year = year_add_months(dts, months_to_roll)
dts.month = month_add_months(dts, months_to_roll)
| I don't expect a major perf improvement, but in some corner cases we can avoid evaluating ``get_day_of_month``. | https://api.github.com/repos/pandas-dev/pandas/pulls/35866 | 2020-08-24T00:27:50Z | 2020-08-24T14:24:04Z | 2020-08-24T14:24:04Z | 2020-08-24T14:59:00Z |
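For context, the ``roll_convention`` helper that the old code fed an eagerly computed ``compare_day`` can be mirrored in pure Python; the PR routes through ``_roll_qtrday`` instead, so the comparison day need not always be evaluated:

```python
def roll_convention(other, n, compare):
    """Pure-Python mirror of the Cython roll_convention helper removed
    from _shift_months: adjust the shift count n depending on whether
    the current day-of-month (other) falls before or after the anchor
    day (compare)."""
    if n > 0 and other < compare:
        n -= 1       # haven't passed the anchor yet this period
    elif n <= 0 and other > compare:
        n += 1       # already past the anchor going backwards
    return n
```

For example, shifting forward by 2 from day 5 against an anchor of 15 rolls down to 1, while shifting back by 1 from day 20 rolls up to 0.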
REF: make window _apply_blockwise actually blockwise | diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index f7e81f41b8675..a70247d9f7f9c 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -66,7 +66,8 @@
from pandas.core.window.numba_ import generate_numba_apply_func
if TYPE_CHECKING:
- from pandas import Series
+ from pandas import DataFrame, Series
+ from pandas.core.internals import Block # noqa:F401
def calculate_center_offset(window) -> int:
@@ -418,35 +419,40 @@ def _wrap_results(self, results, obj, skipped: List[int]) -> FrameOrSeriesUnion:
for i in skipped:
exclude.extend(orig_blocks[i].columns)
- kept_blocks = [blk for i, blk in enumerate(orig_blocks) if i not in skipped]
-
- final = []
- for result, block in zip(results, kept_blocks):
-
- result = type(obj)(result, index=obj.index, columns=block.columns)
- final.append(result)
-
- exclude = exclude or []
columns = [c for c in self._selected_obj.columns if c not in exclude]
- if not columns and not len(final) and exclude:
+ if not columns and not len(results) and exclude:
raise DataError("No numeric types to aggregate")
- elif not len(final):
+ elif not len(results):
return obj.astype("float64")
- df = concat(final, axis=1).reindex(columns=columns, copy=False)
+ df = concat(results, axis=1).reindex(columns=columns, copy=False)
+ self._insert_on_column(df, obj)
+ return df
+ def _insert_on_column(self, result: "DataFrame", obj: "DataFrame"):
# if we have an 'on' column we want to put it back into
# the results in the same location
+ from pandas import Series
+
if self.on is not None and not self._on.equals(obj.index):
name = self._on.name
extra_col = Series(self._on, index=obj.index, name=name)
- if name not in df.columns and name not in df.index.names:
- new_loc = len(df.columns)
- df.insert(new_loc, name, extra_col)
- elif name in df.columns:
+ if name in result.columns:
# TODO: sure we want to overwrite results?
- df[name] = extra_col
- return df
+ result[name] = extra_col
+ elif name in result.index.names:
+ pass
+ elif name in self._selected_obj.columns:
+ # insert in the same location as we had in _selected_obj
+ old_cols = self._selected_obj.columns
+ new_cols = result.columns
+ old_loc = old_cols.get_loc(name)
+ overlap = new_cols.intersection(old_cols[:old_loc])
+ new_loc = len(overlap)
+ result.insert(new_loc, name, extra_col)
+ else:
+ # insert at the end
+ result[name] = extra_col
def _center_window(self, result, window) -> np.ndarray:
"""
@@ -530,21 +536,36 @@ def _apply_blockwise(
# This isn't quite blockwise, since `blocks` is actually a collection
# of homogenenous DataFrames.
blocks, obj = self._create_blocks(self._selected_obj)
+ mgr = obj._mgr
+
+ def hfunc(bvalues: ArrayLike) -> ArrayLike:
+ # TODO(EA2D): getattr unnecessary with 2D EAs
+ values = self._prep_values(getattr(bvalues, "T", bvalues))
+ res_values = homogeneous_func(values)
+ return getattr(res_values, "T", res_values)
skipped: List[int] = []
- results: List[ArrayLike] = []
- for i, b in enumerate(blocks):
+ res_blocks: List["Block"] = []
+ for i, blk in enumerate(mgr.blocks):
try:
- values = self._prep_values(b.values)
+ nbs = blk.apply(hfunc)
except (TypeError, NotImplementedError):
skipped.append(i)
continue
- result = homogeneous_func(values)
- results.append(result)
+ res_blocks.extend(nbs)
+
+ if not len(res_blocks) and skipped:
+ raise DataError("No numeric types to aggregate")
+ elif not len(res_blocks):
+ return obj.astype("float64")
- return self._wrap_results(results, obj, skipped)
+ new_cols = mgr.reset_dropped_locs(res_blocks, skipped)
+ new_mgr = type(mgr).from_blocks(res_blocks, [new_cols, obj.index])
+ out = obj._constructor(new_mgr)
+ self._insert_on_column(out, obj)
+ return out
def _apply(
self,
| There will be one more clean-up pass after this, kept separate to keep this diff targeted. | https://api.github.com/repos/pandas-dev/pandas/pulls/35861 | 2020-08-23T03:43:55Z | 2020-08-24T23:36:51Z | 2020-08-24T23:36:50Z | 2020-08-25T00:23:27Z |
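The insertion-position logic in the new ``_insert_on_column`` can be modeled with plain lists (``on_column_position`` is a hypothetical helper name for illustration):

```python
def on_column_position(old_cols, new_cols, name):
    """Where to re-insert the 'on' column: count how many of the columns
    that preceded it in the original frame survived into the result,
    mirroring overlap = new_cols.intersection(old_cols[:old_loc]) in
    the diff above."""
    old_loc = old_cols.index(name)
    overlap = [c for c in old_cols[:old_loc] if c in new_cols]
    return len(overlap)
```

So if a non-numeric column to the left of ``on`` was dropped by the rolling aggregation, ``on`` shifts left by one in the result rather than landing at its old index.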
DOC: Fix code of conduct link | diff --git a/web/pandas/about/team.md b/web/pandas/about/team.md
index 8eb2edebec817..39f63202e1986 100644
--- a/web/pandas/about/team.md
+++ b/web/pandas/about/team.md
@@ -2,7 +2,7 @@
## Contributors
-_pandas_ is made with love by more than [1,500 volunteer contributors](https://github.com/pandas-dev/pandas/graphs/contributors).
+_pandas_ is made with love by more than [2,000 volunteer contributors](https://github.com/pandas-dev/pandas/graphs/contributors).
If you want to support pandas development, you can find information in the [donations page](../donate.html).
@@ -42,7 +42,7 @@ If you want to support pandas development, you can find information in the [dona
> or anyone willing to increase the diversity of our team.
> We have identified visible gaps and obstacles in sustaining diversity and inclusion in the open-source communities and we are proactive in increasing
> the diversity of our team.
-> We have a [code of conduct]({base_url}/community/coc.html) to ensure a friendly and welcoming environment.
+> We have a [code of conduct](../community/coc.html) to ensure a friendly and welcoming environment.
> Please send an email to [pandas-code-of-conduct-committee](mailto:pandas-coc@googlegroups.com), if you think we can do a
> better job at achieving this goal.
| closes #35855
| https://api.github.com/repos/pandas-dev/pandas/pulls/35857 | 2020-08-22T18:35:39Z | 2020-08-23T07:40:30Z | 2020-08-23T07:40:30Z | 2020-08-23T07:40:30Z |
REF: use Block.apply in cython_agg_blocks | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 4b1f6cfe0a662..85bd67e526487 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1058,16 +1058,17 @@ def cast_agg_result(result, values: ArrayLike, how: str) -> ArrayLike:
# reshape to be valid for non-Extension Block
result = result.reshape(1, -1)
+ elif isinstance(result, np.ndarray) and result.ndim == 1:
+ # We went through a SeriesGroupByPath and need to reshape
+ result = result.reshape(1, -1)
+
return result
- def blk_func(block: "Block") -> List["Block"]:
- new_blocks: List["Block"] = []
+ def blk_func(bvalues: ArrayLike) -> ArrayLike:
- result = no_result
- locs = block.mgr_locs.as_array
try:
result, _ = self.grouper.aggregate(
- block.values, how, axis=1, min_count=min_count
+ bvalues, how, axis=1, min_count=min_count
)
except NotImplementedError:
# generally if we have numeric_only=False
@@ -1080,12 +1081,17 @@ def blk_func(block: "Block") -> List["Block"]:
assert how == "ohlc"
raise
+ obj: Union[Series, DataFrame]
# call our grouper again with only this block
- obj = self.obj[data.items[locs]]
- if obj.shape[1] == 1:
- # Avoid call to self.values that can occur in DataFrame
- # reductions; see GH#28949
- obj = obj.iloc[:, 0]
+ if isinstance(bvalues, ExtensionArray):
+ # TODO(EA2D): special case not needed with 2D EAs
+ obj = Series(bvalues)
+ else:
+ obj = DataFrame(bvalues.T)
+ if obj.shape[1] == 1:
+ # Avoid call to self.values that can occur in DataFrame
+ # reductions; see GH#28949
+ obj = obj.iloc[:, 0]
# Create SeriesGroupBy with observed=True so that it does
# not try to add missing categories if grouping over multiple
@@ -1103,21 +1109,14 @@ def blk_func(block: "Block") -> List["Block"]:
# unwrap DataFrame to get array
result = result._mgr.blocks[0].values
- if isinstance(result, np.ndarray) and result.ndim == 1:
- result = result.reshape(1, -1)
- res_values = cast_agg_result(result, block.values, how)
- agg_block = block.make_block(res_values)
- new_blocks = [agg_block]
- else:
- res_values = cast_agg_result(result, block.values, how)
- agg_block = block.make_block(res_values)
- new_blocks = [agg_block]
- return new_blocks
+
+ res_values = cast_agg_result(result, bvalues, how)
+ return res_values
skipped: List[int] = []
for i, block in enumerate(data.blocks):
try:
- nbs = blk_func(block)
+ nbs = block.apply(blk_func)
except (NotImplementedError, TypeError):
# TypeError -> we may have an exception in trying to aggregate
# continue and exclude the block
| Getting closer to making this use a BlockManager method | https://api.github.com/repos/pandas-dev/pandas/pulls/35854 | 2020-08-22T16:26:36Z | 2020-08-24T23:32:54Z | 2020-08-24T23:32:54Z | 2020-08-25T00:17:08Z |
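The skip-on-failure control flow around ``block.apply(blk_func)`` above can be sketched in pure Python (``aggregate_blocks`` is a hypothetical name; plain lists stand in for block values):

```python
def aggregate_blocks(blocks, blk_func):
    """Sketch of the loop in cython_agg_blocks: aggregate each block,
    excluding (rather than failing on) blocks whose dtype the
    aggregation cannot handle, and record which blocks were skipped."""
    results, skipped = [], []
    for i, values in enumerate(blocks):
        try:
            results.append(blk_func(values))
        except (NotImplementedError, TypeError):
            # e.g. trying a numeric reduction on an object block
            skipped.append(i)
    return results, skipped
```

With ``sum`` as the aggregation, a numeric block succeeds while a string block raises ``TypeError`` and is excluded instead of aborting the whole frame.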
DEPR: deprecate dtype param in Index.copy | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 09a5bcb0917c2..adc1806523d6e 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -143,7 +143,7 @@ See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for mor
Deprecations
~~~~~~~~~~~~
- Deprecated parameter ``inplace`` in :meth:`MultiIndex.set_codes` and :meth:`MultiIndex.set_levels` (:issue:`35626`)
--
+- Deprecated parameter ``dtype`` in :meth:`Index.copy` for all index classes. Use the :meth:`Index.astype` method instead to change the dtype (:issue:`35853`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 623ce68201492..ceb109fdf6d7a 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -800,6 +800,9 @@ def copy(self, name=None, deep=False, dtype=None, names=None):
deep : bool, default False
dtype : numpy dtype or pandas type, optional
Set dtype for new object.
+
+ .. deprecated:: 1.2.0
+ use ``astype`` method instead.
names : list-like, optional
Kept for compatibility with MultiIndex. Should not be used.
@@ -820,6 +823,12 @@ def copy(self, name=None, deep=False, dtype=None, names=None):
new_index = self._shallow_copy(name=name)
if dtype:
+ warnings.warn(
+ "parameter dtype is deprecated and will be removed in a future "
+ "version. Use the astype method instead.",
+ FutureWarning,
+ stacklevel=2,
+ )
new_index = new_index.astype(dtype)
return new_index
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index ffbd03d0c3ba7..b29c27982f087 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1030,7 +1030,6 @@ def _shallow_copy(
name=lib.no_default,
levels=None,
codes=None,
- dtype=None,
sortorder=None,
names=lib.no_default,
_set_identity: bool = True,
@@ -1041,7 +1040,7 @@ def _shallow_copy(
names = name if name is not lib.no_default else self.names
if values is not None:
- assert levels is None and codes is None and dtype is None
+ assert levels is None and codes is None
return MultiIndex.from_tuples(values, sortorder=sortorder, names=names)
levels = levels if levels is not None else self.levels
@@ -1050,7 +1049,6 @@ def _shallow_copy(
result = MultiIndex(
levels=levels,
codes=codes,
- dtype=dtype,
sortorder=sortorder,
names=names,
verify_integrity=False,
@@ -1092,6 +1090,8 @@ def copy(
----------
names : sequence, optional
dtype : numpy dtype or pandas type, optional
+
+ .. deprecated:: 1.2.0
levels : sequence, optional
codes : sequence, optional
deep : bool, default False
@@ -1117,15 +1117,24 @@ def copy(
if codes is None:
codes = deepcopy(self.codes)
- return self._shallow_copy(
+ new_index = self._shallow_copy(
levels=levels,
codes=codes,
names=names,
- dtype=dtype,
sortorder=self.sortorder,
_set_identity=_set_identity,
)
+ if dtype:
+ warnings.warn(
+ "parameter dtype is deprecated and will be removed in a future "
+ "version. Use the astype method instead.",
+ FutureWarning,
+ stacklevel=2,
+ )
+ new_index = new_index.astype(dtype)
+ return new_index
+
def __array__(self, dtype=None) -> np.ndarray:
""" the array interface, return my values """
return self.values
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index c65c3d5ff3d9c..c5572a9de7fa5 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -390,10 +390,17 @@ def _shallow_copy(self, values=None, name: Label = no_default):
@doc(Int64Index.copy)
def copy(self, name=None, deep=False, dtype=None, names=None):
- self._validate_dtype(dtype)
-
name = self._validate_names(name=name, names=names, deep=deep)[0]
new_index = self._shallow_copy(name=name)
+
+ if dtype:
+ warnings.warn(
+ "parameter dtype is deprecated and will be removed in a future "
+ "version. Use the astype method instead.",
+ FutureWarning,
+ stacklevel=2,
+ )
+ new_index = new_index.astype(dtype)
return new_index
def _minmax(self, meth: str):
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 98f7c0eadb4bb..e4d0b46f7c716 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -270,7 +270,7 @@ def test_copy_name(self, index):
s3 = s1 * s2
assert s3.index.name == "mario"
- def test_name2(self, index):
+ def test_copy_name2(self, index):
# gh-35592
if isinstance(index, MultiIndex):
return
@@ -284,6 +284,11 @@ def test_name2(self, index):
with pytest.raises(TypeError, match=msg):
index.copy(name=[["mario"]])
+ def test_copy_dtype_deprecated(self, index):
+ # GH35853
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ index.copy(dtype=object)
+
def test_ensure_copied_data(self, index):
# Check the "copy" argument of each Index.__new__ is honoured
# GH12309
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 70eb9e502f78a..aee4b16621b4d 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -62,11 +62,6 @@ def test_new_axis(self, index):
assert new_index.ndim == 2
assert isinstance(new_index, np.ndarray)
- @pytest.mark.parametrize("index", ["int", "uint", "float"], indirect=True)
- def test_copy_and_deepcopy(self, index):
- new_copy2 = index.copy(dtype=int)
- assert new_copy2.dtype.kind == "i"
-
def test_constructor_regular(self, index):
tm.assert_contains_all(index, index)
diff --git a/pandas/tests/indexes/test_common.py b/pandas/tests/indexes/test_common.py
index 02a173eb4958d..db260b71e7186 100644
--- a/pandas/tests/indexes/test_common.py
+++ b/pandas/tests/indexes/test_common.py
@@ -374,8 +374,7 @@ def test_has_duplicates(self, index):
"dtype",
["int64", "uint64", "float64", "category", "datetime64[ns]", "timedelta64[ns]"],
)
- @pytest.mark.parametrize("copy", [True, False])
- def test_astype_preserves_name(self, index, dtype, copy):
+ def test_astype_preserves_name(self, index, dtype):
# https://github.com/pandas-dev/pandas/issues/32013
if isinstance(index, MultiIndex):
index.names = ["idx" + str(i) for i in range(index.nlevels)]
@@ -384,10 +383,7 @@ def test_astype_preserves_name(self, index, dtype, copy):
try:
# Some of these conversions cannot succeed so we use a try / except
- if copy:
- result = index.copy(dtype=dtype)
- else:
- result = index.astype(dtype)
+ result = index.astype(dtype)
except (ValueError, TypeError, NotImplementedError, SystemError):
return
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index bfcac5d433d2c..e6f455e60eee3 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -394,7 +394,7 @@ def test_identical(self):
same_values_different_type = Index(i, dtype=object)
assert not i.identical(same_values_different_type)
- i = index.copy(dtype=object)
+ i = index.astype(dtype=object)
i = i.rename("foo")
same_values = Index(i, dtype=object)
assert same_values.identical(i)
@@ -402,7 +402,7 @@ def test_identical(self):
assert not i.identical(index)
assert Index(same_values, name="foo", dtype=object).identical(i)
- assert not index.copy(dtype=object).identical(index.copy(dtype=self._dtype))
+ assert not index.astype(dtype=object).identical(index.astype(dtype=self._dtype))
def test_union_noncomparable(self):
# corner case, non-Int64Index
| Deprecate ``dtype`` param in ``Index.copy`` and child methods. If users want to change dtype, they should use ``Index.astype``. | https://api.github.com/repos/pandas-dev/pandas/pulls/35853 | 2020-08-22T15:47:59Z | 2020-08-24T22:55:43Z | 2020-08-24T22:55:43Z | 2020-08-27T13:17:26Z |
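The deprecation pattern this PR applies to ``Index.copy`` can be sketched in plain Python (``copy_index`` is a hypothetical stand-in, not the real method):

```python
import warnings


def copy_index(values, dtype=None):
    """Sketch of the deprecation added to Index.copy: the dtype
    parameter keeps working for now, but emits a FutureWarning steering
    users toward .astype()."""
    new_index = list(values)  # stand-in for the real shallow copy
    if dtype:
        warnings.warn(
            "parameter dtype is deprecated and will be removed in a "
            "future version. Use the astype method instead.",
            FutureWarning,
            stacklevel=2,
        )
        new_index = [dtype(x) for x in new_index]  # stand-in for astype
    return new_index
```

Calling it without ``dtype`` stays silent; passing ``dtype`` still converts but raises the warning, matching the ``tm.assert_produces_warning(FutureWarning, ...)`` check in the new test.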
API: replace dropna=False option with na_sentinel=None in factorize | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index 9b1ad658d4666..fdfb084b47a89 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -35,6 +35,14 @@ Bug fixes
.. ---------------------------------------------------------------------------
+.. _whatsnew_112.other:
+
+Other
+~~~~~
+- :meth:`factorize` now supports ``na_sentinel=None`` to include NaN in the uniques of the values, and the ``dropna`` keyword, which was unintentionally exposed in the public API in version 1.1, has been removed from :meth:`factorize` (:issue:`35667`)
+
+.. ---------------------------------------------------------------------------
+
.. _whatsnew_112.contributors:
Contributors
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 6d6bb21165814..d2af6c132eca2 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -526,9 +526,8 @@ def _factorize_array(
def factorize(
values,
sort: bool = False,
- na_sentinel: int = -1,
+ na_sentinel: Optional[int] = -1,
size_hint: Optional[int] = None,
- dropna: bool = True,
) -> Tuple[np.ndarray, Union[np.ndarray, ABCIndex]]:
"""
Encode the object as an enumerated type or categorical variable.
@@ -541,8 +540,11 @@ def factorize(
Parameters
----------
{values}{sort}
- na_sentinel : int, default -1
- Value to mark "not found".
+ na_sentinel : int or None, default -1
+ Value to mark "not found". If None, will not drop the NaN
+ from the uniques of the values.
+
+ .. versionchanged:: 1.1.2
{size_hint}\
Returns
@@ -620,6 +622,22 @@ def factorize(
array([0, 0, 1]...)
>>> uniques
Index(['a', 'c'], dtype='object')
+
+ If NaN is in the values, and we want to include NaN in the uniques of the
+ values, it can be achieved by setting ``na_sentinel=None``.
+
+ >>> values = np.array([1, 2, 1, np.nan])
+ >>> codes, uniques = pd.factorize(values) # default: na_sentinel=-1
+ >>> codes
+ array([ 0, 1, 0, -1])
+ >>> uniques
+ array([1., 2.])
+
+ >>> codes, uniques = pd.factorize(values, na_sentinel=None)
+ >>> codes
+ array([0, 1, 0, 2])
+ >>> uniques
+ array([ 1., 2., nan])
"""
# Implementation notes: This method is responsible for 3 things
# 1.) coercing data to array-like (ndarray, Index, extension array)
@@ -633,6 +651,13 @@ def factorize(
values = _ensure_arraylike(values)
original = values
+ # GH35667, if na_sentinel=None, we will not dropna NaNs from the uniques
+ # of values, assign na_sentinel=-1 to replace code value for NaN.
+ dropna = True
+ if na_sentinel is None:
+ na_sentinel = -1
+ dropna = False
+
if is_extension_array_dtype(values.dtype):
values = extract_array(values)
codes, uniques = values.factorize(na_sentinel=na_sentinel)
diff --git a/pandas/core/base.py b/pandas/core/base.py
index b62ef668df5e1..1926803d8f04b 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1398,7 +1398,7 @@ def memory_usage(self, deep=False):
"""
),
)
- def factorize(self, sort=False, na_sentinel=-1):
+ def factorize(self, sort: bool = False, na_sentinel: Optional[int] = -1):
return algorithms.factorize(self, sort=sort, na_sentinel=na_sentinel)
_shared_docs[
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index 3017521c6a065..6678edc3821c8 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -587,8 +587,13 @@ def _make_codes(self) -> None:
codes = self.grouper.codes_info
uniques = self.grouper.result_index
else:
+ # GH35667, replace dropna=False with na_sentinel=None
+ if not self.dropna:
+ na_sentinel = None
+ else:
+ na_sentinel = -1
codes, uniques = algorithms.factorize(
- self.grouper, sort=self.sort, dropna=self.dropna
+ self.grouper, sort=self.sort, na_sentinel=na_sentinel
)
uniques = Index(uniques, name=self.name)
self._codes = codes
diff --git a/pandas/tests/base/test_factorize.py b/pandas/tests/base/test_factorize.py
index 415a8b7e4362f..9fad9856d53cc 100644
--- a/pandas/tests/base/test_factorize.py
+++ b/pandas/tests/base/test_factorize.py
@@ -26,3 +26,16 @@ def test_factorize(index_or_series_obj, sort):
tm.assert_numpy_array_equal(result_codes, expected_codes)
tm.assert_index_equal(result_uniques, expected_uniques)
+
+
+def test_series_factorize_na_sentinel_none():
+ # GH35667
+ values = np.array([1, 2, 1, np.nan])
+ ser = pd.Series(values)
+ codes, uniques = ser.factorize(na_sentinel=None)
+
+ expected_codes = np.array([0, 1, 0, 2], dtype="int64")
+ expected_uniques = pd.Index([1.0, 2.0, np.nan])
+
+ tm.assert_numpy_array_equal(codes, expected_codes)
+ tm.assert_index_equal(uniques, expected_uniques)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 67a2dc2303550..b4e97f1e341e4 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -340,73 +340,47 @@ def test_factorize_na_sentinel(self, sort, na_sentinel, data, uniques):
tm.assert_extension_array_equal(uniques, expected_uniques)
@pytest.mark.parametrize(
- "data, dropna, expected_codes, expected_uniques",
+ "data, expected_codes, expected_uniques",
[
(
["a", None, "b", "a"],
- True,
- np.array([0, -1, 1, 0], dtype=np.dtype("intp")),
- np.array(["a", "b"], dtype=object),
- ),
- (
- ["a", np.nan, "b", "a"],
- True,
- np.array([0, -1, 1, 0], dtype=np.dtype("intp")),
- np.array(["a", "b"], dtype=object),
- ),
- (
- ["a", None, "b", "a"],
- False,
np.array([0, 2, 1, 0], dtype=np.dtype("intp")),
np.array(["a", "b", np.nan], dtype=object),
),
(
["a", np.nan, "b", "a"],
- False,
np.array([0, 2, 1, 0], dtype=np.dtype("intp")),
np.array(["a", "b", np.nan], dtype=object),
),
],
)
- def test_object_factorize_dropna(
- self, data, dropna, expected_codes, expected_uniques
+ def test_object_factorize_na_sentinel_none(
+ self, data, expected_codes, expected_uniques
):
- codes, uniques = algos.factorize(data, dropna=dropna)
+ codes, uniques = algos.factorize(data, na_sentinel=None)
tm.assert_numpy_array_equal(uniques, expected_uniques)
tm.assert_numpy_array_equal(codes, expected_codes)
@pytest.mark.parametrize(
- "data, dropna, expected_codes, expected_uniques",
+ "data, expected_codes, expected_uniques",
[
(
[1, None, 1, 2],
- True,
- np.array([0, -1, 0, 1], dtype=np.dtype("intp")),
- np.array([1, 2], dtype="O"),
- ),
- (
- [1, np.nan, 1, 2],
- True,
- np.array([0, -1, 0, 1], dtype=np.dtype("intp")),
- np.array([1, 2], dtype=np.float64),
- ),
- (
- [1, None, 1, 2],
- False,
np.array([0, 2, 0, 1], dtype=np.dtype("intp")),
np.array([1, 2, np.nan], dtype="O"),
),
(
[1, np.nan, 1, 2],
- False,
np.array([0, 2, 0, 1], dtype=np.dtype("intp")),
np.array([1, 2, np.nan], dtype=np.float64),
),
],
)
- def test_int_factorize_dropna(self, data, dropna, expected_codes, expected_uniques):
- codes, uniques = algos.factorize(data, dropna=dropna)
+ def test_int_factorize_na_sentinel_none(
+ self, data, expected_codes, expected_uniques
+ ):
+ codes, uniques = algos.factorize(data, na_sentinel=None)
tm.assert_numpy_array_equal(uniques, expected_uniques)
tm.assert_numpy_array_equal(codes, expected_codes)
| - [x] closes #35667
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/35852 | 2020-08-22T10:50:31Z | 2020-09-02T15:00:50Z | 2020-09-02T15:00:50Z | 2020-09-02T19:41:06Z |
BUG: Index.get_slice_bounds does not accept datetime.date or tz naive datetime.datetimes | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 0cfe010b63a6f..9c8ee10a8a0af 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -176,7 +176,8 @@ Datetimelike
- Bug in :attr:`DatetimeArray.date` where a ``ValueError`` would be raised with a read-only backing array (:issue:`33530`)
- Bug in ``NaT`` comparisons failing to raise ``TypeError`` on invalid inequality comparisons (:issue:`35046`)
- Bug in :class:`DateOffset` where attributes reconstructed from pickle files differ from original objects when input values exceed normal ranges (e.g months=12) (:issue:`34511`)
--
+- Bug in :meth:`DatetimeIndex.get_slice_bound` where ``datetime.date`` objects or a naive :class:`Timestamp` were not accepted with a tz-aware :class:`DatetimeIndex` (:issue:`35690`)
+- Bug in :meth:`DatetimeIndex.slice_locs` where ``datetime.date`` objects were not accepted (:issue:`34077`)
Timedelta
^^^^^^^^^
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index e66f513e347a9..6dcb9250812d0 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -632,7 +632,7 @@ def get_loc(self, key, method=None, tolerance=None):
raise KeyError(orig_key) from err
def _maybe_cast_for_get_loc(self, key) -> Timestamp:
- # needed to localize naive datetimes
+ # needed to localize naive datetimes or dates (GH 35690)
key = Timestamp(key)
if key.tzinfo is None:
key = key.tz_localize(self.tz)
@@ -677,8 +677,7 @@ def _maybe_cast_slice_bound(self, label, side: str, kind):
if self._is_strictly_monotonic_decreasing and len(self) > 1:
return upper if side == "left" else lower
return lower if side == "left" else upper
- else:
- return label
+ return self._maybe_cast_for_get_loc(label)
def _get_string_slice(self, key: str, use_lhs: bool = True, use_rhs: bool = True):
freq = getattr(self, "freqstr", getattr(self, "inferred_freq", None))
diff --git a/pandas/tests/indexes/base_class/test_indexing.py b/pandas/tests/indexes/base_class/test_indexing.py
new file mode 100644
index 0000000000000..196c0401a72be
--- /dev/null
+++ b/pandas/tests/indexes/base_class/test_indexing.py
@@ -0,0 +1,26 @@
+import pytest
+
+from pandas import Index
+
+
+class TestGetSliceBounds:
+ @pytest.mark.parametrize("kind", ["getitem", "loc", None])
+ @pytest.mark.parametrize("side, expected", [("left", 4), ("right", 5)])
+ def test_get_slice_bounds_within(self, kind, side, expected):
+ index = Index(list("abcdef"))
+ result = index.get_slice_bound("e", kind=kind, side=side)
+ assert result == expected
+
+ @pytest.mark.parametrize("kind", ["getitem", "loc", None])
+ @pytest.mark.parametrize("side", ["left", "right"])
+ @pytest.mark.parametrize(
+ "data, bound, expected", [(list("abcdef"), "x", 6), (list("bcdefg"), "a", 0)],
+ )
+ def test_get_slice_bounds_outside(self, kind, side, expected, data, bound):
+ index = Index(data)
+ result = index.get_slice_bound(bound, kind=kind, side=side)
+ assert result == expected
+
+ def test_get_slice_bounds_invalid_side(self):
+ with pytest.raises(ValueError, match="Invalid value for side kwarg"):
+ Index([]).get_slice_bound("a", kind=None, side="middle")
diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py
index 5d2c6daba3f57..539d9cb8f06a7 100644
--- a/pandas/tests/indexes/datetimes/test_indexing.py
+++ b/pandas/tests/indexes/datetimes/test_indexing.py
@@ -6,7 +6,7 @@
from pandas.errors import InvalidIndexError
import pandas as pd
-from pandas import DatetimeIndex, Index, Timestamp, date_range, notna
+from pandas import DatetimeIndex, Index, Timestamp, bdate_range, date_range, notna
import pandas._testing as tm
from pandas.tseries.offsets import BDay, CDay
@@ -665,3 +665,43 @@ def test_get_value(self):
with tm.assert_produces_warning(FutureWarning):
result = dti.get_value(ser, key.to_datetime64())
assert result == 7
+
+
+class TestGetSliceBounds:
+ @pytest.mark.parametrize("box", [date, datetime, Timestamp])
+ @pytest.mark.parametrize("kind", ["getitem", "loc", None])
+ @pytest.mark.parametrize("side, expected", [("left", 4), ("right", 5)])
+ def test_get_slice_bounds_datetime_within(
+ self, box, kind, side, expected, tz_aware_fixture
+ ):
+ # GH 35690
+ index = bdate_range("2000-01-03", "2000-02-11").tz_localize(tz_aware_fixture)
+ result = index.get_slice_bound(
+ box(year=2000, month=1, day=7), kind=kind, side=side
+ )
+ assert result == expected
+
+ @pytest.mark.parametrize("box", [date, datetime, Timestamp])
+ @pytest.mark.parametrize("kind", ["getitem", "loc", None])
+ @pytest.mark.parametrize("side", ["left", "right"])
+ @pytest.mark.parametrize("year, expected", [(1999, 0), (2020, 30)])
+ def test_get_slice_bounds_datetime_outside(
+ self, box, kind, side, year, expected, tz_aware_fixture
+ ):
+ # GH 35690
+ index = bdate_range("2000-01-03", "2000-02-11").tz_localize(tz_aware_fixture)
+ result = index.get_slice_bound(
+ box(year=year, month=1, day=7), kind=kind, side=side
+ )
+ assert result == expected
+
+ @pytest.mark.parametrize("box", [date, datetime, Timestamp])
+ @pytest.mark.parametrize("kind", ["getitem", "loc", None])
+ def test_slice_datetime_locs(self, box, kind, tz_aware_fixture):
+ # GH 34077
+ index = DatetimeIndex(["2010-01-01", "2010-01-03"]).tz_localize(
+ tz_aware_fixture
+ )
+ result = index.slice_locs(box(2010, 1, 1), box(2010, 1, 2))
+ expected = (0, 1)
+ assert result == expected
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index e6f455e60eee3..1ffdbbc9afd3f 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -679,3 +679,22 @@ def test_float64_index_difference():
result = string_index.difference(float_index)
tm.assert_index_equal(result, string_index)
+
+
+class TestGetSliceBounds:
+ @pytest.mark.parametrize("kind", ["getitem", "loc", None])
+ @pytest.mark.parametrize("side, expected", [("left", 4), ("right", 5)])
+ def test_get_slice_bounds_within(self, kind, side, expected):
+ index = Index(range(6))
+ result = index.get_slice_bound(4, kind=kind, side=side)
+ assert result == expected
+
+ @pytest.mark.parametrize("kind", ["getitem", "loc", None])
+ @pytest.mark.parametrize("side", ["left", "right"])
+ @pytest.mark.parametrize(
+ "bound, expected", [(-1, 0), (10, 6)],
+ )
+ def test_get_slice_bounds_outside(self, kind, side, expected, bound):
+ index = Index(range(6))
+ result = index.get_slice_bound(bound, kind=kind, side=side)
+ assert result == expected
| - [x] closes #35690
- [x] closes #34077
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This method appeared to be under-tested, so I added some additional tests for numeric and object `Index`
| https://api.github.com/repos/pandas-dev/pandas/pulls/35848 | 2020-08-22T06:47:21Z | 2020-09-02T21:36:38Z | 2020-09-02T21:36:38Z | 2020-11-12T05:15:24Z |
CI: Revert 31323 for deprecation warning from Jedi | diff --git a/pandas/tests/arrays/categorical/test_warnings.py b/pandas/tests/arrays/categorical/test_warnings.py
index 9e164a250cdb1..f66c327e9967d 100644
--- a/pandas/tests/arrays/categorical/test_warnings.py
+++ b/pandas/tests/arrays/categorical/test_warnings.py
@@ -14,16 +14,6 @@ async def test_tab_complete_warning(self, ip):
code = "import pandas as pd; c = Categorical([])"
await ip.run_code(code)
-
- # GH 31324 newer jedi version raises Deprecation warning
- import jedi
-
- if jedi.__version__ < "0.16.0":
- warning = tm.assert_produces_warning(None)
- else:
- warning = tm.assert_produces_warning(
- DeprecationWarning, check_stacklevel=False
- )
- with warning:
+ with tm.assert_produces_warning(None):
with provisionalcompleter("ignore"):
list(ip.Completer.completions("c.", 1))
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 8db1bcc84bfa6..4fb5d07d720c1 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -2199,17 +2199,7 @@ async def test_tab_complete_warning(self, ip):
code = "import pandas as pd; idx = pd.Index([1, 2])"
await ip.run_code(code)
-
- # GH 31324 newer jedi version raises Deprecation warning
- import jedi
-
- if jedi.__version__ < "0.16.0":
- warning = tm.assert_produces_warning(None)
- else:
- warning = tm.assert_produces_warning(
- DeprecationWarning, check_stacklevel=False
- )
- with warning:
+ with tm.assert_produces_warning(None):
with provisionalcompleter("ignore"):
list(ip.Completer.completions("idx.", 4))
| - [x] closes #31407
ipython/ipython#12102 has been solved, and the new release came out a couple of weeks ago, so the revert should be fine according to https://github.com/ipython/ipython/issues/12102#issuecomment-670987484
| https://api.github.com/repos/pandas-dev/pandas/pulls/35845 | 2020-08-21T19:24:07Z | 2020-12-29T20:45:23Z | null | 2020-12-29T20:45:23Z |
REF: simplify _cython_agg_blocks | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 166631e69f523..60e23b14eaf09 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -24,7 +24,6 @@
Tuple,
Type,
Union,
- cast,
)
import warnings
@@ -1100,24 +1099,19 @@ def blk_func(block: "Block") -> List["Block"]:
# continue and exclude the block
raise
else:
- result = cast(DataFrame, result)
+ assert isinstance(result, (Series, DataFrame)) # for mypy
+ # In the case of object dtype block, it may have been split
+ # in the operation. We un-split here.
+ result = result._consolidate()
+ assert isinstance(result, (Series, DataFrame)) # for mypy
+ assert len(result._mgr.blocks) == 1
+
# unwrap DataFrame to get array
- if len(result._mgr.blocks) != 1:
- # We've split an object block! Everything we've assumed
- # about a single block input returning a single block output
- # is a lie. To keep the code-path for the typical non-split case
- # clean, we choose to clean up this mess later on.
- assert len(locs) == result.shape[1]
- for i, loc in enumerate(locs):
- agg_block = result.iloc[:, [i]]._mgr.blocks[0]
- agg_block.mgr_locs = [loc]
- new_blocks.append(agg_block)
- else:
- result = result._mgr.blocks[0].values
- if isinstance(result, np.ndarray) and result.ndim == 1:
- result = result.reshape(1, -1)
- agg_block = cast_result_block(result, block, how)
- new_blocks = [agg_block]
+ result = result._mgr.blocks[0].values
+ if isinstance(result, np.ndarray) and result.ndim == 1:
+ result = result.reshape(1, -1)
+ agg_block = cast_result_block(result, block, how)
+ new_blocks = [agg_block]
else:
agg_block = cast_result_block(result, block, how)
new_blocks = [agg_block]
| Orthogonal to #35839, though a rebase will be needed.
cc @TomAugspurger: did you already try this in #31616? If so, we need to identify a test case in which this doesn't work.
REF: remove unnecesary try/except | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 60e23b14eaf09..4b1f6cfe0a662 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -30,7 +30,7 @@
import numpy as np
from pandas._libs import lib
-from pandas._typing import FrameOrSeries, FrameOrSeriesUnion
+from pandas._typing import ArrayLike, FrameOrSeries, FrameOrSeriesUnion
from pandas.util._decorators import Appender, Substitution, doc
from pandas.core.dtypes.cast import (
@@ -59,6 +59,7 @@
validate_func_kwargs,
)
import pandas.core.algorithms as algorithms
+from pandas.core.arrays import ExtensionArray
from pandas.core.base import DataError, SpecificationError
import pandas.core.common as com
from pandas.core.construction import create_series_with_explicit_dtype
@@ -1033,32 +1034,31 @@ def _cython_agg_blocks(
no_result = object()
- def cast_result_block(result, block: "Block", how: str) -> "Block":
- # see if we can cast the block to the desired dtype
+ def cast_agg_result(result, values: ArrayLike, how: str) -> ArrayLike:
+ # see if we can cast the values to the desired dtype
# this may not be the original dtype
assert not isinstance(result, DataFrame)
assert result is not no_result
- dtype = maybe_cast_result_dtype(block.dtype, how)
+ dtype = maybe_cast_result_dtype(values.dtype, how)
result = maybe_downcast_numeric(result, dtype)
- if block.is_extension and isinstance(result, np.ndarray):
- # e.g. block.values was an IntegerArray
- # (1, N) case can occur if block.values was Categorical
+ if isinstance(values, ExtensionArray) and isinstance(result, np.ndarray):
+ # e.g. values was an IntegerArray
+ # (1, N) case can occur if values was Categorical
# and result is ndarray[object]
# TODO(EA2D): special casing not needed with 2D EAs
assert result.ndim == 1 or result.shape[0] == 1
try:
# Cast back if feasible
- result = type(block.values)._from_sequence(
- result.ravel(), dtype=block.values.dtype
+ result = type(values)._from_sequence(
+ result.ravel(), dtype=values.dtype
)
except (ValueError, TypeError):
# reshape to be valid for non-Extension Block
result = result.reshape(1, -1)
- agg_block: "Block" = block.make_block(result)
- return agg_block
+ return result
def blk_func(block: "Block") -> List["Block"]:
new_blocks: List["Block"] = []
@@ -1092,28 +1092,25 @@ def blk_func(block: "Block") -> List["Block"]:
# Categoricals. This will done by later self._reindex_output()
# Doing it here creates an error. See GH#34951
sgb = get_groupby(obj, self.grouper, observed=True)
- try:
- result = sgb.aggregate(lambda x: alt(x, axis=self.axis))
- except TypeError:
- # we may have an exception in trying to aggregate
- # continue and exclude the block
- raise
- else:
- assert isinstance(result, (Series, DataFrame)) # for mypy
- # In the case of object dtype block, it may have been split
- # in the operation. We un-split here.
- result = result._consolidate()
- assert isinstance(result, (Series, DataFrame)) # for mypy
- assert len(result._mgr.blocks) == 1
-
- # unwrap DataFrame to get array
- result = result._mgr.blocks[0].values
- if isinstance(result, np.ndarray) and result.ndim == 1:
- result = result.reshape(1, -1)
- agg_block = cast_result_block(result, block, how)
- new_blocks = [agg_block]
+ result = sgb.aggregate(lambda x: alt(x, axis=self.axis))
+
+ assert isinstance(result, (Series, DataFrame)) # for mypy
+ # In the case of object dtype block, it may have been split
+ # in the operation. We un-split here.
+ result = result._consolidate()
+ assert isinstance(result, (Series, DataFrame)) # for mypy
+ assert len(result._mgr.blocks) == 1
+
+ # unwrap DataFrame to get array
+ result = result._mgr.blocks[0].values
+ if isinstance(result, np.ndarray) and result.ndim == 1:
+ result = result.reshape(1, -1)
+ res_values = cast_agg_result(result, block.values, how)
+ agg_block = block.make_block(res_values)
+ new_blocks = [agg_block]
else:
- agg_block = cast_result_block(result, block, how)
+ res_values = cast_agg_result(result, block.values, how)
+ agg_block = block.make_block(res_values)
new_blocks = [agg_block]
return new_blocks
| and make cast_result_block into cast_agg_result operate on values instead of blocks | https://api.github.com/repos/pandas-dev/pandas/pulls/35839 | 2020-08-21T04:19:57Z | 2020-08-22T02:02:02Z | 2020-08-22T02:02:02Z | 2020-08-22T03:17:42Z |
Fix Series construction from Sparse["datetime64[ns]"] | diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
index c1b73c60be92b..c9ead81c6d780 100644
--- a/doc/source/whatsnew/v1.1.2.rst
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -25,6 +25,7 @@ Fixed regressions
Bug fixes
~~~~~~~~~
- Bug in :meth:`DataFrame.eval` with ``object`` dtype column binary operations (:issue:`35794`)
+- Bug in :class:`Series` constructor raising a ``TypeError`` when constructing sparse datetime64 dtypes (:issue:`35762`)
- Bug in :meth:`DataFrame.apply` with ``result_type="reduce"`` returning with incorrect index (:issue:`35683`)
-
-
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index 47f10f1f65f4a..e8c9f28e50084 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -35,6 +35,7 @@
is_iterator,
is_list_like,
is_object_dtype,
+ is_sparse,
is_timedelta64_ns_dtype,
)
from pandas.core.dtypes.generic import (
@@ -535,9 +536,10 @@ def _try_cast(
if maybe_castable(arr) and not copy and dtype is None:
return arr
- if isinstance(dtype, ExtensionDtype) and dtype.kind != "M":
+ if isinstance(dtype, ExtensionDtype) and (dtype.kind != "M" or is_sparse(dtype)):
# create an extension array from its dtype
- # DatetimeTZ case needs to go through maybe_cast_to_datetime
+ # DatetimeTZ case needs to go through maybe_cast_to_datetime but
+ # SparseDtype does not
array_type = dtype.construct_array_type()._from_sequence
subarr = array_type(arr, dtype=dtype, copy=copy)
return subarr
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 2697f42eb05a4..e6b4cb598989b 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -50,6 +50,7 @@
is_numeric_dtype,
is_object_dtype,
is_scalar,
+ is_sparse,
is_string_dtype,
is_timedelta64_dtype,
is_timedelta64_ns_dtype,
@@ -1323,7 +1324,9 @@ def maybe_cast_to_datetime(value, dtype, errors: str = "raise"):
f"Please pass in '{dtype.name}[ns]' instead."
)
- if is_datetime64 and not is_dtype_equal(dtype, DT64NS_DTYPE):
+ if is_datetime64 and not is_dtype_equal(
+ getattr(dtype, "subtype", dtype), DT64NS_DTYPE
+ ):
# pandas supports dtype whose granularity is less than [ns]
# e.g., [ps], [fs], [as]
@@ -1355,7 +1358,7 @@ def maybe_cast_to_datetime(value, dtype, errors: str = "raise"):
if is_scalar(value):
if value == iNaT or isna(value):
value = iNaT
- else:
+ elif not is_sparse(value):
value = np.array(value, copy=False)
# have a scalar array-like (e.g. NaT)
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 1dd410ad02ee0..bcf7039ec9039 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -1449,3 +1449,18 @@ def test_constructor_datetimelike_scalar_to_string_dtype(self):
result = Series("M", index=[1, 2, 3], dtype="string")
expected = pd.Series(["M", "M", "M"], index=[1, 2, 3], dtype="string")
tm.assert_series_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "values",
+ [
+ [np.datetime64("2012-01-01"), np.datetime64("2013-01-01")],
+ ["2012-01-01", "2013-01-01"],
+ ],
+ )
+ def test_constructor_sparse_datetime64(self, values):
+ # https://github.com/pandas-dev/pandas/issues/35762
+ dtype = pd.SparseDtype("datetime64[ns]")
+ result = pd.Series(values, dtype=dtype)
+ arr = pd.arrays.SparseArray(values, dtype=dtype)
+ expected = pd.Series(arr)
+ tm.assert_series_equal(result, expected)
| - [x] closes #35762
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/35838 | 2020-08-21T03:15:17Z | 2020-08-27T02:22:03Z | 2020-08-27T02:22:02Z | 2020-08-27T02:37:46Z |
Update SparseDtype user guide doc | diff --git a/doc/source/user_guide/sparse.rst b/doc/source/user_guide/sparse.rst
index ca8e9a2f313f6..35e0e0fb86472 100644
--- a/doc/source/user_guide/sparse.rst
+++ b/doc/source/user_guide/sparse.rst
@@ -87,14 +87,15 @@ The :attr:`SparseArray.dtype` property stores two pieces of information
sparr.dtype
-A :class:`SparseDtype` may be constructed by passing each of these
+A :class:`SparseDtype` may be constructed by passing only a dtype
.. ipython:: python
pd.SparseDtype(np.dtype('datetime64[ns]'))
-The default fill value for a given NumPy dtype is the "missing" value for that dtype,
-though it may be overridden.
+in which case a default fill value will be used (for NumPy dtypes this is often the
+"missing" value for that dtype). To override this default an explicit fill value may be
+passed instead
.. ipython:: python
| Tiny doc nit. The doc says you can pass both dtype and fill_value to SparseDtype but then passes only one in the example. | https://api.github.com/repos/pandas-dev/pandas/pulls/35837 | 2020-08-21T00:40:57Z | 2020-08-31T18:45:12Z | 2020-08-31T18:45:11Z | 2020-08-31T18:47:20Z |
CI: avoid file leak from ipython tests | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 97cc514e31bb3..0878380d00837 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1181,7 +1181,13 @@ def ip():
pytest.importorskip("IPython", minversion="6.0.0")
from IPython.core.interactiveshell import InteractiveShell
- return InteractiveShell()
+ # GH#35711 make sure sqlite history file handle is not leaked
+ from traitlets.config import Config # noqa: F401 isort:skip
+
+ c = Config()
+ c.HistoryManager.hist_file = ":memory:"
+
+ return InteractiveShell(config=c)
@pytest.fixture(params=["bsr", "coo", "csc", "csr", "dia", "dok", "lil"])
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 2fb1f7f911a9c..0716cf5e27119 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -6,6 +6,7 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
from pandas.util._test_decorators import async_mark, skip_if_no
import pandas as pd
@@ -521,6 +522,7 @@ def _check_f(base, f):
_check_f(d.copy(), f)
@async_mark()
+ @td.check_file_leaks
async def test_tab_complete_warning(self, ip):
# GH 16409
pytest.importorskip("IPython", minversion="6.0.0")
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 84805d06df4a8..1bbfe4d7d74af 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -19,6 +19,7 @@
import pytz
from pandas.compat import is_platform_32bit, is_platform_windows
+import pandas.util._test_decorators as td
import pandas as pd
from pandas import (
@@ -3338,6 +3339,7 @@ def test_format_percentiles_integer_idx():
assert result == expected
+@td.check_file_leaks
def test_repr_html_ipython_config(ip):
code = textwrap.dedent(
"""\
diff --git a/pandas/tests/resample/test_resampler_grouper.py b/pandas/tests/resample/test_resampler_grouper.py
index b36b11582c1ec..f18aaa5e86829 100644
--- a/pandas/tests/resample/test_resampler_grouper.py
+++ b/pandas/tests/resample/test_resampler_grouper.py
@@ -3,6 +3,7 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
from pandas.util._test_decorators import async_mark
import pandas as pd
@@ -17,6 +18,7 @@
@async_mark()
+@td.check_file_leaks
async def test_tab_complete_ipython6_warning(ip):
from IPython.core.completer import provisionalcompleter
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index b174eb0e42776..d81e8a4f82ffb 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -5,6 +5,7 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
from pandas.util._test_decorators import async_mark
import pandas as pd
@@ -486,6 +487,7 @@ def test_empty_method(self):
assert not full_series.empty
@async_mark()
+ @td.check_file_leaks
async def test_tab_complete_warning(self, ip):
# https://github.com/pandas-dev/pandas/issues/16409
pytest.importorskip("IPython", minversion="6.0.0")
| Broken off from #35711 | https://api.github.com/repos/pandas-dev/pandas/pulls/35836 | 2020-08-21T00:37:00Z | 2020-08-21T21:11:34Z | 2020-08-21T21:11:34Z | 2020-08-21T21:16:45Z |
MAINT: Manual Backport PR #35825 on branch 1.1.x | diff --git a/ci/deps/travis-36-locale.yaml b/ci/deps/travis-36-locale.yaml
index 03a1e751b6a86..8f7e29abc5f3b 100644
--- a/ci/deps/travis-36-locale.yaml
+++ b/ci/deps/travis-36-locale.yaml
@@ -28,6 +28,7 @@ dependencies:
- openpyxl
- pandas-gbq=0.12.0
- psycopg2=2.6.2
+ - pyarrow>=0.13.0 # GH #35813
- pymysql=0.7.11
- pytables
- python-dateutil
| Manual backport of below
Ref: https://github.com/pandas-dev/pandas/pull/35828#issuecomment-677917854
This is because we dropped 3.6 support on master (https://github.com/pandas-dev/pandas/pull/35214)
cc @simonjayhawkins
| https://api.github.com/repos/pandas-dev/pandas/pulls/35835 | 2020-08-20T23:28:05Z | 2020-08-21T08:23:56Z | 2020-08-21T08:23:56Z | 2020-08-21T08:24:10Z |
Backport PR #35825 on branch 1.1.x (DOC: Start 1.1.2) | diff --git a/doc/source/whatsnew/index.rst b/doc/source/whatsnew/index.rst
index 8ce10136dd2bb..1b5e63dfcf359 100644
--- a/doc/source/whatsnew/index.rst
+++ b/doc/source/whatsnew/index.rst
@@ -16,6 +16,7 @@ Version 1.1
.. toctree::
:maxdepth: 2
+ v1.1.2
v1.1.1
v1.1.0
diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 721f07c865409..77ea67f76f655 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -53,4 +53,4 @@ Bug fixes
Contributors
~~~~~~~~~~~~
-.. contributors:: v1.1.0..v1.1.1|HEAD
+.. contributors:: v1.1.0..v1.1.1
diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
new file mode 100644
index 0000000000000..81acd567027e5
--- /dev/null
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -0,0 +1,38 @@
+.. _whatsnew_112:
+
+What's new in 1.1.2 (??)
+------------------------
+
+These are the changes in pandas 1.1.2. See :ref:`release` for a full changelog
+including other versions of pandas.
+
+{{ header }}
+
+.. ---------------------------------------------------------------------------
+
+.. _whatsnew_112.regressions:
+
+Fixed regressions
+~~~~~~~~~~~~~~~~~
+
+-
+-
+
+.. ---------------------------------------------------------------------------
+
+.. _whatsnew_112.bug_fixes:
+
+Bug fixes
+~~~~~~~~~
+
+-
+-
+
+.. ---------------------------------------------------------------------------
+
+.. _whatsnew_112.contributors:
+
+Contributors
+~~~~~~~~~~~~
+
+.. contributors:: v1.1.1..v1.1.2|HEAD
| Backport PR #35825: DOC: Start 1.1.2 | https://api.github.com/repos/pandas-dev/pandas/pulls/35834 | 2020-08-20T22:19:11Z | 2020-08-21T09:07:34Z | 2020-08-21T09:07:34Z | 2020-08-21T09:07:34Z |
Backport PR #35757 on branch 1.1.x (CI: Unpin Pytest + Pytest Asyncio Min Version) | diff --git a/ci/deps/azure-36-locale.yaml b/ci/deps/azure-36-locale.yaml
index 3034ed3dc43af..536bb6f899773 100644
--- a/ci/deps/azure-36-locale.yaml
+++ b/ci/deps/azure-36-locale.yaml
@@ -7,9 +7,9 @@ dependencies:
# tools
- cython>=0.29.16
- - pytest>=5.0.1,<6.0.0 # https://github.com/pandas-dev/pandas/issues/35620
+ - pytest>=5.0.1
- pytest-xdist>=1.21
- - pytest-asyncio
+ - pytest-asyncio>=0.12.0
- hypothesis>=3.58.0
- pytest-azurepipelines
| Backport PR #35757: CI: Unpin Pytest + Pytest Asyncio Min Version | https://api.github.com/repos/pandas-dev/pandas/pulls/35833 | 2020-08-20T21:47:34Z | 2020-08-21T09:08:08Z | 2020-08-21T09:08:08Z | 2020-08-21T09:08:08Z |
Jsonlines append mode | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index fe412bc0ce937..c3c4a14146541 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2063,6 +2063,7 @@ def to_json(
index: bool_t = True,
indent: Optional[int] = None,
storage_options: StorageOptions = None,
+ mode: str = "w",
) -> Optional[str]:
"""
Convert the object to a JSON string.
@@ -2156,6 +2157,13 @@ def to_json(
.. versionadded:: 1.2.0
+ mode : str, default 'w'
+ If 'a', append to an existing JSON lines file instead of overwriting it.
+ Only valid when 'orient' is 'records' and 'lines' is True;
+ otherwise a ValueError will be raised.
+
+ .. versionadded:: 1.2.0
+
Returns
-------
None or str
@@ -2335,6 +2343,7 @@ def to_json(
index=index,
indent=indent,
storage_options=storage_options,
+ mode=mode,
)
def to_hdf(
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index fe5e172655ae1..627b8a48100a8 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -50,6 +50,7 @@ def to_json(
index: bool = True,
indent: int = 0,
storage_options: StorageOptions = None,
+ mode: str = "w",
):
if not index and orient not in ["split", "table"]:
@@ -68,6 +69,11 @@ def to_json(
if lines and orient != "records":
raise ValueError("'lines' keyword only valid when 'orient' is records")
+ if mode == "a" and (not lines or orient != "records"):
+ raise ValueError(
+ "'append mode' only valid when 'line' is True and 'orient' is records"
+ )
+
if orient == "table" and isinstance(obj, Series):
obj = obj.to_frame(name=obj.name or "values")
@@ -95,9 +101,19 @@ def to_json(
if lines:
s = convert_to_line_delimits(s)
+ try:
+ add_new_line = (
+ mode == "a"
+ and os.path.exists(path_or_buf)
+ and os.path.isfile(path_or_buf)
+ and os.path.getsize(path_or_buf)
+ )
+ s = "\n" + s if add_new_line else s
+ except (TypeError, ValueError):
+ pass
if isinstance(path_or_buf, str):
- fh, handles = get_handle(path_or_buf, "w", compression=compression)
+ fh, handles = get_handle(path_or_buf, mode, compression=compression)
try:
fh.write(s)
finally:
| **Description**: Adds support for an append mode in `to_json` when 'orient' is `records` and 'lines' is `True`
**Motivation**: [jsonlines](http://jsonlines.org/) is a format that is gaining some traction (e.g. [Cloud Storage](https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-json)). In pandas it's achieved using `to_json` with `orient='records'` and `lines=True`, but `to_json` doesn't have the option to use `mode='a'` to append to a file; this PR aims to provide a simple append mode for jsonlines using pandas.
---
- [X] closes #35849
- [X] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/35832 | 2020-08-20T21:10:35Z | 2021-01-01T07:18:35Z | null | 2021-01-01T08:21:55Z |
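The `mode="a"` parameter proposed above is not part of released pandas as of this record (the PR is unmerged), but the same effect can be had today by serializing to a string and managing the file handle yourself, much as the patch does internally. A minimal sketch — the file name and column names are invented for illustration:

```python
import os
import tempfile

import pandas as pd

# Two batches of records destined for the same JSON Lines file.
batch1 = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
batch2 = pd.DataFrame({"a": [3], "b": ["z"]})

path = os.path.join(tempfile.mkdtemp(), "records.jsonl")

# First write creates the file.
batch1.to_json(path, orient="records", lines=True)

# Without a `mode` parameter, appending means serializing to a string
# and opening the file yourself; the leading newline keeps the new
# records on their own lines, mirroring the patch's add_new_line logic.
with open(path, "a") as fh:
    fh.write("\n" + batch2.to_json(orient="records", lines=True))

# Each non-empty line of the file is now one JSON record.
with open(path) as fh:
    records = [line for line in fh if line.strip()]
```

Note that, like the patch's `ValueError` check, this only makes sense for `orient="records"` with `lines=True`; other orients produce a single JSON document that cannot be extended by concatenation.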
Fix unable to build Pandas with xlc on z/OS | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 74710ca48308c..441116376d52b 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -913,6 +913,7 @@ Other
- Bug in :func:`pandas.testing.assert_series_equal`, :func:`pandas.testing.assert_frame_equal`, :func:`pandas.testing.assert_index_equal` and :func:`pandas.testing.assert_extension_array_equal` incorrectly raising when an attribute has an unrecognized NA type (:issue:`39461`)
- Bug in :meth:`DataFrame.equals`, :meth:`Series.equals`, :meth:`Index.equals` with object-dtype containing ``np.datetime64("NaT")`` or ``np.timedelta64("NaT")`` (:issue:`39650`)
- Bug in :func:`pandas.util.show_versions` where console JSON output was not proper JSON (:issue:`39701`)
+- Let Pandas compile on z/OS when using `xlc <https://www.ibm.com/products/xl-cpp-compiler-zos>`_ (:issue:`35826`)
- Bug in :meth:`DataFrame.convert_dtypes` incorrectly raised ValueError when called on an empty DataFrame (:issue:`40393`)
- Bug in :meth:`DataFrame.clip` not interpreting missing values as no threshold (:issue:`40420`)
diff --git a/pandas/_libs/src/headers/cmath b/pandas/_libs/src/headers/cmath
index 632e1fc2390d0..9e7540cfefc13 100644
--- a/pandas/_libs/src/headers/cmath
+++ b/pandas/_libs/src/headers/cmath
@@ -25,6 +25,18 @@ namespace std {
__inline int isnan(double x) { return _isnan(x); }
__inline int notnan(double x) { return x == x; }
}
+#elif defined(__MVS__)
+#include <cmath>
+
+#define _signbit signbit
+#undef signbit
+#undef isnan
+
+namespace std {
+ __inline int notnan(double x) { return x == x; }
+ __inline int signbit(double num) { return _signbit(num); }
+ __inline int isnan(double x) { return isnan(x); }
+}
#else
#include <cmath>
diff --git a/setup.py b/setup.py
index b410c5c154648..17ee110bc4136 100755
--- a/setup.py
+++ b/setup.py
@@ -569,6 +569,17 @@ def srcpath(name=None, suffix=".pyx", subdir="src"):
include = data.get("include", [])
include.append(numpy.get_include())
+ undef_macros = []
+
+ if (
+ sys.platform == "zos"
+ and data.get("language") == "c++"
+ and os.path.basename(os.environ.get("CXX", "/bin/xlc++")) in ("xlc", "xlc++")
+ ):
+ data.get("macros", macros).append(("__s390__", "1"))
+ extra_compile_args.append("-qlanglvl=extended0x:nolibext")
+ undef_macros.append("_POSIX_THREADS")
+
obj = Extension(
f"pandas.{name}",
sources=sources,
@@ -578,6 +589,7 @@ def srcpath(name=None, suffix=".pyx", subdir="src"):
define_macros=data.get("macros", macros),
extra_compile_args=extra_compile_args,
extra_link_args=extra_link_args,
+ undef_macros=undef_macros,
)
extensions.append(obj)
| - [x] closes #35826
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This is an attempt at being able to build Pandas using xlc on z/OS. There are two issues being fixed here:
1) Raise the default standard being used with xlc++ (`-qlanglvl`) only when compiling C++ code, and fix other macros that otherwise cause errors when compiling with that option. I put the change directly above where it's used, but an alternate spot it could go is directly within the declarations of each cpp file plus its respective macros. But since this would need to be added to any additional C++ code that's added, and all the extra options are the same, I left it here.
2) Fix the compilation errors that occur - the build is currently missing two functions that aren't in the std:: namespace. The first is that `signbit` is declared as a macro and not a function, so we cannot redeclare the function without changing the macro. The next, `isnan`, is declared as both a macro and a function. According to the [xlc docs](https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.4.0/com.ibm.zos.v2r4.bpxbd00/risnan.htm), to use the function you can simply undefine `isnan`, which is done here; it is then also placed in the std namespace. To use the macro form we could take the same approach as with `signbit`.
If we don't want to touch the std namespace, an alternative could be to swap the declarations (for z/OS only) within `pandas/_libs/window/aggregations.pyx` to not use std for those two functions. | https://api.github.com/repos/pandas-dev/pandas/pulls/35829 | 2020-08-20T19:10:41Z | 2021-05-26T02:19:09Z | 2021-05-26T02:19:09Z | 2021-05-26T02:19:14Z |
CI: Pyarrow Min Version on Travis 3.7 Locale Build | diff --git a/ci/deps/travis-37-locale.yaml b/ci/deps/travis-37-locale.yaml
index 4427c1d940bf2..6dc1c2f89cc6f 100644
--- a/ci/deps/travis-37-locale.yaml
+++ b/ci/deps/travis-37-locale.yaml
@@ -28,6 +28,7 @@ dependencies:
- openpyxl
- pandas-gbq=0.12.0
- psycopg2=2.7
+ - pyarrow>=0.15.0 # GH #35813
- pymysql=0.7.11
- pytables
- python-dateutil
| - [x] closes #35813
Conda seems to be installing pyarrow 0.11 for this env.
Note pyarrow tests will now run on Travis 3.7 Locale Build | https://api.github.com/repos/pandas-dev/pandas/pulls/35828 | 2020-08-20T18:18:33Z | 2020-08-20T21:26:05Z | 2020-08-20T21:26:05Z | 2020-08-21T08:28:06Z |
DOC: Start 1.1.2 | diff --git a/doc/source/whatsnew/index.rst b/doc/source/whatsnew/index.rst
index a280a981c789b..1827d151579a1 100644
--- a/doc/source/whatsnew/index.rst
+++ b/doc/source/whatsnew/index.rst
@@ -24,6 +24,7 @@ Version 1.1
.. toctree::
:maxdepth: 2
+ v1.1.2
v1.1.1
v1.1.0
diff --git a/doc/source/whatsnew/v1.1.1.rst b/doc/source/whatsnew/v1.1.1.rst
index 721f07c865409..77ea67f76f655 100644
--- a/doc/source/whatsnew/v1.1.1.rst
+++ b/doc/source/whatsnew/v1.1.1.rst
@@ -53,4 +53,4 @@ Bug fixes
Contributors
~~~~~~~~~~~~
-.. contributors:: v1.1.0..v1.1.1|HEAD
+.. contributors:: v1.1.0..v1.1.1
diff --git a/doc/source/whatsnew/v1.1.2.rst b/doc/source/whatsnew/v1.1.2.rst
new file mode 100644
index 0000000000000..81acd567027e5
--- /dev/null
+++ b/doc/source/whatsnew/v1.1.2.rst
@@ -0,0 +1,38 @@
+.. _whatsnew_112:
+
+What's new in 1.1.2 (??)
+------------------------
+
+These are the changes in pandas 1.1.2. See :ref:`release` for a full changelog
+including other versions of pandas.
+
+{{ header }}
+
+.. ---------------------------------------------------------------------------
+
+.. _whatsnew_112.regressions:
+
+Fixed regressions
+~~~~~~~~~~~~~~~~~
+
+-
+-
+
+.. ---------------------------------------------------------------------------
+
+.. _whatsnew_112.bug_fixes:
+
+Bug fixes
+~~~~~~~~~
+
+-
+-
+
+.. ---------------------------------------------------------------------------
+
+.. _whatsnew_112.contributors:
+
+Contributors
+~~~~~~~~~~~~
+
+.. contributors:: v1.1.1..v1.1.2|HEAD
| https://api.github.com/repos/pandas-dev/pandas/pulls/35825 | 2020-08-20T15:35:24Z | 2020-08-20T22:19:01Z | 2020-08-20T22:19:01Z | 2020-08-21T08:38:08Z |