| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
DOC: Link to Python 3 versions of official Python docs | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index ecb9a8f2d79db..f9995472866ed 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -1728,7 +1728,7 @@ built-in string methods. For example:
Powerful pattern-matching methods are provided as well, but note that
pattern-matching generally uses `regular expressions
-<https://docs.python.org/2/library/re.html>`__ by default (and in some cases
+<https://docs.python.org/3/library/re.html>`__ by default (and in some cases
always uses them).
Please see :ref:`Vectorized String Methods <text.string_methods>` for a complete
diff --git a/doc/source/enhancingperf.rst b/doc/source/enhancingperf.rst
index cbe945e0cf2cf..d2ca76713ba3b 100644
--- a/doc/source/enhancingperf.rst
+++ b/doc/source/enhancingperf.rst
@@ -468,8 +468,8 @@ This Python syntax is **not** allowed:
* Statements
- - Neither `simple <http://docs.python.org/2/reference/simple_stmts.html>`__
- nor `compound <http://docs.python.org/2/reference/compound_stmts.html>`__
+ - Neither `simple <https://docs.python.org/3/reference/simple_stmts.html>`__
+ nor `compound <https://docs.python.org/3/reference/compound_stmts.html>`__
statements are allowed. This includes things like ``for``, ``while``, and
``if``.
diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 2f3dbb9746066..3c2fd4d959d63 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -232,7 +232,7 @@ as an attribute:
- You can use this access only if the index element is a valid python identifier, e.g. ``s.1`` is not allowed.
See `here for an explanation of valid identifiers
- <http://docs.python.org/2.7/reference/lexical_analysis.html#identifiers>`__.
+ <https://docs.python.org/3/reference/lexical_analysis.html#identifiers>`__.
- The attribute will not be available if it conflicts with an existing method name, e.g. ``s.min`` is not allowed.
diff --git a/doc/source/install.rst b/doc/source/install.rst
index 979d5afd0a04f..6133da220aa8d 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -234,7 +234,7 @@ Optional Dependencies
* `psycopg2 <http://initd.org/psycopg/>`__: for PostgreSQL
* `pymysql <https://github.com/PyMySQL/PyMySQL>`__: for MySQL.
- * `SQLite <https://docs.python.org/3.5/library/sqlite3.html>`__: for SQLite, this is included in Python's standard library by default.
+ * `SQLite <https://docs.python.org/3/library/sqlite3.html>`__: for SQLite, this is included in Python's standard library by default.
* `matplotlib <http://matplotlib.org/>`__: for plotting, Version 1.4.3 or higher.
* For Excel I/O:
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 03c2ce23eb35d..a5a0a41147a6b 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -3065,7 +3065,7 @@ any pickled pandas object (or any other pickled object) from file:
Loading pickled data received from untrusted sources can be unsafe.
- See: https://docs.python.org/3.6/library/pickle.html
+ See: https://docs.python.org/3/library/pickle.html
.. warning::
@@ -4545,7 +4545,7 @@ facilitate data retrieval and to reduce dependency on DB-specific API. Database
is provided by SQLAlchemy if installed. In addition you will need a driver library for
your database. Examples of such drivers are `psycopg2 <http://initd.org/psycopg/>`__
for PostgreSQL or `pymysql <https://github.com/PyMySQL/PyMySQL>`__ for MySQL.
-For `SQLite <https://docs.python.org/3.5/library/sqlite3.html>`__ this is
+For `SQLite <https://docs.python.org/3/library/sqlite3.html>`__ this is
included in Python's standard library by default.
You can find an overview of supported drivers for each SQL dialect in the
`SQLAlchemy docs <http://docs.sqlalchemy.org/en/latest/dialects/index.html>`__.
diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index f968cdad100ba..e20537efc0e71 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -559,7 +559,7 @@ String/Regular Expression Replacement
backslashes than strings without this prefix. Backslashes in raw strings
will be interpreted as an escaped backslash, e.g., ``r'\' == '\\'``. You
should `read about them
- <http://docs.python.org/2/reference/lexical_analysis.html#string-literals>`__
+ <https://docs.python.org/3/reference/lexical_analysis.html#string-literals>`__
if this is unclear.
Replace the '.' with ``NaN`` (str -> str)
diff --git a/doc/source/text.rst b/doc/source/text.rst
index 85b8aa6aa1857..2a86d92978043 100644
--- a/doc/source/text.rst
+++ b/doc/source/text.rst
@@ -119,7 +119,7 @@ i.e., from the end of the string to the beginning of the string:
s2.str.rsplit('_', expand=True, n=1)
Methods like ``replace`` and ``findall`` take `regular expressions
-<https://docs.python.org/2/library/re.html>`__, too:
+<https://docs.python.org/3/library/re.html>`__, too:
.. ipython:: python
@@ -221,7 +221,7 @@ Extract first match in each subject (extract)
confusing from the perspective of a user.
The ``extract`` method accepts a `regular expression
-<https://docs.python.org/2/library/re.html>`__ with at least one
+<https://docs.python.org/3/library/re.html>`__ with at least one
capture group.
Extracting a regular expression with more than one group returns a
diff --git a/pandas/_libs/tslibs/ccalendar.pyx b/pandas/_libs/tslibs/ccalendar.pyx
index d7edae865911a..ebd5fc12775a4 100644
--- a/pandas/_libs/tslibs/ccalendar.pyx
+++ b/pandas/_libs/tslibs/ccalendar.pyx
@@ -94,7 +94,7 @@ cdef int dayofweek(int y, int m, int d) nogil:
See Also
--------
- [1] https://docs.python.org/3.6/library/calendar.html#calendar.weekday
+ [1] https://docs.python.org/3/library/calendar.html#calendar.weekday
[2] https://en.wikipedia.org/wiki/\
Determination_of_the_day_of_the_week#Sakamoto.27s_methods
diff --git a/pandas/core/computation/eval.py b/pandas/core/computation/eval.py
index f44fa347cb053..434d7f6ccfe13 100644
--- a/pandas/core/computation/eval.py
+++ b/pandas/core/computation/eval.py
@@ -169,9 +169,9 @@ def eval(expr, parser='pandas', engine=None, truediv=True,
expr : str or unicode
The expression to evaluate. This string cannot contain any Python
`statements
- <http://docs.python.org/2/reference/simple_stmts.html#simple-statements>`__,
+ <https://docs.python.org/3/reference/simple_stmts.html#simple-statements>`__,
only Python `expressions
- <http://docs.python.org/2/reference/simple_stmts.html#expression-statements>`__.
+ <https://docs.python.org/3/reference/simple_stmts.html#expression-statements>`__.
parser : string, default 'pandas', {'pandas', 'python'}
The parser to use to construct the syntax tree from the expression. The
default of ``'pandas'`` parses code slightly different than standard
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 4a66475c85691..a441e6c3fd36a 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -65,7 +65,7 @@ def strftime(self, date_format):
Returns
-------
ndarray of formatted strings
- """.format("https://docs.python.org/2/library/datetime.html"
+ """.format("https://docs.python.org/3/library/datetime.html"
"#strftime-and-strptime-behavior")
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index 143b76575e36b..fa953f7d876cc 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -54,7 +54,7 @@ def read_pickle(path, compression='infer'):
file path
Warning: Loading pickled data received from untrusted sources can be
- unsafe. See: http://docs.python.org/2.7/library/pickle.html
+ unsafe. See: https://docs.python.org/3/library/pickle.html
Parameters
----------
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 6553dd66cba5f..1fefec6035a20 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -254,8 +254,7 @@ def test_repr_is_valid_construction_code(self):
tm.assert_series_equal(Series(res), Series(idx))
def test_repr_should_return_str(self):
- # http://docs.python.org/py3k/reference/datamodel.html#object.__repr__
- # http://docs.python.org/reference/datamodel.html#object.__repr__
+ # https://docs.python.org/3/reference/datamodel.html#object.__repr__
# "...The return value must be a string object."
# (str on py2.x, str (unicode) on py3)
diff --git a/pandas/tests/series/test_repr.py b/pandas/tests/series/test_repr.py
index bf3e584657763..97236f028b1c4 100644
--- a/pandas/tests/series/test_repr.py
+++ b/pandas/tests/series/test_repr.py
@@ -140,8 +140,7 @@ def test_repr_name_iterable_indexable(self):
repr(s)
def test_repr_should_return_str(self):
- # http://docs.python.org/py3k/reference/datamodel.html#object.__repr__
- # http://docs.python.org/reference/datamodel.html#object.__repr__
+ # https://docs.python.org/3/reference/datamodel.html#object.__repr__
# ...The return value must be a string object.
# (str on py2.x, str (unicode) on py3)
| - [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Went through and made sure that all links to the official Python docs start with `https://docs.python.org/3/` to ensure that the links go to the most recent stable version of Python 3. | https://api.github.com/repos/pandas-dev/pandas/pulls/18962 | 2017-12-28T00:27:21Z | 2017-12-28T11:37:48Z | 2017-12-28T11:37:48Z | 2017-12-28T20:47:52Z |
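The change recorded above is mechanical: every `docs.python.org` link pinned to Python 2 or to a specific 3.x minor release becomes the unversioned `/3/` form. A hypothetical regex check (not part of the PR) for links that would still need rewriting might look like:

```python
import re

# Flags links pinned to Python 2 (e.g. /2/, /2.7/) or to a specific
# 3.x minor version (e.g. /3.5/), which the PR rewrites to the
# unversioned https://docs.python.org/3/ form.
PINNED = re.compile(r'https?://docs\.python\.org/(2(\.\d+)?|3\.\d+)/')

def outdated_links(text):
    """Return the pinned docs.python.org prefixes found in `text`."""
    return [m.group(0) for m in PINNED.finditer(text)]
```

Note that the unversioned `/3/` form deliberately does not match, so already-migrated links pass the check.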
Fix Timedelta.__floordiv__, __rfloordiv__ | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 77de1851490b2..9abf04bf8a83c 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -415,6 +415,7 @@ Numeric
- Bug in :func:`Series.__sub__` subtracting a non-nanosecond ``np.datetime64`` object from a ``Series`` gave incorrect results (:issue:`7996`)
- Bug in :class:`DatetimeIndex`, :class:`TimedeltaIndex` addition and subtraction of zero-dimensional integer arrays gave incorrect results (:issue:`19012`)
- Bug in :func:`Series.__add__` adding Series with dtype ``timedelta64[ns]`` to a timezone-aware ``DatetimeIndex`` incorrectly dropped timezone information (:issue:`13905`)
+- Bug in :func:`Timedelta.__floordiv__` and :func:`Timedelta.__rfloordiv__` dividing by many incompatible numpy objects was incorrectly allowed (:issue:`18846`)
-
Categorical
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index af3fa738fad14..8dba8c15f0b81 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -1031,13 +1031,27 @@ class Timedelta(_Timedelta):
__rdiv__ = __rtruediv__
def __floordiv__(self, other):
+ # numpy does not implement floordiv for timedelta64 dtype, so we cannot
+ # just defer
+ if hasattr(other, '_typ'):
+ # Series, DataFrame, ...
+ return NotImplemented
+
if hasattr(other, 'dtype'):
- # work with i8
- other = other.astype('m8[ns]').astype('i8')
- return self.value // other
+ if other.dtype.kind == 'm':
+ # also timedelta-like
+ return _broadcast_floordiv_td64(self.value, other, _floordiv)
+ elif other.dtype.kind in ['i', 'u', 'f']:
+ if other.ndim == 0:
+ return Timedelta(self.value // other)
+ else:
+ return self.to_timedelta64() // other
+
+ raise TypeError('Invalid dtype {dtype} for '
+ '{op}'.format(dtype=other.dtype,
+ op='__floordiv__'))
- elif is_integer_object(other):
- # integers only
+ elif is_integer_object(other) or is_float_object(other):
return Timedelta(self.value // other, unit='ns')
elif not _validate_ops_compat(other):
@@ -1049,20 +1063,79 @@ class Timedelta(_Timedelta):
return self.value // other.value
def __rfloordiv__(self, other):
- if hasattr(other, 'dtype'):
- # work with i8
- other = other.astype('m8[ns]').astype('i8')
- return other // self.value
+ # numpy does not implement floordiv for timedelta64 dtype, so we cannot
+ # just defer
+ if hasattr(other, '_typ'):
+ # Series, DataFrame, ...
+ return NotImplemented
+ if hasattr(other, 'dtype'):
+ if other.dtype.kind == 'm':
+ # also timedelta-like
+ return _broadcast_floordiv_td64(self.value, other, _rfloordiv)
+ raise TypeError('Invalid dtype {dtype} for '
+ '{op}'.format(dtype=other.dtype,
+ op='__floordiv__'))
+
+ if is_float_object(other) and util._checknull(other):
+ # i.e. np.nan
+ return NotImplemented
elif not _validate_ops_compat(other):
return NotImplemented
other = Timedelta(other)
if other is NaT:
- return NaT
+ return np.nan
return other.value // self.value
+cdef _floordiv(int64_t value, right):
+ return value // right
+
+
+cdef _rfloordiv(int64_t value, right):
+ # analogous to referencing operator.div, but there is no operator.rfloordiv
+ return right // value
+
+
+cdef _broadcast_floordiv_td64(int64_t value, object other,
+ object (*operation)(int64_t value,
+ object right)):
+ """Boilerplate code shared by Timedelta.__floordiv__ and
+ Timedelta.__rfloordiv__ because np.timedelta64 does not implement these.
+
+ Parameters
+ ----------
+ value : int64_t; `self.value` from a Timedelta object
+ other : object
+ operation : function, either _floordiv or _rfloordiv
+
+ Returns
+ -------
+ result : varies based on `other`
+ """
+ # assumes other.dtype.kind == 'm', i.e. other is timedelta-like
+ cdef:
+ int ndim = getattr(other, 'ndim', -1)
+
+ # We need to watch out for np.timedelta64('NaT').
+ mask = other.view('i8') == NPY_NAT
+
+ if ndim == 0:
+ if mask:
+ return np.nan
+
+ return operation(value, other.astype('m8[ns]').astype('i8'))
+
+ else:
+ res = operation(value, other.astype('m8[ns]').astype('i8'))
+
+ if mask.any():
+ res = res.astype('f8')
+ res[mask] = np.nan
+ return res
+
+
# resolution in ns
-Timedelta.min = Timedelta(np.iinfo(np.int64).min +1)
+Timedelta.min = Timedelta(np.iinfo(np.int64).min + 1)
Timedelta.max = Timedelta(np.iinfo(np.int64).max)
diff --git a/pandas/tests/scalar/test_nat.py b/pandas/tests/scalar/test_nat.py
index 69ce7a42851a1..d0d204253e3f1 100644
--- a/pandas/tests/scalar/test_nat.py
+++ b/pandas/tests/scalar/test_nat.py
@@ -273,6 +273,16 @@ def test_nat_arithmetic():
assert right - left is NaT
+def test_nat_rfloordiv_timedelta():
+ # GH#18846
+ # See also test_timedelta.TestTimedeltaArithmetic.test_floordiv
+ td = Timedelta(hours=3, minutes=4)
+
+ assert td // np.nan is NaT
+ assert np.isnan(td // NaT)
+ assert np.isnan(td // np.timedelta64('NaT'))
+
+
def test_nat_arithmetic_index():
# GH 11718
diff --git a/pandas/tests/scalar/test_timedelta.py b/pandas/tests/scalar/test_timedelta.py
index 310555c19ea99..8c574d8f8873b 100644
--- a/pandas/tests/scalar/test_timedelta.py
+++ b/pandas/tests/scalar/test_timedelta.py
@@ -136,6 +136,7 @@ def test_binary_ops_nat(self):
assert (td * pd.NaT) is pd.NaT
assert (td / pd.NaT) is np.nan
assert (td // pd.NaT) is np.nan
+ assert (td // np.timedelta64('NaT')) is np.nan
def test_binary_ops_integers(self):
td = Timedelta(10, unit='d')
@@ -162,6 +163,98 @@ def test_binary_ops_with_timedelta(self):
# invalid multiply with another timedelta
pytest.raises(TypeError, lambda: td * td)
+ def test_floordiv(self):
+ # GH#18846
+ td = Timedelta(hours=3, minutes=4)
+ scalar = Timedelta(hours=3, minutes=3)
+
+ # scalar others
+ assert td // scalar == 1
+ assert -td // scalar.to_pytimedelta() == -2
+ assert (2 * td) // scalar.to_timedelta64() == 2
+
+ assert td // np.nan is pd.NaT
+ assert np.isnan(td // pd.NaT)
+ assert np.isnan(td // np.timedelta64('NaT'))
+
+ with pytest.raises(TypeError):
+ td // np.datetime64('2016-01-01', dtype='datetime64[us]')
+
+ expected = Timedelta(hours=1, minutes=32)
+ assert td // 2 == expected
+ assert td // 2.0 == expected
+ assert td // np.float64(2.0) == expected
+ assert td // np.int32(2.0) == expected
+ assert td // np.uint8(2.0) == expected
+
+ # Array-like others
+ assert td // np.array(scalar.to_timedelta64()) == 1
+
+ res = (3 * td) // np.array([scalar.to_timedelta64()])
+ expected = np.array([3], dtype=np.int64)
+ tm.assert_numpy_array_equal(res, expected)
+
+ res = (10 * td) // np.array([scalar.to_timedelta64(),
+ np.timedelta64('NaT')])
+ expected = np.array([10, np.nan])
+ tm.assert_numpy_array_equal(res, expected)
+
+ ser = pd.Series([1], dtype=np.int64)
+ res = td // ser
+ assert res.dtype.kind == 'm'
+
+ def test_rfloordiv(self):
+ # GH#18846
+ td = Timedelta(hours=3, minutes=3)
+ scalar = Timedelta(hours=3, minutes=4)
+
+ # scalar others
+ # x // Timedelta is defined only for timedelta-like x. int-like,
+ # float-like, and date-like, in particular, should all either
+ # a) raise TypeError directly or
+ # b) return NotImplemented, following which the reversed
+ # operation will raise TypeError.
+ assert td.__rfloordiv__(scalar) == 1
+ assert (-td).__rfloordiv__(scalar.to_pytimedelta()) == -2
+ assert (2 * td).__rfloordiv__(scalar.to_timedelta64()) == 0
+
+ assert np.isnan(td.__rfloordiv__(pd.NaT))
+ assert np.isnan(td.__rfloordiv__(np.timedelta64('NaT')))
+
+ dt64 = np.datetime64('2016-01-01', dtype='datetime64[us]')
+ with pytest.raises(TypeError):
+ td.__rfloordiv__(dt64)
+
+ assert td.__rfloordiv__(np.nan) is NotImplemented
+ assert td.__rfloordiv__(3.5) is NotImplemented
+ assert td.__rfloordiv__(2) is NotImplemented
+
+ with pytest.raises(TypeError):
+ td.__rfloordiv__(np.float64(2.0))
+ with pytest.raises(TypeError):
+ td.__rfloordiv__(np.int32(2.0))
+ with pytest.raises(TypeError):
+ td.__rfloordiv__(np.uint8(9))
+
+ # Array-like others
+ assert td.__rfloordiv__(np.array(scalar.to_timedelta64())) == 1
+
+ res = td.__rfloordiv__(np.array([(3 * scalar).to_timedelta64()]))
+ expected = np.array([3], dtype=np.int64)
+ tm.assert_numpy_array_equal(res, expected)
+
+ arr = np.array([(10 * scalar).to_timedelta64(),
+ np.timedelta64('NaT')])
+ res = td.__rfloordiv__(arr)
+ expected = np.array([10, np.nan])
+ tm.assert_numpy_array_equal(res, expected)
+
+ ser = pd.Series([1], dtype=np.int64)
+ res = td.__rfloordiv__(ser)
+ assert res is NotImplemented
+ with pytest.raises(TypeError):
+ ser // td
+
class TestTimedeltaComparison(object):
def test_comparison_object_array(self):
| It would not at all surprise me to learn that there are more corner cases that this misses. Needs some eyeballs.
- [x] closes #18846
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/18961 | 2017-12-27T21:58:49Z | 2018-01-07T02:40:29Z | 2018-01-07T02:40:29Z | 2018-01-23T04:40:36Z |
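The `_broadcast_floordiv_td64` helper in the diff above masks `NaT` entries before dividing, since numpy represents `NaT` as `INT64_MIN` and a plain floor division would silently produce garbage for those slots. A minimal numpy-only sketch of that masking logic (a standalone illustration, not pandas' actual cython implementation):

```python
import numpy as np

NPY_NAT = np.iinfo(np.int64).min  # numpy encodes NaT as INT64_MIN

def broadcast_floordiv_td64(value, other):
    """Floor-divide an i8 nanosecond value by a timedelta64 array,
    propagating NaN wherever `other` is NaT (mirrors the helper above)."""
    mask = other.view('i8') == NPY_NAT            # locate NaT entries
    res = value // other.astype('m8[ns]').astype('i8')
    if mask.any():
        res = res.astype('f8')                    # float needed to hold NaN
        res[mask] = np.nan
    return res
```

With `value` set to the nanoseconds in `Timedelta(hours=3, minutes=4)` and a divisor array of `[3h3m, NaT]`, this reproduces the `[quotient, nan]` pattern exercised in `test_floordiv`.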
DOC: Update doc strings to show pd.Panel has been deprecated | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 26257f6ecbc37..3243baa0008ae 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1403,6 +1403,8 @@ def to_panel(self):
Transform long (stacked) format (DataFrame) into wide (3D, Panel)
format.
+ .. deprecated:: 0.20.0
+
Currently the index of the DataFrame must be a 2-level MultiIndex. This
may be generalized later
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 0f3c5cb85249a..6d85e5bf7c7f9 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -111,6 +111,13 @@ class Panel(NDFrame):
"""
Represents wide format panel data, stored as 3-dimensional array
+ .. deprecated:: 0.20.0
+ The recommended way to represent 3-D data are with a MultiIndex on a
+ DataFrame via the :attr:`~Panel.to_frame()` method or with the
+ `xarray package <http://xarray.pydata.org/en/stable/>`__.
+ Pandas provides a :attr:`~Panel.to_xarray()` method to automate this
+ conversion.
+
Parameters
----------
data : ndarray (items x major x minor), or dict of DataFrames
| Add lines to doc string to show that Panel has been deprecated. | https://api.github.com/repos/pandas-dev/pandas/pulls/18956 | 2017-12-27T10:00:49Z | 2017-12-27T19:39:23Z | 2017-12-27T19:39:23Z | 2018-02-02T17:06:40Z |
CLN: Drop the .reshape method from classes | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 8c94cef4d8ea7..da750c071c4ae 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -241,6 +241,7 @@ Removal of prior version deprecations/changes
- :func:`read_csv` has dropped the ``buffer_lines`` parameter (:issue:`13360`)
- :func:`read_csv` has dropped the ``compact_ints`` and ``use_unsigned`` parameters (:issue:`13323`)
- The ``Timestamp`` class has dropped the ``offset`` attribute in favor of ``freq`` (:issue:`13593`)
+- The ``Series``, ``Categorical``, and ``Index`` classes have dropped the ``reshape`` method (:issue:`13012`)
.. _whatsnew_0230.performance:
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index 845d0243c39e9..baf15b3ca5bc4 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -471,32 +471,6 @@ def tolist(self):
return [_maybe_box_datetimelike(x) for x in self]
return np.array(self).tolist()
- def reshape(self, new_shape, *args, **kwargs):
- """
- .. deprecated:: 0.19.0
- Calling this method will raise an error in a future release.
-
- An ndarray-compatible method that returns `self` because
- `Categorical` instances cannot actually be reshaped.
-
- Parameters
- ----------
- new_shape : int or tuple of ints
- A 1-D array of integers that correspond to the new
- shape of the `Categorical`. For more information on
- the parameter, please refer to `np.reshape`.
- """
- warn("reshape is deprecated and will raise "
- "in a subsequent release", FutureWarning, stacklevel=2)
-
- nv.validate_reshape(args, kwargs)
-
- # while the 'new_shape' parameter has no effect,
- # we should still enforce valid shape parameters
- np.reshape(self.codes, new_shape)
-
- return self
-
@property
def base(self):
""" compat, we are always our own object """
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 79de63b0caeb6..128cd8a9325d6 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1190,16 +1190,6 @@ def rename(self, name, inplace=False):
"""
return self.set_names([name], inplace=inplace)
- def reshape(self, *args, **kwargs):
- """
- NOT IMPLEMENTED: do not call this method, as reshaping is not
- supported for Index objects and will raise an error.
-
- Reshape an Index.
- """
- raise NotImplementedError("reshaping is not supported "
- "for Index objects")
-
@property
def _has_complex_internals(self):
# to disable groupby tricks in MultiIndex
diff --git a/pandas/core/series.py b/pandas/core/series.py
index ab26a309533ef..360095c386e8b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -910,37 +910,6 @@ def repeat(self, repeats, *args, **kwargs):
return self._constructor(new_values,
index=new_index).__finalize__(self)
- def reshape(self, *args, **kwargs):
- """
- .. deprecated:: 0.19.0
- Calling this method will raise an error. Please call
- ``.values.reshape(...)`` instead.
-
- return an ndarray with the values shape
- if the specified shape matches exactly the current shape, then
- return self (for compat)
-
- See also
- --------
- numpy.ndarray.reshape
- """
- warnings.warn("reshape is deprecated and will raise "
- "in a subsequent release. Please use "
- ".values.reshape(...) instead", FutureWarning,
- stacklevel=2)
-
- if len(args) == 1 and hasattr(args[0], '__iter__'):
- shape = args[0]
- else:
- shape = args
-
- if tuple(shape) == self.shape:
- # XXX ignoring the "order" keyword.
- nv.validate_reshape(tuple(), kwargs)
- return self
-
- return self._values.reshape(shape, **kwargs)
-
def get_value(self, label, takeable=False):
"""
Quickly retrieve single value at passed index label
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 72b312f29a793..e09f4ad360843 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -1684,12 +1684,6 @@ def test_take_fill_value(self):
with pytest.raises(IndexError):
idx.take(np.array([1, -5]))
- def test_reshape_raise(self):
- msg = "reshaping is not supported"
- idx = pd.Index([0, 1, 2])
- tm.assert_raises_regex(NotImplementedError, msg,
- idx.reshape, idx.shape)
-
def test_reindex_preserves_name_if_target_is_list_or_ndarray(self):
# GH6552
idx = pd.Index([0, 1, 2])
diff --git a/pandas/tests/reshape/test_reshape.py b/pandas/tests/reshape/test_reshape.py
index f472c6ae9383c..0312af12e0715 100644
--- a/pandas/tests/reshape/test_reshape.py
+++ b/pandas/tests/reshape/test_reshape.py
@@ -476,41 +476,6 @@ def test_reshaping_panel_categorical(self):
index=p.major_axis.set_names('major'))
tm.assert_frame_equal(result, expected)
- def test_reshape_categorical(self):
- cat = Categorical([], categories=["a", "b"])
- tm.assert_produces_warning(FutureWarning, cat.reshape, 0)
-
- with tm.assert_produces_warning(FutureWarning):
- cat = Categorical([], categories=["a", "b"])
- tm.assert_categorical_equal(cat.reshape(0), cat)
-
- with tm.assert_produces_warning(FutureWarning):
- cat = Categorical([], categories=["a", "b"])
- tm.assert_categorical_equal(cat.reshape((5, -1)), cat)
-
- with tm.assert_produces_warning(FutureWarning):
- cat = Categorical(["a", "b"], categories=["a", "b"])
- tm.assert_categorical_equal(cat.reshape(cat.shape), cat)
-
- with tm.assert_produces_warning(FutureWarning):
- cat = Categorical(["a", "b"], categories=["a", "b"])
- tm.assert_categorical_equal(cat.reshape(cat.size), cat)
-
- with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- msg = "can only specify one unknown dimension"
- cat = Categorical(["a", "b"], categories=["a", "b"])
- tm.assert_raises_regex(ValueError, msg, cat.reshape, (-2, -1))
-
- def test_reshape_categorical_numpy(self):
- with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- cat = Categorical(["a", "b"], categories=["a", "b"])
- tm.assert_categorical_equal(np.reshape(cat, cat.shape), cat)
-
- with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- msg = "the 'order' parameter is not supported"
- tm.assert_raises_regex(ValueError, msg, np.reshape,
- cat, cat.shape, order='F')
-
class TestMakeAxisDummies(object):
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 203a0b4a54858..0dae6aa96ced1 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -1542,66 +1542,6 @@ def test_shift_categorical(self):
assert_index_equal(s.values.categories, sp1.values.categories)
assert_index_equal(s.values.categories, sn2.values.categories)
- def test_reshape_deprecate(self):
- x = Series(np.random.random(10), name='x')
- tm.assert_produces_warning(FutureWarning, x.reshape, x.shape)
-
- def test_reshape_non_2d(self):
- # see gh-4554
- with tm.assert_produces_warning(FutureWarning):
- x = Series(np.random.random(201), name='x')
- assert x.reshape(x.shape, ) is x
-
- # see gh-2719
- with tm.assert_produces_warning(FutureWarning):
- a = Series([1, 2, 3, 4])
- result = a.reshape(2, 2)
- expected = a.values.reshape(2, 2)
- tm.assert_numpy_array_equal(result, expected)
- assert isinstance(result, type(expected))
-
- def test_reshape_2d_return_array(self):
- x = Series(np.random.random(201), name='x')
-
- with tm.assert_produces_warning(FutureWarning):
- result = x.reshape((-1, 1))
- assert not isinstance(result, Series)
-
- with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- result2 = np.reshape(x, (-1, 1))
- assert not isinstance(result2, Series)
-
- with tm.assert_produces_warning(FutureWarning):
- result = x[:, None]
- expected = x.reshape((-1, 1))
- tm.assert_almost_equal(result, expected)
-
- def test_reshape_bad_kwarg(self):
- a = Series([1, 2, 3, 4])
-
- with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- msg = "'foo' is an invalid keyword argument for this function"
- tm.assert_raises_regex(
- TypeError, msg, a.reshape, (2, 2), foo=2)
-
- with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- msg = r"reshape\(\) got an unexpected keyword argument 'foo'"
- tm.assert_raises_regex(
- TypeError, msg, a.reshape, a.shape, foo=2)
-
- def test_numpy_reshape(self):
- a = Series([1, 2, 3, 4])
-
- with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- result = np.reshape(a, (2, 2))
- expected = a.values.reshape(2, 2)
- tm.assert_numpy_array_equal(result, expected)
- assert isinstance(result, type(expected))
-
- with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
- result = np.reshape(a, a.shape)
- tm.assert_series_equal(result, a)
-
def test_unstack(self):
from numpy import nan
| Remove the method for Series, Categorical, and Index. Deprecated or errored in v0.19.0
xref #13012 | https://api.github.com/repos/pandas-dev/pandas/pulls/18954 | 2017-12-27T09:15:49Z | 2017-12-27T19:40:00Z | 2017-12-27T19:40:00Z | 2017-12-28T17:29:08Z |
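With `Series.reshape` removed above, the replacement that the old deprecation message pointed at is `.values.reshape(...)`. A minimal sketch using a plain ndarray in place of `Series.values` (pandas itself is deliberately not imported here):

```python
import numpy as np

# `values` stands in for what `Series.values` returns: the backing ndarray.
values = np.array([1, 2, 3, 4])

# Removed idiom:  Series([1, 2, 3, 4]).reshape(2, 2)
# Replacement:    ser.values.reshape(2, 2)
reshaped = values.reshape(2, 2)   # row-major: [[1, 2], [3, 4]]
```

The result is a 2-D ndarray, matching the behavior the deprecated method already had for non-identity shapes (it returned an ndarray, not a Series).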
DOC: Added note about groupby excluding Decimal columns by default | diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index fecc336049a40..1f0b43bab8d4d 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -984,6 +984,33 @@ Note that ``df.groupby('A').colname.std().`` is more efficient than
is only interesting over one column (here ``colname``), it may be filtered
*before* applying the aggregation function.
+.. note::
+ Any object column, also if it contains numerical values such as ``Decimal``
+ objects, is considered as a "nuisance" columns. They are excluded from
+ aggregate functions automatically in groupby.
+
+ If you do wish to include decimal or object columns in an aggregation with
+ other non-nuisance data types, you must do so explicitly.
+
+.. ipython:: python
+
+ from decimal import Decimal
+ df_dec = pd.DataFrame(
+ {'id': [1, 2, 1, 2],
+ 'int_column': [1, 2, 3, 4],
+ 'dec_column': [Decimal('0.50'), Decimal('0.15'), Decimal('0.25'), Decimal('0.40')]
+ }
+ )
+
+ # Decimal columns can be sum'd explicitly by themselves...
+ df_dec.groupby(['id'])[['dec_column']].sum()
+
+ # ...but cannot be combined with standard data types or they will be excluded
+ df_dec.groupby(['id'])[['int_column', 'dec_column']].sum()
+
+ # Use .agg function to aggregate over standard and "nuisance" data types at the same time
+ df_dec.groupby(['id']).agg({'int_column': 'sum', 'dec_column': 'sum'})
+
.. _groupby.observed:
Handling of (un)observed Categorical values
| Also included example of how to explicitly aggregate by Decimal columns.
- closes #13821
| https://api.github.com/repos/pandas-dev/pandas/pulls/18953 | 2017-12-27T08:26:56Z | 2018-11-08T15:08:41Z | 2018-11-08T15:08:40Z | 2018-11-08T15:08:55Z |
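The doc change above computes its grouped sums through pandas; the underlying reason `Decimal` columns are worth the extra `.agg` ceremony is exact decimal arithmetic. A plain-Python sketch of the same grouped sum (hypothetical stand-in for the `df_dec.groupby(['id'])[['dec_column']].sum()` call, no pandas involved):

```python
from decimal import Decimal

# Same data as the df_dec example: (id, dec_column) pairs.
rows = [(1, Decimal('0.50')), (2, Decimal('0.15')),
        (1, Decimal('0.25')), (2, Decimal('0.40'))]

# Group-by-key sum; Decimal addition is exact, unlike binary floats.
totals = {}
for key, val in rows:
    totals[key] = totals.get(key, Decimal('0')) + val
```

Note the `Decimal` values are constructed from strings, not floats, so `0.50 + 0.25` really is `0.75` with no binary rounding.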
BUG: Adjust time values with Period objects in Series.dt.end_time | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index d2d5d40393b62..e3e1b35f89cbb 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -281,6 +281,43 @@ that the dates have been converted to UTC
.. ipython:: python
pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"], utc=True)
+.. _whatsnew_0240.api_breaking.period_end_time:
+
+Time values in ``dt.end_time`` and ``to_timestamp(how='end')``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The time values in :class:`Period` and :class:`PeriodIndex` objects are now set
+to '23:59:59.999999999' when calling :attr:`Series.dt.end_time`, :attr:`Period.end_time`,
+:attr:`PeriodIndex.end_time`, :func:`Period.to_timestamp()` with ``how='end'``,
+or :func:`PeriodIndex.to_timestamp()` with ``how='end'`` (:issue:`17157`)
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+ In [2]: p = pd.Period('2017-01-01', 'D')
+ In [3]: pi = pd.PeriodIndex([p])
+
+ In [4]: pd.Series(pi).dt.end_time[0]
+ Out[4]: Timestamp(2017-01-01 00:00:00)
+
+ In [5]: p.end_time
+ Out[5]: Timestamp(2017-01-01 23:59:59.999999999)
+
+Current Behavior:
+
+Calling :attr:`Series.dt.end_time` will now result in a time of '23:59:59.999999999' as
+is the case with :attr:`Period.end_time`, for example
+
+.. ipython:: python
+
+ p = pd.Period('2017-01-01', 'D')
+ pi = pd.PeriodIndex([p])
+
+ pd.Series(pi).dt.end_time[0]
+
+ p.end_time
+
.. _whatsnew_0240.api.datetimelike.normalize:
Tick DateOffset Normalize Restrictions
diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 65fb0f331d039..96d7994bdc822 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -34,6 +34,7 @@ cdef extern from "../src/datetime/np_datetime.h":
cimport util
from util cimport is_period_object, is_string_object, INT32_MIN
+from pandas._libs.tslibs.timedeltas import Timedelta
from timestamps import Timestamp
from timezones cimport is_utc, is_tzlocal, get_dst_info
from timedeltas cimport delta_to_nanoseconds
@@ -1221,6 +1222,10 @@ cdef class _Period(object):
freq = self._maybe_convert_freq(freq)
how = _validate_end_alias(how)
+ end = how == 'E'
+ if end:
+ return (self + 1).to_timestamp(how='start') - Timedelta(1, 'ns')
+
if freq is None:
base, mult = get_freq_code(self.freq)
freq = get_to_timestamp_base(base)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 00d53ad82b2dc..26aaab2b1b237 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -1235,11 +1235,9 @@ def _generate_regular_range(cls, start, end, periods, freq):
tz = None
if isinstance(start, Timestamp):
tz = start.tz
- start = start.to_pydatetime()
if isinstance(end, Timestamp):
tz = end.tz
- end = end.to_pydatetime()
xdr = generate_range(start=start, end=end,
periods=periods, offset=freq)
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index b315e3ec20830..32aa89010b206 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -25,7 +25,7 @@
from pandas.core.tools.datetimes import parse_time_string
from pandas._libs.lib import infer_dtype
-from pandas._libs import tslib, index as libindex
+from pandas._libs import tslib, index as libindex, Timedelta
from pandas._libs.tslibs.period import (Period, IncompatibleFrequency,
DIFFERENT_FREQ_INDEX,
_validate_end_alias)
@@ -501,6 +501,16 @@ def to_timestamp(self, freq=None, how='start'):
"""
how = _validate_end_alias(how)
+ end = how == 'E'
+ if end:
+ if freq == 'B':
+ # roll forward to ensure we land on B date
+ adjust = Timedelta(1, 'D') - Timedelta(1, 'ns')
+ return self.to_timestamp(how='start') + adjust
+ else:
+ adjust = Timedelta(1, 'ns')
+ return (self + 1).to_timestamp(how='start') - adjust
+
if freq is None:
base, mult = _gfc(self.freq)
freq = frequencies.get_to_timestamp_base(base)
diff --git a/pandas/tests/frame/test_period.py b/pandas/tests/frame/test_period.py
index 482210966fe6b..d56df2371b2e3 100644
--- a/pandas/tests/frame/test_period.py
+++ b/pandas/tests/frame/test_period.py
@@ -5,7 +5,7 @@
import pandas as pd
import pandas.util.testing as tm
from pandas import (PeriodIndex, period_range, DataFrame, date_range,
- Index, to_datetime, DatetimeIndex)
+ Index, to_datetime, DatetimeIndex, Timedelta)
def _permute(obj):
@@ -51,6 +51,7 @@ def test_frame_to_time_stamp(self):
df['mix'] = 'a'
exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
+ exp_index = exp_index + Timedelta(1, 'D') - Timedelta(1, 'ns')
result = df.to_timestamp('D', 'end')
tm.assert_index_equal(result.index, exp_index)
tm.assert_numpy_array_equal(result.values, df.values)
@@ -66,22 +67,26 @@ def _get_with_delta(delta, freq='A-DEC'):
delta = timedelta(hours=23)
result = df.to_timestamp('H', 'end')
exp_index = _get_with_delta(delta)
+ exp_index = exp_index + Timedelta(1, 'h') - Timedelta(1, 'ns')
tm.assert_index_equal(result.index, exp_index)
delta = timedelta(hours=23, minutes=59)
result = df.to_timestamp('T', 'end')
exp_index = _get_with_delta(delta)
+ exp_index = exp_index + Timedelta(1, 'm') - Timedelta(1, 'ns')
tm.assert_index_equal(result.index, exp_index)
result = df.to_timestamp('S', 'end')
delta = timedelta(hours=23, minutes=59, seconds=59)
exp_index = _get_with_delta(delta)
+ exp_index = exp_index + Timedelta(1, 's') - Timedelta(1, 'ns')
tm.assert_index_equal(result.index, exp_index)
# columns
df = df.T
exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
+ exp_index = exp_index + Timedelta(1, 'D') - Timedelta(1, 'ns')
result = df.to_timestamp('D', 'end', axis=1)
tm.assert_index_equal(result.columns, exp_index)
tm.assert_numpy_array_equal(result.values, df.values)
@@ -93,16 +98,19 @@ def _get_with_delta(delta, freq='A-DEC'):
delta = timedelta(hours=23)
result = df.to_timestamp('H', 'end', axis=1)
exp_index = _get_with_delta(delta)
+ exp_index = exp_index + Timedelta(1, 'h') - Timedelta(1, 'ns')
tm.assert_index_equal(result.columns, exp_index)
delta = timedelta(hours=23, minutes=59)
result = df.to_timestamp('T', 'end', axis=1)
exp_index = _get_with_delta(delta)
+ exp_index = exp_index + Timedelta(1, 'm') - Timedelta(1, 'ns')
tm.assert_index_equal(result.columns, exp_index)
result = df.to_timestamp('S', 'end', axis=1)
delta = timedelta(hours=23, minutes=59, seconds=59)
exp_index = _get_with_delta(delta)
+ exp_index = exp_index + Timedelta(1, 's') - Timedelta(1, 'ns')
tm.assert_index_equal(result.columns, exp_index)
# invalid axis
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 923d826fe1a5e..405edba83dc7a 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -366,6 +366,19 @@ def test_periods_number_check(self):
with pytest.raises(ValueError):
period_range('2011-1-1', '2012-1-1', 'B')
+ def test_start_time(self):
+ # GH 17157
+ index = PeriodIndex(freq='M', start='2016-01-01', end='2016-05-31')
+ expected_index = date_range('2016-01-01', end='2016-05-31', freq='MS')
+ tm.assert_index_equal(index.start_time, expected_index)
+
+ def test_end_time(self):
+ # GH 17157
+ index = PeriodIndex(freq='M', start='2016-01-01', end='2016-05-31')
+ expected_index = date_range('2016-01-01', end='2016-05-31', freq='M')
+ expected_index = expected_index.shift(1, freq='D').shift(-1, freq='ns')
+ tm.assert_index_equal(index.end_time, expected_index)
+
def test_index_duplicate_periods(self):
# monotonic
idx = PeriodIndex([2000, 2007, 2007, 2009, 2009], freq='A-JUN')
diff --git a/pandas/tests/indexes/period/test_scalar_compat.py b/pandas/tests/indexes/period/test_scalar_compat.py
index 56bd2adf58719..a66a81fe99cd4 100644
--- a/pandas/tests/indexes/period/test_scalar_compat.py
+++ b/pandas/tests/indexes/period/test_scalar_compat.py
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
"""Tests for PeriodIndex behaving like a vectorized Period scalar"""
-from pandas import PeriodIndex, date_range
+from pandas import PeriodIndex, date_range, Timedelta
import pandas.util.testing as tm
@@ -14,4 +14,5 @@ def test_start_time(self):
def test_end_time(self):
index = PeriodIndex(freq='M', start='2016-01-01', end='2016-05-31')
expected_index = date_range('2016-01-01', end='2016-05-31', freq='M')
+ expected_index += Timedelta(1, 'D') - Timedelta(1, 'ns')
tm.assert_index_equal(index.end_time, expected_index)
diff --git a/pandas/tests/indexes/period/test_tools.py b/pandas/tests/indexes/period/test_tools.py
index 16b558916df2d..c4ed07d98413f 100644
--- a/pandas/tests/indexes/period/test_tools.py
+++ b/pandas/tests/indexes/period/test_tools.py
@@ -3,6 +3,7 @@
import pytest
import pandas as pd
+from pandas import Timedelta
import pandas.util.testing as tm
import pandas.core.indexes.period as period
from pandas.compat import lrange
@@ -60,6 +61,7 @@ def test_to_timestamp(self):
exp_index = date_range('1/1/2001', end='12/31/2009', freq='A-DEC')
result = series.to_timestamp(how='end')
+ exp_index = exp_index + Timedelta(1, 'D') - Timedelta(1, 'ns')
tm.assert_index_equal(result.index, exp_index)
assert result.name == 'foo'
@@ -74,16 +76,19 @@ def _get_with_delta(delta, freq='A-DEC'):
delta = timedelta(hours=23)
result = series.to_timestamp('H', 'end')
exp_index = _get_with_delta(delta)
+ exp_index = exp_index + Timedelta(1, 'h') - Timedelta(1, 'ns')
tm.assert_index_equal(result.index, exp_index)
delta = timedelta(hours=23, minutes=59)
result = series.to_timestamp('T', 'end')
exp_index = _get_with_delta(delta)
+ exp_index = exp_index + Timedelta(1, 'm') - Timedelta(1, 'ns')
tm.assert_index_equal(result.index, exp_index)
result = series.to_timestamp('S', 'end')
delta = timedelta(hours=23, minutes=59, seconds=59)
exp_index = _get_with_delta(delta)
+ exp_index = exp_index + Timedelta(1, 's') - Timedelta(1, 'ns')
tm.assert_index_equal(result.index, exp_index)
index = PeriodIndex(freq='H', start='1/1/2001', end='1/2/2001')
@@ -92,6 +97,7 @@ def _get_with_delta(delta, freq='A-DEC'):
exp_index = date_range('1/1/2001 00:59:59', end='1/2/2001 00:59:59',
freq='H')
result = series.to_timestamp(how='end')
+ exp_index = exp_index + Timedelta(1, 's') - Timedelta(1, 'ns')
tm.assert_index_equal(result.index, exp_index)
assert result.name == 'foo'
@@ -284,6 +290,7 @@ def test_to_timestamp_pi_mult(self):
result = idx.to_timestamp(how='E')
expected = DatetimeIndex(['2011-02-28', 'NaT', '2011-03-31'],
name='idx')
+ expected = expected + Timedelta(1, 'D') - Timedelta(1, 'ns')
tm.assert_index_equal(result, expected)
def test_to_timestamp_pi_combined(self):
@@ -298,11 +305,13 @@ def test_to_timestamp_pi_combined(self):
expected = DatetimeIndex(['2011-01-02 00:59:59',
'2011-01-03 01:59:59'],
name='idx')
+ expected = expected + Timedelta(1, 's') - Timedelta(1, 'ns')
tm.assert_index_equal(result, expected)
result = idx.to_timestamp(how='E', freq='H')
expected = DatetimeIndex(['2011-01-02 00:00', '2011-01-03 01:00'],
name='idx')
+ expected = expected + Timedelta(1, 'h') - Timedelta(1, 'ns')
tm.assert_index_equal(result, expected)
def test_period_astype_to_timestamp(self):
@@ -312,6 +321,7 @@ def test_period_astype_to_timestamp(self):
tm.assert_index_equal(pi.astype('datetime64[ns]'), exp)
exp = pd.DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'])
+ exp = exp + Timedelta(1, 'D') - Timedelta(1, 'ns')
tm.assert_index_equal(pi.astype('datetime64[ns]', how='end'), exp)
exp = pd.DatetimeIndex(['2011-01-01', '2011-02-01', '2011-03-01'],
@@ -321,6 +331,7 @@ def test_period_astype_to_timestamp(self):
exp = pd.DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'],
tz='US/Eastern')
+ exp = exp + Timedelta(1, 'D') - Timedelta(1, 'ns')
res = pi.astype('datetime64[ns, US/Eastern]', how='end')
tm.assert_index_equal(res, exp)
diff --git a/pandas/tests/scalar/period/test_period.py b/pandas/tests/scalar/period/test_period.py
index eccd86a888fb9..4a17b2efd1dec 100644
--- a/pandas/tests/scalar/period/test_period.py
+++ b/pandas/tests/scalar/period/test_period.py
@@ -5,6 +5,7 @@
from datetime import datetime, date, timedelta
import pandas as pd
+from pandas import Timedelta
import pandas.util.testing as tm
import pandas.core.indexes.period as period
from pandas.compat import text_type, iteritems
@@ -274,12 +275,14 @@ def test_timestamp_tz_arg_dateutil_from_string(self):
def test_timestamp_mult(self):
p = pd.Period('2011-01', freq='M')
- assert p.to_timestamp(how='S') == pd.Timestamp('2011-01-01')
- assert p.to_timestamp(how='E') == pd.Timestamp('2011-01-31')
+ assert p.to_timestamp(how='S') == Timestamp('2011-01-01')
+ expected = Timestamp('2011-02-01') - Timedelta(1, 'ns')
+ assert p.to_timestamp(how='E') == expected
p = pd.Period('2011-01', freq='3M')
- assert p.to_timestamp(how='S') == pd.Timestamp('2011-01-01')
- assert p.to_timestamp(how='E') == pd.Timestamp('2011-03-31')
+ assert p.to_timestamp(how='S') == Timestamp('2011-01-01')
+ expected = Timestamp('2011-04-01') - Timedelta(1, 'ns')
+ assert p.to_timestamp(how='E') == expected
def test_construction(self):
i1 = Period('1/1/2005', freq='M')
@@ -611,19 +614,19 @@ def _ex(p):
p = Period('1985', freq='A')
result = p.to_timestamp('H', how='end')
- expected = datetime(1985, 12, 31, 23)
+ expected = Timestamp(1986, 1, 1) - Timedelta(1, 'ns')
assert result == expected
result = p.to_timestamp('3H', how='end')
assert result == expected
result = p.to_timestamp('T', how='end')
- expected = datetime(1985, 12, 31, 23, 59)
+ expected = Timestamp(1986, 1, 1) - Timedelta(1, 'ns')
assert result == expected
result = p.to_timestamp('2T', how='end')
assert result == expected
result = p.to_timestamp(how='end')
- expected = datetime(1985, 12, 31)
+ expected = Timestamp(1986, 1, 1) - Timedelta(1, 'ns')
assert result == expected
expected = datetime(1985, 1, 1)
diff --git a/pandas/tests/series/test_period.py b/pandas/tests/series/test_period.py
index 63726f27914f3..90dbe26a2f0ea 100644
--- a/pandas/tests/series/test_period.py
+++ b/pandas/tests/series/test_period.py
@@ -3,7 +3,8 @@
import pandas as pd
import pandas.util.testing as tm
import pandas.core.indexes.period as period
-from pandas import Series, period_range, DataFrame
+from pandas import Series, period_range, DataFrame, Period
+import pytest
def _permute(obj):
@@ -167,3 +168,23 @@ def test_truncate(self):
pd.Period('2017-09-02')
])
tm.assert_series_equal(result2, pd.Series([2], index=expected_idx2))
+
+ @pytest.mark.parametrize('input_vals', [
+ [Period('2016-01', freq='M'), Period('2016-02', freq='M')],
+ [Period('2016-01-01', freq='D'), Period('2016-01-02', freq='D')],
+ [Period('2016-01-01 00:00:00', freq='H'),
+ Period('2016-01-01 01:00:00', freq='H')],
+ [Period('2016-01-01 00:00:00', freq='M'),
+ Period('2016-01-01 00:01:00', freq='M')],
+ [Period('2016-01-01 00:00:00', freq='S'),
+ Period('2016-01-01 00:00:01', freq='S')]
+ ])
+ def test_end_time_timevalues(self, input_vals):
+ # GH 17157
+ # Check that the time part of the Period is adjusted by end_time
+ # when using the dt accessor on a Series
+
+ s = Series(input_vals)
+ result = s.dt.end_time
+ expected = s.apply(lambda x: x.end_time)
+ tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 1f70d09e43b37..de4dc2bcf25a4 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -21,7 +21,7 @@
import pandas as pd
from pandas import (Series, DataFrame, Panel, Index, isna,
- notna, Timestamp)
+ notna, Timestamp, Timedelta)
from pandas.compat import range, lrange, zip, OrderedDict
from pandas.errors import UnsupportedFunctionCall
@@ -1702,12 +1702,14 @@ def test_resample_anchored_intraday(self):
result = df.resample('M').mean()
expected = df.resample(
'M', kind='period').mean().to_timestamp(how='end')
+ expected.index += Timedelta(1, 'ns') - Timedelta(1, 'D')
tm.assert_frame_equal(result, expected)
result = df.resample('M', closed='left').mean()
exp = df.tshift(1, freq='D').resample('M', kind='period').mean()
exp = exp.to_timestamp(how='end')
+ exp.index = exp.index + Timedelta(1, 'ns') - Timedelta(1, 'D')
tm.assert_frame_equal(result, exp)
rng = date_range('1/1/2012', '4/1/2012', freq='100min')
@@ -1716,12 +1718,14 @@ def test_resample_anchored_intraday(self):
result = df.resample('Q').mean()
expected = df.resample(
'Q', kind='period').mean().to_timestamp(how='end')
+ expected.index += Timedelta(1, 'ns') - Timedelta(1, 'D')
tm.assert_frame_equal(result, expected)
result = df.resample('Q', closed='left').mean()
expected = df.tshift(1, freq='D').resample('Q', kind='period',
closed='left').mean()
expected = expected.to_timestamp(how='end')
+ expected.index += Timedelta(1, 'ns') - Timedelta(1, 'D')
tm.assert_frame_equal(result, expected)
ts = _simple_ts('2012-04-29 23:00', '2012-04-30 5:00', freq='h')
@@ -2473,7 +2477,7 @@ def test_resample_to_timestamps(self):
ts = _simple_pts('1/1/1990', '12/31/1995', freq='M')
result = ts.resample('A-DEC', kind='timestamp').mean()
- expected = ts.to_timestamp(how='end').resample('A-DEC').mean()
+ expected = ts.to_timestamp(how='start').resample('A-DEC').mean()
assert_series_equal(result, expected)
def test_resample_to_quarterly(self):
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 60981f41ec716..9d41401a7eefc 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -1321,7 +1321,7 @@ def _end_apply_index(self, dtindex):
roll = self.n
base = (base_period + roll).to_timestamp(how='end')
- return base + off
+ return base + off + Timedelta(1, 'ns') - Timedelta(1, 'D')
def onOffset(self, dt):
if self.normalize and not _is_normalized(dt):
| - [x] closes #17157
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/18952 | 2017-12-27T01:04:56Z | 2018-07-31T13:03:03Z | 2018-07-31T13:03:03Z | 2018-08-03T20:14:28Z |
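The behavior described by the whatsnew entry above can be checked end-to-end on a current pandas: after the fix, ``Series.dt.end_time`` agrees with ``Period.end_time``, and both equal the start of the next period minus one nanosecond, which is the construction the patch uses internally. A small sketch (assuming a pandas version that includes this change):

```python
import pandas as pd

p = pd.Period("2017-01-01", "D")
pi = pd.PeriodIndex([p])

# The .dt accessor and the scalar attribute now agree: both land
# on the last nanosecond of the period.
series_end = pd.Series(pi).dt.end_time[0]
scalar_end = p.end_time

# Same construction the patch applies in Period.to_timestamp(how='E'):
# start of the next period, minus one nanosecond.
expected = (p + 1).to_timestamp(how="start") - pd.Timedelta(1, "ns")
```

The business-day frequency ``'B'`` is special-cased in ``PeriodIndex.to_timestamp`` (rolling forward from the period start instead), so this equivalence is for non-business frequencies.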
DOC: greater consistency and spell-check for intro docs | diff --git a/doc/source/10min.rst b/doc/source/10min.rst
index 49142311ff057..46c3ffef58228 100644
--- a/doc/source/10min.rst
+++ b/doc/source/10min.rst
@@ -25,7 +25,7 @@
********************
This is a short introduction to pandas, geared mainly for new users.
-You can see more complex recipes in the :ref:`Cookbook<cookbook>`
+You can see more complex recipes in the :ref:`Cookbook<cookbook>`.
Customarily, we import as follows:
@@ -38,7 +38,7 @@ Customarily, we import as follows:
Object Creation
---------------
-See the :ref:`Data Structure Intro section <dsintro>`
+See the :ref:`Data Structure Intro section <dsintro>`.
Creating a :class:`Series` by passing a list of values, letting pandas create
a default integer index:
@@ -70,7 +70,8 @@ Creating a ``DataFrame`` by passing a dict of objects that can be converted to s
'F' : 'foo' })
df2
-Having specific :ref:`dtypes <basics.dtypes>`
+The columns of the resulting ``DataFrame`` have different
+:ref:`dtypes <basics.dtypes>`.
.. ipython:: python
@@ -104,16 +105,16 @@ truncated for brevity.
Viewing Data
------------
-See the :ref:`Basics section <basics>`
+See the :ref:`Basics section <basics>`.
-See the top & bottom rows of the frame
+Here is how to view the top and bottom rows of the frame:
.. ipython:: python
df.head()
df.tail(3)
-Display the index, columns, and the underlying numpy data
+Display the index, columns, and the underlying numpy data:
.. ipython:: python
@@ -121,25 +122,25 @@ Display the index, columns, and the underlying numpy data
df.columns
df.values
-Describe shows a quick statistic summary of your data
+:func:`~DataFrame.describe` shows a quick statistic summary of your data:
.. ipython:: python
df.describe()
-Transposing your data
+Transposing your data:
.. ipython:: python
df.T
-Sorting by an axis
+Sorting by an axis:
.. ipython:: python
df.sort_index(axis=1, ascending=False)
-Sorting by values
+Sorting by values:
.. ipython:: python
@@ -155,13 +156,13 @@ Selection
recommend the optimized pandas data access methods, ``.at``, ``.iat``,
``.loc``, ``.iloc`` and ``.ix``.
-See the indexing documentation :ref:`Indexing and Selecting Data <indexing>` and :ref:`MultiIndex / Advanced Indexing <advanced>`
+See the indexing documentation :ref:`Indexing and Selecting Data <indexing>` and :ref:`MultiIndex / Advanced Indexing <advanced>`.
Getting
~~~~~~~
Selecting a single column, which yields a ``Series``,
-equivalent to ``df.A``
+equivalent to ``df.A``:
.. ipython:: python
@@ -177,39 +178,39 @@ Selecting via ``[]``, which slices the rows.
Selection by Label
~~~~~~~~~~~~~~~~~~
-See more in :ref:`Selection by Label <indexing.label>`
+See more in :ref:`Selection by Label <indexing.label>`.
-For getting a cross section using a label
+For getting a cross section using a label:
.. ipython:: python
df.loc[dates[0]]
-Selecting on a multi-axis by label
+Selecting on a multi-axis by label:
.. ipython:: python
df.loc[:,['A','B']]
-Showing label slicing, both endpoints are *included*
+Showing label slicing, both endpoints are *included*:
.. ipython:: python
df.loc['20130102':'20130104',['A','B']]
-Reduction in the dimensions of the returned object
+Reduction in the dimensions of the returned object:
.. ipython:: python
df.loc['20130102',['A','B']]
-For getting a scalar value
+For getting a scalar value:
.. ipython:: python
df.loc[dates[0],'A']
-For getting fast access to a scalar (equiv to the prior method)
+For getting fast access to a scalar (equivalent to the prior method):
.. ipython:: python
@@ -218,45 +219,45 @@ For getting fast access to a scalar (equiv to the prior method)
Selection by Position
~~~~~~~~~~~~~~~~~~~~~
-See more in :ref:`Selection by Position <indexing.integer>`
+See more in :ref:`Selection by Position <indexing.integer>`.
-Select via the position of the passed integers
+Select via the position of the passed integers:
.. ipython:: python
df.iloc[3]
-By integer slices, acting similar to numpy/python
+By integer slices, acting similar to numpy/python:
.. ipython:: python
df.iloc[3:5,0:2]
-By lists of integer position locations, similar to the numpy/python style
+By lists of integer position locations, similar to the numpy/python style:
.. ipython:: python
df.iloc[[1,2,4],[0,2]]
-For slicing rows explicitly
+For slicing rows explicitly:
.. ipython:: python
df.iloc[1:3,:]
-For slicing columns explicitly
+For slicing columns explicitly:
.. ipython:: python
df.iloc[:,1:3]
-For getting a value explicitly
+For getting a value explicitly:
.. ipython:: python
df.iloc[1,1]
-For getting fast access to a scalar (equiv to the prior method)
+For getting fast access to a scalar (equivalent to the prior method):
.. ipython:: python
@@ -290,7 +291,7 @@ Setting
~~~~~~~
Setting a new column automatically aligns the data
-by the indexes
+by the indexes.
.. ipython:: python
@@ -298,25 +299,25 @@ by the indexes
s1
df['F'] = s1
-Setting values by label
+Setting values by label:
.. ipython:: python
df.at[dates[0],'A'] = 0
-Setting values by position
+Setting values by position:
.. ipython:: python
df.iat[0,1] = 0
-Setting by assigning with a numpy array
+Setting by assigning with a numpy array:
.. ipython:: python
df.loc[:,'D'] = np.array([5] * len(df))
-The result of the prior setting operations
+The result of the prior setting operations.
.. ipython:: python
@@ -336,7 +337,7 @@ Missing Data
pandas primarily uses the value ``np.nan`` to represent missing data. It is by
default not included in computations. See the :ref:`Missing Data section
-<missing_data>`
+<missing_data>`.
Reindexing allows you to change/add/delete the index on a specified axis. This
returns a copy of the data.
@@ -353,13 +354,13 @@ To drop any rows that have missing data.
df1.dropna(how='any')
-Filling missing data
+Filling missing data.
.. ipython:: python
df1.fillna(value=5)
-To get the boolean mask where values are ``nan``
+To get the boolean mask where values are ``nan``.
.. ipython:: python
@@ -369,20 +370,20 @@ To get the boolean mask where values are ``nan``
Operations
----------
-See the :ref:`Basic section on Binary Ops <basics.binop>`
+See the :ref:`Basic section on Binary Ops <basics.binop>`.
Stats
~~~~~
Operations in general *exclude* missing data.
-Performing a descriptive statistic
+Performing a descriptive statistic:
.. ipython:: python
df.mean()
-Same operation on the other axis
+Same operation on the other axis:
.. ipython:: python
@@ -401,7 +402,7 @@ In addition, pandas automatically broadcasts along the specified dimension.
Apply
~~~~~
-Applying functions to the data
+Applying functions to the data:
.. ipython:: python
@@ -411,7 +412,7 @@ Applying functions to the data
Histogramming
~~~~~~~~~~~~~
-See more at :ref:`Histogramming and Discretization <basics.discretization>`
+See more at :ref:`Histogramming and Discretization <basics.discretization>`.
.. ipython:: python
@@ -425,7 +426,7 @@ String Methods
Series is equipped with a set of string processing methods in the `str`
attribute that make it easy to operate on each element of the array, as in the
code snippet below. Note that pattern-matching in `str` generally uses `regular
-expressions <https://docs.python.org/2/library/re.html>`__ by default (and in
+expressions <https://docs.python.org/3/library/re.html>`__ by default (and in
some cases always uses them). See more at :ref:`Vectorized String Methods
<text.string_methods>`.
@@ -445,7 +446,7 @@ DataFrame, and Panel objects with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
-See the :ref:`Merging section <merging>`
+See the :ref:`Merging section <merging>`.
Concatenating pandas objects together with :func:`concat`:
@@ -462,7 +463,7 @@ Concatenating pandas objects together with :func:`concat`:
Join
~~~~
-SQL style merges. See the :ref:`Database style joining <merging.join>`
+SQL style merges. See the :ref:`Database style joining <merging.join>` section.
.. ipython:: python
@@ -486,7 +487,8 @@ Another example that can be given is:
Append
~~~~~~
-Append rows to a dataframe. See the :ref:`Appending <merging.concatenation>`
+Append rows to a dataframe. See the :ref:`Appending <merging.concatenation>`
+section.
.. ipython:: python
@@ -500,13 +502,13 @@ Grouping
--------
By "group by" we are referring to a process involving one or more of the
-following steps
+following steps:
- **Splitting** the data into groups based on some criteria
- **Applying** a function to each group independently
- **Combining** the results into a data structure
-See the :ref:`Grouping section <groupby>`
+See the :ref:`Grouping section <groupby>`.
.. ipython:: python
@@ -518,14 +520,15 @@ See the :ref:`Grouping section <groupby>`
'D' : np.random.randn(8)})
df
-Grouping and then applying a function ``sum`` to the resulting groups.
+Grouping and then applying the :meth:`~DataFrame.sum` function to the resulting
+groups.
.. ipython:: python
df.groupby('A').sum()
-Grouping by multiple columns forms a hierarchical index, which we then apply
-the function.
+Grouping by multiple columns forms a hierarchical index, and again we can
+apply the ``sum`` function.
.. ipython:: python
@@ -595,7 +598,7 @@ Time Series
pandas has simple, powerful, and efficient functionality for performing
resampling operations during frequency conversion (e.g., converting secondly
data into 5-minutely data). This is extremely common in, but not limited to,
-financial applications. See the :ref:`Time Series section <timeseries>`
+financial applications. See the :ref:`Time Series section <timeseries>`.
.. ipython:: python
@@ -603,7 +606,7 @@ financial applications. See the :ref:`Time Series section <timeseries>`
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
ts.resample('5Min').sum()
-Time zone representation
+Time zone representation:
.. ipython:: python
@@ -613,13 +616,13 @@ Time zone representation
ts_utc = ts.tz_localize('UTC')
ts_utc
-Convert to another time zone
+Converting to another time zone:
.. ipython:: python
ts_utc.tz_convert('US/Eastern')
-Converting between time span representations
+Converting between time span representations:
.. ipython:: python
@@ -659,14 +662,15 @@ Convert the raw grades to a categorical data type.
df["grade"] = df["raw_grade"].astype("category")
df["grade"]
-Rename the categories to more meaningful names (assigning to ``Series.cat.categories`` is inplace!)
+Rename the categories to more meaningful names (assigning to
+``Series.cat.categories`` is inplace!).
.. ipython:: python
df["grade"].cat.categories = ["very good", "good", "very bad"]
Reorder the categories and simultaneously add the missing categories (methods under ``Series
-.cat`` return a new ``Series`` per default).
+.cat`` return a new ``Series`` by default).
.. ipython:: python
@@ -679,7 +683,7 @@ Sorting is per order in the categories, not lexical order.
df.sort_values(by="grade")
-Grouping by a categorical column shows also empty categories.
+Grouping by a categorical column also shows empty categories.
.. ipython:: python
@@ -689,7 +693,7 @@ Grouping by a categorical column shows also empty categories.
Plotting
--------
-:ref:`Plotting <visualization>` docs.
+See the :ref:`Plotting <visualization>` docs.
.. ipython:: python
:suppress:
@@ -705,8 +709,8 @@ Plotting
@savefig series_plot_basic.png
ts.plot()
-On DataFrame, :meth:`~DataFrame.plot` is a convenience to plot all of the
-columns with labels:
+On a DataFrame, the :meth:`~DataFrame.plot` method is a convenience to plot all
+of the columns with labels:
.. ipython:: python
@@ -723,13 +727,13 @@ Getting Data In/Out
CSV
~~~
-:ref:`Writing to a csv file <io.store_in_csv>`
+:ref:`Writing to a csv file. <io.store_in_csv>`
.. ipython:: python
df.to_csv('foo.csv')
-:ref:`Reading from a csv file <io.read_csv_table>`
+:ref:`Reading from a csv file. <io.read_csv_table>`
.. ipython:: python
@@ -743,15 +747,15 @@ CSV
HDF5
~~~~
-Reading and writing to :ref:`HDFStores <io.hdf5>`
+Reading and writing to :ref:`HDFStores <io.hdf5>`.
-Writing to a HDF5 Store
+Writing to a HDF5 Store.
.. ipython:: python
df.to_hdf('foo.h5','df')
-Reading from a HDF5 Store
+Reading from a HDF5 Store.
.. ipython:: python
@@ -765,15 +769,15 @@ Reading from a HDF5 Store
Excel
~~~~~
-Reading and writing to :ref:`MS Excel <io.excel>`
+Reading and writing to :ref:`MS Excel <io.excel>`.
-Writing to an excel file
+Writing to an excel file.
.. ipython:: python
df.to_excel('foo.xlsx', sheet_name='Sheet1')
-Reading from an excel file
+Reading from an excel file.
.. ipython:: python
@@ -787,7 +791,7 @@ Reading from an excel file
Gotchas
-------
-If you are trying an operation and you see an exception like:
+If you are attempting to perform an operation, you might see an exception like:
.. code-block:: python
diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 9318df2b76564..ecb9a8f2d79db 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -133,7 +133,7 @@ of interest:
* Broadcasting behavior between higher- (e.g. DataFrame) and
lower-dimensional (e.g. Series) objects.
- * Missing data in computations
+ * Missing data in computations.
We will demonstrate how to manage these issues independently, though they can
be handled simultaneously.
@@ -226,12 +226,12 @@ We can also do elementwise :func:`divmod`:
Missing data / operations with fill values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In Series and DataFrame (though not yet in Panel), the arithmetic functions
-have the option of inputting a *fill_value*, namely a value to substitute when
-at most one of the values at a location are missing. For example, when adding
-two DataFrame objects, you may wish to treat NaN as 0 unless both DataFrames
-are missing that value, in which case the result will be NaN (you can later
-replace NaN with some other value using ``fillna`` if you wish).
+In Series and DataFrame, the arithmetic functions have the option of inputting
+a *fill_value*, namely a value to substitute when at most one of the values at
+a location are missing. For example, when adding two DataFrame objects, you may
+wish to treat NaN as 0 unless both DataFrames are missing that value, in which
+case the result will be NaN (you can later replace NaN with some other value
+using ``fillna`` if you wish).
.. ipython:: python
:suppress:
@@ -260,9 +260,9 @@ arithmetic operations described above:
df.gt(df2)
df2.ne(df)
-These operations produce a pandas object the same type as the left-hand-side input
-that if of dtype ``bool``. These ``boolean`` objects can be used in indexing operations,
-see :ref:`here<indexing.boolean>`
+These operations produce a pandas object of the same type as the left-hand-side
+input that is of dtype ``bool``. These ``boolean`` objects can be used in
+indexing operations, see the section on :ref:`Boolean indexing<indexing.boolean>`.
.. _basics.reductions:
@@ -316,7 +316,7 @@ To evaluate single-element pandas objects in a boolean context, use the method
>>> df and df2
- These both will raise as you are trying to compare multiple values.
+ These will both raise errors, as you are trying to compare multiple values.
.. code-block:: python
@@ -329,7 +329,7 @@ See :ref:`gotchas<gotchas.truth>` for a more detailed discussion.
Comparing if objects are equivalent
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Often you may find there is more than one way to compute the same
+Often you may find that there is more than one way to compute the same
result. As a simple example, consider ``df+df`` and ``df*2``. To test
that these two computations produce the same result, given the tools
shown above, you might imagine using ``(df+df == df*2).all()``. But in
@@ -341,7 +341,7 @@ fact, this expression is False:
(df+df == df*2).all()
Notice that the boolean DataFrame ``df+df == df*2`` contains some False values!
-That is because NaNs do not compare as equals:
+This is because NaNs do not compare as equals:
.. ipython:: python
@@ -368,7 +368,7 @@ equality to be True:
Comparing array-like objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-You can conveniently do element-wise comparisons when comparing a pandas
+You can conveniently perform element-wise comparisons when comparing a pandas
data structure with a scalar value:
.. ipython:: python
@@ -452,8 +452,8 @@ So, for instance, to reproduce :meth:`~DataFrame.combine_first` as above:
Descriptive statistics
----------------------
-A large number of methods for computing descriptive statistics and other related
-operations on :ref:`Series <api.series.stats>`, :ref:`DataFrame
+There exists a large number of methods for computing descriptive statistics and
+other related operations on :ref:`Series <api.series.stats>`, :ref:`DataFrame
<api.dataframe.stats>`, and :ref:`Panel <api.panel.stats>`. Most of these
are aggregations (hence producing a lower-dimensional result) like
:meth:`~DataFrame.sum`, :meth:`~DataFrame.mean`, and :meth:`~DataFrame.quantile`,
@@ -764,7 +764,7 @@ For example, we can fit a regression using statsmodels. Their API expects a form
The pipe method is inspired by unix pipes and more recently dplyr_ and magrittr_, which
have introduced the popular ``(%>%)`` (read pipe) operator for R_.
The implementation of ``pipe`` here is quite clean and feels right at home in python.
-We encourage you to view the source code (``pd.DataFrame.pipe??`` in IPython).
+We encourage you to view the source code of :meth:`~DataFrame.pipe`.
.. _dplyr: https://github.com/hadley/dplyr
.. _magrittr: https://github.com/smbache/magrittr
@@ -786,7 +786,7 @@ statistics methods, take an optional ``axis`` argument:
df.apply(np.cumsum)
df.apply(np.exp)
-``.apply()`` will also dispatch on a string method name.
+The :meth:`~DataFrame.apply` method will also dispatch on a string method name.
.. ipython:: python
@@ -863,8 +863,9 @@ We will use a similar starting frame from above:
tsdf.iloc[3:7] = np.nan
tsdf
-Using a single function is equivalent to :meth:`~DataFrame.apply`; You can also pass named methods as strings.
-These will return a ``Series`` of the aggregated output:
+Using a single function is equivalent to :meth:`~DataFrame.apply`. You can also
+pass named methods as strings. These will return a ``Series`` of the aggregated
+output:
.. ipython:: python
@@ -875,7 +876,7 @@ These will return a ``Series`` of the aggregated output:
# these are equivalent to a ``.sum()`` because we are aggregating on a single function
tsdf.sum()
-Single aggregations on a ``Series`` this will result in a scalar value:
+Single aggregations on a ``Series`` will return a scalar value:
.. ipython:: python
@@ -885,8 +886,8 @@ Single aggregations on a ``Series`` this will result in a scalar value:
Aggregating with multiple functions
+++++++++++++++++++++++++++++++++++
-You can pass multiple aggregation arguments as a list.
-The results of each of the passed functions will be a row in the resultant ``DataFrame``.
+You can pass multiple aggregation arguments as a list.
+The results of each of the passed functions will be a row in the resulting ``DataFrame``.
These are naturally named from the aggregation function.
.. ipython:: python
@@ -989,7 +990,7 @@ The :meth:`~DataFrame.transform` method returns an object that is indexed the sa
as the original. This API allows you to provide *multiple* operations at the same
time rather than one-by-one. Its API is quite similar to the ``.agg`` API.
-Use a similar frame to the above sections.
+We create a frame similar to the one used in the above sections.
.. ipython:: python
@@ -1008,7 +1009,7 @@ function name or a user defined function.
tsdf.transform('abs')
tsdf.transform(lambda x: x.abs())
-Here ``.transform()`` received a single function; this is equivalent to a ufunc application
+Here :meth:`~DataFrame.transform` received a single function; this is equivalent to a ufunc application.
.. ipython:: python
@@ -1044,7 +1045,7 @@ Transforming with a dict
++++++++++++++++++++++++
-Passing a dict of functions will will allow selective transforming per column.
+Passing a dict of functions will allow selective transforming per column.
.. ipython:: python
@@ -1080,7 +1081,7 @@ a single value and returning a single value. For example:
df4['one'].map(f)
df4.applymap(f)
-:meth:`Series.map` has an additional feature which is that it can be used to easily
+:meth:`Series.map` has an additional feature; it can be used to easily
"link" or "map" values defined by a secondary series. This is closely related
to :ref:`merging/joining functionality <merging>`:
@@ -1123,13 +1124,13 @@ A reduction operation.
panel.apply(lambda x: x.dtype, axis='items')
-A similar reduction type operation
+A similar reduction type operation.
.. ipython:: python
panel.apply(lambda x: x.sum(), axis='major_axis')
-This last reduction is equivalent to
+This last reduction is equivalent to:
.. ipython:: python
@@ -1157,7 +1158,7 @@ Apply can also accept multiple axes in the ``axis`` argument. This will pass a
result
result.loc[:,:,'ItemA']
-This is equivalent to the following
+This is equivalent to the following:
.. ipython:: python
@@ -1358,9 +1359,9 @@ Note that the same result could have been achieved using
ts2.reindex(ts.index).fillna(method='ffill')
-:meth:`~Series.reindex` will raise a ValueError if the index is not monotonic
+:meth:`~Series.reindex` will raise a ValueError if the index is not monotonically
increasing or decreasing. :meth:`~Series.fillna` and :meth:`~Series.interpolate`
-will not make any checks on the order of the index.
+will not perform any checks on the order of the index.
.. _basics.limits_on_reindex_fill:
@@ -1428,7 +1429,7 @@ Series can also be used:
df.rename(columns={'one': 'foo', 'two': 'bar'},
index={'a': 'apple', 'b': 'banana', 'd': 'durian'})
-If the mapping doesn't include a column/index label, it isn't renamed. Also
+If the mapping doesn't include a column/index label, it isn't renamed. Note that
extra labels in the mapping don't throw an error.
.. versionadded:: 0.21.0
@@ -1438,8 +1439,8 @@ you specify a single ``mapper`` and the ``axis`` to apply that mapping to.
.. ipython:: python
- df.rename({'one': 'foo', 'two': 'bar'}, axis='columns'})
- df.rename({'a': 'apple', 'b': 'banana', 'd': 'durian'}, axis='columns'})
+ df.rename({'one': 'foo', 'two': 'bar'}, axis='columns')
+ df.rename({'a': 'apple', 'b': 'banana', 'd': 'durian'}, axis='index')
The :meth:`~DataFrame.rename` method also provides an ``inplace`` named
@@ -1515,7 +1516,7 @@ To iterate over the rows of a DataFrame, you can use the following methods:
over the values. See the docs on :ref:`function application <basics.apply>`.
* If you need to do iterative manipulations on the values but performance is
- important, consider writing the inner loop using e.g. cython or numba.
+ important, consider writing the inner loop with cython or numba.
See the :ref:`enhancing performance <enhancingperf>` section for some
examples of this approach.
@@ -1594,7 +1595,7 @@ index value along with a Series containing the data in each row:
To preserve dtypes while iterating over the rows, it is better
to use :meth:`~DataFrame.itertuples` which returns namedtuples of the values
- and which is generally much faster as ``iterrows``.
+ and which is generally much faster than :meth:`~DataFrame.iterrows`.
For instance, a contrived way to transpose the DataFrame would be:
@@ -1615,14 +1616,14 @@ yielding a namedtuple for each row in the DataFrame. The first element
of the tuple will be the row's corresponding index value, while the
remaining values are the row values.
-For instance,
+For instance:
.. ipython:: python
for row in df.itertuples():
print(row)
-This method does not convert the row to a Series object but just
+This method does not convert the row to a Series object; it merely
returns the values inside a namedtuple. Therefore,
:meth:`~DataFrame.itertuples` preserves the data type of the values
and is generally faster than :meth:`~DataFrame.iterrows`.
@@ -1709,7 +1710,7 @@ The ``.dt`` accessor works for period and timedelta dtypes.
.. note::
- ``Series.dt`` will raise a ``TypeError`` if you access with a non-datetimelike values
+ ``Series.dt`` will raise a ``TypeError`` if you access it with non-datetime-like values.
Vectorized string methods
-------------------------
@@ -1763,7 +1764,7 @@ labels (indexes) are the ``Series.sort_index()`` and the ``DataFrame.sort_index(
By Values
~~~~~~~~~
-The :meth:`Series.sort_values` and :meth:`DataFrame.sort_values` are the entry points for **value** sorting (that is the values in a column or row).
+The :meth:`Series.sort_values` and :meth:`DataFrame.sort_values` are the entry points for **value** sorting (i.e. the values in a column or row).
:meth:`DataFrame.sort_values` can accept an optional ``by`` argument for ``axis=0``
which will use an arbitrary vector or a column name of the DataFrame to
determine the sort order:
@@ -1794,7 +1795,7 @@ argument:
searchsorted
~~~~~~~~~~~~
-Series has the :meth:`~Series.searchsorted` method, which works similar to
+Series has the :meth:`~Series.searchsorted` method, which works similarly to
:meth:`numpy.ndarray.searchsorted`.
.. ipython:: python
@@ -1859,14 +1860,14 @@ the axis indexes, since they are immutable) and returns a new object. Note that
**it is seldom necessary to copy objects**. For example, there are only a
handful of ways to alter a DataFrame *in-place*:
- * Inserting, deleting, or modifying a column
- * Assigning to the ``index`` or ``columns`` attributes
+ * Inserting, deleting, or modifying a column.
+ * Assigning to the ``index`` or ``columns`` attributes.
* For homogeneous data, directly modifying the values via the ``values``
- attribute or advanced indexing
+ attribute or advanced indexing.
-To be clear, no pandas methods have the side effect of modifying your data;
-almost all methods return new objects, leaving the original object
-untouched. If data is modified, it is because you did so explicitly.
+To be clear, no pandas method has the side effect of modifying your data;
+almost every method returns a new object, leaving the original object
+untouched. If the data is modified, it is because you did so explicitly.
.. _basics.dtypes:
@@ -1879,7 +1880,8 @@ The main types stored in pandas objects are ``float``, ``int``, ``bool``,
``int64`` and ``int32``. See :ref:`Series with TZ <timeseries.timezone_series>`
for more detail on ``datetime64[ns, tz]`` dtypes.
-A convenient :attr:`~DataFrame.dtypes` attribute for DataFrames returns a Series with the data type of each column.
+A convenient :attr:`~DataFrame.dtypes` attribute for DataFrame returns a Series
+with the data type of each column.
.. ipython:: python
@@ -1893,15 +1895,15 @@ A convenient :attr:`~DataFrame.dtypes` attribute for DataFrames returns a Series
dft
dft.dtypes
-On a ``Series`` use the :attr:`~Series.dtype` attribute.
+On a ``Series`` object, use the :attr:`~Series.dtype` attribute.
.. ipython:: python
dft['A'].dtype
-If a pandas object contains data multiple dtypes *IN A SINGLE COLUMN*, the dtype of the
-column will be chosen to accommodate all of the data types (``object`` is the most
-general).
+If a pandas object contains data with multiple dtypes *in a single column*, the
+dtype of the column will be chosen to accommodate all of the data types
+(``object`` is the most general).
.. ipython:: python
@@ -1938,7 +1940,8 @@ defaults
~~~~~~~~
By default integer types are ``int64`` and float types are ``float64``,
-*REGARDLESS* of platform (32-bit or 64-bit). The following will all result in ``int64`` dtypes.
+*regardless* of platform (32-bit or 64-bit).
+The following will all result in ``int64`` dtypes.
.. ipython:: python
@@ -1946,7 +1949,7 @@ By default integer types are ``int64`` and float types are ``float64``,
pd.DataFrame({'a': [1, 2]}).dtypes
pd.DataFrame({'a': 1 }, index=list(range(2))).dtypes
-Numpy, however will choose *platform-dependent* types when creating arrays.
+Note that NumPy will choose *platform-dependent* types when creating arrays.
The following **WILL** result in ``int32`` on a 32-bit platform.
.. ipython:: python
@@ -1958,7 +1961,7 @@ upcasting
~~~~~~~~~
Types can potentially be *upcasted* when combined with other types, meaning they are promoted
-from the current type (say ``int`` to ``float``)
+from the current type (e.g. ``int`` to ``float``).
.. ipython:: python
@@ -1995,7 +1998,7 @@ then the more *general* one will be used as the result of the operation.
df3.astype('float32').dtypes
-Convert a subset of columns to a specified type using :meth:`~DataFrame.astype`
+Convert a subset of columns to a specified type using :meth:`~DataFrame.astype`.
.. ipython:: python
@@ -2006,7 +2009,7 @@ Convert a subset of columns to a specified type using :meth:`~DataFrame.astype`
.. versionadded:: 0.19.0
-Convert certain columns to a specific dtype by passing a dict to :meth:`~DataFrame.astype`
+Convert certain columns to a specific dtype by passing a dict to :meth:`~DataFrame.astype`.
.. ipython:: python
@@ -2148,7 +2151,7 @@ gotchas
Performing selection operations on ``integer`` type data can easily upcast the data to ``floating``.
The dtype of the input data will be preserved in cases where ``nans`` are not introduced.
-See also :ref:`Support for integer NA <gotchas.intna>`
+See also :ref:`Support for integer NA <gotchas.intna>`.
.. ipython:: python
@@ -2200,17 +2203,17 @@ dtypes:
df['tz_aware_dates'] = pd.date_range('20130101', periods=3, tz='US/Eastern')
df
-And the dtypes
+And the dtypes:
.. ipython:: python
df.dtypes
:meth:`~DataFrame.select_dtypes` has two parameters ``include`` and ``exclude`` that allow you to
-say "give me the columns WITH these dtypes" (``include``) and/or "give the
-columns WITHOUT these dtypes" (``exclude``).
+say "give me the columns *with* these dtypes" (``include``) and/or "give the
+columns *without* these dtypes" (``exclude``).
-For example, to select ``bool`` columns
+For example, to select ``bool`` columns:
.. ipython:: python
@@ -2226,7 +2229,7 @@ You can also pass the name of a dtype in the `numpy dtype hierarchy
:meth:`~pandas.DataFrame.select_dtypes` works with generic dtypes as well.
For example, to select all numeric and boolean columns while excluding unsigned
-integers
+integers:
.. ipython:: python
diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index e5c7637ddb499..c8018c8e66f72 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -93,10 +93,12 @@ constructed from the sorted keys of the dict, if possible.
.. note::
- NaN (not a number) is the standard missing data marker used in pandas
+ NaN (not a number) is the standard missing data marker used in pandas.
-**From scalar value** If ``data`` is a scalar value, an index must be
-provided. The value will be repeated to match the length of **index**
+**From scalar value**
+
+If ``data`` is a scalar value, an index must be
+provided. The value will be repeated to match the length of **index**.
.. ipython:: python
@@ -106,7 +108,7 @@ Series is ndarray-like
~~~~~~~~~~~~~~~~~~~~~~
``Series`` acts very similarly to a ``ndarray``, and is a valid argument to most NumPy functions.
-However, things like slicing also slice the index.
+However, operations such as slicing will also slice the index.
.. ipython :: python
@@ -152,10 +154,9 @@ See also the :ref:`section on attribute access<indexing.attribute_access>`.
Vectorized operations and label alignment with Series
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-When doing data analysis, as with raw NumPy arrays looping through Series
-value-by-value is usually not necessary. Series can also be passed into most
-NumPy methods expecting an ndarray.
-
+When working with raw NumPy arrays, looping through value-by-value is usually
+not necessary. The same is true when working with Series in pandas.
+Series can also be passed into most NumPy methods expecting an ndarray.
.. ipython:: python
@@ -245,8 +246,8 @@ based on common sense rules.
From dict of Series or dicts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The result **index** will be the **union** of the indexes of the various
-Series. If there are any nested dicts, these will be first converted to
+The resulting **index** will be the **union** of the indexes of the various
+Series. If there are any nested dicts, these will first be converted to
Series. If no columns are passed, the columns will be the sorted list of dict
keys.
@@ -323,7 +324,8 @@ From a list of dicts
From a dict of tuples
~~~~~~~~~~~~~~~~~~~~~
-You can automatically create a multi-indexed frame by passing a tuples dictionary
+You can automatically create a multi-indexed frame by passing a tuples
+dictionary.
.. ipython:: python
@@ -345,8 +347,8 @@ column name provided).
**Missing Data**
Much more will be said on this topic in the :ref:`Missing data <missing_data>`
-section. To construct a DataFrame with missing data, use ``np.nan`` for those
-values which are missing. Alternatively, you may pass a ``numpy.MaskedArray``
+section. To construct a DataFrame with missing data, we use ``np.nan`` to
+represent missing values. Alternatively, you may pass a ``numpy.MaskedArray``
as the data argument to the DataFrame constructor, and its masked entries will
be considered missing.
@@ -367,9 +369,9 @@ set to ``'index'`` in order to use the dict keys as row labels.
**DataFrame.from_records**
``DataFrame.from_records`` takes a list of tuples or an ndarray with structured
-dtype. Works analogously to the normal ``DataFrame`` constructor, except that
-index maybe be a specific field of the structured dtype to use as the index.
-For example:
+dtype. It works analogously to the normal ``DataFrame`` constructor, except that
+the resulting DataFrame index may be a specific field of the structured
+dtype. For example:
.. ipython:: python
@@ -467,7 +469,7 @@ derived from existing columns.
(iris.assign(sepal_ratio = iris['SepalWidth'] / iris['SepalLength'])
.head())
-Above was an example of inserting a precomputed value. We can also pass in
+In the example above, we inserted a precomputed value. We can also pass in
a function of one argument to be evaluated on the DataFrame being assigned to.
.. ipython:: python
@@ -480,7 +482,7 @@ DataFrame untouched.
Passing a callable, as opposed to an actual value to be inserted, is
useful when you don't have a reference to the DataFrame at hand. This is
-common when using ``assign`` in chains of operations. For example,
+common when using ``assign`` in a chain of operations. For example,
we can limit the DataFrame to just those observations with a Sepal Length
greater than 5, calculate the ratio, and plot:
@@ -546,7 +548,7 @@ DataFrame:
df.loc['b']
df.iloc[2]
-For a more exhaustive treatment of more sophisticated label-based indexing and
+For a more exhaustive treatment of sophisticated label-based indexing and
slicing, see the :ref:`section on indexing <indexing>`. We will address the
fundamentals of reindexing / conforming to new sets of labels in the
:ref:`section on reindexing <basics.reindexing>`.
@@ -739,7 +741,7 @@ DataFrame column attribute access and IPython completion
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a DataFrame column label is a valid Python variable name, the column can be
-accessed like attributes:
+accessed like an attribute:
.. ipython:: python
@@ -912,7 +914,8 @@ For example, using the earlier example data, we could do:
Squeezing
~~~~~~~~~
-Another way to change the dimensionality of an object is to ``squeeze`` a 1-len object, similar to ``wp['Item1']``
+Another way to change the dimensionality of an object is to ``squeeze`` a 1-len
+object, similar to ``wp['Item1']``.
.. ipython:: python
:okwarning:
@@ -964,7 +967,7 @@ support the multi-dimensional analysis that is one of ``Panel`` s main usecases.
p = tm.makePanel()
p
-Convert to a MultiIndex DataFrame
+Convert to a MultiIndex DataFrame.
.. ipython:: python
:okwarning:
diff --git a/doc/source/overview.rst b/doc/source/overview.rst
index 0354f6e7f06f7..73e7704b43be6 100644
--- a/doc/source/overview.rst
+++ b/doc/source/overview.rst
@@ -10,21 +10,21 @@ Package overview
easy-to-use data structures and data analysis tools for the `Python <https://www.python.org/>`__
programming language.
-:mod:`pandas` consists of the following elements
+:mod:`pandas` consists of the following elements:
* A set of labeled array data structures, the primary of which are
- Series and DataFrame
+ Series and DataFrame.
* Index objects enabling both simple axis indexing and multi-level /
- hierarchical axis indexing
- * An integrated group by engine for aggregating and transforming data sets
+ hierarchical axis indexing.
+ * An integrated group by engine for aggregating and transforming data sets.
* Date range generation (date_range) and custom date offsets enabling the
- implementation of customized frequencies
+ implementation of customized frequencies.
* Input/Output tools: loading tabular data from flat files (CSV, delimited,
Excel 2003), and saving and loading pandas objects from the fast and
efficient PyTables/HDF5 format.
* Memory-efficient "sparse" versions of the standard data structures for storing
- data that is mostly missing or mostly constant (some fixed value)
- * Moving window statistics (rolling mean, rolling standard deviation, etc.)
+ data that is mostly missing or mostly constant (some fixed value).
+ * Moving window statistics (rolling mean, rolling standard deviation, etc.).
Data Structures
---------------
@@ -58,7 +58,7 @@ transformations in downstream functions.
For example, with tabular data (DataFrame) it is more semantically helpful to
think of the **index** (the rows) and the **columns** rather than axis 0 and
-axis 1. And iterating through the columns of the DataFrame thus results in more
+axis 1. Iterating through the columns of the DataFrame thus results in more
readable code:
::
@@ -74,8 +74,7 @@ All pandas data structures are value-mutable (the values they contain can be
altered) but not always size-mutable. The length of a Series cannot be
changed, but, for example, columns can be inserted into a DataFrame. However,
the vast majority of methods produce new objects and leave the input data
-untouched. In general, though, we like to **favor immutability** where
-sensible.
+untouched. In general we like to **favor immutability** where sensible.
Getting Support
---------------
diff --git a/pandas/__init__.py b/pandas/__init__.py
index 8d9b75ccd6c2c..861c8e7d622fc 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -104,25 +104,25 @@
Here are just a few of the things that pandas does well:
- Easy handling of missing data in floating point as well as non-floating
- point data
+ point data.
- Size mutability: columns can be inserted and deleted from DataFrame and
- higher dimensional objects
+ higher dimensional objects.
- Automatic and explicit data alignment: objects can be explicitly aligned
to a set of labels, or the user can simply ignore the labels and let
`Series`, `DataFrame`, etc. automatically align the data for you in
- computations
+ computations.
- Powerful, flexible group by functionality to perform split-apply-combine
- operations on data sets, for both aggregating and transforming data
+ operations on data sets, for both aggregating and transforming data.
- Make it easy to convert ragged, differently-indexed data in other Python
- and NumPy data structures into DataFrame objects
+ and NumPy data structures into DataFrame objects.
- Intelligent label-based slicing, fancy indexing, and subsetting of large
- data sets
- - Intuitive merging and joining data sets
- - Flexible reshaping and pivoting of data sets
- - Hierarchical labeling of axes (possible to have multiple labels per tick)
+ data sets.
+ - Intuitive merging and joining data sets.
+ - Flexible reshaping and pivoting of data sets.
+ - Hierarchical labeling of axes (possible to have multiple labels per tick).
- Robust IO tools for loading data from flat files (CSV and delimited),
Excel files, databases, and saving/loading data from the ultrafast HDF5
- format
+ format.
- Time series-specific functionality: date range generation and frequency
conversion, moving window statistics, moving window linear regressions,
date shifting and lagging, etc.
| I read through the introductory docs and made the following changes:
- Added missing periods, which were sometimes absent in short sentences.
- Elsewhere in the documentation, code examples are typically introduced with a colon, so colons have been added to sentences that introduce code but ended in neither a period nor a colon.
- Updated a reference to the Python 2 docs to Python 3.
- Restructured a few sentences slightly.
- Found an example of code that did not run due to an extra `}`, which I removed.
| https://api.github.com/repos/pandas-dev/pandas/pulls/18948 | 2017-12-26T16:14:47Z | 2017-12-27T19:30:01Z | 2017-12-27T19:30:01Z | 2017-12-28T00:56:29Z |
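One of the doc passages clarified in this diff notes that "NaNs do not compare as equals", which is why the boolean frame ``df+df == df*2`` contains False values. The behavior comes from IEEE floating point itself and can be shown with plain Python floats, no pandas required (``math.isnan`` is the stdlib check):

```python
import math

nan = float('nan')

# NaN never compares equal, not even to itself; this is the IEEE 754
# rule behind the False entries in the ``df+df == df*2`` example.
print(nan == nan)        # False
print(nan != nan)        # True

# Test for NaN explicitly instead of using ``==`` (pandas' ``equals``
# method treats NaNs in matching locations as equal for this reason).
print(math.isnan(nan))   # True
```

The same rule is why element-wise ``==`` between two frames with NaNs in the same positions still yields False there.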
CLN: ASV reshape | diff --git a/asv_bench/benchmarks/reshape.py b/asv_bench/benchmarks/reshape.py
index 951f718257170..bd3b580d9d130 100644
--- a/asv_bench/benchmarks/reshape.py
+++ b/asv_bench/benchmarks/reshape.py
@@ -1,13 +1,16 @@
-from .pandas_vb_common import *
-from pandas import melt, wide_to_long
+from itertools import product
+import numpy as np
+from pandas import DataFrame, MultiIndex, date_range, melt, wide_to_long
+
+from .pandas_vb_common import setup # noqa
+
+
+class Melt(object):
-class melt_dataframe(object):
goal_time = 0.2
def setup(self):
- self.index = MultiIndex.from_arrays([np.arange(100).repeat(100), np.roll(np.tile(np.arange(100), 100), 25)])
- self.df = DataFrame(np.random.randn(10000, 4), index=self.index)
self.df = DataFrame(np.random.randn(10000, 3), columns=['A', 'B', 'C'])
self.df['id1'] = np.random.randint(0, 10, 10000)
self.df['id2'] = np.random.randint(100, 1000, 10000)
@@ -16,50 +19,42 @@ def time_melt_dataframe(self):
melt(self.df, id_vars=['id1', 'id2'])
-class reshape_pivot_time_series(object):
+class Pivot(object):
+
goal_time = 0.2
def setup(self):
- self.index = MultiIndex.from_arrays([np.arange(100).repeat(100), np.roll(np.tile(np.arange(100), 100), 25)])
- self.df = DataFrame(np.random.randn(10000, 4), index=self.index)
- self.index = date_range('1/1/2000', periods=10000, freq='h')
- self.df = DataFrame(randn(10000, 50), index=self.index, columns=range(50))
- self.pdf = self.unpivot(self.df)
- self.f = (lambda : self.pdf.pivot('date', 'variable', 'value'))
+ N = 10000
+ index = date_range('1/1/2000', periods=N, freq='h')
+ data = {'value': np.random.randn(N * 50),
+ 'variable': np.arange(50).repeat(N),
+ 'date': np.tile(index.values, 50)}
+ self.df = DataFrame(data)
def time_reshape_pivot_time_series(self):
- self.f()
+ self.df.pivot('date', 'variable', 'value')
- def unpivot(self, frame):
- (N, K) = frame.shape
- self.data = {'value': frame.values.ravel('F'), 'variable': np.asarray(frame.columns).repeat(N), 'date': np.tile(np.asarray(frame.index), K), }
- return DataFrame(self.data, columns=['date', 'variable', 'value'])
+class SimpleReshape(object):
-class reshape_stack_simple(object):
goal_time = 0.2
def setup(self):
- self.index = MultiIndex.from_arrays([np.arange(100).repeat(100), np.roll(np.tile(np.arange(100), 100), 25)])
- self.df = DataFrame(np.random.randn(10000, 4), index=self.index)
+ arrays = [np.arange(100).repeat(100),
+ np.roll(np.tile(np.arange(100), 100), 25)]
+ index = MultiIndex.from_arrays(arrays)
+ self.df = DataFrame(np.random.randn(10000, 4), index=index)
self.udf = self.df.unstack(1)
- def time_reshape_stack_simple(self):
+ def time_stack(self):
self.udf.stack()
-
-class reshape_unstack_simple(object):
- goal_time = 0.2
-
- def setup(self):
- self.index = MultiIndex.from_arrays([np.arange(100).repeat(100), np.roll(np.tile(np.arange(100), 100), 25)])
- self.df = DataFrame(np.random.randn(10000, 4), index=self.index)
-
- def time_reshape_unstack_simple(self):
+ def time_unstack(self):
self.df.unstack(1)
-class reshape_unstack_large_single_dtype(object):
+class Unstack(object):
+
goal_time = 0.2
def setup(self):
@@ -67,59 +62,59 @@ def setup(self):
n = 1000
levels = np.arange(m)
- index = pd.MultiIndex.from_product([levels]*2)
+ index = MultiIndex.from_product([levels] * 2)
columns = np.arange(n)
- values = np.arange(m*m*n).reshape(m*m, n)
- self.df = pd.DataFrame(values, index, columns)
+ values = np.arange(m * m * n).reshape(m * m, n)
+ self.df = DataFrame(values, index, columns)
self.df2 = self.df.iloc[:-1]
- def time_unstack_full_product(self):
+ def time_full_product(self):
self.df.unstack()
- def time_unstack_with_mask(self):
+ def time_without_last_row(self):
self.df2.unstack()
-class unstack_sparse_keyspace(object):
+class SparseIndex(object):
+
goal_time = 0.2
def setup(self):
- self.index = MultiIndex.from_arrays([np.arange(100).repeat(100), np.roll(np.tile(np.arange(100), 100), 25)])
- self.df = DataFrame(np.random.randn(10000, 4), index=self.index)
- self.NUM_ROWS = 1000
- for iter in range(10):
- self.df = DataFrame({'A': np.random.randint(50, size=self.NUM_ROWS), 'B': np.random.randint(50, size=self.NUM_ROWS), 'C': np.random.randint((-10), 10, size=self.NUM_ROWS), 'D': np.random.randint((-10), 10, size=self.NUM_ROWS), 'E': np.random.randint(10, size=self.NUM_ROWS), 'F': np.random.randn(self.NUM_ROWS), })
- self.idf = self.df.set_index(['A', 'B', 'C', 'D', 'E'])
- if (len(self.idf.index.unique()) == self.NUM_ROWS):
- break
+ NUM_ROWS = 1000
+ self.df = DataFrame({'A': np.random.randint(50, size=NUM_ROWS),
+ 'B': np.random.randint(50, size=NUM_ROWS),
+ 'C': np.random.randint(-10, 10, size=NUM_ROWS),
+ 'D': np.random.randint(-10, 10, size=NUM_ROWS),
+ 'E': np.random.randint(10, size=NUM_ROWS),
+ 'F': np.random.randn(NUM_ROWS)})
+ self.df = self.df.set_index(['A', 'B', 'C', 'D', 'E'])
+
+ def time_unstack(self):
+ self.df.unstack()
- def time_unstack_sparse_keyspace(self):
- self.idf.unstack()
+class WideToLong(object):
-class wide_to_long_big(object):
goal_time = 0.2
def setup(self):
- vars = 'ABCD'
nyrs = 20
nidvars = 20
N = 5000
- yrvars = []
- for var in vars:
- for yr in range(1, nyrs + 1):
- yrvars.append(var + str(yr))
+ self.letters = list('ABCD')
+ yrvars = [l + str(num)
+ for l, num in product(self.letters, range(1, nyrs + 1))]
- self.df = pd.DataFrame(np.random.randn(N, nidvars + len(yrvars)),
- columns=list(range(nidvars)) + yrvars)
- self.vars = vars
+ self.df = DataFrame(np.random.randn(N, nidvars + len(yrvars)),
+ columns=list(range(nidvars)) + yrvars)
+ self.df['id'] = self.df.index
def time_wide_to_long_big(self):
- self.df['id'] = self.df.index
- wide_to_long(self.df, list(self.vars), i='id', j='year')
+ wide_to_long(self.df, self.letters, i='id', j='year')
class PivotTable(object):
+
goal_time = 0.2
def setup(self):
| Flake8'd the benchmarks and simplified the setups where possible.
```
$ asv dev -b ^reshape
· Discovering benchmarks
· Running 9 total benchmarks (1 commits * 1 environments * 9 benchmarks)
[ 0.00%] ·· Building for existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 0.00%] ·· Benchmarking existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 11.11%] ··· Running reshape.Melt.time_melt_dataframe 7.27ms
[ 22.22%] ··· Running reshape.Pivot.time_reshape_pivot_time_series 441ms
[ 33.33%] ··· Running reshape.PivotTable.time_pivot_table 46.0ms
[ 44.44%] ··· Running reshape.SimpleReshape.time_stack 6.73ms
[ 55.56%] ··· Running reshape.SimpleReshape.time_unstack 6.05ms
[ 66.67%] ··· Running reshape.SparseIndex.time_unstack 2.76ms
[ 77.78%] ··· Running reshape.Unstack.time_full_product 254ms
[ 88.89%] ··· Running reshape.Unstack.time_without_last_row 477ms
[100.00%] ··· Running reshape.WideToLong.time_wide_to_long_big 378ms
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/18944 | 2017-12-26T07:28:34Z | 2017-12-26T21:38:26Z | 2017-12-26T21:38:26Z | 2017-12-31T04:33:36Z |
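The refactored benchmarks above all follow the same asv convention: a plain class with a ``goal_time`` attribute, a ``setup`` method that builds the data, and ``time_*`` methods that the harness times. A minimal stdlib-only sketch of how such a class can be driven is below; the ``run_benchmarks`` helper is a hypothetical miniature stand-in for asv's harness, not part of asv's API:

```python
import timeit

class SimpleReshape(object):
    # asv-style benchmark class: ``setup`` builds state once,
    # and each ``time_*`` method is the code being timed.
    goal_time = 0.2

    def setup(self):
        self.data = list(range(10000))

    def time_reverse(self):
        self.data[::-1]

def run_benchmarks(cls, number=100):
    # Hypothetical, simplified driver: call setup() once, then time
    # every time_* method. asv's real harness adds repeats,
    # calibration against goal_time, and per-environment runs.
    bench = cls()
    bench.setup()
    return {name: timeit.timeit(getattr(bench, name), number=number)
            for name in dir(bench) if name.startswith('time_')}

print(sorted(run_benchmarks(SimpleReshape)))   # ['time_reverse']
```

This also shows why the PR moves data construction out of the timed methods and into ``setup``: anything built inside a ``time_*`` method would be counted in the measurement.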
DOC: typo in documentation | diff --git a/doc/source/internals.rst b/doc/source/internals.rst
index 3d96b93de4cc9..a321b4202296f 100644
--- a/doc/source/internals.rst
+++ b/doc/source/internals.rst
@@ -217,7 +217,7 @@ Below is an example to define 2 original properties, "internal_cache" as a tempo
.. code-block:: python
- >>> df = SubclassedDataFrame2({'A', [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})
+ >>> df = SubclassedDataFrame2({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})
>>> df
A B C
0 1 4 7
| https://api.github.com/repos/pandas-dev/pandas/pulls/18942 | 2017-12-25T23:50:39Z | 2017-12-26T08:09:36Z | 2017-12-26T08:09:36Z | 2017-12-26T08:09:41Z | |
DOC: Fixed minor spelling errors | diff --git a/doc/source/overview.rst b/doc/source/overview.rst
index 00a71603e1261..0354f6e7f06f7 100644
--- a/doc/source/overview.rst
+++ b/doc/source/overview.rst
@@ -102,7 +102,7 @@ project, and makes it possible to `donate <https://pandas.pydata.org/donate.html
Project Governance
------------------
-The governance process that pandas project has used informally since its inception in 2008 is formalized in `Project Governance documents <https://github.com/pandas-dev/pandas-governance>`__ .
+The governance process that pandas project has used informally since its inception in 2008 is formalized in `Project Governance documents <https://github.com/pandas-dev/pandas-governance>`__.
The documents clarify how decisions are made and how the various elements of our community interact, including the relationship between open source collaborative development and work that may be funded by for-profit or non-profit entities.
Wes McKinney is the Benevolent Dictator for Life (BDFL).
@@ -116,7 +116,7 @@ The list of the Core Team members and more detailed information can be found on
Institutional Partners
----------------------
-The information about current institutional partners can be found on `pandas website page <https://pandas.pydata.org/about.html>`__
+The information about current institutional partners can be found on `pandas website page <https://pandas.pydata.org/about.html>`__.
License
-------
diff --git a/doc/source/tutorials.rst b/doc/source/tutorials.rst
index 1c34c16ea965a..0b8a2cb89b45e 100644
--- a/doc/source/tutorials.rst
+++ b/doc/source/tutorials.rst
@@ -161,6 +161,7 @@ Modern Pandas
- `Performance <http://tomaugspurger.github.io/modern-4-performance.html>`_
- `Tidy Data <http://tomaugspurger.github.io/modern-5-tidy.html>`_
- `Visualization <http://tomaugspurger.github.io/modern-6-visualization.html>`_
+- `Timeseries <http://tomaugspurger.github.io/modern-7-timeseries.html>`_
Excel charts with pandas, vincent and xlsxwriter
------------------------------------------------
| Two small changes:
- Two minor punctuation errors corrected.
- Added part 7 of a tutorial series to the docs (parts 1-6 were included already, but not part 7).
| https://api.github.com/repos/pandas-dev/pandas/pulls/18941 | 2017-12-25T22:03:29Z | 2017-12-26T08:11:58Z | 2017-12-26T08:11:58Z | 2017-12-26T08:39:12Z |
ENH: Let Resampler objects have a pipe method | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 64f972e52d190..68721b76eed7e 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -2274,6 +2274,7 @@ Function application
Resampler.apply
Resampler.aggregate
Resampler.transform
+ Resampler.pipe
Upsampling
~~~~~~~~~~
diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 3f300deddebeb..735742964f3ee 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -142,6 +142,8 @@ Other Enhancements
- ``Categorical.rename_categories``, ``CategoricalIndex.rename_categories`` and :attr:`Series.cat.rename_categories`
can now take a callable as their argument (:issue:`18862`)
- :class:`Interval` and :class:`IntervalIndex` have gained a ``length`` attribute (:issue:`18789`)
+- ``Resampler`` objects now have a functioning :attr:`~pandas.core.resample.Resampler.pipe` method.
+ Previously, calls to ``pipe`` were diverted to the ``mean`` method (:issue:`17905`).
.. _whatsnew_0230.api_breaking:
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index ced120fbdbe29..47b80c00da4d4 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -191,6 +191,60 @@
dtype: int64
""")
+_pipe_template = """\
+Apply a function ``func`` with arguments to this %(klass)s object and return
+the function's result.
+
+%(versionadded)s
+
+Use ``.pipe`` when you want to improve readability by chaining together
+functions that expect Series, DataFrames, GroupBy or Resampler objects.
+Instead of writing
+
+>>> h(g(f(df.groupby('group')), arg1=a), arg2=b, arg3=c)
+
+You can write
+
+>>> (df.groupby('group')
+... .pipe(f)
+... .pipe(g, arg1=a)
+... .pipe(h, arg2=b, arg3=c))
+
+which is much more readable.
+
+Parameters
+----------
+func : callable or tuple of (callable, string)
+ Function to apply to this %(klass)s object or, alternatively,
+ a ``(callable, data_keyword)`` tuple where ``data_keyword`` is a
+ string indicating the keyword of ``callable`` that expects the
+ %(klass)s object.
+args : iterable, optional
+ positional arguments passed into ``func``.
+kwargs : dict, optional
+ a dictionary of keyword arguments passed into ``func``.
+
+Returns
+-------
+object : the return type of ``func``.
+
+Notes
+-----
+See more `here
+<http://pandas.pydata.org/pandas-docs/stable/groupby.html#pipe>`_
+
+Examples
+--------
+%(examples)s
+
+See Also
+--------
+pandas.Series.pipe : Apply a function with arguments to a series
+pandas.DataFrame.pipe: Apply a function with arguments to a dataframe
+apply : Apply function to each group instead of to the
+ full %(klass)s object.
+"""
+
_transform_template = """
Call function producing a like-indexed %(klass)s on each group and
return a %(klass)s having the same indexes as the original object
@@ -676,6 +730,29 @@ def __getattr__(self, attr):
raise AttributeError("%r object has no attribute %r" %
(type(self).__name__, attr))
+ @Substitution(klass='GroupBy',
+ versionadded='.. versionadded:: 0.21.0',
+ examples="""\
+>>> df = pd.DataFrame({'A': 'a b a b'.split(), 'B': [1, 2, 3, 4]})
+>>> df
+ A B
+0 a 1
+1 b 2
+2 a 3
+3 b 4
+
+To get the difference between each group's maximum and minimum value in one
+pass, you can do
+
+>>> df.groupby('A').pipe(lambda x: x.max() - x.min())
+ B
+A
+a 2
+b 2""")
+ @Appender(_pipe_template)
+ def pipe(self, func, *args, **kwargs):
+ return _pipe(self, func, *args, **kwargs)
+
plot = property(GroupByPlot)
def _make_wrapper(self, name):
@@ -1779,54 +1856,6 @@ def tail(self, n=5):
mask = self._cumcount_array(ascending=False) < n
return self._selected_obj[mask]
- def pipe(self, func, *args, **kwargs):
- """ Apply a function with arguments to this GroupBy object,
-
- .. versionadded:: 0.21.0
-
- Parameters
- ----------
- func : callable or tuple of (callable, string)
- Function to apply to this GroupBy object or, alternatively, a
- ``(callable, data_keyword)`` tuple where ``data_keyword`` is a
- string indicating the keyword of ``callable`` that expects the
- GroupBy object.
- args : iterable, optional
- positional arguments passed into ``func``.
- kwargs : dict, optional
- a dictionary of keyword arguments passed into ``func``.
-
- Returns
- -------
- object : the return type of ``func``.
-
- Notes
- -----
- Use ``.pipe`` when chaining together functions that expect
- Series, DataFrames or GroupBy objects. Instead of writing
-
- >>> f(g(h(df.groupby('group')), arg1=a), arg2=b, arg3=c)
-
- You can write
-
- >>> (df
- ... .groupby('group')
- ... .pipe(f, arg1)
- ... .pipe(g, arg2)
- ... .pipe(h, arg3))
-
- See more `here
- <http://pandas.pydata.org/pandas-docs/stable/groupby.html#pipe>`_
-
- See Also
- --------
- pandas.Series.pipe : Apply a function with arguments to a series
- pandas.DataFrame.pipe: Apply a function with arguments to a dataframe
- apply : Apply function to each group instead of to the
- full GroupBy object.
- """
- return _pipe(self, func, *args, **kwargs)
-
GroupBy._add_numeric_operations()
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 9f5439b68558b..c2bf7cff746eb 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -8,7 +8,8 @@
from pandas.core.base import AbstractMethodError, GroupByMixin
from pandas.core.groupby import (BinGrouper, Grouper, _GroupBy, GroupBy,
- SeriesGroupBy, groupby, PanelGroupBy)
+ SeriesGroupBy, groupby, PanelGroupBy,
+ _pipe_template)
from pandas.tseries.frequencies import to_offset, is_subperiod, is_superperiod
from pandas.core.indexes.datetimes import DatetimeIndex, date_range
@@ -26,7 +27,7 @@
from pandas._libs.lib import Timestamp
from pandas._libs.tslibs.period import IncompatibleFrequency
-from pandas.util._decorators import Appender
+from pandas.util._decorators import Appender, Substitution
from pandas.core.generic import _shared_docs
_shared_docs_kwargs = dict()
@@ -257,6 +258,29 @@ def _assure_grouper(self):
""" make sure that we are creating our binner & grouper """
self._set_binner()
+ @Substitution(klass='Resampler',
+ versionadded='.. versionadded:: 0.23.0',
+ examples="""
+>>> df = pd.DataFrame({'A': [1, 2, 3, 4]},
+... index=pd.date_range('2012-08-02', periods=4))
+>>> df
+ A
+2012-08-02 1
+2012-08-03 2
+2012-08-04 3
+2012-08-05 4
+
+To get the difference between each 2-day period's maximum and minimum value in
+one pass, you can do
+
+>>> df.resample('2D').pipe(lambda x: x.max() - x.min())
+ A
+2012-08-02 1
+2012-08-04 1""")
+ @Appender(_pipe_template)
+ def pipe(self, func, *args, **kwargs):
+ return super(Resampler, self).pipe(func, *args, **kwargs)
+
def plot(self, *args, **kwargs):
# for compat with prior versions, we want to
# have the warnings shown here and just have this work
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index f00fa07d868a1..38f4b8be469a5 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -235,6 +235,21 @@ def test_groupby_resample_on_api(self):
result = df.groupby('key').resample('D', on='dates').mean()
assert_frame_equal(result, expected)
+ def test_pipe(self):
+ # GH17905
+
+ # series
+ r = self.series.resample('H')
+ expected = r.max() - r.mean()
+ result = r.pipe(lambda x: x.max() - x.mean())
+ tm.assert_series_equal(result, expected)
+
+ # dataframe
+ r = self.frame.resample('H')
+ expected = r.max() - r.mean()
+ result = r.pipe(lambda x: x.max() - x.mean())
+ tm.assert_frame_equal(result, expected)
+
@td.skip_if_no_mpl
def test_plot_api(self):
# .resample(....).plot(...)
| - [x] closes #17905
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Currently, calls to ``df.resample(....).pipe(...)`` are converted to ``df.resample(....).mean().pipe(...)`` and a warning is emitted (see #17905).
This PR fixes that by moving the ``pipe`` method from the ``GroupBy`` class to the ``_GroupBy`` class. As ``_GroupBy`` is a common parent class of both ``GroupBy`` and ``Resampler``, the ``pipe`` method is now available for ``Resampler`` too.
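For illustration, here is a minimal plain-Python sketch of the dispatch the shared pipe helper performs (names are illustrative, not the actual pandas internals): a plain callable receives the piped object positionally, while a ``(callable, keyword)`` tuple receives it under the named keyword.

```python
def pipe(obj, func, *args, **kwargs):
    # Sketch of the tuple dispatch described in the pipe docstring.
    if isinstance(func, tuple):
        func, target = func
        if target in kwargs:
            msg = '%s is both the pipe target and a keyword argument'
            raise ValueError(msg % target)
        kwargs[target] = obj
        return func(*args, **kwargs)
    return func(obj, *args, **kwargs)

print(pipe(10, lambda x: x + 1))                          # 11
print(pipe(11, (lambda k, data: data * k, 'data'), k=3))  # 33
```

Because the helper lives on ``_GroupBy``, both ``GroupBy`` and ``Resampler`` objects inherit the same behavior.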
See also #17871. | https://api.github.com/repos/pandas-dev/pandas/pulls/18940 | 2017-12-25T14:21:25Z | 2017-12-26T21:45:49Z | 2017-12-26T21:45:49Z | 2017-12-26T22:11:05Z |
CLN: ASV reindex | diff --git a/asv_bench/benchmarks/reindex.py b/asv_bench/benchmarks/reindex.py
index 537d275e7c727..69a1a604b1ccc 100644
--- a/asv_bench/benchmarks/reindex.py
+++ b/asv_bench/benchmarks/reindex.py
@@ -1,89 +1,77 @@
-from .pandas_vb_common import *
-from random import shuffle
+import numpy as np
+import pandas.util.testing as tm
+from pandas import (DataFrame, Series, DatetimeIndex, MultiIndex, Index,
+ date_range)
+from .pandas_vb_common import setup, lib # noqa
-class Reindexing(object):
+class Reindex(object):
+
goal_time = 0.2
def setup(self):
- self.rng = DatetimeIndex(start='1/1/1970', periods=10000, freq='1min')
- self.df = DataFrame(np.random.rand(10000, 10), index=self.rng,
+ rng = DatetimeIndex(start='1/1/1970', periods=10000, freq='1min')
+ self.df = DataFrame(np.random.rand(10000, 10), index=rng,
columns=range(10))
self.df['foo'] = 'bar'
- self.rng2 = Index(self.rng[::2])
-
+ self.rng_subset = Index(rng[::2])
self.df2 = DataFrame(index=range(10000),
data=np.random.rand(10000, 30), columns=range(30))
-
- # multi-index
N = 5000
K = 200
level1 = tm.makeStringIndex(N).values.repeat(K)
level2 = np.tile(tm.makeStringIndex(K).values, N)
index = MultiIndex.from_arrays([level1, level2])
- self.s1 = Series(np.random.randn((N * K)), index=index)
- self.s2 = self.s1[::2]
+ self.s = Series(np.random.randn(N * K), index=index)
+ self.s_subset = self.s[::2]
def time_reindex_dates(self):
- self.df.reindex(self.rng2)
+ self.df.reindex(self.rng_subset)
def time_reindex_columns(self):
self.df2.reindex(columns=self.df.columns[1:5])
def time_reindex_multiindex(self):
- self.s1.reindex(self.s2.index)
+ self.s.reindex(self.s_subset.index)
-#----------------------------------------------------------------------
-# Pad / backfill
+class ReindexMethod(object):
-
-class FillMethod(object):
goal_time = 0.2
+ params = ['pad', 'backfill']
+ param_names = ['method']
- def setup(self):
- self.rng = date_range('1/1/2000', periods=100000, freq='1min')
- self.ts = Series(np.random.randn(len(self.rng)), index=self.rng)
- self.ts2 = self.ts[::2]
- self.ts3 = self.ts2.reindex(self.ts.index)
- self.ts4 = self.ts3.astype('float32')
-
- def pad(self, source_series, target_index):
- try:
- source_series.reindex(target_index, method='pad')
- except:
- source_series.reindex(target_index, fillMethod='pad')
-
- def backfill(self, source_series, target_index):
- try:
- source_series.reindex(target_index, method='backfill')
- except:
- source_series.reindex(target_index, fillMethod='backfill')
-
- def time_backfill_dates(self):
- self.backfill(self.ts2, self.ts.index)
+ def setup(self, method):
+ N = 100000
+ self.idx = date_range('1/1/2000', periods=N, freq='1min')
+ self.ts = Series(np.random.randn(N), index=self.idx)[::2]
- def time_pad_daterange(self):
- self.pad(self.ts2, self.ts.index)
+ def time_reindex_method(self, method):
+ self.ts.reindex(self.idx, method=method)
- def time_backfill(self):
- self.ts3.fillna(method='backfill')
- def time_backfill_float32(self):
- self.ts4.fillna(method='backfill')
+class Fillna(object):
- def time_pad(self):
- self.ts3.fillna(method='pad')
+ goal_time = 0.2
+ params = ['pad', 'backfill']
+ param_names = ['method']
- def time_pad_float32(self):
- self.ts4.fillna(method='pad')
+ def setup(self, method):
+ N = 100000
+ self.idx = date_range('1/1/2000', periods=N, freq='1min')
+ ts = Series(np.random.randn(N), index=self.idx)[::2]
+ self.ts_reindexed = ts.reindex(self.idx)
+ self.ts_float32 = self.ts_reindexed.astype('float32')
+ def time_reindexed(self, method):
+ self.ts_reindexed.fillna(method=method)
-#----------------------------------------------------------------------
-# align on level
+ def time_float_32(self, method):
+ self.ts_float32.fillna(method=method)
class LevelAlign(object):
+
goal_time = 0.2
def setup(self):
@@ -92,7 +80,6 @@ def setup(self):
labels=[np.arange(10).repeat(10000),
np.tile(np.arange(100).repeat(100), 10),
np.tile(np.tile(np.arange(100), 100), 10)])
- random.shuffle(self.index.values)
self.df = DataFrame(np.random.randn(len(self.index), 4),
index=self.index)
self.df_level = DataFrame(np.random.randn(100, 4),
@@ -102,103 +89,85 @@ def time_align_level(self):
self.df.align(self.df_level, level=1, copy=False)
def time_reindex_level(self):
- self.df_level.reindex(self.df.index, level=1)
+ self.df_level.reindex(self.index, level=1)
-#----------------------------------------------------------------------
-# drop_duplicates
+class DropDuplicates(object):
-
-class Duplicates(object):
goal_time = 0.2
-
- def setup(self):
- self.N = 10000
- self.K = 10
- self.key1 = tm.makeStringIndex(self.N).values.repeat(self.K)
- self.key2 = tm.makeStringIndex(self.N).values.repeat(self.K)
- self.df = DataFrame({'key1': self.key1, 'key2': self.key2,
- 'value': np.random.randn((self.N * self.K)),})
- self.col_array_list = list(self.df.values.T)
-
- self.df2 = self.df.copy()
- self.df2.ix[:10000, :] = np.nan
+ params = [True, False]
+ param_names = ['inplace']
+
+ def setup(self, inplace):
+ N = 10000
+ K = 10
+ key1 = tm.makeStringIndex(N).values.repeat(K)
+ key2 = tm.makeStringIndex(N).values.repeat(K)
+ self.df = DataFrame({'key1': key1, 'key2': key2,
+ 'value': np.random.randn(N * K)})
+ self.df_nan = self.df.copy()
+ self.df_nan.iloc[:10000, :] = np.nan
self.s = Series(np.random.randint(0, 1000, size=10000))
- self.s2 = Series(np.tile(tm.makeStringIndex(1000).values, 10))
-
- np.random.seed(1234)
- self.N = 1000000
- self.K = 10000
- self.key1 = np.random.randint(0, self.K, size=self.N)
- self.df_int = DataFrame({'key1': self.key1})
- self.df_bool = DataFrame({i: np.random.randint(0, 2, size=self.K,
- dtype=bool)
- for i in range(10)})
+ self.s_str = Series(np.tile(tm.makeStringIndex(1000).values, 10))
- def time_frame_drop_dups(self):
- self.df.drop_duplicates(['key1', 'key2'])
+ N = 1000000
+ K = 10000
+ key1 = np.random.randint(0, K, size=N)
+ self.df_int = DataFrame({'key1': key1})
+ self.df_bool = DataFrame(np.random.randint(0, 2, size=(K, 10),
+ dtype=bool))
- def time_frame_drop_dups_inplace(self):
- self.df.drop_duplicates(['key1', 'key2'], inplace=True)
+ def time_frame_drop_dups(self, inplace):
+ self.df.drop_duplicates(['key1', 'key2'], inplace=inplace)
- def time_frame_drop_dups_na(self):
- self.df2.drop_duplicates(['key1', 'key2'])
+ def time_frame_drop_dups_na(self, inplace):
+ self.df_nan.drop_duplicates(['key1', 'key2'], inplace=inplace)
- def time_frame_drop_dups_na_inplace(self):
- self.df2.drop_duplicates(['key1', 'key2'], inplace=True)
+ def time_series_drop_dups_int(self, inplace):
+ self.s.drop_duplicates(inplace=inplace)
- def time_series_drop_dups_int(self):
- self.s.drop_duplicates()
+ def time_series_drop_dups_string(self, inplace):
+ self.s_str.drop_duplicates(inplace=inplace)
- def time_series_drop_dups_string(self):
- self.s2.drop_duplicates()
+ def time_frame_drop_dups_int(self, inplace):
+ self.df_int.drop_duplicates(inplace=inplace)
- def time_frame_drop_dups_int(self):
- self.df_int.drop_duplicates()
-
- def time_frame_drop_dups_bool(self):
- self.df_bool.drop_duplicates()
-
-#----------------------------------------------------------------------
-# blog "pandas escaped the zoo"
+ def time_frame_drop_dups_bool(self, inplace):
+ self.df_bool.drop_duplicates(inplace=inplace)
class Align(object):
+ # blog "pandas escaped the zoo"
goal_time = 0.2
def setup(self):
n = 50000
indices = tm.makeStringIndex(n)
subsample_size = 40000
-
- def sample(values, k):
- sampler = np.arange(len(values))
- shuffle(sampler)
- return values.take(sampler[:k])
-
- self.x = Series(np.random.randn(50000), indices)
+ self.x = Series(np.random.randn(n), indices)
self.y = Series(np.random.randn(subsample_size),
- index=sample(indices, subsample_size))
+ index=np.random.choice(indices, subsample_size,
+ replace=False))
def time_align_series_irregular_string(self):
- (self.x + self.y)
+ self.x + self.y
class LibFastZip(object):
+
goal_time = 0.2
def setup(self):
- self.N = 10000
- self.K = 10
- self.key1 = tm.makeStringIndex(self.N).values.repeat(self.K)
- self.key2 = tm.makeStringIndex(self.N).values.repeat(self.K)
- self.df = DataFrame({'key1': self.key1, 'key2': self.key2, 'value': np.random.randn((self.N * self.K)), })
- self.col_array_list = list(self.df.values.T)
-
- self.df2 = self.df.copy()
- self.df2.ix[:10000, :] = np.nan
- self.col_array_list2 = list(self.df2.values.T)
+ N = 10000
+ K = 10
+ key1 = tm.makeStringIndex(N).values.repeat(K)
+ key2 = tm.makeStringIndex(N).values.repeat(K)
+ col_array = np.vstack([key1, key2, np.random.randn(N * K)])
+ col_array2 = col_array.copy()
+ col_array2[:, :10000] = np.nan
+ self.col_array_list = list(col_array)
+ self.col_array_list2 = list(col_array2)
def time_lib_fast_zip(self):
lib.fast_zip(self.col_array_list)
| Flake8'd, utilized `param`s and simplified setup where possible.
```
$ asv dev -b ^reindex
· Discovering benchmarks
· Running 17 total benchmarks (1 commits * 1 environments * 17 benchmarks)
[ 0.00%] ·· Building for existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 0.00%] ·· Benchmarking existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 5.88%] ··· Running ...ex.Align.time_align_series_irregular_string 604ms
[ 11.76%] ··· Running reindex.DropDuplicates.time_frame_drop_dups ok
[ 11.76%] ····
========= ========
inplace
--------- --------
True 26.9ms
False 24.8ms
========= ========
[ 17.65%] ··· Running ...ex.DropDuplicates.time_frame_drop_dups_bool ok
[ 17.65%] ····
========= ========
inplace
--------- --------
True 6.48ms
False 7.86ms
========= ========
[ 23.53%] ··· Running ...dex.DropDuplicates.time_frame_drop_dups_int ok
[ 23.53%] ····
========= ========
inplace
--------- --------
True 73.2ms
False 65.5ms
========= ========
[ 29.41%] ··· Running reindex.DropDuplicates.time_frame_drop_dups_na ok
[ 29.41%] ····
========= ========
inplace
--------- --------
True 30.3ms
False 29.5ms
========= ========
[ 35.29%] ··· Running ...ex.DropDuplicates.time_series_drop_dups_int ok
[ 35.29%] ····
========= ========
inplace
--------- --------
True 1.41ms
False 1.35ms
========= ========
[ 41.18%] ··· Running ...DropDuplicates.time_series_drop_dups_string ok
[ 41.18%] ····
========= ========
inplace
--------- --------
True 1.81ms
False 1.78ms
========= ========
[ 47.06%] ··· Running reindex.Fillna.time_float_32 ok
[ 47.06%] ····
========== =======
method
---------- -------
pad 816μs
backfill 911μs
========== =======
[ 52.94%] ··· Running reindex.Fillna.time_reindexed ok
[ 52.94%] ····
========== ========
method
---------- --------
pad 1.68ms
backfill 1.45ms
========== ========
[ 58.82%] ··· Running reindex.LevelAlign.time_align_level 29.5ms
[ 64.71%] ··· Running reindex.LevelAlign.time_reindex_level 31.3ms
[ 70.59%] ··· Running reindex.LibFastZip.time_lib_fast_zip 30.2ms
[ 76.47%] ··· Running reindex.LibFastZip.time_lib_fast_zip_fillna 35.4ms
[ 82.35%] ··· Running reindex.Reindex.time_reindex_columns 2.25ms
[ 88.24%] ··· Running reindex.Reindex.time_reindex_dates 1.90ms
[ 94.12%] ··· Running reindex.Reindex.time_reindex_multiindex 650ms
[100.00%] ··· Running reindex.ReindexMethod.time_reindex_method ok
[100.00%] ····
========== ========
method
---------- --------
pad 7.01ms
backfill 6.87ms
========== ========
```
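The per-parameter result tables above come from asv's ``params``/``param_names`` machinery: asv calls ``setup`` and each ``time_*`` method once per parameter value. A rough stdlib sketch of that driver loop (illustrative only; the class and function names here are made up, and real asv performs repeated timing with statistics):

```python
import itertools
import timeit

class ToyBench(object):
    # Parametrized like DropDuplicates/Fillna above.
    params = ['pad', 'backfill']
    param_names = ['method']

    def setup(self, method):
        self.data = list(range(1000))

    def time_sort(self, method):
        sorted(self.data, reverse=(method == 'backfill'))

def run(bench_cls):
    """Time every time_* method once per parameter combination."""
    if isinstance(bench_cls.params[0], (list, tuple)):
        param_sets = bench_cls.params    # multiple parameter axes
    else:
        param_sets = [bench_cls.params]  # single axis, as above
    results = {}
    for combo in itertools.product(*param_sets):
        bench = bench_cls()
        bench.setup(*combo)
        for name in dir(bench):
            if name.startswith('time_'):
                func = getattr(bench, name)
                results[(name,) + combo] = timeit.timeit(
                    lambda: func(*combo), number=1)
    return results
```

This is why each ``time_*`` method in the diff takes the parameter (``method`` or ``inplace``) as an argument even when only ``setup`` uses it.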
| https://api.github.com/repos/pandas-dev/pandas/pulls/18938 | 2017-12-25T05:41:57Z | 2017-12-26T21:47:33Z | 2017-12-26T21:47:33Z | 2017-12-31T04:33:51Z |
CLN/BUG: Consolidate Index.astype and fix tz aware bugs | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 8c94cef4d8ea7..df17f6dd4c16f 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -288,6 +288,7 @@ Conversion
- Bug in :class:`Series` constructor with an int or float list where specifying ``dtype=str``, ``dtype='str'`` or ``dtype='U'`` failed to convert the data elements to strings (:issue:`16605`)
- Bug in :class:`Timestamp` where comparison with an array of ``Timestamp`` objects would result in a ``RecursionError`` (:issue:`15183`)
- Bug in :class:`WeekOfMonth` and :class:`Week` where addition and subtraction did not roll correctly (:issue:`18510`, :issue:`18672`, :issue:`18864`)
+- Bug in :meth:`DatetimeIndex.astype` when converting between timezone aware dtypes, and converting from timezone aware to naive (:issue:`18951`)
Indexing
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 79de63b0caeb6..d5dbfec9ecc49 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1065,12 +1065,18 @@ def _to_embed(self, keep_tz=False, dtype=None):
@Appender(_index_shared_docs['astype'])
def astype(self, dtype, copy=True):
- if is_categorical_dtype(dtype):
+ if is_dtype_equal(self.dtype, dtype):
+ return self.copy() if copy else self
+ elif is_categorical_dtype(dtype):
from .category import CategoricalIndex
return CategoricalIndex(self.values, name=self.name, dtype=dtype,
copy=copy)
- return Index(self.values.astype(dtype, copy=copy), name=self.name,
- dtype=dtype)
+ try:
+ return Index(self.values.astype(dtype, copy=copy), name=self.name,
+ dtype=dtype)
+ except (TypeError, ValueError):
+ msg = 'Cannot cast {name} to dtype {dtype}'
+ raise TypeError(msg.format(name=type(self).__name__, dtype=dtype))
def _to_safe_for_reshape(self):
""" convert to object if we are a categorical """
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 8cc996285fbbd..4a66475c85691 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -11,13 +11,22 @@
import numpy as np
from pandas.core.dtypes.common import (
- is_integer, is_float,
- is_bool_dtype, _ensure_int64,
- is_scalar, is_dtype_equal,
- is_list_like, is_timedelta64_dtype)
+ _ensure_int64,
+ is_dtype_equal,
+ is_float,
+ is_integer,
+ is_list_like,
+ is_scalar,
+ is_bool_dtype,
+ is_categorical_dtype,
+ is_datetime_or_timedelta_dtype,
+ is_float_dtype,
+ is_integer_dtype,
+ is_object_dtype,
+ is_string_dtype,
+ is_timedelta64_dtype)
from pandas.core.dtypes.generic import (
- ABCIndex, ABCSeries,
- ABCPeriodIndex, ABCIndexClass)
+ ABCIndex, ABCSeries, ABCPeriodIndex, ABCIndexClass)
from pandas.core.dtypes.missing import isna
from pandas.core import common as com, algorithms
from pandas.core.algorithms import checked_add_with_arr
@@ -859,6 +868,22 @@ def _concat_same_dtype(self, to_concat, name):
new_data = np.concatenate([c.asi8 for c in to_concat])
return self._simple_new(new_data, **attribs)
+ def astype(self, dtype, copy=True):
+ if is_object_dtype(dtype):
+ return self._box_values_as_index()
+ elif is_string_dtype(dtype) and not is_categorical_dtype(dtype):
+ return Index(self.format(), name=self.name, dtype=object)
+ elif is_integer_dtype(dtype):
+ return Index(self.values.astype('i8', copy=copy), name=self.name,
+ dtype='i8')
+ elif (is_datetime_or_timedelta_dtype(dtype) and
+ not is_dtype_equal(self.dtype, dtype)) or is_float_dtype(dtype):
+ # disallow conversion between datetime/timedelta,
+ # and conversions for any datetimelike to float
+ msg = 'Cannot cast {name} to dtype {dtype}'
+ raise TypeError(msg.format(name=type(self).__name__, dtype=dtype))
+ return super(DatetimeIndexOpsMixin, self).astype(dtype, copy=copy)
+
def _ensure_datetimelike_to_i8(other):
""" helper for coercing an input scalar or array to i8 """
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index bec26ef72d63a..9e804b6575c47 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -10,17 +10,19 @@
from pandas.core.base import _shared_docs
from pandas.core.dtypes.common import (
- _NS_DTYPE, _INT64_DTYPE,
- is_object_dtype, is_datetime64_dtype,
- is_datetimetz, is_dtype_equal,
+ _INT64_DTYPE,
+ _NS_DTYPE,
+ is_object_dtype,
+ is_datetime64_dtype,
+ is_datetimetz,
+ is_dtype_equal,
is_timedelta64_dtype,
- is_integer, is_float,
+ is_integer,
+ is_float,
is_integer_dtype,
is_datetime64_ns_dtype,
is_period_dtype,
is_bool_dtype,
- is_string_dtype,
- is_categorical_dtype,
is_string_like,
is_list_like,
is_scalar,
@@ -36,20 +38,17 @@
from pandas.core.algorithms import checked_add_with_arr
from pandas.core.indexes.base import Index, _index_shared_docs
-from pandas.core.indexes.category import CategoricalIndex
from pandas.core.indexes.numeric import Int64Index, Float64Index
import pandas.compat as compat
-from pandas.tseries.frequencies import (
- to_offset, get_period_alias,
- Resolution)
+from pandas.tseries.frequencies import to_offset, get_period_alias, Resolution
from pandas.core.indexes.datetimelike import (
DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin)
from pandas.tseries.offsets import (
DateOffset, generate_range, Tick, CDay, prefix_mapping)
from pandas.core.tools.timedeltas import to_timedelta
-from pandas.util._decorators import (Appender, cache_readonly,
- deprecate_kwarg, Substitution)
+from pandas.util._decorators import (
+ Appender, cache_readonly, deprecate_kwarg, Substitution)
import pandas.core.common as com
import pandas.tseries.offsets as offsets
import pandas.core.tools.datetimes as tools
@@ -906,25 +905,16 @@ def _format_native_types(self, na_rep='NaT', date_format=None, **kwargs):
@Appender(_index_shared_docs['astype'])
def astype(self, dtype, copy=True):
dtype = pandas_dtype(dtype)
- if is_object_dtype(dtype):
- return self._box_values_as_index()
- elif is_integer_dtype(dtype):
- return Index(self.values.astype('i8', copy=copy), name=self.name,
- dtype='i8')
- elif is_datetime64_ns_dtype(dtype):
- if self.tz is not None:
- return self.tz_convert('UTC').tz_localize(None)
- elif copy is True:
- return self.copy()
- return self
- elif is_categorical_dtype(dtype):
- return CategoricalIndex(self.values, name=self.name, dtype=dtype,
- copy=copy)
- elif is_string_dtype(dtype):
- return Index(self.format(), name=self.name, dtype=object)
+ if (is_datetime64_ns_dtype(dtype) and
+ not is_dtype_equal(dtype, self.dtype)):
+ # GH 18951: datetime64_ns dtype but not equal means different tz
+ new_tz = getattr(dtype, 'tz', None)
+ if getattr(self.dtype, 'tz', None) is None:
+ return self.tz_localize(new_tz)
+ return self.tz_convert(new_tz)
elif is_period_dtype(dtype):
return self.to_period(freq=dtype.freq)
- raise TypeError('Cannot cast DatetimeIndex to dtype %s' % dtype)
+ return super(DatetimeIndex, self).astype(dtype, copy=copy)
def _get_time_micros(self):
values = self.asi8
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 49e574dcbae45..2a132f683c519 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -12,8 +12,6 @@
is_datetime_or_timedelta_dtype,
is_datetime64tz_dtype,
is_integer_dtype,
- is_object_dtype,
- is_categorical_dtype,
is_float_dtype,
is_interval_dtype,
is_scalar,
@@ -29,7 +27,6 @@
Interval, IntervalMixin, IntervalTree,
intervals_to_interval_bounds)
-from pandas.core.indexes.category import CategoricalIndex
from pandas.core.indexes.datetimes import date_range
from pandas.core.indexes.timedeltas import timedelta_range
from pandas.core.indexes.multi import MultiIndex
@@ -671,16 +668,8 @@ def copy(self, deep=False, name=None):
@Appender(_index_shared_docs['astype'])
def astype(self, dtype, copy=True):
if is_interval_dtype(dtype):
- if copy:
- self = self.copy()
- return self
- elif is_object_dtype(dtype):
- return Index(self.values, dtype=object)
- elif is_categorical_dtype(dtype):
- return CategoricalIndex(self.values, name=self.name, dtype=dtype,
- copy=copy)
- raise ValueError('Cannot cast IntervalIndex to dtype {dtype}'
- .format(dtype=dtype))
+ return self.copy() if copy else self
+ return super(IntervalIndex, self).astype(dtype, copy=copy)
@cache_readonly
def dtype(self):
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index 5fc9cb47362d6..5995b9fc7674c 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -4,10 +4,8 @@
from pandas.core.dtypes.common import (
is_dtype_equal,
pandas_dtype,
- is_float_dtype,
- is_object_dtype,
+ needs_i8_conversion,
is_integer_dtype,
- is_categorical_dtype,
is_bool,
is_bool_dtype,
is_scalar)
@@ -17,7 +15,6 @@
from pandas.core import algorithms
from pandas.core.indexes.base import (
Index, InvalidIndexError, _index_shared_docs)
-from pandas.core.indexes.category import CategoricalIndex
from pandas.util._decorators import Appender, cache_readonly
import pandas.core.dtypes.concat as _concat
import pandas.core.indexes.base as ibase
@@ -315,22 +312,14 @@ def inferred_type(self):
@Appender(_index_shared_docs['astype'])
def astype(self, dtype, copy=True):
dtype = pandas_dtype(dtype)
- if is_float_dtype(dtype):
- values = self._values.astype(dtype, copy=copy)
- elif is_integer_dtype(dtype):
- if self.hasnans:
- raise ValueError('cannot convert float NaN to integer')
- values = self._values.astype(dtype, copy=copy)
- elif is_object_dtype(dtype):
- values = self._values.astype('object', copy=copy)
- elif is_categorical_dtype(dtype):
- return CategoricalIndex(self, name=self.name, dtype=dtype,
- copy=copy)
- else:
- raise TypeError('Setting {cls} dtype to anything other than '
- 'float64, object, or category is not supported'
- .format(cls=self.__class__))
- return Index(values, name=self.name, dtype=dtype)
+ if needs_i8_conversion(dtype):
+ msg = ('Cannot convert Float64Index to dtype {dtype}; integer '
+ 'values are required for conversion').format(dtype=dtype)
+ raise TypeError(msg)
+ elif is_integer_dtype(dtype) and self.hasnans:
+ # GH 13149
+ raise ValueError('Cannot convert NA to integer')
+ return super(Float64Index, self).astype(dtype, copy=copy)
@Appender(_index_shared_docs['_convert_scalar_indexer'])
def _convert_scalar_indexer(self, key, kind=None):
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 64756906d8a63..8b35b1a231551 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -7,16 +7,14 @@
from pandas.core.dtypes.common import (
is_integer,
is_float,
- is_object_dtype,
is_integer_dtype,
is_float_dtype,
is_scalar,
is_datetime64_dtype,
- is_datetime64tz_dtype,
+ is_datetime64_any_dtype,
is_timedelta64_dtype,
is_period_dtype,
is_bool_dtype,
- is_categorical_dtype,
pandas_dtype,
_ensure_object)
from pandas.core.dtypes.dtypes import PeriodDtype
@@ -24,7 +22,6 @@
import pandas.tseries.frequencies as frequencies
from pandas.tseries.frequencies import get_freq_code as _gfc
-from pandas.core.indexes.category import CategoricalIndex
from pandas.core.indexes.datetimes import DatetimeIndex, Int64Index, Index
from pandas.core.indexes.timedeltas import TimedeltaIndex
from pandas.core.indexes.datetimelike import DatelikeOps, DatetimeIndexOpsMixin
@@ -506,23 +503,14 @@ def asof_locs(self, where, mask):
@Appender(_index_shared_docs['astype'])
def astype(self, dtype, copy=True, how='start'):
dtype = pandas_dtype(dtype)
- if is_object_dtype(dtype):
- return self._box_values_as_index()
- elif is_integer_dtype(dtype):
- if copy:
- return self._int64index.copy()
- else:
- return self._int64index
- elif is_datetime64_dtype(dtype):
- return self.to_timestamp(how=how)
- elif is_datetime64tz_dtype(dtype):
- return self.to_timestamp(how=how).tz_localize(dtype.tz)
+ if is_integer_dtype(dtype):
+ return self._int64index.copy() if copy else self._int64index
+ elif is_datetime64_any_dtype(dtype):
+ tz = getattr(dtype, 'tz', None)
+ return self.to_timestamp(how=how).tz_localize(tz)
elif is_period_dtype(dtype):
return self.asfreq(freq=dtype.freq)
- elif is_categorical_dtype(dtype):
- return CategoricalIndex(self.values, name=self.name, dtype=dtype,
- copy=copy)
- raise TypeError('Cannot cast PeriodIndex to dtype %s' % dtype)
+ return super(PeriodIndex, self).astype(dtype, copy=copy)
@Substitution(klass='PeriodIndex')
@Appender(_shared_docs['searchsorted'])
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 25c764b138465..d28a09225e8b8 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -4,15 +4,13 @@
import numpy as np
from pandas.core.dtypes.common import (
_TD_DTYPE,
- is_integer, is_float,
+ is_integer,
+ is_float,
is_bool_dtype,
is_list_like,
is_scalar,
- is_integer_dtype,
- is_object_dtype,
is_timedelta64_dtype,
is_timedelta64_ns_dtype,
- is_categorical_dtype,
pandas_dtype,
_ensure_int64)
from pandas.core.dtypes.missing import isna
@@ -20,7 +18,6 @@
from pandas.core.common import _maybe_box, _values_from_object
from pandas.core.indexes.base import Index
-from pandas.core.indexes.category import CategoricalIndex
from pandas.core.indexes.numeric import Int64Index
import pandas.compat as compat
from pandas.compat import u
@@ -483,28 +480,14 @@ def to_pytimedelta(self):
@Appender(_index_shared_docs['astype'])
def astype(self, dtype, copy=True):
dtype = pandas_dtype(dtype)
-
- if is_object_dtype(dtype):
- return self._box_values_as_index()
- elif is_timedelta64_ns_dtype(dtype):
- if copy is True:
- return self.copy()
- return self
- elif is_timedelta64_dtype(dtype):
+ if is_timedelta64_dtype(dtype) and not is_timedelta64_ns_dtype(dtype):
# return an index (essentially this is division)
result = self.values.astype(dtype, copy=copy)
if self.hasnans:
- return Index(self._maybe_mask_results(result,
- convert='float64'),
- name=self.name)
+ values = self._maybe_mask_results(result, convert='float64')
+ return Index(values, name=self.name)
return Index(result.astype('i8'), name=self.name)
- elif is_integer_dtype(dtype):
- return Index(self.values.astype('i8', copy=copy), dtype='i8',
- name=self.name)
- elif is_categorical_dtype(dtype):
- return CategoricalIndex(self.values, name=self.name, dtype=dtype,
- copy=copy)
- raise TypeError('Cannot cast TimedeltaIndex to dtype %s' % dtype)
+ return super(TimedeltaIndex, self).astype(dtype, copy=copy)
def union(self, other):
"""
diff --git a/pandas/tests/indexes/datetimes/test_astype.py b/pandas/tests/indexes/datetimes/test_astype.py
index e211807b6a3e4..1d72ca609b1d3 100644
--- a/pandas/tests/indexes/datetimes/test_astype.py
+++ b/pandas/tests/indexes/datetimes/test_astype.py
@@ -57,6 +57,18 @@ def test_astype_with_tz(self):
dtype=object)
tm.assert_series_equal(result, expected)
+ # GH 18951: tz-aware to tz-aware
+ idx = date_range('20170101', periods=4, tz='US/Pacific')
+ result = idx.astype('datetime64[ns, US/Eastern]')
+ expected = date_range('20170101 03:00:00', periods=4, tz='US/Eastern')
+ tm.assert_index_equal(result, expected)
+
+ # GH 18951: tz-naive to tz-aware
+ idx = date_range('20170101', periods=4)
+ result = idx.astype('datetime64[ns, US/Eastern]')
+ expected = date_range('20170101', periods=4, tz='US/Eastern')
+ tm.assert_index_equal(result, expected)
+
def test_astype_str_compat(self):
# GH 13149, GH 13209
# verify that we are returing NaT as a string (and not unicode)
@@ -126,15 +138,15 @@ def test_astype_object(self):
tm.assert_index_equal(casted, Index(exp_values, dtype=np.object_))
assert casted.tolist() == exp_values
- def test_astype_raises(self):
+ @pytest.mark.parametrize('dtype', [
+ float, 'timedelta64', 'timedelta64[ns]', 'datetime64',
+ 'datetime64[D]'])
+ def test_astype_raises(self, dtype):
# GH 13149, GH 13209
idx = DatetimeIndex(['2016-05-16', 'NaT', NaT, np.NaN])
-
- pytest.raises(TypeError, idx.astype, float)
- pytest.raises(TypeError, idx.astype, 'timedelta64')
- pytest.raises(TypeError, idx.astype, 'timedelta64[ns]')
- pytest.raises(TypeError, idx.astype, 'datetime64')
- pytest.raises(TypeError, idx.astype, 'datetime64[D]')
+ msg = 'Cannot cast DatetimeIndex to dtype'
+ with tm.assert_raises_regex(TypeError, msg):
+ idx.astype(dtype)
def test_index_convert_to_datetime_array(self):
def _check_rng(rng):
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 599f6efd16f74..ab341b70dfe91 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -39,19 +39,23 @@ def test_astype_conversion(self):
dtype=np.int64)
tm.assert_index_equal(result, expected)
+ result = idx.astype(str)
+ expected = Index(str(x) for x in idx)
+ tm.assert_index_equal(result, expected)
+
idx = period_range('1990', '2009', freq='A')
result = idx.astype('i8')
tm.assert_index_equal(result, Index(idx.asi8))
tm.assert_numpy_array_equal(result.values, idx.asi8)
- def test_astype_raises(self):
+ @pytest.mark.parametrize('dtype', [
+ float, 'timedelta64', 'timedelta64[ns]'])
+ def test_astype_raises(self, dtype):
# GH 13149, GH 13209
idx = PeriodIndex(['2016-05-16', 'NaT', NaT, np.NaN], freq='D')
-
- pytest.raises(TypeError, idx.astype, str)
- pytest.raises(TypeError, idx.astype, float)
- pytest.raises(TypeError, idx.astype, 'timedelta64')
- pytest.raises(TypeError, idx.astype, 'timedelta64[ns]')
+ msg = 'Cannot cast PeriodIndex to dtype'
+ with tm.assert_raises_regex(TypeError, msg):
+ idx.astype(dtype)
def test_pickle_compat_construction(self):
pass
diff --git a/pandas/tests/indexes/test_interval.py b/pandas/tests/indexes/test_interval.py
index 74446af8b77f6..4169c93809059 100644
--- a/pandas/tests/indexes/test_interval.py
+++ b/pandas/tests/indexes/test_interval.py
@@ -390,14 +390,7 @@ def test_equals(self, closed):
assert not expected.equals(expected_other_closed)
def test_astype(self, closed):
-
idx = self.create_index(closed=closed)
-
- for dtype in [np.int64, np.float64, 'datetime64[ns]',
- 'datetime64[ns, US/Eastern]', 'timedelta64',
- 'period[M]']:
- pytest.raises(ValueError, idx.astype, dtype)
-
result = idx.astype(object)
tm.assert_index_equal(result, Index(idx.values, dtype='object'))
assert not idx.equals(result)
@@ -407,6 +400,15 @@ def test_astype(self, closed):
tm.assert_index_equal(result, idx)
assert result.equals(idx)
+ @pytest.mark.parametrize('dtype', [
+ np.int64, np.float64, 'period[M]', 'timedelta64', 'datetime64[ns]',
+ 'datetime64[ns, US/Eastern]'])
+ def test_astype_errors(self, closed, dtype):
+ idx = self.create_index(closed=closed)
+ msg = 'Cannot cast IntervalIndex to dtype'
+ with tm.assert_raises_regex(TypeError, msg):
+ idx.astype(dtype)
+
@pytest.mark.parametrize('klass', [list, tuple, np.array, pd.Series])
def test_where(self, closed, klass):
idx = self.create_index(closed=closed)
diff --git a/pandas/tests/indexes/test_range.py b/pandas/tests/indexes/test_range.py
index 96d5981abc1bb..55c06e8854333 100644
--- a/pandas/tests/indexes/test_range.py
+++ b/pandas/tests/indexes/test_range.py
@@ -711,7 +711,7 @@ def test_nbytes(self):
# memory savings vs int index
i = RangeIndex(0, 1000)
- assert i.nbytes < i.astype(int).nbytes / 10
+ assert i.nbytes < i._int64index.nbytes / 10
# constant memory usage
i2 = RangeIndex(0, 10)
diff --git a/pandas/tests/indexes/timedeltas/test_astype.py b/pandas/tests/indexes/timedeltas/test_astype.py
index 0fa0e036096d0..af16fe71edcf3 100644
--- a/pandas/tests/indexes/timedeltas/test_astype.py
+++ b/pandas/tests/indexes/timedeltas/test_astype.py
@@ -40,8 +40,11 @@ def test_astype(self):
dtype=np.int64)
tm.assert_index_equal(result, expected)
- rng = timedelta_range('1 days', periods=10)
+ result = idx.astype(str)
+ expected = Index(str(x) for x in idx)
+ tm.assert_index_equal(result, expected)
+ rng = timedelta_range('1 days', periods=10)
result = rng.astype('i8')
tm.assert_index_equal(result, Index(rng.asi8))
tm.assert_numpy_array_equal(rng.asi8, result.values)
@@ -62,14 +65,14 @@ def test_astype_timedelta64(self):
tm.assert_index_equal(result, idx)
assert result is idx
- def test_astype_raises(self):
+ @pytest.mark.parametrize('dtype', [
+ float, 'datetime64', 'datetime64[ns]'])
+ def test_astype_raises(self, dtype):
# GH 13149, GH 13209
idx = TimedeltaIndex([1e14, 'NaT', pd.NaT, np.NaN])
-
- pytest.raises(TypeError, idx.astype, float)
- pytest.raises(TypeError, idx.astype, str)
- pytest.raises(TypeError, idx.astype, 'datetime64')
- pytest.raises(TypeError, idx.astype, 'datetime64[ns]')
+ msg = 'Cannot cast TimedeltaIndex to dtype'
+ with tm.assert_raises_regex(TypeError, msg):
+ idx.astype(dtype)
def test_pickle_compat_construction(self):
pass
| - [X] closes #18704
- [X] closes #18951
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
Only behavioral changes:
- Allowed `.astype(str)` on `TimedeltaIndex` and `PeriodIndex`, which previously raised.
- Couldn't see a reason why it shouldn't be supported.
- Fixed issues related to tz-aware conversion in #18951
- `RangeIndex.astype('int64')` now remains a `RangeIndex`
- Previously returned an `Int64Index` | https://api.github.com/repos/pandas-dev/pandas/pulls/18937 | 2017-12-25T02:00:32Z | 2017-12-27T19:43:00Z | 2017-12-27T19:43:00Z | 2017-12-27T20:12:47Z
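The behavioral changes listed in this PR body can be sketched as follows. This is an editorial illustration rather than code from the PR itself, and it assumes a pandas build that includes this refactor (0.23 or later); the tz-naive-to-tz-aware path from GH 18951 is omitted because later pandas versions changed that behavior again.

```python
import pandas as pd

# .astype(str) on TimedeltaIndex and PeriodIndex now returns an Index of
# strings instead of raising TypeError.
tdi = pd.timedelta_range('1 days', periods=3)
print(tdi.astype(str))

pi = pd.period_range('2017-01-01', periods=3, freq='D')
print(pi.astype(str))

# GH 18951: tz-aware -> tz-aware conversion via astype behaves like
# tz_convert, i.e. the same instants expressed in the new zone.
dti = pd.date_range('2017-01-01', periods=2, tz='US/Pacific')
print(dti.astype('datetime64[ns, US/Eastern]'))
```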
DOC: Using deprecated sphinx directive instead of non-standard messages in docstrings (#18928) | diff --git a/ci/lint.sh b/ci/lint.sh
index b4eafcaf28e39..d00e0c9afb6dc 100755
--- a/ci/lint.sh
+++ b/ci/lint.sh
@@ -117,6 +117,10 @@ if [ "$LINT" ]; then
fi
done
echo "Check for incorrect sphinx directives DONE"
+
+ echo "Check for deprecated messages without sphinx directive"
+ grep -R --include="*.py" --include="*.pyx" -E "(DEPRECATED|DEPRECATE|Deprecated)(:|,|\.)" pandas
+ echo "Check for deprecated messages without sphinx directive DONE"
else
echo "NOT Linting"
fi
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index 83437022563d5..dc07104f64c65 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -547,7 +547,30 @@ Backwards Compatibility
Please try to maintain backward compatibility. *pandas* has lots of users with lots of
existing code, so don't break it if at all possible. If you think breakage is required,
clearly state why as part of the pull request. Also, be careful when changing method
-signatures and add deprecation warnings where needed.
+signatures and add deprecation warnings where needed. Also, add the deprecated sphinx
+directive to the deprecated functions or methods.
+
+If a function with the same arguments as the one being deprecated exists, you can use
+the ``pandas.util._decorators.deprecate``:
+
+.. code-block:: python
+
+ from pandas.util._decorators import deprecate
+
+ deprecate('old_func', 'new_func', '0.21.0')
+
+Otherwise, you need to do it manually:
+
+.. code-block:: python
+
+ def old_func():
+ """Summary of the function.
+
+ .. deprecated:: 0.21.0
+ Use new_func instead.
+ """
+ warnings.warn('Use new_func instead.', FutureWarning, stacklevel=2)
+ new_func()
.. _contributing.ci:
diff --git a/pandas/__init__.py b/pandas/__init__.py
index 93c5b6484b840..78501620d780b 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -51,7 +51,7 @@
plot_params = pandas.plotting._style._Options(deprecated=True)
# do not import deprecate to top namespace
scatter_matrix = pandas.util._decorators.deprecate(
- 'pandas.scatter_matrix', pandas.plotting.scatter_matrix,
+ 'pandas.scatter_matrix', pandas.plotting.scatter_matrix, '0.20.0',
'pandas.plotting.scatter_matrix')
from pandas.util._print_versions import show_versions
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index ffc1c89dd8adf..de31643742d87 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -389,9 +389,6 @@ class Timestamp(_Timestamp):
Unit used for conversion if ts_input is of type int or float. The
valid values are 'D', 'h', 'm', 's', 'ms', 'us', and 'ns'. For
example, 's' means seconds and 'ms' means milliseconds.
- offset : str, DateOffset
- Deprecated, use freq
-
year, month, day : int
.. versionadded:: 0.19.0
hour, minute, second, microsecond : int, optional, default 0
diff --git a/pandas/computation/expressions.py b/pandas/computation/expressions.py
index f46487cfa1b79..d194cd2404c9d 100644
--- a/pandas/computation/expressions.py
+++ b/pandas/computation/expressions.py
@@ -2,6 +2,10 @@
def set_use_numexpr(v=True):
+ """
+ .. deprecated:: 0.20.0
+ Use ``pandas.set_option('compute.use_numexpr', v)`` instead.
+ """
warnings.warn("pandas.computation.expressions.set_use_numexpr is "
"deprecated and will be removed in a future version.\n"
"you can toggle usage of numexpr via "
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index d47cb0762447b..630b68e9ed4a6 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -594,7 +594,8 @@ def _get_labels(self):
"""
Get the category labels (deprecated).
- Deprecated, use .codes!
+ .. deprecated:: 0.15.0
+ Use `.codes` instead.
"""
warn("'labels' is deprecated. Use 'codes' instead", FutureWarning,
stacklevel=2)
diff --git a/pandas/core/datetools.py b/pandas/core/datetools.py
index 3444d09c6ed1b..83167a45369c4 100644
--- a/pandas/core/datetools.py
+++ b/pandas/core/datetools.py
@@ -1,4 +1,8 @@
-"""A collection of random tools for dealing with dates in Python"""
+"""A collection of random tools for dealing with dates in Python.
+
+.. deprecated:: 0.19.0
+ Use pandas.tseries module instead.
+"""
# flake8: noqa
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index e2ee3deb5396e..5d6fc7487eeb5 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -758,10 +758,9 @@ def is_dtype_union_equal(source, target):
def is_any_int_dtype(arr_or_dtype):
- """
- DEPRECATED: This function will be removed in a future version.
+ """Check whether the provided array or dtype is of an integer dtype.
- Check whether the provided array or dtype is of an integer dtype.
+ .. deprecated:: 0.20.0
In this function, timedelta64 instances are also considered "any-integer"
type objects and will return True.
@@ -1557,12 +1556,11 @@ def is_float_dtype(arr_or_dtype):
def is_floating_dtype(arr_or_dtype):
- """
- DEPRECATED: This function will be removed in a future version.
-
- Check whether the provided array or dtype is an instance of
+ """Check whether the provided array or dtype is an instance of
numpy's float dtype.
+ .. deprecated:: 0.20.0
+
Unlike, `is_float_dtype`, this check is a lot stricter, as it requires
`isinstance` of `np.floating` and not `issubclass`.
"""
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 821db3c263885..62993a3d168db 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1326,9 +1326,10 @@ def _from_arrays(cls, arrays, columns, index, dtype=None):
def from_csv(cls, path, header=0, sep=',', index_col=0, parse_dates=True,
encoding=None, tupleize_cols=None,
infer_datetime_format=False):
- """
- Read CSV file (DEPRECATED, please use :func:`pandas.read_csv`
- instead).
+ """Read CSV file.
+
+ .. deprecated:: 0.21.0
+ Use :func:`pandas.read_csv` instead.
It is preferable to use the more powerful :func:`pandas.read_csv`
for most general purposes, but ``from_csv`` makes for an easy
@@ -1979,12 +1980,10 @@ def _unpickle_matrix_compat(self, state): # pragma: no cover
# Getting and setting elements
def get_value(self, index, col, takeable=False):
- """
- Quickly retrieve single value at passed column and index
+ """Quickly retrieve single value at passed column and index
.. deprecated:: 0.21.0
-
- Please use .at[] or .iat[] accessors.
+ Use .at[] or .iat[] accessors instead.
Parameters
----------
@@ -2024,12 +2023,10 @@ def _get_value(self, index, col, takeable=False):
_get_value.__doc__ = get_value.__doc__
def set_value(self, index, col, value, takeable=False):
- """
- Put single value at passed column and index
+ """Put single value at passed column and index
.. deprecated:: 0.21.0
-
- Please use .at[] or .iat[] accessors.
+ Use .at[] or .iat[] accessors instead.
Parameters
----------
@@ -3737,12 +3734,13 @@ def sort_index(self, axis=0, level=None, ascending=True, inplace=False,
def sortlevel(self, level=0, axis=0, ascending=True, inplace=False,
sort_remaining=True):
- """
- DEPRECATED: use :meth:`DataFrame.sort_index`
-
- Sort multilevel index by chosen axis and primary level. Data will be
+ """Sort multilevel index by chosen axis and primary level. Data will be
lexicographically sorted by the chosen level followed by the other
- levels (in order)
+ levels (in order).
+
+ .. deprecated:: 0.20.0
+ Use :meth:`DataFrame.sort_index`
+
Parameters
----------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 37247ab133948..c9672a43a95a8 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2718,10 +2718,10 @@ def xs(self, key, axis=0, level=None, drop_level=True):
_xs = xs
def select(self, crit, axis=0):
- """
- Return data corresponding to axis labels matching criteria
+ """Return data corresponding to axis labels matching criteria
- DEPRECATED: use df.loc[df.index.map(crit)] to select via labels
+ .. deprecated:: 0.21.0
+ Use df.loc[df.index.map(crit)] to select via labels
Parameters
----------
@@ -4108,8 +4108,11 @@ def _consolidate(self, inplace=False):
return self._constructor(cons_data).__finalize__(self)
def consolidate(self, inplace=False):
- """
- DEPRECATED: consolidate will be an internal implementation only.
+ """Compute NDFrame with "consolidated" internals (data of each dtype
+ grouped together in a single ndarray).
+
+ .. deprecated:: 0.20.0
+ Consolidate will be an internal implementation only.
"""
# 15483
warnings.warn("consolidate is deprecated and will be removed in a "
@@ -4160,11 +4163,10 @@ def _get_bool_data(self):
# Internal Interface Methods
def as_matrix(self, columns=None):
- """
- DEPRECATED: as_matrix will be removed in a future version.
- Use :meth:`DataFrame.values` instead.
+ """Convert the frame to its Numpy-array representation.
- Convert the frame to its Numpy-array representation.
+ .. deprecated:: 0.23.0
+ Use :meth:`DataFrame.values` instead.
Parameters
----------
@@ -4479,12 +4481,11 @@ def _convert(self, datetime=False, numeric=False, timedelta=False,
timedelta=timedelta, coerce=coerce,
copy=copy)).__finalize__(self)
- # TODO: Remove in 0.18 or 2017, which ever is sooner
def convert_objects(self, convert_dates=True, convert_numeric=False,
convert_timedeltas=True, copy=True):
- """
- Deprecated.
- Attempt to infer better dtype for object columns
+ """Attempt to infer better dtype for object columns.
+
+ .. deprecated:: 0.21.0
Parameters
----------
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index ee2fdd213dd9a..07e001007d58d 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -441,9 +441,10 @@ def _isnan(self):
@property
def asobject(self):
- """DEPRECATED: Use ``astype(object)`` instead.
+ """Return object Index which contains boxed values.
- return object Index which contains boxed values
+ .. deprecated:: 0.23.0
+ Use ``astype(object)`` instead.
*this is an internal non-public method*
"""
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 1c401c4854306..26e7c192ad0af 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -477,8 +477,7 @@ def as_matrix(self):
# Getting and setting elements
def get_value(self, *args, **kwargs):
- """
- Quickly retrieve single value at (item, major, minor) location
+ """Quickly retrieve single value at (item, major, minor) location
.. deprecated:: 0.21.0
@@ -525,8 +524,7 @@ def _get_value(self, *args, **kwargs):
_get_value.__doc__ = get_value.__doc__
def set_value(self, *args, **kwargs):
- """
- Quickly set single value at (item, major, minor) location
+ """Quickly set single value at (item, major, minor) location
.. deprecated:: 0.21.0
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 5d8092fd30496..71cded4f9c888 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -93,8 +93,10 @@
# see gh-16971
def remove_na(arr):
- """
- DEPRECATED : this function will be removed in a future version.
+ """Remove null values from array like structure.
+
+ .. deprecated:: 0.21.0
+ Use s[s.notnull()] instead.
"""
warnings.warn("remove_na is deprecated and is a private "
@@ -290,8 +292,10 @@ def _init_dict(self, data, index=None, dtype=None):
@classmethod
def from_array(cls, arr, index=None, name=None, dtype=None, copy=False,
fastpath=False):
- """
- DEPRECATED: use the pd.Series(..) constructor instead.
+ """Construct Series from array.
+
+ .. deprecated:: 0.23.0
+ Use pd.Series(..) constructor instead.
"""
warnings.warn("'from_array' is deprecated and will be removed in a "
@@ -450,9 +454,11 @@ def get_values(self):
@property
def asobject(self):
- """DEPRECATED: Use ``astype(object)`` instead.
+ """Return object Series which contains boxed values.
+
+ .. deprecated:: 0.23.0
+ Use ``astype(object)`` instead.
- return object Series which contains boxed values
*this is an internal non-public method*
"""
@@ -911,12 +917,10 @@ def repeat(self, repeats, *args, **kwargs):
index=new_index).__finalize__(self)
def get_value(self, label, takeable=False):
- """
- Quickly retrieve single value at passed index label
+ """Quickly retrieve single value at passed index label
.. deprecated:: 0.21.0
-
- Please use .at[] or .iat[] accessors.
+ Please use .at[] or .iat[] accessors.
Parameters
----------
@@ -940,14 +944,12 @@ def _get_value(self, label, takeable=False):
_get_value.__doc__ = get_value.__doc__
def set_value(self, label, value, takeable=False):
- """
- Quickly set single value at passed label. If label is not contained, a
- new object is created with the label placed at the end of the result
- index
+ """Quickly set single value at passed label. If label is not contained,
+ a new object is created with the label placed at the end of the result
+ index.
.. deprecated:: 0.21.0
-
- Please use .at[] or .iat[] accessors.
+ Please use .at[] or .iat[] accessors.
Parameters
----------
@@ -1382,13 +1384,13 @@ def idxmax(self, axis=None, skipna=True, *args, **kwargs):
return self.index[i]
# ndarray compat
- argmin = deprecate('argmin', idxmin,
+ argmin = deprecate('argmin', idxmin, '0.21.0',
msg="'argmin' is deprecated, use 'idxmin' instead. "
"The behavior of 'argmin' will be corrected to "
"return the positional minimum in the future. "
"Use 'series.values.argmin' to get the position of "
"the minimum now.")
- argmax = deprecate('argmax', idxmax,
+ argmax = deprecate('argmax', idxmax, '0.21.0',
msg="'argmax' is deprecated, use 'idxmax' instead. "
"The behavior of 'argmax' will be corrected to "
"return the positional maximum in the future. "
@@ -2120,12 +2122,12 @@ def nsmallest(self, n=5, keep='first'):
return algorithms.SelectNSeries(self, n=n, keep=keep).nsmallest()
def sortlevel(self, level=0, ascending=True, sort_remaining=True):
- """
- DEPRECATED: use :meth:`Series.sort_index`
-
- Sort Series with MultiIndex by chosen level. Data will be
+ """Sort Series with MultiIndex by chosen level. Data will be
lexicographically sorted by the chosen level followed by the other
- levels (in order)
+ levels (in order).
+
+ .. deprecated:: 0.20.0
+ Use :meth:`Series.sort_index`
Parameters
----------
@@ -2670,7 +2672,12 @@ def shift(self, periods=1, freq=None, axis=0):
return super(Series, self).shift(periods=periods, freq=freq, axis=axis)
def reindex_axis(self, labels, axis=0, **kwargs):
- """ for compatibility with higher dims """
+ """Conform Series to new index with optional filling logic.
+
+ .. deprecated:: 0.21.0
+ Use ``Series.reindex`` instead.
+ """
+ # for compatibility with higher dims
if axis != 0:
raise ValueError("cannot reindex series on non-zero axis!")
msg = ("'.reindex_axis' is deprecated and will be removed in a future "
@@ -2808,9 +2815,10 @@ def between(self, left, right, inclusive=True):
@classmethod
def from_csv(cls, path, sep=',', parse_dates=True, header=None,
index_col=0, encoding=None, infer_datetime_format=False):
- """
- Read CSV file (DEPRECATED, please use :func:`pandas.read_csv`
- instead).
+ """Read CSV file.
+
+ .. deprecated:: 0.21.0
+ Use :func:`pandas.read_csv` instead.
It is preferable to use the more powerful :func:`pandas.read_csv`
for most general purposes, but ``from_csv`` makes for an easy
@@ -2978,8 +2986,10 @@ def dropna(self, axis=0, inplace=False, **kwargs):
return self.copy()
def valid(self, inplace=False, **kwargs):
- """DEPRECATED. Series.valid will be removed in a future version.
- Use :meth:`Series.dropna` instead.
+ """Return Series without null values.
+
+ .. deprecated:: 0.23.0
+ Use :meth:`Series.dropna` instead.
"""
warnings.warn("Method .valid will be removed in a future version. "
"Use .dropna instead.", FutureWarning, stacklevel=2)
diff --git a/pandas/core/sparse/frame.py b/pandas/core/sparse/frame.py
index 05f39a8caa6f6..49a0b8d86ad31 100644
--- a/pandas/core/sparse/frame.py
+++ b/pandas/core/sparse/frame.py
@@ -820,12 +820,12 @@ def cumsum(self, axis=0, *args, **kwargs):
return self.apply(lambda x: x.cumsum(), axis=axis)
- @Appender(generic._shared_docs['isna'])
+ @Appender(generic._shared_docs['isna'] % _shared_doc_kwargs)
def isna(self):
return self._apply_columns(lambda x: x.isna())
isnull = isna
- @Appender(generic._shared_docs['notna'])
+ @Appender(generic._shared_docs['notna'] % _shared_doc_kwargs)
def notna(self):
return self._apply_columns(lambda x: x.notna())
notnull = notna
diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py
index 8a38b1054a1f5..b5d2c0b607444 100644
--- a/pandas/core/sparse/series.py
+++ b/pandas/core/sparse/series.py
@@ -255,9 +255,10 @@ def npoints(self):
@classmethod
def from_array(cls, arr, index=None, name=None, copy=False,
fill_value=None, fastpath=False):
- """
- DEPRECATED: use the pd.SparseSeries(..) constructor instead.
+ """Construct SparseSeries from array.
+ .. deprecated:: 0.23.0
+ Use the pd.SparseSeries(..) constructor instead.
"""
warnings.warn("'from_array' is deprecated and will be removed in a "
"future version. Please use the pd.SparseSeries(..) "
@@ -571,8 +572,9 @@ def to_dense(self, sparse_only=False):
Parameters
----------
- sparse_only: bool, default False
- DEPRECATED: this argument will be removed in a future version.
+ sparse_only : bool, default False
+ .. deprecated:: 0.20.0
+ This argument will be removed in a future version.
If True, return just the non-sparse values, or the dense version
of `self.values` if False.
@@ -679,7 +681,7 @@ def cumsum(self, axis=0, *args, **kwargs):
new_array, index=self.index,
sparse_index=new_array.sp_index).__finalize__(self)
- @Appender(generic._shared_docs['isna'])
+ @Appender(generic._shared_docs['isna'] % _shared_doc_kwargs)
def isna(self):
arr = SparseArray(isna(self.values.sp_values),
sparse_index=self.values.sp_index,
@@ -687,7 +689,7 @@ def isna(self):
return self._constructor(arr, index=self.index).__finalize__(self)
isnull = isna
- @Appender(generic._shared_docs['notna'])
+ @Appender(generic._shared_docs['notna'] % _shared_doc_kwargs)
def notna(self):
arr = SparseArray(notna(self.values.sp_values),
sparse_index=self.values.sp_index,
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 3b7ec2ad8a508..e0012c25e366d 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -478,7 +478,8 @@ def str_match(arr, pat, case=True, flags=0, na=np.nan, as_indexer=None):
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
na : default NaN, fill value for missing values.
- as_indexer : DEPRECATED - Keyword is ignored.
+ as_indexer
+ .. deprecated:: 0.21.0
Returns
-------
diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py
index 6be6152b09fc8..eed9cee54efb3 100644
--- a/pandas/util/_decorators.py
+++ b/pandas/util/_decorators.py
@@ -7,10 +7,15 @@
from functools import wraps, update_wrapper
-def deprecate(name, alternative, alt_name=None, klass=None,
- stacklevel=2, msg=None):
- """
- Return a new function that emits a deprecation warning on use.
+def deprecate(name, alternative, version, alt_name=None,
+ klass=None, stacklevel=2, msg=None):
+ """Return a new function that emits a deprecation warning on use.
+
+ To use this method for a deprecated function, another function
+ `alternative` with the same signature must exist. The deprecated
+ function will emit a deprecation warning, and in the docstring
+ it will contain the deprecation directive with the provided version
+ so it can be detected for future removal.
Parameters
----------
@@ -18,6 +23,8 @@ def deprecate(name, alternative, alt_name=None, klass=None,
Name of function to deprecate
alternative : str
Name of function to use instead
+ version : str
+ Version of pandas in which the method has been deprecated
alt_name : str, optional
Name to use in preference of alternative.__name__
klass : Warning, default FutureWarning
@@ -29,16 +36,24 @@ def deprecate(name, alternative, alt_name=None, klass=None,
alt_name = alt_name or alternative.__name__
klass = klass or FutureWarning
- msg = msg or "{} is deprecated, use {} instead".format(name, alt_name)
+ warning_msg = msg or '{} is deprecated, use {} instead'.format(name,
+ alt_name)
@wraps(alternative)
def wrapper(*args, **kwargs):
- warnings.warn(msg, klass, stacklevel=stacklevel)
+ warnings.warn(warning_msg, klass, stacklevel=stacklevel)
return alternative(*args, **kwargs)
- if getattr(wrapper, '__doc__', None) is not None:
- wrapper.__doc__ = ('\n'.join(wrap(msg, 70)) + '\n'
- + dedent(wrapper.__doc__))
+ # adding deprecated directive to the docstring
+ msg = msg or 'Use `{alt_name}` instead.'
+ docstring = '.. deprecated:: {}\n'.format(version)
+ docstring += dedent(' ' + ('\n'.join(wrap(msg, 70))))
+
+ if getattr(wrapper, '__doc__') is not None:
+ docstring += dedent(wrapper.__doc__)
+
+ wrapper.__doc__ = docstring
+
return wrapper
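The decorator change above can be sketched standalone roughly as follows. This is a simplified sketch of the pattern, not pandas' exact implementation; `old_sum`/`new_sum` are hypothetical names used only for illustration:

```python
import warnings
from functools import wraps
from textwrap import dedent


def deprecate(name, alternative, version, alt_name=None,
              klass=FutureWarning, stacklevel=2, msg=None):
    """Return a wrapper around `alternative` that warns on use."""
    alt_name = alt_name or alternative.__name__
    warning_msg = msg or '{} is deprecated, use {} instead'.format(
        name, alt_name)

    @wraps(alternative)
    def wrapper(*args, **kwargs):
        warnings.warn(warning_msg, klass, stacklevel=stacklevel)
        return alternative(*args, **kwargs)

    # prepend the Sphinx deprecation directive so the version can be
    # detected later by tooling that scans docstrings
    directive = '.. deprecated:: {}\n    {}'.format(
        version, msg or 'Use `{}` instead.'.format(alt_name))
    wrapper.__doc__ = directive + '\n' + dedent(wrapper.__doc__ or '')
    return wrapper


def new_sum(values):
    """Sum the values."""
    return sum(values)


old_sum = deprecate('old_sum', new_sum, '0.21.0')
```

Calling `old_sum` emits a `FutureWarning`, and `old_sum.__doc__` now starts with `.. deprecated:: 0.21.0`, which is what makes the deprecation machine-detectable.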
diff --git a/scripts/announce.py b/scripts/announce.py
index 1459d2fc18d2a..7b7933eba54dd 100644
--- a/scripts/announce.py
+++ b/scripts/announce.py
@@ -30,7 +30,7 @@
From the bash command line with $GITHUB token.
- $ ./scripts/announce $GITHUB v1.11.0..v1.11.1 > announce.rst
+ $ ./scripts/announce.py $GITHUB v1.11.0..v1.11.1 > announce.rst
"""
from __future__ import print_function, division
diff --git a/scripts/api_rst_coverage.py b/scripts/api_rst_coverage.py
old mode 100644
new mode 100755
index 45340ba0923c4..28e761ef256d0
--- a/scripts/api_rst_coverage.py
+++ b/scripts/api_rst_coverage.py
@@ -1,3 +1,22 @@
+#!/usr/bin/env python
+# -*- encoding: utf-8 -*-
+"""
+Script to generate a report with the coverage of the API in the docs.
+
+The output of this script shows the existing methods that are not
+included in the API documentation, as well as the methods documented
+that do not exist. Ideally, no method should be listed. Currently it
+considers the methods of Series, DataFrame and Panel.
+
+Deprecated methods are usually removed from the documentation, while
+still available for three minor versions. They are listed with the
+word deprecated and the version number next to them.
+
+Usage::
+
+ $ PYTHONPATH=.. ./api_rst_coverage.py
+
+"""
import pandas as pd
import inspect
import re
@@ -13,6 +32,32 @@ def class_name_sort_key(x):
else:
return x
+ def get_docstring(x):
+ class_name, method = x.split('.')
+ obj = getattr(getattr(pd, class_name), method)
+ return obj.__doc__
+
+ def deprecation_version(x):
+ pattern = re.compile('\.\. deprecated:: ([0-9]+\.[0-9]+\.[0-9]+)')
+ doc = get_docstring(x)
+ match = pattern.search(doc)
+ if match:
+ return match.groups()[0]
+
+ def add_notes(x):
+ # Some methods are not documented in api.rst because they
+ # have been deprecated. Adding a note makes them easier to detect.
+ doc = get_docstring(x)
+ note = None
+ if not doc:
+ note = 'no docstring'
+ else:
+ version = deprecation_version(x)
+ if version:
+ note = 'deprecated in {}'.format(version)
+
+ return '{} ({})'.format(x, note) if note else x
+
# class members
class_members = set()
for cls in classes:
@@ -34,10 +79,12 @@ def class_name_sort_key(x):
print(x)
print()
- print("Class members (other than those beginning with '_') missing from api.rst:")
- for x in sorted(class_members.difference(api_rst_members), key=class_name_sort_key):
+ print("Class members (other than those beginning with '_') "
+ "missing from api.rst:")
+ for x in sorted(class_members.difference(api_rst_members),
+ key=class_name_sort_key):
if '._' not in x:
- print(x)
+ print(add_notes(x))
if __name__ == "__main__":
main()
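The version-detection logic added to the script above relies on the Sphinx `.. deprecated::` directive appearing in docstrings. A standalone sketch of that check (using a raw string for the pattern, which avoids the invalid-escape-sequence warnings that the non-raw pattern in the diff triggers on newer Pythons; the docstring below is a made-up example):

```python
import re

# matches the Sphinx deprecation directive and captures the version
DEPRECATION_RE = re.compile(r'\.\. deprecated:: ([0-9]+\.[0-9]+\.[0-9]+)')


def deprecation_version(docstring):
    """Return the version from a deprecation directive, or None."""
    if not docstring:
        return None
    match = DEPRECATION_RE.search(docstring)
    return match.group(1) if match else None


doc = """Return something useful.

.. deprecated:: 0.21.0
    Use `something_else` instead.
"""
```

With this in place, a coverage report can annotate each undocumented method with either "no docstring" or "deprecated in <version>".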
| - [X] closes #18928
- [X] tests passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/18934 | 2017-12-25T00:17:58Z | 2018-01-06T17:15:23Z | 2018-01-06T17:15:23Z | 2018-03-14T18:16:02Z |
CLN: ASV period | diff --git a/asv_bench/benchmarks/period.py b/asv_bench/benchmarks/period.py
index 15d7655293ea3..897a3338c164c 100644
--- a/asv_bench/benchmarks/period.py
+++ b/asv_bench/benchmarks/period.py
@@ -1,61 +1,24 @@
-import pandas as pd
-from pandas import Series, Period, PeriodIndex, date_range
+from pandas import (DataFrame, Series, Period, PeriodIndex, date_range,
+ period_range)
class PeriodProperties(object):
- params = ['M', 'min']
- param_names = ['freq']
-
- def setup(self, freq):
- self.per = Period('2012-06-01', freq=freq)
-
- def time_year(self, freq):
- self.per.year
-
- def time_month(self, freq):
- self.per.month
-
- def time_day(self, freq):
- self.per.day
-
- def time_hour(self, freq):
- self.per.hour
-
- def time_minute(self, freq):
- self.per.minute
-
- def time_second(self, freq):
- self.per.second
-
- def time_is_leap_year(self, freq):
- self.per.is_leap_year
- def time_quarter(self, freq):
- self.per.quarter
+ params = (['M', 'min'],
+ ['year', 'month', 'day', 'hour', 'minute', 'second',
+ 'is_leap_year', 'quarter', 'qyear', 'week', 'daysinmonth',
+ 'dayofweek', 'dayofyear', 'start_time', 'end_time'])
+ param_names = ['freq', 'attr']
- def time_qyear(self, freq):
- self.per.qyear
-
- def time_week(self, freq):
- self.per.week
-
- def time_daysinmonth(self, freq):
- self.per.daysinmonth
-
- def time_dayofweek(self, freq):
- self.per.dayofweek
-
- def time_dayofyear(self, freq):
- self.per.dayofyear
-
- def time_start_time(self, freq):
- self.per.start_time
+ def setup(self, freq, attr):
+ self.per = Period('2012-06-01', freq=freq)
- def time_end_time(self, freq):
- self.per.end_time
+ def time_property(self, freq, attr):
+ getattr(self.per, attr)
class PeriodUnaryMethods(object):
+
params = ['M', 'min']
param_names = ['freq']
@@ -73,6 +36,7 @@ def time_asfreq(self, freq):
class PeriodIndexConstructor(object):
+
goal_time = 0.2
params = ['D']
@@ -90,19 +54,19 @@ def time_from_pydatetime(self, freq):
class DataFramePeriodColumn(object):
+
goal_time = 0.2
- def setup_cache(self):
- rng = pd.period_range(start='1/1/1990', freq='S', periods=20000)
- df = pd.DataFrame(index=range(len(rng)))
- return rng, df
+ def setup(self):
+ self.rng = period_range(start='1/1/1990', freq='S', periods=20000)
+ self.df = DataFrame(index=range(len(self.rng)))
- def time_setitem_period_column(self, tup):
- rng, df = tup
- df['col'] = rng
+ def time_setitem_period_column(self):
+ self.df['col'] = self.rng
class Algorithms(object):
+
goal_time = 0.2
params = ['index', 'series']
@@ -125,6 +89,7 @@ def time_value_counts(self, typ):
class Indexing(object):
+
goal_time = 0.2
def setup(self):
@@ -145,7 +110,7 @@ def time_series_loc(self):
self.series.loc[self.period]
def time_align(self):
- pd.DataFrame({'a': self.series, 'b': self.series[:500]})
+ DataFrame({'a': self.series, 'b': self.series[:500]})
def time_intersection(self):
self.index[:750].intersection(self.index[250:])
| - Used `params` to parameterize the `PeriodProperties` benchmark
- Replaced `setup_cache` with a plain `setup`, since only one benchmark was being run for that class.
```
$ asv dev -b ^period
· Discovering benchmarks
· Running 15 total benchmarks (1 commits * 1 environments * 15 benchmarks)
[ 0.00%] ·· Building for existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 0.00%] ·· Benchmarking existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 6.67%] ··· Running period.Algorithms.time_drop_duplicates ok
[ 6.67%] ····
======== ========
typ
-------- --------
index 777μs
series 7.47ms
======== ========
[ 13.33%] ··· Running period.Algorithms.time_value_counts ok
[ 13.33%] ····
======== ========
typ
-------- --------
index 1.33ms
series 7.94ms
======== ========
[ 20.00%] ··· Running ...ramePeriodColumn.time_setitem_period_column 80.8ms
[ 26.67%] ··· Running period.Indexing.time_align 2.57ms
[ 33.33%] ··· Running period.Indexing.time_get_loc 211μs
[ 40.00%] ··· Running period.Indexing.time_intersection 516μs
[ 46.67%] ··· Running period.Indexing.time_series_loc 417μs
[ 53.33%] ··· Running period.Indexing.time_shallow_copy 53.4μs
[ 60.00%] ··· Running period.Indexing.time_shape 13.5μs
[ 66.67%] ··· Running ...PeriodIndexConstructor.time_from_date_range ok
[ 66.67%] ····
====== =======
freq
------ -------
D 408μs
====== =======
[ 73.33%] ··· Running ...PeriodIndexConstructor.time_from_pydatetime ok
[ 73.33%] ····
====== ========
freq
------ --------
D 15.3ms
====== ========
[ 80.00%] ··· Running period.PeriodProperties.time_property ok
[ 80.00%] ····
====== ============== ========
freq attr
------ -------------- --------
M year 17.5μs
M month 17.2μs
M day 17.4μs
M hour 17.4μs
M minute 17.6μs
M second 16.8μs
M is_leap_year 17.6μs
M quarter 17.1μs
M qyear 17.1μs
M week 17.8μs
M daysinmonth 17.7μs
M dayofweek 16.9μs
M dayofyear 17.4μs
M start_time 243μs
M end_time 263μs
min year 17.4μs
min month 18.5μs
min day 18.1μs
min hour 18.1μs
min minute 18.2μs
min second 18.1μs
min is_leap_year 19.4μs
min quarter 16.7μs
min qyear 17.7μs
min week 18.2μs
min daysinmonth 18.4μs
min dayofweek 18.2μs
min dayofyear 18.2μs
min start_time 242μs
min end_time 260μs
====== ============== ========
[ 86.67%] ··· Running period.PeriodUnaryMethods.time_asfreq ok
[ 86.67%] ····
====== =======
freq
------ -------
M 161μs
min 166μs
====== =======
[ 93.33%] ··· Running period.PeriodUnaryMethods.time_now ok
[ 93.33%] ····
====== =======
freq
------ -------
M 128μs
min 224μs
====== =======
[100.00%] ··· Running period.PeriodUnaryMethods.time_to_timestamp ok
[100.00%] ····
====== =======
freq
------ -------
M 245μs
min 242μs
====== =======
```
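For reference, the parameterized pattern timed above looks like this in asv (a minimal sketch with a trimmed attribute list; assumes pandas is importable):

```python
from pandas import Period


class PeriodProperties(object):
    # asv runs time_property once per (freq, attr) combination,
    # producing the grid of timings shown in the output above
    params = (['M', 'min'],
              ['year', 'month', 'quarter'])
    param_names = ['freq', 'attr']

    def setup(self, freq, attr):
        self.per = Period('2012-06-01', freq=freq)

    def time_property(self, freq, attr):
        getattr(self.per, attr)
```

Collapsing the per-attribute `time_*` methods into one parameterized `time_property` keeps the same coverage while removing most of the boilerplate.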
| https://api.github.com/repos/pandas-dev/pandas/pulls/18932 | 2017-12-24T21:48:17Z | 2017-12-26T22:00:01Z | 2017-12-26T22:00:01Z | 2017-12-31T04:33:10Z |
TST: organize and cleanup pandas/tests/groupby/test_aggregate.py | diff --git a/.gitignore b/.gitignore
index b1748ae72b8ba..0d4e8c6fb75a6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -21,6 +21,7 @@
.ipynb_checkpoints
.tags
.cache/
+.vscode/
# Compiled source #
###################
diff --git a/pandas/tests/groupby/aggregate/__init__.py b/pandas/tests/groupby/aggregate/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
new file mode 100644
index 0000000000000..caf2365a54ec8
--- /dev/null
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -0,0 +1,294 @@
+# -*- coding: utf-8 -*-
+
+"""
+test .agg behavior / note that .apply is tested generally in test_groupby.py
+"""
+
+import pytest
+
+import numpy as np
+import pandas as pd
+
+from pandas import concat, DataFrame, Index, MultiIndex, Series
+from pandas.core.groupby import SpecificationError
+from pandas.compat import OrderedDict
+import pandas.util.testing as tm
+
+
+class TestGroupByAggregate(object):
+
+ def setup_method(self, method):
+ self.ts = tm.makeTimeSeries()
+
+ self.seriesd = tm.getSeriesData()
+ self.tsd = tm.getTimeSeriesData()
+ self.frame = DataFrame(self.seriesd)
+ self.tsframe = DataFrame(self.tsd)
+
+ self.df = DataFrame(
+ {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
+ 'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
+ 'C': np.random.randn(8),
+ 'D': np.random.randn(8)})
+
+ self.df_mixed_floats = DataFrame(
+ {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
+ 'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
+ 'C': np.random.randn(8),
+ 'D': np.array(np.random.randn(8), dtype='float32')})
+
+ index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
+ ['one', 'two', 'three']],
+ labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+ [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+ names=['first', 'second'])
+ self.mframe = DataFrame(np.random.randn(10, 3), index=index,
+ columns=['A', 'B', 'C'])
+
+ self.three_group = DataFrame(
+ {'A': ['foo', 'foo', 'foo', 'foo', 'bar', 'bar', 'bar', 'bar',
+ 'foo', 'foo', 'foo'],
+ 'B': ['one', 'one', 'one', 'two', 'one', 'one', 'one', 'two',
+ 'two', 'two', 'one'],
+ 'C': ['dull', 'dull', 'shiny', 'dull', 'dull', 'shiny', 'shiny',
+ 'dull', 'shiny', 'shiny', 'shiny'],
+ 'D': np.random.randn(11),
+ 'E': np.random.randn(11),
+ 'F': np.random.randn(11)})
+
+ def test_agg_regression1(self):
+ grouped = self.tsframe.groupby([lambda x: x.year, lambda x: x.month])
+ result = grouped.agg(np.mean)
+ expected = grouped.mean()
+ tm.assert_frame_equal(result, expected)
+
+ def test_agg_must_agg(self):
+ grouped = self.df.groupby('A')['C']
+
+ msg = "Must produce aggregated value"
+ with tm.assert_raises_regex(Exception, msg):
+ grouped.agg(lambda x: x.describe())
+ with tm.assert_raises_regex(Exception, msg):
+ grouped.agg(lambda x: x.index[:2])
+
+ def test_agg_ser_multi_key(self):
+ # TODO(wesm): unused
+ ser = self.df.C # noqa
+
+ f = lambda x: x.sum()
+ results = self.df.C.groupby([self.df.A, self.df.B]).aggregate(f)
+ expected = self.df.groupby(['A', 'B']).sum()['C']
+ tm.assert_series_equal(results, expected)
+
+ def test_agg_apply_corner(self):
+ # nothing to group, all NA
+ grouped = self.ts.groupby(self.ts * np.nan)
+ assert self.ts.dtype == np.float64
+
+ # groupby float64 values results in Float64Index
+ exp = Series([], dtype=np.float64,
+ index=pd.Index([], dtype=np.float64))
+ tm.assert_series_equal(grouped.sum(), exp)
+ tm.assert_series_equal(grouped.agg(np.sum), exp)
+ tm.assert_series_equal(grouped.apply(np.sum), exp,
+ check_index_type=False)
+
+ # DataFrame
+ grouped = self.tsframe.groupby(self.tsframe['A'] * np.nan)
+ exp_df = DataFrame(columns=self.tsframe.columns, dtype=float,
+ index=pd.Index([], dtype=np.float64))
+ tm.assert_frame_equal(grouped.sum(), exp_df, check_names=False)
+ tm.assert_frame_equal(grouped.agg(np.sum), exp_df, check_names=False)
+ tm.assert_frame_equal(grouped.apply(np.sum), exp_df.iloc[:, :0],
+ check_names=False)
+
+ def test_agg_grouping_is_list_tuple(self):
+ from pandas.core.groupby import Grouping
+
+ df = tm.makeTimeDataFrame()
+
+ grouped = df.groupby(lambda x: x.year)
+ grouper = grouped.grouper.groupings[0].grouper
+ grouped.grouper.groupings[0] = Grouping(self.ts.index, list(grouper))
+
+ result = grouped.agg(np.mean)
+ expected = grouped.mean()
+ tm.assert_frame_equal(result, expected)
+
+ grouped.grouper.groupings[0] = Grouping(self.ts.index, tuple(grouper))
+
+ result = grouped.agg(np.mean)
+ expected = grouped.mean()
+ tm.assert_frame_equal(result, expected)
+
+ def test_agg_python_multiindex(self):
+ grouped = self.mframe.groupby(['A', 'B'])
+
+ result = grouped.agg(np.mean)
+ expected = grouped.mean()
+ tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize('groupbyfunc', [
+ lambda x: x.weekday(),
+ [lambda x: x.month, lambda x: x.weekday()],
+ ])
+ def test_aggregate_str_func(self, groupbyfunc):
+ grouped = self.tsframe.groupby(groupbyfunc)
+
+ # single series
+ result = grouped['A'].agg('std')
+ expected = grouped['A'].std()
+ tm.assert_series_equal(result, expected)
+
+ # group frame by function name
+ result = grouped.aggregate('var')
+ expected = grouped.var()
+ tm.assert_frame_equal(result, expected)
+
+ # group frame by function dict
+ result = grouped.agg(OrderedDict([['A', 'var'],
+ ['B', 'std'],
+ ['C', 'mean'],
+ ['D', 'sem']]))
+ expected = DataFrame(OrderedDict([['A', grouped['A'].var()],
+ ['B', grouped['B'].std()],
+ ['C', grouped['C'].mean()],
+ ['D', grouped['D'].sem()]]))
+ tm.assert_frame_equal(result, expected)
+
+ def test_aggregate_item_by_item(self):
+ df = self.df.copy()
+ df['E'] = ['a'] * len(self.df)
+ grouped = self.df.groupby('A')
+
+ aggfun = lambda ser: ser.size
+ result = grouped.agg(aggfun)
+ foo = (self.df.A == 'foo').sum()
+ bar = (self.df.A == 'bar').sum()
+ K = len(result.columns)
+
+ # GH5782
+ # odd comparisons can result here, so cast to make easy
+ exp = pd.Series(np.array([foo] * K), index=list('BCD'),
+ dtype=np.float64, name='foo')
+ tm.assert_series_equal(result.xs('foo'), exp)
+
+ exp = pd.Series(np.array([bar] * K), index=list('BCD'),
+ dtype=np.float64, name='bar')
+ tm.assert_almost_equal(result.xs('bar'), exp)
+
+ def aggfun(ser):
+ return ser.size
+
+ result = DataFrame().groupby(self.df.A).agg(aggfun)
+ assert isinstance(result, DataFrame)
+ assert len(result) == 0
+
+ def test_wrap_agg_out(self):
+ grouped = self.three_group.groupby(['A', 'B'])
+
+ def func(ser):
+ if ser.dtype == np.object:
+ raise TypeError
+ else:
+ return ser.sum()
+
+ result = grouped.aggregate(func)
+ exp_grouped = self.three_group.loc[:, self.three_group.columns != 'C']
+ expected = exp_grouped.groupby(['A', 'B']).aggregate(func)
+ tm.assert_frame_equal(result, expected)
+
+ def test_agg_multiple_functions_maintain_order(self):
+ # GH #610
+ funcs = [('mean', np.mean), ('max', np.max), ('min', np.min)]
+ result = self.df.groupby('A')['C'].agg(funcs)
+ exp_cols = Index(['mean', 'max', 'min'])
+
+ tm.assert_index_equal(result.columns, exp_cols)
+
+ def test_multiple_functions_tuples_and_non_tuples(self):
+ # #1359
+ funcs = [('foo', 'mean'), 'std']
+ ex_funcs = [('foo', 'mean'), ('std', 'std')]
+
+ result = self.df.groupby('A')['C'].agg(funcs)
+ expected = self.df.groupby('A')['C'].agg(ex_funcs)
+ tm.assert_frame_equal(result, expected)
+
+ result = self.df.groupby('A').agg(funcs)
+ expected = self.df.groupby('A').agg(ex_funcs)
+ tm.assert_frame_equal(result, expected)
+
+ def test_agg_multiple_functions_too_many_lambdas(self):
+ grouped = self.df.groupby('A')
+ funcs = ['mean', lambda x: x.mean(), lambda x: x.std()]
+
+ msg = 'Function names must be unique, found multiple named <lambda>'
+ with tm.assert_raises_regex(SpecificationError, msg):
+ grouped.agg(funcs)
+
+ def test_more_flexible_frame_multi_function(self):
+ grouped = self.df.groupby('A')
+
+ exmean = grouped.agg(OrderedDict([['C', np.mean], ['D', np.mean]]))
+ exstd = grouped.agg(OrderedDict([['C', np.std], ['D', np.std]]))
+
+ expected = concat([exmean, exstd], keys=['mean', 'std'], axis=1)
+ expected = expected.swaplevel(0, 1, axis=1).sort_index(level=0, axis=1)
+
+ d = OrderedDict([['C', [np.mean, np.std]], ['D', [np.mean, np.std]]])
+ result = grouped.aggregate(d)
+
+ tm.assert_frame_equal(result, expected)
+
+ # be careful
+ result = grouped.aggregate(OrderedDict([['C', np.mean],
+ ['D', [np.mean, np.std]]]))
+ expected = grouped.aggregate(OrderedDict([['C', np.mean],
+ ['D', [np.mean, np.std]]]))
+ tm.assert_frame_equal(result, expected)
+
+ def foo(x):
+ return np.mean(x)
+
+ def bar(x):
+ return np.std(x, ddof=1)
+
+ # this uses column selection & renaming
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ d = OrderedDict([['C', np.mean],
+ ['D', OrderedDict([['foo', np.mean],
+ ['bar', np.std]])]])
+ result = grouped.aggregate(d)
+
+ d = OrderedDict([['C', [np.mean]], ['D', [foo, bar]]])
+ expected = grouped.aggregate(d)
+
+ tm.assert_frame_equal(result, expected)
+
+ def test_multi_function_flexible_mix(self):
+ # GH #1268
+ grouped = self.df.groupby('A')
+
+ # Expected
+ d = OrderedDict([['C', OrderedDict([['foo', 'mean'], ['bar', 'std']])],
+ ['D', {'sum': 'sum'}]])
+ # this uses column selection & renaming
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ expected = grouped.aggregate(d)
+
+ # Test 1
+ d = OrderedDict([['C', OrderedDict([['foo', 'mean'], ['bar', 'std']])],
+ ['D', 'sum']])
+ # this uses column selection & renaming
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result = grouped.aggregate(d)
+ tm.assert_frame_equal(result, expected)
+
+ # Test 2
+ d = OrderedDict([['C', OrderedDict([['foo', 'mean'], ['bar', 'std']])],
+ ['D', ['sum']]])
+ # this uses column selection & renaming
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result = grouped.aggregate(d)
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/groupby/aggregate/test_cython.py b/pandas/tests/groupby/aggregate/test_cython.py
new file mode 100644
index 0000000000000..c8ee05ddbb74f
--- /dev/null
+++ b/pandas/tests/groupby/aggregate/test_cython.py
@@ -0,0 +1,189 @@
+# -*- coding: utf-8 -*-
+
+"""
+test cython .agg behavior
+"""
+
+from __future__ import print_function
+
+import pytest
+
+import numpy as np
+from numpy import nan
+import pandas as pd
+
+from pandas import bdate_range, DataFrame, Index, Series
+from pandas.core.groupby import DataError
+import pandas.util.testing as tm
+
+
+@pytest.mark.parametrize('op_name', [
+ 'count',
+ 'sum',
+ 'std',
+ 'var',
+ 'sem',
+ 'mean',
+ 'median',
+ 'prod',
+ 'min',
+ 'max',
+])
+def test_cythonized_aggers(op_name):
+ data = {'A': [0, 0, 0, 0, 1, 1, 1, 1, 1, 1., nan, nan],
+ 'B': ['A', 'B'] * 6,
+ 'C': np.random.randn(12)}
+ df = DataFrame(data)
+ df.loc[2:10:2, 'C'] = nan
+
+ op = lambda x: getattr(x, op_name)()
+
+ # single column
+ grouped = df.drop(['B'], axis=1).groupby('A')
+ exp = {}
+ for cat, group in grouped:
+ exp[cat] = op(group['C'])
+ exp = DataFrame({'C': exp})
+ exp.index.name = 'A'
+ result = op(grouped)
+ tm.assert_frame_equal(result, exp)
+
+ # multiple columns
+ grouped = df.groupby(['A', 'B'])
+ expd = {}
+ for (cat1, cat2), group in grouped:
+ expd.setdefault(cat1, {})[cat2] = op(group['C'])
+ exp = DataFrame(expd).T.stack(dropna=False)
+ exp.index.names = ['A', 'B']
+ exp.name = 'C'
+
+ result = op(grouped)['C']
+ if op_name in ['sum', 'prod']:
+ tm.assert_series_equal(result, exp)
+
+
+def test_cython_agg_boolean():
+ frame = DataFrame({'a': np.random.randint(0, 5, 50),
+ 'b': np.random.randint(0, 2, 50).astype('bool')})
+ result = frame.groupby('a')['b'].mean()
+ expected = frame.groupby('a')['b'].agg(np.mean)
+
+ tm.assert_series_equal(result, expected)
+
+
+def test_cython_agg_nothing_to_agg():
+ frame = DataFrame({'a': np.random.randint(0, 5, 50),
+ 'b': ['foo', 'bar'] * 25})
+ msg = "No numeric types to aggregate"
+
+ with tm.assert_raises_regex(DataError, msg):
+ frame.groupby('a')['b'].mean()
+
+ frame = DataFrame({'a': np.random.randint(0, 5, 50),
+ 'b': ['foo', 'bar'] * 25})
+ with tm.assert_raises_regex(DataError, msg):
+ frame[['b']].groupby(frame['a']).mean()
+
+
+def test_cython_agg_nothing_to_agg_with_dates():
+ frame = DataFrame({'a': np.random.randint(0, 5, 50),
+ 'b': ['foo', 'bar'] * 25,
+ 'dates': pd.date_range('now', periods=50, freq='T')})
+ msg = "No numeric types to aggregate"
+ with tm.assert_raises_regex(DataError, msg):
+ frame.groupby('b').dates.mean()
+
+
+def test_cython_agg_frame_columns():
+ # #2113
+ df = DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]})
+
+ df.groupby(level=0, axis='columns').mean()
+ df.groupby(level=0, axis='columns').mean()
+ df.groupby(level=0, axis='columns').mean()
+ df.groupby(level=0, axis='columns').mean()
+
+
+def test_cython_agg_return_dict():
+ # GH 16741
+ df = DataFrame(
+ {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
+ 'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
+ 'C': np.random.randn(8),
+ 'D': np.random.randn(8)})
+
+ ts = df.groupby('A')['B'].agg(lambda x: x.value_counts().to_dict())
+ expected = Series([{'two': 1, 'one': 1, 'three': 1},
+ {'two': 2, 'one': 2, 'three': 1}],
+ index=Index(['bar', 'foo'], name='A'),
+ name='B')
+ tm.assert_series_equal(ts, expected)
+
+
+def test_cython_fail_agg():
+ dr = bdate_range('1/1/2000', periods=50)
+ ts = Series(['A', 'B', 'C', 'D', 'E'] * 10, index=dr)
+
+ grouped = ts.groupby(lambda x: x.month)
+ summed = grouped.sum()
+ expected = grouped.agg(np.sum)
+ tm.assert_series_equal(summed, expected)
+
+
+@pytest.mark.parametrize('op, targop', [
+ ('mean', np.mean),
+ ('median', np.median),
+ ('var', np.var),
+ ('add', np.sum),
+ ('prod', np.prod),
+ ('min', np.min),
+ ('max', np.max),
+ ('first', lambda x: x.iloc[0]),
+ ('last', lambda x: x.iloc[-1]),
+])
+def test__cython_agg_general(op, targop):
+ df = DataFrame(np.random.randn(1000))
+ labels = np.random.randint(0, 50, size=1000).astype(float)
+
+ result = df.groupby(labels)._cython_agg_general(op)
+ expected = df.groupby(labels).agg(targop)
+ tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize('op, targop', [
+ ('mean', np.mean),
+ ('median', lambda x: np.median(x) if len(x) > 0 else np.nan),
+ ('var', lambda x: np.var(x, ddof=1)),
+ ('min', np.min),
+ ('max', np.max), ]
+)
+def test_cython_agg_empty_buckets(op, targop):
+ df = pd.DataFrame([11, 12, 13])
+ grps = range(0, 55, 5)
+
+ # calling _cython_agg_general directly, instead of via the user API
+ # which sets different values for min_count, so do that here.
+ result = df.groupby(pd.cut(df[0], grps))._cython_agg_general(op)
+ expected = df.groupby(pd.cut(df[0], grps)).agg(lambda x: targop(x))
+ tm.assert_frame_equal(result, expected)
+
+
+def test_cython_agg_empty_buckets_nanops():
+ # GH-18869 can't call nanops on empty groups, so hardcode expected
+ # for these
+ df = pd.DataFrame([11, 12, 13], columns=['a'])
+ grps = range(0, 25, 5)
+ # add / sum
+ result = df.groupby(pd.cut(df['a'], grps))._cython_agg_general('add')
+ intervals = pd.interval_range(0, 20, freq=5)
+ expected = pd.DataFrame(
+ {"a": [0, 0, 36, 0]},
+ index=pd.CategoricalIndex(intervals, name='a', ordered=True))
+ tm.assert_frame_equal(result, expected)
+
+ # prod
+ result = df.groupby(pd.cut(df['a'], grps))._cython_agg_general('prod')
+ expected = pd.DataFrame(
+ {"a": [1, 1, 1716, 1]},
+ index=pd.CategoricalIndex(intervals, name='a', ordered=True))
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/groupby/aggregate/test_other.py b/pandas/tests/groupby/aggregate/test_other.py
new file mode 100644
index 0000000000000..f8e44b1548819
--- /dev/null
+++ b/pandas/tests/groupby/aggregate/test_other.py
@@ -0,0 +1,501 @@
+# -*- coding: utf-8 -*-
+
+"""
+test all other .agg behavior
+"""
+
+from __future__ import print_function
+
+import pytest
+
+from datetime import datetime, timedelta
+from functools import partial
+
+import numpy as np
+import pandas as pd
+
+from pandas import date_range, DataFrame, Index, MultiIndex, Series
+from pandas.core.groupby import SpecificationError
+from pandas.io.formats.printing import pprint_thing
+import pandas.util.testing as tm
+
+
+def test_agg_api():
+ # GH 6337
+ # http://stackoverflow.com/questions/21706030/pandas-groupby-agg-function-column-dtype-error
+ # different api for agg when passed custom function with mixed frame
+
+ df = DataFrame({'data1': np.random.randn(5),
+ 'data2': np.random.randn(5),
+ 'key1': ['a', 'a', 'b', 'b', 'a'],
+ 'key2': ['one', 'two', 'one', 'two', 'one']})
+ grouped = df.groupby('key1')
+
+ def peak_to_peak(arr):
+ return arr.max() - arr.min()
+
+ expected = grouped.agg([peak_to_peak])
+ expected.columns = ['data1', 'data2']
+ result = grouped.agg(peak_to_peak)
+ tm.assert_frame_equal(result, expected)
+
+
+def test_agg_datetimes_mixed():
+ data = [[1, '2012-01-01', 1.0],
+ [2, '2012-01-02', 2.0],
+ [3, None, 3.0]]
+
+ df1 = DataFrame({'key': [x[0] for x in data],
+ 'date': [x[1] for x in data],
+ 'value': [x[2] for x in data]})
+
+ data = [[row[0],
+ datetime.strptime(row[1], '%Y-%m-%d').date() if row[1] else None,
+ row[2]]
+ for row in data]
+
+ df2 = DataFrame({'key': [x[0] for x in data],
+ 'date': [x[1] for x in data],
+ 'value': [x[2] for x in data]})
+
+ df1['weights'] = df1['value'] / df1['value'].sum()
+ gb1 = df1.groupby('date').aggregate(np.sum)
+
+ df2['weights'] = df1['value'] / df1['value'].sum()
+ gb2 = df2.groupby('date').aggregate(np.sum)
+
+ assert (len(gb1) == len(gb2))
+
+
+def test_agg_period_index():
+ from pandas import period_range, PeriodIndex
+ prng = period_range('2012-1-1', freq='M', periods=3)
+ df = DataFrame(np.random.randn(3, 2), index=prng)
+ rs = df.groupby(level=0).sum()
+ assert isinstance(rs.index, PeriodIndex)
+
+ # GH 3579
+ index = period_range(start='1999-01', periods=5, freq='M')
+ s1 = Series(np.random.rand(len(index)), index=index)
+ s2 = Series(np.random.rand(len(index)), index=index)
+ series = [('s1', s1), ('s2', s2)]
+ df = DataFrame.from_items(series)
+ grouped = df.groupby(df.index.month)
+ list(grouped)
+
+
+def test_agg_dict_parameter_cast_result_dtypes():
+ # GH 12821
+
+ df = DataFrame({'class': ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D'],
+ 'time': date_range('1/1/2011', periods=8, freq='H')})
+ df.loc[[0, 1, 2, 5], 'time'] = None
+
+ # test for `first` function
+ exp = df.loc[[0, 3, 4, 6]].set_index('class')
+ grouped = df.groupby('class')
+ tm.assert_frame_equal(grouped.first(), exp)
+ tm.assert_frame_equal(grouped.agg('first'), exp)
+ tm.assert_frame_equal(grouped.agg({'time': 'first'}), exp)
+ tm.assert_series_equal(grouped.time.first(), exp['time'])
+ tm.assert_series_equal(grouped.time.agg('first'), exp['time'])
+
+ # test for `last` function
+ exp = df.loc[[0, 3, 4, 7]].set_index('class')
+ grouped = df.groupby('class')
+ tm.assert_frame_equal(grouped.last(), exp)
+ tm.assert_frame_equal(grouped.agg('last'), exp)
+ tm.assert_frame_equal(grouped.agg({'time': 'last'}), exp)
+ tm.assert_series_equal(grouped.time.last(), exp['time'])
+ tm.assert_series_equal(grouped.time.agg('last'), exp['time'])
+
+ # count
+ exp = pd.Series([2, 2, 2, 2],
+ index=Index(list('ABCD'), name='class'),
+ name='time')
+ tm.assert_series_equal(grouped.time.agg(len), exp)
+ tm.assert_series_equal(grouped.time.size(), exp)
+
+ exp = pd.Series([0, 1, 1, 2],
+ index=Index(list('ABCD'), name='class'),
+ name='time')
+ tm.assert_series_equal(grouped.time.count(), exp)
+
+
+def test_agg_cast_results_dtypes():
+ # similar to GH12821
+ # xref #11444
+ u = [datetime(2015, x + 1, 1) for x in range(12)]
+ v = list('aaabbbbbbccd')
+ df = pd.DataFrame({'X': v, 'Y': u})
+
+ result = df.groupby('X')['Y'].agg(len)
+ expected = df.groupby('X')['Y'].count()
+ tm.assert_series_equal(result, expected)
+
+
+def test_aggregate_float64_no_int64():
+ # see gh-11199
+ df = DataFrame({"a": [1, 2, 3, 4, 5],
+ "b": [1, 2, 2, 4, 5],
+ "c": [1, 2, 3, 4, 5]})
+
+ expected = DataFrame({"a": [1, 2.5, 4, 5]}, index=[1, 2, 4, 5])
+ expected.index.name = "b"
+
+ result = df.groupby("b")[["a"]].mean()
+ tm.assert_frame_equal(result, expected)
+
+ expected = DataFrame({"a": [1, 2.5, 4, 5], "c": [1, 2.5, 4, 5]},
+ index=[1, 2, 4, 5])
+ expected.index.name = "b"
+
+ result = df.groupby("b")[["a", "c"]].mean()
+ tm.assert_frame_equal(result, expected)
+
+
+def test_aggregate_api_consistency():
+ # GH 9052
+ # make sure that the aggregates via dict
+ # are consistent
+ df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'foo', 'foo'],
+ 'B': ['one', 'one', 'two', 'two',
+ 'two', 'two', 'one', 'two'],
+ 'C': np.random.randn(8) + 1.0,
+ 'D': np.arange(8)})
+
+ grouped = df.groupby(['A', 'B'])
+ c_mean = grouped['C'].mean()
+ c_sum = grouped['C'].sum()
+ d_mean = grouped['D'].mean()
+ d_sum = grouped['D'].sum()
+
+ result = grouped['D'].agg(['sum', 'mean'])
+ expected = pd.concat([d_sum, d_mean], axis=1)
+ expected.columns = ['sum', 'mean']
+ tm.assert_frame_equal(result, expected, check_like=True)
+
+ result = grouped.agg([np.sum, np.mean])
+ expected = pd.concat([c_sum, c_mean, d_sum, d_mean], axis=1)
+ expected.columns = MultiIndex.from_product([['C', 'D'],
+ ['sum', 'mean']])
+ tm.assert_frame_equal(result, expected, check_like=True)
+
+ result = grouped[['D', 'C']].agg([np.sum, np.mean])
+ expected = pd.concat([d_sum, d_mean, c_sum, c_mean], axis=1)
+ expected.columns = MultiIndex.from_product([['D', 'C'],
+ ['sum', 'mean']])
+ tm.assert_frame_equal(result, expected, check_like=True)
+
+ result = grouped.agg({'C': 'mean', 'D': 'sum'})
+ expected = pd.concat([d_sum, c_mean], axis=1)
+ tm.assert_frame_equal(result, expected, check_like=True)
+
+ result = grouped.agg({'C': ['mean', 'sum'],
+ 'D': ['mean', 'sum']})
+ expected = pd.concat([c_mean, c_sum, d_mean, d_sum], axis=1)
+ expected.columns = MultiIndex.from_product([['C', 'D'],
+ ['mean', 'sum']])
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result = grouped[['D', 'C']].agg({'r': np.sum,
+ 'r2': np.mean})
+ expected = pd.concat([d_sum, c_sum, d_mean, c_mean], axis=1)
+ expected.columns = MultiIndex.from_product([['r', 'r2'],
+ ['D', 'C']])
+ tm.assert_frame_equal(result, expected, check_like=True)
+
+
+def test_agg_dict_renaming_deprecation():
+ # 15931
+ df = pd.DataFrame({'A': [1, 1, 1, 2, 2],
+ 'B': range(5),
+ 'C': range(5)})
+
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False) as w:
+ df.groupby('A').agg({'B': {'foo': ['sum', 'max']},
+ 'C': {'bar': ['count', 'min']}})
+ assert "using a dict with renaming" in str(w[0].message)
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ df.groupby('A')[['B', 'C']].agg({'ma': 'max'})
+
+ with tm.assert_produces_warning(FutureWarning) as w:
+ df.groupby('A').B.agg({'foo': 'count'})
+ assert "using a dict on a Series for aggregation" in str(w[0].message)
+
+
+def test_agg_compat():
+ # GH 12334
+ df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'foo', 'foo'],
+ 'B': ['one', 'one', 'two', 'two',
+ 'two', 'two', 'one', 'two'],
+ 'C': np.random.randn(8) + 1.0,
+ 'D': np.arange(8)})
+
+ g = df.groupby(['A', 'B'])
+
+ expected = pd.concat([g['D'].sum(), g['D'].std()], axis=1)
+ expected.columns = MultiIndex.from_tuples([('C', 'sum'),
+ ('C', 'std')])
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result = g['D'].agg({'C': ['sum', 'std']})
+ tm.assert_frame_equal(result, expected, check_like=True)
+
+ expected = pd.concat([g['D'].sum(), g['D'].std()], axis=1)
+ expected.columns = ['C', 'D']
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result = g['D'].agg({'C': 'sum', 'D': 'std'})
+ tm.assert_frame_equal(result, expected, check_like=True)
+
+
+def test_agg_nested_dicts():
+ # API change for disallowing these types of nested dicts
+ df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'foo', 'foo'],
+ 'B': ['one', 'one', 'two', 'two',
+ 'two', 'two', 'one', 'two'],
+ 'C': np.random.randn(8) + 1.0,
+ 'D': np.arange(8)})
+
+ g = df.groupby(['A', 'B'])
+
+ msg = r'cannot perform renaming for r[1-2] with a nested dictionary'
+ with tm.assert_raises_regex(SpecificationError, msg):
+ g.aggregate({'r1': {'C': ['mean', 'sum']},
+ 'r2': {'D': ['mean', 'sum']}})
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result = g.agg({'C': {'ra': ['mean', 'std']},
+ 'D': {'rb': ['mean', 'std']}})
+ expected = pd.concat([g['C'].mean(), g['C'].std(),
+ g['D'].mean(), g['D'].std()],
+ axis=1)
+ expected.columns = pd.MultiIndex.from_tuples(
+ [('ra', 'mean'), ('ra', 'std'),
+ ('rb', 'mean'), ('rb', 'std')])
+ tm.assert_frame_equal(result, expected, check_like=True)
+
+ # same name as the original column
+ # GH9052
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ expected = g['D'].agg({'result1': np.sum, 'result2': np.mean})
+ expected = expected.rename(columns={'result1': 'D'})
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ result = g['D'].agg({'D': np.sum, 'result2': np.mean})
+ tm.assert_frame_equal(result, expected, check_like=True)
+
+
+def test_agg_item_by_item_raise_typeerror():
+ from numpy.random import randint
+
+ df = DataFrame(randint(10, size=(20, 10)))
+
+ def raiseException(df):
+ pprint_thing('----------------------------------------')
+ pprint_thing(df.to_string())
+ raise TypeError('test')
+
+ with tm.assert_raises_regex(TypeError, 'test'):
+ df.groupby(0).agg(raiseException)
+
+
+def test_series_agg_multikey():
+ ts = tm.makeTimeSeries()
+ grouped = ts.groupby([lambda x: x.year, lambda x: x.month])
+
+ result = grouped.agg(np.sum)
+ expected = grouped.sum()
+ tm.assert_series_equal(result, expected)
+
+
+def test_series_agg_multi_pure_python():
+ data = DataFrame(
+ {'A': ['foo', 'foo', 'foo', 'foo', 'bar', 'bar', 'bar', 'bar',
+ 'foo', 'foo', 'foo'],
+ 'B': ['one', 'one', 'one', 'two', 'one', 'one', 'one', 'two',
+ 'two', 'two', 'one'],
+ 'C': ['dull', 'dull', 'shiny', 'dull', 'dull', 'shiny', 'shiny',
+ 'dull', 'shiny', 'shiny', 'shiny'],
+ 'D': np.random.randn(11),
+ 'E': np.random.randn(11),
+ 'F': np.random.randn(11)})
+
+ def bad(x):
+ assert (len(x.base) > 0)
+ return 'foo'
+
+ result = data.groupby(['A', 'B']).agg(bad)
+ expected = data.groupby(['A', 'B']).agg(lambda x: 'foo')
+ tm.assert_frame_equal(result, expected)
+
+
+def test_agg_consistency():
+ # agg with ([]) and () not consistent
+ # GH 6715
+ def P1(a):
+ try:
+ return np.percentile(a.dropna(), q=1)
+ except Exception:
+ return np.nan
+
+ import datetime as dt
+ df = DataFrame({'col1': [1, 2, 3, 4],
+ 'col2': [10, 25, 26, 31],
+ 'date': [dt.date(2013, 2, 10), dt.date(2013, 2, 10),
+ dt.date(2013, 2, 11), dt.date(2013, 2, 11)]})
+
+ g = df.groupby('date')
+
+ expected = g.agg([P1])
+ expected.columns = expected.columns.levels[0]
+
+ result = g.agg(P1)
+ tm.assert_frame_equal(result, expected)
+
+
+def test_agg_callables():
+ # GH 7929
+ df = DataFrame({'foo': [1, 2], 'bar': [3, 4]}).astype(np.int64)
+
+ class fn_class(object):
+
+ def __call__(self, x):
+ return sum(x)
+
+ equiv_callables = [sum,
+ np.sum,
+ lambda x: sum(x),
+ lambda x: x.sum(),
+ partial(sum),
+ fn_class(), ]
+
+ expected = df.groupby("foo").agg(sum)
+ for ecall in equiv_callables:
+ result = df.groupby('foo').agg(ecall)
+ tm.assert_frame_equal(result, expected)
+
+
+def test_agg_over_numpy_arrays():
+ # GH 3788
+ df = pd.DataFrame([[1, np.array([10, 20, 30])],
+ [1, np.array([40, 50, 60])],
+ [2, np.array([20, 30, 40])]],
+ columns=['category', 'arraydata'])
+ result = df.groupby('category').agg(sum)
+
+ expected_data = [[np.array([50, 70, 90])], [np.array([20, 30, 40])]]
+ expected_index = pd.Index([1, 2], name='category')
+ expected_column = ['arraydata']
+ expected = pd.DataFrame(expected_data,
+ index=expected_index,
+ columns=expected_column)
+
+ tm.assert_frame_equal(result, expected)
+
+
+def test_agg_timezone_round_trip():
+ # GH 15426
+ ts = pd.Timestamp("2016-01-01 12:00:00", tz='US/Pacific')
+ df = pd.DataFrame({'a': 1,
+ 'b': [ts + timedelta(minutes=nn) for nn in range(10)]})
+
+ result1 = df.groupby('a')['b'].agg(np.min).iloc[0]
+ result2 = df.groupby('a')['b'].agg(lambda x: np.min(x)).iloc[0]
+ result3 = df.groupby('a')['b'].min().iloc[0]
+
+ assert result1 == ts
+ assert result2 == ts
+ assert result3 == ts
+
+ dates = [pd.Timestamp("2016-01-0%d 12:00:00" % i, tz='US/Pacific')
+ for i in range(1, 5)]
+ df = pd.DataFrame({'A': ['a', 'b'] * 2, 'B': dates})
+ grouped = df.groupby('A')
+
+ ts = df['B'].iloc[0]
+ assert ts == grouped.nth(0)['B'].iloc[0]
+ assert ts == grouped.head(1)['B'].iloc[0]
+ assert ts == grouped.first()['B'].iloc[0]
+ assert ts == grouped.apply(lambda x: x.iloc[0])[0]
+
+ ts = df['B'].iloc[2]
+ assert ts == grouped.last()['B'].iloc[0]
+ assert ts == grouped.apply(lambda x: x.iloc[-1])[0]
+
+
+def test_sum_uint64_overflow():
+ # see gh-14758
+ # Convert to uint64 and don't overflow
+ df = pd.DataFrame([[1, 2], [3, 4], [5, 6]], dtype=object)
+ df = df + 9223372036854775807
+
+ index = pd.Index([9223372036854775808,
+ 9223372036854775810,
+ 9223372036854775812],
+ dtype=np.uint64)
+ expected = pd.DataFrame({1: [9223372036854775809,
+ 9223372036854775811,
+ 9223372036854775813]},
+ index=index)
+
+ expected.index.name = 0
+ result = df.groupby(0).sum()
+ tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("structure, expected", [
+ (tuple, pd.DataFrame({'C': {(1, 1): (1, 1, 1), (3, 4): (3, 4, 4)}})),
+ (list, pd.DataFrame({'C': {(1, 1): [1, 1, 1], (3, 4): [3, 4, 4]}})),
+ (lambda x: tuple(x), pd.DataFrame({'C': {(1, 1): (1, 1, 1),
+ (3, 4): (3, 4, 4)}})),
+ (lambda x: list(x), pd.DataFrame({'C': {(1, 1): [1, 1, 1],
+ (3, 4): [3, 4, 4]}}))
+])
+def test_agg_structs_dataframe(structure, expected):
+ df = pd.DataFrame({'A': [1, 1, 1, 3, 3, 3],
+ 'B': [1, 1, 1, 4, 4, 4],
+ 'C': [1, 1, 1, 3, 4, 4]})
+
+ result = df.groupby(['A', 'B']).aggregate(structure)
+ expected.index.names = ['A', 'B']
+ tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("structure, expected", [
+ (tuple, pd.Series([(1, 1, 1), (3, 4, 4)], index=[1, 3], name='C')),
+ (list, pd.Series([[1, 1, 1], [3, 4, 4]], index=[1, 3], name='C')),
+ (lambda x: tuple(x), pd.Series([(1, 1, 1), (3, 4, 4)],
+ index=[1, 3], name='C')),
+ (lambda x: list(x), pd.Series([[1, 1, 1], [3, 4, 4]],
+ index=[1, 3], name='C'))
+])
+def test_agg_structs_series(structure, expected):
+    # GH 18079
+ df = pd.DataFrame({'A': [1, 1, 1, 3, 3, 3],
+ 'B': [1, 1, 1, 4, 4, 4],
+ 'C': [1, 1, 1, 3, 4, 4]})
+
+ result = df.groupby('A')['C'].aggregate(structure)
+ expected.index.name = 'A'
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.xfail(reason="GH-18869: agg func not called on empty groups.")
+def test_agg_category_nansum():
+ categories = ['a', 'b', 'c']
+ df = pd.DataFrame({"A": pd.Categorical(['a', 'a', 'b'],
+ categories=categories),
+ 'B': [1, 2, 3]})
+ result = df.groupby("A").B.agg(np.nansum)
+ expected = pd.Series([3, 3, 0],
+ index=pd.CategoricalIndex(['a', 'b', 'c'],
+ categories=categories,
+ name='A'),
+ name='B')
+ tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/groupby/test_aggregate.py b/pandas/tests/groupby/test_aggregate.py
deleted file mode 100644
index cca21fddd116e..0000000000000
--- a/pandas/tests/groupby/test_aggregate.py
+++ /dev/null
@@ -1,961 +0,0 @@
-# -*- coding: utf-8 -*-
-
-"""
-we test .agg behavior / note that .apply is tested
-generally in test_groupby.py
-"""
-
-from __future__ import print_function
-
-import pytest
-
-from datetime import datetime, timedelta
-from functools import partial
-
-import numpy as np
-from numpy import nan
-import pandas as pd
-
-from pandas import (date_range, MultiIndex, DataFrame,
- Series, Index, bdate_range, concat)
-from pandas.util.testing import assert_frame_equal, assert_series_equal
-from pandas.core.groupby import SpecificationError, DataError
-from pandas.compat import OrderedDict
-from pandas.io.formats.printing import pprint_thing
-import pandas.util.testing as tm
-
-
-class TestGroupByAggregate(object):
-
- def setup_method(self, method):
- self.ts = tm.makeTimeSeries()
-
- self.seriesd = tm.getSeriesData()
- self.tsd = tm.getTimeSeriesData()
- self.frame = DataFrame(self.seriesd)
- self.tsframe = DataFrame(self.tsd)
-
- self.df = DataFrame(
- {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
- 'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
- 'C': np.random.randn(8),
- 'D': np.random.randn(8)})
-
- self.df_mixed_floats = DataFrame(
- {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
- 'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
- 'C': np.random.randn(8),
- 'D': np.array(
- np.random.randn(8), dtype='float32')})
-
- index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two',
- 'three']],
- labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
- [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
- names=['first', 'second'])
- self.mframe = DataFrame(np.random.randn(10, 3), index=index,
- columns=['A', 'B', 'C'])
-
- self.three_group = DataFrame(
- {'A': ['foo', 'foo', 'foo', 'foo', 'bar', 'bar', 'bar', 'bar',
- 'foo', 'foo', 'foo'],
- 'B': ['one', 'one', 'one', 'two', 'one', 'one', 'one', 'two',
- 'two', 'two', 'one'],
- 'C': ['dull', 'dull', 'shiny', 'dull', 'dull', 'shiny', 'shiny',
- 'dull', 'shiny', 'shiny', 'shiny'],
- 'D': np.random.randn(11),
- 'E': np.random.randn(11),
- 'F': np.random.randn(11)})
-
- def test_agg_api(self):
-
- # GH 6337
- # http://stackoverflow.com/questions/21706030/pandas-groupby-agg-function-column-dtype-error
- # different api for agg when passed custom function with mixed frame
-
- df = DataFrame({'data1': np.random.randn(5),
- 'data2': np.random.randn(5),
- 'key1': ['a', 'a', 'b', 'b', 'a'],
- 'key2': ['one', 'two', 'one', 'two', 'one']})
- grouped = df.groupby('key1')
-
- def peak_to_peak(arr):
- return arr.max() - arr.min()
-
- expected = grouped.agg([peak_to_peak])
- expected.columns = ['data1', 'data2']
- result = grouped.agg(peak_to_peak)
- assert_frame_equal(result, expected)
-
- def test_agg_regression1(self):
- grouped = self.tsframe.groupby([lambda x: x.year, lambda x: x.month])
- result = grouped.agg(np.mean)
- expected = grouped.mean()
- assert_frame_equal(result, expected)
-
- def test_agg_datetimes_mixed(self):
- data = [[1, '2012-01-01', 1.0], [2, '2012-01-02', 2.0], [3, None, 3.0]]
-
- df1 = DataFrame({'key': [x[0] for x in data],
- 'date': [x[1] for x in data],
- 'value': [x[2] for x in data]})
-
- data = [[row[0], datetime.strptime(row[1], '%Y-%m-%d').date() if row[1]
- else None, row[2]] for row in data]
-
- df2 = DataFrame({'key': [x[0] for x in data],
- 'date': [x[1] for x in data],
- 'value': [x[2] for x in data]})
-
- df1['weights'] = df1['value'] / df1['value'].sum()
- gb1 = df1.groupby('date').aggregate(np.sum)
-
- df2['weights'] = df1['value'] / df1['value'].sum()
- gb2 = df2.groupby('date').aggregate(np.sum)
-
- assert (len(gb1) == len(gb2))
-
- def test_agg_period_index(self):
- from pandas import period_range, PeriodIndex
- prng = period_range('2012-1-1', freq='M', periods=3)
- df = DataFrame(np.random.randn(3, 2), index=prng)
- rs = df.groupby(level=0).sum()
- assert isinstance(rs.index, PeriodIndex)
-
- # GH 3579
- index = period_range(start='1999-01', periods=5, freq='M')
- s1 = Series(np.random.rand(len(index)), index=index)
- s2 = Series(np.random.rand(len(index)), index=index)
- series = [('s1', s1), ('s2', s2)]
- df = DataFrame.from_items(series)
- grouped = df.groupby(df.index.month)
- list(grouped)
-
- def test_agg_dict_parameter_cast_result_dtypes(self):
- # GH 12821
-
- df = DataFrame(
- {'class': ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D'],
- 'time': date_range('1/1/2011', periods=8, freq='H')})
- df.loc[[0, 1, 2, 5], 'time'] = None
-
- # test for `first` function
- exp = df.loc[[0, 3, 4, 6]].set_index('class')
- grouped = df.groupby('class')
- assert_frame_equal(grouped.first(), exp)
- assert_frame_equal(grouped.agg('first'), exp)
- assert_frame_equal(grouped.agg({'time': 'first'}), exp)
- assert_series_equal(grouped.time.first(), exp['time'])
- assert_series_equal(grouped.time.agg('first'), exp['time'])
-
- # test for `last` function
- exp = df.loc[[0, 3, 4, 7]].set_index('class')
- grouped = df.groupby('class')
- assert_frame_equal(grouped.last(), exp)
- assert_frame_equal(grouped.agg('last'), exp)
- assert_frame_equal(grouped.agg({'time': 'last'}), exp)
- assert_series_equal(grouped.time.last(), exp['time'])
- assert_series_equal(grouped.time.agg('last'), exp['time'])
-
- # count
- exp = pd.Series([2, 2, 2, 2],
- index=Index(list('ABCD'), name='class'),
- name='time')
- assert_series_equal(grouped.time.agg(len), exp)
- assert_series_equal(grouped.time.size(), exp)
-
- exp = pd.Series([0, 1, 1, 2],
- index=Index(list('ABCD'), name='class'),
- name='time')
- assert_series_equal(grouped.time.count(), exp)
-
- def test_agg_cast_results_dtypes(self):
- # similar to GH12821
- # xref #11444
- u = [datetime(2015, x + 1, 1) for x in range(12)]
- v = list('aaabbbbbbccd')
- df = pd.DataFrame({'X': v, 'Y': u})
-
- result = df.groupby('X')['Y'].agg(len)
- expected = df.groupby('X')['Y'].count()
- assert_series_equal(result, expected)
-
- def test_agg_must_agg(self):
- grouped = self.df.groupby('A')['C']
- pytest.raises(Exception, grouped.agg, lambda x: x.describe())
- pytest.raises(Exception, grouped.agg, lambda x: x.index[:2])
-
- def test_agg_ser_multi_key(self):
- # TODO(wesm): unused
- ser = self.df.C # noqa
-
- f = lambda x: x.sum()
- results = self.df.C.groupby([self.df.A, self.df.B]).aggregate(f)
- expected = self.df.groupby(['A', 'B']).sum()['C']
- assert_series_equal(results, expected)
-
- def test_agg_apply_corner(self):
- # nothing to group, all NA
- grouped = self.ts.groupby(self.ts * np.nan)
- assert self.ts.dtype == np.float64
-
- # groupby float64 values results in Float64Index
- exp = Series([], dtype=np.float64, index=pd.Index(
- [], dtype=np.float64))
- assert_series_equal(grouped.sum(), exp)
- assert_series_equal(grouped.agg(np.sum), exp)
- assert_series_equal(grouped.apply(np.sum), exp, check_index_type=False)
-
- # DataFrame
- grouped = self.tsframe.groupby(self.tsframe['A'] * np.nan)
- exp_df = DataFrame(columns=self.tsframe.columns, dtype=float,
- index=pd.Index([], dtype=np.float64))
- assert_frame_equal(grouped.sum(), exp_df, check_names=False)
- assert_frame_equal(grouped.agg(np.sum), exp_df, check_names=False)
- assert_frame_equal(grouped.apply(np.sum), exp_df.iloc[:, :0],
- check_names=False)
-
- def test_agg_grouping_is_list_tuple(self):
- from pandas.core.groupby import Grouping
-
- df = tm.makeTimeDataFrame()
-
- grouped = df.groupby(lambda x: x.year)
- grouper = grouped.grouper.groupings[0].grouper
- grouped.grouper.groupings[0] = Grouping(self.ts.index, list(grouper))
-
- result = grouped.agg(np.mean)
- expected = grouped.mean()
- tm.assert_frame_equal(result, expected)
-
- grouped.grouper.groupings[0] = Grouping(self.ts.index, tuple(grouper))
-
- result = grouped.agg(np.mean)
- expected = grouped.mean()
- tm.assert_frame_equal(result, expected)
-
- def test_aggregate_float64_no_int64(self):
- # see gh-11199
- df = DataFrame({"a": [1, 2, 3, 4, 5],
- "b": [1, 2, 2, 4, 5],
- "c": [1, 2, 3, 4, 5]})
-
- expected = DataFrame({"a": [1, 2.5, 4, 5]},
- index=[1, 2, 4, 5])
- expected.index.name = "b"
-
- result = df.groupby("b")[["a"]].mean()
- tm.assert_frame_equal(result, expected)
-
- expected = DataFrame({"a": [1, 2.5, 4, 5],
- "c": [1, 2.5, 4, 5]},
- index=[1, 2, 4, 5])
- expected.index.name = "b"
-
- result = df.groupby("b")[["a", "c"]].mean()
- tm.assert_frame_equal(result, expected)
-
- def test_aggregate_api_consistency(self):
- # GH 9052
- # make sure that the aggregates via dict
- # are consistent
-
- df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
- 'foo', 'bar', 'foo', 'foo'],
- 'B': ['one', 'one', 'two', 'two',
- 'two', 'two', 'one', 'two'],
- 'C': np.random.randn(8) + 1.0,
- 'D': np.arange(8)})
-
- grouped = df.groupby(['A', 'B'])
- c_mean = grouped['C'].mean()
- c_sum = grouped['C'].sum()
- d_mean = grouped['D'].mean()
- d_sum = grouped['D'].sum()
-
- result = grouped['D'].agg(['sum', 'mean'])
- expected = pd.concat([d_sum, d_mean],
- axis=1)
- expected.columns = ['sum', 'mean']
- assert_frame_equal(result, expected, check_like=True)
-
- result = grouped.agg([np.sum, np.mean])
- expected = pd.concat([c_sum,
- c_mean,
- d_sum,
- d_mean],
- axis=1)
- expected.columns = MultiIndex.from_product([['C', 'D'],
- ['sum', 'mean']])
- assert_frame_equal(result, expected, check_like=True)
-
- result = grouped[['D', 'C']].agg([np.sum, np.mean])
- expected = pd.concat([d_sum,
- d_mean,
- c_sum,
- c_mean],
- axis=1)
- expected.columns = MultiIndex.from_product([['D', 'C'],
- ['sum', 'mean']])
- assert_frame_equal(result, expected, check_like=True)
-
- result = grouped.agg({'C': 'mean', 'D': 'sum'})
- expected = pd.concat([d_sum,
- c_mean],
- axis=1)
- assert_frame_equal(result, expected, check_like=True)
-
- result = grouped.agg({'C': ['mean', 'sum'],
- 'D': ['mean', 'sum']})
- expected = pd.concat([c_mean,
- c_sum,
- d_mean,
- d_sum],
- axis=1)
- expected.columns = MultiIndex.from_product([['C', 'D'],
- ['mean', 'sum']])
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = grouped[['D', 'C']].agg({'r': np.sum,
- 'r2': np.mean})
- expected = pd.concat([d_sum,
- c_sum,
- d_mean,
- c_mean],
- axis=1)
- expected.columns = MultiIndex.from_product([['r', 'r2'],
- ['D', 'C']])
- assert_frame_equal(result, expected, check_like=True)
-
- def test_agg_dict_renaming_deprecation(self):
- # 15931
- df = pd.DataFrame({'A': [1, 1, 1, 2, 2],
- 'B': range(5),
- 'C': range(5)})
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False) as w:
- df.groupby('A').agg({'B': {'foo': ['sum', 'max']},
- 'C': {'bar': ['count', 'min']}})
- assert "using a dict with renaming" in str(w[0].message)
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- df.groupby('A')[['B', 'C']].agg({'ma': 'max'})
-
- with tm.assert_produces_warning(FutureWarning) as w:
- df.groupby('A').B.agg({'foo': 'count'})
- assert "using a dict on a Series for aggregation" in str(
- w[0].message)
-
- def test_agg_compat(self):
-
- # GH 12334
-
- df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
- 'foo', 'bar', 'foo', 'foo'],
- 'B': ['one', 'one', 'two', 'two',
- 'two', 'two', 'one', 'two'],
- 'C': np.random.randn(8) + 1.0,
- 'D': np.arange(8)})
-
- g = df.groupby(['A', 'B'])
-
- expected = pd.concat([g['D'].sum(),
- g['D'].std()],
- axis=1)
- expected.columns = MultiIndex.from_tuples([('C', 'sum'),
- ('C', 'std')])
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = g['D'].agg({'C': ['sum', 'std']})
- assert_frame_equal(result, expected, check_like=True)
-
- expected = pd.concat([g['D'].sum(),
- g['D'].std()],
- axis=1)
- expected.columns = ['C', 'D']
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = g['D'].agg({'C': 'sum', 'D': 'std'})
- assert_frame_equal(result, expected, check_like=True)
-
- def test_agg_nested_dicts(self):
-
- # API change for disallowing these types of nested dicts
- df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
- 'foo', 'bar', 'foo', 'foo'],
- 'B': ['one', 'one', 'two', 'two',
- 'two', 'two', 'one', 'two'],
- 'C': np.random.randn(8) + 1.0,
- 'D': np.arange(8)})
-
- g = df.groupby(['A', 'B'])
-
- def f():
- g.aggregate({'r1': {'C': ['mean', 'sum']},
- 'r2': {'D': ['mean', 'sum']}})
-
- pytest.raises(SpecificationError, f)
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = g.agg({'C': {'ra': ['mean', 'std']},
- 'D': {'rb': ['mean', 'std']}})
- expected = pd.concat([g['C'].mean(), g['C'].std(), g['D'].mean(),
- g['D'].std()], axis=1)
- expected.columns = pd.MultiIndex.from_tuples([('ra', 'mean'), (
- 'ra', 'std'), ('rb', 'mean'), ('rb', 'std')])
- assert_frame_equal(result, expected, check_like=True)
-
- # same name as the original column
- # GH9052
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- expected = g['D'].agg({'result1': np.sum, 'result2': np.mean})
- expected = expected.rename(columns={'result1': 'D'})
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = g['D'].agg({'D': np.sum, 'result2': np.mean})
- assert_frame_equal(result, expected, check_like=True)
-
- def test_agg_python_multiindex(self):
- grouped = self.mframe.groupby(['A', 'B'])
-
- result = grouped.agg(np.mean)
- expected = grouped.mean()
- tm.assert_frame_equal(result, expected)
-
- def test_aggregate_str_func(self):
- def _check_results(grouped):
- # single series
- result = grouped['A'].agg('std')
- expected = grouped['A'].std()
- assert_series_equal(result, expected)
-
- # group frame by function name
- result = grouped.aggregate('var')
- expected = grouped.var()
- assert_frame_equal(result, expected)
-
- # group frame by function dict
- result = grouped.agg(OrderedDict([['A', 'var'], ['B', 'std'],
- ['C', 'mean'], ['D', 'sem']]))
- expected = DataFrame(OrderedDict([['A', grouped['A'].var(
- )], ['B', grouped['B'].std()], ['C', grouped['C'].mean()],
- ['D', grouped['D'].sem()]]))
- assert_frame_equal(result, expected)
-
- by_weekday = self.tsframe.groupby(lambda x: x.weekday())
- _check_results(by_weekday)
-
- by_mwkday = self.tsframe.groupby([lambda x: x.month,
- lambda x: x.weekday()])
- _check_results(by_mwkday)
-
- def test_aggregate_item_by_item(self):
-
- df = self.df.copy()
- df['E'] = ['a'] * len(self.df)
- grouped = self.df.groupby('A')
-
- # API change in 0.11
- # def aggfun(ser):
- # return len(ser + 'a')
- # result = grouped.agg(aggfun)
- # assert len(result.columns) == 1
-
- aggfun = lambda ser: ser.size
- result = grouped.agg(aggfun)
- foo = (self.df.A == 'foo').sum()
- bar = (self.df.A == 'bar').sum()
- K = len(result.columns)
-
- # GH5782
- # odd comparisons can result here, so cast to make easy
- exp = pd.Series(np.array([foo] * K), index=list('BCD'),
- dtype=np.float64, name='foo')
- tm.assert_series_equal(result.xs('foo'), exp)
-
- exp = pd.Series(np.array([bar] * K), index=list('BCD'),
- dtype=np.float64, name='bar')
- tm.assert_almost_equal(result.xs('bar'), exp)
-
- def aggfun(ser):
- return ser.size
-
- result = DataFrame().groupby(self.df.A).agg(aggfun)
- assert isinstance(result, DataFrame)
- assert len(result) == 0
-
- def test_agg_item_by_item_raise_typeerror(self):
- from numpy.random import randint
-
- df = DataFrame(randint(10, size=(20, 10)))
-
- def raiseException(df):
- pprint_thing('----------------------------------------')
- pprint_thing(df.to_string())
- raise TypeError
-
- pytest.raises(TypeError, df.groupby(0).agg, raiseException)
-
- def test_series_agg_multikey(self):
- ts = tm.makeTimeSeries()
- grouped = ts.groupby([lambda x: x.year, lambda x: x.month])
-
- result = grouped.agg(np.sum)
- expected = grouped.sum()
- assert_series_equal(result, expected)
-
- def test_series_agg_multi_pure_python(self):
- data = DataFrame(
- {'A': ['foo', 'foo', 'foo', 'foo', 'bar', 'bar', 'bar', 'bar',
- 'foo', 'foo', 'foo'],
- 'B': ['one', 'one', 'one', 'two', 'one', 'one', 'one', 'two',
- 'two', 'two', 'one'],
- 'C': ['dull', 'dull', 'shiny', 'dull', 'dull', 'shiny', 'shiny',
- 'dull', 'shiny', 'shiny', 'shiny'],
- 'D': np.random.randn(11),
- 'E': np.random.randn(11),
- 'F': np.random.randn(11)})
-
- def bad(x):
- assert (len(x.base) > 0)
- return 'foo'
-
- result = data.groupby(['A', 'B']).agg(bad)
- expected = data.groupby(['A', 'B']).agg(lambda x: 'foo')
- assert_frame_equal(result, expected)
-
- def test_cythonized_aggers(self):
- data = {'A': [0, 0, 0, 0, 1, 1, 1, 1, 1, 1., nan, nan],
- 'B': ['A', 'B'] * 6,
- 'C': np.random.randn(12)}
- df = DataFrame(data)
- df.loc[2:10:2, 'C'] = nan
-
- def _testit(name):
-
- op = lambda x: getattr(x, name)()
-
- # single column
- grouped = df.drop(['B'], axis=1).groupby('A')
- exp = {}
- for cat, group in grouped:
- exp[cat] = op(group['C'])
- exp = DataFrame({'C': exp})
- exp.index.name = 'A'
- result = op(grouped)
- assert_frame_equal(result, exp)
-
- # multiple columns
- grouped = df.groupby(['A', 'B'])
- expd = {}
- for (cat1, cat2), group in grouped:
- expd.setdefault(cat1, {})[cat2] = op(group['C'])
- exp = DataFrame(expd).T.stack(dropna=False)
- exp.index.names = ['A', 'B']
- exp.name = 'C'
-
- result = op(grouped)['C']
- if name in ['sum', 'prod']:
- assert_series_equal(result, exp)
-
- _testit('count')
- _testit('sum')
- _testit('std')
- _testit('var')
- _testit('sem')
- _testit('mean')
- _testit('median')
- _testit('prod')
- _testit('min')
- _testit('max')
-
- def test_cython_agg_boolean(self):
- frame = DataFrame({'a': np.random.randint(0, 5, 50),
- 'b': np.random.randint(0, 2, 50).astype('bool')})
- result = frame.groupby('a')['b'].mean()
- expected = frame.groupby('a')['b'].agg(np.mean)
-
- assert_series_equal(result, expected)
-
- def test_cython_agg_nothing_to_agg(self):
- frame = DataFrame({'a': np.random.randint(0, 5, 50),
- 'b': ['foo', 'bar'] * 25})
- pytest.raises(DataError, frame.groupby('a')['b'].mean)
-
- frame = DataFrame({'a': np.random.randint(0, 5, 50),
- 'b': ['foo', 'bar'] * 25})
- pytest.raises(DataError, frame[['b']].groupby(frame['a']).mean)
-
- def test_cython_agg_nothing_to_agg_with_dates(self):
- frame = DataFrame({'a': np.random.randint(0, 5, 50),
- 'b': ['foo', 'bar'] * 25,
- 'dates': pd.date_range('now', periods=50,
- freq='T')})
- with tm.assert_raises_regex(DataError,
- "No numeric types to aggregate"):
- frame.groupby('b').dates.mean()
-
- def test_cython_agg_frame_columns(self):
- # #2113
- df = DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]})
-
- df.groupby(level=0, axis='columns').mean()
- df.groupby(level=0, axis='columns').mean()
- df.groupby(level=0, axis='columns').mean()
- df.groupby(level=0, axis='columns').mean()
-
- def test_cython_agg_return_dict(self):
- # GH 16741
- ts = self.df.groupby('A')['B'].agg(
- lambda x: x.value_counts().to_dict())
- expected = Series([{'two': 1, 'one': 1, 'three': 1},
- {'two': 2, 'one': 2, 'three': 1}],
- index=Index(['bar', 'foo'], name='A'),
- name='B')
- assert_series_equal(ts, expected)
-
- def test_cython_fail_agg(self):
- dr = bdate_range('1/1/2000', periods=50)
- ts = Series(['A', 'B', 'C', 'D', 'E'] * 10, index=dr)
-
- grouped = ts.groupby(lambda x: x.month)
- summed = grouped.sum()
- expected = grouped.agg(np.sum)
- assert_series_equal(summed, expected)
-
- def test_agg_consistency(self):
- # agg with ([]) and () not consistent
- # GH 6715
-
- def P1(a):
- try:
- return np.percentile(a.dropna(), q=1)
- except Exception:
- return np.nan
-
- import datetime as dt
- df = DataFrame({'col1': [1, 2, 3, 4],
- 'col2': [10, 25, 26, 31],
- 'date': [dt.date(2013, 2, 10), dt.date(2013, 2, 10),
- dt.date(2013, 2, 11), dt.date(2013, 2, 11)]})
-
- g = df.groupby('date')
-
- expected = g.agg([P1])
- expected.columns = expected.columns.levels[0]
-
- result = g.agg(P1)
- assert_frame_equal(result, expected)
-
- def test_wrap_agg_out(self):
- grouped = self.three_group.groupby(['A', 'B'])
-
- def func(ser):
- if ser.dtype == np.object:
- raise TypeError
- else:
- return ser.sum()
-
- result = grouped.aggregate(func)
- exp_grouped = self.three_group.loc[:, self.three_group.columns != 'C']
- expected = exp_grouped.groupby(['A', 'B']).aggregate(func)
- assert_frame_equal(result, expected)
-
- def test_agg_multiple_functions_maintain_order(self):
- # GH #610
- funcs = [('mean', np.mean), ('max', np.max), ('min', np.min)]
- result = self.df.groupby('A')['C'].agg(funcs)
- exp_cols = Index(['mean', 'max', 'min'])
-
- tm.assert_index_equal(result.columns, exp_cols)
-
- def test_multiple_functions_tuples_and_non_tuples(self):
- # #1359
-
- funcs = [('foo', 'mean'), 'std']
- ex_funcs = [('foo', 'mean'), ('std', 'std')]
-
- result = self.df.groupby('A')['C'].agg(funcs)
- expected = self.df.groupby('A')['C'].agg(ex_funcs)
- assert_frame_equal(result, expected)
-
- result = self.df.groupby('A').agg(funcs)
- expected = self.df.groupby('A').agg(ex_funcs)
- assert_frame_equal(result, expected)
-
- def test_agg_multiple_functions_too_many_lambdas(self):
- grouped = self.df.groupby('A')
- funcs = ['mean', lambda x: x.mean(), lambda x: x.std()]
-
- pytest.raises(SpecificationError, grouped.agg, funcs)
-
- def test_more_flexible_frame_multi_function(self):
-
- grouped = self.df.groupby('A')
-
- exmean = grouped.agg(OrderedDict([['C', np.mean], ['D', np.mean]]))
- exstd = grouped.agg(OrderedDict([['C', np.std], ['D', np.std]]))
-
- expected = concat([exmean, exstd], keys=['mean', 'std'], axis=1)
- expected = expected.swaplevel(0, 1, axis=1).sort_index(level=0, axis=1)
-
- d = OrderedDict([['C', [np.mean, np.std]], ['D', [np.mean, np.std]]])
- result = grouped.aggregate(d)
-
- assert_frame_equal(result, expected)
-
- # be careful
- result = grouped.aggregate(OrderedDict([['C', np.mean],
- ['D', [np.mean, np.std]]]))
- expected = grouped.aggregate(OrderedDict([['C', np.mean],
- ['D', [np.mean, np.std]]]))
- assert_frame_equal(result, expected)
-
- def foo(x):
- return np.mean(x)
-
- def bar(x):
- return np.std(x, ddof=1)
-
- # this uses column selection & renaming
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- d = OrderedDict([['C', np.mean], ['D', OrderedDict(
- [['foo', np.mean], ['bar', np.std]])]])
- result = grouped.aggregate(d)
-
- d = OrderedDict([['C', [np.mean]], ['D', [foo, bar]]])
- expected = grouped.aggregate(d)
-
- assert_frame_equal(result, expected)
-
- def test_multi_function_flexible_mix(self):
- # GH #1268
- grouped = self.df.groupby('A')
-
- d = OrderedDict([['C', OrderedDict([['foo', 'mean'], [
- 'bar', 'std'
- ]])], ['D', 'sum']])
-
- # this uses column selection & renaming
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = grouped.aggregate(d)
-
- d2 = OrderedDict([['C', OrderedDict([['foo', 'mean'], [
- 'bar', 'std'
- ]])], ['D', ['sum']]])
-
- # this uses column selection & renaming
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result2 = grouped.aggregate(d2)
-
- d3 = OrderedDict([['C', OrderedDict([['foo', 'mean'], [
- 'bar', 'std'
- ]])], ['D', {'sum': 'sum'}]])
-
- # this uses column selection & renaming
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- expected = grouped.aggregate(d3)
-
- assert_frame_equal(result, expected)
- assert_frame_equal(result2, expected)
-
- def test_agg_callables(self):
- # GH 7929
- df = DataFrame({'foo': [1, 2], 'bar': [3, 4]}).astype(np.int64)
-
- class fn_class(object):
-
- def __call__(self, x):
- return sum(x)
-
- equiv_callables = [sum, np.sum, lambda x: sum(x), lambda x: x.sum(),
- partial(sum), fn_class()]
-
- expected = df.groupby("foo").agg(sum)
- for ecall in equiv_callables:
- result = df.groupby('foo').agg(ecall)
- assert_frame_equal(result, expected)
-
- def test__cython_agg_general(self):
- ops = [('mean', np.mean),
- ('median', np.median),
- ('var', np.var),
- ('add', np.sum),
- ('prod', np.prod),
- ('min', np.min),
- ('max', np.max),
- ('first', lambda x: x.iloc[0]),
- ('last', lambda x: x.iloc[-1]), ]
- df = DataFrame(np.random.randn(1000))
- labels = np.random.randint(0, 50, size=1000).astype(float)
-
- for op, targop in ops:
- result = df.groupby(labels)._cython_agg_general(op)
- expected = df.groupby(labels).agg(targop)
- try:
- tm.assert_frame_equal(result, expected)
- except BaseException as exc:
- exc.args += ('operation: %s' % op, )
- raise
-
- @pytest.mark.parametrize('op, targop', [
- ('mean', np.mean),
- ('median', lambda x: np.median(x) if len(x) > 0 else np.nan),
- ('var', lambda x: np.var(x, ddof=1)),
- ('min', np.min),
- ('max', np.max), ]
- )
- def test_cython_agg_empty_buckets(self, op, targop):
- df = pd.DataFrame([11, 12, 13])
- grps = range(0, 55, 5)
-
- # calling _cython_agg_general directly, instead of via the user API
- # which sets different values for min_count, so do that here.
- result = df.groupby(pd.cut(df[0], grps))._cython_agg_general(op)
- expected = df.groupby(pd.cut(df[0], grps)).agg(lambda x: targop(x))
- try:
- tm.assert_frame_equal(result, expected)
- except BaseException as exc:
- exc.args += ('operation: %s' % op,)
- raise
-
- def test_cython_agg_empty_buckets_nanops(self):
- # GH-18869 can't call nanops on empty groups, so hardcode expected
- # for these
- df = pd.DataFrame([11, 12, 13], columns=['a'])
- grps = range(0, 25, 5)
- # add / sum
- result = df.groupby(pd.cut(df['a'], grps))._cython_agg_general('add')
- intervals = pd.interval_range(0, 20, freq=5)
- expected = pd.DataFrame(
- {"a": [0, 0, 36, 0]},
- index=pd.CategoricalIndex(intervals, name='a', ordered=True))
- tm.assert_frame_equal(result, expected)
-
- # prod
- result = df.groupby(pd.cut(df['a'], grps))._cython_agg_general('prod')
- expected = pd.DataFrame(
- {"a": [1, 1, 1716, 1]},
- index=pd.CategoricalIndex(intervals, name='a', ordered=True))
- tm.assert_frame_equal(result, expected)
-
- @pytest.mark.xfail(reason="GH-18869: agg func not called on empty groups.")
- def test_agg_category_nansum(self):
- categories = ['a', 'b', 'c']
- df = pd.DataFrame({"A": pd.Categorical(['a', 'a', 'b'],
- categories=categories),
- 'B': [1, 2, 3]})
- result = df.groupby("A").B.agg(np.nansum)
- expected = pd.Series([3, 3, 0],
- index=pd.CategoricalIndex(['a', 'b', 'c'],
- categories=categories,
- name='A'),
- name='B')
- tm.assert_series_equal(result, expected)
-
- def test_agg_over_numpy_arrays(self):
- # GH 3788
- df = pd.DataFrame([[1, np.array([10, 20, 30])],
- [1, np.array([40, 50, 60])],
- [2, np.array([20, 30, 40])]],
- columns=['category', 'arraydata'])
- result = df.groupby('category').agg(sum)
-
- expected_data = [[np.array([50, 70, 90])], [np.array([20, 30, 40])]]
- expected_index = pd.Index([1, 2], name='category')
- expected_column = ['arraydata']
- expected = pd.DataFrame(expected_data,
- index=expected_index,
- columns=expected_column)
-
- assert_frame_equal(result, expected)
-
- def test_agg_timezone_round_trip(self):
- # GH 15426
- ts = pd.Timestamp("2016-01-01 12:00:00", tz='US/Pacific')
- df = pd.DataFrame({'a': 1, 'b': [ts + timedelta(minutes=nn)
- for nn in range(10)]})
-
- result1 = df.groupby('a')['b'].agg(np.min).iloc[0]
- result2 = df.groupby('a')['b'].agg(lambda x: np.min(x)).iloc[0]
- result3 = df.groupby('a')['b'].min().iloc[0]
-
- assert result1 == ts
- assert result2 == ts
- assert result3 == ts
-
- dates = [pd.Timestamp("2016-01-0%d 12:00:00" % i, tz='US/Pacific')
- for i in range(1, 5)]
- df = pd.DataFrame({'A': ['a', 'b'] * 2, 'B': dates})
- grouped = df.groupby('A')
-
- ts = df['B'].iloc[0]
- assert ts == grouped.nth(0)['B'].iloc[0]
- assert ts == grouped.head(1)['B'].iloc[0]
- assert ts == grouped.first()['B'].iloc[0]
- assert ts == grouped.apply(lambda x: x.iloc[0])[0]
-
- ts = df['B'].iloc[2]
- assert ts == grouped.last()['B'].iloc[0]
- assert ts == grouped.apply(lambda x: x.iloc[-1])[0]
-
- def test_sum_uint64_overflow(self):
- # see gh-14758
-
- # Convert to uint64 and don't overflow
- df = pd.DataFrame([[1, 2], [3, 4], [5, 6]],
- dtype=object) + 9223372036854775807
-
- index = pd.Index([9223372036854775808, 9223372036854775810,
- 9223372036854775812], dtype=np.uint64)
- expected = pd.DataFrame({1: [9223372036854775809,
- 9223372036854775811,
- 9223372036854775813]}, index=index)
-
- expected.index.name = 0
- result = df.groupby(0).sum()
- tm.assert_frame_equal(result, expected)
-
- @pytest.mark.parametrize("structure, expected", [
- (tuple, pd.DataFrame({'C': {(1, 1): (1, 1, 1), (3, 4): (3, 4, 4)}})),
- (list, pd.DataFrame({'C': {(1, 1): [1, 1, 1], (3, 4): [3, 4, 4]}})),
- (lambda x: tuple(x), pd.DataFrame({'C': {(1, 1): (1, 1, 1),
- (3, 4): (3, 4, 4)}})),
- (lambda x: list(x), pd.DataFrame({'C': {(1, 1): [1, 1, 1],
- (3, 4): [3, 4, 4]}}))
- ])
- def test_agg_structs_dataframe(self, structure, expected):
- df = pd.DataFrame({'A': [1, 1, 1, 3, 3, 3],
- 'B': [1, 1, 1, 4, 4, 4], 'C': [1, 1, 1, 3, 4, 4]})
-
- result = df.groupby(['A', 'B']).aggregate(structure)
- expected.index.names = ['A', 'B']
- assert_frame_equal(result, expected)
-
- @pytest.mark.parametrize("structure, expected", [
- (tuple, pd.Series([(1, 1, 1), (3, 4, 4)], index=[1, 3], name='C')),
- (list, pd.Series([[1, 1, 1], [3, 4, 4]], index=[1, 3], name='C')),
- (lambda x: tuple(x), pd.Series([(1, 1, 1), (3, 4, 4)],
- index=[1, 3], name='C')),
- (lambda x: list(x), pd.Series([[1, 1, 1], [3, 4, 4]],
- index=[1, 3], name='C'))
- ])
- def test_agg_structs_series(self, structure, expected):
- # Issue #18079
- df = pd.DataFrame({'A': [1, 1, 1, 3, 3, 3],
- 'B': [1, 1, 1, 4, 4, 4], 'C': [1, 1, 1, 3, 4, 4]})
-
- result = df.groupby('A')['C'].aggregate(structure)
- expected.index.name = 'A'
- assert_series_equal(result, expected)
| closes #18490
The module currently has tests in two classes (`TestGroupByAggregate` and `TestGroupByAggregateCython`); the remaining tests are not in any class.
Also made the following changes:
### TestGroupByAggregate class
#### test_agg_must_agg
* replaced `pytest.raises` with `tm.assert_raises_regex`
#### test_agg_apply_corner
* made it more readable
#### test_agg_multiple_functions_too_many_lambdas
* replaced `pytest.raises` with `tm.assert_raises_regex`
#### test_multi_function_flexible_mix
* made it more readable
### TestGroupByAggregateCython class
_applies to all test methods that contain "cython" in the test name_
#### test_cython_agg_nothing_to_agg
* replaced `pytest.raises` with `tm.assert_raises_regex`
#### test_cython_agg_return_dict
* replaced `self.df` with df initialized inside function
### All Others
#### test_agg_dict_renaming_deprecation
* made it more readable
#### test_agg_nested_dicts
* replaced `pytest.raises` with `tm.assert_raises_regex`
* made it more readable
#### test_agg_item_by_item_raise_typeerror
* replaced `pytest.raises` with `tm.assert_raises_regex`
#### test_agg_structs_series
* made it more readable
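For context on the `pytest.raises` → `tm.assert_raises_regex` swaps above: matching the error message, not just the exception type, pins down which error path a test actually exercises. A minimal stdlib-only sketch of the pattern (the helper name mirrors pandas' `tm.assert_raises_regex`; the error text is invented for illustration):

```python
import re
from contextlib import contextmanager


@contextmanager
def assert_raises_regex(exc_type, pattern):
    """Assert the block raises `exc_type` with a message matching `pattern`."""
    try:
        yield
    except exc_type as err:
        # Right exception type, but also verify the message.
        if not re.search(pattern, str(err)):
            raise AssertionError(
                "message %r does not match %r" % (str(err), pattern))
    else:
        raise AssertionError("%s was not raised" % exc_type.__name__)


# A bare `pytest.raises(ValueError)` would pass for *any* ValueError;
# matching the message confirms which check fired.
with assert_raises_regex(ValueError, "too many lambdas"):
    raise ValueError("aggregate: too many lambdas in spec")
```

pytest itself later gained the same capability via `pytest.raises(..., match=...)`, which provides an equivalent message check.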
| https://api.github.com/repos/pandas-dev/pandas/pulls/18931 | 2017-12-24T19:13:09Z | 2017-12-30T12:31:36Z | 2017-12-30T12:31:36Z | 2018-01-05T14:24:09Z |
BUG: Stack/unstack do not return subclassed objects (GH15563) | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index dc305f36f32ec..ec106ff2b2f61 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -449,6 +449,8 @@ Reshaping
- Bug in :func:`cut` which fails when using readonly arrays (:issue:`18773`)
- Bug in :func:`Dataframe.pivot_table` which fails when the ``aggfunc`` arg is of type string. The behavior is now consistent with other methods like ``agg`` and ``apply`` (:issue:`18713`)
- Bug in :func:`DataFrame.merge` in which merging using ``Index`` objects as vectors raised an Exception (:issue:`19038`)
+- Bug in :func:`DataFrame.stack`, :func:`DataFrame.unstack`, :func:`Series.unstack` which were not returning subclasses (:issue:`15563`)
+-
Numeric
^^^^^^^
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index b648c426a877f..28e9694681912 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -80,8 +80,7 @@ def melt(frame, id_vars=None, value_vars=None, var_name=None,
mdata[col] = np.asanyarray(frame.columns
._get_level_values(i)).repeat(N)
- from pandas import DataFrame
- return DataFrame(mdata, columns=mcolumns)
+ return frame._constructor(mdata, columns=mcolumns)
def lreshape(data, groups, dropna=True, label=None):
@@ -152,8 +151,7 @@ def lreshape(data, groups, dropna=True, label=None):
if not mask.all():
mdata = {k: v[mask] for k, v in compat.iteritems(mdata)}
- from pandas import DataFrame
- return DataFrame(mdata, columns=id_cols + pivot_cols)
+ return data._constructor(mdata, columns=id_cols + pivot_cols)
def wide_to_long(df, stubnames, i, j, sep="", suffix=r'\d+'):
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index d6aed064e49f8..7a34044f70c34 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -37,8 +37,23 @@ class _Unstacker(object):
Parameters
----------
+ values : ndarray
+ Values of DataFrame to "Unstack"
+ index : object
+ Pandas ``Index``
level : int or str, default last level
Level to "unstack". Accepts a name for the level.
+ value_columns : Index, optional
+ Pandas ``Index`` or ``MultiIndex`` object if unstacking a DataFrame
+ fill_value : scalar, optional
+ Default value to fill in missing values if subgroups do not have the
+ same set of labels. By default, missing values will be replaced with
+ the default fill value for that data type, NaN for float, NaT for
+ datetimelike, etc. For integer types, by default data will converted to
+ float and missing values will be set to NaN.
+ constructor : object
+ Pandas ``DataFrame`` or subclass used to create unstacked
+ response. If None, DataFrame or SparseDataFrame will be used.
Examples
--------
@@ -69,7 +84,7 @@ class _Unstacker(object):
"""
def __init__(self, values, index, level=-1, value_columns=None,
- fill_value=None):
+ fill_value=None, constructor=None):
self.is_categorical = None
self.is_sparse = is_sparse(values)
@@ -86,6 +101,14 @@ def __init__(self, values, index, level=-1, value_columns=None,
self.value_columns = value_columns
self.fill_value = fill_value
+ if constructor is None:
+ if self.is_sparse:
+ self.constructor = SparseDataFrame
+ else:
+ self.constructor = DataFrame
+ else:
+ self.constructor = constructor
+
if value_columns is None and values.shape[1] != 1: # pragma: no cover
raise ValueError('must pass column labels for multi-column data')
@@ -173,8 +196,7 @@ def get_result(self):
ordered=ordered)
for i in range(values.shape[-1])]
- klass = SparseDataFrame if self.is_sparse else DataFrame
- return klass(values, index=index, columns=columns)
+ return self.constructor(values, index=index, columns=columns)
def get_new_values(self):
values = self.values
@@ -374,8 +396,9 @@ def pivot(self, index=None, columns=None, values=None):
index = self.index
else:
index = self[index]
- indexed = Series(self[values].values,
- index=MultiIndex.from_arrays([index, self[columns]]))
+ indexed = self._constructor_sliced(
+ self[values].values,
+ index=MultiIndex.from_arrays([index, self[columns]]))
return indexed.unstack(columns)
@@ -461,7 +484,8 @@ def unstack(obj, level, fill_value=None):
return obj.T.stack(dropna=False)
else:
unstacker = _Unstacker(obj.values, obj.index, level=level,
- fill_value=fill_value)
+ fill_value=fill_value,
+ constructor=obj._constructor_expanddim)
return unstacker.get_result()
@@ -470,12 +494,12 @@ def _unstack_frame(obj, level, fill_value=None):
unstacker = partial(_Unstacker, index=obj.index,
level=level, fill_value=fill_value)
blocks = obj._data.unstack(unstacker)
- klass = type(obj)
- return klass(blocks)
+ return obj._constructor(blocks)
else:
unstacker = _Unstacker(obj.values, obj.index, level=level,
value_columns=obj.columns,
- fill_value=fill_value)
+ fill_value=fill_value,
+ constructor=obj._constructor)
return unstacker.get_result()
@@ -528,8 +552,7 @@ def factorize(index):
new_values = new_values[mask]
new_index = new_index[mask]
- klass = type(frame)._constructor_sliced
- return klass(new_values, index=new_index)
+ return frame._constructor_sliced(new_values, index=new_index)
def stack_multiple(frame, level, dropna=True):
@@ -676,7 +699,7 @@ def _convert_level_number(level_num, columns):
new_index = MultiIndex(levels=new_levels, labels=new_labels,
names=new_names, verify_integrity=False)
- result = DataFrame(new_data, index=new_index, columns=new_columns)
+ result = frame._constructor(new_data, index=new_index, columns=new_columns)
# more efficient way to go about this? can do the whole masking biz but
# will only save a small amount of time...
diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py
index 52c591e4dcbb0..c52b512c2930a 100644
--- a/pandas/tests/frame/test_subclass.py
+++ b/pandas/tests/frame/test_subclass.py
@@ -5,7 +5,7 @@
from warnings import catch_warnings
import numpy as np
-from pandas import DataFrame, Series, MultiIndex, Panel
+from pandas import DataFrame, Series, MultiIndex, Panel, Index
import pandas as pd
import pandas.util.testing as tm
@@ -247,3 +247,270 @@ def test_subclass_sparse_transpose(self):
[2, 5],
[3, 6]])
tm.assert_sp_frame_equal(ossdf.T, essdf)
+
+ def test_subclass_stack(self):
+ # GH 15564
+ df = tm.SubclassedDataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
+ index=['a', 'b', 'c'],
+ columns=['X', 'Y', 'Z'])
+
+ res = df.stack()
+ exp = tm.SubclassedSeries(
+ [1, 2, 3, 4, 5, 6, 7, 8, 9],
+ index=[list('aaabbbccc'), list('XYZXYZXYZ')])
+
+ tm.assert_series_equal(res, exp)
+
+ def test_subclass_stack_multi(self):
+ # GH 15564
+ df = tm.SubclassedDataFrame([
+ [10, 11, 12, 13],
+ [20, 21, 22, 23],
+ [30, 31, 32, 33],
+ [40, 41, 42, 43]],
+ index=MultiIndex.from_tuples(
+ list(zip(list('AABB'), list('cdcd'))),
+ names=['aaa', 'ccc']),
+ columns=MultiIndex.from_tuples(
+ list(zip(list('WWXX'), list('yzyz'))),
+ names=['www', 'yyy']))
+
+ exp = tm.SubclassedDataFrame([
+ [10, 12],
+ [11, 13],
+ [20, 22],
+ [21, 23],
+ [30, 32],
+ [31, 33],
+ [40, 42],
+ [41, 43]],
+ index=MultiIndex.from_tuples(list(zip(
+ list('AAAABBBB'), list('ccddccdd'), list('yzyzyzyz'))),
+ names=['aaa', 'ccc', 'yyy']),
+ columns=Index(['W', 'X'], name='www'))
+
+ res = df.stack()
+ tm.assert_frame_equal(res, exp)
+
+ res = df.stack('yyy')
+ tm.assert_frame_equal(res, exp)
+
+ exp = tm.SubclassedDataFrame([
+ [10, 11],
+ [12, 13],
+ [20, 21],
+ [22, 23],
+ [30, 31],
+ [32, 33],
+ [40, 41],
+ [42, 43]],
+ index=MultiIndex.from_tuples(list(zip(
+ list('AAAABBBB'), list('ccddccdd'), list('WXWXWXWX'))),
+ names=['aaa', 'ccc', 'www']),
+ columns=Index(['y', 'z'], name='yyy'))
+
+ res = df.stack('www')
+ tm.assert_frame_equal(res, exp)
+
+ def test_subclass_stack_multi_mixed(self):
+ # GH 15564
+ df = tm.SubclassedDataFrame([
+ [10, 11, 12.0, 13.0],
+ [20, 21, 22.0, 23.0],
+ [30, 31, 32.0, 33.0],
+ [40, 41, 42.0, 43.0]],
+ index=MultiIndex.from_tuples(
+ list(zip(list('AABB'), list('cdcd'))),
+ names=['aaa', 'ccc']),
+ columns=MultiIndex.from_tuples(
+ list(zip(list('WWXX'), list('yzyz'))),
+ names=['www', 'yyy']))
+
+ exp = tm.SubclassedDataFrame([
+ [10, 12.0],
+ [11, 13.0],
+ [20, 22.0],
+ [21, 23.0],
+ [30, 32.0],
+ [31, 33.0],
+ [40, 42.0],
+ [41, 43.0]],
+ index=MultiIndex.from_tuples(list(zip(
+ list('AAAABBBB'), list('ccddccdd'), list('yzyzyzyz'))),
+ names=['aaa', 'ccc', 'yyy']),
+ columns=Index(['W', 'X'], name='www'))
+
+ res = df.stack()
+ tm.assert_frame_equal(res, exp)
+
+ res = df.stack('yyy')
+ tm.assert_frame_equal(res, exp)
+
+ exp = tm.SubclassedDataFrame([
+ [10.0, 11.0],
+ [12.0, 13.0],
+ [20.0, 21.0],
+ [22.0, 23.0],
+ [30.0, 31.0],
+ [32.0, 33.0],
+ [40.0, 41.0],
+ [42.0, 43.0]],
+ index=MultiIndex.from_tuples(list(zip(
+ list('AAAABBBB'), list('ccddccdd'), list('WXWXWXWX'))),
+ names=['aaa', 'ccc', 'www']),
+ columns=Index(['y', 'z'], name='yyy'))
+
+ res = df.stack('www')
+ tm.assert_frame_equal(res, exp)
+
+ def test_subclass_unstack(self):
+ # GH 15564
+ df = tm.SubclassedDataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
+ index=['a', 'b', 'c'],
+ columns=['X', 'Y', 'Z'])
+
+ res = df.unstack()
+ exp = tm.SubclassedSeries(
+ [1, 4, 7, 2, 5, 8, 3, 6, 9],
+ index=[list('XXXYYYZZZ'), list('abcabcabc')])
+
+ tm.assert_series_equal(res, exp)
+
+ def test_subclass_unstack_multi(self):
+ # GH 15564
+ df = tm.SubclassedDataFrame([
+ [10, 11, 12, 13],
+ [20, 21, 22, 23],
+ [30, 31, 32, 33],
+ [40, 41, 42, 43]],
+ index=MultiIndex.from_tuples(
+ list(zip(list('AABB'), list('cdcd'))),
+ names=['aaa', 'ccc']),
+ columns=MultiIndex.from_tuples(
+ list(zip(list('WWXX'), list('yzyz'))),
+ names=['www', 'yyy']))
+
+ exp = tm.SubclassedDataFrame([
+ [10, 20, 11, 21, 12, 22, 13, 23],
+ [30, 40, 31, 41, 32, 42, 33, 43]],
+ index=Index(['A', 'B'], name='aaa'),
+ columns=MultiIndex.from_tuples(list(zip(
+ list('WWWWXXXX'), list('yyzzyyzz'), list('cdcdcdcd'))),
+ names=['www', 'yyy', 'ccc']))
+
+ res = df.unstack()
+ tm.assert_frame_equal(res, exp)
+
+ res = df.unstack('ccc')
+ tm.assert_frame_equal(res, exp)
+
+ exp = tm.SubclassedDataFrame([
+ [10, 30, 11, 31, 12, 32, 13, 33],
+ [20, 40, 21, 41, 22, 42, 23, 43]],
+ index=Index(['c', 'd'], name='ccc'),
+ columns=MultiIndex.from_tuples(list(zip(
+ list('WWWWXXXX'), list('yyzzyyzz'), list('ABABABAB'))),
+ names=['www', 'yyy', 'aaa']))
+
+ res = df.unstack('aaa')
+ tm.assert_frame_equal(res, exp)
+
+ def test_subclass_unstack_multi_mixed(self):
+ # GH 15564
+ df = tm.SubclassedDataFrame([
+ [10, 11, 12.0, 13.0],
+ [20, 21, 22.0, 23.0],
+ [30, 31, 32.0, 33.0],
+ [40, 41, 42.0, 43.0]],
+ index=MultiIndex.from_tuples(
+ list(zip(list('AABB'), list('cdcd'))),
+ names=['aaa', 'ccc']),
+ columns=MultiIndex.from_tuples(
+ list(zip(list('WWXX'), list('yzyz'))),
+ names=['www', 'yyy']))
+
+ exp = tm.SubclassedDataFrame([
+ [10, 20, 11, 21, 12.0, 22.0, 13.0, 23.0],
+ [30, 40, 31, 41, 32.0, 42.0, 33.0, 43.0]],
+ index=Index(['A', 'B'], name='aaa'),
+ columns=MultiIndex.from_tuples(list(zip(
+ list('WWWWXXXX'), list('yyzzyyzz'), list('cdcdcdcd'))),
+ names=['www', 'yyy', 'ccc']))
+
+ res = df.unstack()
+ tm.assert_frame_equal(res, exp)
+
+ res = df.unstack('ccc')
+ tm.assert_frame_equal(res, exp)
+
+ exp = tm.SubclassedDataFrame([
+ [10, 30, 11, 31, 12.0, 32.0, 13.0, 33.0],
+ [20, 40, 21, 41, 22.0, 42.0, 23.0, 43.0]],
+ index=Index(['c', 'd'], name='ccc'),
+ columns=MultiIndex.from_tuples(list(zip(
+ list('WWWWXXXX'), list('yyzzyyzz'), list('ABABABAB'))),
+ names=['www', 'yyy', 'aaa']))
+
+ res = df.unstack('aaa')
+ tm.assert_frame_equal(res, exp)
+
+ def test_subclass_pivot(self):
+ # GH 15564
+ df = tm.SubclassedDataFrame({
+ 'index': ['A', 'B', 'C', 'C', 'B', 'A'],
+ 'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'],
+ 'values': [1., 2., 3., 3., 2., 1.]})
+
+ pivoted = df.pivot(
+ index='index', columns='columns', values='values')
+
+ expected = tm.SubclassedDataFrame({
+ 'One': {'A': 1., 'B': 2., 'C': 3.},
+ 'Two': {'A': 1., 'B': 2., 'C': 3.}})
+
+ expected.index.name, expected.columns.name = 'index', 'columns'
+
+ tm.assert_frame_equal(pivoted, expected)
+
+ def test_subclassed_melt(self):
+ # GH 15564
+ cheese = tm.SubclassedDataFrame({
+ 'first': ['John', 'Mary'],
+ 'last': ['Doe', 'Bo'],
+ 'height': [5.5, 6.0],
+ 'weight': [130, 150]})
+
+ melted = pd.melt(cheese, id_vars=['first', 'last'])
+
+ expected = tm.SubclassedDataFrame([
+ ['John', 'Doe', 'height', 5.5],
+ ['Mary', 'Bo', 'height', 6.0],
+ ['John', 'Doe', 'weight', 130],
+ ['Mary', 'Bo', 'weight', 150]],
+ columns=['first', 'last', 'variable', 'value'])
+
+ tm.assert_frame_equal(melted, expected)
+
+ def test_subclassed_wide_to_long(self):
+ # GH 9762
+
+ np.random.seed(123)
+ x = np.random.randn(3)
+ df = tm.SubclassedDataFrame({
+ "A1970": {0: "a", 1: "b", 2: "c"},
+ "A1980": {0: "d", 1: "e", 2: "f"},
+ "B1970": {0: 2.5, 1: 1.2, 2: .7},
+ "B1980": {0: 3.2, 1: 1.3, 2: .1},
+ "X": dict(zip(range(3), x))})
+
+ df["id"] = df.index
+ exp_data = {"X": x.tolist() + x.tolist(),
+ "A": ['a', 'b', 'c', 'd', 'e', 'f'],
+ "B": [2.5, 1.2, 0.7, 3.2, 1.3, 0.1],
+ "year": [1970, 1970, 1970, 1980, 1980, 1980],
+ "id": [0, 1, 2, 0, 1, 2]}
+ expected = tm.SubclassedDataFrame(exp_data)
+ expected = expected.set_index(['id', 'year'])[["X", "A", "B"]]
+ long_frame = pd.wide_to_long(df, ["A", "B"], i="id", j="year")
+
+ tm.assert_frame_equal(long_frame, expected)
diff --git a/pandas/tests/series/test_subclass.py b/pandas/tests/series/test_subclass.py
index 37c8d7343f7f1..60afaa3b821e1 100644
--- a/pandas/tests/series/test_subclass.py
+++ b/pandas/tests/series/test_subclass.py
@@ -13,24 +13,31 @@ def test_indexing_sliced(self):
res = s.loc[['a', 'b']]
exp = tm.SubclassedSeries([1, 2], index=list('ab'))
tm.assert_series_equal(res, exp)
- assert isinstance(res, tm.SubclassedSeries)
res = s.iloc[[2, 3]]
exp = tm.SubclassedSeries([3, 4], index=list('cd'))
tm.assert_series_equal(res, exp)
- assert isinstance(res, tm.SubclassedSeries)
res = s.loc[['a', 'b']]
exp = tm.SubclassedSeries([1, 2], index=list('ab'))
tm.assert_series_equal(res, exp)
- assert isinstance(res, tm.SubclassedSeries)
def test_to_frame(self):
s = tm.SubclassedSeries([1, 2, 3, 4], index=list('abcd'), name='xxx')
res = s.to_frame()
exp = tm.SubclassedDataFrame({'xxx': [1, 2, 3, 4]}, index=list('abcd'))
tm.assert_frame_equal(res, exp)
- assert isinstance(res, tm.SubclassedDataFrame)
+
+ def test_subclass_unstack(self):
+ # GH 15564
+ s = tm.SubclassedSeries(
+ [1, 2, 3, 4], index=[list('aabb'), list('xyxy')])
+
+ res = s.unstack()
+ exp = tm.SubclassedDataFrame(
+ {'x': [1, 3], 'y': [2, 4]}, index=['a', 'b'])
+
+ tm.assert_frame_equal(res, exp)
class TestSparseSeriesSubclassing(object):
| - [x] closes #15563
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Essentially cherry-picked from #15655 to make `stack`/`unstack` preserve subclassed DataFrames and Series.
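Under the hood, the fix swaps hard-coded `DataFrame(...)`/`Series(...)` calls in the reshape code for the object's own `_constructor` hooks, so the subclass type survives the round trip. A minimal stdlib sketch of that pattern (class names here are illustrative, not pandas API):

```python
class Frame(object):
    def __init__(self, data):
        self.data = data

    @property
    def _constructor(self):
        # Subclasses inherit this property, so results are
        # built as the subclass rather than the base class.
        return type(self)

    def reshape(self):
        # Hard-coding `Frame(...)` here would silently drop the subclass;
        # going through `_constructor` keeps it.
        return self._constructor(list(reversed(self.data)))


class SubFrame(Frame):
    pass


res = SubFrame([1, 2, 3]).reshape()
assert type(res) is SubFrame  # subclass preserved
```

This is the same design pandas documents for subclassing: operations dispatch through `_constructor` (and `_constructor_sliced` / `_constructor_expanddim` for dimension changes) instead of naming a concrete class.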
| https://api.github.com/repos/pandas-dev/pandas/pulls/18929 | 2017-12-24T14:07:49Z | 2018-01-12T11:49:01Z | 2018-01-12T11:49:01Z | 2018-01-12T11:49:08Z |
CLN: Remove Timestamp.offset | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 3f300deddebeb..6be58dff0eecb 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -238,6 +238,7 @@ Removal of prior version deprecations/changes
- :func:`read_csv` has dropped the ``as_recarray`` parameter (:issue:`13373`)
- :func:`read_csv` has dropped the ``buffer_lines`` parameter (:issue:`13360`)
- :func:`read_csv` has dropped the ``compact_ints`` and ``use_unsigned`` parameters (:issue:`13323`)
+- The ``Timestamp`` class has dropped the ``offset`` attribute in favor of ``freq`` (:issue:`13593`)
.. _whatsnew_0230.performance:
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index 086657e8c97b4..683be4c9aa3a8 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -396,7 +396,7 @@ class NaTType(_NaT):
""")
fromordinal = _make_error_func('fromordinal', # noqa:E128
"""
- Timestamp.fromordinal(ordinal, freq=None, tz=None, offset=None)
+ Timestamp.fromordinal(ordinal, freq=None, tz=None)
passed an ordinal, translate and convert to a ts
note: by definition there cannot be any tz info on the ordinal itself
@@ -409,8 +409,6 @@ class NaTType(_NaT):
Offset which Timestamp will have
tz : str, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time which Timestamp will have.
- offset : str, DateOffset
- Deprecated, use freq
""")
# _nat_methods
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 67045cde8661f..1792f852c9e1e 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -435,9 +435,9 @@ class Timestamp(_Timestamp):
"""
@classmethod
- def fromordinal(cls, ordinal, freq=None, tz=None, offset=None):
+ def fromordinal(cls, ordinal, freq=None, tz=None):
"""
- Timestamp.fromordinal(ordinal, freq=None, tz=None, offset=None)
+ Timestamp.fromordinal(ordinal, freq=None, tz=None)
passed an ordinal, translate and convert to a ts
note: by definition there cannot be any tz info on the ordinal itself
@@ -450,11 +450,9 @@ class Timestamp(_Timestamp):
Offset which Timestamp will have
tz : str, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time which Timestamp will have.
- offset : str, DateOffset
- Deprecated, use freq
"""
return cls(datetime.fromordinal(ordinal),
- freq=freq, tz=tz, offset=offset)
+ freq=freq, tz=tz)
@classmethod
def now(cls, tz=None):
@@ -529,8 +527,7 @@ class Timestamp(_Timestamp):
object freq=None, tz=None, unit=None,
year=None, month=None, day=None,
hour=None, minute=None, second=None, microsecond=None,
- tzinfo=None,
- object offset=None):
+ tzinfo=None):
# The parameter list folds together legacy parameter names (the first
# four) and positional and keyword parameter names from pydatetime.
#
@@ -554,15 +551,6 @@ class Timestamp(_Timestamp):
cdef _TSObject ts
- if offset is not None:
- # deprecate offset kwd in 0.19.0, GH13593
- if freq is not None:
- msg = "Can only specify freq or offset, not both"
- raise TypeError(msg)
- warnings.warn("offset is deprecated. Use freq instead",
- FutureWarning)
- freq = offset
-
if tzinfo is not None:
if not PyTZInfo_Check(tzinfo):
# tzinfo must be a datetime.tzinfo object, GH#17690
@@ -676,12 +664,6 @@ class Timestamp(_Timestamp):
"""
return self.tzinfo
- @property
- def offset(self):
- warnings.warn(".offset is deprecated. Use .freq instead",
- FutureWarning)
- return self.freq
-
def __setstate__(self, state):
self.value = state[0]
self.freq = state[1]
diff --git a/pandas/tests/scalar/test_nat.py b/pandas/tests/scalar/test_nat.py
index 7194849f19ebb..69ce7a42851a1 100644
--- a/pandas/tests/scalar/test_nat.py
+++ b/pandas/tests/scalar/test_nat.py
@@ -170,8 +170,9 @@ def test_NaT_docstrings():
ts_missing = [x for x in ts_names if x not in nat_names and
not x.startswith('_')]
ts_missing.sort()
- ts_expected = ['freqstr', 'normalize', 'offset',
- 'to_julian_date', 'to_period', 'tz']
+ ts_expected = ['freqstr', 'normalize',
+ 'to_julian_date',
+ 'to_period', 'tz']
assert ts_missing == ts_expected
ts_overlap = [x for x in nat_names if x in ts_names and
diff --git a/pandas/tests/scalar/test_timestamp.py b/pandas/tests/scalar/test_timestamp.py
index 19c09701f6106..4f4f2648d3834 100644
--- a/pandas/tests/scalar/test_timestamp.py
+++ b/pandas/tests/scalar/test_timestamp.py
@@ -307,36 +307,6 @@ def test_constructor_fromordinal(self):
ts = Timestamp.fromordinal(dt_tz.toordinal(), tz='US/Eastern')
assert ts.to_pydatetime() == dt_tz
- def test_constructor_offset_depr(self):
- # see gh-12160
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- ts = Timestamp('2011-01-01', offset='D')
- assert ts.freq == 'D'
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- assert ts.offset == 'D'
-
- msg = "Can only specify freq or offset, not both"
- with tm.assert_raises_regex(TypeError, msg):
- Timestamp('2011-01-01', offset='D', freq='D')
-
- def test_constructor_offset_depr_fromordinal(self):
- # GH 12160
- base = datetime(2000, 1, 1)
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- ts = Timestamp.fromordinal(base.toordinal(), offset='D')
- assert Timestamp('2000-01-01') == ts
- assert ts.freq == 'D'
- assert base.toordinal() == ts.toordinal()
-
- msg = "Can only specify freq or offset, not both"
- with tm.assert_raises_regex(TypeError, msg):
- Timestamp.fromordinal(base.toordinal(), offset='D', freq='D')
-
class TestTimestamp(object):
| Removes the `Timestamp.offset` attribute and the `offset` constructor keyword, deprecated since v0.19.0.
xref #13593
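This finishes the usual deprecate-then-remove cycle: v0.19.0 shipped a shim that warned and forwarded `offset` to `freq`, and this PR deletes that shim. A stdlib sketch of the kind of shim being removed (the messages mirror the deleted code, but the function is a stand-in, not the pandas constructor):

```python
import warnings


def make_timestamp(value, freq=None, offset=None):
    # Transitional shim: accept the old keyword, warn, forward to the new one.
    if offset is not None:
        if freq is not None:
            raise TypeError("Can only specify freq or offset, not both")
        warnings.warn("offset is deprecated. Use freq instead", FutureWarning)
        freq = offset
    return (value, freq)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Old keyword still works during the deprecation window...
    assert make_timestamp("2011-01-01", offset="D") == ("2011-01-01", "D")
    # ...but emits a FutureWarning pointing at the replacement.
    assert issubclass(caught[-1].category, FutureWarning)
```

Once a full deprecation cycle has elapsed, the shim (and its tests, as in this diff) can be dropped outright.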
| https://api.github.com/repos/pandas-dev/pandas/pulls/18927 | 2017-12-24T10:01:47Z | 2017-12-26T22:13:27Z | 2017-12-26T22:13:27Z | 2017-12-27T07:24:37Z |
CLN: ASV offset | diff --git a/asv_bench/benchmarks/offset.py b/asv_bench/benchmarks/offset.py
index 849776bf9a591..034e861e7fc01 100644
--- a/asv_bench/benchmarks/offset.py
+++ b/asv_bench/benchmarks/offset.py
@@ -2,51 +2,58 @@
from datetime import datetime
import numpy as np
-
import pandas as pd
-from pandas import date_range
-
try:
- import pandas.tseries.holiday
+ import pandas.tseries.holiday # noqa
except ImportError:
pass
hcal = pd.tseries.holiday.USFederalHolidayCalendar()
+# These offsets currently raise a NotImplementedError with .apply_index()
+non_apply = [pd.offsets.Day(),
+ pd.offsets.BYearEnd(),
+ pd.offsets.BYearBegin(),
+ pd.offsets.BQuarterEnd(),
+ pd.offsets.BQuarterBegin(),
+ pd.offsets.BMonthEnd(),
+ pd.offsets.BMonthBegin(),
+ pd.offsets.CustomBusinessDay(),
+ pd.offsets.CustomBusinessDay(calendar=hcal),
+ pd.offsets.CustomBusinessMonthBegin(calendar=hcal),
+ pd.offsets.CustomBusinessMonthEnd(calendar=hcal),
+ pd.offsets.CustomBusinessMonthEnd(calendar=hcal)]
+other_offsets = [pd.offsets.YearEnd(), pd.offsets.YearBegin(),
+ pd.offsets.QuarterEnd(), pd.offsets.QuarterBegin(),
+ pd.offsets.MonthEnd(), pd.offsets.MonthBegin(),
+ pd.offsets.DateOffset(months=2, days=2),
+ pd.offsets.BusinessDay(), pd.offsets.SemiMonthEnd(),
+ pd.offsets.SemiMonthBegin()]
+offsets = non_apply + other_offsets
class ApplyIndex(object):
- goal_time = 0.2
- params = [pd.offsets.YearEnd(), pd.offsets.YearBegin(),
- pd.offsets.BYearEnd(), pd.offsets.BYearBegin(),
- pd.offsets.QuarterEnd(), pd.offsets.QuarterBegin(),
- pd.offsets.BQuarterEnd(), pd.offsets.BQuarterBegin(),
- pd.offsets.MonthEnd(), pd.offsets.MonthBegin(),
- pd.offsets.BMonthEnd(), pd.offsets.BMonthBegin()]
-
- def setup(self, param):
- self.offset = param
+ goal_time = 0.2
- self.N = 100000
- self.rng = date_range(start='1/1/2000', periods=self.N, freq='T')
- self.ser = pd.Series(self.rng)
+ params = other_offsets
+ param_names = ['offset']
- def time_apply_index(self, param):
- self.rng + self.offset
+ def setup(self, offset):
+ N = 10000
+ self.rng = pd.date_range(start='1/1/2000', periods=N, freq='T')
- def time_apply_series(self, param):
- self.ser + self.offset
+ def time_apply_index(self, offset):
+ offset.apply_index(self.rng)
class OnOffset(object):
+
goal_time = 0.2
- params = [pd.offsets.QuarterBegin(), pd.offsets.QuarterEnd(),
- pd.offsets.BQuarterBegin(), pd.offsets.BQuarterEnd()]
+ params = offsets
param_names = ['offset']
def setup(self, offset):
- self.offset = offset
self.dates = [datetime(2016, m, d)
for m in [10, 11, 12]
for d in [1, 2, 3, 28, 29, 30, 31]
@@ -54,205 +61,62 @@ def setup(self, offset):
def time_on_offset(self, offset):
for date in self.dates:
- self.offset.onOffset(date)
-
-
-class DatetimeIndexArithmetic(object):
- goal_time = 0.2
-
- def setup(self):
- self.N = 100000
- self.rng = date_range(start='1/1/2000', periods=self.N, freq='T')
- self.day_offset = pd.offsets.Day()
- self.relativedelta_offset = pd.offsets.DateOffset(months=2, days=2)
- self.busday_offset = pd.offsets.BusinessDay()
-
- def time_add_offset_delta(self):
- self.rng + self.day_offset
-
- def time_add_offset_fast(self):
- self.rng + self.relativedelta_offset
-
- def time_add_offset_slow(self):
- self.rng + self.busday_offset
-
-
-class SeriesArithmetic(object):
- goal_time = 0.2
+ offset.onOffset(date)
- def setup(self):
- self.N = 100000
- rng = date_range(start='20140101', freq='T', periods=self.N)
- self.ser = pd.Series(rng)
- self.day_offset = pd.offsets.Day()
- self.relativedelta_offset = pd.offsets.DateOffset(months=2, days=2)
- self.busday_offset = pd.offsets.BusinessDay()
- def time_add_offset_delta(self):
- self.ser + self.day_offset
+class OffsetSeriesArithmetic(object):
- def time_add_offset_fast(self):
- self.ser + self.relativedelta_offset
-
- def time_add_offset_slow(self):
- self.ser + self.busday_offset
-
-
-class YearBegin(object):
goal_time = 0.2
+ params = offsets
+ param_names = ['offset']
- def setup(self):
- self.date = datetime(2011, 1, 1)
- self.year = pd.offsets.YearBegin()
+ def setup(self, offset):
+ N = 1000
+ rng = pd.date_range(start='1/1/2000', periods=N, freq='T')
+ self.data = pd.Series(rng)
- def time_timeseries_year_apply(self):
- self.year.apply(self.date)
+ def time_add_offset(self, offset):
+ self.data + offset
- def time_timeseries_year_incr(self):
- self.date + self.year
+class OffsetDatetimeIndexArithmetic(object):
-class Day(object):
goal_time = 0.2
+ params = offsets
+ param_names = ['offset']
- def setup(self):
- self.date = datetime(2011, 1, 1)
- self.day = pd.offsets.Day()
+ def setup(self, offset):
+ N = 1000
+ self.data = pd.date_range(start='1/1/2000', periods=N, freq='T')
- def time_timeseries_day_apply(self):
- self.day.apply(self.date)
+ def time_add_offset(self, offset):
+ self.data + offset
- def time_timeseries_day_incr(self):
- self.date + self.day
+class OffestDatetimeArithmetic(object):
-class CBDay(object):
goal_time = 0.2
+ params = offsets
+ param_names = ['offset']
- def setup(self):
+ def setup(self, offset):
self.date = datetime(2011, 1, 1)
self.dt64 = np.datetime64('2011-01-01 09:00Z')
- self.cday = pd.offsets.CustomBusinessDay()
-
- def time_custom_bday_decr(self):
- self.date - self.cday
-
- def time_custom_bday_incr(self):
- self.date + self.cday
-
- def time_custom_bday_apply(self):
- self.cday.apply(self.date)
-
- def time_custom_bday_apply_dt64(self):
- self.cday.apply(self.dt64)
-
-
-class CBDayHolidays(object):
- goal_time = 0.2
-
- def setup(self):
- self.date = datetime(2011, 1, 1)
- self.cdayh = pd.offsets.CustomBusinessDay(calendar=hcal)
-
- def time_custom_bday_cal_incr(self):
- self.date + 1 * self.cdayh
-
- def time_custom_bday_cal_decr(self):
- self.date - 1 * self.cdayh
-
- def time_custom_bday_cal_incr_n(self):
- self.date + 10 * self.cdayh
-
- def time_custom_bday_cal_incr_neg_n(self):
- self.date - 10 * self.cdayh
-
-
-class CBMonthBegin(object):
- goal_time = 0.2
-
- def setup(self):
- self.date = datetime(2011, 1, 1)
- self.cmb = pd.offsets.CustomBusinessMonthBegin(calendar=hcal)
-
- def time_custom_bmonthbegin_decr_n(self):
- self.date - (10 * self.cmb)
-
- def time_custom_bmonthbegin_incr_n(self):
- self.date + (10 * self.cmb)
-
-
-class CBMonthEnd(object):
- goal_time = 0.2
-
- def setup(self):
- self.date = datetime(2011, 1, 1)
- self.cme = pd.offsets.CustomBusinessMonthEnd(calendar=hcal)
-
- def time_custom_bmonthend_incr(self):
- self.date + self.cme
-
- def time_custom_bmonthend_incr_n(self):
- self.date + (10 * self.cme)
-
- def time_custom_bmonthend_decr_n(self):
- self.date - (10 * self.cme)
-
-
-class SemiMonthOffset(object):
- goal_time = 0.2
-
- def setup(self):
- self.N = 100000
- self.rng = date_range(start='1/1/2000', periods=self.N, freq='T')
- # date is not on an offset which will be slowest case
- self.date = datetime(2011, 1, 2)
- self.semi_month_end = pd.offsets.SemiMonthEnd()
- self.semi_month_begin = pd.offsets.SemiMonthBegin()
-
- def time_end_apply(self):
- self.semi_month_end.apply(self.date)
-
- def time_end_incr(self):
- self.date + self.semi_month_end
-
- def time_end_incr_n(self):
- self.date + 10 * self.semi_month_end
-
- def time_end_decr(self):
- self.date - self.semi_month_end
-
- def time_end_decr_n(self):
- self.date - 10 * self.semi_month_end
-
- def time_end_apply_index(self):
- self.semi_month_end.apply_index(self.rng)
-
- def time_end_incr_rng(self):
- self.rng + self.semi_month_end
-
- def time_end_decr_rng(self):
- self.rng - self.semi_month_end
-
- def time_begin_apply(self):
- self.semi_month_begin.apply(self.date)
-
- def time_begin_incr(self):
- self.date + self.semi_month_begin
- def time_begin_incr_n(self):
- self.date + 10 * self.semi_month_begin
+ def time_apply(self, offset):
+ offset.apply(self.date)
- def time_begin_decr(self):
- self.date - self.semi_month_begin
+ def time_apply_np_dt64(self, offset):
+ offset.apply(self.dt64)
- def time_begin_decr_n(self):
- self.date - 10 * self.semi_month_begin
+ def time_add(self, offset):
+ self.date + offset
- def time_begin_apply_index(self):
- self.semi_month_begin.apply_index(self.rng)
+ def time_add_10(self, offset):
+ self.date + (10 * offset)
- def time_begin_incr_rng(self):
- self.rng + self.semi_month_begin
+ def time_subtract(self, offset):
+ self.date - offset
- def time_begin_decr_rng(self):
- self.rng - self.semi_month_begin
+ def time_subtract_10(self, offset):
+ self.date - (10 * offset)
diff --git a/ci/lint.sh b/ci/lint.sh
index d678cd1ce5d70..5380c91831cec 100755
--- a/ci/lint.sh
+++ b/ci/lint.sh
@@ -24,7 +24,7 @@ if [ "$LINT" ]; then
echo "Linting setup.py DONE"
echo "Linting asv_bench/benchmarks/"
- flake8 asv_bench/benchmarks/ --exclude=asv_bench/benchmarks/[ijoprs]*.py --ignore=F811
+ flake8 asv_bench/benchmarks/ --exclude=asv_bench/benchmarks/[ips]*.py --ignore=F811
if [ $? -ne "0" ]; then
RET=1
fi
| - Consolidated all the offsets into a single list that is reused across all the benchmarks.
- The `ApplyIndex` benchmark seemed to be benchmarking arithmetic, so I changed it to specifically use the `apply_index()` method.
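The consolidation pattern used in the diff — one module-level `params` list shared by several benchmark classes — is asv's standard parameterization mechanism: for each value in `params`, asv calls `setup` and then every `time_*` method with that value. A generic sketch with toy parameters (not the pandas offsets):

```python
# Sketch of an asv-style parameterized benchmark: for each value in
# ``params``, asv calls ``setup`` and then every ``time_*`` method.
params = [1, 10, 100]  # shared module-level list, like ``offsets`` above


class AddConstant(object):
    goal_time = 0.2
    params = params        # reuse the shared list, as the diff does
    param_names = ['n']

    def setup(self, n):
        # Fixture rebuilt for every parameter value.
        self.data = list(range(1000))

    def time_add(self, n):
        # The body being timed; asv measures this call.
        [x + n for x in self.data]
```

Each parameter value then shows up as its own row in the asv results table, which is why the output below reports one timing per offset.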
```
$ asv dev -b ^offset
· Discovering benchmarks
· Running 10 total benchmarks (1 commits * 1 environments * 10 benchmarks)
[ 0.00%] ·· Building for existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 0.00%] ·· Benchmarking existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 10.00%] ··· Running offset.ApplyIndex.time_apply_index ok
[ 10.00%] ····
============================================= ========
offset
--------------------------------------------- --------
<YearEnd: month=12> 173ms
<YearBegin: month=1> 6.94ms
<QuarterEnd: startingMonth=3> 194ms
<QuarterBegin: startingMonth=3> 7.15ms
<MonthEnd> 1.28ms
<MonthBegin> 1.19ms
<DateOffset: kwds={'months': 2, 'days': 2}> 2.45ms
<BusinessDay> 136ms
<SemiMonthEnd: day_of_month=15> 138ms
<SemiMonthBegin: day_of_month=15> 140ms
============================================= ========
[ 20.00%] ··· Running offset.OffestDatetimeArithmetic.time_add ok
[ 20.00%] ····
============================================= ========
offset
--------------------------------------------- --------
<Day> 106μs
<BusinessYearEnd: month=12> 202μs
<BusinessYearBegin: month=1> 123μs
<BusinessQuarterEnd: startingMonth=3> 105μs
<BusinessQuarterBegin: startingMonth=3> 124μs
<BusinessMonthEnd> 121μs
<BusinessMonthBegin> 138μs
<CustomBusinessDay> 89.2μs
<CustomBusinessDay> 89.6μs
<CustomBusinessMonthBegin> 415μs
<CustomBusinessMonthEnd> 442μs
<CustomBusinessMonthEnd> 422μs
<YearEnd: month=12> 63.0μs
<YearBegin: month=1> 61.7μs
<QuarterEnd: startingMonth=3> 105μs
<QuarterBegin: startingMonth=3> 150μs
<MonthEnd> 176μs
<MonthBegin> 121μs
<DateOffset: kwds={'months': 2, 'days': 2}> 69.2μs
<BusinessDay> 84.9μs
<SemiMonthEnd: day_of_month=15> 179μs
<SemiMonthBegin: day_of_month=15> 177μs
============================================= ========
[ 30.00%] ··· Running offset.OffestDatetimeArithmetic.time_add_10 ok
[ 30.00%] ····
============================================= ========
offset
--------------------------------------------- --------
<Day> 121μs
<BusinessYearEnd: month=12> 161μs
<BusinessYearBegin: month=1> 162μs
<BusinessQuarterEnd: startingMonth=3> 207μs
<BusinessQuarterBegin: startingMonth=3> 137μs
<BusinessMonthEnd> 141μs
<BusinessMonthBegin> 156μs
<CustomBusinessDay> 111μs
<CustomBusinessDay> 121μs
<CustomBusinessMonthBegin> 458μs
<CustomBusinessMonthEnd> 464μs
<CustomBusinessMonthEnd> 461μs
<YearEnd: month=12> 129μs
<YearBegin: month=1> 99.1μs
<QuarterEnd: startingMonth=3> 115μs
<QuarterBegin: startingMonth=3> 219μs
<MonthEnd> 192μs
<MonthBegin> 148μs
<DateOffset: kwds={'months': 2, 'days': 2}> 280μs
<BusinessDay> 101μs
<SemiMonthEnd: day_of_month=15> 187μs
<SemiMonthBegin: day_of_month=15> 185μs
============================================= ========
[ 40.00%] ··· Running offset.OffestDatetimeArithmetic.time_apply ok
[ 40.00%] ····
============================================= ========
offset
--------------------------------------------- --------
<Day> 98.1μs
<BusinessYearEnd: month=12> 192μs
<BusinessYearBegin: month=1> 116μs
<BusinessQuarterEnd: startingMonth=3> 96.5μs
<BusinessQuarterBegin: startingMonth=3> 116μs
<BusinessMonthEnd> 118μs
<BusinessMonthBegin> 129μs
<CustomBusinessDay> 79.8μs
<CustomBusinessDay> 81.3μs
<CustomBusinessMonthBegin> 526μs
<CustomBusinessMonthEnd> 416μs
<CustomBusinessMonthEnd> 414μs
<YearEnd: month=12> 53.1μs
<YearBegin: month=1> 51.8μs
<QuarterEnd: startingMonth=3> 95.0μs
<QuarterBegin: startingMonth=3> 137μs
<MonthEnd> 165μs
<MonthBegin> 113μs
<DateOffset: kwds={'months': 2, 'days': 2}> 61.1μs
<BusinessDay> 76.8μs
<SemiMonthEnd: day_of_month=15> 170μs
<SemiMonthBegin: day_of_month=15> 168μs
============================================= ========
[ 50.00%] ··· Running offset.OffestDatetimeArithmetic.time_apply_np_dt64 ok
[ 50.00%] ····
============================================= ========
offset
--------------------------------------------- --------
<Day> 104μs
<BusinessYearEnd: month=12> 199μs
<BusinessYearBegin: month=1> 121μs
<BusinessQuarterEnd: startingMonth=3> 105μs
<BusinessQuarterBegin: startingMonth=3> 123μs
<BusinessMonthEnd> 121μs
<BusinessMonthBegin> 137μs
<CustomBusinessDay> 86.2μs
<CustomBusinessDay> 88.0μs
<CustomBusinessMonthBegin> 411μs
<CustomBusinessMonthEnd> 420μs
<CustomBusinessMonthEnd> 421μs
<YearEnd: month=12> 60.4μs
<YearBegin: month=1> 58.5μs
<QuarterEnd: startingMonth=3> 103μs
<QuarterBegin: startingMonth=3> 119μs
<MonthEnd> 174μs
<MonthBegin> 118μs
<DateOffset: kwds={'months': 2, 'days': 2}> 69.0μs
<BusinessDay> 82.6μs
<SemiMonthEnd: day_of_month=15> 178μs
<SemiMonthBegin: day_of_month=15> 173μs
============================================= ========
[ 60.00%] ··· Running offset.OffestDatetimeArithmetic.time_subtract ok
[ 60.00%] ····
============================================= ========
offset
--------------------------------------------- --------
<Day> 125μs
<BusinessYearEnd: month=12> 165μs
<BusinessYearBegin: month=1> 165μs
<BusinessQuarterEnd: startingMonth=3> 117μs
<BusinessQuarterBegin: startingMonth=3> 136μs
<BusinessMonthEnd> 145μs
<BusinessMonthBegin> 144μs
<CustomBusinessDay> 114μs
<CustomBusinessDay> 105μs
<CustomBusinessMonthBegin> 380μs
<CustomBusinessMonthEnd> 564μs
<CustomBusinessMonthEnd> 569μs
<YearEnd: month=12> 104μs
<YearBegin: month=1> 104μs
<QuarterEnd: startingMonth=3> 114μs
<QuarterBegin: startingMonth=3> 130μs
<MonthEnd> 209μs
<MonthBegin> 140μs
<DateOffset: kwds={'months': 2, 'days': 2}> 129μs
<BusinessDay> 85.2μs
<SemiMonthEnd: day_of_month=15> 191μs
<SemiMonthBegin: day_of_month=15> 188μs
============================================= ========
[ 70.00%] ··· Running offset.OffestDatetimeArithmetic.time_subtract_10 ok
[ 70.00%] ····
============================================= =======
offset
--------------------------------------------- -------
<Day> 139μs
<BusinessYearEnd: month=12> 198μs
<BusinessYearBegin: month=1> 196μs
<BusinessQuarterEnd: startingMonth=3> 126μs
<BusinessQuarterBegin: startingMonth=3> 146μs
<BusinessMonthEnd> 157μs
<BusinessMonthBegin> 158μs
<CustomBusinessDay> 144μs
<CustomBusinessDay> 114μs
<CustomBusinessMonthBegin> 386μs
<CustomBusinessMonthEnd> 482μs
<CustomBusinessMonthEnd> 478μs
<YearEnd: month=12> 156μs
<YearBegin: month=1> 136μs
<QuarterEnd: startingMonth=3> 123μs
<QuarterBegin: startingMonth=3> 140μs
<MonthEnd> 210μs
<MonthBegin> 154μs
<DateOffset: kwds={'months': 2, 'days': 2}> 516μs
<BusinessDay> 115μs
<SemiMonthEnd: day_of_month=15> 197μs
<SemiMonthBegin: day_of_month=15> 195μs
============================================= =======
[ 80.00%] ··· Running offset.OffsetDatetimeIndexArithmetic.time_add_offset ok
[ 80.00%] ····
============================================= ========
offset
--------------------------------------------- --------
<Day> 1.14ms
<BusinessYearEnd: month=12> 1.97s
<BusinessYearBegin: month=1> 1.08s
<BusinessQuarterEnd: startingMonth=3> 920ms
<BusinessQuarterBegin: startingMonth=3> 1.10s
<BusinessMonthEnd> 1.07s
<BusinessMonthBegin> 1.14s
<CustomBusinessDay> 752ms
<CustomBusinessDay> 754ms
<CustomBusinessMonthBegin> 4.56s
<CustomBusinessMonthEnd> 4.04s
<CustomBusinessMonthEnd> 4.02s
<YearEnd: month=12> 172ms
<YearBegin: month=1> 7.10ms
<QuarterEnd: startingMonth=3> 191ms
<QuarterBegin: startingMonth=3> 7.13ms
<MonthEnd> 1.48ms
<MonthBegin> 1.40ms
<DateOffset: kwds={'months': 2, 'days': 2}> 3.09ms
<BusinessDay> 137ms
<SemiMonthEnd: day_of_month=15> 141ms
<SemiMonthBegin: day_of_month=15> 136ms
============================================= ========
[ 90.00%] ··· Running offset.OffsetSeriesArithmetic.time_add_offset ok
[ 90.00%] ····
============================================= =======
offset
--------------------------------------------- -------
<Day> 416ms
<BusinessYearEnd: month=12> 2.39s
<BusinessYearBegin: month=1> 1.50s
<BusinessQuarterEnd: startingMonth=3> 1.34s
<BusinessQuarterBegin: startingMonth=3> 1.51s
<BusinessMonthEnd> 1.49s
<BusinessMonthBegin> 1.56s
<CustomBusinessDay> 1.17s
<CustomBusinessDay> 1.18s
<CustomBusinessMonthBegin> 5.02s
<CustomBusinessMonthEnd> 4.49s
<CustomBusinessMonthEnd> 4.46s
<YearEnd: month=12> 588ms
<YearBegin: month=1> 426ms
<QuarterEnd: startingMonth=3> 611ms
<QuarterBegin: startingMonth=3> 425ms
<MonthEnd> 422ms
<MonthBegin> 420ms
<DateOffset: kwds={'months': 2, 'days': 2}> 422ms
<BusinessDay> 549ms
<SemiMonthEnd: day_of_month=15> 559ms
<SemiMonthBegin: day_of_month=15> 561ms
============================================= =======
[100.00%] ··· Running offset.OnOffset.time_on_offset ok
[100.00%] ····
============================================= ========
offset
--------------------------------------------- --------
<Day> 28.1μs
<BusinessYearEnd: month=12> 6.93ms
<BusinessYearBegin: month=1> 5.21ms
<BusinessQuarterEnd: startingMonth=3> 5.61ms
<BusinessQuarterBegin: startingMonth=3> 4.61ms
<BusinessMonthEnd> 5.33ms
<BusinessMonthBegin> 34.6μs
<CustomBusinessDay> 409μs
<CustomBusinessDay> 408μs
<CustomBusinessMonthBegin> 16.1ms
<CustomBusinessMonthEnd> 16.3ms
<CustomBusinessMonthEnd> 16.3ms
<YearEnd: month=12> 34.7μs
<YearBegin: month=1> 21.3μs
<QuarterEnd: startingMonth=3> 252μs
<QuarterBegin: startingMonth=3> 4.48ms
<MonthEnd> 34.5μs
<MonthBegin> 18.8μs
<DateOffset: kwds={'months': 2, 'days': 2}> 21.4μs
<BusinessDay> 21.0μs
<SemiMonthEnd: day_of_month=15> 38.6μs
<SemiMonthBegin: day_of_month=15> 22.6μs
============================================= ========
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/18926 | 2017-12-24T08:26:34Z | 2017-12-31T14:54:14Z | 2017-12-31T14:54:14Z | 2017-12-31T23:55:21Z |
DOC: Clarify dispatch behavior of read_sql | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 26874a57c66f7..0d398ad3135a6 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -337,15 +337,22 @@ def read_sql(sql, con, index_col=None, coerce_float=True, params=None,
"""
Read SQL query or database table into a DataFrame.
+ This function is a convenience wrapper around ``read_sql_table`` and
+ ``read_sql_query`` (for backward compatibility). It will delegate
+ to the specific function depending on the provided input. A SQL query
+ will be routed to ``read_sql_query``, while a database table name will
+ be routed to ``read_sql_table``. Note that the delegated function might
+ have more specific notes about their functionality not listed here.
+
Parameters
----------
sql : string or SQLAlchemy Selectable (select or text object)
- SQL query to be executed.
- con : SQLAlchemy connectable(engine/connection) or database string URI
+ SQL query to be executed or a table name.
+ con : SQLAlchemy connectable (engine/connection) or database string URI
or DBAPI2 connection (fallback mode)
+
Using SQLAlchemy makes it possible to use any DB supported by that
- library.
- If a DBAPI2 object, only sqlite3 is supported.
+ library. If a DBAPI2 object, only sqlite3 is supported.
index_col : string or list of strings, optional, default: None
Column(s) to set as index(MultiIndex).
coerce_float : boolean, default True
@@ -377,14 +384,6 @@ def read_sql(sql, con, index_col=None, coerce_float=True, params=None,
-------
DataFrame
- Notes
- -----
- This function is a convenience wrapper around ``read_sql_table`` and
- ``read_sql_query`` (and for backward compatibility) and will delegate
- to the specific function depending on the provided input (database
- table name or SQL query). The delegated function might have more specific
- notes about their functionality not listed here.
-
See also
--------
read_sql_table : Read SQL database table into a DataFrame.
The dispatch behavior wasn't particularly clear or prominent in the docs.
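The delegation rule being documented here can be sketched in a few lines. This is a simplified stand-in, not the actual pandas internals (pandas probes the database to see whether the string names an existing table):

```python
def dispatch_read_sql(sql):
    """Route a bare table name to read_sql_table and anything else
    (an actual SQL statement) to read_sql_query -- simplified sketch."""
    if sql.strip().isidentifier():
        # Looks like a plain table name.
        return 'read_sql_table'
    # Contains spaces/keywords, so treat it as a query.
    return 'read_sql_query'
```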
Closes #18861. | https://api.github.com/repos/pandas-dev/pandas/pulls/18925 | 2017-12-24T06:45:31Z | 2017-12-26T22:14:54Z | 2017-12-26T22:14:54Z | 2017-12-27T07:24:45Z |
BUG: fix issue with concat creating SparseFrame if not all series are sparse. | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 1890636bc8e1a..42f5e65bd6974 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -537,6 +537,7 @@ Reshaping
- Bug in :func:`DataFrame.merge` in which merging using ``Index`` objects as vectors raised an Exception (:issue:`19038`)
- Bug in :func:`DataFrame.stack`, :func:`DataFrame.unstack`, :func:`Series.unstack` which were not returning subclasses (:issue:`15563`)
- Bug in timezone comparisons, manifesting as a conversion of the index to UTC in ``.concat()`` (:issue:`18523`)
+- Bug in :func:`concat` when concatting sparse and dense series it returns only a ``SparseDataFrame``. Should be a ``DataFrame``. (:issue:`18914`, :issue:`18686`, and :issue:`16874`)
-
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index 3e54ce61cd5b2..ddecbe85087d8 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -19,7 +19,7 @@
_TD_DTYPE)
from pandas.core.dtypes.generic import (
ABCDatetimeIndex, ABCTimedeltaIndex,
- ABCPeriodIndex, ABCRangeIndex)
+ ABCPeriodIndex, ABCRangeIndex, ABCSparseDataFrame)
def get_dtype_kinds(l):
@@ -89,14 +89,16 @@ def _get_series_result_type(result, objs=None):
def _get_frame_result_type(result, objs):
"""
return appropriate class of DataFrame-like concat
- if any block is SparseBlock, return SparseDataFrame
+ if all blocks are SparseBlock, return SparseDataFrame
otherwise, return 1st obj
"""
- if any(b.is_sparse for b in result.blocks):
+
+ if result.blocks and all(b.is_sparse for b in result.blocks):
from pandas.core.sparse.api import SparseDataFrame
return SparseDataFrame
else:
- return objs[0]
+ return next(obj for obj in objs if not isinstance(obj,
+ ABCSparseDataFrame))
def _concat_compat(to_concat, axis=0):
diff --git a/pandas/core/dtypes/generic.py b/pandas/core/dtypes/generic.py
index 6fae09c43d2be..b032cb6f14d4c 100644
--- a/pandas/core/dtypes/generic.py
+++ b/pandas/core/dtypes/generic.py
@@ -43,6 +43,8 @@ def _check(cls, inst):
ABCSeries = create_pandas_abc_type("ABCSeries", "_typ", ("series", ))
ABCDataFrame = create_pandas_abc_type("ABCDataFrame", "_typ", ("dataframe", ))
+ABCSparseDataFrame = create_pandas_abc_type("ABCSparseDataFrame", "_subtyp",
+ ("sparse_frame", ))
ABCPanel = create_pandas_abc_type("ABCPanel", "_typ", ("panel",))
ABCSparseSeries = create_pandas_abc_type("ABCSparseSeries", "_subtyp",
('sparse_series',
diff --git a/pandas/tests/dtypes/test_generic.py b/pandas/tests/dtypes/test_generic.py
index 58cb182e7d403..53f92b98f022e 100644
--- a/pandas/tests/dtypes/test_generic.py
+++ b/pandas/tests/dtypes/test_generic.py
@@ -18,6 +18,7 @@ class TestABCClasses(object):
df = pd.DataFrame({'names': ['a', 'b', 'c']}, index=multi_index)
sparse_series = pd.Series([1, 2, 3]).to_sparse()
sparse_array = pd.SparseArray(np.random.randn(10))
+ sparse_frame = pd.SparseDataFrame({'a': [1, -1, None]})
def test_abc_types(self):
assert isinstance(pd.Index(['a', 'b', 'c']), gt.ABCIndex)
@@ -37,6 +38,7 @@ def test_abc_types(self):
assert isinstance(self.df.to_panel(), gt.ABCPanel)
assert isinstance(self.sparse_series, gt.ABCSparseSeries)
assert isinstance(self.sparse_array, gt.ABCSparseArray)
+ assert isinstance(self.sparse_frame, gt.ABCSparseDataFrame)
assert isinstance(self.categorical, gt.ABCCategorical)
assert isinstance(pd.Period('2012', freq='A-DEC'), gt.ABCPeriod)
diff --git a/pandas/tests/reshape/test_reshape.py b/pandas/tests/reshape/test_reshape.py
index 22925cceb30d1..c9d079421532f 100644
--- a/pandas/tests/reshape/test_reshape.py
+++ b/pandas/tests/reshape/test_reshape.py
@@ -454,6 +454,15 @@ def test_dataframe_dummies_preserve_categorical_dtype(self, dtype):
tm.assert_frame_equal(result, expected)
+ @pytest.mark.parametrize('sparse', [True, False])
+ def test_get_dummies_dont_sparsify_all_columns(self, sparse):
+ # GH18914
+ df = DataFrame.from_items([('GDP', [1, 2]), ('Nation', ['AB', 'CD'])])
+ df = get_dummies(df, columns=['Nation'], sparse=sparse)
+ df2 = df.reindex(columns=['GDP'])
+
+ tm.assert_frame_equal(df[['GDP']], df2)
+
class TestCategoricalReshape(object):
diff --git a/pandas/tests/sparse/test_combine_concat.py b/pandas/tests/sparse/test_combine_concat.py
index 15639fbe156c6..70fd1da529d46 100644
--- a/pandas/tests/sparse/test_combine_concat.py
+++ b/pandas/tests/sparse/test_combine_concat.py
@@ -1,8 +1,10 @@
# pylint: disable-msg=E1101,W0612
+import pytest
import numpy as np
import pandas as pd
import pandas.util.testing as tm
+import itertools
class TestSparseSeriesConcat(object):
@@ -317,37 +319,52 @@ def test_concat_axis1(self):
assert isinstance(res, pd.SparseDataFrame)
tm.assert_frame_equal(res.to_dense(), exp)
- def test_concat_sparse_dense(self):
- sparse = self.dense1.to_sparse()
-
- res = pd.concat([sparse, self.dense2])
- exp = pd.concat([self.dense1, self.dense2])
- assert isinstance(res, pd.SparseDataFrame)
- tm.assert_frame_equal(res.to_dense(), exp)
-
- res = pd.concat([self.dense2, sparse])
- exp = pd.concat([self.dense2, self.dense1])
- assert isinstance(res, pd.SparseDataFrame)
- tm.assert_frame_equal(res.to_dense(), exp)
-
- sparse = self.dense1.to_sparse(fill_value=0)
-
- res = pd.concat([sparse, self.dense2])
- exp = pd.concat([self.dense1, self.dense2])
- assert isinstance(res, pd.SparseDataFrame)
- tm.assert_frame_equal(res.to_dense(), exp)
-
- res = pd.concat([self.dense2, sparse])
- exp = pd.concat([self.dense2, self.dense1])
- assert isinstance(res, pd.SparseDataFrame)
- tm.assert_frame_equal(res.to_dense(), exp)
-
- res = pd.concat([self.dense3, sparse], axis=1)
- exp = pd.concat([self.dense3, self.dense1], axis=1)
- assert isinstance(res, pd.SparseDataFrame)
- tm.assert_frame_equal(res, exp)
-
- res = pd.concat([sparse, self.dense3], axis=1)
- exp = pd.concat([self.dense1, self.dense3], axis=1)
- assert isinstance(res, pd.SparseDataFrame)
- tm.assert_frame_equal(res, exp)
+ @pytest.mark.parametrize('fill_value,sparse_idx,dense_idx',
+ itertools.product([None, 0, 1, np.nan],
+ [0, 1],
+ [1, 0]))
+ def test_concat_sparse_dense_rows(self, fill_value, sparse_idx, dense_idx):
+ frames = [self.dense1, self.dense2]
+ sparse_frame = [frames[dense_idx],
+ frames[sparse_idx].to_sparse(fill_value=fill_value)]
+ dense_frame = [frames[dense_idx], frames[sparse_idx]]
+
+ # This will try both directions sparse + dense and dense + sparse
+ for _ in range(2):
+ res = pd.concat(sparse_frame)
+ exp = pd.concat(dense_frame)
+
+ assert isinstance(res, pd.SparseDataFrame)
+ tm.assert_frame_equal(res.to_dense(), exp)
+
+ sparse_frame = sparse_frame[::-1]
+ dense_frame = dense_frame[::-1]
+
+ @pytest.mark.parametrize('fill_value,sparse_idx,dense_idx',
+ itertools.product([None, 0, 1, np.nan],
+ [0, 1],
+ [1, 0]))
+ def test_concat_sparse_dense_cols(self, fill_value, sparse_idx, dense_idx):
+ # See GH16874, GH18914 and #18686 for why this should be a DataFrame
+
+ frames = [self.dense1, self.dense3]
+
+ sparse_frame = [frames[dense_idx],
+ frames[sparse_idx].to_sparse(fill_value=fill_value)]
+ dense_frame = [frames[dense_idx], frames[sparse_idx]]
+
+ # This will try both directions sparse + dense and dense + sparse
+ for _ in range(2):
+ res = pd.concat(sparse_frame, axis=1)
+ exp = pd.concat(dense_frame, axis=1)
+
+ for column in frames[dense_idx].columns:
+ if dense_idx == sparse_idx:
+ tm.assert_frame_equal(res[column], exp[column])
+ else:
+ tm.assert_series_equal(res[column], exp[column])
+
+ tm.assert_frame_equal(res, exp)
+
+ sparse_frame = sparse_frame[::-1]
+ dense_frame = dense_frame[::-1]
| After trying a few things out, this seems to be the problem:
when concatenating multiple DataFrames, if any of the contained series are sparse, the entire result becomes sparse (a `SparseDataFrame`).
That is in fact not what we want: we want a `DataFrame` that contains both a `SparseSeries` and a dense `Series`.
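The fix in the diff boils down to replacing `any` with `all` when deciding the result container. A toy stand-in (plain objects, not pandas blocks) showing the patched rule:

```python
class Block(object):
    # Minimal stand-in for a pandas internals block.
    def __init__(self, is_sparse):
        self.is_sparse = is_sparse


def result_type(blocks):
    # Patched rule: return a sparse container only when *every*
    # block is sparse; any mixed result stays a plain DataFrame.
    if blocks and all(b.is_sparse for b in blocks):
        return 'SparseDataFrame'
    return 'DataFrame'
```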
closes #18914,
closes #18686
closes #16874
closes #18551
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/18924 | 2017-12-24T02:41:15Z | 2018-02-01T13:09:19Z | 2018-02-01T13:09:18Z | 2018-02-01T13:20:14Z |
COMPAT-18589: Supporting axis in Series.rename | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 72f63a4da0f4d..a8d35602b9185 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -840,6 +840,7 @@ Reshaping
- Bug in :func:`concat` when concatting sparse and dense series it returns only a ``SparseDataFrame``. Should be a ``DataFrame``. (:issue:`18914`, :issue:`18686`, and :issue:`16874`)
- Improved error message for :func:`DataFrame.merge` when there is no common merge key (:issue:`19427`)
- Bug in :func:`DataFrame.join` which does an *outer* instead of a *left* join when being called with multiple DataFrames and some have non-unique indices (:issue:`19624`)
+- :func:`Series.rename` now accepts ``axis`` as a kwarg (:issue:`18589`)
Other
^^^^^
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 35f866c9e7d58..297450417e3cf 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -863,6 +863,9 @@ def rename(self, *args, **kwargs):
copy = kwargs.pop('copy', True)
inplace = kwargs.pop('inplace', False)
level = kwargs.pop('level', None)
+ axis = kwargs.pop('axis', None)
+ if axis is not None:
+ axis = self._get_axis_number(axis)
if kwargs:
raise TypeError('rename() got an unexpected keyword '
diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py
index 714e43a4af1f8..dce4e82cbdcf1 100644
--- a/pandas/tests/series/test_alter_axes.py
+++ b/pandas/tests/series/test_alter_axes.py
@@ -81,6 +81,14 @@ def test_rename_set_name_inplace(self):
exp = np.array(['a', 'b', 'c'], dtype=np.object_)
tm.assert_numpy_array_equal(s.index.values, exp)
+ def test_rename_axis_supported(self):
+ # Supporting axis for compatibility, detailed in GH-18589
+ s = Series(range(5))
+ s.rename({}, axis=0)
+ s.rename({}, axis='index')
+ with tm.assert_raises_regex(ValueError, 'No axis named 5'):
+ s.rename({}, axis=5)
+
def test_set_name_attribute(self):
s = Series([1, 2, 3])
s2 = Series([1, 2, 3], name='bar')
| - [x] closes #18589
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
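The change above only validates the `axis` keyword (a `Series` has a single axis), so `Series.rename` matches `DataFrame.rename`'s signature. A sketch of the validation, with a hypothetical helper standing in for pandas' `_get_axis_number`:

```python
def get_axis_number(axis):
    # Hypothetical stand-in: a Series only has axis 0 / 'index'.
    if axis in (0, 'index'):
        return 0
    raise ValueError('No axis named {} for object type Series'.format(axis))


def rename(mapper, axis=None):
    # Validate axis for compatibility, then ignore it: one axis only.
    if axis is not None:
        axis = get_axis_number(axis)
    return 'renamed with {!r}'.format(mapper)
```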
| https://api.github.com/repos/pandas-dev/pandas/pulls/18923 | 2017-12-23T19:02:31Z | 2018-02-14T11:12:08Z | 2018-02-14T11:12:08Z | 2018-02-14T12:05:58Z |
DEPR: Added is_copy to NDFrame._deprecations | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index e9dd82eb64834..f2dbb3ef4d32a 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -115,7 +115,7 @@ class NDFrame(PandasObject, SelectionMixin):
_internal_names_set = set(_internal_names)
_accessors = frozenset([])
_deprecations = frozenset(['as_blocks', 'blocks',
- 'consolidate', 'convert_objects'])
+ 'consolidate', 'convert_objects', 'is_copy'])
_metadata = []
_is_copy = None
| Should have been part of PR #18801
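For context: pandas keeps `_deprecations` as a frozenset and, as I understand the mechanism, filters those names out of `__dir__` so deprecated attributes stop showing up in tab completion. A minimal sketch of that pattern (assumed and simplified, not the pandas implementation):

```python
class Frame(object):
    is_copy = None  # deprecated attribute, kept for compatibility
    _deprecations = frozenset(['as_blocks', 'blocks', 'consolidate',
                               'convert_objects', 'is_copy'])

    def __dir__(self):
        # Hide deprecated names from dir() / tab completion.
        names = set(super(Frame, self).__dir__())
        return sorted(names - self._deprecations)
```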
| https://api.github.com/repos/pandas-dev/pandas/pulls/18922 | 2017-12-23T15:54:43Z | 2017-12-23T19:42:27Z | 2017-12-23T19:42:27Z | 2017-12-23T19:42:31Z |
Breaking changes for sum / prod of empty / all-NA | diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.22.0.txt
index 2d30e00142846..8617aa6c03e1f 100644
--- a/doc/source/whatsnew/v0.22.0.txt
+++ b/doc/source/whatsnew/v0.22.0.txt
@@ -3,12 +3,218 @@
v0.22.0
-------
-This is a major release from 0.21.1 and includes a number of API changes,
-deprecations, new features, enhancements, and performance improvements along
-with a large number of bug fixes. We recommend that all users upgrade to this
-version.
+This is a major release from 0.21.1 and includes a single, API-breaking change.
+We recommend that all users upgrade to this version after carefully reading the
+release note (singular!).
.. _whatsnew_0220.api_breaking:
Backwards incompatible API changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pandas 0.22.0 changes the handling of empty and all-*NA* sums and products. The
+summary is that
+
+* The sum of an empty or all-*NA* ``Series`` is now ``0``
+* The product of an empty or all-*NA* ``Series`` is now ``1``
+* We've added a ``min_count`` parameter to ``.sum()`` and ``.prod()`` controlling
+ the minimum number of valid values for the result to be valid. If fewer than
+ ``min_count`` non-*NA* values are present, the result is *NA*. The default is
+ ``0``. To return ``NaN``, the 0.21 behavior, use ``min_count=1``.
+
+Some background: In pandas 0.21, we fixed a long-standing inconsistency
+in the return value of all-*NA* series depending on whether or not bottleneck
+was installed. See :ref:`whatsnew_0210.api_breaking.bottleneck`. At the same
+time, we changed the sum and prod of an empty ``Series`` to also be ``NaN``.
+
+Based on feedback, we've partially reverted those changes.
+
+Arithmetic Operations
+^^^^^^^^^^^^^^^^^^^^^
+
+The default sum for empty or all-*NA* ``Series`` is now ``0``.
+
+*pandas 0.21.x*
+
+.. code-block:: ipython
+
+ In [1]: pd.Series([]).sum()
+ Out[1]: nan
+
+ In [2]: pd.Series([np.nan]).sum()
+ Out[2]: nan
+
+*pandas 0.22.0*
+
+.. ipython:: python
+
+ pd.Series([]).sum()
+ pd.Series([np.nan]).sum()
+
+The default behavior is the same as pandas 0.20.3 with bottleneck installed. It
+also matches the behavior of NumPy's ``np.nansum`` on empty and all-*NA* arrays.
+
+To have the sum of an empty series return ``NaN`` (the default behavior of
+pandas 0.20.3 without bottleneck, or pandas 0.21.x), use the ``min_count``
+keyword.
+
+.. ipython:: python
+
+ pd.Series([]).sum(min_count=1)
+
+Thanks to the ``skipna`` parameter, the ``.sum`` on an all-*NA*
+series is conceptually the same as the ``.sum`` of an empty one with
+``skipna=True`` (the default).
+
+.. ipython:: python
+
+ pd.Series([np.nan]).sum(min_count=1) # skipna=True by default
+
+The ``min_count`` parameter refers to the minimum number of *non-null* values
+required for a non-NA sum or product.
+
+:meth:`Series.prod` has been updated to behave the same as :meth:`Series.sum`,
+returning ``1`` instead.
+
+.. ipython:: python
+
+ pd.Series([]).prod()
+ pd.Series([np.nan]).prod()
+ pd.Series([]).prod(min_count=1)
+
+These changes affect :meth:`DataFrame.sum` and :meth:`DataFrame.prod` as well.
+Finally, a few less obvious places in pandas are affected by this change.
+
+Grouping by a Categorical
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Grouping by a ``Categorical`` and summing now returns ``0`` instead of
+``NaN`` for categories with no observations. The product now returns ``1``
+instead of ``NaN``.
+
+*pandas 0.21.x*
+
+.. code-block:: ipython
+
+ In [8]: grouper = pd.Categorical(['a', 'a'], categories=['a', 'b'])
+
+ In [9]: pd.Series([1, 2]).groupby(grouper).sum()
+ Out[9]:
+ a 3.0
+ b NaN
+ dtype: float64
+
+*pandas 0.22*
+
+.. ipython:: python
+
+ grouper = pd.Categorical(['a', 'a'], categories=['a', 'b'])
+ pd.Series([1, 2]).groupby(grouper).sum()
+
+To restore the 0.21 behavior of returning ``NaN`` for unobserved groups,
+use ``min_count>=1``.
+
+.. ipython:: python
+
+ pd.Series([1, 2]).groupby(grouper).sum(min_count=1)
+
+Resample
+^^^^^^^^
+
+The sum and product of all-*NA* bins have changed: the sum of such a bin is
+now ``0`` rather than ``NaN``, and the product is ``1``.
+
+*pandas 0.21.x*
+
+.. code-block:: ipython
+
+ In [11]: s = pd.Series([1, 1, np.nan, np.nan],
+ ...: index=pd.date_range('2017', periods=4))
+ ...: s
+ Out[11]:
+ 2017-01-01 1.0
+ 2017-01-02 1.0
+ 2017-01-03 NaN
+ 2017-01-04 NaN
+ Freq: D, dtype: float64
+
+ In [12]: s.resample('2d').sum()
+ Out[12]:
+ 2017-01-01 2.0
+ 2017-01-03 NaN
+ Freq: 2D, dtype: float64
+
+*pandas 0.22.0*
+
+.. ipython:: python
+
+ s = pd.Series([1, 1, np.nan, np.nan],
+ index=pd.date_range('2017', periods=4))
+ s.resample('2d').sum()
+
+To restore the 0.21 behavior of returning ``NaN``, use ``min_count>=1``.
+
+.. ipython:: python
+
+ s.resample('2d').sum(min_count=1)
+
+In particular, upsampling and taking the sum or product is affected, as
+upsampling introduces missing values even if the original series was
+entirely valid.
+
+*pandas 0.21.x*
+
+.. code-block:: ipython
+
+ In [14]: idx = pd.DatetimeIndex(['2017-01-01', '2017-01-02'])
+
+ In [15]: pd.Series([1, 2], index=idx).resample('12H').sum()
+ Out[15]:
+ 2017-01-01 00:00:00 1.0
+ 2017-01-01 12:00:00 NaN
+ 2017-01-02 00:00:00 2.0
+ Freq: 12H, dtype: float64
+
+*pandas 0.22.0*
+
+.. ipython:: python
+
+ idx = pd.DatetimeIndex(['2017-01-01', '2017-01-02'])
+ pd.Series([1, 2], index=idx).resample("12H").sum()
+
+Once again, the ``min_count`` keyword is available to restore the 0.21 behavior.
+
+.. ipython:: python
+
+ pd.Series([1, 2], index=idx).resample("12H").sum(min_count=1)
+
+Rolling and Expanding
+^^^^^^^^^^^^^^^^^^^^^
+
+Rolling and expanding already have a ``min_periods`` keyword that behaves
+similarly to ``min_count``. The only case that changes is a rolling or
+expanding sum with ``min_periods=0``. Previously this returned ``NaN``
+when fewer than ``min_periods`` non-*NA* values were in the window. Now it
+returns ``0``.
+
+*pandas 0.21.1*
+
+.. code-block:: ipython
+
+ In [17]: s = pd.Series([np.nan, np.nan])
+
+ In [18]: s.rolling(2, min_periods=0).sum()
+ Out[18]:
+ 0 NaN
+ 1 NaN
+ dtype: float64
+
+*pandas 0.22.0*
+
+.. ipython:: python
+
+ s = pd.Series([np.nan, np.nan])
+ s.rolling(2, min_periods=0).sum()
+
+The default behavior of ``min_periods=None``, implying that ``min_periods``
+equals the window size, is unchanged.
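To make the unchanged default concrete (a minimal sketch, assuming
pandas >= 0.22):

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, np.nan])

# min_periods defaults to the window size (2), so all-NA windows
# still produce NaN
default = s.rolling(2).sum()

# Only min_periods=0 changed: the all-NA windows now yield 0.0
explicit = s.rolling(2, min_periods=0).sum()
```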
diff --git a/pandas/_libs/groupby_helper.pxi.in b/pandas/_libs/groupby_helper.pxi.in
index 16b7cbff44e03..14d47398ac1df 100644
--- a/pandas/_libs/groupby_helper.pxi.in
+++ b/pandas/_libs/groupby_helper.pxi.in
@@ -37,7 +37,7 @@ def group_add_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[int64_t] counts,
ndarray[{{c_type}}, ndim=2] values,
ndarray[int64_t] labels,
- Py_ssize_t min_count=1):
+ Py_ssize_t min_count=0):
"""
Only aggregates on axis=0
"""
@@ -101,7 +101,7 @@ def group_prod_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[int64_t] counts,
ndarray[{{c_type}}, ndim=2] values,
ndarray[int64_t] labels,
- Py_ssize_t min_count=1):
+ Py_ssize_t min_count=0):
"""
Only aggregates on axis=0
"""
diff --git a/pandas/_libs/window.pyx b/pandas/_libs/window.pyx
index ecce45742afa7..e46bf24c36f18 100644
--- a/pandas/_libs/window.pyx
+++ b/pandas/_libs/window.pyx
@@ -220,14 +220,16 @@ cdef class VariableWindowIndexer(WindowIndexer):
right_closed: bint
right endpoint closedness
True if the right endpoint is closed, False if open
-
+ floor: optional
+ unit used as a floor for ``minp``
"""
def __init__(self, ndarray input, int64_t win, int64_t minp,
- bint left_closed, bint right_closed, ndarray index):
+ bint left_closed, bint right_closed, ndarray index,
+ object floor=None):
self.is_variable = 1
self.N = len(index)
- self.minp = _check_minp(win, minp, self.N)
+ self.minp = _check_minp(win, minp, self.N, floor=floor)
self.start = np.empty(self.N, dtype='int64')
self.start.fill(-1)
@@ -342,7 +344,7 @@ def get_window_indexer(input, win, minp, index, closed,
if index is not None:
indexer = VariableWindowIndexer(input, win, minp, left_closed,
- right_closed, index)
+ right_closed, index, floor)
elif use_mock:
indexer = MockFixedWindowIndexer(input, win, minp, left_closed,
right_closed, index, floor)
@@ -441,7 +443,7 @@ def roll_sum(ndarray[double_t] input, int64_t win, int64_t minp,
object index, object closed):
cdef:
double val, prev_x, sum_x = 0
- int64_t s, e
+ int64_t s, e, range_endpoint
int64_t nobs = 0, i, j, N
bint is_variable
ndarray[int64_t] start, end
@@ -449,7 +451,8 @@ def roll_sum(ndarray[double_t] input, int64_t win, int64_t minp,
start, end, N, win, minp, is_variable = get_window_indexer(input, win,
minp, index,
- closed)
+ closed,
+ floor=0)
output = np.empty(N, dtype=float)
# for performance we are going to iterate
@@ -489,13 +492,15 @@ def roll_sum(ndarray[double_t] input, int64_t win, int64_t minp,
# fixed window
+ range_endpoint = int_max(minp, 1) - 1
+
with nogil:
- for i in range(0, minp - 1):
+ for i in range(0, range_endpoint):
add_sum(input[i], &nobs, &sum_x)
output[i] = NaN
- for i in range(minp - 1, N):
+ for i in range(range_endpoint, N):
val = input[i]
add_sum(val, &nobs, &sum_x)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 2acf64f1d9f74..c5359ba2c5ea1 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7619,48 +7619,48 @@ def _doc_parms(cls):
_sum_examples = """\
Examples
--------
-By default, the sum of an empty series is ``NaN``.
+By default, the sum of an empty or all-NA Series is ``0``.
->>> pd.Series([]).sum() # min_count=1 is the default
-nan
+>>> pd.Series([]).sum() # min_count=0 is the default
+0.0
This can be controlled with the ``min_count`` parameter. For example, if
-you'd like the sum of an empty series to be 0, pass ``min_count=0``.
+you'd like the sum of an empty series to be NaN, pass ``min_count=1``.
->>> pd.Series([]).sum(min_count=0)
-0.0
+>>> pd.Series([]).sum(min_count=1)
+nan
Thanks to the ``skipna`` parameter, ``min_count`` handles all-NA and
empty series identically.
>>> pd.Series([np.nan]).sum()
-nan
-
->>> pd.Series([np.nan]).sum(min_count=0)
0.0
+
+>>> pd.Series([np.nan]).sum(min_count=1)
+nan
"""
_prod_examples = """\
Examples
--------
-By default, the product of an empty series is ``NaN``
+By default, the product of an empty or all-NA Series is ``1``
>>> pd.Series([]).prod()
-nan
+1.0
This can be controlled with the ``min_count`` parameter
->>> pd.Series([]).prod(min_count=0)
-1.0
+>>> pd.Series([]).prod(min_count=1)
+nan
Thanks to the ``skipna`` parameter, ``min_count`` handles all-NA and
empty series identically.
>>> pd.Series([np.nan]).prod()
-nan
-
->>> pd.Series([np.nan]).sum(min_count=0)
1.0
+
+>>> pd.Series([np.nan]).prod(min_count=1)
+nan
"""
@@ -7683,7 +7683,7 @@ def _make_min_count_stat_function(cls, name, name1, name2, axis_descr, desc,
examples=examples)
@Appender(_num_doc)
def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None,
- min_count=1,
+ min_count=0,
**kwargs):
nv.validate_stat_func(tuple(), kwargs, fname=name)
if skipna is None:
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 041239ed06d88..06b7dbb4ecf7b 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -1363,8 +1363,8 @@ def last(x):
else:
return last(x)
- cls.sum = groupby_function('sum', 'add', np.sum, min_count=1)
- cls.prod = groupby_function('prod', 'prod', np.prod, min_count=1)
+ cls.sum = groupby_function('sum', 'add', np.sum, min_count=0)
+ cls.prod = groupby_function('prod', 'prod', np.prod, min_count=0)
cls.min = groupby_function('min', 'min', np.min, numeric_only=False)
cls.max = groupby_function('max', 'max', np.max, numeric_only=False)
cls.first = groupby_function('first', 'first', first_compat,
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 88f69f6ff2e14..d1a355021f388 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -109,6 +109,11 @@ def f(values, axis=None, skipna=True, **kwds):
try:
if values.size == 0 and kwds.get('min_count') is None:
# We are empty, returning NA for our type
+ # Only applies for the default `min_count` of None
+ # since that affects how empty arrays are handled.
+ # TODO(GH-18976) update all the nanops methods to
+ # correctly handle empty inputs and remove this check.
+ # It *may* just be `var`
return _na_for_min_count(values, axis)
if (_USE_BOTTLENECK and skipna and
@@ -281,6 +286,20 @@ def _wrap_results(result, dtype):
def _na_for_min_count(values, axis):
+ """Return the missing value for `values`
+
+ Parameters
+ ----------
+ values : ndarray
+ axis : int or None
+ axis for the reduction
+
+ Returns
+ -------
+ result : scalar or ndarray
+ For 1-D values, returns a scalar of the correct missing type.
+ For 2-D values, returns a 1-D array where each element is missing.
+ """
# we either return np.nan or pd.NaT
if is_numeric_dtype(values):
values = values.astype('float64')
@@ -308,7 +327,7 @@ def nanall(values, axis=None, skipna=True):
@disallow('M8')
@bottleneck_switch()
-def nansum(values, axis=None, skipna=True, min_count=1):
+def nansum(values, axis=None, skipna=True, min_count=0):
values, mask, dtype, dtype_max = _get_values(values, skipna, 0)
dtype_sum = dtype_max
if is_float_dtype(dtype):
@@ -645,7 +664,7 @@ def nankurt(values, axis=None, skipna=True):
@disallow('M8', 'm8')
-def nanprod(values, axis=None, skipna=True, min_count=1):
+def nanprod(values, axis=None, skipna=True, min_count=0):
mask = isna(values)
if skipna and not is_any_int_dtype(values):
values = values.copy()
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index a30c727ecb87c..5447ce7470b9d 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -629,7 +629,7 @@ def size(self):
# downsample methods
for method in ['sum', 'prod']:
- def f(self, _method=method, min_count=1, *args, **kwargs):
+ def f(self, _method=method, min_count=0, *args, **kwargs):
nv.validate_resampler_func(_method, args, kwargs)
return self._downsample(_method, min_count=min_count)
f.__doc__ = getattr(GroupBy, method).__doc__
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 80e9acd0d2281..69f1aeddc43e9 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -478,7 +478,8 @@ def test_nunique(self):
Series({0: 1, 1: 3, 2: 2}))
def test_sum(self):
- self._check_stat_op('sum', np.sum, has_numeric_only=True)
+ self._check_stat_op('sum', np.sum, has_numeric_only=True,
+ skipna_alternative=np.nansum)
# mixed types (with upcasting happening)
self._check_stat_op('sum', np.sum,
@@ -753,7 +754,8 @@ def alt(x):
def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
has_numeric_only=False, check_dtype=True,
- check_dates=False, check_less_precise=False):
+ check_dates=False, check_less_precise=False,
+ skipna_alternative=None):
if frame is None:
frame = self.frame
# set some NAs
@@ -774,15 +776,11 @@ def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
assert len(result)
if has_skipna:
- def skipna_wrapper(x):
- nona = x.dropna()
- if len(nona) == 0:
- return np.nan
- return alternative(nona)
-
def wrapper(x):
return alternative(x.values)
+ skipna_wrapper = tm._make_skipna_wrapper(alternative,
+ skipna_alternative)
result0 = f(axis=0, skipna=False)
result1 = f(axis=1, skipna=False)
tm.assert_series_equal(result0, frame.apply(wrapper),
@@ -834,8 +832,11 @@ def wrapper(x):
r0 = getattr(all_na, name)(axis=0)
r1 = getattr(all_na, name)(axis=1)
if name in ['sum', 'prod']:
- assert np.isnan(r0).all()
- assert np.isnan(r1).all()
+ unit = int(name == 'prod')
+ expected = pd.Series(unit, index=r0.index, dtype=r0.dtype)
+ tm.assert_series_equal(r0, expected)
+ expected = pd.Series(unit, index=r1.index, dtype=r1.dtype)
+ tm.assert_series_equal(r1, expected)
def test_mode(self):
df = pd.DataFrame({"A": [12, 12, 11, 12, 19, 11],
@@ -982,11 +983,16 @@ def test_sum_prod_nanops(self, method, unit):
df = pd.DataFrame({"a": [unit, unit],
"b": [unit, np.nan],
"c": [np.nan, np.nan]})
+ # The default
+ result = getattr(df, method)()
+ expected = pd.Series([unit, unit, unit], index=idx, dtype='float64')
+ tm.assert_series_equal(result, expected)
+ # min_count=1
result = getattr(df, method)(min_count=1)
expected = pd.Series([unit, unit, np.nan], index=idx)
tm.assert_series_equal(result, expected)
+ # min_count=0
result = getattr(df, method)(min_count=0)
expected = pd.Series([unit, unit, unit], index=idx, dtype='float64')
tm.assert_series_equal(result, expected)
@@ -995,6 +1001,7 @@ def test_sum_prod_nanops(self, method, unit):
expected = pd.Series([unit, np.nan, np.nan], index=idx)
tm.assert_series_equal(result, expected)
+ # min_count > 1
df = pd.DataFrame({"A": [unit] * 10, "B": [unit] * 5 + [np.nan] * 5})
result = getattr(df, method)(min_count=5)
expected = pd.Series(result, index=['A', 'B'])
@@ -1004,6 +1011,29 @@ def test_sum_prod_nanops(self, method, unit):
expected = pd.Series(result, index=['A', 'B'])
tm.assert_series_equal(result, expected)
+ def test_sum_nanops_timedelta(self):
+ # prod isn't defined on timedeltas
+ idx = ['a', 'b', 'c']
+ df = pd.DataFrame({"a": [0, 0],
+ "b": [0, np.nan],
+ "c": [np.nan, np.nan]})
+
+ df2 = df.apply(pd.to_timedelta)
+
+ # 0 by default
+ result = df2.sum()
+ expected = pd.Series([0, 0, 0], dtype='m8[ns]', index=idx)
+ tm.assert_series_equal(result, expected)
+
+ # min_count=0
+ result = df2.sum(min_count=0)
+ tm.assert_series_equal(result, expected)
+
+ # min_count=1
+ result = df2.sum(min_count=1)
+ expected = pd.Series([0, 0, np.nan], dtype='m8[ns]', index=idx)
+ tm.assert_series_equal(result, expected)
+
def test_sum_object(self):
values = self.frame.values.astype(int)
frame = DataFrame(values, index=self.frame.index,
diff --git a/pandas/tests/groupby/test_aggregate.py b/pandas/tests/groupby/test_aggregate.py
index 07ecc085098bf..cca21fddd116e 100644
--- a/pandas/tests/groupby/test_aggregate.py
+++ b/pandas/tests/groupby/test_aggregate.py
@@ -813,8 +813,6 @@ def test__cython_agg_general(self):
('mean', np.mean),
('median', lambda x: np.median(x) if len(x) > 0 else np.nan),
('var', lambda x: np.var(x, ddof=1)),
- ('add', lambda x: np.sum(x) if len(x) > 0 else np.nan),
- ('prod', np.prod),
('min', np.min),
('max', np.max), ]
)
@@ -824,12 +822,7 @@ def test_cython_agg_empty_buckets(self, op, targop):
# calling _cython_agg_general directly, instead of via the user API
# which sets different values for min_count, so do that here.
- if op in ('add', 'prod'):
- min_count = 1
- else:
- min_count = -1
- result = df.groupby(pd.cut(df[0], grps))._cython_agg_general(
- op, min_count=min_count)
+ result = df.groupby(pd.cut(df[0], grps))._cython_agg_general(op)
expected = df.groupby(pd.cut(df[0], grps)).agg(lambda x: targop(x))
try:
tm.assert_frame_equal(result, expected)
@@ -837,6 +830,40 @@ def test_cython_agg_empty_buckets(self, op, targop):
exc.args += ('operation: %s' % op,)
raise
+ def test_cython_agg_empty_buckets_nanops(self):
+ # GH-18869 can't call nanops on empty groups, so hardcode expected
+ # for these
+ df = pd.DataFrame([11, 12, 13], columns=['a'])
+ grps = range(0, 25, 5)
+ # add / sum
+ result = df.groupby(pd.cut(df['a'], grps))._cython_agg_general('add')
+ intervals = pd.interval_range(0, 20, freq=5)
+ expected = pd.DataFrame(
+ {"a": [0, 0, 36, 0]},
+ index=pd.CategoricalIndex(intervals, name='a', ordered=True))
+ tm.assert_frame_equal(result, expected)
+
+ # prod
+ result = df.groupby(pd.cut(df['a'], grps))._cython_agg_general('prod')
+ expected = pd.DataFrame(
+ {"a": [1, 1, 1716, 1]},
+ index=pd.CategoricalIndex(intervals, name='a', ordered=True))
+ tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.xfail(reason="GH-18869: agg func not called on empty groups.")
+ def test_agg_category_nansum(self):
+ categories = ['a', 'b', 'c']
+ df = pd.DataFrame({"A": pd.Categorical(['a', 'a', 'b'],
+ categories=categories),
+ 'B': [1, 2, 3]})
+ result = df.groupby("A").B.agg(np.nansum)
+ expected = pd.Series([3, 3, 0],
+ index=pd.CategoricalIndex(['a', 'b', 'c'],
+ categories=categories,
+ name='A'),
+ name='B')
+ tm.assert_series_equal(result, expected)
+
def test_agg_over_numpy_arrays(self):
# GH 3788
df = pd.DataFrame([[1, np.array([10, 20, 30])],
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 5e3d2bb9cf091..d4f35aa8755d1 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -37,7 +37,7 @@ def test_groupby(self):
# single grouper
gb = df.groupby("A")
exp_idx = CategoricalIndex(['a', 'b', 'z'], name='A', ordered=True)
- expected = DataFrame({'values': Series([3, 7, np.nan], index=exp_idx)})
+ expected = DataFrame({'values': Series([3, 7, 0], index=exp_idx)})
result = gb.sum()
tm.assert_frame_equal(result, expected)
@@ -670,9 +670,9 @@ def test_empty_sum(self):
'B': [1, 2, 1]})
expected_idx = pd.CategoricalIndex(['a', 'b', 'c'], name='A')
- # NA by default
+ # 0 by default
result = df.groupby("A").B.sum()
- expected = pd.Series([3, 1, np.nan], expected_idx, name='B')
+ expected = pd.Series([3, 1, 0], expected_idx, name='B')
tm.assert_series_equal(result, expected)
# min_count=0
@@ -685,6 +685,11 @@ def test_empty_sum(self):
expected = pd.Series([3, 1, np.nan], expected_idx, name='B')
tm.assert_series_equal(result, expected)
+ # min_count>1
+ result = df.groupby("A").B.sum(min_count=2)
+ expected = pd.Series([3, np.nan, np.nan], expected_idx, name='B')
+ tm.assert_series_equal(result, expected)
+
def test_empty_prod(self):
# https://github.com/pandas-dev/pandas/issues/18678
df = pd.DataFrame({"A": pd.Categorical(['a', 'a', 'b'],
@@ -693,9 +698,9 @@ def test_empty_prod(self):
expected_idx = pd.CategoricalIndex(['a', 'b', 'c'], name='A')
- # NA by default
+ # 1 by default
result = df.groupby("A").B.prod()
- expected = pd.Series([2, 1, np.nan], expected_idx, name='B')
+ expected = pd.Series([2, 1, 1], expected_idx, name='B')
tm.assert_series_equal(result, expected)
# min_count=0
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index cf4a6ec1c932a..a13d985ab6974 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -2704,7 +2704,7 @@ def h(df, arg3):
# Assert the results here
index = pd.Index(['A', 'B', 'C'], name='group')
- expected = pd.Series([-79.5160891089, -78.4839108911, None],
+ expected = pd.Series([-79.5160891089, -78.4839108911, -80],
index=index)
assert_series_equal(expected, result)
diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py
index c8503b16a0e16..d359bfa5351a9 100644
--- a/pandas/tests/groupby/test_timegrouper.py
+++ b/pandas/tests/groupby/test_timegrouper.py
@@ -41,12 +41,11 @@ def test_groupby_with_timegrouper(self):
df = df.set_index(['Date'])
expected = DataFrame(
- {'Quantity': np.nan},
+ {'Quantity': 0},
index=date_range('20130901 13:00:00',
'20131205 13:00:00', freq='5D',
name='Date', closed='left'))
- expected.iloc[[0, 6, 18], 0] = np.array(
- [24., 6., 9.], dtype='float64')
+ expected.iloc[[0, 6, 18], 0] = np.array([24, 6, 9], dtype='int64')
result1 = df.resample('5D') .sum()
assert_frame_equal(result1, expected)
@@ -245,6 +244,8 @@ def test_timegrouper_with_reg_groups(self):
result = df.groupby([pd.Grouper(freq='1M', key='Date')]).sum()
assert_frame_equal(result, expected)
+ @pytest.mark.parametrize('freq', ['D', 'M', 'A', 'Q-APR'])
+ def test_timegrouper_with_reg_groups_freq(self, freq):
# GH 6764 multiple grouping with/without sort
df = DataFrame({
'date': pd.to_datetime([
@@ -258,20 +259,24 @@ def test_timegrouper_with_reg_groups(self):
'cost1': [12, 15, 10, 24, 39, 1, 0, 90, 45, 34, 1, 12]
}).set_index('date')
- for freq in ['D', 'M', 'A', 'Q-APR']:
- expected = df.groupby('user_id')[
- 'whole_cost'].resample(
- freq).sum().dropna().reorder_levels(
- ['date', 'user_id']).sort_index().astype('int64')
- expected.name = 'whole_cost'
-
- result1 = df.sort_index().groupby([pd.Grouper(freq=freq),
- 'user_id'])['whole_cost'].sum()
- assert_series_equal(result1, expected)
-
- result2 = df.groupby([pd.Grouper(freq=freq), 'user_id'])[
- 'whole_cost'].sum()
- assert_series_equal(result2, expected)
+ expected = (
+ df.groupby('user_id')['whole_cost']
+ .resample(freq)
+ .sum(min_count=1) # XXX
+ .dropna()
+ .reorder_levels(['date', 'user_id'])
+ .sort_index()
+ .astype('int64')
+ )
+ expected.name = 'whole_cost'
+
+ result1 = df.sort_index().groupby([pd.Grouper(freq=freq),
+ 'user_id'])['whole_cost'].sum()
+ assert_series_equal(result1, expected)
+
+ result2 = df.groupby([pd.Grouper(freq=freq), 'user_id'])[
+ 'whole_cost'].sum()
+ assert_series_equal(result2, expected)
def test_timegrouper_get_group(self):
# GH 6914
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index cd92edc927173..14bf194ba5ee4 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -36,12 +36,12 @@ class TestSeriesAnalytics(TestData):
])
def test_empty(self, method, unit, use_bottleneck):
with pd.option_context("use_bottleneck", use_bottleneck):
- # GH 9422
+ # GH 9422 / 18921
# Entirely empty
s = Series([])
# NA by default
result = getattr(s, method)()
- assert isna(result)
+ assert result == unit
# Explict
result = getattr(s, method)(min_count=0)
@@ -52,7 +52,7 @@ def test_empty(self, method, unit, use_bottleneck):
# Skipna, default
result = getattr(s, method)(skipna=True)
- assert isna(result)
+ assert result == unit
# Skipna, explicit
result = getattr(s, method)(skipna=True, min_count=0)
@@ -65,7 +65,7 @@ def test_empty(self, method, unit, use_bottleneck):
s = Series([np.nan])
# NA by default
result = getattr(s, method)()
- assert isna(result)
+ assert result == unit
# Explicit
result = getattr(s, method)(min_count=0)
@@ -76,7 +76,7 @@ def test_empty(self, method, unit, use_bottleneck):
# Skipna, default
result = getattr(s, method)(skipna=True)
- assert isna(result)
+ assert result == unit
# skipna, explicit
result = getattr(s, method)(skipna=True, min_count=0)
@@ -110,7 +110,7 @@ def test_empty(self, method, unit, use_bottleneck):
# GH #844 (changed in 9422)
df = DataFrame(np.empty((10, 0)))
- assert (df.sum(1).isnull()).all()
+ assert (getattr(df, method)(1) == unit).all()
s = pd.Series([1])
result = getattr(s, method)(min_count=2)
@@ -131,9 +131,9 @@ def test_empty(self, method, unit, use_bottleneck):
def test_empty_multi(self, method, unit):
s = pd.Series([1, np.nan, np.nan, np.nan],
index=pd.MultiIndex.from_product([('a', 'b'), (0, 1)]))
- # NaN by default
+ # 1 / 0 by default
result = getattr(s, method)(level=0)
- expected = pd.Series([1, np.nan], index=['a', 'b'])
+ expected = pd.Series([1, unit], index=['a', 'b'])
tm.assert_series_equal(result, expected)
# min_count=0
@@ -147,7 +147,7 @@ def test_empty_multi(self, method, unit):
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize(
- "method", ['sum', 'mean', 'median', 'std', 'var'])
+ "method", ['mean', 'median', 'std', 'var'])
def test_ops_consistency_on_empty(self, method):
# GH 7869
@@ -195,7 +195,7 @@ def test_sum_overflow(self, use_bottleneck):
assert np.allclose(float(result), v[-1])
def test_sum(self):
- self._check_stat_op('sum', np.sum, check_allna=True)
+ self._check_stat_op('sum', np.sum, check_allna=False)
def test_sum_inf(self):
s = Series(np.random.randn(10))
diff --git a/pandas/tests/series/test_quantile.py b/pandas/tests/series/test_quantile.py
index 14a44c36c6a0c..3c93ff1d3f31e 100644
--- a/pandas/tests/series/test_quantile.py
+++ b/pandas/tests/series/test_quantile.py
@@ -38,7 +38,7 @@ def test_quantile(self):
# GH7661
result = Series([np.timedelta64('NaT')]).sum()
- assert result is pd.NaT
+ assert result == pd.Timedelta(0)
msg = 'percentiles should all be in the interval \\[0, 1\\]'
for invalid in [-1, 2, [0.5, -1], [0.5, 2]]:
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index d03ecb9f9b5b7..df3c49a73d227 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -182,12 +182,17 @@ def _coerce_tds(targ, res):
check_dtype=check_dtype)
def check_fun_data(self, testfunc, targfunc, testarval, targarval,
- targarnanval, check_dtype=True, **kwargs):
+ targarnanval, check_dtype=True, empty_targfunc=None,
+ **kwargs):
for axis in list(range(targarval.ndim)) + [None]:
for skipna in [False, True]:
targartempval = targarval if skipna else targarnanval
- try:
+ if skipna and empty_targfunc and isna(targartempval).all():
+ targ = empty_targfunc(targartempval, axis=axis, **kwargs)
+ else:
targ = targfunc(targartempval, axis=axis, **kwargs)
+
+ try:
res = testfunc(testarval, axis=axis, skipna=skipna,
**kwargs)
self.check_results(targ, res, axis,
@@ -219,10 +224,11 @@ def check_fun_data(self, testfunc, targfunc, testarval, targarval,
except ValueError:
return
self.check_fun_data(testfunc, targfunc, testarval2, targarval2,
- targarnanval2, check_dtype=check_dtype, **kwargs)
+ targarnanval2, check_dtype=check_dtype,
+ empty_targfunc=empty_targfunc, **kwargs)
def check_fun(self, testfunc, targfunc, testar, targar=None,
- targarnan=None, **kwargs):
+ targarnan=None, empty_targfunc=None, **kwargs):
if targar is None:
targar = testar
if targarnan is None:
@@ -232,7 +238,8 @@ def check_fun(self, testfunc, targfunc, testar, targar=None,
targarnanval = getattr(self, targarnan)
try:
self.check_fun_data(testfunc, targfunc, testarval, targarval,
- targarnanval, **kwargs)
+ targarnanval, empty_targfunc=empty_targfunc,
+ **kwargs)
except BaseException as exc:
exc.args += ('testar: %s' % testar, 'targar: %s' % targar,
'targarnan: %s' % targarnan)
@@ -329,7 +336,8 @@ def test_nanall(self):
def test_nansum(self):
self.check_funs(nanops.nansum, np.sum, allow_str=False,
- allow_date=False, allow_tdelta=True, check_dtype=False)
+ allow_date=False, allow_tdelta=True, check_dtype=False,
+ empty_targfunc=np.nansum)
def test_nanmean(self):
self.check_funs(nanops.nanmean, np.mean, allow_complex=False,
@@ -461,9 +469,11 @@ def test_nankurt(self):
allow_str=False, allow_date=False,
allow_tdelta=False)
+ @td.skip_if_no("numpy", min_version="1.10.0")
def test_nanprod(self):
self.check_funs(nanops.nanprod, np.prod, allow_str=False,
- allow_date=False, allow_tdelta=False)
+ allow_date=False, allow_tdelta=False,
+ empty_targfunc=np.nanprod)
def check_nancorr_nancov_2d(self, checkfun, targ0, targ1, **kwargs):
res00 = checkfun(self.arr_float_2d, self.arr_float1_2d, **kwargs)
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 34c1ee5683183..d772dba25868e 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -9,7 +9,6 @@
import numpy as np
from pandas.core.dtypes.common import is_float_dtype
-from pandas.core.dtypes.missing import remove_na_arraylike
from pandas import (Series, DataFrame, Index, date_range, isna, notna,
pivot, MultiIndex)
from pandas.core.nanops import nanall, nanany
@@ -83,13 +82,14 @@ def test_count(self):
self._check_stat_op('count', f, obj=self.panel, has_skipna=False)
def test_sum(self):
- self._check_stat_op('sum', np.sum)
+ self._check_stat_op('sum', np.sum, skipna_alternative=np.nansum)
def test_mean(self):
self._check_stat_op('mean', np.mean)
+ @td.skip_if_no("numpy", min_version="1.10.0")
def test_prod(self):
- self._check_stat_op('prod', np.prod)
+ self._check_stat_op('prod', np.prod, skipna_alternative=np.nanprod)
def test_median(self):
def wrapper(x):
@@ -140,7 +140,8 @@ def alt(x):
self._check_stat_op('sem', alt)
- def _check_stat_op(self, name, alternative, obj=None, has_skipna=True):
+ def _check_stat_op(self, name, alternative, obj=None, has_skipna=True,
+ skipna_alternative=None):
if obj is None:
obj = self.panel
@@ -152,11 +153,8 @@ def _check_stat_op(self, name, alternative, obj=None, has_skipna=True):
if has_skipna:
- def skipna_wrapper(x):
- nona = remove_na_arraylike(x)
- if len(nona) == 0:
- return np.nan
- return alternative(nona)
+ skipna_wrapper = tm._make_skipna_wrapper(alternative,
+ skipna_alternative)
def wrapper(x):
return alternative(np.asarray(x))
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index e194136ec716d..e429403bbc919 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -8,7 +8,6 @@
from pandas import Series, Index, isna, notna
from pandas.core.dtypes.common import is_float_dtype
-from pandas.core.dtypes.missing import remove_na_arraylike
from pandas.core.panel import Panel
from pandas.core.panel4d import Panel4D
from pandas.tseries.offsets import BDay
@@ -38,13 +37,14 @@ def test_count(self):
self._check_stat_op('count', f, obj=self.panel4d, has_skipna=False)
def test_sum(self):
- self._check_stat_op('sum', np.sum)
+ self._check_stat_op('sum', np.sum, skipna_alternative=np.nansum)
def test_mean(self):
self._check_stat_op('mean', np.mean)
+ @td.skip_if_no("numpy", min_version="1.10.0")
def test_prod(self):
- self._check_stat_op('prod', np.prod)
+ self._check_stat_op('prod', np.prod, skipna_alternative=np.nanprod)
def test_median(self):
def wrapper(x):
@@ -105,7 +105,8 @@ def alt(x):
# self._check_stat_op('skew', alt)
- def _check_stat_op(self, name, alternative, obj=None, has_skipna=True):
+ def _check_stat_op(self, name, alternative, obj=None, has_skipna=True,
+ skipna_alternative=None):
if obj is None:
obj = self.panel4d
@@ -116,11 +117,9 @@ def _check_stat_op(self, name, alternative, obj=None, has_skipna=True):
f = getattr(obj, name)
if has_skipna:
- def skipna_wrapper(x):
- nona = remove_na_arraylike(x)
- if len(nona) == 0:
- return np.nan
- return alternative(nona)
+
+ skipna_wrapper = tm._make_skipna_wrapper(alternative,
+ skipna_alternative)
def wrapper(x):
return alternative(np.asarray(x))
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 4a3c4eff9f8c3..e9a517605020a 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -3390,9 +3390,9 @@ def test_aggregate_normal(self):
def test_resample_entirly_nat_window(self, method, unit):
s = pd.Series([0] * 2 + [np.nan] * 2,
index=pd.date_range('2017', periods=4))
- # nan by default
+ # 0 / 1 by default
result = methodcaller(method)(s.resample("2d"))
- expected = pd.Series([0.0, np.nan],
+ expected = pd.Series([0.0, unit],
index=pd.to_datetime(['2017-01-01',
'2017-01-03']))
tm.assert_series_equal(result, expected)
@@ -3411,8 +3411,17 @@ def test_resample_entirly_nat_window(self, method, unit):
'2017-01-03']))
tm.assert_series_equal(result, expected)
- def test_aggregate_with_nat(self):
+ @pytest.mark.parametrize('func, fill_value', [
+ ('min', np.nan),
+ ('max', np.nan),
+ ('sum', 0),
+ ('prod', 1),
+ ('count', 0),
+ ])
+ def test_aggregate_with_nat(self, func, fill_value):
# check TimeGrouper's aggregation is identical as normal groupby
+ # if NaT is included, 'var', 'std', 'mean', 'first','last'
+ # and 'nth' doesn't work yet
n = 20
data = np.random.randn(n, 4).astype('int64')
@@ -3426,42 +3435,42 @@ def test_aggregate_with_nat(self):
normal_grouped = normal_df.groupby('key')
dt_grouped = dt_df.groupby(TimeGrouper(key='key', freq='D'))
- for func in ['min', 'max', 'sum', 'prod']:
- normal_result = getattr(normal_grouped, func)()
- dt_result = getattr(dt_grouped, func)()
- pad = DataFrame([[np.nan, np.nan, np.nan, np.nan]], index=[3],
- columns=['A', 'B', 'C', 'D'])
- expected = normal_result.append(pad)
- expected = expected.sort_index()
- expected.index = date_range(start='2013-01-01', freq='D',
- periods=5, name='key')
- assert_frame_equal(expected, dt_result)
+ normal_result = getattr(normal_grouped, func)()
+ dt_result = getattr(dt_grouped, func)()
- for func in ['count']:
- normal_result = getattr(normal_grouped, func)()
- pad = DataFrame([[0, 0, 0, 0]], index=[3],
- columns=['A', 'B', 'C', 'D'])
- expected = normal_result.append(pad)
- expected = expected.sort_index()
- expected.index = date_range(start='2013-01-01', freq='D',
- periods=5, name='key')
- dt_result = getattr(dt_grouped, func)()
- assert_frame_equal(expected, dt_result)
+ pad = DataFrame([[fill_value] * 4], index=[3],
+ columns=['A', 'B', 'C', 'D'])
+ expected = normal_result.append(pad)
+ expected = expected.sort_index()
+ expected.index = date_range(start='2013-01-01', freq='D',
+ periods=5, name='key')
+ assert_frame_equal(expected, dt_result)
+ assert dt_result.index.name == 'key'
- for func in ['size']:
- normal_result = getattr(normal_grouped, func)()
- pad = Series([0], index=[3])
- expected = normal_result.append(pad)
- expected = expected.sort_index()
- expected.index = date_range(start='2013-01-01', freq='D',
- periods=5, name='key')
- dt_result = getattr(dt_grouped, func)()
- assert_series_equal(expected, dt_result)
- # GH 9925
- assert dt_result.index.name == 'key'
+ def test_aggregate_with_nat_size(self):
+ # GH 9925
+ n = 20
+ data = np.random.randn(n, 4).astype('int64')
+ normal_df = DataFrame(data, columns=['A', 'B', 'C', 'D'])
+ normal_df['key'] = [1, 2, np.nan, 4, 5] * 4
- # if NaT is included, 'var', 'std', 'mean', 'first','last'
- # and 'nth' doesn't work yet
+ dt_df = DataFrame(data, columns=['A', 'B', 'C', 'D'])
+ dt_df['key'] = [datetime(2013, 1, 1), datetime(2013, 1, 2), pd.NaT,
+ datetime(2013, 1, 4), datetime(2013, 1, 5)] * 4
+
+ normal_grouped = normal_df.groupby('key')
+ dt_grouped = dt_df.groupby(TimeGrouper(key='key', freq='D'))
+
+ normal_result = normal_grouped.size()
+ dt_result = dt_grouped.size()
+
+ pad = Series([0], index=[3])
+ expected = normal_result.append(pad)
+ expected = expected.sort_index()
+ expected.index = date_range(start='2013-01-01', freq='D',
+ periods=5, name='key')
+ assert_series_equal(expected, dt_result)
+ assert dt_result.index.name == 'key'
def test_repr(self):
# GH18203
@@ -3482,9 +3491,9 @@ def test_upsample_sum(self, method, unit):
'2017-01-01T00:30:00',
'2017-01-01T01:00:00'])
- # NaN by default
+ # 0 / 1 by default
result = methodcaller(method)(resampled)
- expected = pd.Series([1, np.nan, 1], index=index)
+ expected = pd.Series([1, unit, 1], index=index)
tm.assert_series_equal(result, expected)
# min_count=0
@@ -3496,3 +3505,8 @@ def test_upsample_sum(self, method, unit):
result = methodcaller(method, min_count=1)(resampled)
expected = pd.Series([1, np.nan, 1], index=index)
tm.assert_series_equal(result, expected)
+
+ # min_count>1
+ result = methodcaller(method, min_count=2)(resampled)
+ expected = pd.Series([np.nan, np.nan, np.nan], index=index)
+ tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index bee925823eebe..ccffc554e00c7 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -439,6 +439,28 @@ def tests_empty_df_rolling(self, roller):
result = DataFrame(index=pd.DatetimeIndex([])).rolling(roller).sum()
tm.assert_frame_equal(result, expected)
+ def test_missing_minp_zero(self):
+ # https://github.com/pandas-dev/pandas/pull/18921
+ # minp=0
+ x = pd.Series([np.nan])
+ result = x.rolling(1, min_periods=0).sum()
+ expected = pd.Series([0.0])
+ tm.assert_series_equal(result, expected)
+
+ # minp=1
+ result = x.rolling(1, min_periods=1).sum()
+ expected = pd.Series([np.nan])
+ tm.assert_series_equal(result, expected)
+
+ def test_missing_minp_zero_variable(self):
+ # https://github.com/pandas-dev/pandas/pull/18921
+ x = pd.Series([np.nan] * 4,
+ index=pd.DatetimeIndex(['2017-01-01', '2017-01-04',
+ '2017-01-06', '2017-01-07']))
+ result = x.rolling(pd.Timedelta("2d"), min_periods=0).sum()
+ expected = pd.Series(0.0, index=x.index)
+ tm.assert_series_equal(result, expected)
+
def test_multi_index_names(self):
# GH 16789, 16825
@@ -512,6 +534,19 @@ def test_empty_df_expanding(self, expander):
index=pd.DatetimeIndex([])).expanding(expander).sum()
tm.assert_frame_equal(result, expected)
+ def test_missing_minp_zero(self):
+ # https://github.com/pandas-dev/pandas/pull/18921
+ # minp=0
+ x = pd.Series([np.nan])
+ result = x.expanding(min_periods=0).sum()
+ expected = pd.Series([0.0])
+ tm.assert_series_equal(result, expected)
+
+ # minp=1
+ result = x.expanding(min_periods=1).sum()
+ expected = pd.Series([np.nan])
+ tm.assert_series_equal(result, expected)
+
class TestEWM(Base):
@@ -828,7 +863,8 @@ def test_centered_axis_validation(self):
.rolling(window=3, center=True, axis=2).mean())
def test_rolling_sum(self):
- self._check_moment_func(mom.rolling_sum, np.sum, name='sum')
+ self._check_moment_func(mom.rolling_sum, np.nansum, name='sum',
+ zero_min_periods_equal=False)
def test_rolling_count(self):
counter = lambda x: np.isfinite(x).astype(float).sum()
@@ -1298,14 +1334,18 @@ def test_fperr_robustness(self):
def _check_moment_func(self, f, static_comp, name=None, window=50,
has_min_periods=True, has_center=True,
has_time_rule=True, preserve_nan=True,
- fill_value=None, test_stable=False, **kwargs):
+ fill_value=None, test_stable=False,
+ zero_min_periods_equal=True,
+ **kwargs):
with warnings.catch_warnings(record=True):
self._check_ndarray(f, static_comp, window=window,
has_min_periods=has_min_periods,
preserve_nan=preserve_nan,
has_center=has_center, fill_value=fill_value,
- test_stable=test_stable, **kwargs)
+ test_stable=test_stable,
+ zero_min_periods_equal=zero_min_periods_equal,
+ **kwargs)
with warnings.catch_warnings(record=True):
self._check_structures(f, static_comp,
@@ -1324,7 +1364,8 @@ def _check_moment_func(self, f, static_comp, name=None, window=50,
def _check_ndarray(self, f, static_comp, window=50, has_min_periods=True,
preserve_nan=True, has_center=True, fill_value=None,
- test_stable=False, test_window=True, **kwargs):
+ test_stable=False, test_window=True,
+ zero_min_periods_equal=True, **kwargs):
def get_result(arr, window, min_periods=None, center=False):
return f(arr, window, min_periods=min_periods, center=center, **
kwargs)
@@ -1357,10 +1398,11 @@ def get_result(arr, window, min_periods=None, center=False):
assert isna(result[3])
assert notna(result[4])
- # min_periods=0
- result0 = get_result(arr, 20, min_periods=0)
- result1 = get_result(arr, 20, min_periods=1)
- tm.assert_almost_equal(result0, result1)
+ if zero_min_periods_equal:
+ # min_periods=0 may be equivalent to min_periods=1
+ result0 = get_result(arr, 20, min_periods=0)
+ result1 = get_result(arr, 20, min_periods=1)
+ tm.assert_almost_equal(result0, result1)
else:
result = get_result(arr, 50)
tm.assert_almost_equal(result[-1], static_comp(arr[10:-10]))
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 4e9282c3bd031..8acf16536f1de 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -2665,3 +2665,31 @@ def setTZ(tz):
yield
finally:
setTZ(orig_tz)
+
+
+def _make_skipna_wrapper(alternative, skipna_alternative=None):
+ """Create a function for calling on an array.
+
+ Parameters
+ ----------
+ alternative : function
+ The function to be called on the array with no NaNs.
+ Only used when 'skipna_alternative' is None.
+ skipna_alternative : function
+ The function to be called on the original array
+
+ Returns
+ -------
+ skipna_wrapper : function
+ """
+ if skipna_alternative:
+ def skipna_wrapper(x):
+ return skipna_alternative(x.values)
+ else:
+ def skipna_wrapper(x):
+ nona = x.dropna()
+ if len(nona) == 0:
+ return np.nan
+ return alternative(nona)
+
+ return skipna_wrapper
| Changes the defaults for `min_count` so that `sum([])` and `sum([np.nan])` are 0 by default, and NaN when `min_count>=1`.
I'd recommend looking at only the latest commit until https://github.com/pandas-dev/pandas/pull/18876 is merged. I'll probably force push changes here to keep all the relevant changes in the last commit until https://github.com/pandas-dev/pandas/pull/18876 is in, rebase on that, and then start pushing changes regularly.
cc @jreback @jorisvandenbossche @shoyer @wesm | https://api.github.com/repos/pandas-dev/pandas/pulls/18921 | 2017-12-23T12:48:38Z | 2017-12-29T13:05:49Z | 2017-12-29T13:05:49Z | 2017-12-30T23:18:24Z |
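For reference, the default behavior this PR describes can be sketched as follows (assuming pandas 0.22 or later, where these `min_count` defaults shipped):

```python
import numpy as np
import pandas as pd

# With the new defaults, an empty or all-NaN sum is the additive
# identity (0), and an all-NaN prod is the multiplicative identity (1).
s = pd.Series([np.nan])

assert s.sum() == 0.0                    # min_count=0 by default
assert s.prod() == 1.0
assert np.isnan(s.sum(min_count=1))      # require at least one valid value
assert np.isnan(pd.Series([], dtype=float).sum(min_count=1))
```

The same `min_count` keyword applies to the resample/groupby reductions exercised in the tests above.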
Fixing 3.6 Escape Sequence Deprecations in tests/io/parser/usecols.py | diff --git a/pandas/tests/io/parser/usecols.py b/pandas/tests/io/parser/usecols.py
index 0fa53e6288bda..8767055239cd5 100644
--- a/pandas/tests/io/parser/usecols.py
+++ b/pandas/tests/io/parser/usecols.py
@@ -492,16 +492,18 @@ def test_raise_on_usecols_names_mismatch(self):
tm.assert_frame_equal(df, expected)
usecols = ['a', 'b', 'c', 'f']
- with tm.assert_raises_regex(ValueError, msg.format(missing="\['f'\]")):
+ with tm.assert_raises_regex(
+ ValueError, msg.format(missing=r"\['f'\]")):
self.read_csv(StringIO(data), usecols=usecols)
usecols = ['a', 'b', 'f']
- with tm.assert_raises_regex(ValueError, msg.format(missing="\['f'\]")):
+ with tm.assert_raises_regex(
+ ValueError, msg.format(missing=r"\['f'\]")):
self.read_csv(StringIO(data), usecols=usecols)
usecols = ['a', 'b', 'f', 'g']
with tm.assert_raises_regex(
- ValueError, msg.format(missing="\[('f', 'g'|'g', 'f')\]")):
+ ValueError, msg.format(missing=r"\[('f', 'g'|'g', 'f')\]")):
self.read_csv(StringIO(data), usecols=usecols)
names = ['A', 'B', 'C', 'D']
@@ -525,9 +527,11 @@ def test_raise_on_usecols_names_mismatch(self):
# tm.assert_frame_equal(df, expected)
usecols = ['A', 'B', 'C', 'f']
- with tm.assert_raises_regex(ValueError, msg.format(missing="\['f'\]")):
+ with tm.assert_raises_regex(
+ ValueError, msg.format(missing=r"\['f'\]")):
self.read_csv(StringIO(data), header=0, names=names,
usecols=usecols)
usecols = ['A', 'B', 'f']
- with tm.assert_raises_regex(ValueError, msg.format(missing="\['f'\]")):
+ with tm.assert_raises_regex(
+ ValueError, msg.format(missing=r"\['f'\]")):
self.read_csv(StringIO(data), names=names, usecols=usecols)
| @jreback [brought up some warnings](https://github.com/pandas-dev/pandas/pull/17310#issuecomment-353402586) on 3.6 that should be fixed by making the regex an r'string'.
Tests pass. I'm happy to fix all the other occurrences, but I'd need to know how to generate these warnings on my local machine, as running the same pytest command as CI doesn't seem to bring them up for me. | https://api.github.com/repos/pandas-dev/pandas/pulls/18918 | 2017-12-23T02:41:27Z | 2017-12-23T19:45:36Z | 2017-12-23T19:45:36Z | 2017-12-28T15:08:34Z
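The fix works because a raw string carries exactly the same characters as the original literal — `\[` is an unrecognized escape sequence, which Python keeps literally but warns about starting in 3.6 — so the regex itself is unchanged. A small sketch (the message text below is illustrative, not pandas' exact wording):

```python
import re

# r'...' keeps backslashes literal, so the regex engine sees \[ and \]
# as escaped brackets; no string-level escape processing is attempted.
pattern = r"\['f'\]"
assert pattern == "\\['f'\\]"   # same characters, spelled with explicit escapes

msg = "Usecols do not match columns, columns expected but not found: ['f']"
assert re.search(pattern, msg) is not None
```

Compiling the non-raw form `"\['f'\]"` under `python -W error::DeprecationWarning` on 3.6 is what turns these warnings into failures in CI.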
CLN: ASV join_merge | diff --git a/asv_bench/benchmarks/join_merge.py b/asv_bench/benchmarks/join_merge.py
index 3b0e33b72ddc1..5b40a29d54683 100644
--- a/asv_bench/benchmarks/join_merge.py
+++ b/asv_bench/benchmarks/join_merge.py
@@ -1,20 +1,24 @@
-from .pandas_vb_common import *
+import string
+import numpy as np
+import pandas.util.testing as tm
+from pandas import (DataFrame, Series, MultiIndex, date_range, concat, merge,
+ merge_asof)
try:
from pandas import merge_ordered
except ImportError:
from pandas import ordered_merge as merge_ordered
+from .pandas_vb_common import Panel, setup # noqa
-# ----------------------------------------------------------------------
-# Append
class Append(object):
+
goal_time = 0.2
def setup(self):
- self.df1 = pd.DataFrame(np.random.randn(10000, 4),
- columns=['A', 'B', 'C', 'D'])
+ self.df1 = DataFrame(np.random.randn(10000, 4),
+ columns=['A', 'B', 'C', 'D'])
self.df2 = self.df1.copy()
self.df2.index = np.arange(10000, 20000)
self.mdf1 = self.df1.copy()
@@ -35,237 +39,221 @@ def time_append_mixed(self):
self.mdf1.append(self.mdf2)
-# ----------------------------------------------------------------------
-# Concat
-
class Concat(object):
- goal_time = 0.2
- def setup(self):
- self.n = 1000
- self.indices = tm.makeStringIndex(1000)
- self.s = Series(self.n, index=self.indices)
- self.pieces = [self.s[i:(- i)] for i in range(1, 10)]
- self.pieces = (self.pieces * 50)
-
- self.df_small = pd.DataFrame(randn(5, 4))
+ goal_time = 0.2
+ params = [0, 1]
+ param_names = ['axis']
- # empty
- self.df = pd.DataFrame(dict(A=range(10000)), index=date_range('20130101', periods=10000, freq='s'))
- self.empty = pd.DataFrame()
+ def setup(self, axis):
+ N = 1000
+ s = Series(N, index=tm.makeStringIndex(N))
+ self.series = [s[i:- i] for i in range(1, 10)] * 50
+ self.small_frames = [DataFrame(np.random.randn(5, 4))] * 1000
+ df = DataFrame({'A': range(N)},
+ index=date_range('20130101', periods=N, freq='s'))
+ self.empty_left = [DataFrame(), df]
+ self.empty_right = [df, DataFrame()]
- def time_concat_series_axis1(self):
- concat(self.pieces, axis=1)
+ def time_concat_series(self, axis):
+ concat(self.series, axis=axis)
- def time_concat_small_frames(self):
- concat(([self.df_small] * 1000))
+ def time_concat_small_frames(self, axis):
+ concat(self.small_frames, axis=axis)
- def time_concat_empty_frames1(self):
- concat([self.df, self.empty])
+ def time_concat_empty_right(self, axis):
+ concat(self.empty_right, axis=axis)
- def time_concat_empty_frames2(self):
- concat([self.empty, self.df])
+ def time_concat_empty_left(self, axis):
+ concat(self.empty_left, axis=axis)
class ConcatPanels(object):
- goal_time = 0.2
-
- def setup(self):
- dataset = np.zeros((10000, 200, 2), dtype=np.float32)
- self.panels_f = [pd.Panel(np.copy(dataset, order='F'))
- for i in range(20)]
- self.panels_c = [pd.Panel(np.copy(dataset, order='C'))
- for i in range(20)]
- def time_c_ordered_axis0(self):
- concat(self.panels_c, axis=0, ignore_index=True)
-
- def time_f_ordered_axis0(self):
- concat(self.panels_f, axis=0, ignore_index=True)
+ goal_time = 0.2
+ params = ([0, 1, 2], [True, False])
+ param_names = ['axis', 'ignore_index']
- def time_c_ordered_axis1(self):
- concat(self.panels_c, axis=1, ignore_index=True)
+ def setup(self, axis, ignore_index):
+ panel_c = Panel(np.zeros((10000, 200, 2), dtype=np.float32, order='C'))
+ self.panels_c = [panel_c] * 20
+ panel_f = Panel(np.zeros((10000, 200, 2), dtype=np.float32, order='F'))
+ self.panels_f = [panel_f] * 20
- def time_f_ordered_axis1(self):
- concat(self.panels_f, axis=1, ignore_index=True)
+ def time_c_ordered(self, axis, ignore_index):
+ concat(self.panels_c, axis=axis, ignore_index=ignore_index)
- def time_c_ordered_axis2(self):
- concat(self.panels_c, axis=2, ignore_index=True)
+ def time_f_ordered(self, axis, ignore_index):
+ concat(self.panels_f, axis=axis, ignore_index=ignore_index)
- def time_f_ordered_axis2(self):
- concat(self.panels_f, axis=2, ignore_index=True)
+class ConcatDataFrames(object):
-class ConcatFrames(object):
goal_time = 0.2
+ params = ([0, 1], [True, False])
+ param_names = ['axis', 'ignore_index']
- def setup(self):
- dataset = np.zeros((10000, 200), dtype=np.float32)
-
- self.frames_f = [pd.DataFrame(np.copy(dataset, order='F'))
- for i in range(20)]
- self.frames_c = [pd.DataFrame(np.copy(dataset, order='C'))
- for i in range(20)]
-
- def time_c_ordered_axis0(self):
- concat(self.frames_c, axis=0, ignore_index=True)
-
- def time_f_ordered_axis0(self):
- concat(self.frames_f, axis=0, ignore_index=True)
+ def setup(self, axis, ignore_index):
+ frame_c = DataFrame(np.zeros((10000, 200),
+ dtype=np.float32, order='C'))
+ self.frame_c = [frame_c] * 20
+ frame_f = DataFrame(np.zeros((10000, 200),
+ dtype=np.float32, order='F'))
+ self.frame_f = [frame_f] * 20
- def time_c_ordered_axis1(self):
- concat(self.frames_c, axis=1, ignore_index=True)
+ def time_c_ordered(self, axis, ignore_index):
+ concat(self.frame_c, axis=axis, ignore_index=ignore_index)
- def time_f_ordered_axis1(self):
- concat(self.frames_f, axis=1, ignore_index=True)
+ def time_f_ordered(self, axis, ignore_index):
+ concat(self.frame_f, axis=axis, ignore_index=ignore_index)
-# ----------------------------------------------------------------------
-# Joins
-
class Join(object):
- goal_time = 0.2
-
- def setup(self):
- self.level1 = tm.makeStringIndex(10).values
- self.level2 = tm.makeStringIndex(1000).values
- self.label1 = np.arange(10).repeat(1000)
- self.label2 = np.tile(np.arange(1000), 10)
- self.key1 = np.tile(self.level1.take(self.label1), 10)
- self.key2 = np.tile(self.level2.take(self.label2), 10)
- self.shuf = np.arange(100000)
- random.shuffle(self.shuf)
- try:
- self.index2 = MultiIndex(levels=[self.level1, self.level2],
- labels=[self.label1, self.label2])
- self.index3 = MultiIndex(levels=[np.arange(10), np.arange(100), np.arange(100)],
- labels=[np.arange(10).repeat(10000), np.tile(np.arange(100).repeat(100), 10), np.tile(np.tile(np.arange(100), 100), 10)])
- self.df_multi = DataFrame(np.random.randn(len(self.index2), 4),
- index=self.index2,
- columns=['A', 'B', 'C', 'D'])
- except:
- pass
- self.df = pd.DataFrame({'data1': np.random.randn(100000),
- 'data2': np.random.randn(100000),
- 'key1': self.key1,
- 'key2': self.key2})
- self.df_key1 = pd.DataFrame(np.random.randn(len(self.level1), 4),
- index=self.level1,
- columns=['A', 'B', 'C', 'D'])
- self.df_key2 = pd.DataFrame(np.random.randn(len(self.level2), 4),
- index=self.level2,
- columns=['A', 'B', 'C', 'D'])
- self.df_shuf = self.df.reindex(self.df.index[self.shuf])
-
- def time_join_dataframe_index_multi(self):
- self.df.join(self.df_multi, on=['key1', 'key2'])
-
- def time_join_dataframe_index_single_key_bigger(self):
- self.df.join(self.df_key2, on='key2')
- def time_join_dataframe_index_single_key_bigger_sort(self):
- self.df_shuf.join(self.df_key2, on='key2', sort=True)
-
- def time_join_dataframe_index_single_key_small(self):
- self.df.join(self.df_key1, on='key1')
+ goal_time = 0.2
+ params = [True, False]
+ param_names = ['sort']
+
+ def setup(self, sort):
+ level1 = tm.makeStringIndex(10).values
+ level2 = tm.makeStringIndex(1000).values
+ label1 = np.arange(10).repeat(1000)
+ label2 = np.tile(np.arange(1000), 10)
+ index2 = MultiIndex(levels=[level1, level2],
+ labels=[label1, label2])
+ self.df_multi = DataFrame(np.random.randn(len(index2), 4),
+ index=index2,
+ columns=['A', 'B', 'C', 'D'])
+
+ self.key1 = np.tile(level1.take(label1), 10)
+ self.key2 = np.tile(level2.take(label2), 10)
+ self.df = DataFrame({'data1': np.random.randn(100000),
+ 'data2': np.random.randn(100000),
+ 'key1': self.key1,
+ 'key2': self.key2})
+
+ self.df_key1 = DataFrame(np.random.randn(len(level1), 4),
+ index=level1,
+ columns=['A', 'B', 'C', 'D'])
+ self.df_key2 = DataFrame(np.random.randn(len(level2), 4),
+ index=level2,
+ columns=['A', 'B', 'C', 'D'])
+
+ shuf = np.arange(100000)
+ np.random.shuffle(shuf)
+ self.df_shuf = self.df.reindex(self.df.index[shuf])
+
+ def time_join_dataframe_index_multi(self, sort):
+ self.df.join(self.df_multi, on=['key1', 'key2'], sort=sort)
+
+ def time_join_dataframe_index_single_key_bigger(self, sort):
+ self.df.join(self.df_key2, on='key2', sort=sort)
+
+ def time_join_dataframe_index_single_key_small(self, sort):
+ self.df.join(self.df_key1, on='key1', sort=sort)
+
+ def time_join_dataframe_index_shuffle_key_bigger_sort(self, sort):
+ self.df_shuf.join(self.df_key2, on='key2', sort=sort)
class JoinIndex(object):
+
goal_time = 0.2
def setup(self):
- np.random.seed(2718281)
- self.n = 50000
- self.left = pd.DataFrame(np.random.randint(1, (self.n / 500), (self.n, 2)), columns=['jim', 'joe'])
- self.right = pd.DataFrame(np.random.randint(1, (self.n / 500), (self.n, 2)), columns=['jolie', 'jolia']).set_index('jolie')
+ N = 50000
+ self.left = DataFrame(np.random.randint(1, N / 500, (N, 2)),
+ columns=['jim', 'joe'])
+ self.right = DataFrame(np.random.randint(1, N / 500, (N, 2)),
+ columns=['jolie', 'jolia']).set_index('jolie')
def time_left_outer_join_index(self):
self.left.join(self.right, on='jim')
-class join_non_unique_equal(object):
+class JoinNonUnique(object):
# outer join of non-unique
# GH 6329
-
goal_time = 0.2
def setup(self):
- self.date_index = date_range('01-Jan-2013', '23-Jan-2013', freq='T')
- self.daily_dates = self.date_index.to_period('D').to_timestamp('S', 'S')
- self.fracofday = (self.date_index.view(np.ndarray) - self.daily_dates.view(np.ndarray))
- self.fracofday = (self.fracofday.astype('timedelta64[ns]').astype(np.float64) / 86400000000000.0)
- self.fracofday = Series(self.fracofday, self.daily_dates)
- self.index = date_range(self.date_index.min().to_period('A').to_timestamp('D', 'S'), self.date_index.max().to_period('A').to_timestamp('D', 'E'), freq='D')
- self.temp = Series(1.0, self.index)
+ date_index = date_range('01-Jan-2013', '23-Jan-2013', freq='T')
+ daily_dates = date_index.to_period('D').to_timestamp('S', 'S')
+ self.fracofday = date_index.values - daily_dates.values
+ self.fracofday = self.fracofday.astype('timedelta64[ns]')
+ self.fracofday = self.fracofday.astype(np.float64) / 86400000000000.0
+ self.fracofday = Series(self.fracofday, daily_dates)
+ index = date_range(date_index.min(), date_index.max(), freq='D')
+ self.temp = Series(1.0, index)[self.fracofday.index]
def time_join_non_unique_equal(self):
- (self.fracofday * self.temp[self.fracofday.index])
-
+ self.fracofday * self.temp
-# ----------------------------------------------------------------------
-# Merges
class Merge(object):
- goal_time = 0.2
- def setup(self):
- self.N = 10000
- self.indices = tm.makeStringIndex(self.N).values
- self.indices2 = tm.makeStringIndex(self.N).values
- self.key = np.tile(self.indices[:8000], 10)
- self.key2 = np.tile(self.indices2[:8000], 10)
- self.left = pd.DataFrame({'key': self.key, 'key2': self.key2,
- 'value': np.random.randn(80000)})
- self.right = pd.DataFrame({'key': self.indices[2000:],
- 'key2': self.indices2[2000:],
- 'value2': np.random.randn(8000)})
-
- self.df = pd.DataFrame({'key1': np.tile(np.arange(500).repeat(10), 2),
- 'key2': np.tile(np.arange(250).repeat(10), 4),
- 'value': np.random.randn(10000)})
- self.df2 = pd.DataFrame({'key1': np.arange(500), 'value2': randn(500)})
+ goal_time = 0.2
+ params = [True, False]
+ param_names = ['sort']
+
+ def setup(self, sort):
+ N = 10000
+ indices = tm.makeStringIndex(N).values
+ indices2 = tm.makeStringIndex(N).values
+ key = np.tile(indices[:8000], 10)
+ key2 = np.tile(indices2[:8000], 10)
+ self.left = DataFrame({'key': key, 'key2': key2,
+ 'value': np.random.randn(80000)})
+ self.right = DataFrame({'key': indices[2000:],
+ 'key2': indices2[2000:],
+ 'value2': np.random.randn(8000)})
+
+ self.df = DataFrame({'key1': np.tile(np.arange(500).repeat(10), 2),
+ 'key2': np.tile(np.arange(250).repeat(10), 4),
+ 'value': np.random.randn(10000)})
+ self.df2 = DataFrame({'key1': np.arange(500),
+ 'value2': np.random.randn(500)})
self.df3 = self.df[:5000]
- def time_merge_2intkey_nosort(self):
- merge(self.left, self.right, sort=False)
+ def time_merge_2intkey(self, sort):
+ merge(self.left, self.right, sort=sort)
- def time_merge_2intkey_sort(self):
- merge(self.left, self.right, sort=True)
+ def time_merge_dataframe_integer_2key(self, sort):
+ merge(self.df, self.df3, sort=sort)
- def time_merge_dataframe_integer_2key(self):
- merge(self.df, self.df3)
+ def time_merge_dataframe_integer_key(self, sort):
+ merge(self.df, self.df2, on='key1', sort=sort)
- def time_merge_dataframe_integer_key(self):
- merge(self.df, self.df2, on='key1')
+class I8Merge(object):
-class i8merge(object):
goal_time = 0.2
+ params = ['inner', 'outer', 'left', 'right']
+ param_names = ['how']
- def setup(self):
- (low, high, n) = (((-1) << 10), (1 << 10), (1 << 20))
- self.left = pd.DataFrame(np.random.randint(low, high, (n, 7)),
- columns=list('ABCDEFG'))
+ def setup(self, how):
+ low, high, n = -1000, 1000, 10**6
+ self.left = DataFrame(np.random.randint(low, high, (n, 7)),
+ columns=list('ABCDEFG'))
self.left['left'] = self.left.sum(axis=1)
- self.i = np.random.permutation(len(self.left))
- self.right = self.left.iloc[self.i].copy()
- self.right.columns = (self.right.columns[:(-1)].tolist() + ['right'])
- self.right.index = np.arange(len(self.right))
- self.right['right'] *= (-1)
+ self.right = self.left.sample(frac=1).rename({'left': 'right'}, axis=1)
+ self.right = self.right.reset_index(drop=True)
+ self.right['right'] *= -1
- def time_i8merge(self):
- merge(self.left, self.right, how='outer')
+ def time_i8merge(self, how):
+ merge(self.left, self.right, how=how)
class MergeCategoricals(object):
+
goal_time = 0.2
def setup(self):
- self.left_object = pd.DataFrame(
+ self.left_object = DataFrame(
{'X': np.random.choice(range(0, 10), size=(10000,)),
'Y': np.random.choice(['one', 'two', 'three'], size=(10000,))})
- self.right_object = pd.DataFrame(
+ self.right_object = DataFrame(
{'X': np.random.choice(range(0, 10), size=(10000,)),
'Z': np.random.choice(['jjj', 'kkk', 'sss'], size=(10000,))})
@@ -281,103 +269,85 @@ def time_merge_cat(self):
merge(self.left_cat, self.right_cat, on='X')
-# ----------------------------------------------------------------------
-# Ordered merge
-
class MergeOrdered(object):
def setup(self):
-
groups = tm.makeStringIndex(10).values
-
- self.left = pd.DataFrame({'group': groups.repeat(5000),
- 'key' : np.tile(np.arange(0, 10000, 2), 10),
- 'lvalue': np.random.randn(50000)})
-
- self.right = pd.DataFrame({'key' : np.arange(10000),
- 'rvalue' : np.random.randn(10000)})
+ self.left = DataFrame({'group': groups.repeat(5000),
+ 'key': np.tile(np.arange(0, 10000, 2), 10),
+ 'lvalue': np.random.randn(50000)})
+ self.right = DataFrame({'key': np.arange(10000),
+ 'rvalue': np.random.randn(10000)})
def time_merge_ordered(self):
merge_ordered(self.left, self.right, on='key', left_by='group')
-# ----------------------------------------------------------------------
-# asof merge
-
class MergeAsof(object):
def setup(self):
- import string
- np.random.seed(0)
one_count = 200000
two_count = 1000000
- self.df1 = pd.DataFrame(
+ df1 = DataFrame(
{'time': np.random.randint(0, one_count / 20, one_count),
'key': np.random.choice(list(string.ascii_uppercase), one_count),
'key2': np.random.randint(0, 25, one_count),
'value1': np.random.randn(one_count)})
- self.df2 = pd.DataFrame(
+ df2 = DataFrame(
{'time': np.random.randint(0, two_count / 20, two_count),
'key': np.random.choice(list(string.ascii_uppercase), two_count),
'key2': np.random.randint(0, 25, two_count),
'value2': np.random.randn(two_count)})
- self.df1 = self.df1.sort_values('time')
- self.df2 = self.df2.sort_values('time')
+ df1 = df1.sort_values('time')
+ df2 = df2.sort_values('time')
- self.df1['time32'] = np.int32(self.df1.time)
- self.df2['time32'] = np.int32(self.df2.time)
+ df1['time32'] = np.int32(df1.time)
+ df2['time32'] = np.int32(df2.time)
- self.df1a = self.df1[['time', 'value1']]
- self.df2a = self.df2[['time', 'value2']]
- self.df1b = self.df1[['time', 'key', 'value1']]
- self.df2b = self.df2[['time', 'key', 'value2']]
- self.df1c = self.df1[['time', 'key2', 'value1']]
- self.df2c = self.df2[['time', 'key2', 'value2']]
- self.df1d = self.df1[['time32', 'value1']]
- self.df2d = self.df2[['time32', 'value2']]
- self.df1e = self.df1[['time', 'key', 'key2', 'value1']]
- self.df2e = self.df2[['time', 'key', 'key2', 'value2']]
+ self.df1a = df1[['time', 'value1']]
+ self.df2a = df2[['time', 'value2']]
+ self.df1b = df1[['time', 'key', 'value1']]
+ self.df2b = df2[['time', 'key', 'value2']]
+ self.df1c = df1[['time', 'key2', 'value1']]
+ self.df2c = df2[['time', 'key2', 'value2']]
+ self.df1d = df1[['time32', 'value1']]
+ self.df2d = df2[['time32', 'value2']]
+ self.df1e = df1[['time', 'key', 'key2', 'value1']]
+ self.df2e = df2[['time', 'key', 'key2', 'value2']]
- def time_noby(self):
+ def time_on_int(self):
merge_asof(self.df1a, self.df2a, on='time')
+ def time_on_int32(self):
+ merge_asof(self.df1d, self.df2d, on='time32')
+
def time_by_object(self):
merge_asof(self.df1b, self.df2b, on='time', by='key')
def time_by_int(self):
merge_asof(self.df1c, self.df2c, on='time', by='key2')
- def time_on_int32(self):
- merge_asof(self.df1d, self.df2d, on='time32')
-
def time_multiby(self):
merge_asof(self.df1e, self.df2e, on='time', by=['key', 'key2'])
-# ----------------------------------------------------------------------
-# data alignment
-
class Align(object):
+
goal_time = 0.2
def setup(self):
- self.n = 1000000
- self.sz = 500000
- self.rng = np.arange(0, 10000000000000, 10000000)
- self.stamps = (np.datetime64(datetime.now()).view('i8') + self.rng)
- self.idx1 = np.sort(self.sample(self.stamps, self.sz))
- self.idx2 = np.sort(self.sample(self.stamps, self.sz))
- self.ts1 = Series(np.random.randn(self.sz), self.idx1)
- self.ts2 = Series(np.random.randn(self.sz), self.idx2)
-
- def sample(self, values, k):
- self.sampler = np.random.permutation(len(values))
- return values.take(self.sampler[:k])
+ size = 5 * 10**5
+ rng = np.arange(0, 10**13, 10**7)
+ stamps = np.datetime64('now').view('i8') + rng
+ idx1 = np.sort(np.random.choice(stamps, size, replace=False))
+ idx2 = np.sort(np.random.choice(stamps, size, replace=False))
+ self.ts1 = Series(np.random.randn(size), idx1)
+ self.ts2 = Series(np.random.randn(size), idx2)
def time_series_align_int64_index(self):
- (self.ts1 + self.ts2)
+ self.ts1 + self.ts2
def time_series_align_left_monotonic(self):
self.ts1.align(self.ts2, join='left')
| Flake8'd, used `params` to add some additional benchmarks, and simplified the setup where possible.
```
$ asv dev -b ^join_merge
· Discovering benchmarks
· Running 30 total benchmarks (1 commits * 1 environments * 30 benchmarks)
[ 0.00%] ·· Building for existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 0.00%] ·· Benchmarking existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 3.33%] ··· Running join_merge.Align.time_series_align_int64_index 672ms
[ 6.67%] ··· Running ...erge.Align.time_series_align_left_monotonic 203ms
[ 10.00%] ··· Running join_merge.Append.time_append_homogenous 1.57ms
[ 10.00%] ····· /home/matt/Projects/pandas-mroeschke/asv_bench/benchmarks/join_merge.py:29: FutureWarning: consolidate is deprecated and will be removed in a future release.
self.mdf1.consolidate(inplace=True)
[ 13.33%] ··· Running join_merge.Append.time_append_mixed 2.51ms
[ 13.33%] ····· /home/matt/Projects/pandas-mroeschke/asv_bench/benchmarks/join_merge.py:29: FutureWarning: consolidate is deprecated and will be removed in a future release.
self.mdf1.consolidate(inplace=True)
[ 16.67%] ··· Running join_merge.Concat.time_concat_empty_left ok
[ 16.67%] ····
====== =======
axis
------ -------
0 425μs
1 475μs
====== =======
[ 20.00%] ··· Running join_merge.Concat.time_concat_empty_right ok
[ 20.00%] ····
====== =======
axis
------ -------
0 406μs
1 481μs
====== =======
[ 23.33%] ··· Running join_merge.Concat.time_concat_series ok
[ 23.33%] ····
====== ========
axis
------ --------
0 27.5ms
1 218ms
====== ========
[ 26.67%] ··· Running join_merge.Concat.time_concat_small_frames ok
[ 26.67%] ····
====== ========
axis
------ --------
0 107ms
1 81.2ms
====== ========
[ 30.00%] ··· Running join_merge.ConcatDataFrames.time_c_ordered ok
[ 30.00%] ····
====== ======= =======
-- ignore_index
------ ---------------
axis True False
====== ======= =======
0 146ms 141ms
1 225ms 226ms
====== ======= =======
[ 33.33%] ··· Running join_merge.ConcatDataFrames.time_f_ordered ok
[ 33.33%] ····
====== ======= =======
-- ignore_index
------ ---------------
axis True False
====== ======= =======
0 173ms 175ms
1 161ms 155ms
====== ======= =======
[ 36.67%] ··· Running join_merge.ConcatPanels.time_c_ordered ok
[ 36.67%] ····
====== ======= =======
-- ignore_index
------ ---------------
axis True False
====== ======= =======
0 318ms 314ms
1 359ms 363ms
2 1.67s 1.66s
====== ======= =======
[ 40.00%] ··· Running join_merge.ConcatPanels.time_f_ordered ok
[ 40.00%] ····
====== ======= =======
-- ignore_index
------ ---------------
axis True False
====== ======= =======
0 666ms 666ms
1 321ms 297ms
2 297ms 299ms
====== ======= =======
[ 43.33%] ··· Running join_merge.I8Merge.time_i8merge ok
[ 43.33%] ····
======= =======
how
------- -------
inner 1.62s
outer 1.61s
left 1.62s
right 1.62s
======= =======
[ 46.67%] ··· Running ..._merge.Join.time_join_dataframe_index_multi ok
[ 46.67%] ····
======= ========
sort
------- --------
True 53.0ms
False 41.9ms
======= ========
[ 50.00%] ··· Running ...oin_dataframe_index_shuffle_key_bigger_sort ok
[ 50.00%] ····
======= ========
sort
------- --------
True 36.5ms
False 29.8ms
======= ========
[ 53.33%] ··· Running ...time_join_dataframe_index_single_key_bigger ok
[ 53.33%] ····
======= ========
sort
------- --------
True 36.6ms
False 30.8ms
======= ========
[ 56.67%] ··· Running ....time_join_dataframe_index_single_key_small ok
[ 56.67%] ····
======= ========
sort
------- --------
True 30.6ms
False 27.6ms
======= ========
[ 60.00%] ··· Running ..._merge.JoinIndex.time_left_outer_join_index 4.82s
[ 63.33%] ··· Running ...ge.JoinNonUnique.time_join_non_unique_equal 417ms
[ 66.67%] ··· Running join_merge.Merge.time_merge_2intkey ok
[ 66.67%] ····
======= ========
sort
------- --------
True 72.6ms
False 40.9ms
======= ========
[ 70.00%] ··· Running ...rge.Merge.time_merge_dataframe_integer_2key ok
[ 70.00%] ····
======= ========
sort
------- --------
True 23.3ms
False 10.0ms
======= ========
[ 73.33%] ··· Running ...erge.Merge.time_merge_dataframe_integer_key ok
[ 73.33%] ····
======= ========
sort
------- --------
True 5.46ms
False 4.87ms
======= ========
[ 76.67%] ··· Running join_merge.MergeAsof.time_by_int 48.6ms
[ 80.00%] ··· Running join_merge.MergeAsof.time_by_object 83.9ms
[ 83.33%] ··· Running join_merge.MergeAsof.time_multiby 1.32s
[ 86.67%] ··· Running join_merge.MergeAsof.time_on_int 29.2ms
[ 90.00%] ··· Running join_merge.MergeAsof.time_on_int32 33.9ms
[ 93.33%] ··· Running join_merge.MergeCategoricals.time_merge_cat 775ms
[ 96.67%] ··· Running join_merge.MergeCategoricals.time_merge_object 1.50s
[100.00%] ··· Running join_merge.MergeOrdered.time_merge_ordered 146ms
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/18917 | 2017-12-23T02:05:24Z | 2017-12-23T19:57:29Z | 2017-12-23T19:57:29Z | 2017-12-24T05:51:15Z |
CLN: ASV plotting | diff --git a/asv_bench/benchmarks/plotting.py b/asv_bench/benchmarks/plotting.py
index 16889b2f19e89..5b49112b0e07d 100644
--- a/asv_bench/benchmarks/plotting.py
+++ b/asv_bench/benchmarks/plotting.py
@@ -1,21 +1,20 @@
-from .pandas_vb_common import *
-try:
- from pandas import date_range
-except ImportError:
- def date_range(start=None, end=None, periods=None, freq=None):
- return DatetimeIndex(start, end, periods=periods, offset=freq)
+import numpy as np
+from pandas import DataFrame, Series, DatetimeIndex, date_range
try:
from pandas.plotting import andrews_curves
except ImportError:
from pandas.tools.plotting import andrews_curves
+import matplotlib
+matplotlib.use('Agg')
+
+from .pandas_vb_common import setup # noqa
class Plotting(object):
+
goal_time = 0.2
def setup(self):
- import matplotlib
- matplotlib.use('Agg')
self.s = Series(np.random.randn(1000000))
self.df = DataFrame({'col': self.s})
@@ -27,18 +26,17 @@ def time_frame_plot(self):
class TimeseriesPlotting(object):
+
goal_time = 0.2
def setup(self):
- import matplotlib
- matplotlib.use('Agg')
N = 2000
M = 5
idx = date_range('1/1/1975', periods=N)
self.df = DataFrame(np.random.randn(N, M), index=idx)
- idx_irregular = pd.DatetimeIndex(np.concatenate((idx.values[0:10],
- idx.values[12:])))
+ idx_irregular = DatetimeIndex(np.concatenate((idx.values[0:10],
+ idx.values[12:])))
self.df2 = DataFrame(np.random.randn(len(idx_irregular), M),
index=idx_irregular)
@@ -53,16 +51,14 @@ def time_plot_irregular(self):
class Misc(object):
+
goal_time = 0.6
def setup(self):
- import matplotlib
- matplotlib.use('Agg')
- self.N = 500
- self.M = 10
- data_dict = {x: np.random.randn(self.N) for x in range(self.M)}
- data_dict["Name"] = ["A"] * self.N
- self.df = DataFrame(data_dict)
+ N = 500
+ M = 10
+ self.df = DataFrame(np.random.randn(N, M))
+ self.df['Name'] = ["A"] * N
def time_plot_andrews_curves(self):
andrews_curves(self.df, "Name")
| Flake8'd and simplified some of the setup.
```
$ asv dev -b ^plotting
· Discovering benchmarks
· Running 6 total benchmarks (1 commits * 1 environments * 6 benchmarks)
[ 0.00%] ·· Building for existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 0.00%] ·· Benchmarking existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 16.67%] ··· Running plotting.Misc.time_plot_andrews_curves 1.92s
[ 33.33%] ··· Running plotting.Plotting.time_frame_plot 417ms
[ 50.00%] ··· Running plotting.Plotting.time_series_plot 418ms
[ 66.67%] ··· Running ...ting.TimeseriesPlotting.time_plot_irregular 130ms
[ 83.33%] ··· Running plotting.TimeseriesPlotting.time_plot_regular 388ms
[100.00%] ··· Running ...TimeseriesPlotting.time_plot_regular_compat 123ms
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/18916 | 2017-12-23T01:58:34Z | 2017-12-23T20:02:43Z | 2017-12-23T20:02:43Z | 2017-12-24T05:53:19Z |
CLN: ASV panel ctor | diff --git a/asv_bench/benchmarks/panel_ctor.py b/asv_bench/benchmarks/panel_ctor.py
index cc6071b054662..456fe959c5aa3 100644
--- a/asv_bench/benchmarks/panel_ctor.py
+++ b/asv_bench/benchmarks/panel_ctor.py
@@ -1,65 +1,56 @@
-from .pandas_vb_common import *
-from datetime import timedelta
+from datetime import datetime, timedelta
+from pandas import DataFrame, DatetimeIndex, date_range
-class Constructors1(object):
- goal_time = 0.2
-
- def setup(self):
- self.data_frames = {}
- self.start = datetime(1990, 1, 1)
- self.end = datetime(2012, 1, 1)
- for x in range(100):
- self.end += timedelta(days=1)
- self.dr = np.asarray(date_range(self.start, self.end))
- self.df = DataFrame({'a': ([0] * len(self.dr)), 'b': ([1] * len(self.dr)), 'c': ([2] * len(self.dr)), }, index=self.dr)
- self.data_frames[x] = self.df
-
- def time_panel_from_dict_all_different_indexes(self):
- Panel.from_dict(self.data_frames)
+from .pandas_vb_common import Panel, setup # noqa
-class Constructors2(object):
+class DifferentIndexes(object):
goal_time = 0.2
def setup(self):
self.data_frames = {}
+ start = datetime(1990, 1, 1)
+ end = datetime(2012, 1, 1)
for x in range(100):
- self.dr = np.asarray(DatetimeIndex(start=datetime(1990, 1, 1), end=datetime(2012, 1, 1), freq='D'))
- self.df = DataFrame({'a': ([0] * len(self.dr)), 'b': ([1] * len(self.dr)), 'c': ([2] * len(self.dr)), }, index=self.dr)
- self.data_frames[x] = self.df
+ end += timedelta(days=1)
+ idx = date_range(start, end)
+ df = DataFrame({'a': 0, 'b': 1, 'c': 2}, index=idx)
+ self.data_frames[x] = df
- def time_panel_from_dict_equiv_indexes(self):
+ def time_from_dict(self):
Panel.from_dict(self.data_frames)
-class Constructors3(object):
+class SameIndexes(object):
+
goal_time = 0.2
def setup(self):
- self.dr = np.asarray(DatetimeIndex(start=datetime(1990, 1, 1), end=datetime(2012, 1, 1), freq='D'))
- self.data_frames = {}
- for x in range(100):
- self.df = DataFrame({'a': ([0] * len(self.dr)), 'b': ([1] * len(self.dr)), 'c': ([2] * len(self.dr)), }, index=self.dr)
- self.data_frames[x] = self.df
+ idx = DatetimeIndex(start=datetime(1990, 1, 1),
+ end=datetime(2012, 1, 1),
+ freq='D')
+ df = DataFrame({'a': 0, 'b': 1, 'c': 2}, index=idx)
+ self.data_frames = dict(enumerate([df] * 100))
- def time_panel_from_dict_same_index(self):
+ def time_from_dict(self):
Panel.from_dict(self.data_frames)
-class Constructors4(object):
+class TwoIndexes(object):
+
goal_time = 0.2
def setup(self):
- self.data_frames = {}
- self.start = datetime(1990, 1, 1)
- self.end = datetime(2012, 1, 1)
- for x in range(100):
- if (x == 50):
- self.end += timedelta(days=1)
- self.dr = np.asarray(date_range(self.start, self.end))
- self.df = DataFrame({'a': ([0] * len(self.dr)), 'b': ([1] * len(self.dr)), 'c': ([2] * len(self.dr)), }, index=self.dr)
- self.data_frames[x] = self.df
-
- def time_panel_from_dict_two_different_indexes(self):
+ start = datetime(1990, 1, 1)
+ end = datetime(2012, 1, 1)
+ df1 = DataFrame({'a': 0, 'b': 1, 'c': 2},
+ index=DatetimeIndex(start=start, end=end, freq='D'))
+ end += timedelta(days=1)
+ df2 = DataFrame({'a': 0, 'b': 1, 'c': 2},
+ index=DatetimeIndex(start=start, end=end, freq='D'))
+ dfs = [df1] * 50 + [df2] * 50
+ self.data_frames = dict(enumerate(dfs))
+
+ def time_from_dict(self):
Panel.from_dict(self.data_frames)
| There were two benchmarks that were essentially the same (both built a Panel from a dictionary of DataFrames sharing the same index), so I removed one, along with the usual flake8 cleanup.
```
$ asv dev -b ^panel_ctor
· Discovering benchmarks
· Running 3 total benchmarks (1 commits * 1 environments * 3 benchmarks)
[ 0.00%] ·· Building for existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 0.00%] ·· Benchmarking existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 33.33%] ··· Running panel_ctor.DifferentIndexes.time_from_dict 387ms
[ 66.67%] ··· Running panel_ctor.SameIndexes.time_from_dict 32.0ms
[100.00%] ··· Running panel_ctor.TwoIndexes.time_from_dict 105ms
```
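As a side note on the `dict(enumerate(...))` idiom used in the `SameIndexes.setup` above — a minimal sketch of the same construction, with a plain string standing in for the real DataFrame:

```python
# Build a {0: df, 1: df, ..., 99: df} mapping of 100 identical frames in one
# expression instead of a manual loop. "frame" is a placeholder here; the
# benchmark uses an actual DataFrame.
df = "frame"
data_frames = dict(enumerate([df] * 100))

print(len(data_frames))                     # 100
print(min(data_frames), max(data_frames))   # 0 99
```

Because every key maps to the *same* object, this is only equivalent to the original loop when the benchmark never mutates the frames, which holds for these read-only constructor timings.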
| https://api.github.com/repos/pandas-dev/pandas/pulls/18915 | 2017-12-23T01:52:54Z | 2017-12-23T20:03:47Z | 2017-12-23T20:03:47Z | 2017-12-24T05:54:36Z |
DOC: update versionadded references of 0.22 to 0.23 | diff --git a/doc/source/merging.rst b/doc/source/merging.rst
index 86d2ec2254057..5f2e90e6ae4fe 100644
--- a/doc/source/merging.rst
+++ b/doc/source/merging.rst
@@ -568,7 +568,7 @@ standard database join operations between DataFrame objects:
.. note::
Support for specifying index levels as the ``on``, ``left_on``, and
- ``right_on`` parameters was added in version 0.22.0.
+ ``right_on`` parameters was added in version 0.23.0.
The return type will be the same as ``left``. If ``left`` is a ``DataFrame``
and ``right`` is a subclass of DataFrame, the return type will still be
diff --git a/doc/source/reshaping.rst b/doc/source/reshaping.rst
index 1b81d83bb76c7..e2b7b0e586d70 100644
--- a/doc/source/reshaping.rst
+++ b/doc/source/reshaping.rst
@@ -642,7 +642,7 @@ By default new columns will have ``np.uint8`` dtype. To choose another dtype use
pd.get_dummies(df, dtype=bool).dtypes
-.. versionadded:: 0.22.0
+.. versionadded:: 0.23.0
.. _reshaping.factorize:
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index f9bd6849c5072..845d0243c39e9 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -859,7 +859,7 @@ def rename_categories(self, new_categories, inplace=False):
* callable : a callable that is called on all items in the old
categories and whose return values comprise the new categories.
- .. versionadded:: 0.22.0
+ .. versionadded:: 0.23.0
.. warning::
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 65934494b321b..26257f6ecbc37 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -200,7 +200,7 @@
Notes
-----
Support for specifying index levels as the `on`, `left_on`, and
-`right_on` parameters was added in version 0.22.0
+`right_on` parameters was added in version 0.23.0
Examples
--------
@@ -5094,7 +5094,7 @@ def join(self, other, on=None, how='left', lsuffix='', rsuffix='',
of DataFrame objects
Support for specifying index levels as the `on` parameter was added
- in version 0.22.0
+ in version 0.23.0
Examples
--------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d08fbf8593946..e9dd82eb64834 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1692,7 +1692,7 @@ def to_json(self, path_or_buf=None, orient=None, date_format=None,
including the index (``index=False``) is only supported when
orient is 'split' or 'table'.
- .. versionadded:: 0.22.0
+ .. versionadded:: 0.23.0
Returns
-------
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 5231dc2deb233..79de63b0caeb6 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3785,7 +3785,7 @@ def drop(self, labels, errors='raise'):
level : int or str, optional, default None
Only return values from specified level (for MultiIndex)
- .. versionadded:: 0.22.0
+ .. versionadded:: 0.23.0
Returns
-------
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index cb786574909db..8b6121f360b76 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -554,7 +554,7 @@ def to_tuples(self, na_tuple=True):
Returns NA as a tuple if True, ``(nan, nan)``, or just as the NA
value itself if False, ``nan``.
- ..versionadded:: 0.22.0
+ ..versionadded:: 0.23.0
Examples
--------
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index d5551c6c9f297..c2804c8f8e63e 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -200,7 +200,7 @@ def wide_to_long(df, stubnames, i, j, sep="", suffix=r'\d+'):
.. versionadded:: 0.20.0
- .. versionchanged:: 0.22.0
+ .. versionchanged:: 0.23.0
When all suffixes are numeric, they are cast to int64/float64.
Returns
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index 5bb86885c0875..320ad109f01ba 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -731,7 +731,7 @@ def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False,
dtype : dtype, default np.uint8
Data type for new columns. Only a single dtype is allowed.
- .. versionadded:: 0.22.0
+ .. versionadded:: 0.23.0
Returns
-------
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 4245b9eb641ba..6b8edbb146e4b 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -184,7 +184,7 @@ def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False,
conversion. May produce significant speed-up when parsing duplicate date
strings, especially ones with timezone offsets.
- .. versionadded:: 0.22.0
+ .. versionadded:: 0.23.0
Returns
-------
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index 2dbfeab9cc331..97a739b349a98 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -132,7 +132,7 @@
nrows : int, default None
Number of rows to parse
- .. versionadded:: 0.22.0
+ .. versionadded:: 0.23.0
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific
@@ -150,7 +150,7 @@
format.
skip_footer : int, default 0
- .. deprecated:: 0.22.0
+ .. deprecated:: 0.23.0
Pass in `skipfooter` instead.
skipfooter : int, default 0
Rows at the end to skip (0-indexed)
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index daf05bb80d7ca..3af9e78a5aac4 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -795,7 +795,7 @@ def hide_index(self):
"""
Hide any indices from rendering.
- .. versionadded:: 0.22.0
+ .. versionadded:: 0.23.0
Returns
-------
@@ -808,7 +808,7 @@ def hide_columns(self, subset):
"""
Hide columns from rendering.
- .. versionadded:: 0.22.0
+ .. versionadded:: 0.23.0
Parameters
----------
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index eaaa14e756e22..e431c9447e8f8 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -131,7 +131,7 @@ def read(self, path, columns=None, **kwargs):
def _validate_write_lt_070(self, df):
# Compatibility shim for pyarrow < 0.7.0
- # TODO: Remove in pandas 0.22.0
+ # TODO: Remove in pandas 0.23.0
from pandas.core.indexes.multi import MultiIndex
if isinstance(df.index, MultiIndex):
msg = (
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index 4290001fea405..1cd98feb05ea0 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -209,7 +209,7 @@ def ensure_compat(dispatch, name, arg, func_kw=None, *args, **kwargs):
kwds[k] = value
# TODO: the below is only in place temporary until this module is removed.
- kwargs.pop('freq', None) # freq removed in 0.22
+ kwargs.pop('freq', None) # freq removed in 0.23
# how is a keyword that if not-None should be in kwds
how = kwargs.pop('how', None)
if how is not None:
| Follow-up on https://github.com/pandas-dev/pandas/pull/18897 | https://api.github.com/repos/pandas-dev/pandas/pulls/18911 | 2017-12-22T09:28:42Z | 2017-12-22T14:14:18Z | 2017-12-22T14:14:18Z | 2017-12-22T14:14:22Z |
CLN: ASV panel_methods | diff --git a/asv_bench/benchmarks/panel_methods.py b/asv_bench/benchmarks/panel_methods.py
index 6609305502011..9ee1949b311db 100644
--- a/asv_bench/benchmarks/panel_methods.py
+++ b/asv_bench/benchmarks/panel_methods.py
@@ -1,24 +1,19 @@
-from .pandas_vb_common import *
+import numpy as np
+from .pandas_vb_common import Panel, setup # noqa
-class PanelMethods(object):
- goal_time = 0.2
-
- def setup(self):
- self.index = date_range(start='2000', freq='D', periods=1000)
- self.panel = Panel(np.random.randn(100, len(self.index), 1000))
- def time_pct_change_items(self):
- self.panel.pct_change(1, axis='items')
+class PanelMethods(object):
- def time_pct_change_major(self):
- self.panel.pct_change(1, axis='major')
+ goal_time = 0.2
+ params = ['items', 'major', 'minor']
+ param_names = ['axis']
- def time_pct_change_minor(self):
- self.panel.pct_change(1, axis='minor')
+ def setup(self, axis):
+ self.panel = Panel(np.random.randn(100, 1000, 100))
- def time_shift(self):
- self.panel.shift(1)
+ def time_pct_change(self, axis):
+ self.panel.pct_change(1, axis=axis)
- def time_shift_minor(self):
- self.panel.shift(1, axis='minor')
+ def time_shift(self, axis):
+ self.panel.shift(1, axis=axis)
| Flake8'd and `param`'d. This benchmark was timing out on my machine, so I scaled down the size of the Panel; it should still be representative of larger Panels.
```
asv dev -b ^panel_methods
· Discovering benchmarks
· Running 2 total benchmarks (1 commits * 1 environments * 2 benchmarks)
[ 0.00%] ·· Building for existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 0.00%] ·· Benchmarking existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 50.00%] ··· Running panel_methods.PanelMethods.time_pct_change ok
[ 50.00%] ····
======= =======
axis
------- -------
items 1.54s
major 1.39s
minor 1.41s
======= =======
[100.00%] ··· Running panel_methods.PanelMethods.time_shift ok
[100.00%] ····
======= =======
axis
------- -------
items 397μs
major 385μs
minor 390μs
======= =======
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/18907 | 2017-12-22T07:35:19Z | 2017-12-22T14:28:40Z | 2017-12-22T14:28:40Z | 2017-12-22T16:45:53Z |
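For context, the `params`/`param_names` machinery used in the panel_methods diff above makes asv run the benchmark once per parameter value, passing it positionally to `setup` and each `time_*` method. A rough sketch of that driver loop, using a hypothetical stand-in class with no pandas dependency:

```python
class ShiftLike(object):
    # asv produces one result row per entry in `params`, labeled by
    # `param_names` — this mirrors the PanelMethods class in the diff.
    params = ['items', 'major', 'minor']
    param_names = ['axis']

    def setup(self, axis):
        self.data = list(range(5))

    def time_shift(self, axis):
        # Stand-in for self.panel.shift(1, axis=axis).
        return self.data[-1:] + self.data[:-1]


def run_benchmark(cls):
    """Rough emulation of asv's loop for one parameterized time_* method."""
    results = {}
    for axis in cls.params:
        bench = cls()
        bench.setup(axis)  # called once per parameter combination
        results[axis] = bench.time_shift(axis)
    return results
```

`run_benchmark(ShiftLike)` returns one entry per axis value, matching the three-row tables asv prints above (real asv times the call instead of keeping its return value).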
CLN: ASV io benchmarks | diff --git a/asv_bench/benchmarks/io/excel.py b/asv_bench/benchmarks/io/excel.py
new file mode 100644
index 0000000000000..a7c6c43d15026
--- /dev/null
+++ b/asv_bench/benchmarks/io/excel.py
@@ -0,0 +1,37 @@
+import numpy as np
+from pandas import DataFrame, date_range, ExcelWriter, read_excel
+from pandas.compat import BytesIO
+import pandas.util.testing as tm
+
+from ..pandas_vb_common import BaseIO, setup # noqa
+
+
+class Excel(object):
+
+ goal_time = 0.2
+ params = ['openpyxl', 'xlsxwriter', 'xlwt']
+ param_names = ['engine']
+
+ def setup(self, engine):
+ N = 2000
+ C = 5
+ self.df = DataFrame(np.random.randn(N, C),
+ columns=['float{}'.format(i) for i in range(C)],
+ index=date_range('20000101', periods=N, freq='H'))
+ self.df['object'] = tm.makeStringIndex(N)
+ self.bio_read = BytesIO()
+ self.writer_read = ExcelWriter(self.bio_read, engine=engine)
+ self.df.to_excel(self.writer_read, sheet_name='Sheet1')
+ self.writer_read.save()
+ self.bio_read.seek(0)
+
+ self.bio_write = BytesIO()
+ self.bio_write.seek(0)
+ self.writer_write = ExcelWriter(self.bio_write, engine=engine)
+
+ def time_read_excel(self, engine):
+ read_excel(self.bio_read)
+
+ def time_write_excel(self, engine):
+ self.df.to_excel(self.writer_write, sheet_name='Sheet1')
+ self.writer_write.save()
diff --git a/asv_bench/benchmarks/hdfstore_bench.py b/asv_bench/benchmarks/io/hdf.py
similarity index 77%
rename from asv_bench/benchmarks/hdfstore_bench.py
rename to asv_bench/benchmarks/io/hdf.py
index d7b3be25a18b9..5c0e9586c1cb5 100644
--- a/asv_bench/benchmarks/hdfstore_bench.py
+++ b/asv_bench/benchmarks/io/hdf.py
@@ -1,11 +1,11 @@
import numpy as np
-from pandas import DataFrame, Panel, date_range, HDFStore
+from pandas import DataFrame, Panel, date_range, HDFStore, read_hdf
import pandas.util.testing as tm
-from .pandas_vb_common import BaseIO, setup # noqa
+from ..pandas_vb_common import BaseIO, setup # noqa
-class HDF5(BaseIO):
+class HDFStoreDataFrame(BaseIO):
goal_time = 0.2
@@ -34,9 +34,9 @@ def setup(self):
self.df_dc = DataFrame(np.random.randn(N, 10),
columns=['C%03d' % i for i in range(10)])
- self.f = '__test__.h5'
+ self.fname = '__test__.h5'
- self.store = HDFStore(self.f)
+ self.store = HDFStore(self.fname)
self.store.put('fixed', self.df)
self.store.put('fixed_mixed', self.df_mixed)
self.store.append('table', self.df2)
@@ -46,7 +46,7 @@ def setup(self):
def teardown(self):
self.store.close()
- self.remove(self.f)
+ self.remove(self.fname)
def time_read_store(self):
self.store.get('fixed')
@@ -99,25 +99,48 @@ def time_store_info(self):
self.store.info()
-class HDF5Panel(BaseIO):
+class HDFStorePanel(BaseIO):
goal_time = 0.2
def setup(self):
- self.f = '__test__.h5'
+ self.fname = '__test__.h5'
self.p = Panel(np.random.randn(20, 1000, 25),
items=['Item%03d' % i for i in range(20)],
major_axis=date_range('1/1/2000', periods=1000),
minor_axis=['E%03d' % i for i in range(25)])
- self.store = HDFStore(self.f)
+ self.store = HDFStore(self.fname)
self.store.append('p1', self.p)
def teardown(self):
self.store.close()
- self.remove(self.f)
+ self.remove(self.fname)
def time_read_store_table_panel(self):
self.store.select('p1')
def time_write_store_table_panel(self):
self.store.append('p2', self.p)
+
+
+class HDF(BaseIO):
+
+ goal_time = 0.2
+ params = ['table', 'fixed']
+ param_names = ['format']
+
+ def setup(self, format):
+ self.fname = '__test__.h5'
+ N = 100000
+ C = 5
+ self.df = DataFrame(np.random.randn(N, C),
+ columns=['float{}'.format(i) for i in range(C)],
+ index=date_range('20000101', periods=N, freq='H'))
+ self.df['object'] = tm.makeStringIndex(N)
+ self.df.to_hdf(self.fname, 'df', format=format)
+
+ def time_read_hdf(self, format):
+ read_hdf(self.fname, 'df')
+
+ def time_write_hdf(self, format):
+ self.df.to_hdf(self.fname, 'df', format=format)
diff --git a/asv_bench/benchmarks/io/msgpack.py b/asv_bench/benchmarks/io/msgpack.py
new file mode 100644
index 0000000000000..8ccce01117ca4
--- /dev/null
+++ b/asv_bench/benchmarks/io/msgpack.py
@@ -0,0 +1,26 @@
+import numpy as np
+from pandas import DataFrame, date_range, read_msgpack
+import pandas.util.testing as tm
+
+from ..pandas_vb_common import BaseIO, setup # noqa
+
+
+class MSGPack(BaseIO):
+
+ goal_time = 0.2
+
+ def setup(self):
+ self.fname = '__test__.msg'
+ N = 100000
+ C = 5
+ self.df = DataFrame(np.random.randn(N, C),
+ columns=['float{}'.format(i) for i in range(C)],
+ index=date_range('20000101', periods=N, freq='H'))
+ self.df['object'] = tm.makeStringIndex(N)
+ self.df.to_msgpack(self.fname)
+
+ def time_read_msgpack(self):
+ read_msgpack(self.fname)
+
+ def time_write_msgpack(self):
+ self.df.to_msgpack(self.fname)
diff --git a/asv_bench/benchmarks/io/pickle.py b/asv_bench/benchmarks/io/pickle.py
new file mode 100644
index 0000000000000..2ad0fcca6eb26
--- /dev/null
+++ b/asv_bench/benchmarks/io/pickle.py
@@ -0,0 +1,26 @@
+import numpy as np
+from pandas import DataFrame, date_range, read_pickle
+import pandas.util.testing as tm
+
+from ..pandas_vb_common import BaseIO, setup # noqa
+
+
+class Pickle(BaseIO):
+
+ goal_time = 0.2
+
+ def setup(self):
+ self.fname = '__test__.pkl'
+ N = 100000
+ C = 5
+ self.df = DataFrame(np.random.randn(N, C),
+ columns=['float{}'.format(i) for i in range(C)],
+ index=date_range('20000101', periods=N, freq='H'))
+ self.df['object'] = tm.makeStringIndex(N)
+ self.df.to_pickle(self.fname)
+
+ def time_read_pickle(self):
+ read_pickle(self.fname)
+
+ def time_write_pickle(self):
+ self.df.to_pickle(self.fname)
diff --git a/asv_bench/benchmarks/io/sas.py b/asv_bench/benchmarks/io/sas.py
new file mode 100644
index 0000000000000..526c524de7fff
--- /dev/null
+++ b/asv_bench/benchmarks/io/sas.py
@@ -0,0 +1,21 @@
+import os
+
+from pandas import read_sas
+
+
+class SAS(object):
+
+ goal_time = 0.2
+ params = ['sas7bdat', 'xport']
+ param_names = ['format']
+
+ def setup(self, format):
+ # Read files that are located in 'pandas/io/tests/sas/data'
+ files = {'sas7bdat': 'test1.sas7bdat', 'xport': 'paxraw_d_short.xpt'}
+ file = files[format]
+ paths = [os.path.dirname(__file__), '..', '..', '..', 'pandas',
+ 'tests', 'io', 'sas', 'data', file]
+ self.f = os.path.join(*paths)
+
+ def time_read_sas(self, format):
+ read_sas(self.f, format=format)
diff --git a/asv_bench/benchmarks/io/sql.py b/asv_bench/benchmarks/io/sql.py
new file mode 100644
index 0000000000000..ef4e501e5f3b9
--- /dev/null
+++ b/asv_bench/benchmarks/io/sql.py
@@ -0,0 +1,132 @@
+import sqlite3
+
+import numpy as np
+import pandas.util.testing as tm
+from pandas import DataFrame, date_range, read_sql_query, read_sql_table
+from sqlalchemy import create_engine
+
+from ..pandas_vb_common import setup # noqa
+
+
+class SQL(object):
+
+ goal_time = 0.2
+ params = ['sqlalchemy', 'sqlite']
+ param_names = ['connection']
+
+ def setup(self, connection):
+ N = 10000
+ con = {'sqlalchemy': create_engine('sqlite:///:memory:'),
+ 'sqlite': sqlite3.connect(':memory:')}
+ self.table_name = 'test_type'
+ self.query_all = 'SELECT * FROM {}'.format(self.table_name)
+ self.con = con[connection]
+ self.df = DataFrame({'float': np.random.randn(N),
+ 'float_with_nan': np.random.randn(N),
+ 'string': ['foo'] * N,
+ 'bool': [True] * N,
+ 'int': np.random.randint(0, N, size=N),
+ 'datetime': date_range('2000-01-01',
+ periods=N,
+ freq='s')},
+ index=tm.makeStringIndex(N))
+ self.df.loc[1000:3000, 'float_with_nan'] = np.nan
+ self.df['datetime_string'] = self.df['datetime'].astype(str)
+ self.df.to_sql(self.table_name, self.con, if_exists='replace')
+
+ def time_to_sql_dataframe(self, connection):
+ self.df.to_sql('test1', self.con, if_exists='replace')
+
+ def time_read_sql_query(self, connection):
+ read_sql_query(self.query_all, self.con)
+
+
+class WriteSQLDtypes(object):
+
+ goal_time = 0.2
+ params = (['sqlalchemy', 'sqlite'],
+ ['float', 'float_with_nan', 'string', 'bool', 'int', 'datetime'])
+ param_names = ['connection', 'dtype']
+
+ def setup(self, connection, dtype):
+ N = 10000
+ con = {'sqlalchemy': create_engine('sqlite:///:memory:'),
+ 'sqlite': sqlite3.connect(':memory:')}
+ self.table_name = 'test_type'
+ self.query_col = 'SELECT {} FROM {}'.format(dtype, self.table_name)
+ self.con = con[connection]
+ self.df = DataFrame({'float': np.random.randn(N),
+ 'float_with_nan': np.random.randn(N),
+ 'string': ['foo'] * N,
+ 'bool': [True] * N,
+ 'int': np.random.randint(0, N, size=N),
+ 'datetime': date_range('2000-01-01',
+ periods=N,
+ freq='s')},
+ index=tm.makeStringIndex(N))
+ self.df.loc[1000:3000, 'float_with_nan'] = np.nan
+ self.df['datetime_string'] = self.df['datetime'].astype(str)
+ self.df.to_sql(self.table_name, self.con, if_exists='replace')
+
+ def time_to_sql_dataframe_column(self, connection, dtype):
+ self.df[[dtype]].to_sql('test1', self.con, if_exists='replace')
+
+ def time_read_sql_query_select_column(self, connection, dtype):
+ read_sql_query(self.query_col, self.con)
+
+
+class ReadSQLTable(object):
+
+ goal_time = 0.2
+
+ def setup(self):
+ N = 10000
+ self.table_name = 'test'
+ self.con = create_engine('sqlite:///:memory:')
+ self.df = DataFrame({'float': np.random.randn(N),
+ 'float_with_nan': np.random.randn(N),
+ 'string': ['foo'] * N,
+ 'bool': [True] * N,
+ 'int': np.random.randint(0, N, size=N),
+ 'datetime': date_range('2000-01-01',
+ periods=N,
+ freq='s')},
+ index=tm.makeStringIndex(N))
+ self.df.loc[1000:3000, 'float_with_nan'] = np.nan
+ self.df['datetime_string'] = self.df['datetime'].astype(str)
+ self.df.to_sql(self.table_name, self.con, if_exists='replace')
+
+ def time_read_sql_table_all(self):
+ read_sql_table(self.table_name, self.con)
+
+ def time_read_sql_table_parse_dates(self):
+ read_sql_table(self.table_name, self.con, columns=['datetime_string'],
+ parse_dates=['datetime_string'])
+
+
+class ReadSQLTableDtypes(object):
+
+ goal_time = 0.2
+
+ params = ['float', 'float_with_nan', 'string', 'bool', 'int', 'datetime']
+ param_names = ['dtype']
+
+ def setup(self, dtype):
+ N = 10000
+ self.table_name = 'test'
+ self.con = create_engine('sqlite:///:memory:')
+ self.df = DataFrame({'float': np.random.randn(N),
+ 'float_with_nan': np.random.randn(N),
+ 'string': ['foo'] * N,
+ 'bool': [True] * N,
+ 'int': np.random.randint(0, N, size=N),
+ 'datetime': date_range('2000-01-01',
+ periods=N,
+ freq='s')},
+ index=tm.makeStringIndex(N))
+ self.df.loc[1000:3000, 'float_with_nan'] = np.nan
+ self.df['datetime_string'] = self.df['datetime'].astype(str)
+ self.df.to_sql(self.table_name, self.con, if_exists='replace')
+
+ def time_read_sql_table_column(self, dtype):
+ read_sql_table(self.table_name, self.con, columns=[dtype])
diff --git a/asv_bench/benchmarks/io/stata.py b/asv_bench/benchmarks/io/stata.py
new file mode 100644
index 0000000000000..e0f5752ca930f
--- /dev/null
+++ b/asv_bench/benchmarks/io/stata.py
@@ -0,0 +1,37 @@
+import numpy as np
+from pandas import DataFrame, date_range, read_stata
+import pandas.util.testing as tm
+
+from ..pandas_vb_common import BaseIO, setup # noqa
+
+
+class Stata(BaseIO):
+
+ goal_time = 0.2
+ params = ['tc', 'td', 'tm', 'tw', 'th', 'tq', 'ty']
+ param_names = ['convert_dates']
+
+ def setup(self, convert_dates):
+ self.fname = '__test__.dta'
+ N = 100000
+ C = 5
+ self.df = DataFrame(np.random.randn(N, C),
+ columns=['float{}'.format(i) for i in range(C)],
+ index=date_range('20000101', periods=N, freq='H'))
+ self.df['object'] = tm.makeStringIndex(N)
+ self.df['int8_'] = np.random.randint(np.iinfo(np.int8).min,
+ np.iinfo(np.int8).max - 27, N)
+ self.df['int16_'] = np.random.randint(np.iinfo(np.int16).min,
+ np.iinfo(np.int16).max - 27, N)
+ self.df['int32_'] = np.random.randint(np.iinfo(np.int32).min,
+ np.iinfo(np.int32).max - 27, N)
+ self.df['float32_'] = np.array(np.random.randn(N),
+ dtype=np.float32)
+ self.convert_dates = {'index': convert_dates}
+ self.df.to_stata(self.fname, self.convert_dates)
+
+ def time_read_stata(self, convert_dates):
+ read_stata(self.fname)
+
+ def time_write_stata(self, convert_dates):
+ self.df.to_stata(self.fname, self.convert_dates)
diff --git a/asv_bench/benchmarks/io_sql.py b/asv_bench/benchmarks/io_sql.py
deleted file mode 100644
index ec855e5d33525..0000000000000
--- a/asv_bench/benchmarks/io_sql.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import sqlalchemy
-from .pandas_vb_common import *
-import sqlite3
-from sqlalchemy import create_engine
-
-
-#-------------------------------------------------------------------------------
-# to_sql
-
-class WriteSQL(object):
- goal_time = 0.2
-
- def setup(self):
- self.engine = create_engine('sqlite:///:memory:')
- self.con = sqlite3.connect(':memory:')
- self.index = tm.makeStringIndex(10000)
- self.df = DataFrame({'float1': randn(10000), 'float2': randn(10000), 'string1': (['foo'] * 10000), 'bool1': ([True] * 10000), 'int1': np.random.randint(0, 100000, size=10000), }, index=self.index)
-
- def time_fallback(self):
- self.df.to_sql('test1', self.con, if_exists='replace')
-
- def time_sqlalchemy(self):
- self.df.to_sql('test1', self.engine, if_exists='replace')
-
-
-#-------------------------------------------------------------------------------
-# read_sql
-
-class ReadSQL(object):
- goal_time = 0.2
-
- def setup(self):
- self.engine = create_engine('sqlite:///:memory:')
- self.con = sqlite3.connect(':memory:')
- self.index = tm.makeStringIndex(10000)
- self.df = DataFrame({'float1': randn(10000), 'float2': randn(10000), 'string1': (['foo'] * 10000), 'bool1': ([True] * 10000), 'int1': np.random.randint(0, 100000, size=10000), }, index=self.index)
- self.df.to_sql('test2', self.engine, if_exists='replace')
- self.df.to_sql('test2', self.con, if_exists='replace')
-
- def time_read_query_fallback(self):
- read_sql_query('SELECT * FROM test2', self.con)
-
- def time_read_query_sqlalchemy(self):
- read_sql_query('SELECT * FROM test2', self.engine)
-
- def time_read_table_sqlalchemy(self):
- read_sql_table('test2', self.engine)
-
-
-#-------------------------------------------------------------------------------
-# type specific write
-
-class WriteSQLTypes(object):
- goal_time = 0.2
-
- def setup(self):
- self.engine = create_engine('sqlite:///:memory:')
- self.con = sqlite3.connect(':memory:')
- self.df = DataFrame({'float': randn(10000), 'string': (['foo'] * 10000), 'bool': ([True] * 10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
- self.df.loc[1000:3000, 'float'] = np.nan
-
- def time_string_fallback(self):
- self.df[['string']].to_sql('test_string', self.con, if_exists='replace')
-
- def time_string_sqlalchemy(self):
- self.df[['string']].to_sql('test_string', self.engine, if_exists='replace')
-
- def time_float_fallback(self):
- self.df[['float']].to_sql('test_float', self.con, if_exists='replace')
-
- def time_float_sqlalchemy(self):
- self.df[['float']].to_sql('test_float', self.engine, if_exists='replace')
-
- def time_datetime_sqlalchemy(self):
- self.df[['datetime']].to_sql('test_datetime', self.engine, if_exists='replace')
-
-
-#-------------------------------------------------------------------------------
-# type specific read
-
-class ReadSQLTypes(object):
- goal_time = 0.2
-
- def setup(self):
- self.engine = create_engine('sqlite:///:memory:')
- self.con = sqlite3.connect(':memory:')
- self.df = DataFrame({'float': randn(10000), 'datetime': date_range('2000-01-01', periods=10000, freq='s'), })
- self.df['datetime_string'] = self.df['datetime'].map(str)
- self.df.to_sql('test_type', self.engine, if_exists='replace')
- self.df[['float', 'datetime_string']].to_sql('test_type', self.con, if_exists='replace')
-
- def time_datetime_read_and_parse_sqlalchemy(self):
- read_sql_table('test_type', self.engine, columns=['datetime_string'], parse_dates=['datetime_string'])
-
- def time_datetime_read_as_native_sqlalchemy(self):
- read_sql_table('test_type', self.engine, columns=['datetime'])
-
- def time_float_read_query_fallback(self):
- read_sql_query('SELECT float FROM test_type', self.con)
-
- def time_float_read_query_sqlalchemy(self):
- read_sql_query('SELECT float FROM test_type', self.engine)
-
- def time_float_read_table_sqlalchemy(self):
- read_sql_table('test_type', self.engine, columns=['float'])
diff --git a/asv_bench/benchmarks/packers.py b/asv_bench/benchmarks/packers.py
deleted file mode 100644
index 7b6cefc56f0da..0000000000000
--- a/asv_bench/benchmarks/packers.py
+++ /dev/null
@@ -1,243 +0,0 @@
-from .pandas_vb_common import *
-from numpy.random import randint
-import pandas as pd
-from collections import OrderedDict
-from pandas.compat import BytesIO
-import sqlite3
-import os
-from sqlalchemy import create_engine
-import numpy as np
-from random import randrange
-
-
-class _Packers(object):
- goal_time = 0.2
-
- def _setup(self):
- self.f = '__test__.msg'
- self.N = 100000
- self.C = 5
- self.index = date_range('20000101', periods=self.N, freq='H')
- self.df = DataFrame({'float{0}'.format(i): randn(self.N) for i in range(self.C)}, index=self.index)
- self.df2 = self.df.copy()
- self.df2['object'] = [('%08x' % randrange((16 ** 8))) for _ in range(self.N)]
- self.remove(self.f)
-
- def remove(self, f):
- try:
- os.remove(f)
- except:
- pass
-
- def teardown(self):
- self.remove(self.f)
-
-
-class Packers(_Packers):
-
- def setup(self):
- self._setup()
- self.df.to_csv(self.f)
-
- def time_packers_read_csv(self):
- pd.read_csv(self.f)
-
-
-class packers_read_excel(_Packers):
-
- def setup(self):
- self._setup()
- self.bio = BytesIO()
- self.writer = pd.io.excel.ExcelWriter(self.bio, engine='xlsxwriter')
- self.df[:2000].to_excel(self.writer)
- self.writer.save()
-
- def time_packers_read_excel(self):
- self.bio.seek(0)
- pd.read_excel(self.bio)
-
-
-class packers_read_hdf_store(_Packers):
-
- def setup(self):
- self._setup()
- self.df2.to_hdf(self.f, 'df')
-
- def time_packers_read_hdf_store(self):
- pd.read_hdf(self.f, 'df')
-
-
-class packers_read_hdf_table(_Packers):
-
- def setup(self):
- self._setup()
- self.df2.to_hdf(self.f, 'df', format='table')
-
- def time_packers_read_hdf_table(self):
- pd.read_hdf(self.f, 'df')
-
-
-class packers_read_pack(_Packers):
-
- def setup(self):
- self._setup()
- self.df2.to_msgpack(self.f)
-
- def time_packers_read_pack(self):
- pd.read_msgpack(self.f)
-
-
-class packers_read_pickle(_Packers):
-
- def setup(self):
- self._setup()
- self.df2.to_pickle(self.f)
-
- def time_packers_read_pickle(self):
- pd.read_pickle(self.f)
-
-
-class packers_read_sql(_Packers):
-
- def setup(self):
- self._setup()
- self.engine = create_engine('sqlite:///:memory:')
- self.df2.to_sql('table', self.engine, if_exists='replace')
-
- def time_packers_read_sql(self):
- pd.read_sql_table('table', self.engine)
-
-
-class packers_read_stata(_Packers):
-
- def setup(self):
- self._setup()
- self.df.to_stata(self.f, {'index': 'tc', })
-
- def time_packers_read_stata(self):
- pd.read_stata(self.f)
-
-
-class packers_read_stata_with_validation(_Packers):
-
- def setup(self):
- self._setup()
- self.df['int8_'] = [randint(np.iinfo(np.int8).min, (np.iinfo(np.int8).max - 27)) for _ in range(self.N)]
- self.df['int16_'] = [randint(np.iinfo(np.int16).min, (np.iinfo(np.int16).max - 27)) for _ in range(self.N)]
- self.df['int32_'] = [randint(np.iinfo(np.int32).min, (np.iinfo(np.int32).max - 27)) for _ in range(self.N)]
- self.df['float32_'] = np.array(randn(self.N), dtype=np.float32)
- self.df.to_stata(self.f, {'index': 'tc', })
-
- def time_packers_read_stata_with_validation(self):
- pd.read_stata(self.f)
-
-
-class packers_read_sas(_Packers):
-
- def setup(self):
-
- testdir = os.path.join(os.path.dirname(__file__), '..', '..',
- 'pandas', 'tests', 'io', 'sas')
- if not os.path.exists(testdir):
- testdir = os.path.join(os.path.dirname(__file__), '..', '..',
- 'pandas', 'io', 'tests', 'sas')
- self.f = os.path.join(testdir, 'data', 'test1.sas7bdat')
- self.f2 = os.path.join(testdir, 'data', 'paxraw_d_short.xpt')
-
- def time_read_sas7bdat(self):
- pd.read_sas(self.f, format='sas7bdat')
-
- def time_read_xport(self):
- pd.read_sas(self.f2, format='xport')
-
-
-class CSV(_Packers):
-
- def setup(self):
- self._setup()
-
- def time_write_csv(self):
- self.df.to_csv(self.f)
-
-
-class Excel(_Packers):
-
- def setup(self):
- self._setup()
- self.bio = BytesIO()
-
- def time_write_excel_openpyxl(self):
- self.bio.seek(0)
- self.writer = pd.io.excel.ExcelWriter(self.bio, engine='openpyxl')
- self.df[:2000].to_excel(self.writer)
- self.writer.save()
-
- def time_write_excel_xlsxwriter(self):
- self.bio.seek(0)
- self.writer = pd.io.excel.ExcelWriter(self.bio, engine='xlsxwriter')
- self.df[:2000].to_excel(self.writer)
- self.writer.save()
-
- def time_write_excel_xlwt(self):
- self.bio.seek(0)
- self.writer = pd.io.excel.ExcelWriter(self.bio, engine='xlwt')
- self.df[:2000].to_excel(self.writer)
- self.writer.save()
-
-
-class HDF(_Packers):
-
- def setup(self):
- self._setup()
-
- def time_write_hdf_store(self):
- self.df2.to_hdf(self.f, 'df')
-
- def time_write_hdf_table(self):
- self.df2.to_hdf(self.f, 'df', table=True)
-
-
-class MsgPack(_Packers):
-
- def setup(self):
- self._setup()
-
- def time_write_msgpack(self):
- self.df2.to_msgpack(self.f)
-
-
-class Pickle(_Packers):
-
- def setup(self):
- self._setup()
-
- def time_write_pickle(self):
- self.df2.to_pickle(self.f)
-
-
-class SQL(_Packers):
-
- def setup(self):
- self._setup()
- self.engine = create_engine('sqlite:///:memory:')
-
- def time_write_sql(self):
- self.df2.to_sql('table', self.engine, if_exists='replace')
-
-
-class STATA(_Packers):
-
- def setup(self):
- self._setup()
-
- self.df3=self.df.copy()
- self.df3['int8_'] = [randint(np.iinfo(np.int8).min, (np.iinfo(np.int8).max - 27)) for _ in range(self.N)]
- self.df3['int16_'] = [randint(np.iinfo(np.int16).min, (np.iinfo(np.int16).max - 27)) for _ in range(self.N)]
- self.df3['int32_'] = [randint(np.iinfo(np.int32).min, (np.iinfo(np.int32).max - 27)) for _ in range(self.N)]
- self.df3['float32_'] = np.array(randn(self.N), dtype=np.float32)
-
- def time_write_stata(self):
- self.df.to_stata(self.f, {'index': 'tc', })
-
- def time_write_stata_with_validation(self):
- self.df3.to_stata(self.f, {'index': 'tc', })
| [xref](https://github.com/pandas-dev/pandas/pull/18815#issuecomment-352723075)
Consolidated the benchmarks from `io_sql.py`, `hdfstore_bench.py`, and `packers.py` into their own files in the `io` folder. Benchmarks are largely the same as they were, just cleaned and simplified where possible.
```
asv dev -b ^io
· Discovering benchmarks
· Running 67 total benchmarks (1 commits * 1 environments * 67 benchmarks)
[ 0.00%] ·· Building for existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 0.00%] ·· Benchmarking existing-py_home_matt_anaconda_envs_pandas_dev_bin_python
[ 1.49%] ··· Running io.csv.ReadCSVCategorical.time_convert_direct 98.5ms
[ 2.99%] ··· Running io.csv.ReadCSVCategorical.time_convert_post 142ms
[ 4.48%] ··· Running io.csv.ReadCSVComment.time_comment 53.6ms
[ 5.97%] ··· Running io.csv.ReadCSVDInferDatetimeFormat.time_read_csv ok
[ 5.97%] ····
======================= ======== ========= ========
-- format
----------------------- ---------------------------
infer_datetime_format custom iso8601 ymd
======================= ======== ========= ========
True 24.4ms 4.73ms 4.99ms
False 662ms 3.50ms 3.23ms
======================= ======== ========= ========
[ 7.46%] ··· Running io.csv.ReadCSVFloatPrecision.time_read_csv ok
[ 7.46%] ····
===== ========== ========== ================ ========== ========== ================
-- decimal / float_precision
----- -----------------------------------------------------------------------------
sep . / None . / high . / round_trip _ / None _ / high _ / round_trip
===== ========== ========== ================ ========== ========== ================
, 3.91ms 3.77ms 5.52ms 4.25ms 4.19ms 4.20ms
; 3.86ms 3.73ms 5.56ms 4.16ms 4.23ms 4.13ms
===== ========== ========== ================ ========== ========== ================
[ 8.96%] ··· Running io.csv.ReadCSVFloatPrecision.time_read_csv_python_engine ok
[ 8.96%] ····
===== ========== ========== ================ ========== ========== ================
-- decimal / float_precision
----- -----------------------------------------------------------------------------
sep . / None . / high . / round_trip _ / None _ / high _ / round_trip
===== ========== ========== ================ ========== ========== ================
, 8.54ms 8.44ms 8.50ms 7.03ms 6.99ms 6.99ms
; 8.46ms 8.53ms 8.42ms 7.00ms 7.09ms 7.00ms
===== ========== ========== ================ ========== ========== ================
[ 10.45%] ··· Running io.csv.ReadCSVParseDates.time_baseline 2.86ms
[ 11.94%] ··· Running io.csv.ReadCSVParseDates.time_multiple_date 2.85ms
[ 13.43%] ··· Running io.csv.ReadCSVSkipRows.time_skipprows ok
[ 13.43%] ····
========== ========
skiprows
---------- --------
None 46.1ms
10000 31.4ms
========== ========
[ 14.93%] ··· Running io.csv.ReadCSVThousands.time_thousands ok
[ 14.93%] ····
===== ======== ========
-- thousands
----- -----------------
sep None ,
===== ======== ========
, 38.6ms 35.6ms
| 38.9ms 38.6ms
===== ======== ========
[ 16.42%] ··· Running io.csv.ReadUint64Integers.time_read_uint64 8.95ms
[ 17.91%] ··· Running io.csv.ReadUint64Integers.time_read_uint64_na_values 13.0ms
[ 19.40%] ··· Running io.csv.ReadUint64Integers.time_read_uint64_neg_values 13.0ms
[ 20.90%] ··· Running io.csv.S3.time_read_csv_10_rows ok
[ 20.90%] ····
============= ======== =======
-- engine
------------- ----------------
compression python c
============= ======== =======
None 8.43s 6.27s
gzip 6.80s 7.74s
bz2 36.6s n/a
============= ======== =======
[ 22.39%] ··· Running io.csv.ToCSV.time_frame ok
[ 22.39%] ····
======= ========
kind
------- --------
wide 89.1ms
long 172ms
mixed 38.6ms
======= ========
[ 23.88%] ··· Running io.csv.ToCSVDatetime.time_frame_date_formatting 23.1ms
[ 25.37%] ··· Running io.excel.Excel.time_read_excel ok
[ 25.37%] ····
============ =======
engine
------------ -------
openpyxl 565ms
xlsxwriter 569ms
xlwt 158ms
============ =======
[ 26.87%] ··· Running io.excel.Excel.time_write_excel ok
[ 26.87%] ····
============ =======
engine
------------ -------
openpyxl 1.24s
xlsxwriter 970ms
xlwt 704ms
============ =======
[ 28.36%] ··· Running io.hdf.HDF.time_read_hdf ok
[ 28.36%] ····
======== ========
format
-------- --------
table 63.0ms
fixed 79.8ms
======== ========
[ 29.85%] ··· Running io.hdf.HDF.time_write_hdf ok
[ 29.85%] ····
======== =======
format
-------- -------
table 121ms
fixed 149ms
======== =======
[ 31.34%] ··· Running io.hdf.HDFStoreDataFrame.time_query_store_table 26.4ms
[ 32.84%] ··· Running io.hdf.HDFStoreDataFrame.time_query_store_table_wide 33.2ms
[ 34.33%] ··· Running io.hdf.HDFStoreDataFrame.time_read_store 12.1ms
[ 35.82%] ··· Running io.hdf.HDFStoreDataFrame.time_read_store_mixed 25.4ms
[ 37.31%] ··· Running io.hdf.HDFStoreDataFrame.time_read_store_table 18.0ms
[ 38.81%] ··· Running io.hdf.HDFStoreDataFrame.time_read_store_table_mixed 36.6ms
[ 40.30%] ··· Running io.hdf.HDFStoreDataFrame.time_read_store_table_wide 30.1ms
[ 41.79%] ··· Running io.hdf.HDFStoreDataFrame.time_store_info 53.2ms
[ 43.28%] ··· Running io.hdf.HDFStoreDataFrame.time_store_repr 112μs
[ 44.78%] ··· Running io.hdf.HDFStoreDataFrame.time_store_str 109μs
[ 46.27%] ··· Running io.hdf.HDFStoreDataFrame.time_write_store 13.6ms
[ 47.76%] ··· Running io.hdf.HDFStoreDataFrame.time_write_store_mixed 31.6ms
[ 49.25%] ··· Running io.hdf.HDFStoreDataFrame.time_write_store_table 45.1ms
[ 50.75%] ··· Running io.hdf.HDFStoreDataFrame.time_write_store_table_dc 357ms
[ 52.24%] ··· Running io.hdf.HDFStoreDataFrame.time_write_store_table_mixed 56.7ms
[ 53.73%] ··· Running io.hdf.HDFStoreDataFrame.time_write_store_table_wide 160ms
[ 55.22%] ··· Running io.hdf.HDFStorePanel.time_read_store_table_panel 55.2ms
[ 56.72%] ··· Running io.hdf.HDFStorePanel.time_write_store_table_panel 90.3ms
[ 58.21%] ··· Running io.json.ReadJSON.time_read_json ok
[ 58.21%] ····
========= ======= ==========
-- index
--------- ------------------
orient int datetime
========= ======= ==========
split 253ms 270ms
index 7.80s 7.97s
records 619ms 627ms
========= ======= ==========
[ 59.70%] ··· Running io.json.ReadJSONLines.peakmem_read_json_lines ok
[ 59.70%] ····
========== ======
index
---------- ------
int 192M
datetime 192M
========== ======
[ 61.19%] ··· Running io.json.ReadJSONLines.peakmem_read_json_lines_concat ok
[ 61.19%] ····
========== ======
index
---------- ------
int 164M
datetime 164M
========== ======
[ 62.69%] ··· Running io.json.ReadJSONLines.time_read_json_lines ok
[ 62.69%] ····
========== =======
index
---------- -------
int 734ms
datetime 740ms
========== =======
[ 64.18%] ··· Running io.json.ReadJSONLines.time_read_json_lines_concat ok
[ 64.18%] ····
========== =======
index
---------- -------
int 767ms
datetime 763ms
========== =======
[ 65.67%] ··· Running io.json.ToJSON.time_delta_int_tstamp ok
[ 65.67%] ····
========= =======
orient
--------- -------
split 347ms
columns 353ms
index 400ms
========= =======
[ 67.16%] ··· Running io.json.ToJSON.time_delta_int_tstamp_lines ok
[ 67.16%] ····
========= =======
orient
--------- -------
split 634ms
columns 643ms
index 632ms
========= =======
[ 68.66%] ··· Running io.json.ToJSON.time_float_int ok
[ 68.66%] ····
========= =======
orient
--------- -------
split 232ms
columns 233ms
index 389ms
========= =======
[ 70.15%] ··· Running io.json.ToJSON.time_float_int_lines ok
[ 70.15%] ····
========= =======
orient
--------- -------
split 684ms
columns 686ms
index 685ms
========= =======
[ 71.64%] ··· Running io.json.ToJSON.time_float_int_str ok
[ 71.64%] ····
========= =======
orient
--------- -------
split 354ms
columns 231ms
index 411ms
========= =======
[ 73.13%] ··· Running io.json.ToJSON.time_float_int_str_lines ok
[ 73.13%] ····
========= =======
orient
--------- -------
split 713ms
columns 713ms
index 714ms
========= =======
[ 74.63%] ··· Running io.json.ToJSON.time_floats_with_dt_index ok
[ 74.63%] ····
========= =======
orient
--------- -------
split 191ms
columns 222ms
index 220ms
========= =======
[ 76.12%] ··· Running io.json.ToJSON.time_floats_with_dt_index_lines ok
[ 76.12%] ····
========= =======
orient
--------- -------
split 531ms
columns 527ms
index 533ms
========= =======
[ 77.61%] ··· Running io.json.ToJSON.time_floats_with_int_idex_lines ok
[ 77.61%] ····
========= =======
orient
--------- -------
split 525ms
columns 525ms
index 524ms
========= =======
[ 79.10%] ··· Running io.json.ToJSON.time_floats_with_int_index ok
[ 79.10%] ····
========= =======
orient
--------- -------
split 167ms
columns 183ms
index 194ms
========= =======
[ 80.60%] ··· Running io.msgpack.MSGPack.time_read_msgpack 48.2ms
[ 82.09%] ··· Running io.msgpack.MSGPack.time_write_msgpack 73.5ms
[ 83.58%] ··· Running io.pickle.Pickle.time_read_pickle 110ms
[ 85.07%] ··· Running io.pickle.Pickle.time_write_pickle 157ms
[ 86.57%] ··· Running io.sas.SAS.time_read_msgpack ok
[ 86.57%] ····
========== ========
format
---------- --------
sas7bdat 608ms
xport 9.92ms
========== ========
[ 88.06%] ··· Running io.sql.ReadSQLTable.time_read_sql_table_all ok
[ 88.06%] ····
================ =======
dtype
---------------- -------
float 101ms
float_with_nan 101ms
string 101ms
bool 102ms
int 101ms
datetime 101ms
================ =======
[ 89.55%] ··· Running io.sql.ReadSQLTable.time_read_sql_table_column ok
[ 89.55%] ····
================ ========
dtype
---------------- --------
float 24.1ms
float_with_nan 23.0ms
string 25.1ms
bool 23.8ms
int 25.6ms
datetime 46.6ms
================ ========
[ 91.04%] ··· Running io.sql.ReadSQLTable.time_read_sql_table_parse_dates ok
[ 91.04%] ····
================ ========
dtype
---------------- --------
float 31.5ms
float_with_nan 31.9ms
string 31.0ms
bool 30.9ms
int 31.2ms
datetime 31.5ms
================ ========
[ 92.54%] ··· Running io.sql.SQL.time_read_sql_query_select_all ok
[ 92.54%] ····
============ ======== ================ ======== ======== ======== ==========
-- dtype
------------ ---------------------------------------------------------------
connection float float_with_nan string bool int datetime
============ ======== ================ ======== ======== ======== ==========
sqlalchemy 70.6ms 71.0ms 70.8ms 70.4ms 70.6ms 71.5ms
sqlite 58.7ms 58.1ms 58.0ms 57.7ms 58.5ms 58.2ms
============ ======== ================ ======== ======== ======== ==========
[ 94.03%] ··· Running io.sql.SQL.time_read_sql_query_select_column ok
[ 94.03%] ····
============ ======== ================ ======== ======== ======== ==========
-- dtype
------------ ---------------------------------------------------------------
connection float float_with_nan string bool int datetime
============ ======== ================ ======== ======== ======== ==========
sqlalchemy 71.6ms 70.6ms 71.5ms 70.8ms 70.8ms 70.4ms
sqlite 58.4ms 58.8ms 58.4ms 57.8ms 57.7ms 58.1ms
============ ======== ================ ======== ======== ======== ==========
[ 95.52%] ··· Running io.sql.SQL.time_to_sql_dataframe_colums ok
[ 95.52%] ····
============ ======== ================ ======== ======== ======== ==========
-- dtype
------------ ---------------------------------------------------------------
connection float float_with_nan string bool int datetime
============ ======== ================ ======== ======== ======== ==========
sqlalchemy 140ms 149ms 136ms 148ms 135ms 257ms
sqlite 54.4ms 63.2ms 55.5ms 92.4ms 52.7ms 118ms
============ ======== ================ ======== ======== ======== ==========
[ 97.01%] ··· Running io.sql.SQL.time_to_sql_dataframe_full ok
[ 97.01%] ····
============ ======= ================ ======== ======= ======= ==========
-- dtype
------------ ------------------------------------------------------------
connection float float_with_nan string bool int datetime
============ ======= ================ ======== ======= ======= ==========
sqlalchemy 435ms 440ms 433ms 436ms 436ms 435ms
sqlite 185ms 186ms 185ms 186ms 185ms 186ms
============ ======= ================ ======== ======= ======= ==========
[ 98.51%] ··· Running io.stata.Stata.time_read_stata ok
[ 98.51%] ····
=============== =======
convert_dates
--------------- -------
tc 514ms
td 510ms
tm 1.35s
tw 1.27s
th 1.36s
tq 1.35s
ty 1.76s
=============== =======
[100.00%] ··· Running io.stata.Stata.time_write_stata ok
[100.00%] ····
=============== =======
convert_dates
--------------- -------
tc 593ms
td 592ms
tm 603ms
tw 1.34s
th 624ms
tq 616ms
ty 592ms
=============== =======
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/18906 | 2017-12-22T07:07:12Z | 2017-12-23T20:04:44Z | 2017-12-23T20:04:44Z | 2017-12-24T05:55:16Z |
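The body above describes consolidating the benchmarks into classes in the `io` folder; a minimal sketch of the asv class layout those files follow (class, file, and attribute values here are illustrative, not the exact ones merged in this PR) looks like:

```python
# Illustrative sketch of an asv-style benchmark class, mirroring the
# layout of the consolidated io/ benchmark files. Names and sizes are
# examples only, not the exact classes merged in this PR.
import os
import tempfile

import numpy as np
import pandas as pd


class ToCSV(object):
    goal_time = 0.2

    def setup(self):
        # asv calls setup() before timing each time_* method
        self.fname = os.path.join(tempfile.mkdtemp(), "__test__.csv")
        self.df = pd.DataFrame(np.random.randn(1000, 5))

    def time_frame(self):
        # the body of a time_* method is what asv actually times
        self.df.to_csv(self.fname)

    def teardown(self):
        # asv calls teardown() after each timed run
        if os.path.exists(self.fname):
            os.remove(self.fname)
```

asv discovers classes like this automatically and reports each `time_*` method as a separate benchmark, as in the output above.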
Fixed read_json int overflow | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 40e1e2011479c..348c1c6dafbcb 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -310,6 +310,7 @@ I/O
- Bug in :func:`read_csv` where a ``MultiIndex`` with duplicate columns was not being mangled appropriately (:issue:`18062`)
- Bug in :func:`read_sas` where a file with 0 variables gave an ``AttributeError`` incorrectly. Now it gives an ``EmptyDataError`` (:issue:`18184`)
- Bug in :func:`DataFrame.to_latex()` where pairs of braces meant to serve as invisible placeholders were escaped (:issue:`18667`)
+- Bug in :func:`read_json` where large numeric values were causing an ``OverflowError`` (:issue:`18842`)
-
Plotting
diff --git a/pandas/io/json/json.py b/pandas/io/json/json.py
index 0e0aae0506809..bb435c625ff35 100644
--- a/pandas/io/json/json.py
+++ b/pandas/io/json/json.py
@@ -724,7 +724,7 @@ def _try_convert_to_date(self, data):
if new_data.dtype == 'object':
try:
new_data = data.astype('int64')
- except (TypeError, ValueError):
+ except (TypeError, ValueError, OverflowError):
pass
# ignore numbers that are out of range
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 7cf3d6cd7b612..10139eb07a925 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -1074,6 +1074,20 @@ def test_read_jsonl_unicode_chars(self):
columns=['a', 'b'])
assert_frame_equal(result, expected)
+ def test_read_json_large_numbers(self):
+ # GH18842
+ json = '{"articleId": "1404366058080022500245"}'
+ json = StringIO(json)
+ result = read_json(json, typ="series")
+ expected = Series(1.404366e+21, index=['articleId'])
+ assert_series_equal(result, expected)
+
+ json = '{"0": {"articleId": "1404366058080022500245"}}'
+ json = StringIO(json)
+ result = read_json(json)
+ expected = DataFrame(1.404366e+21, index=['articleId'], columns=[0])
+ assert_frame_equal(result, expected)
+
def test_to_jsonl(self):
# GH9180
df = DataFrame([[1, 2], [1, 2]], columns=['a', 'b'])
| - [X] closes #18842
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/18905 | 2017-12-22T02:11:00Z | 2017-12-27T20:20:25Z | 2017-12-27T20:20:25Z | 2018-02-27T01:32:14Z |
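The overflow fixed above can be illustrated with a value beyond the ``int64`` range (the value below is taken from the PR's own test case); this is a hedged sketch of the behavior, not the PR's test itself:

```python
from io import StringIO

import numpy as np
import pandas as pd

# Value from the PR's test case: far larger than int64 can hold,
# so astype('int64') would raise OverflowError
big = "1404366058080022500245"
assert int(big) > np.iinfo(np.int64).max

# With the fix, the OverflowError from the int64 cast is caught and
# the value ends up as a float (~1.404366e+21 per the PR's test)
s = pd.read_json(StringIO('{"articleId": "%s"}' % big), typ="series")
print(s)
```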
DEPR: convert_datetime64 parameter in to_records() | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 408a52e0526ee..856e36fc24202 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -842,6 +842,7 @@ Deprecations
- ``pandas.tseries.plotting.tsplot`` is deprecated. Use :func:`Series.plot` instead (:issue:`18627`)
- ``Index.summary()`` is deprecated and will be removed in a future version (:issue:`18217`)
- ``NDFrame.get_ftype_counts()`` is deprecated and will be removed in a future version (:issue:`18243`)
+- The ``convert_datetime64`` parameter in :func:`DataFrame.to_records` has been deprecated and will be removed in a future version. The NumPy bug motivating this parameter has been resolved. The default value for this parameter has also changed from ``True`` to ``None`` (:issue:`18160`).
.. _whatsnew_0230.prior_deprecations:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 9e57579ddfc05..7c0e367e74ffa 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1311,7 +1311,7 @@ def from_records(cls, data, index=None, exclude=None, columns=None,
return cls(mgr)
- def to_records(self, index=True, convert_datetime64=True):
+ def to_records(self, index=True, convert_datetime64=None):
"""
Convert DataFrame to a NumPy record array.
@@ -1322,7 +1322,9 @@ def to_records(self, index=True, convert_datetime64=True):
----------
index : boolean, default True
Include index in resulting record array, stored in 'index' field.
- convert_datetime64 : boolean, default True
+ convert_datetime64 : boolean, default None
+ .. deprecated:: 0.23.0
+
Whether to convert the index to datetime.datetime if it is a
DatetimeIndex.
@@ -1376,6 +1378,13 @@ def to_records(self, index=True, convert_datetime64=True):
('2018-01-01T09:01:00.000000000', 2, 0.75)],
dtype=[('index', '<M8[ns]'), ('A', '<i8'), ('B', '<f8')])
"""
+
+ if convert_datetime64 is not None:
+ warnings.warn("The 'convert_datetime64' parameter is "
+ "deprecated and will be removed in a future "
+ "version",
+ FutureWarning, stacklevel=2)
+
if index:
if is_datetime64_any_dtype(self.index) and convert_datetime64:
ix_vals = [self.index.to_pydatetime()]
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index 82dadacd5b1ac..32b8a6e2b6b86 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -79,10 +79,23 @@ def test_to_records_dt64(self):
df = DataFrame([["one", "two", "three"],
["four", "five", "six"]],
index=date_range("2012-01-01", "2012-01-02"))
- assert df.to_records()['index'][0] == df.index[0]
- rs = df.to_records(convert_datetime64=False)
- assert rs['index'][0] == df.index.values[0]
+ # convert_datetime64 defaults to None
+ expected = df.index.values[0]
+ result = df.to_records()['index'][0]
+ assert expected == result
+
+ # check for FutureWarning if convert_datetime64=False is passed
+ with tm.assert_produces_warning(FutureWarning):
+ expected = df.index.values[0]
+ result = df.to_records(convert_datetime64=False)['index'][0]
+ assert expected == result
+
+ # check for FutureWarning if convert_datetime64=True is passed
+ with tm.assert_produces_warning(FutureWarning):
+ expected = df.index[0]
+ result = df.to_records(convert_datetime64=True)['index'][0]
+ assert expected == result
def test_to_records_with_multindex(self):
# GH3189
| - [x] closes #18160
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
As noted in the original issue, the underlying NumPy bug appears to be fixed, so this parameter may no longer be needed.
I also changed the default value for ``convert_datetime64`` from ``True`` to ``None`` and give a ``FutureWarning`` whenever ``convert_datetime64`` is passed explicitly, though I'm not sure whether this is more appropriate than leaving ``True`` as the default and always giving a ``FutureWarning``. | https://api.github.com/repos/pandas-dev/pandas/pulls/18902 | 2017-12-21T23:44:27Z | 2018-04-14T23:58:31Z | 2018-04-14T23:58:31Z | 2018-04-15T11:14:27Z |
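A sketch of the post-deprecation behavior, run against a recent pandas (where the parameter has since been removed and a ``DatetimeIndex`` round-trips as ``datetime64[ns]`` rather than being converted to ``datetime.datetime`` objects):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2]},
                  index=pd.date_range("2012-01-01", periods=2))

# In recent pandas the index field of the record array keeps the
# datetime64[ns] dtype instead of datetime.datetime objects
rec = df.to_records()
print(rec.dtype)
```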
DOC: Fix names | diff --git a/doc/source/whatsnew/v0.22.0 b/doc/source/whatsnew/v0.22.0.txt
similarity index 100%
rename from doc/source/whatsnew/v0.22.0
rename to doc/source/whatsnew/v0.22.0.txt
diff --git a/doc/source/whatsnew/v0.23.0 b/doc/source/whatsnew/v0.23.0.txt
similarity index 100%
rename from doc/source/whatsnew/v0.23.0
rename to doc/source/whatsnew/v0.23.0.txt
| [ci skip]
cc @jreback | https://api.github.com/repos/pandas-dev/pandas/pulls/18900 | 2017-12-21T21:14:40Z | 2017-12-21T21:16:40Z | 2017-12-21T21:16:40Z | 2017-12-21T21:17:11Z |
DOC: move versions 0.22 -> 0.23, add 0.22 docs | diff --git a/doc/source/whatsnew.rst b/doc/source/whatsnew.rst
index 64cbe0b050a61..d61a98fe2dae4 100644
--- a/doc/source/whatsnew.rst
+++ b/doc/source/whatsnew.rst
@@ -18,6 +18,8 @@ What's New
These are new features and improvements of note in each release.
+.. include:: whatsnew/v0.23.0.txt
+
.. include:: whatsnew/v0.22.0.txt
.. include:: whatsnew/v0.21.1.txt
diff --git a/doc/source/whatsnew/v0.22.0 b/doc/source/whatsnew/v0.22.0
new file mode 100644
index 0000000000000..2d30e00142846
--- /dev/null
+++ b/doc/source/whatsnew/v0.22.0
@@ -0,0 +1,14 @@
+.. _whatsnew_0220:
+
+v0.22.0
+-------
+
+This is a major release from 0.21.1 and includes a number of API changes,
+deprecations, new features, enhancements, and performance improvements along
+with a large number of bug fixes. We recommend that all users upgrade to this
+version.
+
+.. _whatsnew_0220.api_breaking:
+
+Backwards incompatible API changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.23.0
similarity index 97%
rename from doc/source/whatsnew/v0.22.0.txt
rename to doc/source/whatsnew/v0.23.0
index a289cf32949be..40e1e2011479c 100644
--- a/doc/source/whatsnew/v0.22.0.txt
+++ b/doc/source/whatsnew/v0.23.0
@@ -1,6 +1,6 @@
-.. _whatsnew_0220:
+.. _whatsnew_0230:
-v0.22.0
+v0.23.0
-------
This is a major release from 0.21.1 and includes a number of API changes,
@@ -8,7 +8,7 @@ deprecations, new features, enhancements, and performance improvements along
with a large number of bug fixes. We recommend that all users upgrade to this
version.
-.. _whatsnew_0220.enhancements:
+.. _whatsnew_0230.enhancements:
New features
~~~~~~~~~~~~
@@ -32,7 +32,7 @@ The :func:`get_dummies` now accepts a ``dtype`` argument, which specifies a dtyp
pd.get_dummies(df, columns=['c'], dtype=bool).dtypes
-.. _whatsnew_0220.enhancements.merge_on_columns_and_levels:
+.. _whatsnew_0230.enhancements.merge_on_columns_and_levels:
Merging on a combination of columns and index levels
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -63,7 +63,7 @@ levels <merging.merge_on_columns_and_levels>` documentation section.
left.merge(right, on=['key1', 'key2'])
-.. _whatsnew_0220.enhancements.ran_inf:
+.. _whatsnew_0230.enhancements.ran_inf:
``.rank()`` handles ``inf`` values when ``NaN`` are present
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -119,7 +119,7 @@ Current Behavior
s.rank(na_option='top')
-.. _whatsnew_0220.enhancements.other:
+.. _whatsnew_0230.enhancements.other:
Other Enhancements
^^^^^^^^^^^^^^^^^^
@@ -142,12 +142,12 @@ Other Enhancements
- ``Categorical.rename_categories``, ``CategoricalIndex.rename_categories`` and :attr:`Series.cat.rename_categories`
can now take a callable as their argument (:issue:`18862`)
-.. _whatsnew_0220.api_breaking:
+.. _whatsnew_0230.api_breaking:
Backwards incompatible API changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. _whatsnew_0220.api_breaking.deps:
+.. _whatsnew_0230.api_breaking.deps:
Dependencies have increased minimum versions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -171,7 +171,7 @@ Build Changes
- Building from source now explicity requires ``setuptools`` in ``setup.py`` (:issue:`18113`)
- Updated conda recipe to be in compliance with conda-build 3.0+ (:issue:`18002`)
-.. _whatsnew_0220.api:
+.. _whatsnew_0230.api:
Other API Changes
^^^^^^^^^^^^^^^^^
@@ -201,7 +201,7 @@ Other API Changes
- :func:`pandas.merge` now raises a ``ValueError`` when trying to merge on incompatible data types (:issue:`9780`)
- :func:`wide_to_long` previously kept numeric-like suffixes as ``object`` dtype. Now they are cast to numeric if possible (:issue:`17627`)
-.. _whatsnew_0220.deprecations:
+.. _whatsnew_0230.deprecations:
Deprecations
~~~~~~~~~~~~
@@ -217,7 +217,7 @@ Deprecations
- :func:`read_excel` has deprecated the ``skip_footer`` parameter. Use ``skipfooter`` instead (:issue:`18836`)
- The ``is_copy`` attribute is deprecated and will be removed in a future version (:issue:`18801`).
-.. _whatsnew_0220.prior_deprecations:
+.. _whatsnew_0230.prior_deprecations:
Removal of prior version deprecations/changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -238,7 +238,7 @@ Removal of prior version deprecations/changes
- :func:`read_csv` has dropped the ``buffer_lines`` parameter (:issue:`13360`)
- :func:`read_csv` has dropped the ``compact_ints`` and ``use_unsigned`` parameters (:issue:`13323`)
-.. _whatsnew_0220.performance:
+.. _whatsnew_0230.performance:
Performance Improvements
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -256,7 +256,7 @@ Performance Improvements
- Improved performance of ``DatetimeIndex`` and ``Series`` arithmetic operations with Business-Month and Business-Quarter frequencies (:issue:`18489`)
- :func:`Series` / :func:`DataFrame` tab completion limits to 100 values, for better performance. (:issue:`18587`)
-.. _whatsnew_0220.docs:
+.. _whatsnew_0230.docs:
Documentation Changes
~~~~~~~~~~~~~~~~~~~~~
@@ -265,7 +265,7 @@ Documentation Changes
-
-
-.. _whatsnew_0220.bug_fixes:
+.. _whatsnew_0230.bug_fixes:
Bug Fixes
~~~~~~~~~
| https://api.github.com/repos/pandas-dev/pandas/pulls/18897 | 2017-12-21T13:54:16Z | 2017-12-21T17:24:57Z | 2017-12-21T17:24:57Z | 2017-12-22T09:33:41Z | |
CI: move coverage | diff --git a/.travis.yml b/.travis.yml
index ea9d4307d6bf1..e56435faeec19 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -52,10 +52,10 @@ matrix:
# In allow_failures
- dist: trusty
env:
- - JOB="3.5_CONDA_BUILD_TEST" TEST_ARGS="--skip-slow --skip-network" CONDA_BUILD_TEST=true COVERAGE=true
+ - JOB="3.5_CONDA_BUILD_TEST" TEST_ARGS="--skip-slow --skip-network" CONDA_BUILD_TEST=true
- dist: trusty
env:
- - JOB="3.6" TEST_ARGS="--skip-slow --skip-network" PANDAS_TESTING_MODE="deprecate" CONDA_FORGE=true
+ - JOB="3.6" TEST_ARGS="--skip-slow --skip-network" PANDAS_TESTING_MODE="deprecate" CONDA_FORGE=true COVERAGE=true
# In allow_failures
- dist: trusty
env:
@@ -80,7 +80,7 @@ matrix:
# TODO(jreback)
- dist: trusty
env:
- - JOB="3.5_CONDA_BUILD_TEST" TEST_ARGS="--skip-slow --skip-network" CONDA_BUILD_TEST=true COVERAGE=true
+ - JOB="3.5_CONDA_BUILD_TEST" TEST_ARGS="--skip-slow --skip-network" CONDA_BUILD_TEST=true
- dist: trusty
env:
- JOB="2.7_SLOW" SLOW=true
| closes #18895
| https://api.github.com/repos/pandas-dev/pandas/pulls/18896 | 2017-12-21T13:49:32Z | 2017-12-21T15:00:31Z | 2017-12-21T15:00:31Z | 2017-12-21T15:05:24Z |
BLD: fix conda to 4.3.30 | diff --git a/.travis.yml b/.travis.yml
index 9eccf87960dd0..ea9d4307d6bf1 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -49,6 +49,7 @@ matrix:
apt:
packages:
- python-gtk2
+ # In allow_failures
- dist: trusty
env:
- JOB="3.5_CONDA_BUILD_TEST" TEST_ARGS="--skip-slow --skip-network" CONDA_BUILD_TEST=true COVERAGE=true
@@ -76,6 +77,10 @@ matrix:
env:
- JOB="3.6_DOC" DOC=true
allow_failures:
+ # TODO(jreback)
+ - dist: trusty
+ env:
+ - JOB="3.5_CONDA_BUILD_TEST" TEST_ARGS="--skip-slow --skip-network" CONDA_BUILD_TEST=true COVERAGE=true
- dist: trusty
env:
- JOB="2.7_SLOW" SLOW=true
diff --git a/ci/install_travis.sh b/ci/install_travis.sh
index 475fc6a46955d..800a20aa94b8f 100755
--- a/ci/install_travis.sh
+++ b/ci/install_travis.sh
@@ -56,6 +56,11 @@ if [ "$CONDA_BUILD_TEST" ]; then
conda install conda-build
fi
+# TODO(jreback)
+echo
+echo "[fix conda version]"
+conda install conda=4.3.30
+
echo
echo "[add channels]"
conda config --remove channels defaults || exit 1
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 2d56e12533cd0..a0070dce6a7f1 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -40,7 +40,7 @@ def __fspath__(self):
except ImportError:
pass
-HERE = os.path.dirname(__file__)
+HERE = os.path.abspath(os.path.dirname(__file__))
class TestCommonIOCapabilities(object):
@@ -150,10 +150,8 @@ def test_read_non_existant(self, reader, module, error_class, fn_ext):
(pd.read_fwf, 'os', os.path.join(HERE, 'data',
'fixed_width_format.txt')),
(pd.read_excel, 'xlrd', os.path.join(HERE, 'data', 'test1.xlsx')),
-
- # TODO(jreback) gh-18873
- # (pd.read_feather, 'feather', os.path.join(HERE, 'data',
- # 'feather-0_3_1.feather')),
+ (pd.read_feather, 'feather', os.path.join(HERE, 'data',
+ 'feather-0_3_1.feather')),
(pd.read_hdf, 'tables', os.path.join(HERE, 'data', 'legacy_hdf',
'datetimetz_object.h5')),
(pd.read_stata, 'os', os.path.join(HERE, 'data', 'stata10_115.dta')),
CI: move 3.5 conda build to allow_failures
xref #18870 for abspath
| https://api.github.com/repos/pandas-dev/pandas/pulls/18893 | 2017-12-21T12:58:16Z | 2017-12-21T13:43:32Z | 2017-12-21T13:43:32Z | 2017-12-21T13:43:32Z |
API: Allow ordered=None in CategoricalDtype | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 083242cd69b74..f1158b9ad87eb 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -460,6 +460,29 @@ To restore previous behavior, simply set ``expand`` to ``False``:
extracted
type(extracted)
+.. _whatsnew_0230.api_breaking.cdt_ordered:
+
+Default value for the ``ordered`` parameter of ``CategoricalDtype``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The default value of the ``ordered`` parameter for :class:`~pandas.api.types.CategoricalDtype` has changed from ``False`` to ``None`` to allow updating of ``categories`` without impacting ``ordered``. Behavior should remain consistent for downstream objects, such as :class:`Categorical` (:issue:`18790`)
+
+In previous versions, the default value for the ``ordered`` parameter was ``False``. This could potentially lead to the ``ordered`` parameter unintentionally being changed from ``True`` to ``False`` when users attempt to update ``categories`` if ``ordered`` is not explicitly specified, as it would silently default to ``False``. The new behavior for ``ordered=None`` is to retain the existing value of ``ordered``.
+
+New Behavior:
+
+.. ipython:: python
+
+ from pandas.api.types import CategoricalDtype
+ cat = pd.Categorical(list('abcaba'), ordered=True, categories=list('cba'))
+ cat
+ cdt = CategoricalDtype(categories=list('cbad'))
+ cat.astype(cdt)
+
+Notice in the example above that the converted ``Categorical`` has retained ``ordered=True``. Had the default value for ``ordered`` remained as ``False``, the converted ``Categorical`` would have become unordered, despite ``ordered=False`` never being explicitly specified. To change the value of ``ordered``, explicitly pass it to the new dtype, e.g. ``CategoricalDtype(categories=list('cbad'), ordered=False)``.
+
+Note that the unintentional conversion of ``ordered`` discussed above did not arise in previous versions due to separate bugs that prevented ``astype`` from doing any type of category to category conversion (:issue:`10696`, :issue:`18593`). These bugs have been fixed in this release, and motivated changing the default value of ``ordered``.
+
.. _whatsnew_0230.api:
Other API Changes
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 62c6a6b16cbe9..93250bdbb5054 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -243,7 +243,7 @@ class Categorical(ExtensionArray, PandasObject):
# For comparisons, so that numpy uses our implementation if the compare
# ops, which raise
__array_priority__ = 1000
- _dtype = CategoricalDtype()
+ _dtype = CategoricalDtype(ordered=False)
_deprecations = frozenset(['labels'])
_typ = 'categorical'
@@ -294,7 +294,7 @@ def __init__(self, values, categories=None, ordered=None, dtype=None,
if fastpath:
self._codes = coerce_indexer_dtype(values, categories)
- self._dtype = dtype
+ self._dtype = self._dtype.update_dtype(dtype)
return
# null_mask indicates missing values we want to exclude from inference.
@@ -358,7 +358,7 @@ def __init__(self, values, categories=None, ordered=None, dtype=None,
full_codes[~null_mask] = codes
codes = full_codes
- self._dtype = dtype
+ self._dtype = self._dtype.update_dtype(dtype)
self._codes = coerce_indexer_dtype(codes, dtype.categories)
@property
@@ -438,7 +438,7 @@ def astype(self, dtype, copy=True):
"""
if is_categorical_dtype(dtype):
# GH 10696/18593
- dtype = self.dtype._update_dtype(dtype)
+ dtype = self.dtype.update_dtype(dtype)
self = self.copy() if copy else self
if dtype == self.dtype:
return self
@@ -560,7 +560,7 @@ def from_codes(cls, codes, categories, ordered=False):
raise ValueError(
"codes need to be convertible to an arrays of integers")
- categories = CategoricalDtype._validate_categories(categories)
+ categories = CategoricalDtype.validate_categories(categories)
if len(codes) and (codes.max() >= len(categories) or codes.min() < -1):
raise ValueError("codes need to be between -1 and "
@@ -1165,7 +1165,7 @@ def __setstate__(self, state):
# Provide compatibility with pre-0.15.0 Categoricals.
if '_categories' not in state and '_levels' in state:
- state['_categories'] = self.dtype._validate_categories(state.pop(
+ state['_categories'] = self.dtype.validate_categories(state.pop(
'_levels'))
if '_codes' not in state and 'labels' in state:
state['_codes'] = coerce_indexer_dtype(
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index d8d3a96992757..99e4033f104db 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -159,11 +159,11 @@ class CategoricalDtype(PandasExtensionDtype):
_metadata = ['categories', 'ordered']
_cache = {}
- def __init__(self, categories=None, ordered=False):
+ def __init__(self, categories=None, ordered=None):
self._finalize(categories, ordered, fastpath=False)
@classmethod
- def _from_fastpath(cls, categories=None, ordered=False):
+ def _from_fastpath(cls, categories=None, ordered=None):
self = cls.__new__(cls)
self._finalize(categories, ordered, fastpath=True)
return self
@@ -180,14 +180,12 @@ def _from_categorical_dtype(cls, dtype, categories=None, ordered=None):
def _finalize(self, categories, ordered, fastpath=False):
- if ordered is None:
- ordered = False
- else:
- self._validate_ordered(ordered)
+ if ordered is not None:
+ self.validate_ordered(ordered)
if categories is not None:
- categories = self._validate_categories(categories,
- fastpath=fastpath)
+ categories = self.validate_categories(categories,
+ fastpath=fastpath)
self._categories = categories
self._ordered = ordered
@@ -208,6 +206,17 @@ def __hash__(self):
return int(self._hash_categories(self.categories, self.ordered))
def __eq__(self, other):
+ """
+ Rules for CDT equality:
+ 1) Any CDT is equal to the string 'category'
+ 2) Any CDT is equal to a CDT with categories=None regardless of ordered
+ 3) A CDT with ordered=True is only equal to another CDT with
+ ordered=True and identical categories in the same order
+ 4) A CDT with ordered={False, None} is only equal to another CDT with
+ ordered={False, None} and identical categories, but same order is
+ not required. There is no distinction between False/None.
+ 5) Any other comparison returns False
+ """
if isinstance(other, compat.string_types):
return other == self.name
@@ -220,12 +229,16 @@ def __eq__(self, other):
# CDT(., .) = CDT(None, False) and *all*
# CDT(., .) = CDT(None, True).
return True
- elif self.ordered:
- return other.ordered and self.categories.equals(other.categories)
- elif other.ordered:
- return False
+ elif self.ordered or other.ordered:
+ # At least one has ordered=True; equal if both have ordered=True
+ # and the same values for categories in the same order.
+ return ((self.ordered == other.ordered) and
+ self.categories.equals(other.categories))
else:
- # both unordered; this could probably be optimized / cached
+ # Neither has ordered=True; equal if both have the same categories,
+ # but same order is not necessary. There is no distinction between
+ # ordered=False and ordered=None: CDT(., False) and CDT(., None)
+ # will be equal if they have the same categories.
return hash(self) == hash(other)
def __repr__(self):
@@ -288,7 +301,7 @@ def construct_from_string(cls, string):
raise TypeError("cannot construct a CategoricalDtype")
@staticmethod
- def _validate_ordered(ordered):
+ def validate_ordered(ordered):
"""
Validates that we have a valid ordered parameter. If
it is not a boolean, a TypeError will be raised.
@@ -308,7 +321,7 @@ def _validate_ordered(ordered):
raise TypeError("'ordered' must either be 'True' or 'False'")
@staticmethod
- def _validate_categories(categories, fastpath=False):
+ def validate_categories(categories, fastpath=False):
"""
Validates that we have good categories
@@ -340,7 +353,7 @@ def _validate_categories(categories, fastpath=False):
return categories
- def _update_dtype(self, dtype):
+ def update_dtype(self, dtype):
"""
Returns a CategoricalDtype with categories and ordered taken from dtype
if specified, otherwise falling back to self if unspecified
@@ -361,11 +374,16 @@ def _update_dtype(self, dtype):
'got {dtype!r}').format(dtype=dtype)
raise ValueError(msg)
- # dtype is CDT: keep current categories if None (ordered can't be None)
+ # dtype is CDT: keep current categories/ordered if None
new_categories = dtype.categories
if new_categories is None:
new_categories = self.categories
- return CategoricalDtype(new_categories, dtype.ordered)
+
+ new_ordered = dtype.ordered
+ if new_ordered is None:
+ new_ordered = self.ordered
+
+ return CategoricalDtype(new_categories, new_ordered)
@property
def categories(self):
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index b36bc1df23247..60f5552576ea1 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -344,7 +344,7 @@ def astype(self, dtype, copy=True):
return IntervalIndex(np.array(self))
elif is_categorical_dtype(dtype):
# GH 18630
- dtype = self.dtype._update_dtype(dtype)
+ dtype = self.dtype.update_dtype(dtype)
if dtype == self.dtype:
return self.copy() if copy else self
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py
index d800a7b92b559..cc833af03ae66 100644
--- a/pandas/tests/dtypes/test_dtypes.py
+++ b/pandas/tests/dtypes/test_dtypes.py
@@ -24,6 +24,11 @@
import pandas.util.testing as tm
+@pytest.fixture(params=[True, False, None])
+def ordered(request):
+ return request.param
+
+
class Base(object):
def setup_method(self, method):
@@ -124,41 +129,6 @@ def test_tuple_categories(self):
result = CategoricalDtype(categories)
assert all(result.categories == categories)
- @pytest.mark.parametrize('dtype', [
- CategoricalDtype(list('abc'), False),
- CategoricalDtype(list('abc'), True)])
- @pytest.mark.parametrize('new_dtype', [
- 'category',
- CategoricalDtype(None, False),
- CategoricalDtype(None, True),
- CategoricalDtype(list('abc'), False),
- CategoricalDtype(list('abc'), True),
- CategoricalDtype(list('cba'), False),
- CategoricalDtype(list('cba'), True),
- CategoricalDtype(list('wxyz'), False),
- CategoricalDtype(list('wxyz'), True)])
- def test_update_dtype(self, dtype, new_dtype):
- if isinstance(new_dtype, string_types) and new_dtype == 'category':
- expected_categories = dtype.categories
- expected_ordered = dtype.ordered
- else:
- expected_categories = new_dtype.categories
- if expected_categories is None:
- expected_categories = dtype.categories
- expected_ordered = new_dtype.ordered
-
- result = dtype._update_dtype(new_dtype)
- tm.assert_index_equal(result.categories, expected_categories)
- assert result.ordered is expected_ordered
-
- @pytest.mark.parametrize('bad_dtype', [
- 'foo', object, np.int64, PeriodDtype('Q')])
- def test_update_dtype_errors(self, bad_dtype):
- dtype = CategoricalDtype(list('abc'), False)
- msg = 'a CategoricalDtype must be passed to perform an update, '
- with tm.assert_raises_regex(ValueError, msg):
- dtype._update_dtype(bad_dtype)
-
class TestDatetimeTZDtype(Base):
@@ -609,17 +579,12 @@ def test_caching(self):
class TestCategoricalDtypeParametrized(object):
- @pytest.mark.parametrize('categories, ordered', [
- (['a', 'b', 'c', 'd'], False),
- (['a', 'b', 'c', 'd'], True),
- (np.arange(1000), False),
- (np.arange(1000), True),
- (['a', 'b', 10, 2, 1.3, True], False),
- ([True, False], True),
- ([True, False], False),
- (pd.date_range('2017', periods=4), True),
- (pd.date_range('2017', periods=4), False),
- ])
+ @pytest.mark.parametrize('categories', [
+ list('abcd'),
+ np.arange(1000),
+ ['a', 'b', 10, 2, 1.3, True],
+ [True, False],
+ pd.date_range('2017', periods=4)])
def test_basic(self, categories, ordered):
c1 = CategoricalDtype(categories, ordered=ordered)
tm.assert_index_equal(c1.categories, pd.Index(categories))
@@ -627,21 +592,24 @@ def test_basic(self, categories, ordered):
def test_order_matters(self):
categories = ['a', 'b']
- c1 = CategoricalDtype(categories, ordered=False)
- c2 = CategoricalDtype(categories, ordered=True)
+ c1 = CategoricalDtype(categories, ordered=True)
+ c2 = CategoricalDtype(categories, ordered=False)
+ c3 = CategoricalDtype(categories, ordered=None)
assert c1 is not c2
+ assert c1 is not c3
- def test_unordered_same(self):
- c1 = CategoricalDtype(['a', 'b'])
- c2 = CategoricalDtype(['b', 'a'])
+ @pytest.mark.parametrize('ordered', [False, None])
+ def test_unordered_same(self, ordered):
+ c1 = CategoricalDtype(['a', 'b'], ordered=ordered)
+ c2 = CategoricalDtype(['b', 'a'], ordered=ordered)
assert hash(c1) == hash(c2)
def test_categories(self):
result = CategoricalDtype(['a', 'b', 'c'])
tm.assert_index_equal(result.categories, pd.Index(['a', 'b', 'c']))
- assert result.ordered is False
+ assert result.ordered is None
- def test_equal_but_different(self):
+ def test_equal_but_different(self, ordered):
c1 = CategoricalDtype([1, 2, 3])
c2 = CategoricalDtype([1., 2., 3.])
assert c1 is not c2
@@ -652,9 +620,11 @@ def test_equal_but_different(self):
([1, 2, 3], [3, 2, 1]),
])
def test_order_hashes_different(self, v1, v2):
- c1 = CategoricalDtype(v1)
+ c1 = CategoricalDtype(v1, ordered=False)
c2 = CategoricalDtype(v2, ordered=True)
+ c3 = CategoricalDtype(v1, ordered=None)
assert c1 is not c2
+ assert c1 is not c3
def test_nan_invalid(self):
with pytest.raises(ValueError):
@@ -669,26 +639,46 @@ def test_same_categories_different_order(self):
c2 = CategoricalDtype(['b', 'a'], ordered=True)
assert c1 is not c2
- @pytest.mark.parametrize('ordered, other, expected', [
- (True, CategoricalDtype(['a', 'b'], True), True),
- (False, CategoricalDtype(['a', 'b'], False), True),
- (True, CategoricalDtype(['a', 'b'], False), False),
- (False, CategoricalDtype(['a', 'b'], True), False),
- (True, CategoricalDtype([1, 2], False), False),
- (False, CategoricalDtype([1, 2], True), False),
- (False, CategoricalDtype(None, True), True),
- (True, CategoricalDtype(None, True), True),
- (False, CategoricalDtype(None, False), True),
- (True, CategoricalDtype(None, False), True),
- (True, 'category', True),
- (False, 'category', True),
- (True, 'not a category', False),
- (False, 'not a category', False),
- ])
- def test_categorical_equality(self, ordered, other, expected):
- c1 = CategoricalDtype(['a', 'b'], ordered)
+ @pytest.mark.parametrize('ordered1', [True, False, None])
+ @pytest.mark.parametrize('ordered2', [True, False, None])
+ def test_categorical_equality(self, ordered1, ordered2):
+ # same categories, same order
+ # any combination of None/False are equal
+ # True/True is the only combination with True that are equal
+ c1 = CategoricalDtype(list('abc'), ordered1)
+ c2 = CategoricalDtype(list('abc'), ordered2)
+ result = c1 == c2
+ expected = bool(ordered1) is bool(ordered2)
+ assert result is expected
+
+ # same categories, different order
+ # any combination of None/False are equal (order doesn't matter)
+ # any combination with True are not equal (different order of cats)
+ c1 = CategoricalDtype(list('abc'), ordered1)
+ c2 = CategoricalDtype(list('cab'), ordered2)
+ result = c1 == c2
+ expected = (bool(ordered1) is False) and (bool(ordered2) is False)
+ assert result is expected
+
+ # different categories
+ c2 = CategoricalDtype([1, 2, 3], ordered2)
+ assert c1 != c2
+
+ # none categories
+ c1 = CategoricalDtype(list('abc'), ordered1)
+ c2 = CategoricalDtype(None, ordered2)
+ c3 = CategoricalDtype(None, ordered1)
+ assert c1 == c2
+ assert c2 == c1
+ assert c2 == c3
+
+ @pytest.mark.parametrize('categories', [list('abc'), None])
+ @pytest.mark.parametrize('other', ['category', 'not a category'])
+ def test_categorical_equality_strings(self, categories, ordered, other):
+ c1 = CategoricalDtype(categories, ordered)
result = c1 == other
- assert result == expected
+ expected = other == 'category'
+ assert result is expected
def test_invalid_raises(self):
with tm.assert_raises_regex(TypeError, 'ordered'):
@@ -729,12 +719,12 @@ def test_from_categorical_dtype_both(self):
c1, categories=[1, 2], ordered=False)
assert result == CategoricalDtype([1, 2], ordered=False)
- def test_str_vs_repr(self):
- c1 = CategoricalDtype(['a', 'b'])
+ def test_str_vs_repr(self, ordered):
+ c1 = CategoricalDtype(['a', 'b'], ordered=ordered)
assert str(c1) == 'category'
# Py2 will have unicode prefixes
- pat = r"CategoricalDtype\(categories=\[.*\], ordered=False\)"
- assert re.match(pat, repr(c1))
+ pat = r"CategoricalDtype\(categories=\[.*\], ordered={ordered}\)"
+ assert re.match(pat.format(ordered=ordered), repr(c1))
def test_categorical_categories(self):
# GH17884
@@ -742,3 +732,38 @@ def test_categorical_categories(self):
tm.assert_index_equal(c1.categories, pd.Index(['a', 'b']))
c1 = CategoricalDtype(CategoricalIndex(['a', 'b']))
tm.assert_index_equal(c1.categories, pd.Index(['a', 'b']))
+
+ @pytest.mark.parametrize('new_categories', [
+ list('abc'), list('cba'), list('wxyz'), None])
+ @pytest.mark.parametrize('new_ordered', [True, False, None])
+ def test_update_dtype(self, ordered, new_categories, new_ordered):
+ dtype = CategoricalDtype(list('abc'), ordered)
+ new_dtype = CategoricalDtype(new_categories, new_ordered)
+
+ expected_categories = new_dtype.categories
+ if expected_categories is None:
+ expected_categories = dtype.categories
+
+ expected_ordered = new_dtype.ordered
+ if expected_ordered is None:
+ expected_ordered = dtype.ordered
+
+ result = dtype.update_dtype(new_dtype)
+ tm.assert_index_equal(result.categories, expected_categories)
+ assert result.ordered is expected_ordered
+
+ def test_update_dtype_string(self, ordered):
+ dtype = CategoricalDtype(list('abc'), ordered)
+ expected_categories = dtype.categories
+ expected_ordered = dtype.ordered
+ result = dtype.update_dtype('category')
+ tm.assert_index_equal(result.categories, expected_categories)
+ assert result.ordered is expected_ordered
+
+ @pytest.mark.parametrize('bad_dtype', [
+ 'foo', object, np.int64, PeriodDtype('Q')])
+ def test_update_dtype_errors(self, bad_dtype):
+ dtype = CategoricalDtype(list('abc'), False)
+ msg = 'a CategoricalDtype must be passed to perform an update, '
+ with tm.assert_raises_regex(ValueError, msg):
+ dtype.update_dtype(bad_dtype)
| - [X] closes #18790
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
For equality comparisons with `ordered=None`, I essentially treated it as if it were `ordered=False`:
- `CDT(['a', 'b'], None) == CDT(['a', 'b'], False)` --> `True`
- `CDT(['a', 'b'], None) == CDT(['b', 'a'], False)` --> `True`
- `CDT(['a', 'b'], None) == CDT(['a', 'b'], True)` --> `False`
This maintains existing comparison behavior when ordered is not specified:
- `CDT(['a', 'b'], False) == CDT(['a', 'b'])` --> `True`
- `CDT(['a', 'b'], True) == CDT(['a', 'b'])` --> `False`
<br />
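A quick sketch of the resulting comparison rules, using explicit `True`/`False` for contrast (illustrative, not exhaustive):

```python
from pandas.api.types import CategoricalDtype

# Unordered dtypes with the same categories compare equal
# regardless of category order
assert (CategoricalDtype(['a', 'b'], ordered=False) ==
        CategoricalDtype(['b', 'a'], ordered=False))

# Ordered dtypes require identical categories in the same order
assert (CategoricalDtype(['a', 'b'], ordered=True) !=
        CategoricalDtype(['b', 'a'], ordered=True))

# Any CategoricalDtype compares equal to the string 'category'
assert CategoricalDtype(['a', 'b'], ordered=True) == 'category'
```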
I didn't make any code modifications in regards to hashing, so `CDT(*, None)` will have the same hash as `CDT(*, False)`. This seems to be consistent with how equality is treated. Makes the logic implementing equality nicer too, since the case when both dtypes are unordered relies on hashes. | https://api.github.com/repos/pandas-dev/pandas/pulls/18889 | 2017-12-21T07:27:50Z | 2018-02-10T17:02:30Z | 2018-02-10T17:02:29Z | 2018-09-24T17:26:41Z |
BLD: more quiet in the build | diff --git a/ci/install_travis.sh b/ci/install_travis.sh
index 6946d7dd11870..475fc6a46955d 100755
--- a/ci/install_travis.sh
+++ b/ci/install_travis.sh
@@ -175,7 +175,7 @@ if [ "$PIP_BUILD_TEST" ]; then
echo "[building release]"
bash scripts/build_dist_for_release.sh
conda uninstall -y cython
- time pip install dist/*tar.gz || exit 1
+ time pip install dist/*tar.gz --quiet || exit 1
elif [ "$CONDA_BUILD_TEST" ]; then
diff --git a/scripts/build_dist_for_release.sh b/scripts/build_dist_for_release.sh
index e77974ae08b0c..bee0f23a68ec2 100644
--- a/scripts/build_dist_for_release.sh
+++ b/scripts/build_dist_for_release.sh
@@ -5,6 +5,6 @@
# this builds the release cleanly & is building on the current checkout
rm -rf dist
git clean -xfd
-python setup.py clean
-python setup.py cython
-python setup.py sdist --formats=gztar
+python setup.py clean --quiet
+python setup.py cython --quiet
+python setup.py sdist --formats=gztar --quiet
| https://api.github.com/repos/pandas-dev/pandas/pulls/18886 | 2017-12-21T00:00:51Z | 2017-12-21T02:08:23Z | 2017-12-21T02:08:23Z | 2017-12-21T02:08:23Z | |
Fix Series[timedelta64]+DatetimeIndex[tz] bugs | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 5fd7c3e217928..119dd894abe4c 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -370,6 +370,7 @@ Numeric
- Bug in :func:`Series.__sub__` subtracting a non-nanosecond ``np.datetime64`` object from a ``Series`` gave incorrect results (:issue:`7996`)
- Bug in :class:`DatetimeIndex`, :class:`TimedeltaIndex` addition and subtraction of zero-dimensional integer arrays gave incorrect results (:issue:`19012`)
+- Bug in :func:`Series.__add__` adding Series with dtype ``timedelta64[ns]`` to a timezone-aware ``DatetimeIndex`` incorrectly dropped timezone information (:issue:`13905`)
-
Categorical
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 2a77a23c2cfa1..ee2fdd213dd9a 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -671,7 +671,9 @@ def __add__(self, other):
from pandas.tseries.offsets import DateOffset
other = lib.item_from_zerodim(other)
- if is_timedelta64_dtype(other):
+ if isinstance(other, ABCSeries):
+ return NotImplemented
+ elif is_timedelta64_dtype(other):
return self._add_delta(other)
elif isinstance(self, TimedeltaIndex) and isinstance(other, Index):
if hasattr(other, '_add_delta'):
@@ -702,7 +704,9 @@ def __sub__(self, other):
from pandas.tseries.offsets import DateOffset
other = lib.item_from_zerodim(other)
- if is_timedelta64_dtype(other):
+ if isinstance(other, ABCSeries):
+ return NotImplemented
+ elif is_timedelta64_dtype(other):
return self._add_delta(-other)
elif isinstance(self, TimedeltaIndex) and isinstance(other, Index):
if not isinstance(other, TimedeltaIndex):
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index b17682b6c3448..ef0406a4b9f9d 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -854,6 +854,9 @@ def _maybe_update_attributes(self, attrs):
return attrs
def _add_delta(self, delta):
+ if isinstance(delta, ABCSeries):
+ return NotImplemented
+
from pandas import TimedeltaIndex
name = self.name
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 89d793a586e74..0229f7c256464 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -39,7 +39,7 @@
from pandas.core.dtypes.generic import (
ABCSeries,
ABCDataFrame,
- ABCIndex,
+ ABCIndex, ABCDatetimeIndex,
ABCPeriodIndex)
# -----------------------------------------------------------------------------
@@ -514,8 +514,9 @@ def _convert_to_array(self, values, name=None, other=None):
values[:] = iNaT
# a datelike
- elif isinstance(values, pd.DatetimeIndex):
- values = values.to_series()
+ elif isinstance(values, ABCDatetimeIndex):
+ # TODO: why are we casting to_series in the first place?
+ values = values.to_series(keep_tz=True)
# datetime with tz
elif (isinstance(ovalues, datetime.datetime) and
hasattr(ovalues, 'tzinfo')):
@@ -535,6 +536,11 @@ def _convert_to_array(self, values, name=None, other=None):
elif inferred_type in ('timedelta', 'timedelta64'):
# have a timedelta, convert to to ns here
values = to_timedelta(values, errors='coerce', box=False)
+ if isinstance(other, ABCDatetimeIndex):
+ # GH#13905
+ # Defer to DatetimeIndex/TimedeltaIndex operations where
+ # timezones are handled carefully.
+ values = pd.TimedeltaIndex(values)
elif inferred_type == 'integer':
# py3 compat where dtype is 'm' but is an integer
if values.dtype.kind == 'm':
@@ -754,25 +760,26 @@ def wrapper(left, right, name=name, na_op=na_op):
na_op = converted.na_op
if isinstance(rvalues, ABCSeries):
- name = _maybe_match_name(left, rvalues)
lvalues = getattr(lvalues, 'values', lvalues)
rvalues = getattr(rvalues, 'values', rvalues)
# _Op aligns left and right
else:
- if isinstance(rvalues, pd.Index):
- name = _maybe_match_name(left, rvalues)
- else:
- name = left.name
if (hasattr(lvalues, 'values') and
- not isinstance(lvalues, pd.DatetimeIndex)):
+ not isinstance(lvalues, ABCDatetimeIndex)):
lvalues = lvalues.values
+ if isinstance(right, (ABCSeries, pd.Index)):
+ # `left` is always a Series object
+ res_name = _maybe_match_name(left, right)
+ else:
+ res_name = left.name
+
result = wrap_results(safe_na_op(lvalues, rvalues))
return construct_result(
left,
result,
index=left.index,
- name=name,
+ name=res_name,
dtype=dtype,
)
diff --git a/pandas/tests/indexes/datetimes/test_arithmetic.py b/pandas/tests/indexes/datetimes/test_arithmetic.py
index 4684eb89557bf..381e2ef3041e7 100644
--- a/pandas/tests/indexes/datetimes/test_arithmetic.py
+++ b/pandas/tests/indexes/datetimes/test_arithmetic.py
@@ -364,6 +364,33 @@ def test_datetimeindex_sub_timestamp_overflow(self):
with pytest.raises(OverflowError):
dtimin - variant
+ @pytest.mark.parametrize('names', [('foo', None, None),
+ ('baz', 'bar', None),
+ ('bar', 'bar', 'bar')])
+ @pytest.mark.parametrize('tz', [None, 'America/Chicago'])
+ def test_dti_add_series(self, tz, names):
+ # GH#13905
+ index = DatetimeIndex(['2016-06-28 05:30', '2016-06-28 05:31'],
+ tz=tz, name=names[0])
+ ser = Series([Timedelta(seconds=5)] * 2,
+ index=index, name=names[1])
+ expected = Series(index + Timedelta(seconds=5),
+ index=index, name=names[2])
+
+ # passing name arg isn't enough when names[2] is None
+ expected.name = names[2]
+ assert expected.dtype == index.dtype
+ result = ser + index
+ tm.assert_series_equal(result, expected)
+ result2 = index + ser
+ tm.assert_series_equal(result2, expected)
+
+ expected = index + Timedelta(seconds=5)
+ result3 = ser.values + index
+ tm.assert_index_equal(result3, expected)
+ result4 = index + ser.values
+ tm.assert_index_equal(result4, expected)
+
@pytest.mark.parametrize('box', [np.array, pd.Index])
def test_dti_add_offset_array(self, tz, box):
# GH#18849
| ser + index lost timezone
index + ser retained timezone but returned a DatetimeIndex
- [x] closes #13905
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
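For illustration, a minimal sketch of the fixed behavior (values mirror the new `test_dti_add_series` test; the exact timestamps are just sample data):

```python
import pandas as pd

idx = pd.DatetimeIndex(['2016-06-28 05:30', '2016-06-28 05:31'],
                       tz='America/Chicago')
ser = pd.Series([pd.Timedelta(seconds=5)] * 2, index=idx)

# ser + idx now retains the timezone and returns a Series
res = ser + idx
assert str(res.dtype) == 'datetime64[ns, America/Chicago]'
assert res.iloc[0] == pd.Timestamp('2016-06-28 05:30:05',
                                   tz='America/Chicago')
```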
| https://api.github.com/repos/pandas-dev/pandas/pulls/18884 | 2017-12-20T22:22:45Z | 2018-01-02T11:23:50Z | 2018-01-02T11:23:49Z | 2018-01-23T04:40:47Z |
Fix DatetimeIndex.insert(pd.NaT) for tz-aware index | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 3f300deddebeb..cfaf8718544ca 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -303,6 +303,7 @@ Indexing
- :func:`Index.to_series` now accepts ``index`` and ``name`` kwargs (:issue:`18699`)
- :func:`DatetimeIndex.to_series` now accepts ``index`` and ``name`` kwargs (:issue:`18699`)
- Bug in indexing non-scalar value from ``Series`` having non-unique ``Index`` will return value flattened (:issue:`17610`)
+- Bug in :func:`DatetimeIndex.insert` where inserting ``NaT`` into a timezone-aware index incorrectly raised (:issue:`16357`)
I/O
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index bec26ef72d63a..3fc3cf9a78a25 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -1775,7 +1775,7 @@ def insert(self, loc, item):
if isinstance(item, (datetime, np.datetime64)):
self._assert_can_do_op(item)
- if not self._has_same_tz(item):
+ if not self._has_same_tz(item) and not isna(item):
raise ValueError(
'Passed item and index have different timezone')
# check freq can be preserved on edge cases
diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py
index b3ce22962d5d4..48ceefd6368c0 100644
--- a/pandas/tests/indexes/datetimes/test_indexing.py
+++ b/pandas/tests/indexes/datetimes/test_indexing.py
@@ -46,6 +46,15 @@ def test_where_tz(self):
expected = i2
tm.assert_index_equal(result, expected)
+ @pytest.mark.parametrize('null', [None, np.nan, pd.NaT])
+ @pytest.mark.parametrize('tz', [None, 'UTC', 'US/Eastern'])
+ def test_insert_nat(self, tz, null):
+ # GH#16357, GH#18295 (test missing)
+ idx = pd.DatetimeIndex(['2017-01-01'], tz=tz)
+ expected = pd.DatetimeIndex(['NaT', '2017-01-01'], tz=tz)
+ res = idx.insert(0, null)
+ tm.assert_index_equal(res, expected)
+
def test_insert(self):
idx = DatetimeIndex(
['2000-01-04', '2000-01-01', '2000-01-02'], name='idx')
@@ -145,13 +154,6 @@ def test_insert(self):
assert result.tz == expected.tz
assert result.freq is None
- # GH 18295 (test missing)
- expected = DatetimeIndex(
- ['20170101', pd.NaT, '20170102', '20170103', '20170104'])
- for na in (np.nan, pd.NaT, None):
- result = date_range('20170101', periods=4).insert(1, na)
- tm.assert_index_equal(result, expected)
-
def test_delete(self):
idx = date_range(start='2000-01-01', periods=5, freq='M', name='idx')
| - [x] closes #16357
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/18883 | 2017-12-20T22:01:53Z | 2017-12-29T14:39:38Z | 2017-12-29T14:39:37Z | 2017-12-29T16:29:10Z |
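The behavior this PR repairs can be exercised directly; a minimal sketch using the same index as the new test, against a pandas build that includes the fix:

```python
import pandas as pd

# Inserting NaT into a tz-aware DatetimeIndex previously raised
# "Passed item and index have different timezone"; after this fix
# it produces a NaT slot instead (GH#16357).
idx = pd.DatetimeIndex(['2017-01-01'], tz='US/Eastern')
result = idx.insert(0, pd.NaT)
expected = pd.DatetimeIndex(['NaT', '2017-01-01'], tz='US/Eastern')
assert result.equals(expected)
```

The same holds for ``None`` and ``np.nan``, which the parametrized test also covers.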
API: disallow duplicate level names | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 5e55efb4e21fb..df50624a9fb2f 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -184,6 +184,7 @@ Other API Changes
- A :class:`Series` of ``dtype=category`` constructed from an empty ``dict`` will now have categories of ``dtype=object`` rather than ``dtype=float64``, consistently with the case in which an empty list is passed (:issue:`18515`)
- ``NaT`` division with :class:`datetime.timedelta` will now return ``NaN`` instead of raising (:issue:`17876`)
- All-NaN levels in a ``MultiIndex`` are now assigned ``float`` rather than ``object`` dtype, promoting consistency with ``Index`` (:issue:`17929`).
+- Level names of a ``MultiIndex`` (when not ``None``) are now required to be unique: trying to create a ``MultiIndex`` with repeated names will raise a ``ValueError`` (:issue:`18872`)
- :class:`Timestamp` will no longer silently ignore unused or invalid ``tz`` or ``tzinfo`` keyword arguments (:issue:`17690`)
- :class:`Timestamp` will no longer silently ignore invalid ``freq`` arguments (:issue:`5168`)
- :class:`CacheableOffset` and :class:`WeekDay` are no longer available in the ``pandas.tseries.offsets`` module (:issue:`17830`)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index c20c6e1f75a24..f4c4f91d2cc57 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -579,23 +579,24 @@ def _set_names(self, names, level=None, validate=True):
if level is None:
level = range(self.nlevels)
+ used = {}
else:
level = [self._get_level_number(l) for l in level]
+ used = {self.levels[l].name: l
+ for l in set(range(self.nlevels)) - set(level)}
# set the name
for l, name in zip(level, names):
+ if name is not None and name in used:
+ raise ValueError('Duplicated level name: "{}", assigned to '
+ 'level {}, is already used for level '
+ '{}.'.format(name, l, used[name]))
self.levels[l].rename(name, inplace=True)
+ used[name] = l
names = property(fset=_set_names, fget=_get_names,
doc="Names of levels in MultiIndex")
- def _reference_duplicate_name(self, name):
- """
- Returns True if the name refered to in self.names is duplicated.
- """
- # count the times name equals an element in self.names.
- return sum(name == n for n in self.names) > 1
-
def _format_native_types(self, na_rep='nan', **kwargs):
new_levels = []
new_labels = []
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index 320ad109f01ba..1ca014baa9ec8 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -91,12 +91,6 @@ def __init__(self, values, index, level=-1, value_columns=None,
self.index = index
- if isinstance(self.index, MultiIndex):
- if index._reference_duplicate_name(level):
- msg = ("Ambiguous reference to {level}. The index "
- "names are not unique.".format(level=level))
- raise ValueError(msg)
-
self.level = self.index._get_level_number(level)
# when index includes `nan`, need to lift levels/strides by 1
@@ -502,11 +496,6 @@ def factorize(index):
return categories, codes
N, K = frame.shape
- if isinstance(frame.columns, MultiIndex):
- if frame.columns._reference_duplicate_name(level):
- msg = ("Ambiguous reference to {level}. The column "
- "names are not unique.".format(level=level))
- raise ValueError(msg)
# Will also convert negative level numbers and check if out of bounds.
level_num = frame.columns._get_level_number(level)
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index e7ea3f9c62540..6e3b7a059fd49 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -130,6 +130,20 @@ def test_set_index2(self):
result = df.set_index(df.C)
assert result.index.name == 'C'
+ @pytest.mark.parametrize('level', ['a', pd.Series(range(3), name='a')])
+ def test_set_index_duplicate_names(self, level):
+ # GH18872
+ df = pd.DataFrame(np.arange(8).reshape(4, 2), columns=['a', 'b'])
+
+ # Pass an existing level name:
+ df.index.name = 'a'
+ pytest.raises(ValueError, df.set_index, level, append=True)
+ pytest.raises(ValueError, df.set_index, [level], append=True)
+
+ # Pass the same level name twice:
+ df.index.name = 'c'
+ pytest.raises(ValueError, df.set_index, [level, level])
+
def test_set_index_nonuniq(self):
df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'],
'B': ['one', 'two', 'three', 'one', 'two'],
@@ -591,19 +605,6 @@ def test_reorder_levels(self):
index=e_idx)
assert_frame_equal(result, expected)
- result = df.reorder_levels([0, 0, 0])
- e_idx = MultiIndex(levels=[['bar'], ['bar'], ['bar']],
- labels=[[0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0]],
- names=['L0', 'L0', 'L0'])
- expected = DataFrame({'A': np.arange(6), 'B': np.arange(6)},
- index=e_idx)
- assert_frame_equal(result, expected)
-
- result = df.reorder_levels(['L0', 'L0', 'L0'])
- assert_frame_equal(result, expected)
-
def test_reset_index(self):
stacked = self.frame.stack()[::2]
stacked = DataFrame({'foo': stacked, 'bar': stacked})
@@ -831,7 +832,7 @@ def test_set_index_names(self):
mi = MultiIndex.from_arrays(df[['A', 'B']].T.values, names=['A', 'B'])
mi2 = MultiIndex.from_arrays(df[['A', 'B', 'A', 'B']].T.values,
- names=['A', 'B', 'A', 'B'])
+ names=['A', 'B', 'C', 'D'])
df = df.set_index(['A', 'B'])
@@ -843,13 +844,14 @@ def test_set_index_names(self):
# Check actual equality
tm.assert_index_equal(df.set_index(df.index).index, mi)
+ idx2 = df.index.rename(['C', 'D'])
+
# Check that [MultiIndex, MultiIndex] yields a MultiIndex rather
# than a pair of tuples
- assert isinstance(df.set_index(
- [df.index, df.index]).index, MultiIndex)
+ assert isinstance(df.set_index([df.index, idx2]).index, MultiIndex)
# Check equality
- tm.assert_index_equal(df.set_index([df.index, df.index]).index, mi2)
+ tm.assert_index_equal(df.set_index([df.index, idx2]).index, mi2)
def test_rename_objects(self):
renamed = self.mixed_frame.rename(columns=str.upper)
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index f34d25142a057..5ff4f58774322 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -560,16 +560,6 @@ def test_unstack_dtypes(self):
assert left.shape == (3, 2)
tm.assert_frame_equal(left, right)
- def test_unstack_non_unique_index_names(self):
- idx = MultiIndex.from_tuples([('a', 'b'), ('c', 'd')],
- names=['c1', 'c1'])
- df = DataFrame([1, 2], index=idx)
- with pytest.raises(ValueError):
- df.unstack('c1')
-
- with pytest.raises(ValueError):
- df.T.stack('c1')
-
def test_unstack_nan_index(self): # GH7466
cast = lambda val: '{0:1}'.format('' if val != val else val)
nan = np.nan
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 5e3d2bb9cf091..12f5b98fb64f4 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -388,8 +388,8 @@ def test_groupby_multi_categorical_as_index(self):
columns=['cat', 'A', 'B'])
tm.assert_frame_equal(result, expected)
- # another not in-axis grouper (conflicting names in index)
- s = Series(['a', 'b', 'b'], name='cat')
+ # another not in-axis grouper
+ s = Series(['a', 'b', 'b'], name='cat2')
result = df.groupby(['cat', s], as_index=False).sum()
expected = DataFrame({'cat': Categorical([1, 1, 2, 2, 3, 3]),
'A': [10.0, nan, nan, 22.0, nan, nan],
@@ -397,6 +397,10 @@ def test_groupby_multi_categorical_as_index(self):
columns=['cat', 'A', 'B'])
tm.assert_frame_equal(result, expected)
+ # GH18872: conflicting names in desired index
+ pytest.raises(ValueError, lambda: df.groupby(['cat',
+ s.rename('cat')]).sum())
+
# is original index dropped?
expected = DataFrame({'cat': Categorical([1, 1, 2, 2, 3, 3]),
'A': [10, 11, 10, 11, 10, 11],
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index 4d6e543851d4f..2a7c020f4c9e9 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -536,15 +536,6 @@ def test_names(self):
level_names = [level.name for level in index.levels]
assert ind_names == level_names
- def test_reference_duplicate_name(self):
- idx = MultiIndex.from_tuples(
- [('a', 'b'), ('c', 'd')], names=['x', 'x'])
- assert idx._reference_duplicate_name('x')
-
- idx = MultiIndex.from_tuples(
- [('a', 'b'), ('c', 'd')], names=['x', 'y'])
- assert not idx._reference_duplicate_name('x')
-
def test_astype(self):
expected = self.index.copy()
actual = self.index.astype('O')
@@ -609,6 +600,23 @@ def test_constructor_mismatched_label_levels(self):
with tm.assert_raises_regex(ValueError, label_error):
self.index.copy().set_labels([[0, 0, 0, 0], [0, 0]])
+ @pytest.mark.parametrize('names', [['a', 'b', 'a'], [1, 1, 2],
+ [1, 'a', 1]])
+ def test_duplicate_level_names(self, names):
+ # GH18872
+ pytest.raises(ValueError, pd.MultiIndex.from_product,
+ [[0, 1]] * 3, names=names)
+
+ # With .rename()
+ mi = pd.MultiIndex.from_product([[0, 1]] * 3)
+ tm.assert_raises_regex(ValueError, "Duplicated level name:",
+ mi.rename, names)
+
+ # With .rename(., level=)
+ mi.rename(names[0], level=1, inplace=True)
+ tm.assert_raises_regex(ValueError, "Duplicated level name:",
+ mi.rename, names[:2], level=[0, 2])
+
def assert_multiindex_copied(self, copy, original):
# Levels should be (at least, shallow copied)
tm.assert_copy(copy.levels, original.levels)
@@ -667,11 +675,6 @@ def test_changing_names(self):
shallow_copy.names = [name + "c" for name in shallow_copy.names]
self.check_level_names(self.index, new_names)
- def test_duplicate_names(self):
- self.index.names = ['foo', 'foo']
- tm.assert_raises_regex(KeyError, 'Level foo not found',
- self.index._get_level_number, 'foo')
-
def test_get_level_number_integer(self):
self.index.names = [1, 0]
assert self.index._get_level_number(1) == 0
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index bedafccca5798..2f8ef32722051 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -589,8 +589,8 @@ def test_to_latex_no_bold_rows(self):
"""
assert observed == expected
- @pytest.mark.parametrize('name0', [None, 'named'])
- @pytest.mark.parametrize('name1', [None, 'named'])
+ @pytest.mark.parametrize('name0', [None, 'named0'])
+ @pytest.mark.parametrize('name1', [None, 'named1'])
@pytest.mark.parametrize('axes', [[0], [1], [0, 1]])
def test_to_latex_multiindex_names(self, name0, name1, axes):
# GH 18667
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index 47be8d115a07e..305c1ebcedc6f 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -1908,12 +1908,6 @@ def make_index(names=None):
'a', 'b'], index=make_index(['date', 'a', 't']))
pytest.raises(ValueError, store.append, 'df', df)
- # dup within level
- _maybe_remove(store, 'df')
- df = DataFrame(np.zeros((12, 2)), columns=['a', 'b'],
- index=make_index(['date', 'date', 'date']))
- pytest.raises(ValueError, store.append, 'df', df)
-
# fully names
_maybe_remove(store, 'df')
df = DataFrame(np.zeros((12, 2)), columns=[
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 5b64f62527da4..786c57a4a82df 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -1623,14 +1623,9 @@ def test_crosstab_with_numpy_size(self):
tm.assert_frame_equal(result, expected)
def test_crosstab_dup_index_names(self):
- # GH 13279
+ # GH 13279, GH 18872
s = pd.Series(range(3), name='foo')
- result = pd.crosstab(s, s)
- expected_index = pd.Index(range(3), name='foo')
- expected = pd.DataFrame(np.eye(3, dtype=np.int64),
- index=expected_index,
- columns=expected_index)
- tm.assert_frame_equal(result, expected)
+ pytest.raises(ValueError, pd.crosstab, s, s)
@pytest.mark.parametrize("names", [['a', ('b', 'c')],
[('a', 'b'), 'c']])
diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py
index f3be7bb9905f4..714e43a4af1f8 100644
--- a/pandas/tests/series/test_alter_axes.py
+++ b/pandas/tests/series/test_alter_axes.py
@@ -214,17 +214,6 @@ def test_reorder_levels(self):
expected = Series(np.arange(6), index=e_idx)
assert_series_equal(result, expected)
- result = s.reorder_levels([0, 0, 0])
- e_idx = MultiIndex(levels=[['bar'], ['bar'], ['bar']],
- labels=[[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0]],
- names=['L0', 'L0', 'L0'])
- expected = Series(np.arange(6), index=e_idx)
- assert_series_equal(result, expected)
-
- result = s.reorder_levels(['L0', 'L0', 'L0'])
- assert_series_equal(result, expected)
-
def test_rename_axis_inplace(self):
# GH 15704
series = self.ts.copy()
| - [x] closes #18872
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/18882 | 2017-12-20T20:53:23Z | 2017-12-29T14:34:18Z | 2017-12-29T14:34:18Z | 2018-01-01T17:47:00Z |
TST: xfail more in 3.5 conda build | diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index caee8c8d85811..2d56e12533cd0 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -144,15 +144,16 @@ def test_read_non_existant(self, reader, module, error_class, fn_ext):
with pytest.raises(error_class):
reader(path)
- @pytest.mark.xfail(reason="not working in 3.5 conda build")
@pytest.mark.parametrize('reader, module, path', [
(pd.read_csv, 'os', os.path.join(HERE, 'data', 'iris.csv')),
(pd.read_table, 'os', os.path.join(HERE, 'data', 'iris.csv')),
(pd.read_fwf, 'os', os.path.join(HERE, 'data',
'fixed_width_format.txt')),
(pd.read_excel, 'xlrd', os.path.join(HERE, 'data', 'test1.xlsx')),
- (pd.read_feather, 'feather', os.path.join(HERE, 'data',
- 'feather-0_3_1.feather')),
+
+ # TODO(jreback) gh-18873
+ # (pd.read_feather, 'feather', os.path.join(HERE, 'data',
+ # 'feather-0_3_1.feather')),
(pd.read_hdf, 'tables', os.path.join(HERE, 'data', 'legacy_hdf',
'datetimetz_object.h5')),
(pd.read_stata, 'os', os.path.join(HERE, 'data', 'stata10_115.dta')),
| https://api.github.com/repos/pandas-dev/pandas/pulls/18879 | 2017-12-20T19:13:35Z | 2017-12-20T22:56:00Z | 2017-12-20T22:56:00Z | 2017-12-20T22:56:00Z | |
Fix FY5253 onOffset/apply bug, simplify | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 24f3e4433411e..3a6c4e10eaa97 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -290,6 +290,7 @@ Conversion
- Bug in :class:`Timestamp` where comparison with an array of ``Timestamp`` objects would result in a ``RecursionError`` (:issue:`15183`)
- Bug in :class:`WeekOfMonth` and :class:`Week` where addition and subtraction did not roll correctly (:issue:`18510`, :issue:`18672`, :issue:`18864`)
- Bug in :meth:`DatetimeIndex.astype` when converting between timezone aware dtypes, and converting from timezone aware to naive (:issue:`18951`)
+- Bug in :class:`FY5253` where ``datetime`` addition and subtraction incremented incorrectly for dates on the year-end but not normalized to midnight (:issue:`18854`)
Indexing
diff --git a/pandas/tests/tseries/offsets/test_fiscal.py b/pandas/tests/tseries/offsets/test_fiscal.py
index 2dd061dcc6f9e..09206439e9996 100644
--- a/pandas/tests/tseries/offsets/test_fiscal.py
+++ b/pandas/tests/tseries/offsets/test_fiscal.py
@@ -158,17 +158,6 @@ def test_apply(self):
class TestFY5253NearestEndMonth(Base):
- def test_get_target_month_end(self):
- assert (makeFY5253NearestEndMonth(
- startingMonth=8, weekday=WeekDay.SAT).get_target_month_end(
- datetime(2013, 1, 1)) == datetime(2013, 8, 31))
- assert (makeFY5253NearestEndMonth(
- startingMonth=12, weekday=WeekDay.SAT).get_target_month_end(
- datetime(2013, 1, 1)) == datetime(2013, 12, 31))
- assert (makeFY5253NearestEndMonth(
- startingMonth=2, weekday=WeekDay.SAT).get_target_month_end(
- datetime(2013, 1, 1)) == datetime(2013, 2, 28))
-
def test_get_year_end(self):
assert (makeFY5253NearestEndMonth(
startingMonth=8, weekday=WeekDay.SAT).get_year_end(
@@ -625,3 +614,22 @@ def test_bunched_yearends():
assert fy.rollback(dt) == Timestamp('2002-12-28')
assert (-fy).apply(dt) == Timestamp('2002-12-28')
assert dt - fy == Timestamp('2002-12-28')
+
+
+def test_fy5253_last_onoffset():
+ # GH#18877 dates on the year-end but not normalized to midnight
+ offset = FY5253(n=-5, startingMonth=5, variation="last", weekday=0)
+ ts = Timestamp('1984-05-28 06:29:43.955911354+0200',
+ tz='Europe/San_Marino')
+ fast = offset.onOffset(ts)
+ slow = (ts + offset) - offset == ts
+ assert fast == slow
+
+
+def test_fy5253_nearest_onoffset():
+ # GH#18877 dates on the year-end but not normalized to midnight
+ offset = FY5253(n=3, startingMonth=7, variation="nearest", weekday=2)
+ ts = Timestamp('2032-07-28 00:12:59.035729419+0000', tz='Africa/Dakar')
+ fast = offset.onOffset(ts)
+ slow = (ts + offset) - offset == ts
+ assert fast == slow
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 54250bbf903a4..0e6a2259274ed 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -1814,13 +1814,6 @@ def __init__(self, n=1, normalize=False, weekday=0, startingMonth=1,
raise ValueError('{variation} is not a valid variation'
.format(variation=self.variation))
- @cache_readonly
- def _offset_lwom(self):
- if self.variation == "nearest":
- return None
- else:
- return LastWeekOfMonth(n=1, weekday=self.weekday)
-
def isAnchored(self):
return (self.n == 1 and
self.startingMonth is not None and
@@ -1841,6 +1834,8 @@ def onOffset(self, dt):
@apply_wraps
def apply(self, other):
+ norm = Timestamp(other).normalize()
+
n = self.n
prev_year = self.get_year_end(
datetime(other.year - 1, self.startingMonth, 1))
@@ -1853,32 +1848,26 @@ def apply(self, other):
cur_year = tslib._localize_pydatetime(cur_year, other.tzinfo)
next_year = tslib._localize_pydatetime(next_year, other.tzinfo)
- if other == prev_year:
+ # Note: next_year.year == other.year + 1, so we will always
+ # have other < next_year
+ if norm == prev_year:
n -= 1
- elif other == cur_year:
+ elif norm == cur_year:
pass
- elif other == next_year:
- n += 1
- # TODO: Not hit in tests
elif n > 0:
- if other < prev_year:
+ if norm < prev_year:
n -= 2
- elif prev_year < other < cur_year:
+ elif prev_year < norm < cur_year:
n -= 1
- elif cur_year < other < next_year:
+ elif cur_year < norm < next_year:
pass
- else:
- assert False
else:
- if next_year < other:
- n += 2
- # TODO: Not hit in tests; UPDATE: looks impossible
- elif cur_year < other < next_year:
+ if cur_year < norm < next_year:
n += 1
- elif prev_year < other < cur_year:
+ elif prev_year < norm < cur_year:
pass
- elif (other.year == prev_year.year and other < prev_year and
- prev_year - other <= timedelta(6)):
+ elif (norm.year == prev_year.year and norm < prev_year and
+ prev_year - norm <= timedelta(6)):
# GH#14774, error when next_year.year == cur_year.year
# e.g. prev_year == datetime(2004, 1, 3),
# other == datetime(2004, 1, 1)
@@ -1894,35 +1883,30 @@ def apply(self, other):
return result
def get_year_end(self, dt):
- if self.variation == "nearest":
- return self._get_year_end_nearest(dt)
- else:
- return self._get_year_end_last(dt)
-
- def get_target_month_end(self, dt):
- target_month = datetime(dt.year, self.startingMonth, 1,
- tzinfo=dt.tzinfo)
- return shift_month(target_month, 0, 'end')
- # TODO: is this DST-safe?
+ assert dt.tzinfo is None
- def _get_year_end_nearest(self, dt):
- target_date = self.get_target_month_end(dt)
+ dim = ccalendar.get_days_in_month(dt.year, self.startingMonth)
+ target_date = datetime(dt.year, self.startingMonth, dim)
wkday_diff = self.weekday - target_date.weekday()
if wkday_diff == 0:
+ # year_end is the same for "last" and "nearest" cases
return target_date
- days_forward = wkday_diff % 7
- if days_forward <= 3:
- # The upcoming self.weekday is closer than the previous one
- return target_date + timedelta(days_forward)
- else:
- # The previous self.weekday is closer than the upcoming one
- return target_date + timedelta(days_forward - 7)
+ if self.variation == "last":
+ days_forward = (wkday_diff % 7) - 7
- def _get_year_end_last(self, dt):
- current_year = datetime(dt.year, self.startingMonth, 1,
- tzinfo=dt.tzinfo)
- return current_year + self._offset_lwom
+ # days_forward is always negative, so we always end up
+ # in the same year as dt
+ return target_date + timedelta(days=days_forward)
+ else:
+ # variation == "nearest":
+ days_forward = wkday_diff % 7
+ if days_forward <= 3:
+ # The upcoming self.weekday is closer than the previous one
+ return target_date + timedelta(days_forward)
+ else:
+ # The previous self.weekday is closer than the upcoming one
+ return target_date + timedelta(days_forward - 7)
@property
def rule_code(self):
| Similar to #18875.
The actual bugfix here is just changing the comparison in `apply` from `other < whatever` to `norm < whatever`. The remaining edits to `get_year_end` are orthogonal simplification (which I guess could go in a separate PR).
- [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/18877 | 2017-12-20T17:59:07Z | 2017-12-28T12:28:44Z | 2017-12-28T12:28:44Z | 2018-02-11T22:00:22Z |
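The consistency property the new tests assert can be sketched as follows; note this uses `is_on_offset`, the current name for the method called `onOffset` at the time of this PR:

```python
import pandas as pd
from pandas.tseries.offsets import FY5253

# A timestamp on the fiscal year-end date but not normalized to
# midnight (GH#18877): the fast membership check and the slow
# round-trip check should agree.
offset = FY5253(n=3, startingMonth=7, variation="nearest", weekday=2)
ts = pd.Timestamp('2032-07-28 00:12:59.035729419', tz='Africa/Dakar')
fast = offset.is_on_offset(ts)
slow = (ts + offset) - offset == ts
assert fast == slow
```

Before the fix, `apply` compared the raw (non-midnight) timestamp against midnight year-end anchors, so the two checks could disagree; comparing against the normalized timestamp restores the invariant.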
ENH: Added a min_count keyword to stat funcs | diff --git a/pandas/_libs/groupby_helper.pxi.in b/pandas/_libs/groupby_helper.pxi.in
index d38b677df321c..16b7cbff44e03 100644
--- a/pandas/_libs/groupby_helper.pxi.in
+++ b/pandas/_libs/groupby_helper.pxi.in
@@ -36,7 +36,8 @@ def get_dispatch(dtypes):
def group_add_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[int64_t] counts,
ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels):
+ ndarray[int64_t] labels,
+ Py_ssize_t min_count=1):
"""
Only aggregates on axis=0
"""
@@ -88,7 +89,7 @@ def group_add_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
for i in range(ncounts):
for j in range(K):
- if nobs[i, j] == 0:
+ if nobs[i, j] < min_count:
out[i, j] = NAN
else:
out[i, j] = sumx[i, j]
@@ -99,7 +100,8 @@ def group_add_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
def group_prod_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[int64_t] counts,
ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels):
+ ndarray[int64_t] labels,
+ Py_ssize_t min_count=1):
"""
Only aggregates on axis=0
"""
@@ -147,7 +149,7 @@ def group_prod_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
for i in range(ncounts):
for j in range(K):
- if nobs[i, j] == 0:
+ if nobs[i, j] < min_count:
out[i, j] = NAN
else:
out[i, j] = prodx[i, j]
@@ -159,12 +161,15 @@ def group_prod_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
def group_var_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[int64_t] counts,
ndarray[{{dest_type2}}, ndim=2] values,
- ndarray[int64_t] labels):
+ ndarray[int64_t] labels,
+ Py_ssize_t min_count=-1):
cdef:
Py_ssize_t i, j, N, K, lab, ncounts = len(counts)
{{dest_type2}} val, ct, oldmean
ndarray[{{dest_type2}}, ndim=2] nobs, mean
+ assert min_count == -1, "'min_count' only used in add and prod"
+
if not len(values) == len(labels):
raise AssertionError("len(index) != len(labels)")
@@ -208,12 +213,15 @@ def group_var_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
def group_mean_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[int64_t] counts,
ndarray[{{dest_type2}}, ndim=2] values,
- ndarray[int64_t] labels):
+ ndarray[int64_t] labels,
+ Py_ssize_t min_count=-1):
cdef:
Py_ssize_t i, j, N, K, lab, ncounts = len(counts)
{{dest_type2}} val, count
ndarray[{{dest_type2}}, ndim=2] sumx, nobs
+ assert min_count == -1, "'min_count' only used in add and prod"
+
if not len(values) == len(labels):
raise AssertionError("len(index) != len(labels)")
@@ -263,7 +271,8 @@ def group_mean_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
def group_ohlc_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[int64_t] counts,
ndarray[{{dest_type2}}, ndim=2] values,
- ndarray[int64_t] labels):
+ ndarray[int64_t] labels,
+ Py_ssize_t min_count=-1):
"""
Only aggregates on axis=0
"""
@@ -272,6 +281,8 @@ def group_ohlc_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
{{dest_type2}} val, count
Py_ssize_t ngroups = len(counts)
+ assert min_count == -1, "'min_count' only used in add and prod"
+
if len(labels) == 0:
return
@@ -332,7 +343,8 @@ def get_dispatch(dtypes):
def group_last_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[int64_t] counts,
ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels):
+ ndarray[int64_t] labels,
+ Py_ssize_t min_count=-1):
"""
Only aggregates on axis=0
"""
@@ -342,6 +354,8 @@ def group_last_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[{{dest_type2}}, ndim=2] resx
ndarray[int64_t, ndim=2] nobs
+ assert min_count == -1, "'min_count' only used in add and prod"
+
if not len(values) == len(labels):
raise AssertionError("len(index) != len(labels)")
@@ -382,7 +396,8 @@ def group_last_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
def group_nth_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[int64_t] counts,
ndarray[{{c_type}}, ndim=2] values,
- ndarray[int64_t] labels, int64_t rank):
+ ndarray[int64_t] labels, int64_t rank,
+ Py_ssize_t min_count=-1):
"""
Only aggregates on axis=0
"""
@@ -392,6 +407,8 @@ def group_nth_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[{{dest_type2}}, ndim=2] resx
ndarray[int64_t, ndim=2] nobs
+ assert min_count == -1, "'min_count' only used in add and prod"
+
if not len(values) == len(labels):
raise AssertionError("len(index) != len(labels)")
@@ -455,7 +472,8 @@ def get_dispatch(dtypes):
def group_max_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[int64_t] counts,
ndarray[{{dest_type2}}, ndim=2] values,
- ndarray[int64_t] labels):
+ ndarray[int64_t] labels,
+ Py_ssize_t min_count=-1):
"""
Only aggregates on axis=0
"""
@@ -464,6 +482,8 @@ def group_max_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
{{dest_type2}} val, count
ndarray[{{dest_type2}}, ndim=2] maxx, nobs
+ assert min_count == -1, "'min_count' only used in add and prod"
+
if not len(values) == len(labels):
raise AssertionError("len(index) != len(labels)")
@@ -526,7 +546,8 @@ def group_max_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
def group_min_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
ndarray[int64_t] counts,
ndarray[{{dest_type2}}, ndim=2] values,
- ndarray[int64_t] labels):
+ ndarray[int64_t] labels,
+ Py_ssize_t min_count=-1):
"""
Only aggregates on axis=0
"""
@@ -535,6 +556,8 @@ def group_min_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
{{dest_type2}} val, count
ndarray[{{dest_type2}}, ndim=2] minx, nobs
+ assert min_count == -1, "'min_count' only used in add and prod"
+
if not len(values) == len(labels):
raise AssertionError("len(index) != len(labels)")
@@ -686,7 +709,8 @@ def group_cummax_{{name}}(ndarray[{{dest_type2}}, ndim=2] out,
def group_median_float64(ndarray[float64_t, ndim=2] out,
ndarray[int64_t] counts,
ndarray[float64_t, ndim=2] values,
- ndarray[int64_t] labels):
+ ndarray[int64_t] labels,
+ Py_ssize_t min_count=-1):
"""
Only aggregates on axis=0
"""
@@ -695,6 +719,9 @@ def group_median_float64(ndarray[float64_t, ndim=2] out,
ndarray[int64_t] _counts
ndarray data
float64_t* ptr
+
+ assert min_count == -1, "'min_count' only used in add and prod"
+
ngroups = len(counts)
N, K = (<object> values).shape
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f2dbb3ef4d32a..2acf64f1d9f74 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7322,7 +7322,8 @@ def _add_numeric_operations(cls):
@Substitution(outname='mad',
desc="Return the mean absolute deviation of the values "
"for the requested axis",
- name1=name, name2=name2, axis_descr=axis_descr)
+ name1=name, name2=name2, axis_descr=axis_descr,
+ min_count='', examples='')
@Appender(_num_doc)
def mad(self, axis=None, skipna=None, level=None):
if skipna is None:
@@ -7363,7 +7364,8 @@ def mad(self, axis=None, skipna=None, level=None):
@Substitution(outname='compounded',
desc="Return the compound percentage of the values for "
"the requested axis", name1=name, name2=name2,
- axis_descr=axis_descr)
+ axis_descr=axis_descr,
+ min_count='', examples='')
@Appender(_num_doc)
def compound(self, axis=None, skipna=None, level=None):
if skipna is None:
@@ -7387,10 +7389,10 @@ def compound(self, axis=None, skipna=None, level=None):
lambda y, axis: np.maximum.accumulate(y, axis), "max",
-np.inf, np.nan)
- cls.sum = _make_stat_function(
+ cls.sum = _make_min_count_stat_function(
cls, 'sum', name, name2, axis_descr,
'Return the sum of the values for the requested axis',
- nanops.nansum)
+ nanops.nansum, _sum_examples)
cls.mean = _make_stat_function(
cls, 'mean', name, name2, axis_descr,
'Return the mean of the values for the requested axis',
@@ -7406,10 +7408,10 @@ def compound(self, axis=None, skipna=None, level=None):
"by N-1\n",
nanops.nankurt)
cls.kurtosis = cls.kurt
- cls.prod = _make_stat_function(
+ cls.prod = _make_min_count_stat_function(
cls, 'prod', name, name2, axis_descr,
'Return the product of the values for the requested axis',
- nanops.nanprod)
+ nanops.nanprod, _prod_examples)
cls.product = cls.prod
cls.median = _make_stat_function(
cls, 'median', name, name2, axis_descr,
@@ -7540,10 +7542,13 @@ def _doc_parms(cls):
numeric_only : boolean, default None
Include only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
+%(min_count)s\
Returns
-------
-%(outname)s : %(name1)s or %(name2)s (if level specified)\n"""
+%(outname)s : %(name1)s or %(name2)s (if level specified)
+
+%(examples)s"""
_num_ddof_doc = """
@@ -7611,9 +7616,92 @@ def _doc_parms(cls):
"""
+_sum_examples = """\
+Examples
+--------
+By default, the sum of an empty series is ``NaN``.
+
+>>> pd.Series([]).sum() # min_count=1 is the default
+nan
+
+This can be controlled with the ``min_count`` parameter. For example, if
+you'd like the sum of an empty series to be 0, pass ``min_count=0``.
+
+>>> pd.Series([]).sum(min_count=0)
+0.0
+
+Thanks to the ``skipna`` parameter, ``min_count`` handles all-NA and
+empty series identically.
+
+>>> pd.Series([np.nan]).sum()
+nan
+
+>>> pd.Series([np.nan]).sum(min_count=0)
+0.0
+"""
+
+_prod_examples = """\
+Examples
+--------
+By default, the product of an empty series is ``NaN``.
+
+>>> pd.Series([]).prod()
+nan
+
+This can be controlled with the ``min_count`` parameter.
+
+>>> pd.Series([]).prod(min_count=0)
+1.0
+
+Thanks to the ``skipna`` parameter, ``min_count`` handles all-NA and
+empty series identically.
+
+>>> pd.Series([np.nan]).prod()
+nan
+
+>>> pd.Series([np.nan]).prod(min_count=0)
+1.0
+"""
+
+
+_min_count_stub = """\
+min_count : int, default 1
+ The required number of valid values to perform the operation. If fewer than
+ ``min_count`` non-NA values are present the result will be NA.
+
+    .. versionadded:: 0.21.2
+
+ Added with the default being 1. This means the sum or product
+ of an all-NA or empty series is ``NaN``.
+"""
+
+
+def _make_min_count_stat_function(cls, name, name1, name2, axis_descr, desc,
+ f, examples):
+ @Substitution(outname=name, desc=desc, name1=name1, name2=name2,
+ axis_descr=axis_descr, min_count=_min_count_stub,
+ examples=examples)
+ @Appender(_num_doc)
+ def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None,
+ min_count=1,
+ **kwargs):
+ nv.validate_stat_func(tuple(), kwargs, fname=name)
+ if skipna is None:
+ skipna = True
+ if axis is None:
+ axis = self._stat_axis_number
+ if level is not None:
+ return self._agg_by_level(name, axis=axis, level=level,
+ skipna=skipna, min_count=min_count)
+ return self._reduce(f, name, axis=axis, skipna=skipna,
+ numeric_only=numeric_only, min_count=min_count)
+
+ return set_function_name(stat_func, name, cls)
+
+
def _make_stat_function(cls, name, name1, name2, axis_descr, desc, f):
@Substitution(outname=name, desc=desc, name1=name1, name2=name2,
- axis_descr=axis_descr)
+ axis_descr=axis_descr, min_count='', examples='')
@Appender(_num_doc)
def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None,
**kwargs):
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 47b80c00da4d4..041239ed06d88 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -986,7 +986,8 @@ def _cython_transform(self, how, numeric_only=True):
return self._wrap_transformed_output(output, names)
- def _cython_agg_general(self, how, alt=None, numeric_only=True):
+ def _cython_agg_general(self, how, alt=None, numeric_only=True,
+ min_count=-1):
output = {}
for name, obj in self._iterate_slices():
is_numeric = is_numeric_dtype(obj.dtype)
@@ -994,7 +995,8 @@ def _cython_agg_general(self, how, alt=None, numeric_only=True):
continue
try:
- result, names = self.grouper.aggregate(obj.values, how)
+ result, names = self.grouper.aggregate(obj.values, how,
+ min_count=min_count)
except AssertionError as e:
raise GroupByError(str(e))
output[name] = self._try_cast(result, obj)
@@ -1301,7 +1303,8 @@ def _add_numeric_operations(cls):
""" add numeric operations to the GroupBy generically """
def groupby_function(name, alias, npfunc,
- numeric_only=True, _convert=False):
+ numeric_only=True, _convert=False,
+ min_count=-1):
_local_template = "Compute %(f)s of group values"
@@ -1311,6 +1314,8 @@ def groupby_function(name, alias, npfunc,
def f(self, **kwargs):
if 'numeric_only' not in kwargs:
kwargs['numeric_only'] = numeric_only
+ if 'min_count' not in kwargs:
+ kwargs['min_count'] = min_count
self._set_group_selection()
try:
return self._cython_agg_general(
@@ -1358,8 +1363,8 @@ def last(x):
else:
return last(x)
- cls.sum = groupby_function('sum', 'add', np.sum)
- cls.prod = groupby_function('prod', 'prod', np.prod)
+ cls.sum = groupby_function('sum', 'add', np.sum, min_count=1)
+ cls.prod = groupby_function('prod', 'prod', np.prod, min_count=1)
cls.min = groupby_function('min', 'min', np.min, numeric_only=False)
cls.max = groupby_function('max', 'max', np.max, numeric_only=False)
cls.first = groupby_function('first', 'first', first_compat,
@@ -2139,7 +2144,7 @@ def get_group_levels(self):
'var': 'group_var',
'first': {
'name': 'group_nth',
- 'f': lambda func, a, b, c, d: func(a, b, c, d, 1)
+ 'f': lambda func, a, b, c, d, e: func(a, b, c, d, 1, -1)
},
'last': 'group_last',
'ohlc': 'group_ohlc',
@@ -2209,7 +2214,7 @@ def wrapper(*args, **kwargs):
(how, dtype_str))
return func, dtype_str
- def _cython_operation(self, kind, values, how, axis):
+ def _cython_operation(self, kind, values, how, axis, min_count=-1):
assert kind in ['transform', 'aggregate']
# can we do this operation with our cython functions
@@ -2294,11 +2299,12 @@ def _cython_operation(self, kind, values, how, axis):
counts = np.zeros(self.ngroups, dtype=np.int64)
result = self._aggregate(
result, counts, values, labels, func, is_numeric,
- is_datetimelike)
+ is_datetimelike, min_count)
elif kind == 'transform':
result = _maybe_fill(np.empty_like(values, dtype=out_dtype),
fill_value=np.nan)
+ # TODO: min_count
result = self._transform(
result, values, labels, func, is_numeric, is_datetimelike)
@@ -2335,14 +2341,15 @@ def _cython_operation(self, kind, values, how, axis):
return result, names
- def aggregate(self, values, how, axis=0):
- return self._cython_operation('aggregate', values, how, axis)
+ def aggregate(self, values, how, axis=0, min_count=-1):
+ return self._cython_operation('aggregate', values, how, axis,
+ min_count=min_count)
def transform(self, values, how, axis=0):
return self._cython_operation('transform', values, how, axis)
def _aggregate(self, result, counts, values, comp_ids, agg_func,
- is_numeric, is_datetimelike):
+ is_numeric, is_datetimelike, min_count=-1):
if values.ndim > 3:
# punting for now
raise NotImplementedError("number of dimensions is currently "
@@ -2351,9 +2358,10 @@ def _aggregate(self, result, counts, values, comp_ids, agg_func,
for i, chunk in enumerate(values.transpose(2, 0, 1)):
chunk = chunk.squeeze()
- agg_func(result[:, :, i], counts, chunk, comp_ids)
+ agg_func(result[:, :, i], counts, chunk, comp_ids,
+ min_count)
else:
- agg_func(result, counts, values, comp_ids)
+ agg_func(result, counts, values, comp_ids, min_count)
return result
@@ -3643,9 +3651,10 @@ def _iterate_slices(self):
continue
yield val, slicer(val)
- def _cython_agg_general(self, how, alt=None, numeric_only=True):
+ def _cython_agg_general(self, how, alt=None, numeric_only=True,
+ min_count=-1):
new_items, new_blocks = self._cython_agg_blocks(
- how, alt=alt, numeric_only=numeric_only)
+ how, alt=alt, numeric_only=numeric_only, min_count=min_count)
return self._wrap_agged_blocks(new_items, new_blocks)
def _wrap_agged_blocks(self, items, blocks):
@@ -3671,7 +3680,8 @@ def _wrap_agged_blocks(self, items, blocks):
_block_agg_axis = 0
- def _cython_agg_blocks(self, how, alt=None, numeric_only=True):
+ def _cython_agg_blocks(self, how, alt=None, numeric_only=True,
+ min_count=-1):
# TODO: the actual managing of mgr_locs is a PITA
# here, it should happen via BlockManager.combine
@@ -3688,7 +3698,7 @@ def _cython_agg_blocks(self, how, alt=None, numeric_only=True):
locs = block.mgr_locs.as_array
try:
result, _ = self.grouper.aggregate(
- block.values, how, axis=agg_axis)
+ block.values, how, axis=agg_axis, min_count=min_count)
except NotImplementedError:
# generally if we have numeric_only=False
# and non-applicable functions
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index e1c09947ac0b4..88f69f6ff2e14 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -107,21 +107,9 @@ def f(values, axis=None, skipna=True, **kwds):
if k not in kwds:
kwds[k] = v
try:
- if values.size == 0:
-
- # we either return np.nan or pd.NaT
- if is_numeric_dtype(values):
- values = values.astype('float64')
- fill_value = na_value_for_dtype(values.dtype)
-
- if values.ndim == 1:
- return fill_value
- else:
- result_shape = (values.shape[:axis] +
- values.shape[axis + 1:])
- result = np.empty(result_shape, dtype=values.dtype)
- result.fill(fill_value)
- return result
+ if values.size == 0 and kwds.get('min_count') is None:
+ # We are empty, returning NA for our type
+ return _na_for_min_count(values, axis)
if (_USE_BOTTLENECK and skipna and
_bn_ok_dtype(values.dtype, bn_name)):
@@ -292,6 +280,22 @@ def _wrap_results(result, dtype):
return result
+def _na_for_min_count(values, axis):
+ # we either return np.nan or pd.NaT
+ if is_numeric_dtype(values):
+ values = values.astype('float64')
+ fill_value = na_value_for_dtype(values.dtype)
+
+ if values.ndim == 1:
+ return fill_value
+ else:
+ result_shape = (values.shape[:axis] +
+ values.shape[axis + 1:])
+ result = np.empty(result_shape, dtype=values.dtype)
+ result.fill(fill_value)
+ return result
+
+
def nanany(values, axis=None, skipna=True):
values, mask, dtype, _ = _get_values(values, skipna, False, copy=skipna)
return values.any(axis)
@@ -304,7 +308,7 @@ def nanall(values, axis=None, skipna=True):
@disallow('M8')
@bottleneck_switch()
-def nansum(values, axis=None, skipna=True):
+def nansum(values, axis=None, skipna=True, min_count=1):
values, mask, dtype, dtype_max = _get_values(values, skipna, 0)
dtype_sum = dtype_max
if is_float_dtype(dtype):
@@ -312,7 +316,7 @@ def nansum(values, axis=None, skipna=True):
elif is_timedelta64_dtype(dtype):
dtype_sum = np.float64
the_sum = values.sum(axis, dtype=dtype_sum)
- the_sum = _maybe_null_out(the_sum, axis, mask)
+ the_sum = _maybe_null_out(the_sum, axis, mask, min_count=min_count)
return _wrap_results(the_sum, dtype)
@@ -641,13 +645,13 @@ def nankurt(values, axis=None, skipna=True):
@disallow('M8', 'm8')
-def nanprod(values, axis=None, skipna=True):
+def nanprod(values, axis=None, skipna=True, min_count=1):
mask = isna(values)
if skipna and not is_any_int_dtype(values):
values = values.copy()
values[mask] = 1
result = values.prod(axis)
- return _maybe_null_out(result, axis, mask)
+ return _maybe_null_out(result, axis, mask, min_count=min_count)
def _maybe_arg_null_out(result, axis, mask, skipna):
@@ -683,9 +687,9 @@ def _get_counts(mask, axis, dtype=float):
return np.array(count, dtype=dtype)
-def _maybe_null_out(result, axis, mask):
+def _maybe_null_out(result, axis, mask, min_count=1):
if axis is not None and getattr(result, 'ndim', False):
- null_mask = (mask.shape[axis] - mask.sum(axis)) == 0
+ null_mask = (mask.shape[axis] - mask.sum(axis) - min_count) < 0
if np.any(null_mask):
if is_numeric_dtype(result):
if np.iscomplexobj(result):
@@ -698,7 +702,7 @@ def _maybe_null_out(result, axis, mask):
result[null_mask] = None
elif result is not tslib.NaT:
null_mask = mask.size - mask.sum()
- if null_mask == 0:
+ if null_mask < min_count:
result = np.nan
return result
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index c2bf7cff746eb..a30c727ecb87c 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -625,9 +625,20 @@ def size(self):
Resampler._deprecated_valids += dir(Resampler)
+
+# downsample methods
+for method in ['sum', 'prod']:
+
+ def f(self, _method=method, min_count=1, *args, **kwargs):
+ nv.validate_resampler_func(_method, args, kwargs)
+ return self._downsample(_method, min_count=min_count)
+ f.__doc__ = getattr(GroupBy, method).__doc__
+ setattr(Resampler, method, f)
+
+
# downsample methods
-for method in ['min', 'max', 'first', 'last', 'sum', 'mean', 'sem',
- 'median', 'prod', 'ohlc']:
+for method in ['min', 'max', 'first', 'last', 'mean', 'sem',
+ 'median', 'ohlc']:
def f(self, _method=method, *args, **kwargs):
nv.validate_resampler_func(_method, args, kwargs)
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 17d711f937bf7..80e9acd0d2281 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -973,6 +973,37 @@ def test_sum_corner(self):
assert len(axis0) == 0
assert len(axis1) == 0
+ @pytest.mark.parametrize('method, unit', [
+ ('sum', 0),
+ ('prod', 1),
+ ])
+ def test_sum_prod_nanops(self, method, unit):
+ idx = ['a', 'b', 'c']
+ df = pd.DataFrame({"a": [unit, unit],
+ "b": [unit, np.nan],
+ "c": [np.nan, np.nan]})
+
+ result = getattr(df, method)(min_count=1)
+ expected = pd.Series([unit, unit, np.nan], index=idx)
+ tm.assert_series_equal(result, expected)
+
+ result = getattr(df, method)(min_count=0)
+ expected = pd.Series([unit, unit, unit], index=idx, dtype='float64')
+ tm.assert_series_equal(result, expected)
+
+ result = getattr(df.iloc[1:], method)(min_count=1)
+ expected = pd.Series([unit, np.nan, np.nan], index=idx)
+ tm.assert_series_equal(result, expected)
+
+ df = pd.DataFrame({"A": [unit] * 10, "B": [unit] * 5 + [np.nan] * 5})
+ result = getattr(df, method)(min_count=5)
+        expected = pd.Series([unit, unit], index=['A', 'B'], dtype='float64')
+ tm.assert_series_equal(result, expected)
+
+ result = getattr(df, method)(min_count=6)
+        expected = pd.Series([unit, np.nan], index=['A', 'B'])
+ tm.assert_series_equal(result, expected)
+
def test_sum_object(self):
values = self.frame.values.astype(int)
frame = DataFrame(values, index=self.frame.index,
diff --git a/pandas/tests/groupby/test_aggregate.py b/pandas/tests/groupby/test_aggregate.py
index 3d27df31cee6e..07ecc085098bf 100644
--- a/pandas/tests/groupby/test_aggregate.py
+++ b/pandas/tests/groupby/test_aggregate.py
@@ -809,26 +809,33 @@ def test__cython_agg_general(self):
exc.args += ('operation: %s' % op, )
raise
- def test_cython_agg_empty_buckets(self):
- ops = [('mean', np.mean),
- ('median', lambda x: np.median(x) if len(x) > 0 else np.nan),
- ('var', lambda x: np.var(x, ddof=1)),
- ('add', lambda x: np.sum(x) if len(x) > 0 else np.nan),
- ('prod', np.prod),
- ('min', np.min),
- ('max', np.max), ]
-
+ @pytest.mark.parametrize('op, targop', [
+ ('mean', np.mean),
+ ('median', lambda x: np.median(x) if len(x) > 0 else np.nan),
+ ('var', lambda x: np.var(x, ddof=1)),
+ ('add', lambda x: np.sum(x) if len(x) > 0 else np.nan),
+ ('prod', np.prod),
+ ('min', np.min),
+ ('max', np.max), ]
+ )
+ def test_cython_agg_empty_buckets(self, op, targop):
df = pd.DataFrame([11, 12, 13])
grps = range(0, 55, 5)
- for op, targop in ops:
- result = df.groupby(pd.cut(df[0], grps))._cython_agg_general(op)
- expected = df.groupby(pd.cut(df[0], grps)).agg(lambda x: targop(x))
- try:
- tm.assert_frame_equal(result, expected)
- except BaseException as exc:
- exc.args += ('operation: %s' % op,)
- raise
+ # calling _cython_agg_general directly, instead of via the user API
+ # which sets different values for min_count, so do that here.
+ if op in ('add', 'prod'):
+ min_count = 1
+ else:
+ min_count = -1
+ result = df.groupby(pd.cut(df[0], grps))._cython_agg_general(
+ op, min_count=min_count)
+ expected = df.groupby(pd.cut(df[0], grps)).agg(lambda x: targop(x))
+ try:
+ tm.assert_frame_equal(result, expected)
+ except BaseException as exc:
+ exc.args += ('operation: %s' % op,)
+ raise
def test_agg_over_numpy_arrays(self):
# GH 3788
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index c73423921898d..5e3d2bb9cf091 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -662,3 +662,48 @@ def test_groupby_categorical_two_columns(self):
"C3": [nan, nan, nan, nan, 10, 100,
nan, nan, nan, nan, 200, 34]}, index=idx)
tm.assert_frame_equal(res, exp)
+
+ def test_empty_sum(self):
+ # https://github.com/pandas-dev/pandas/issues/18678
+ df = pd.DataFrame({"A": pd.Categorical(['a', 'a', 'b'],
+ categories=['a', 'b', 'c']),
+ 'B': [1, 2, 1]})
+ expected_idx = pd.CategoricalIndex(['a', 'b', 'c'], name='A')
+
+ # NA by default
+ result = df.groupby("A").B.sum()
+ expected = pd.Series([3, 1, np.nan], expected_idx, name='B')
+ tm.assert_series_equal(result, expected)
+
+ # min_count=0
+ result = df.groupby("A").B.sum(min_count=0)
+ expected = pd.Series([3, 1, 0], expected_idx, name='B')
+ tm.assert_series_equal(result, expected)
+
+ # min_count=1
+ result = df.groupby("A").B.sum(min_count=1)
+ expected = pd.Series([3, 1, np.nan], expected_idx, name='B')
+ tm.assert_series_equal(result, expected)
+
+ def test_empty_prod(self):
+ # https://github.com/pandas-dev/pandas/issues/18678
+ df = pd.DataFrame({"A": pd.Categorical(['a', 'a', 'b'],
+ categories=['a', 'b', 'c']),
+ 'B': [1, 2, 1]})
+
+ expected_idx = pd.CategoricalIndex(['a', 'b', 'c'], name='A')
+
+ # NA by default
+ result = df.groupby("A").B.prod()
+ expected = pd.Series([2, 1, np.nan], expected_idx, name='B')
+ tm.assert_series_equal(result, expected)
+
+ # min_count=0
+ result = df.groupby("A").B.prod(min_count=0)
+ expected = pd.Series([2, 1, 1], expected_idx, name='B')
+ tm.assert_series_equal(result, expected)
+
+ # min_count=1
+ result = df.groupby("A").B.prod(min_count=1)
+ expected = pd.Series([2, 1, np.nan], expected_idx, name='B')
+ tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 0dae6aa96ced1..cd92edc927173 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -30,38 +30,122 @@
class TestSeriesAnalytics(TestData):
@pytest.mark.parametrize("use_bottleneck", [True, False])
- @pytest.mark.parametrize("method", ["sum", "prod"])
- def test_empty(self, method, use_bottleneck):
-
+ @pytest.mark.parametrize("method, unit", [
+ ("sum", 0.0),
+ ("prod", 1.0)
+ ])
+ def test_empty(self, method, unit, use_bottleneck):
with pd.option_context("use_bottleneck", use_bottleneck):
# GH 9422
- # treat all missing as NaN
+ # Entirely empty
s = Series([])
+ # NA by default
result = getattr(s, method)()
assert isna(result)
+            # Explicit
+ result = getattr(s, method)(min_count=0)
+ assert result == unit
+
+ result = getattr(s, method)(min_count=1)
+ assert isna(result)
+
+ # Skipna, default
result = getattr(s, method)(skipna=True)
assert isna(result)
+ # Skipna, explicit
+ result = getattr(s, method)(skipna=True, min_count=0)
+ assert result == unit
+
+ result = getattr(s, method)(skipna=True, min_count=1)
+ assert isna(result)
+
+ # All-NA
s = Series([np.nan])
+ # NA by default
result = getattr(s, method)()
assert isna(result)
+ # Explicit
+ result = getattr(s, method)(min_count=0)
+ assert result == unit
+
+ result = getattr(s, method)(min_count=1)
+ assert isna(result)
+
+ # Skipna, default
result = getattr(s, method)(skipna=True)
assert isna(result)
+ # skipna, explicit
+ result = getattr(s, method)(skipna=True, min_count=0)
+ assert result == unit
+
+ result = getattr(s, method)(skipna=True, min_count=1)
+ assert isna(result)
+
+ # Mix of valid, empty
s = Series([np.nan, 1])
+ # Default
result = getattr(s, method)()
assert result == 1.0
- s = Series([np.nan, 1])
+ # Explicit
+ result = getattr(s, method)(min_count=0)
+ assert result == 1.0
+
+ result = getattr(s, method)(min_count=1)
+ assert result == 1.0
+
+ # Skipna
result = getattr(s, method)(skipna=True)
assert result == 1.0
+ result = getattr(s, method)(skipna=True, min_count=0)
+ assert result == 1.0
+
+ result = getattr(s, method)(skipna=True, min_count=1)
+ assert result == 1.0
+
# GH #844 (changed in 9422)
df = DataFrame(np.empty((10, 0)))
assert (df.sum(1).isnull()).all()
+ s = pd.Series([1])
+ result = getattr(s, method)(min_count=2)
+ assert isna(result)
+
+ s = pd.Series([np.nan])
+ result = getattr(s, method)(min_count=2)
+ assert isna(result)
+
+ s = pd.Series([np.nan, 1])
+ result = getattr(s, method)(min_count=2)
+ assert isna(result)
+
+ @pytest.mark.parametrize('method, unit', [
+ ('sum', 0.0),
+ ('prod', 1.0),
+ ])
+ def test_empty_multi(self, method, unit):
+ s = pd.Series([1, np.nan, np.nan, np.nan],
+ index=pd.MultiIndex.from_product([('a', 'b'), (0, 1)]))
+ # NaN by default
+ result = getattr(s, method)(level=0)
+ expected = pd.Series([1, np.nan], index=['a', 'b'])
+ tm.assert_series_equal(result, expected)
+
+ # min_count=0
+ result = getattr(s, method)(level=0, min_count=0)
+ expected = pd.Series([1, unit], index=['a', 'b'])
+ tm.assert_series_equal(result, expected)
+
+ # min_count=1
+ result = getattr(s, method)(level=0, min_count=1)
+ expected = pd.Series([1, np.nan], index=['a', 'b'])
+ tm.assert_series_equal(result, expected)
+
@pytest.mark.parametrize(
"method", ['sum', 'mean', 'median', 'std', 'var'])
def test_ops_consistency_on_empty(self, method):
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 38f4b8be469a5..4a3c4eff9f8c3 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -4,6 +4,7 @@
from datetime import datetime, timedelta
from functools import partial
from textwrap import dedent
+from operator import methodcaller
import pytz
import pytest
@@ -3382,6 +3383,34 @@ def test_aggregate_normal(self):
assert_frame_equal(expected, dt_result)
"""
+ @pytest.mark.parametrize('method, unit', [
+ ('sum', 0),
+ ('prod', 1),
+ ])
+    def test_resample_entirely_nat_window(self, method, unit):
+ s = pd.Series([0] * 2 + [np.nan] * 2,
+ index=pd.date_range('2017', periods=4))
+ # nan by default
+ result = methodcaller(method)(s.resample("2d"))
+ expected = pd.Series([0.0, np.nan],
+ index=pd.to_datetime(['2017-01-01',
+ '2017-01-03']))
+ tm.assert_series_equal(result, expected)
+
+ # min_count=0
+ result = methodcaller(method, min_count=0)(s.resample("2d"))
+ expected = pd.Series([0.0, unit],
+ index=pd.to_datetime(['2017-01-01',
+ '2017-01-03']))
+ tm.assert_series_equal(result, expected)
+
+ # min_count=1
+ result = methodcaller(method, min_count=1)(s.resample("2d"))
+ expected = pd.Series([0.0, np.nan],
+ index=pd.to_datetime(['2017-01-01',
+ '2017-01-03']))
+ tm.assert_series_equal(result, expected)
+
def test_aggregate_with_nat(self):
# check TimeGrouper's aggregation is identical as normal groupby
@@ -3441,3 +3470,29 @@ def test_repr(self):
"closed='left', label='left', how='mean', "
"convention='e', base=0)")
assert result == expected
+
+ @pytest.mark.parametrize('method, unit', [
+ ('sum', 0),
+ ('prod', 1),
+ ])
+ def test_upsample_sum(self, method, unit):
+ s = pd.Series(1, index=pd.date_range("2017", periods=2, freq="H"))
+ resampled = s.resample("30T")
+ index = pd.to_datetime(['2017-01-01T00:00:00',
+ '2017-01-01T00:30:00',
+ '2017-01-01T01:00:00'])
+
+ # NaN by default
+ result = methodcaller(method)(resampled)
+ expected = pd.Series([1, np.nan, 1], index=index)
+ tm.assert_series_equal(result, expected)
+
+ # min_count=0
+ result = methodcaller(method, min_count=0)(resampled)
+ expected = pd.Series([1, unit, 1], index=index)
+ tm.assert_series_equal(result, expected)
+
+ # min_count=1
+ result = methodcaller(method, min_count=1)(resampled)
+ expected = pd.Series([1, np.nan, 1], index=index)
+ tm.assert_series_equal(result, expected)
| The current default is 1, reproducing the behavior of pandas 0.21. The current
test suite should pass. I'll add additional commits here changing the default to be 0.
Currently, only nansum and nanprod actually do anything with `min_count`. It
will not be hard to adjust the other nan* methods to use it if we want. This
was just the simplest approach for now.
Additional tests for the new behavior have been added.
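The masking rule the patch adds to `_maybe_null_out` can be sketched in bare NumPy (the names `maybe_null_out`, `values`, and `mask` here are illustrative, not part of the patch):

```python
import numpy as np

def maybe_null_out(result, mask, min_count=1):
    # 1-D version of the patched rule: if fewer than ``min_count``
    # non-NA values fed the reduction, return NA instead of the result.
    valid = mask.size - mask.sum()
    if valid < min_count:
        return np.nan
    return result

values = np.array([np.nan, np.nan])
mask = np.isnan(values)
the_sum = np.nansum(values)  # 0.0: nansum of an all-NA array

print(maybe_null_out(the_sum, mask, min_count=1))  # nan (this PR's default)
print(maybe_null_out(the_sum, mask, min_count=0))  # 0.0
```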
closes #18678 | https://api.github.com/repos/pandas-dev/pandas/pulls/18876 | 2017-12-20T17:36:56Z | 2017-12-28T14:44:51Z | 2017-12-28T14:44:51Z | 2017-12-29T00:58:23Z |
Fix bugs in WeekOfMonth.apply, Week.onOffset | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 40e1e2011479c..1a3b3e751190b 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -282,6 +282,8 @@ Conversion
- Bug in :meth:`Index.astype` with a categorical dtype where the resultant index is not converted to a :class:`CategoricalIndex` for all types of index (:issue:`18630`)
- Bug in :meth:`Series.astype` and ``Categorical.astype()`` where an existing categorical data does not get updated (:issue:`10696`, :issue:`18593`)
- Bug in :class:`Series` constructor with an int or float list where specifying ``dtype=str``, ``dtype='str'`` or ``dtype='U'`` failed to convert the data elements to strings (:issue:`16605`)
+- Bug in :class:`Timestamp` where comparison with an array of ``Timestamp`` objects would result in a ``RecursionError`` (:issue:`15183`)
+- Bug in :class:`WeekOfMonth` and :class:`Week` where addition and subtraction did not roll correctly (:issue:`18510`, :issue:`18672`, :issue:`18864`)
Indexing
@@ -361,4 +363,3 @@ Other
^^^^^
- Improved error message when attempting to use a Python keyword as an identifier in a ``numexpr`` backed query (:issue:`18221`)
-- Bug in :class:`Timestamp` where comparison with an array of ``Timestamp`` objects would result in a ``RecursionError`` (:issue:`15183`)
diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py
index 5b4c2f9d86674..b304ebff55b6e 100644
--- a/pandas/tests/tseries/offsets/test_offsets.py
+++ b/pandas/tests/tseries/offsets/test_offsets.py
@@ -3147,3 +3147,37 @@ def test_require_integers(offset_types):
cls = offset_types
with pytest.raises(ValueError):
cls(n=1.5)
+
+
+def test_weeks_onoffset():
+ # GH#18510 Week with weekday = None, normalize = False should always
+ # be onOffset
+ offset = Week(n=2, weekday=None)
+ ts = Timestamp('1862-01-13 09:03:34.873477378+0210', tz='Africa/Lusaka')
+ fast = offset.onOffset(ts)
+ slow = (ts + offset) - offset == ts
+ assert fast == slow
+
+ # negative n
+    offset = Week(n=-2, weekday=None)
+ ts = Timestamp('1856-10-24 16:18:36.556360110-0717', tz='Pacific/Easter')
+ fast = offset.onOffset(ts)
+ slow = (ts + offset) - offset == ts
+ assert fast == slow
+
+
+def test_weekofmonth_onoffset():
+ # GH#18864
+ # Make sure that nanoseconds don't trip up onOffset (and with it apply)
+ offset = WeekOfMonth(n=2, week=2, weekday=0)
+ ts = Timestamp('1916-05-15 01:14:49.583410462+0422', tz='Asia/Qyzylorda')
+ fast = offset.onOffset(ts)
+ slow = (ts + offset) - offset == ts
+ assert fast == slow
+
+ # negative n
+ offset = WeekOfMonth(n=-3, week=1, weekday=0)
+ ts = Timestamp('1980-12-08 03:38:52.878321185+0500', tz='Asia/Oral')
+ fast = offset.onOffset(ts)
+ slow = (ts + offset) - offset == ts
+ assert fast == slow
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 8b12b2f3ad2ce..54250bbf903a4 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -112,6 +112,31 @@ def wrapper(self, other):
return wrapper
+def shift_day(other, days):
+ """
+ Increment the datetime `other` by the given number of days, retaining
+ the time-portion of the datetime. For tz-naive datetimes this is
+ equivalent to adding a timedelta. For tz-aware datetimes it is similar to
+ dateutil's relativedelta.__add__, but handles pytz tzinfo objects.
+
+ Parameters
+ ----------
+ other : datetime or Timestamp
+ days : int
+
+ Returns
+ -------
+ shifted: datetime or Timestamp
+ """
+ if other.tzinfo is None:
+ return other + timedelta(days=days)
+
+ tz = other.tzinfo
+ naive = other.replace(tzinfo=None)
+ shifted = naive + timedelta(days=days)
+ return tslib._localize_pydatetime(shifted, tz)
+
+
# ---------------------------------------------------------------------
# DateOffset
@@ -1342,6 +1367,8 @@ def apply_index(self, i):
def onOffset(self, dt):
if self.normalize and not _is_normalized(dt):
return False
+ elif self.weekday is None:
+ return True
return dt.weekday() == self.weekday
@property
@@ -1361,7 +1388,29 @@ def _from_name(cls, suffix=None):
return cls(weekday=weekday)
-class WeekOfMonth(DateOffset):
+class _WeekOfMonthMixin(object):
+ """Mixin for methods common to WeekOfMonth and LastWeekOfMonth"""
+ @apply_wraps
+ def apply(self, other):
+ compare_day = self._get_offset_day(other)
+
+ months = self.n
+ if months > 0 and compare_day > other.day:
+ months -= 1
+ elif months <= 0 and compare_day < other.day:
+ months += 1
+
+ shifted = shift_month(other, months, 'start')
+ to_day = self._get_offset_day(shifted)
+ return shift_day(shifted, to_day - shifted.day)
+
+ def onOffset(self, dt):
+ if self.normalize and not _is_normalized(dt):
+ return False
+ return dt.day == self._get_offset_day(dt)
+
+
+class WeekOfMonth(_WeekOfMonthMixin, DateOffset):
"""
Describes monthly dates like "the Tuesday of the 2nd week of each month"
@@ -1400,34 +1449,23 @@ def __init__(self, n=1, normalize=False, week=None, weekday=None):
self.kwds = {'weekday': weekday, 'week': week}
- @apply_wraps
- def apply(self, other):
- base = other
- offsetOfMonth = self.getOffsetOfMonth(other)
-
- months = self.n
- if months > 0 and offsetOfMonth > other:
- months -= 1
- elif months <= 0 and offsetOfMonth < other:
- months += 1
-
- other = self.getOffsetOfMonth(shift_month(other, months, 'start'))
- other = datetime(other.year, other.month, other.day, base.hour,
- base.minute, base.second, base.microsecond)
- return other
+ def _get_offset_day(self, other):
+ """
+ Find the day in the same month as other that has the same
+ weekday as self.weekday and is the self.week'th such day in the month.
- def getOffsetOfMonth(self, dt):
- w = Week(weekday=self.weekday)
- d = datetime(dt.year, dt.month, 1, tzinfo=dt.tzinfo)
- # TODO: Is this DST-safe?
- d = w.rollforward(d)
- return d + timedelta(weeks=self.week)
+ Parameters
+ ----------
+ other: datetime
- def onOffset(self, dt):
- if self.normalize and not _is_normalized(dt):
- return False
- d = datetime(dt.year, dt.month, dt.day, tzinfo=dt.tzinfo)
- return d == self.getOffsetOfMonth(dt)
+ Returns
+ -------
+ day: int
+ """
+ mstart = datetime(other.year, other.month, 1)
+ wday = mstart.weekday()
+ shift_days = (self.weekday - wday) % 7
+ return 1 + shift_days + self.week * 7
@property
def rule_code(self):
@@ -1448,7 +1486,7 @@ def _from_name(cls, suffix=None):
return cls(week=week, weekday=weekday)
-class LastWeekOfMonth(DateOffset):
+class LastWeekOfMonth(_WeekOfMonthMixin, DateOffset):
"""
Describes monthly dates in last week of month like "the last Tuesday of
each month"
@@ -1482,31 +1520,24 @@ def __init__(self, n=1, normalize=False, weekday=None):
self.kwds = {'weekday': weekday}
- @apply_wraps
- def apply(self, other):
- offsetOfMonth = self.getOffsetOfMonth(other)
-
- months = self.n
- if months > 0 and offsetOfMonth > other:
- months -= 1
- elif months <= 0 and offsetOfMonth < other:
- months += 1
-
- return self.getOffsetOfMonth(shift_month(other, months, 'start'))
+ def _get_offset_day(self, other):
+ """
+ Find the day in the same month as other that has the same
+ weekday as self.weekday and is the last such day in the month.
- def getOffsetOfMonth(self, dt):
- m = MonthEnd()
- d = datetime(dt.year, dt.month, 1, dt.hour, dt.minute,
- dt.second, dt.microsecond, tzinfo=dt.tzinfo)
- eom = m.rollforward(d)
- # TODO: Is this DST-safe?
- w = Week(weekday=self.weekday)
- return w.rollback(eom)
+ Parameters
+ ----------
+ other: datetime
- def onOffset(self, dt):
- if self.normalize and not _is_normalized(dt):
- return False
- return dt == self.getOffsetOfMonth(dt)
+ Returns
+ -------
+ day: int
+ """
+ dim = ccalendar.get_days_in_month(other.year, other.month)
+ mend = datetime(other.year, other.month, dim)
+ wday = mend.weekday()
+ shift_days = (wday - self.weekday) % 7
+ return dim - shift_days
@property
def rule_code(self):
| In the process we get rid of `WeekOfMonth.getOffsetOfMonth` and `LastWeekOfMonth.getOffsetOfMonth`, which were idiosyncratic in what arguments they passed to `datetime`.
The issues this addresses are orthogonal to #18762, but the code affected does overlap. In particular, after this, `roll_monthday` will not be needed in 18762.
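For reference, the day-of-month arithmetic that replaces `getOffsetOfMonth` can be checked in plain Python (the helper name `nth_weekday_of_month` is mine, not part of the patch):

```python
from datetime import datetime

def nth_weekday_of_month(year, month, week, weekday):
    # Mirrors the new WeekOfMonth._get_offset_day: the day of the month
    # that is the (week + 1)-th occurrence of ``weekday`` (Monday=0).
    mstart = datetime(year, month, 1)
    shift_days = (weekday - mstart.weekday()) % 7
    return 1 + shift_days + week * 7

# 2nd Tuesday of May 1916 (week=1, weekday=1); May 1, 1916 was a Monday
print(nth_weekday_of_month(1916, 5, 1, 1))  # 9
```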
closes #18864
closes #18672
closes #18510
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/18875 | 2017-12-20T17:26:27Z | 2017-12-23T20:34:46Z | 2017-12-23T20:34:45Z | 2018-02-11T22:00:27Z |
TST: xfail conda 3.5 fails | diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 13a393d9109ae..caee8c8d85811 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -144,6 +144,7 @@ def test_read_non_existant(self, reader, module, error_class, fn_ext):
with pytest.raises(error_class):
reader(path)
+ @pytest.mark.xfail(reason="not working in 3.5 conda build")
@pytest.mark.parametrize('reader, module, path', [
(pd.read_csv, 'os', os.path.join(HERE, 'data', 'iris.csv')),
(pd.read_table, 'os', os.path.join(HERE, 'data', 'iris.csv')),
| closes #18870
| https://api.github.com/repos/pandas-dev/pandas/pulls/18873 | 2017-12-20T15:54:18Z | 2017-12-20T18:09:57Z | 2017-12-20T18:09:57Z | 2017-12-20T18:09:57Z |
BLD: try try again | diff --git a/ci/install_travis.sh b/ci/install_travis.sh
index 8cf70e47a4b8f..6946d7dd11870 100755
--- a/ci/install_travis.sh
+++ b/ci/install_travis.sh
@@ -184,7 +184,7 @@ elif [ "$CONDA_BUILD_TEST" ]; then
conda build ./conda.recipe --numpy 1.13 --python 3.5 -q --no-test
echo "[installing]"
- conda install $(conda build ./conda.recipe --numpy 1.13 --python 3.5 --output) --quiet --use-local
+ conda install pandas --use-local
else
| https://api.github.com/repos/pandas-dev/pandas/pulls/18868 | 2017-12-20T11:12:52Z | 2017-12-20T12:36:48Z | 2017-12-20T12:36:48Z | 2017-12-20T12:36:48Z | |
DOC: Modify astype copy=False example to work across platforms | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 4eb7865523cc3..7fc9a91c83267 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4343,7 +4343,7 @@ def astype(self, dtype, copy=True, errors='raise', **kwargs):
pandas object may propagate changes:
>>> s1 = pd.Series([1,2])
- >>> s2 = s1.astype('int', copy=False)
+ >>> s2 = s1.astype('int64', copy=False)
>>> s2[0] = 10
>>> s1 # note that s1[0] has changed too
0 10
| - [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Very minor, just changed `'int'` -> `'int64'`.
The example as previously written does not work on Windows, since `astype('int')` converts to `int32` instead of `int64`. The `Series` constructor defaults to `int64`, so on Windows the `astype` ends up making a copy since the dtype changes, and thus the propagation to `s1` doesn't occur.
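The platform dependence comes from NumPy (at the time) mapping `'int'` to the platform C `long`, which is 32-bit on 64-bit Windows (LLP64) but 64-bit on 64-bit Linux/macOS (LP64). A stdlib-only way to check the C `long` width on the current platform:

```python
import ctypes

# numpy's 'int' dtype followed the platform C long: 4 bytes on
# 64-bit Windows (LLP64), 8 bytes on 64-bit Linux/macOS (LP64),
# which is why astype('int') copied on Windows (int64 -> int32).
long_bytes = ctypes.sizeof(ctypes.c_long)
print("C long is %d bytes (%d-bit) here" % (long_bytes, 8 * long_bytes))
```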
This shouldn't impact Linux or Mac, since `astype('int')` converts to `int64` on those platforms, so `astype('int64')` should be equivalent. I suppose this might not work for people using 32 bit distributions (?), but this should have wider coverage of potential readers of the docs, and is less confusing than using something like `'intp'`. | https://api.github.com/repos/pandas-dev/pandas/pulls/18865 | 2017-12-20T06:19:09Z | 2017-12-21T15:03:20Z | 2017-12-21T15:03:20Z | 2017-12-21T15:30:11Z |
ENH: Let Categorical.rename_categories take a callable | diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.22.0.txt
index 0579a80aad28e..fcca50d1acdfd 100644
--- a/doc/source/whatsnew/v0.22.0.txt
+++ b/doc/source/whatsnew/v0.22.0.txt
@@ -139,6 +139,8 @@ Other Enhancements
- :func:`read_excel()` has gained the ``nrows`` parameter (:issue:`16645`)
- :func:``DataFrame.to_json`` and ``Series.to_json`` now accept an ``index`` argument which allows the user to exclude the index from the JSON output (:issue:`17394`)
- ``IntervalIndex.to_tuples()`` has gained the ``na_tuple`` parameter to control whether NA is returned as a tuple of NA, or NA itself (:issue:`18756`)
+- ``Categorical.rename_categories``, ``CategoricalIndex.rename_categories`` and :attr:`Series.cat.rename_categories`
+ can now take a callable as their argument (:issue:`18862`)
.. _whatsnew_0220.api_breaking:
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index 356e76df366b4..f9bd6849c5072 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -844,7 +844,7 @@ def rename_categories(self, new_categories, inplace=False):
Parameters
----------
- new_categories : list-like or dict-like
+ new_categories : list-like, dict-like or callable
* list-like: all items must be unique and the number of items in
the new categories must match the existing number of categories.
@@ -852,7 +852,14 @@ def rename_categories(self, new_categories, inplace=False):
* dict-like: specifies a mapping from
old categories to new. Categories not contained in the mapping
are passed through and extra categories in the mapping are
- ignored. *New in version 0.21.0*.
+ ignored.
+
+ .. versionadded:: 0.21.0
+
+ * callable : a callable that is called on all items in the old
+ categories and whose return values comprise the new categories.
+
+ .. versionadded:: 0.22.0
.. warning::
@@ -890,6 +897,12 @@ def rename_categories(self, new_categories, inplace=False):
>>> c.rename_categories({'a': 'A', 'c': 'C'})
[A, A, b]
Categories (2, object): [A, b]
+
+ You may also provide a callable to create the new categories
+
+ >>> c.rename_categories(lambda x: x.upper())
+ [A, A, B]
+ Categories (2, object): [A, B]
"""
inplace = validate_bool_kwarg(inplace, 'inplace')
cat = self if inplace else self.copy()
@@ -906,6 +919,8 @@ def rename_categories(self, new_categories, inplace=False):
if is_dict_like(new_categories):
cat.categories = [new_categories.get(item, item)
for item in cat.categories]
+ elif callable(new_categories):
+ cat.categories = [new_categories(item) for item in cat.categories]
else:
cat.categories = new_categories
if not inplace:
diff --git a/pandas/tests/categorical/test_api.py b/pandas/tests/categorical/test_api.py
index 7cc0aafaf05b6..12db4a9bea28b 100644
--- a/pandas/tests/categorical/test_api.py
+++ b/pandas/tests/categorical/test_api.py
@@ -71,9 +71,14 @@ def test_rename_categories(self):
exp_cat = Index(["a", "b", "c"])
tm.assert_index_equal(cat.categories, exp_cat)
- res = cat.rename_categories([1, 2, 3], inplace=True)
+
+ # GH18862 (let rename_categories take callables)
+ result = cat.rename_categories(lambda x: x.upper())
+ expected = Categorical(["A", "B", "C", "A"])
+ tm.assert_categorical_equal(result, expected)
# and now inplace
+ res = cat.rename_categories([1, 2, 3], inplace=True)
assert res is None
tm.assert_numpy_array_equal(cat.__array__(), np.array([1, 2, 3, 1],
dtype=np.int64))
diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index 543f59013ff12..f7328a99195b9 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -185,6 +185,11 @@ def test_method_delegation(self):
tm.assert_index_equal(result, CategoricalIndex(
list('ffggef'), categories=list('efg')))
+ # GH18862 (let rename_categories take callables)
+ result = ci.rename_categories(lambda x: x.upper())
+ tm.assert_index_equal(result, CategoricalIndex(
+ list('AABBCA'), categories=list('CAB')))
+
ci = CategoricalIndex(list('aabbca'), categories=list('cab'))
result = ci.add_categories(['d'])
tm.assert_index_equal(result, CategoricalIndex(
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index 617ca2199f588..a2838f803421c 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -588,6 +588,14 @@ def f():
pytest.raises(Exception, f)
# right: s.cat.set_categories([4,3,2,1])
+ # GH18862 (let Series.cat.rename_categories take callables)
+ s = Series(Categorical(["a", "b", "c", "a"], ordered=True))
+ result = s.cat.rename_categories(lambda x: x.upper())
+ expected = Series(Categorical(["A", "B", "C", "A"],
+ categories=["A", "B", "C"],
+ ordered=True))
+ tm.assert_series_equal(result, expected)
+
def test_str_accessor_api_for_categorical(self):
# https://github.com/pandas-dev/pandas/issues/10661
from pandas.core.strings import StringMethods
| - [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This PR allows ``Categorical.rename_categories`` to take a callable as its argument.
This is useful for applying the same transformation to every category, e.g.
```python
>>> pd.Categorical(['a', 'b']).rename_categories("cat_{}".format)
[cat_a, cat_b]
Categories (2, object): [cat_a, cat_b]
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/18862 | 2017-12-20T01:25:03Z | 2017-12-21T15:02:36Z | 2017-12-21T15:02:36Z | 2017-12-26T22:06:02Z |
BLD: --use-local on conda | diff --git a/ci/install_travis.sh b/ci/install_travis.sh
index 67a175268e22e..8cf70e47a4b8f 100755
--- a/ci/install_travis.sh
+++ b/ci/install_travis.sh
@@ -184,7 +184,7 @@ elif [ "$CONDA_BUILD_TEST" ]; then
conda build ./conda.recipe --numpy 1.13 --python 3.5 -q --no-test
echo "[installing]"
- conda install $(conda build ./conda.recipe --numpy 1.13 --python 3.5 --output) --quiet
+ conda install $(conda build ./conda.recipe --numpy 1.13 --python 3.5 --output) --quiet --use-local
else
| https://api.github.com/repos/pandas-dev/pandas/pulls/18858 | 2017-12-19T23:32:26Z | 2017-12-20T00:57:01Z | 2017-12-20T00:57:01Z | 2017-12-20T00:57:01Z | |
ENH: df.assign accepting dependent **kwargs (#14207) | diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index d7650b6b0938f..78e2fdb46f659 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -95,7 +95,7 @@ constructed from the sorted keys of the dict, if possible.
NaN (not a number) is the standard missing data marker used in pandas.
-**From scalar value**
+**From scalar value**
If ``data`` is a scalar value, an index must be
provided. The value will be repeated to match the length of **index**.
@@ -154,7 +154,7 @@ See also the :ref:`section on attribute access<indexing.attribute_access>`.
Vectorized operations and label alignment with Series
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-When working with raw NumPy arrays, looping through value-by-value is usually
+When working with raw NumPy arrays, looping through value-by-value is usually
not necessary. The same is true when working with Series in pandas.
Series can also be passed into most NumPy methods expecting an ndarray.
@@ -324,7 +324,7 @@ From a list of dicts
From a dict of tuples
~~~~~~~~~~~~~~~~~~~~~
-You can automatically create a multi-indexed frame by passing a tuples
+You can automatically create a multi-indexed frame by passing a tuples
dictionary.
.. ipython:: python
@@ -347,7 +347,7 @@ column name provided).
**Missing Data**
Much more will be said on this topic in the :ref:`Missing data <missing_data>`
-section. To construct a DataFrame with missing data, we use ``np.nan`` to
+section. To construct a DataFrame with missing data, we use ``np.nan`` to
represent missing values. Alternatively, you may pass a ``numpy.MaskedArray``
as the data argument to the DataFrame constructor, and its masked entries will
be considered missing.
@@ -370,7 +370,7 @@ set to ``'index'`` in order to use the dict keys as row labels.
``DataFrame.from_records`` takes a list of tuples or an ndarray with structured
dtype. It works analogously to the normal ``DataFrame`` constructor, except that
-the resulting DataFrame index may be a specific field of the structured
+the resulting DataFrame index may be a specific field of the structured
dtype. For example:
.. ipython:: python
@@ -506,25 +506,70 @@ to be inserted (for example, a ``Series`` or NumPy array), or a function
of one argument to be called on the ``DataFrame``. A *copy* of the original
DataFrame is returned, with the new values inserted.
+.. versionchanged:: 0.23.0
+
+Starting with Python 3.6 the order of ``**kwargs`` is preserved. This allows
+for *dependent* assignment, where an expression later in ``**kwargs`` can refer
+to a column created earlier in the same :meth:`~DataFrame.assign`.
+
+.. ipython:: python
+
+ dfa = pd.DataFrame({"A": [1, 2, 3],
+ "B": [4, 5, 6]})
+ dfa.assign(C=lambda x: x['A'] + x['B'],
+ D=lambda x: x['A'] + x['C'])
+
+In the second expression, ``x['C']`` will refer to the newly created column,
+which is equal to ``dfa['A'] + dfa['B']``.
+
+To write code compatible with all versions of Python, split the assignment in two.
+
+.. ipython:: python
+
+ dependent = pd.DataFrame({"A": [1, 1, 1]})
+ (dependent.assign(A=lambda x: x['A'] + 1)
+ .assign(B=lambda x: x['A'] + 2))
+
.. warning::
- Since the function signature of ``assign`` is ``**kwargs``, a dictionary,
- the order of the new columns in the resulting DataFrame cannot be guaranteed
- to match the order you pass in. To make things predictable, items are inserted
- alphabetically (by key) at the end of the DataFrame.
+ Dependent assignment may subtly change the behavior of your code between
+ Python 3.6 and older versions of Python.
+
+ If you wish to write code that supports Python versions both before and after 3.6,
+ you'll need to take care when passing ``assign`` expressions that
+
+ * Update an existing column
+ * Refer to the newly updated column in the same ``assign``
+
+ For example, we'll update column "A" and then refer to it when creating "B".
+
+ .. code-block:: python
+
+ >>> dependent = pd.DataFrame({"A": [1, 1, 1]})
+ >>> dependent.assign(A=lambda x: x["A"] + 1,
+ B=lambda x: x["A"] + 2)
+
+ For Python 3.5 and earlier the expression creating ``B`` refers to the
+ "old" value of ``A``, ``[1, 1, 1]``. The output is then
+
+ .. code-block:: python
+
+ A B
+ 0 2 3
+ 1 2 3
+ 2 2 3
+
+ For Python 3.6 and later, the expression creating ``A`` refers to the
+ "new" value of ``A``, ``[2, 2, 2]``, which results in
+
+ .. code-block:: python
- All expressions are computed first, and then assigned. So you can't refer
- to another column being assigned in the same call to ``assign``. For example:
+ A B
+ 0 2 4
+ 1 2 4
+ 2 2 4
- .. ipython::
- :verbatim:
- In [1]: # Don't do this, bad reference to `C`
- df.assign(C = lambda x: x['A'] + x['B'],
- D = lambda x: x['A'] + x['C'])
- In [2]: # Instead, break it into two assigns
- (df.assign(C = lambda x: x['A'] + x['B'])
- .assign(D = lambda x: x['A'] + x['C']))
Indexing / Selection
~~~~~~~~~~~~~~~~~~~~
@@ -914,7 +959,7 @@ For example, using the earlier example data, we could do:
Squeezing
~~~~~~~~~
-Another way to change the dimensionality of an object is to ``squeeze`` a 1-len
+Another way to change the dimensionality of an object is to ``squeeze`` a 1-len
object, similar to ``wp['Item1']``.
.. ipython:: python
diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index cf5a44442045b..db5c79dcb3c42 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -248,6 +248,46 @@ Current Behavior:
pd.RangeIndex(1, 5) / 0
+.. _whatsnew_0230.enhancements.assign_dependent:
+
+``.assign()`` accepts dependent arguments
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:func:`DataFrame.assign` now accepts dependent keyword arguments on Python 3.6 and later (see also `PEP 468
+<https://www.python.org/dev/peps/pep-0468/>`_). Later keyword arguments may now refer to earlier ones if the argument is a callable. See the
+:ref:`documentation here <dsintro.chained_assignment>` (:issue:`14207`)
+
+.. ipython:: python
+
+ df = pd.DataFrame({'A': [1, 2, 3]})
+ df
+ df.assign(B=df.A, C=lambda x: x['A'] + x['B'])
+
+.. warning::
+
+ This may subtly change the behavior of your code when you're
+ using ``.assign()`` to update an existing column. Previously, callables
+ referring to other variables being updated would get the "old" values
+
+ Previous Behaviour:
+
+ .. code-block:: ipython
+
+ In [2]: df = pd.DataFrame({"A": [1, 2, 3]})
+
+ In [3]: df.assign(A=lambda df: df.A + 1, C=lambda df: df.A * -1)
+ Out[3]:
+ A C
+ 0 2 -1
+ 1 3 -2
+ 2 4 -3
+
+ New Behaviour:
+
+ .. ipython:: python
+
+ df.assign(A=df.A + 1, C=lambda df: df.A * -1)
+
.. _whatsnew_0230.enhancements.other:
Other Enhancements
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 6d8dcb8a1ca89..c99c59db1d8cb 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2687,12 +2687,17 @@ def assign(self, **kwargs):
Notes
-----
- For python 3.6 and above, the columns are inserted in the order of
- \*\*kwargs. For python 3.5 and earlier, since \*\*kwargs is unordered,
- the columns are inserted in alphabetical order at the end of your
- DataFrame. Assigning multiple columns within the same ``assign``
- is possible, but you cannot reference other columns created within
- the same ``assign`` call.
+ Assigning multiple columns within the same ``assign`` is possible.
+ For Python 3.6 and above, later items in '\*\*kwargs' may refer to
+ newly created or modified columns in 'df'; items are computed and
+ assigned into 'df' in order. For Python 3.5 and below, the order of
+ keyword arguments is not specified, so you cannot refer to newly
+ created or modified columns. All items are computed first, and then
+ assigned in alphabetical order.
+
+ .. versionchanged:: 0.23.0
+
+ Keyword argument order is maintained for Python 3.6 and later.
Examples
--------
@@ -2728,22 +2733,34 @@ def assign(self, **kwargs):
7 8 -1.495604 2.079442
8 9 0.549296 2.197225
9 10 -0.758542 2.302585
+
+ Where the keyword arguments depend on each other
+
+ >>> df = pd.DataFrame({'A': [1, 2, 3]})
+
+ >>> df.assign(B=df.A, C=lambda x: x['A'] + x['B'])
+ A B C
+ 0 1 1 2
+ 1 2 2 4
+ 2 3 3 6
"""
data = self.copy()
- # do all calculations first...
- results = OrderedDict()
- for k, v in kwargs.items():
- results[k] = com._apply_if_callable(v, data)
-
- # preserve order for 3.6 and later, but sort by key for 3.5 and earlier
+ # >= 3.6 preserve order of kwargs
if PY36:
- results = results.items()
+ for k, v in kwargs.items():
+ data[k] = com._apply_if_callable(v, data)
else:
+ # <= 3.5: do all calculations first...
+ results = OrderedDict()
+ for k, v in kwargs.items():
+ results[k] = com._apply_if_callable(v, data)
+
+ # <= 3.5 and earlier
results = sorted(results.items())
- # ... and then assign
- for k, v in results:
- data[k] = v
+ # ... and then assign
+ for k, v in results:
+ data[k] = v
return data
def _sanitize_column(self, key, value, broadcast=True):
diff --git a/pandas/tests/frame/test_mutate_columns.py b/pandas/tests/frame/test_mutate_columns.py
index 9acdf2f17d86a..8236a41d00243 100644
--- a/pandas/tests/frame/test_mutate_columns.py
+++ b/pandas/tests/frame/test_mutate_columns.py
@@ -89,11 +89,35 @@ def test_assign_bad(self):
df.assign(lambda x: x.A)
with pytest.raises(AttributeError):
df.assign(C=df.A, D=df.A + df.C)
+
+ @pytest.mark.skipif(PY36, reason="""Issue #14207: valid for python
+ 3.6 and above""")
+ def test_assign_dependent_old_python(self):
+ df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
+
+ # Key C does not exist at definition time of df
with pytest.raises(KeyError):
- df.assign(C=lambda df: df.A, D=lambda df: df['A'] + df['C'])
+ df.assign(C=lambda df: df.A,
+ D=lambda df: df['A'] + df['C'])
with pytest.raises(KeyError):
df.assign(C=df.A, D=lambda x: x['A'] + x['C'])
+ @pytest.mark.skipif(not PY36, reason="""Issue #14207: not valid for
+ python 3.5 and below""")
+ def test_assign_dependent(self):
+ df = DataFrame({'A': [1, 2], 'B': [3, 4]})
+
+ result = df.assign(C=df.A, D=lambda x: x['A'] + x['C'])
+ expected = DataFrame([[1, 3, 1, 2], [2, 4, 2, 4]],
+ columns=list('ABCD'))
+ assert_frame_equal(result, expected)
+
+ result = df.assign(C=lambda df: df.A,
+ D=lambda df: df['A'] + df['C'])
+ expected = DataFrame([[1, 3, 1, 2], [2, 4, 2, 4]],
+ columns=list('ABCD'))
+ assert_frame_equal(result, expected)
+
def test_insert_error_msmgs(self):
# GH 7432
| Specifically, `df.assign(b=1, c=lambda x: x['b'])`
does not throw an exception on Python 3.6 and above.
Further details are discussed in Issues #14207 and #18797.
closes #14207
closes #18797
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
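The ordered-`**kwargs` behavior this relies on (PEP 468, Python 3.6+) can be sketched with a toy assign-like helper over a plain dict (a hypothetical simplification, not the pandas implementation):

```python
def assign_like(data, **kwargs):
    """Toy dependent assignment: on Python 3.6+, **kwargs preserves
    call order (PEP 468), so each callable sees keys created earlier
    in the same call. The input mapping is left unmodified."""
    out = dict(data)
    for key, value in kwargs.items():
        out[key] = value(out) if callable(value) else value
    return out

df = {'A': [1, 2, 3]}
res = assign_like(df,
                  B=lambda d: [a * 2 for a in d['A']],
                  C=lambda d: [a + b for a, b in zip(d['A'], d['B'])])
print(res['B'])  # [2, 4, 6]
print(res['C'])  # [3, 6, 9]
```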
| https://api.github.com/repos/pandas-dev/pandas/pulls/18852 | 2017-12-19T20:12:25Z | 2018-02-10T16:20:19Z | 2018-02-10T16:20:18Z | 2018-02-10T17:08:25Z |
CLN: Drop compact_ints/use_unsigned from read_csv | diff --git a/doc/source/io.rst b/doc/source/io.rst
index d51307081b17f..2584941ac14d2 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -199,21 +199,6 @@ low_memory : boolean, default ``True``
Note that the entire file is read into a single DataFrame regardless,
use the ``chunksize`` or ``iterator`` parameter to return the data in chunks.
(Only valid with C parser)
-compact_ints : boolean, default False
- .. deprecated:: 0.19.0
-
- Argument moved to ``pd.to_numeric``
-
- If ``compact_ints`` is ``True``, then for any column that is of integer dtype, the
- parser will attempt to cast it as the smallest integer ``dtype`` possible, either
- signed or unsigned depending on the specification from the ``use_unsigned`` parameter.
-use_unsigned : boolean, default False
- .. deprecated:: 0.18.2
-
- Argument moved to ``pd.to_numeric``
-
- If integer columns are being compacted (i.e. ``compact_ints=True``), specify whether
- the column should be compacted to the smallest signed or unsigned integer dtype.
memory_map : boolean, default False
If a filepath is provided for ``filepath_or_buffer``, map the file object
directly onto memory and access the data directly from there. Using this
diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.22.0.txt
index 0579a80aad28e..0e1577c1d9e29 100644
--- a/doc/source/whatsnew/v0.22.0.txt
+++ b/doc/source/whatsnew/v0.22.0.txt
@@ -233,6 +233,7 @@ Removal of prior version deprecations/changes
- :func:`read_csv` has dropped the ``skip_footer`` parameter (:issue:`13386`)
- :func:`read_csv` has dropped the ``as_recarray`` parameter (:issue:`13373`)
- :func:`read_csv` has dropped the ``buffer_lines`` parameter (:issue:`13360`)
+- :func:`read_csv` has dropped the ``compact_ints`` and ``use_unsigned`` parameters (:issue:`13323`)
.. _whatsnew_0220.performance:
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index f01068ae2e538..1f7c359b519a5 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -305,7 +305,6 @@ cdef class TextReader:
object index_col
object low_memory
object skiprows
- object compact_ints, use_unsigned
object dtype
object encoding
object compression
@@ -355,10 +354,7 @@ cdef class TextReader:
na_fvalues=None,
true_values=None,
false_values=None,
-
- compact_ints=False,
allow_leading_cols=True,
- use_unsigned=False,
low_memory=False,
skiprows=None,
skipfooter=0,
@@ -482,10 +478,7 @@ cdef class TextReader:
self.false_set = kset_from_list(self.false_values)
self.converters = converters
-
self.na_filter = na_filter
- self.compact_ints = compact_ints
- self.use_unsigned = use_unsigned
self.verbose = verbose
self.low_memory = low_memory
@@ -1122,11 +1115,6 @@ cdef class TextReader:
if upcast_na and na_count > 0:
col_res = _maybe_upcast(col_res)
- if issubclass(col_res.dtype.type,
- np.integer) and self.compact_ints:
- col_res = lib.downcast_int64(col_res, na_values,
- self.use_unsigned)
-
if col_res is None:
raise ParserError('Unable to parse column %d' % i)
diff --git a/pandas/_libs/src/inference.pyx b/pandas/_libs/src/inference.pyx
index 8bfed4fe60fed..5ed8828a0f122 100644
--- a/pandas/_libs/src/inference.pyx
+++ b/pandas/_libs/src/inference.pyx
@@ -1657,74 +1657,3 @@ def fast_multiget(dict mapping, ndarray keys, default=np.nan):
output[i] = default
return maybe_convert_objects(output)
-
-
-def downcast_int64(ndarray[int64_t] arr, object na_values,
- bint use_unsigned=0):
- cdef:
- Py_ssize_t i, n = len(arr)
- int64_t mx = INT64_MIN + 1, mn = INT64_MAX
- int64_t NA = na_values[np.int64]
- int64_t val
- ndarray[uint8_t] mask
- int na_count = 0
-
- _mask = np.empty(n, dtype=bool)
- mask = _mask.view(np.uint8)
-
- for i in range(n):
- val = arr[i]
-
- if val == NA:
- mask[i] = 1
- na_count += 1
- continue
-
- # not NA
- mask[i] = 0
-
- if val > mx:
- mx = val
-
- if val < mn:
- mn = val
-
- if mn >= 0 and use_unsigned:
- if mx <= UINT8_MAX - 1:
- result = arr.astype(np.uint8)
- if na_count:
- np.putmask(result, _mask, na_values[np.uint8])
- return result
-
- if mx <= UINT16_MAX - 1:
- result = arr.astype(np.uint16)
- if na_count:
- np.putmask(result, _mask, na_values[np.uint16])
- return result
-
- if mx <= UINT32_MAX - 1:
- result = arr.astype(np.uint32)
- if na_count:
- np.putmask(result, _mask, na_values[np.uint32])
- return result
-
- else:
- if mn >= INT8_MIN + 1 and mx <= INT8_MAX:
- result = arr.astype(np.int8)
- if na_count:
- np.putmask(result, _mask, na_values[np.int8])
- return result
-
- if mn >= INT16_MIN + 1 and mx <= INT16_MAX:
- result = arr.astype(np.int16)
- if na_count:
- np.putmask(result, _mask, na_values[np.int16])
- return result
-
- if mn >= INT32_MIN + 1 and mx <= INT32_MAX:
- result = arr.astype(np.int32)
- if na_count:
- np.putmask(result, _mask, na_values[np.int32])
- return result
-
- return arr
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 3d07b0e6cbdfd..92f58db775423 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -273,21 +273,6 @@
Note that the entire file is read into a single DataFrame regardless,
use the `chunksize` or `iterator` parameter to return the data in chunks.
(Only valid with C parser)
-compact_ints : boolean, default False
- .. deprecated:: 0.19.0
- Argument moved to ``pd.to_numeric``
-
- If compact_ints is True, then for any column that is of integer dtype,
- the parser will attempt to cast it as the smallest integer dtype possible,
- either signed or unsigned depending on the specification from the
- `use_unsigned` parameter.
-use_unsigned : boolean, default False
- .. deprecated:: 0.19.0
- Argument moved to ``pd.to_numeric``
-
- If integer columns are being compacted (i.e. `compact_ints=True`), specify
- whether the column should be compacted to the smallest signed or unsigned
- integer dtype.
memory_map : boolean, default False
If a filepath is provided for `filepath_or_buffer`, map the file object
directly onto memory and access the data directly from there. Using this
@@ -496,8 +481,6 @@ def _read(filepath_or_buffer, kwds):
_c_parser_defaults = {
'delim_whitespace': False,
'na_filter': True,
- 'compact_ints': False,
- 'use_unsigned': False,
'low_memory': True,
'memory_map': False,
'error_bad_lines': True,
@@ -518,13 +501,9 @@ def _read(filepath_or_buffer, kwds):
}
_deprecated_defaults = {
- 'compact_ints': None,
- 'use_unsigned': None,
'tupleize_cols': None
}
_deprecated_args = {
- 'compact_ints',
- 'use_unsigned',
'tupleize_cols',
}
@@ -596,8 +575,6 @@ def parser_f(filepath_or_buffer,
# Internal
doublequote=True,
delim_whitespace=False,
- compact_ints=None,
- use_unsigned=None,
low_memory=_c_parser_defaults['low_memory'],
memory_map=False,
float_precision=None):
@@ -662,8 +639,6 @@ def parser_f(filepath_or_buffer,
float_precision=float_precision,
na_filter=na_filter,
- compact_ints=compact_ints,
- use_unsigned=use_unsigned,
delim_whitespace=delim_whitespace,
warn_bad_lines=warn_bad_lines,
error_bad_lines=error_bad_lines,
@@ -1569,11 +1544,6 @@ def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False,
if cast_type and not is_dtype_equal(cvals, cast_type):
cvals = self._cast_types(cvals, cast_type, c)
- if issubclass(cvals.dtype.type, np.integer) and self.compact_ints:
- cvals = lib.downcast_int64(
- cvals, parsers.na_values,
- self.use_unsigned)
-
result[c] = cvals
if verbose and na_count:
print('Filled %d NA values in column %s' % (na_count, str(c)))
@@ -2064,8 +2034,6 @@ def __init__(self, f, **kwds):
self.converters = kwds['converters']
self.dtype = kwds['dtype']
- self.compact_ints = kwds['compact_ints']
- self.use_unsigned = kwds['use_unsigned']
self.thousands = kwds['thousands']
self.decimal = kwds['decimal']
diff --git a/pandas/tests/dtypes/test_io.py b/pandas/tests/dtypes/test_io.py
index ae92e9ecca681..06b61371c9a0b 100644
--- a/pandas/tests/dtypes/test_io.py
+++ b/pandas/tests/dtypes/test_io.py
@@ -71,39 +71,3 @@ def test_convert_sql_column_decimals(self):
result = lib.convert_sql_column(arr)
expected = np.array([1.5, np.nan, 3, 4.2], dtype='f8')
tm.assert_numpy_array_equal(result, expected)
-
- def test_convert_downcast_int64(self):
- from pandas._libs.parsers import na_values
-
- arr = np.array([1, 2, 7, 8, 10], dtype=np.int64)
- expected = np.array([1, 2, 7, 8, 10], dtype=np.int8)
-
- # default argument
- result = lib.downcast_int64(arr, na_values)
- tm.assert_numpy_array_equal(result, expected)
-
- result = lib.downcast_int64(arr, na_values, use_unsigned=False)
- tm.assert_numpy_array_equal(result, expected)
-
- expected = np.array([1, 2, 7, 8, 10], dtype=np.uint8)
- result = lib.downcast_int64(arr, na_values, use_unsigned=True)
- tm.assert_numpy_array_equal(result, expected)
-
- # still cast to int8 despite use_unsigned=True
- # because of the negative number as an element
- arr = np.array([1, 2, -7, 8, 10], dtype=np.int64)
- expected = np.array([1, 2, -7, 8, 10], dtype=np.int8)
- result = lib.downcast_int64(arr, na_values, use_unsigned=True)
- tm.assert_numpy_array_equal(result, expected)
-
- arr = np.array([1, 2, 7, 8, 300], dtype=np.int64)
- expected = np.array([1, 2, 7, 8, 300], dtype=np.int16)
- result = lib.downcast_int64(arr, na_values)
- tm.assert_numpy_array_equal(result, expected)
-
- int8_na = na_values[np.int8]
- int64_na = na_values[np.int64]
- arr = np.array([int64_na, 2, 3, 10, 15], dtype=np.int64)
- expected = np.array([int8_na, 2, 3, 10, 15], dtype=np.int8)
- result = lib.downcast_int64(arr, na_values)
- tm.assert_numpy_array_equal(result, expected)
diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
index 8a1f23d203a32..8525cb42c2455 100644
--- a/pandas/tests/io/parser/common.py
+++ b/pandas/tests/io/parser/common.py
@@ -1371,49 +1371,6 @@ def test_raise_on_no_columns(self):
data = "\n\n\n"
pytest.raises(EmptyDataError, self.read_csv, StringIO(data))
- def test_compact_ints_use_unsigned(self):
- # see gh-13323
- data = 'a,b,c\n1,9,258'
-
- # sanity check
- expected = DataFrame({
- 'a': np.array([1], dtype=np.int64),
- 'b': np.array([9], dtype=np.int64),
- 'c': np.array([258], dtype=np.int64),
- })
- out = self.read_csv(StringIO(data))
- tm.assert_frame_equal(out, expected)
-
- expected = DataFrame({
- 'a': np.array([1], dtype=np.int8),
- 'b': np.array([9], dtype=np.int8),
- 'c': np.array([258], dtype=np.int16),
- })
-
- # default behaviour for 'use_unsigned'
- with tm.assert_produces_warning(
- FutureWarning, check_stacklevel=False):
- out = self.read_csv(StringIO(data), compact_ints=True)
- tm.assert_frame_equal(out, expected)
-
- with tm.assert_produces_warning(
- FutureWarning, check_stacklevel=False):
- out = self.read_csv(StringIO(data), compact_ints=True,
- use_unsigned=False)
- tm.assert_frame_equal(out, expected)
-
- expected = DataFrame({
- 'a': np.array([1], dtype=np.uint8),
- 'b': np.array([9], dtype=np.uint8),
- 'c': np.array([258], dtype=np.uint16),
- })
-
- with tm.assert_produces_warning(
- FutureWarning, check_stacklevel=False):
- out = self.read_csv(StringIO(data), compact_ints=True,
- use_unsigned=True)
- tm.assert_frame_equal(out, expected)
-
def test_memory_map(self):
mmap_file = os.path.join(self.dirpath, 'test_mmap.csv')
expected = DataFrame({
diff --git a/pandas/tests/io/parser/test_unsupported.py b/pandas/tests/io/parser/test_unsupported.py
index 30dcc3e5731aa..3117f6fae55da 100644
--- a/pandas/tests/io/parser/test_unsupported.py
+++ b/pandas/tests/io/parser/test_unsupported.py
@@ -128,20 +128,12 @@ def read(self):
class TestDeprecatedFeatures(object):
@pytest.mark.parametrize("engine", ["c", "python"])
- @pytest.mark.parametrize("kwargs", [{"compact_ints": True},
- {"compact_ints": False},
- {"use_unsigned": True},
- {"use_unsigned": False},
- {"tupleize_cols": True},
+ @pytest.mark.parametrize("kwargs", [{"tupleize_cols": True},
{"tupleize_cols": False}])
def test_deprecated_args(self, engine, kwargs):
data = "1,2,3"
arg, _ = list(kwargs.items())[0]
- if engine == "python" and arg == "buffer_lines":
- # unsupported --> exception is raised
- return
-
with tm.assert_produces_warning(
FutureWarning, check_stacklevel=False):
read_csv(StringIO(data), engine=engine, **kwargs)
| Deprecated in v0.19.0
xref #13323
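The functionality moved to `pd.to_numeric(..., downcast='integer')` / `downcast='unsigned'`. The dtype-selection step of the removed `downcast_int64` can be sketched in plain Python (a simplified version that ignores the NA handling in the Cython original):

```python
def smallest_int_dtype(values, use_unsigned=False):
    """Pick the narrowest integer width that holds all values,
    mirroring the selection logic of the removed downcast_int64."""
    mn, mx = min(values), max(values)
    if use_unsigned and mn >= 0:
        for bits in (8, 16, 32):
            if mx <= 2 ** bits - 2:  # UINT{bits}_MAX - 1, as in the original
                return 'uint%d' % bits
    else:
        for bits in (8, 16, 32):
            # INT{bits}_MIN + 1 .. INT{bits}_MAX, as in the original
            if mn >= -(2 ** (bits - 1)) + 1 and mx <= 2 ** (bits - 1) - 1:
                return 'int%d' % bits
    return 'int64'

# Matches the removed test data: 258 needs 16 bits either way.
print(smallest_int_dtype([1, 9, 258]))                     # int16
print(smallest_int_dtype([1, 9, 258], use_unsigned=True))  # uint16
```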
| https://api.github.com/repos/pandas-dev/pandas/pulls/18851 | 2017-12-19T17:34:31Z | 2017-12-21T15:01:25Z | 2017-12-21T15:01:25Z | 2017-12-22T05:05:45Z |
BUG: DatetimeIndex + arraylike of DateOffsets | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 709009542e160..ff041a4849138 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -291,6 +291,7 @@ Conversion
- Bug in :class:`WeekOfMonth` and class:`Week` where addition and subtraction did not roll correctly (:issue:`18510`,:issue:`18672`,:issue:`18864`)
- Bug in :meth:`DatetimeIndex.astype` when converting between timezone aware dtypes, and converting from timezone aware to naive (:issue:`18951`)
- Bug in :class:`FY5253` where ``datetime`` addition and subtraction incremented incorrectly for dates on the year-end but not normalized to midnight (:issue:`18854`)
+- Bug in :class:`DatetimeIndex` where adding or subtracting an array-like of ``DateOffset`` objects either raised (``np.array``, ``pd.Index``) or broadcast incorrectly (``pd.Series``) (:issue:`18849`)
Indexing
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index a441e6c3fd36a..40c07376d2522 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -18,6 +18,7 @@
is_list_like,
is_scalar,
is_bool_dtype,
+ is_offsetlike,
is_categorical_dtype,
is_datetime_or_timedelta_dtype,
is_float_dtype,
@@ -649,6 +650,14 @@ def _sub_datelike(self, other):
def _sub_period(self, other):
return NotImplemented
+ def _add_offset_array(self, other):
+ # Array/Index of DateOffset objects
+ return NotImplemented
+
+ def _sub_offset_array(self, other):
+ # Array/Index of DateOffset objects
+ return NotImplemented
+
@classmethod
def _add_datetimelike_methods(cls):
"""
@@ -671,7 +680,12 @@ def __add__(self, other):
return self._add_delta(other)
elif is_integer(other):
return self.shift(other)
- elif isinstance(other, (Index, datetime, np.datetime64)):
+ elif isinstance(other, (datetime, np.datetime64)):
+ return self._add_datelike(other)
+ elif is_offsetlike(other):
+ # Array/Index of DateOffset objects
+ return self._add_offset_array(other)
+ elif isinstance(other, Index):
return self._add_datelike(other)
else: # pragma: no cover
return NotImplemented
@@ -692,10 +706,6 @@ def __sub__(self, other):
return self._add_delta(-other)
elif isinstance(other, DatetimeIndex):
return self._sub_datelike(other)
- elif isinstance(other, Index):
- raise TypeError("cannot subtract {typ1} and {typ2}"
- .format(typ1=type(self).__name__,
- typ2=type(other).__name__))
elif isinstance(other, (DateOffset, timedelta)):
return self._add_delta(-other)
elif is_integer(other):
@@ -704,6 +714,14 @@ def __sub__(self, other):
return self._sub_datelike(other)
elif isinstance(other, Period):
return self._sub_period(other)
+ elif is_offsetlike(other):
+ # Array/Index of DateOffset objects
+ return self._sub_offset_array(other)
+ elif isinstance(other, Index):
+ raise TypeError("cannot subtract {typ1} and {typ2}"
+ .format(typ1=type(self).__name__,
+ typ2=type(other).__name__))
+
else: # pragma: no cover
return NotImplemented
cls.__sub__ = __sub__
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 9e804b6575c47..321d59eb0e35f 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -893,6 +893,32 @@ def _add_offset(self, offset):
"or DatetimeIndex", PerformanceWarning)
return self.astype('O') + offset
+ def _add_offset_array(self, other):
+ # Array/Index of DateOffset objects
+ if isinstance(other, ABCSeries):
+ return NotImplemented
+ elif len(other) == 1:
+ return self + other[0]
+ else:
+ warnings.warn("Adding/subtracting array of DateOffsets to "
+ "{} not vectorized".format(type(self)),
+ PerformanceWarning)
+ return self.astype('O') + np.array(other)
+ # TODO: This works for __add__ but loses dtype in __sub__
+
+ def _sub_offset_array(self, other):
+ # Array/Index of DateOffset objects
+ if isinstance(other, ABCSeries):
+ return NotImplemented
+ elif len(other) == 1:
+ return self - other[0]
+ else:
+ warnings.warn("Adding/subtracting array of DateOffsets to "
+ "{} not vectorized".format(type(self)),
+ PerformanceWarning)
+ res_values = self.astype('O').values - np.array(other)
+ return self.__class__(res_values, freq='infer')
+
def _format_native_types(self, na_rep='NaT', date_format=None, **kwargs):
from pandas.io.formats.format import _get_format_datetime64_from_values
format = _get_format_datetime64_from_values(self, date_format)
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 05ec7f41b0c66..3a7a5e44d5a88 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -757,7 +757,10 @@ def wrapper(left, right, name=name, na_op=na_op):
rvalues = getattr(rvalues, 'values', rvalues)
# _Op aligns left and right
else:
- name = left.name
+ if isinstance(rvalues, pd.Index):
+ name = _maybe_match_name(left, rvalues)
+ else:
+ name = left.name
if (hasattr(lvalues, 'values') and
not isinstance(lvalues, pd.DatetimeIndex)):
lvalues = lvalues.values
diff --git a/pandas/tests/indexes/datetimes/test_arithmetic.py b/pandas/tests/indexes/datetimes/test_arithmetic.py
index a46462e91a866..6cfa083172921 100644
--- a/pandas/tests/indexes/datetimes/test_arithmetic.py
+++ b/pandas/tests/indexes/datetimes/test_arithmetic.py
@@ -363,6 +363,51 @@ def test_datetimeindex_sub_timestamp_overflow(self):
with pytest.raises(OverflowError):
dtimin - variant
+ @pytest.mark.parametrize('box', [np.array, pd.Index])
+ def test_dti_add_offset_array(self, tz, box):
+ # GH#18849
+ dti = pd.date_range('2017-01-01', periods=2, tz=tz)
+ other = box([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)])
+ res = dti + other
+ expected = DatetimeIndex([dti[n] + other[n] for n in range(len(dti))],
+ name=dti.name, freq='infer')
+ tm.assert_index_equal(res, expected)
+
+ res2 = other + dti
+ tm.assert_index_equal(res2, expected)
+
+ @pytest.mark.parametrize('box', [np.array, pd.Index])
+ def test_dti_sub_offset_array(self, tz, box):
+ # GH#18824
+ dti = pd.date_range('2017-01-01', periods=2, tz=tz)
+ other = box([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)])
+ res = dti - other
+ expected = DatetimeIndex([dti[n] - other[n] for n in range(len(dti))],
+ name=dti.name, freq='infer')
+ tm.assert_index_equal(res, expected)
+
+ @pytest.mark.parametrize('names', [(None, None, None),
+ ('foo', 'bar', None),
+ ('foo', 'foo', 'foo')])
+ def test_dti_with_offset_series(self, tz, names):
+ # GH#18849
+ dti = pd.date_range('2017-01-01', periods=2, tz=tz, name=names[0])
+ other = pd.Series([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)],
+ name=names[1])
+
+ expected_add = pd.Series([dti[n] + other[n] for n in range(len(dti))],
+ name=names[2])
+ res = dti + other
+ tm.assert_series_equal(res, expected_add)
+ res2 = other + dti
+ tm.assert_series_equal(res2, expected_add)
+
+ expected_sub = pd.Series([dti[n] - other[n] for n in range(len(dti))],
+ name=names[2])
+
+ res3 = dti - other
+ tm.assert_series_equal(res3, expected_sub)
+
# GH 10699
@pytest.mark.parametrize('klass,assert_func', zip([Series, DatetimeIndex],
diff --git a/pandas/tests/indexes/period/test_arithmetic.py b/pandas/tests/indexes/period/test_arithmetic.py
index 66aa5d2db6569..b64f9074c3cf0 100644
--- a/pandas/tests/indexes/period/test_arithmetic.py
+++ b/pandas/tests/indexes/period/test_arithmetic.py
@@ -12,6 +12,32 @@
class TestPeriodIndexArithmetic(object):
+ def test_pi_add_offset_array(self):
+ # GH#18849
+ pi = pd.PeriodIndex([pd.Period('2015Q1'), pd.Period('2016Q2')])
+ offs = np.array([pd.offsets.QuarterEnd(n=1, startingMonth=12),
+ pd.offsets.QuarterEnd(n=-2, startingMonth=12)])
+ res = pi + offs
+ expected = pd.PeriodIndex([pd.Period('2015Q2'), pd.Period('2015Q4')])
+ tm.assert_index_equal(res, expected)
+
+ unanchored = np.array([pd.offsets.Hour(n=1),
+ pd.offsets.Minute(n=-2)])
+ with pytest.raises(period.IncompatibleFrequency):
+ pi + unanchored
+ with pytest.raises(TypeError):
+ unanchored + pi
+
+ @pytest.mark.xfail(reason='GH#18824 radd doesnt implement this case')
+ def test_pi_radd_offset_array(self):
+ # GH#18849
+ pi = pd.PeriodIndex([pd.Period('2015Q1'), pd.Period('2016Q2')])
+ offs = np.array([pd.offsets.QuarterEnd(n=1, startingMonth=12),
+ pd.offsets.QuarterEnd(n=-2, startingMonth=12)])
+ res = offs + pi
+ expected = pd.PeriodIndex([pd.Period('2015Q2'), pd.Period('2015Q4')])
+ tm.assert_index_equal(res, expected)
+
def test_add_iadd(self):
rng = pd.period_range('1/1/2000', freq='D', periods=5)
other = pd.period_range('1/6/2000', freq='D', periods=5)
diff --git a/pandas/tests/indexes/timedeltas/test_arithmetic.py b/pandas/tests/indexes/timedeltas/test_arithmetic.py
index 087567354d32d..3c567e52cccb5 100644
--- a/pandas/tests/indexes/timedeltas/test_arithmetic.py
+++ b/pandas/tests/indexes/timedeltas/test_arithmetic.py
@@ -28,6 +28,24 @@ def freq(request):
class TestTimedeltaIndexArithmetic(object):
_holder = TimedeltaIndex
+ @pytest.mark.xfail(reason='GH#18824 ufunc add cannot use operands...')
+ def test_tdi_with_offset_array(self):
+ # GH#18849
+ tdi = pd.TimedeltaIndex(['1 days 00:00:00', '3 days 04:00:00'])
+ offs = np.array([pd.offsets.Hour(n=1), pd.offsets.Minute(n=-2)])
+ expected = pd.TimedeltaIndex(['1 days 01:00:00', '3 days 04:02:00'])
+
+ res = tdi + offs
+ tm.assert_index_equal(res, expected)
+
+ res2 = offs + tdi
+ tm.assert_index_equal(res2, expected)
+
+ anchored = np.array([pd.offsets.QuarterEnd(),
+ pd.offsets.Week(weekday=2)])
+ with pytest.raises(TypeError):
+ tdi + anchored
+
# TODO: Split by ops, better name
def test_numeric_compat(self):
idx = self._holder(np.arange(5, dtype='int64'))
| Before:
```
>>> dti = pd.date_range('2017-01-01', periods=2)
>>> other = np.array([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)])
>>> dti + other
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: ufunc add cannot use operands with types dtype('<M8[ns]') and dtype('O')
# Same for `dti - other`, `dti + pd.Index(other)`, `dti - pd.Index(other)`
>>> dti + pd.Series(other)
0 DatetimeIndex(['2017-01-31', '2017-01-31'], dt...
1 DatetimeIndex(['2017-01-03', '2017-01-04'], dt...
dtype: object
# yikes.
```
After:
```
>>> dti + other
pandas/core/indexes/datetimelike.py:677: PerformanceWarning: Adding/subtracting array of DateOffsets to <class 'pandas.core.indexes.datetimes.DatetimeIndex'> not vectorized
PerformanceWarning)
DatetimeIndex(['2017-01-31', '2017-01-04'], dtype='datetime64[ns]', freq=None)
>>> dti - pd.Index(other)
DatetimeIndex(['2016-12-31', '2016-12-31'], dtype='datetime64[ns]', freq=None)
>>> dti + pd.Series(other)
0 2017-01-31
1 2017-01-04
dtype: datetime64[ns]
```
<b>Caveat</b> This will need a follow-up to make sure the `name` attribute is propagated correctly.
- [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/18849 | 2017-12-19T17:22:14Z | 2017-12-29T00:25:53Z | 2017-12-29T00:25:53Z | 2018-01-05T19:23:01Z |
Make DatetimeIndex iterator pickleable by dill | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index ec5c20d341b50..bec26ef72d63a 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -1251,8 +1251,7 @@ def __iter__(self):
converted = libts.ints_to_pydatetime(data[start_i:end_i],
tz=self.tz, freq=self.freq,
box="timestamp")
- for v in converted:
- yield v
+ return iter(converted)
def _wrap_union_result(self, other, result):
name = self.name if self.name == other.name else None
| Currently, dill (https://github.com/uqfoundation/dill) cannot pickle iterators over DatetimeIndex because they are generators. This simple change removes that limitation.
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
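The distinction matters because a generator carries a suspended stack frame, while a plain iterator object does not. A minimal standard-library sketch of the same limitation, using `pickle` for illustration (`dill` extends `pickle` and hits the same wall on generators):

```python
import pickle

data = [1, 2, 3]

# A list iterator is an ordinary object with __reduce__ support,
# so it round-trips through the pickler, position included.
it = iter(data)
next(it)  # advance past the first element
restored = pickle.loads(pickle.dumps(it))
print(list(restored))  # prints [2, 3]

# A generator holds live frame state and cannot be pickled.
gen = (x for x in data)
try:
    pickle.dumps(gen)
except TypeError as exc:
    print("generator:", exc)
```

Returning `iter(converted)` instead of yielding element by element therefore hands callers an object that serializers can handle.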
| https://api.github.com/repos/pandas-dev/pandas/pulls/18848 | 2017-12-19T16:26:47Z | 2017-12-21T15:03:47Z | 2017-12-21T15:03:47Z | 2018-05-30T21:01:38Z |
Fixed typo in test_eval arguments | diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 90f197738543a..9c3572f9ffe72 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -1798,7 +1798,7 @@ def test_invalid_parser():
'pandas': PandasExprVisitor}
-@pytest.mark.parametrize('engine', _parsers)
+@pytest.mark.parametrize('engine', _engines)
@pytest.mark.parametrize('parser', _parsers)
def test_disallowed_nodes(engine, parser):
VisitorClass = _parsers[parser]
| - [X] closes #18821
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/18847 | 2017-12-19T16:22:59Z | 2017-12-21T15:04:15Z | 2017-12-21T15:04:14Z | 2017-12-21T15:05:02Z |
Dec cleanup | diff --git a/pandas/conftest.py b/pandas/conftest.py
index b09119895617c..4cf5c9da44697 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -3,7 +3,6 @@
from distutils.version import LooseVersion
import numpy
import pandas
-import pandas.util.testing as tm
import dateutil
@@ -51,7 +50,6 @@ def add_imports(doctest_namespace):
@pytest.fixture(params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])
def spmatrix(request):
- tm._skip_if_no_scipy()
from scipy import sparse
return getattr(sparse, request.param + '_matrix')
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index d63764e90d26e..47be8d115a07e 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -718,8 +718,6 @@ def test_put_compression(self):
@td.skip_if_windows_python_3
def test_put_compression_blosc(self):
- tm.skip_if_no_package('tables', min_version='2.2',
- app='blosc support')
df = tm.makeTimeDataFrame()
with ensure_clean_store(self.path) as store:
diff --git a/pandas/tests/sparse/test_frame.py b/pandas/tests/sparse/test_frame.py
index bb5dbdcaaa7c4..4b9d6621a20fb 100644
--- a/pandas/tests/sparse/test_frame.py
+++ b/pandas/tests/sparse/test_frame.py
@@ -7,6 +7,7 @@
from numpy import nan
import numpy as np
import pandas as pd
+from distutils.version import LooseVersion
from pandas import Series, DataFrame, bdate_range, Panel
from pandas.core.dtypes.common import (
@@ -20,6 +21,7 @@
from pandas.compat import lrange
from pandas import compat
from pandas.core.sparse import frame as spf
+import pandas.util._test_decorators as td
from pandas._libs.sparse import BlockIndex, IntIndex
from pandas.core.sparse.api import SparseSeries, SparseDataFrame, SparseArray
@@ -1169,14 +1171,13 @@ def test_notna(self):
tm.assert_frame_equal(res.to_dense(), exp)
+@td.skip_if_no_scipy
@pytest.mark.parametrize('index', [None, list('abc')]) # noqa: F811
@pytest.mark.parametrize('columns', [None, list('def')])
@pytest.mark.parametrize('fill_value', [None, 0, np.nan])
@pytest.mark.parametrize('dtype', [bool, int, float, np.uint16])
def test_from_to_scipy(spmatrix, index, columns, fill_value, dtype):
# GH 4343
- tm.skip_if_no_package('scipy')
-
# Make one ndarray and from it one sparse matrix, both to be used for
# constructing frames and comparing results
arr = np.eye(3, dtype=dtype)
@@ -1225,13 +1226,17 @@ def test_from_to_scipy(spmatrix, index, columns, fill_value, dtype):
assert sdf.to_coo().dtype == np.object_
+@td.skip_if_no_scipy
@pytest.mark.parametrize('fill_value', [None, 0, np.nan]) # noqa: F811
def test_from_to_scipy_object(spmatrix, fill_value):
# GH 4343
dtype = object
columns = list('cd')
index = list('ab')
- tm.skip_if_no_package('scipy', max_version='0.19.0')
+ import scipy
+ if (spmatrix is scipy.sparse.dok_matrix and LooseVersion(
+ scipy.__version__) >= LooseVersion('0.19.0')):
+ pytest.skip("dok_matrix from object does not work in SciPy >= 0.19")
# Make one ndarray and from it one sparse matrix, both to be used for
# constructing frames and comparing results
@@ -1270,10 +1275,9 @@ def test_from_to_scipy_object(spmatrix, fill_value):
assert sdf.to_coo().dtype == res_dtype
+@td.skip_if_no_scipy
def test_from_scipy_correct_ordering(spmatrix):
# GH 16179
- tm.skip_if_no_package('scipy')
-
arr = np.arange(1, 5).reshape(2, 2)
try:
spm = spmatrix(arr)
@@ -1290,10 +1294,9 @@ def test_from_scipy_correct_ordering(spmatrix):
tm.assert_frame_equal(sdf.to_dense(), expected.to_dense())
+@td.skip_if_no_scipy
def test_from_scipy_fillna(spmatrix):
# GH 16112
- tm.skip_if_no_package('scipy')
-
arr = np.eye(3)
arr[1:, 0] = np.nan
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 9305504f8d5e3..d03ecb9f9b5b7 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -12,6 +12,7 @@
from pandas.core.dtypes.common import is_integer_dtype
import pandas.core.nanops as nanops
import pandas.util.testing as tm
+import pandas.util._test_decorators as td
use_bn = nanops._USE_BOTTLENECK
@@ -381,8 +382,8 @@ def test_nanstd(self):
allow_str=False, allow_date=False,
allow_tdelta=True, allow_obj='convert')
+ @td.skip_if_no('scipy', min_version='0.17.0')
def test_nansem(self):
- tm.skip_if_no_package('scipy', min_version='0.17.0')
from scipy.stats import sem
with np.errstate(invalid='ignore'):
self.check_funs_ddof(nanops.nansem, sem, allow_complex=False,
@@ -441,8 +442,8 @@ def _skew_kurt_wrap(self, values, axis=None, func=None):
return 0.
return result
+ @td.skip_if_no('scipy', min_version='0.17.0')
def test_nanskew(self):
- tm.skip_if_no_package('scipy', min_version='0.17.0')
from scipy.stats import skew
func = partial(self._skew_kurt_wrap, func=skew)
with np.errstate(invalid='ignore'):
@@ -450,8 +451,8 @@ def test_nanskew(self):
allow_str=False, allow_date=False,
allow_tdelta=False)
+ @td.skip_if_no('scipy', min_version='0.17.0')
def test_nankurt(self):
- tm.skip_if_no_package('scipy', min_version='0.17.0')
from scipy.stats import kurtosis
func1 = partial(kurtosis, fisher=True)
func = partial(self._skew_kurt_wrap, func=func1)
@@ -549,8 +550,8 @@ def test_nancorr_pearson(self):
self.check_nancorr_nancov_1d(nanops.nancorr, targ0, targ1,
method='pearson')
+ @td.skip_if_no_scipy
def test_nancorr_kendall(self):
- tm.skip_if_no_package('scipy.stats')
from scipy.stats import kendalltau
targ0 = kendalltau(self.arr_float_2d, self.arr_float1_2d)[0]
targ1 = kendalltau(self.arr_float_2d.flat, self.arr_float1_2d.flat)[0]
@@ -561,8 +562,8 @@ def test_nancorr_kendall(self):
self.check_nancorr_nancov_1d(nanops.nancorr, targ0, targ1,
method='kendall')
+ @td.skip_if_no_scipy
def test_nancorr_spearman(self):
- tm.skip_if_no_package('scipy.stats')
from scipy.stats import spearmanr
targ0 = spearmanr(self.arr_float_2d, self.arr_float1_2d)[0]
targ1 = spearmanr(self.arr_float_2d.flat, self.arr_float1_2d.flat)[0]
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 131d470053a79..4e9282c3bd031 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -15,7 +15,6 @@
from datetime import datetime
from functools import wraps, partial
from contextlib import contextmanager
-from distutils.version import LooseVersion
from numpy.random import randn, rand
import numpy as np
@@ -317,35 +316,6 @@ def close(fignum=None):
_close(fignum)
-def _skip_if_mpl_1_5():
- import matplotlib as mpl
-
- v = mpl.__version__
- if LooseVersion(v) > LooseVersion('1.4.3') or str(v)[0] == '0':
- import pytest
- pytest.skip("matplotlib 1.5")
- else:
- mpl.use("Agg", warn=False)
-
-
-def _skip_if_no_scipy():
- import pytest
-
- pytest.importorskip("scipy.stats")
- pytest.importorskip("scipy.sparse")
- pytest.importorskip("scipy.interpolate")
-
-
-def _skip_if_no_mock():
- try:
- import mock # noqa
- except ImportError:
- try:
- from unittest import mock # noqa
- except ImportError:
- import pytest
- raise pytest.skip("mock is not installed")
-
# -----------------------------------------------------------------------------
# locale utilities
@@ -1979,62 +1949,6 @@ def __init__(self, *args, **kwargs):
dict.__init__(self, *args, **kwargs)
-# Dependency checker when running tests.
-#
-# Copied this from nipy/nipype
-# Copyright of respective developers, License: BSD-3
-def skip_if_no_package(pkg_name, min_version=None, max_version=None,
- app='pandas', checker=LooseVersion):
- """Check that the min/max version of the required package is installed.
-
- If the package check fails, the test is automatically skipped.
-
- Parameters
- ----------
- pkg_name : string
- Name of the required package.
- min_version : string, optional
- Minimal version number for required package.
- max_version : string, optional
- Max version number for required package.
- app : string, optional
- Application that is performing the check. For instance, the
- name of the tutorial being executed that depends on specific
- packages.
- checker : object, optional
- The class that will perform the version checking. Default is
- distutils.version.LooseVersion.
-
- Examples
- --------
- package_check('numpy', '1.3')
-
- """
-
- import pytest
- if app:
- msg = '{app} requires {pkg_name}'.format(app=app, pkg_name=pkg_name)
- else:
- msg = 'module requires {pkg_name}'.format(pkg_name=pkg_name)
- if min_version:
- msg += ' with version >= {min_version}'.format(min_version=min_version)
- if max_version:
- msg += ' with version < {max_version}'.format(max_version=max_version)
- try:
- mod = __import__(pkg_name)
- except ImportError:
- mod = None
- try:
- have_version = mod.__version__
- except AttributeError:
- pytest.skip('Cannot find version for {pkg_name}'
- .format(pkg_name=pkg_name))
- if min_version and checker(have_version) < checker(min_version):
- pytest.skip(msg)
- if max_version and checker(have_version) >= checker(max_version):
- pytest.skip(msg)
-
-
def optional_args(decorator):
"""allows a decorator to take optional positional and keyword arguments.
Assumes that taking a single, callable, positional argument means that
| - [X] closes #18190
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
This should be the last commit to close out the issue referenced above. I think there's further opportunity to convert some of the imperative ``pytest.skip`` calls over to the new decorator methodology (especially in ``pandas/tests/io``) but to keep the scope clean I would rather open a new issue for those than continually update #18190 | https://api.github.com/repos/pandas-dev/pandas/pulls/18844 | 2017-12-19T15:48:10Z | 2017-12-21T15:05:14Z | 2017-12-21T15:05:14Z | 2017-12-22T09:42:33Z |
BLD: fix conda install - try again | diff --git a/ci/install_travis.sh b/ci/install_travis.sh
index e350dd95d9d7e..ba1de3dd0397e 100755
--- a/ci/install_travis.sh
+++ b/ci/install_travis.sh
@@ -48,9 +48,7 @@ echo
echo "[update conda]"
conda config --set ssl_verify false || exit 1
conda config --set quiet true --set always_yes true --set changeps1 false || exit 1
-
-conda install conda=4.3.30
-# conda update -q conda
+conda update -q conda
if [ "$CONDA_BUILD_TEST" ]; then
echo
@@ -58,7 +56,6 @@ if [ "$CONDA_BUILD_TEST" ]; then
conda install conda-build
fi
-
echo
echo "[add channels]"
conda config --remove channels defaults || exit 1
@@ -125,7 +122,7 @@ if [ "$COVERAGE" ]; then
fi
echo
-if [ -z "$PIP_BUILD_TEST" ] and [ -z "$CONDA_BUILD_TEST" ]; then
+if [ -z "$PIP_BUILD_TEST" ] && [ -z "$CONDA_BUILD_TEST" ]; then
# build but don't install
echo "[build em]"
| https://api.github.com/repos/pandas-dev/pandas/pulls/18841 | 2017-12-19T13:27:26Z | 2017-12-19T14:08:23Z | 2017-12-19T14:08:23Z | 2017-12-19T14:08:24Z | |
BLD: fix conda version to 4.3.30 | diff --git a/ci/install_travis.sh b/ci/install_travis.sh
index 90b9bf3f3186e..e350dd95d9d7e 100755
--- a/ci/install_travis.sh
+++ b/ci/install_travis.sh
@@ -48,7 +48,9 @@ echo
echo "[update conda]"
conda config --set ssl_verify false || exit 1
conda config --set quiet true --set always_yes true --set changeps1 false || exit 1
-conda update -q conda
+
+conda install conda=4.3.30
+# conda update -q conda
if [ "$CONDA_BUILD_TEST" ]; then
echo
| https://api.github.com/repos/pandas-dev/pandas/pulls/18838 | 2017-12-19T12:06:34Z | 2017-12-19T12:48:00Z | 2017-12-19T12:48:00Z | 2017-12-19T12:48:00Z | |
DEPR: Deprecate skip_footer in read_excel | diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.22.0.txt
index ae6d0816abc41..58e80361a4fba 100644
--- a/doc/source/whatsnew/v0.22.0.txt
+++ b/doc/source/whatsnew/v0.22.0.txt
@@ -208,6 +208,7 @@ Deprecations
that is the actual tuple, instead of treating the tuple as multiple keys. To
retain the previous behavior, use a list instead of a tuple (:issue:`18314`)
- ``Series.valid`` is deprecated. Use :meth:`Series.dropna` instead (:issue:`18800`).
+- :func:`read_excel` has deprecated the ``skip_footer`` parameter. Use ``skipfooter`` instead (:issue:`18836`)
.. _whatsnew_0220.prior_deprecations:
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index a1dcd52b61270..2dbfeab9cc331 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -149,6 +149,10 @@
any numeric columns will automatically be parsed, regardless of display
format.
skip_footer : int, default 0
+
+ .. deprecated:: 0.22.0
+ Pass in `skipfooter` instead.
+skipfooter : int, default 0
Rows at the end to skip (0-indexed)
convert_float : boolean, default True
convert integral floats to int (i.e., 1.0 --> 1). If False, all numeric
@@ -200,6 +204,7 @@ def get_writer(engine_name):
@Appender(_read_excel_doc)
@deprecate_kwarg("parse_cols", "usecols")
+@deprecate_kwarg("skip_footer", "skipfooter")
def read_excel(io,
sheet_name=0,
header=0,
@@ -218,7 +223,7 @@ def read_excel(io,
parse_dates=False,
date_parser=None,
thousands=None,
- skip_footer=0,
+ skipfooter=0,
convert_float=True,
**kwds):
@@ -251,7 +256,7 @@ def read_excel(io,
parse_dates=parse_dates,
date_parser=date_parser,
thousands=thousands,
- skip_footer=skip_footer,
+ skipfooter=skipfooter,
convert_float=convert_float,
**kwds)
@@ -333,7 +338,7 @@ def parse(self,
parse_dates=False,
date_parser=None,
thousands=None,
- skip_footer=0,
+ skipfooter=0,
convert_float=True,
**kwds):
"""
@@ -358,7 +363,7 @@ def parse(self,
parse_dates=parse_dates,
date_parser=date_parser,
thousands=thousands,
- skip_footer=skip_footer,
+ skipfooter=skipfooter,
convert_float=convert_float,
**kwds)
@@ -412,14 +417,10 @@ def _parse_excel(self,
parse_dates=False,
date_parser=None,
thousands=None,
- skip_footer=0,
+ skipfooter=0,
convert_float=True,
**kwds):
- skipfooter = kwds.pop('skipfooter', None)
- if skipfooter is not None:
- skip_footer = skipfooter
-
_validate_header_arg(header)
if 'chunksize' in kwds:
@@ -590,7 +591,7 @@ def _parse_cell(cell_contents, cell_typ):
parse_dates=parse_dates,
date_parser=date_parser,
thousands=thousands,
- skipfooter=skip_footer,
+ skipfooter=skipfooter,
**kwds)
output[asheetname] = parser.read(nrows=nrows)
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index 274d60c40e83f..71677322329f5 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -286,14 +286,14 @@ def test_excel_table_sheet_by_index(self):
tm.assert_frame_equal(df2, dfref, check_names=False)
df3 = read_excel(excel, 0, index_col=0, skipfooter=1)
- df4 = read_excel(excel, 0, index_col=0, skip_footer=1)
tm.assert_frame_equal(df3, df1.iloc[:-1])
- tm.assert_frame_equal(df3, df4)
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ df4 = read_excel(excel, 0, index_col=0, skip_footer=1)
+ tm.assert_frame_equal(df3, df4)
df3 = excel.parse(0, index_col=0, skipfooter=1)
- df4 = excel.parse(0, index_col=0, skip_footer=1)
tm.assert_frame_equal(df3, df1.iloc[:-1])
- tm.assert_frame_equal(df3, df4)
import xlrd
with pytest.raises(xlrd.XLRDError):
@@ -311,10 +311,7 @@ def test_excel_table(self):
df3 = self.get_exceldf('test1', 'Sheet1', index_col=0,
skipfooter=1)
- df4 = self.get_exceldf('test1', 'Sheet1', index_col=0,
- skip_footer=1)
tm.assert_frame_equal(df3, df1.iloc[:-1])
- tm.assert_frame_equal(df3, df4)
def test_reader_special_dtypes(self):
| For consistency with `read_csv`, which uses `skipfooter`. | https://api.github.com/repos/pandas-dev/pandas/pulls/18836 | 2017-12-19T06:38:00Z | 2017-12-19T11:01:02Z | 2017-12-19T11:01:02Z | 2017-12-19T11:12:03Z |
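The rename in the diff above leans on a keyword-remapping decorator (`@deprecate_kwarg("skip_footer", "skipfooter")`). A minimal stdlib sketch of that pattern — a hypothetical simplification, not pandas' actual `deprecate_kwarg`:

```python
import functools
import warnings

def deprecate_kwarg(old_name, new_name):
    """Map a deprecated keyword argument onto its replacement, warning once."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if old_name in kwargs:
                if new_name in kwargs:
                    raise TypeError(
                        "cannot pass both {!r} and {!r}".format(old_name, new_name))
                warnings.warn(
                    "the {!r} keyword is deprecated, use {!r} instead".format(
                        old_name, new_name),
                    FutureWarning, stacklevel=2)
                kwargs[new_name] = kwargs.pop(old_name)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecate_kwarg("skip_footer", "skipfooter")
def read_excel_stub(io, skipfooter=0):
    # Stand-in for the real reader; just echoes the resolved value.
    return skipfooter

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = read_excel_stub("book.xlsx", skip_footer=2)

print(result)                        # 2 -- old keyword remapped to the new one
print(caught[0].category.__name__)   # FutureWarning
```

Callers using the new name pass straight through with no warning, which is exactly what the updated tests assert with `tm.assert_produces_warning`.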
CLN: Drop the buffer_lines parameter in read_csv | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 184767015bf93..d51307081b17f 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -199,11 +199,6 @@ low_memory : boolean, default ``True``
Note that the entire file is read into a single DataFrame regardless,
use the ``chunksize`` or ``iterator`` parameter to return the data in chunks.
(Only valid with C parser)
-buffer_lines : int, default None
- .. deprecated:: 0.19.0
-
- Argument removed because its value is not respected by the parser
-
compact_ints : boolean, default False
.. deprecated:: 0.19.0
diff --git a/doc/source/whatsnew/v0.22.0.txt b/doc/source/whatsnew/v0.22.0.txt
index ae6d0816abc41..24867ca17141f 100644
--- a/doc/source/whatsnew/v0.22.0.txt
+++ b/doc/source/whatsnew/v0.22.0.txt
@@ -227,6 +227,7 @@ Removal of prior version deprecations/changes
- ``DatetimeIndex.to_datetime``, ``Timestamp.to_datetime``, ``PeriodIndex.to_datetime``, and ``Index.to_datetime`` have been removed (:issue:`8254`, :issue:`14096`, :issue:`14113`)
- :func:`read_csv` has dropped the ``skip_footer`` parameter (:issue:`13386`)
- :func:`read_csv` has dropped the ``as_recarray`` parameter (:issue:`13373`)
+- :func:`read_csv` has dropped the ``buffer_lines`` parameter (:issue:`13360`)
.. _whatsnew_0220.performance:
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index c6899fa527b6e..f01068ae2e538 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -360,7 +360,6 @@ cdef class TextReader:
allow_leading_cols=True,
use_unsigned=False,
low_memory=False,
- buffer_lines=None,
skiprows=None,
skipfooter=0,
verbose=False,
@@ -557,7 +556,7 @@ cdef class TextReader:
if not self.table_width:
raise EmptyDataError("No columns to parse from file")
- # compute buffer_lines as function of table width
+ # Compute buffer_lines as function of table width.
heuristic = 2**20 // self.table_width
self.buffer_lines = 1
while self.buffer_lines * 2 < heuristic:
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index c2fca1f961222..3d07b0e6cbdfd 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -273,9 +273,6 @@
Note that the entire file is read into a single DataFrame regardless,
use the `chunksize` or `iterator` parameter to return the data in chunks.
(Only valid with C parser)
-buffer_lines : int, default None
- .. deprecated:: 0.19.0
- This argument is not respected by the parser
compact_ints : boolean, default False
.. deprecated:: 0.19.0
Argument moved to ``pd.to_numeric``
@@ -503,7 +500,6 @@ def _read(filepath_or_buffer, kwds):
'use_unsigned': False,
'low_memory': True,
'memory_map': False,
- 'buffer_lines': None,
'error_bad_lines': True,
'warn_bad_lines': True,
'tupleize_cols': False,
@@ -518,18 +514,15 @@ def _read(filepath_or_buffer, kwds):
_c_unsupported = {'skipfooter'}
_python_unsupported = {
'low_memory',
- 'buffer_lines',
'float_precision',
}
_deprecated_defaults = {
- 'buffer_lines': None,
'compact_ints': None,
'use_unsigned': None,
'tupleize_cols': None
}
_deprecated_args = {
- 'buffer_lines',
'compact_ints',
'use_unsigned',
'tupleize_cols',
@@ -606,7 +599,6 @@ def parser_f(filepath_or_buffer,
compact_ints=None,
use_unsigned=None,
low_memory=_c_parser_defaults['low_memory'],
- buffer_lines=None,
memory_map=False,
float_precision=None):
@@ -676,7 +668,6 @@ def parser_f(filepath_or_buffer,
warn_bad_lines=warn_bad_lines,
error_bad_lines=error_bad_lines,
low_memory=low_memory,
- buffer_lines=buffer_lines,
mangle_dupe_cols=mangle_dupe_cols,
tupleize_cols=tupleize_cols,
infer_datetime_format=infer_datetime_format,
diff --git a/pandas/tests/io/parser/test_unsupported.py b/pandas/tests/io/parser/test_unsupported.py
index b944322b1ed40..30dcc3e5731aa 100644
--- a/pandas/tests/io/parser/test_unsupported.py
+++ b/pandas/tests/io/parser/test_unsupported.py
@@ -128,9 +128,7 @@ def read(self):
class TestDeprecatedFeatures(object):
@pytest.mark.parametrize("engine", ["c", "python"])
- @pytest.mark.parametrize("kwargs", [{"buffer_lines": True},
- {"buffer_lines": False},
- {"compact_ints": True},
+ @pytest.mark.parametrize("kwargs", [{"compact_ints": True},
{"compact_ints": False},
{"use_unsigned": True},
{"use_unsigned": False},
| Deprecated back in 0.19.0
xref #13360. | https://api.github.com/repos/pandas-dev/pandas/pulls/18835 | 2017-12-19T05:39:41Z | 2017-12-19T11:01:58Z | 2017-12-19T11:01:58Z | 2017-12-19T11:11:49Z |
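The buffer sizing that made the keyword redundant is computed from the table width, as the `parsers.pyx` hunk above shows: the largest power of two whose double stays below `2**20 // table_width`, with a floor of one row. A plain-Python sketch of that heuristic:

```python
def compute_buffer_lines(table_width):
    # Mirror the C parser's heuristic: rows per chunk scale inversely
    # with the number of columns, rounded down to a power of two
    # (minimum 1 when the table is extremely wide).
    heuristic = 2 ** 20 // table_width
    buffer_lines = 1
    while buffer_lines * 2 < heuristic:
        buffer_lines *= 2
    return buffer_lines

for width in (1, 10, 100, 10000):
    print(width, compute_buffer_lines(width))
```

Wider tables get proportionally fewer buffered rows, keeping each parse chunk near a constant byte budget regardless of column count.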
TST: FIXMES in DataFrame.quantile tests | diff --git a/pandas/tests/frame/methods/test_quantile.py b/pandas/tests/frame/methods/test_quantile.py
index 2e6318955e119..5773edbdbcdec 100644
--- a/pandas/tests/frame/methods/test_quantile.py
+++ b/pandas/tests/frame/methods/test_quantile.py
@@ -280,9 +280,13 @@ def test_quantile_datetime(self):
tm.assert_frame_equal(result, expected)
# empty when numeric_only=True
- # FIXME (gives empty frame in 0.18.1, broken in 0.19.0)
- # result = df[['a', 'c']].quantile(.5)
- # result = df[['a', 'c']].quantile([.5])
+ result = df[["a", "c"]].quantile(0.5)
+ expected = Series([], index=[], dtype=np.float64, name=0.5)
+ tm.assert_series_equal(result, expected)
+
+ result = df[["a", "c"]].quantile([0.5])
+ expected = DataFrame(index=[0.5])
+ tm.assert_frame_equal(result, expected)
def test_quantile_invalid(self, datetime_frame):
msg = "percentiles should all be in the interval \\[0, 1\\]"
@@ -481,7 +485,7 @@ def test_quantile_nat(self):
)
tm.assert_frame_equal(res, exp)
- def test_quantile_empty_no_rows(self):
+ def test_quantile_empty_no_rows_floats(self):
# floats
df = DataFrame(columns=["a", "b"], dtype="float64")
@@ -494,21 +498,43 @@ def test_quantile_empty_no_rows(self):
exp = DataFrame([[np.nan, np.nan]], columns=["a", "b"], index=[0.5])
tm.assert_frame_equal(res, exp)
- # FIXME (gives empty frame in 0.18.1, broken in 0.19.0)
- # res = df.quantile(0.5, axis=1)
- # res = df.quantile([0.5], axis=1)
+ res = df.quantile(0.5, axis=1)
+ exp = Series([], index=[], dtype="float64", name=0.5)
+ tm.assert_series_equal(res, exp)
+
+ res = df.quantile([0.5], axis=1)
+ exp = DataFrame(columns=[], index=[0.5])
+ tm.assert_frame_equal(res, exp)
+ def test_quantile_empty_no_rows_ints(self):
# ints
df = DataFrame(columns=["a", "b"], dtype="int64")
- # FIXME (gives empty frame in 0.18.1, broken in 0.19.0)
- # res = df.quantile(0.5)
+ res = df.quantile(0.5)
+ exp = Series([np.nan, np.nan], index=["a", "b"], name=0.5)
+ tm.assert_series_equal(res, exp)
+ def test_quantile_empty_no_rows_dt64(self):
# datetimes
df = DataFrame(columns=["a", "b"], dtype="datetime64[ns]")
- # FIXME (gives NaNs instead of NaT in 0.18.1 or 0.19.0)
- # res = df.quantile(0.5, numeric_only=False)
+ res = df.quantile(0.5, numeric_only=False)
+ exp = Series(
+ [pd.NaT, pd.NaT], index=["a", "b"], dtype="datetime64[ns]", name=0.5
+ )
+ tm.assert_series_equal(res, exp)
+
+ # Mixed dt64/dt64tz
+ df["a"] = df["a"].dt.tz_localize("US/Central")
+ res = df.quantile(0.5, numeric_only=False)
+ exp = exp.astype(object)
+ tm.assert_series_equal(res, exp)
+
+ # both dt64tz
+ df["b"] = df["b"].dt.tz_localize("US/Central")
+ res = df.quantile(0.5, numeric_only=False)
+ exp = exp.astype(df["b"].dtype)
+ tm.assert_series_equal(res, exp)
def test_quantile_empty_no_columns(self):
# GH#23925 _get_numeric_data may drop all columns
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
cc @jorisvandenbossche these were introduced in #14536 | https://api.github.com/repos/pandas-dev/pandas/pulls/44437 | 2021-11-13T22:13:18Z | 2021-11-14T02:09:07Z | 2021-11-14T02:09:07Z | 2021-11-15T07:32:43Z |
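The expectations encoded in the un-commented FIXMEs above can be reproduced directly; a minimal sketch, assuming pandas ≥ 1.4 where these empty-frame cases stopped breaking:

```python
import pandas as pd

# Quantile of a float frame with columns but no rows: one NaN per column
df = pd.DataFrame(columns=["a", "b"], dtype="float64")

res = df.quantile(0.5)        # Series of NaN, index ['a', 'b'], name 0.5
res_list = df.quantile([0.5])  # DataFrame with one all-NaN row labelled 0.5
```

The scalar-quantile case returns a Series named after the quantile, while the list case returns a DataFrame indexed by the quantiles, matching the `exp` values in the tests above.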
TST: remove pyarrow bz2 xfail | diff --git a/pandas/tests/io/parser/test_compression.py b/pandas/tests/io/parser/test_compression.py
index e0799df8d7a4c..5aa0edfd8b46a 100644
--- a/pandas/tests/io/parser/test_compression.py
+++ b/pandas/tests/io/parser/test_compression.py
@@ -103,8 +103,6 @@ def test_compression(parser_and_data, compression_only, buffer, filename):
tm.write_to_compressed(compress_type, path, data)
compression = "infer" if filename else compress_type
- if ext == "bz2":
- pytest.xfail("pyarrow wheels don't have bz2 codec support")
if buffer:
with open(path, "rb") as f:
result = parser.read_csv(f, compression=compression)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44436 | 2021-11-13T21:48:56Z | 2021-11-14T02:06:11Z | 2021-11-14T02:06:11Z | 2021-11-14T15:12:16Z |
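The codepath exercised by the test above — applying a compression codec in `read_csv` — can be demonstrated without pyarrow; a minimal in-memory round-trip using gzip from the standard library:

```python
import gzip
import io

import pandas as pd

data = b"a,b\n1,x\n2,y\n"
buf = io.BytesIO(gzip.compress(data))

# Explicit codec here; with a file path ending in .gz,
# the default compression="infer" picks the codec from the extension.
df = pd.read_csv(buf, compression="gzip")
```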
REF: simplify putmask_smart | diff --git a/pandas/core/array_algos/putmask.py b/pandas/core/array_algos/putmask.py
index 77e38e6c6e3fc..1f37e0e5d249a 100644
--- a/pandas/core/array_algos/putmask.py
+++ b/pandas/core/array_algos/putmask.py
@@ -4,7 +4,6 @@
from __future__ import annotations
from typing import Any
-import warnings
import numpy as np
@@ -15,16 +14,12 @@
)
from pandas.core.dtypes.cast import (
+ can_hold_element,
convert_scalar_for_putitemlike,
find_common_type,
infer_dtype_from,
)
-from pandas.core.dtypes.common import (
- is_float_dtype,
- is_integer_dtype,
- is_list_like,
-)
-from pandas.core.dtypes.missing import isna_compat
+from pandas.core.dtypes.common import is_list_like
from pandas.core.arrays import ExtensionArray
@@ -75,7 +70,7 @@ def putmask_smart(values: np.ndarray, mask: npt.NDArray[np.bool_], new) -> np.nd
`values`, updated in-place.
mask : np.ndarray[bool]
Applies to both sides (array like).
- new : `new values` either scalar or an array like aligned with `values`
+ new : listlike `new values` aligned with `values`
Returns
-------
@@ -89,9 +84,6 @@ def putmask_smart(values: np.ndarray, mask: npt.NDArray[np.bool_], new) -> np.nd
# we cannot use np.asarray() here as we cannot have conversions
# that numpy does when numeric are mixed with strings
- if not is_list_like(new):
- new = np.broadcast_to(new, mask.shape)
-
# see if we are only masking values that if putted
# will work in the current dtype
try:
@@ -100,27 +92,12 @@ def putmask_smart(values: np.ndarray, mask: npt.NDArray[np.bool_], new) -> np.nd
# TypeError: only integer scalar arrays can be converted to a scalar index
pass
else:
- # make sure that we have a nullable type if we have nulls
- if not isna_compat(values, nn[0]):
- pass
- elif not (is_float_dtype(nn.dtype) or is_integer_dtype(nn.dtype)):
- # only compare integers/floats
- pass
- elif not (is_float_dtype(values.dtype) or is_integer_dtype(values.dtype)):
- # only compare integers/floats
- pass
- else:
-
- # we ignore ComplexWarning here
- with warnings.catch_warnings(record=True):
- warnings.simplefilter("ignore", np.ComplexWarning)
- nn_at = nn.astype(values.dtype)
-
- comp = nn == nn_at
- if is_list_like(comp) and comp.all():
- nv = values.copy()
- nv[mask] = nn_at
- return nv
+ # We only get to putmask_smart when we cannot hold 'new' in values.
+ # The "smart" part of putmask_smart is checking if we can hold new[mask]
+ # in values, in which case we can still avoid the need to cast.
+ if can_hold_element(values, nn):
+ values[mask] = nn
+ return values
new = np.asarray(new)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 55e5b0d0439fa..e20bbb0d90fba 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -952,7 +952,8 @@ def putmask(self, mask, new) -> list[Block]:
List[Block]
"""
orig_mask = mask
- mask, noop = validate_putmask(self.values.T, mask)
+ values = cast(np.ndarray, self.values)
+ mask, noop = validate_putmask(values.T, mask)
assert not isinstance(new, (ABCIndex, ABCSeries, ABCDataFrame))
# if we are passed a scalar None, convert it here
@@ -960,7 +961,6 @@ def putmask(self, mask, new) -> list[Block]:
new = self.fill_value
if self._can_hold_element(new):
-
# error: Argument 1 to "putmask_without_repeat" has incompatible type
# "Union[ndarray, ExtensionArray]"; expected "ndarray"
putmask_without_repeat(self.values.T, mask, new) # type: ignore[arg-type]
@@ -979,9 +979,15 @@ def putmask(self, mask, new) -> list[Block]:
elif self.ndim == 1 or self.shape[0] == 1:
# no need to split columns
- # error: Argument 1 to "putmask_smart" has incompatible type "Union[ndarray,
- # ExtensionArray]"; expected "ndarray"
- nv = putmask_smart(self.values.T, mask, new).T # type: ignore[arg-type]
+ if not is_list_like(new):
+ # putmask_smart can't save us the need to cast
+ return self.coerce_to_target_dtype(new).putmask(mask, new)
+
+ # This differs from
+ # `self.coerce_to_target_dtype(new).putmask(mask, new)`
+ # because putmask_smart will check if new[mask] may be held
+ # by our dtype.
+ nv = putmask_smart(values.T, mask, new).T
return [self.make_block(nv)]
else:
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44435 | 2021-11-13T21:41:14Z | 2021-11-14T02:06:51Z | 2021-11-14T02:06:51Z | 2021-11-14T15:11:45Z |
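The "smart" check described in the new comment — only keep the original dtype when `new[mask]` round-trips through it losslessly — can be sketched in plain NumPy. The helper name is illustrative, not the pandas internal:

```python
import numpy as np


def putmask_keep_dtype(values: np.ndarray, mask: np.ndarray, new: np.ndarray) -> np.ndarray:
    """Assign new[mask] into values, upcasting only when the cast would be lossy."""
    nn = new[mask]
    casted = nn.astype(values.dtype)
    if (casted == nn).all():
        # new[mask] survives the round trip: assign in place, keep the dtype
        values[mask] = casted
        return values
    # lossy cast: fall back to a common dtype for the whole array
    out = values.astype(np.result_type(values, new))
    out[mask] = nn
    return out
```

Setting `4.0` into an int64 array keeps int64; setting `4.5` forces an upcast to float64, which is the distinction the refactor preserves via `can_hold_element`.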
BUG: read_csv and read_fwf not skipping all defined rows when nrows is given | diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index a593a03de5c25..e4e45e9fb0647 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -633,6 +633,7 @@ I/O
- Bug in unpickling a :class:`Index` with object dtype incorrectly inferring numeric dtypes (:issue:`43188`)
- Bug in :func:`read_csv` where reading multi-header input with unequal lengths incorrectly raising uncontrolled ``IndexError`` (:issue:`43102`)
- Bug in :func:`read_csv`, changed exception class when expecting a file path name or file-like object from ``OSError`` to ``TypeError`` (:issue:`43366`)
+- Bug in :func:`read_csv` and :func:`read_fwf` ignoring all ``skiprows`` except first when ``nrows`` is specified for ``engine='python'`` (:issue:`44021`)
- Bug in :func:`read_json` not handling non-numpy dtypes correctly (especially ``category``) (:issue:`21892`, :issue:`33205`)
- Bug in :func:`json_normalize` where multi-character ``sep`` parameter is incorrectly prefixed to every key (:issue:`43831`)
- Bug in :func:`json_normalize` where reading data with missing multi-level metadata would not respect errors="ignore" (:issue:`44312`)
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index 4d596aa2f3fa6..36387f0835f4a 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -19,7 +19,10 @@
import numpy as np
import pandas._libs.lib as lib
-from pandas._typing import FilePathOrBuffer
+from pandas._typing import (
+ FilePathOrBuffer,
+ Scalar,
+)
from pandas.errors import (
EmptyDataError,
ParserError,
@@ -1020,14 +1023,7 @@ def _get_lines(self, rows=None):
new_rows = self.data[self.pos : self.pos + rows]
new_pos = self.pos + rows
- # Check for stop rows. n.b.: self.skiprows is a set.
- if self.skiprows:
- new_rows = [
- row
- for i, row in enumerate(new_rows)
- if not self.skipfunc(i + self.pos)
- ]
-
+ new_rows = self._remove_skipped_rows(new_rows)
lines.extend(new_rows)
self.pos = new_pos
@@ -1035,11 +1031,21 @@ def _get_lines(self, rows=None):
new_rows = []
try:
if rows is not None:
- for _ in range(rows):
+
+ rows_to_skip = 0
+ if self.skiprows is not None and self.pos is not None:
+ # Only read additional rows if pos is in skiprows
+ rows_to_skip = len(
+ set(self.skiprows) - set(range(self.pos))
+ )
+
+ for _ in range(rows + rows_to_skip):
# assert for mypy, data is Iterator[str] or None, would
# error in next
assert self.data is not None
new_rows.append(next(self.data))
+
+ new_rows = self._remove_skipped_rows(new_rows)
lines.extend(new_rows)
else:
rows = 0
@@ -1052,12 +1058,7 @@ def _get_lines(self, rows=None):
new_rows.append(new_row)
except StopIteration:
- if self.skiprows:
- new_rows = [
- row
- for i, row in enumerate(new_rows)
- if not self.skipfunc(i + self.pos)
- ]
+ new_rows = self._remove_skipped_rows(new_rows)
lines.extend(new_rows)
if len(lines) == 0:
raise
@@ -1076,6 +1077,13 @@ def _get_lines(self, rows=None):
lines = self._check_thousands(lines)
return self._check_decimal(lines)
+ def _remove_skipped_rows(self, new_rows: list[list[Scalar]]) -> list[list[Scalar]]:
+ if self.skiprows:
+ return [
+ row for i, row in enumerate(new_rows) if not self.skipfunc(i + self.pos)
+ ]
+ return new_rows
+
class FixedWidthReader(abc.Iterator):
"""
diff --git a/pandas/tests/io/parser/test_read_fwf.py b/pandas/tests/io/parser/test_read_fwf.py
index 8d1fa97f9f8bb..d4e33543d8a04 100644
--- a/pandas/tests/io/parser/test_read_fwf.py
+++ b/pandas/tests/io/parser/test_read_fwf.py
@@ -862,3 +862,18 @@ def test_colspecs_with_comment():
)
expected = DataFrame([[1, "K"]], columns=[0, 1])
tm.assert_frame_equal(result, expected)
+
+
+def test_skip_rows_and_n_rows():
+ # GH#44021
+ data = """a\tb
+1\t a
+2\t b
+3\t c
+4\t d
+5\t e
+6\t f
+ """
+ result = read_fwf(StringIO(data), nrows=4, skiprows=[2, 4])
+ expected = DataFrame({"a": [1, 3, 5, 6], "b": ["a", "c", "e", "f"]})
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/test_skiprows.py b/pandas/tests/io/parser/test_skiprows.py
index 9df6bf42c55d2..627bda44016e9 100644
--- a/pandas/tests/io/parser/test_skiprows.py
+++ b/pandas/tests/io/parser/test_skiprows.py
@@ -256,3 +256,21 @@ def test_skip_rows_bad_callable(all_parsers):
with pytest.raises(ZeroDivisionError, match=msg):
parser.read_csv(StringIO(data), skiprows=lambda x: 1 / 0)
+
+
+def test_skip_rows_and_n_rows(all_parsers):
+ # GH#44021
+ data = """a,b
+1,a
+2,b
+3,c
+4,d
+5,e
+6,f
+7,g
+8,h
+"""
+ parser = all_parsers
+ result = parser.read_csv(StringIO(data), nrows=5, skiprows=[2, 4, 6])
+ expected = DataFrame({"a": [1, 3, 5, 7, 8], "b": ["a", "c", "e", "g", "h"]})
+ tm.assert_frame_equal(result, expected)
| - [x] closes #44021
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44434 | 2021-11-13T21:34:27Z | 2021-11-16T00:55:59Z | 2021-11-16T00:55:58Z | 2021-11-19T21:31:24Z |
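The core of the fix — read extra physical rows to compensate for skiprows positions the reader has not yet passed, then filter — can be sketched outside the parser. The function name is illustrative:

```python
def take_rows(data: list, pos: int, nrows: int, skiprows: set) -> list:
    """Return nrows kept rows starting at pos, honouring skiprows.

    Mirrors the patched _get_lines: positions before `pos` were already
    consumed, so only the remaining skiprows entries need extra reads.
    """
    rows_to_skip = len(skiprows - set(range(pos)))
    window = data[pos : pos + nrows + rows_to_skip]
    kept = [row for i, row in enumerate(window) if i + pos not in skiprows]
    return kept[:nrows]
```

With the CSV from `test_skip_rows_and_n_rows` (header at position 0, so `pos=1`), skipping file lines {2, 4, 6} while asking for 5 rows reads 8 physical lines and keeps the five values 1, 3, 5, 7, 8 — exactly the expected frame.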
Fix FloatingArray.equals on older numpy | diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 1797f1aff4235..568f3484e78e4 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -47,6 +47,7 @@
)
from pandas.core.dtypes.inference import is_array_like
from pandas.core.dtypes.missing import (
+ array_equivalent,
isna,
notna,
)
@@ -636,11 +637,12 @@ def equals(self, other) -> bool:
# GH#44382 if e.g. self[1] is np.nan and other[1] is pd.NA, we are NOT
# equal.
- return np.array_equal(self._mask, other._mask) and np.array_equal(
- self._data[~self._mask],
- other._data[~other._mask],
- equal_nan=True,
- )
+ if not np.array_equal(self._mask, other._mask):
+ return False
+
+ left = self._data[~self._mask]
+ right = other._data[~other._mask]
+ return array_equivalent(left, right, dtype_equal=True)
def _reduce(self, name: str, *, skipna: bool = True, **kwargs):
if name in {"any", "all"}:
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index c457b52cf4b0e..eea3fa37b7435 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -475,8 +475,8 @@ def array_equivalent(
return np.array_equal(left, right)
-def _array_equivalent_float(left, right):
- return ((left == right) | (np.isnan(left) & np.isnan(right))).all()
+def _array_equivalent_float(left, right) -> bool:
+ return bool(((left == right) | (np.isnan(left) & np.isnan(right))).all())
def _array_equivalent_datetimelike(left, right):
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44432 | 2021-11-13T21:17:18Z | 2021-11-13T23:26:01Z | 2021-11-13T23:26:01Z | 2021-11-13T23:26:58Z |
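The replacement avoids `np.array_equal(..., equal_nan=True)`, whose `equal_nan` keyword only exists on newer NumPy. The float comparison it delegates to can be written against any NumPy version:

```python
import numpy as np


def array_equivalent_float(left: np.ndarray, right: np.ndarray) -> bool:
    """True when values match positionally, treating NaNs in the same slot as equal."""
    if left.shape != right.shape:
        return False
    # NaN != NaN elementwise, so OR in an explicit both-NaN check
    return bool(((left == right) | (np.isnan(left) & np.isnan(right))).all())
```

The outer `bool(...)` matches the annotation added in the diff: `.all()` on a 0-d result returns `np.bool_`, not a Python `bool`.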
Doc: Clean obj.empty docs to describe Series/DataFrame | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 23608cf0192df..30e057cac968f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2001,15 +2001,15 @@ def __contains__(self, key) -> bool_t:
@property
def empty(self) -> bool_t:
"""
- Indicator whether DataFrame is empty.
+ Indicator whether Series/DataFrame is empty.
- True if DataFrame is entirely empty (no items), meaning any of the
+ True if Series/DataFrame is entirely empty (no items), meaning any of the
axes are of length 0.
Returns
-------
bool
- If DataFrame is empty, return True, if not return False.
+ If Series/DataFrame is empty, return True, if not return False.
See Also
--------
@@ -2019,7 +2019,7 @@ def empty(self) -> bool_t:
Notes
-----
- If DataFrame contains only NaNs, it is still not considered empty. See
+ If Series/DataFrame contains only NaNs, it is still not considered empty. See
the example below.
Examples
@@ -2045,6 +2045,16 @@ def empty(self) -> bool_t:
False
>>> df.dropna().empty
True
+
+ >>> ser_empty = pd.Series({'A' : []})
+ >>> ser_empty
+ A []
+ dtype: object
+ >>> ser_empty.empty
+ False
+ >>> ser_empty = pd.Series()
+ >>> ser_empty.empty
+ True
"""
return any(len(self._get_axis(a)) == 0 for a in self._AXIS_ORDERS)
| - [x] closes #42697
| https://api.github.com/repos/pandas-dev/pandas/pulls/44430 | 2021-11-13T19:37:30Z | 2021-11-14T02:23:35Z | 2021-11-14T02:23:35Z | 2021-11-14T19:41:40Z |
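The behaviour the revised docstring describes, shown for both classes:

```python
import numpy as np
import pandas as pd

assert pd.DataFrame().empty                     # no rows, no columns
assert pd.DataFrame(columns=["a"]).empty        # columns but zero rows
assert not pd.DataFrame({"a": [np.nan]}).empty  # NaNs still count as items

assert pd.Series(dtype="float64").empty
assert not pd.Series({"A": []}).empty           # one element (an empty list)
```

The last line matches the new docstring example: the Series holds a single object (the empty list), so it is not considered empty.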
BUG: DataFrame with mismatched NA value and dtype | diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index 59b164c156d79..92fadf801cec7 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -608,6 +608,7 @@ Missing
^^^^^^^
- Bug in :meth:`DataFrame.fillna` with limit and no method ignores axis='columns' or ``axis = 1`` (:issue:`40989`)
- Bug in :meth:`DataFrame.fillna` not replacing missing values when using a dict-like ``value`` and duplicate column names (:issue:`43476`)
+- Bug in constructing a :class:`DataFrame` with a dictionary ``np.datetime64`` as a value and ``dtype='timedelta64[ns]'``, or vice-versa, incorrectly casting instead of raising (:issue:`??`)
-
MultiIndex
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index e6d6b561803d6..a766f8321a641 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -443,15 +443,18 @@ def dict_to_mgr(
if missing.any() and not is_integer_dtype(dtype):
nan_dtype: DtypeObj
- if dtype is None or (
- isinstance(dtype, np.dtype) and np.issubdtype(dtype, np.flexible)
- ):
+ if dtype is not None:
+ # calling sanitize_array ensures we don't mix-and-match
+ # NA dtypes
+ midxs = missing.values.nonzero()[0]
+ for i in midxs:
+ arr = sanitize_array(arrays.iat[i], index, dtype=dtype)
+ arrays.iat[i] = arr
+ else:
# GH#1783
nan_dtype = np.dtype("object")
- else:
- nan_dtype = dtype
- val = construct_1d_arraylike_from_scalar(np.nan, len(index), nan_dtype)
- arrays.loc[missing] = [val] * missing.sum()
+ val = construct_1d_arraylike_from_scalar(np.nan, len(index), nan_dtype)
+ arrays.loc[missing] = [val] * missing.sum()
arrays = list(arrays)
columns = ensure_index(columns)
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index f92bbe1c718ab..52797862afa14 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -2903,14 +2903,7 @@ def test_from_timedelta64_scalar_object(self, constructor):
assert isinstance(get1(obj), np.timedelta64)
@pytest.mark.parametrize("cls", [np.datetime64, np.timedelta64])
- def test_from_scalar_datetimelike_mismatched(self, constructor, cls, request):
- node = request.node
- params = node.callspec.params
- if params["frame_or_series"] is DataFrame and params["constructor"] is dict:
- mark = pytest.mark.xfail(
- reason="DataFrame incorrectly allows mismatched datetimelike"
- )
- node.add_marker(mark)
+ def test_from_scalar_datetimelike_mismatched(self, constructor, cls):
scalar = cls("NaT", "ns")
dtype = {np.datetime64: "m8[ns]", np.timedelta64: "M8[ns]"}[cls]
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
<s>Also fixed the FloatingArray.equals bug on older numpys</s>no longer needed, so reverted | https://api.github.com/repos/pandas-dev/pandas/pulls/44428 | 2021-11-13T19:20:24Z | 2021-11-14T02:05:12Z | 2021-11-14T02:05:12Z | 2021-11-14T15:20:30Z |
[DOC] Transferring data from the 6th column to the first. #44379 | diff --git a/doc/source/user_guide/basics.rst b/doc/source/user_guide/basics.rst
index a589ad96ca7d9..40ff1049e5820 100644
--- a/doc/source/user_guide/basics.rst
+++ b/doc/source/user_guide/basics.rst
@@ -2051,32 +2051,33 @@ The following table lists all of pandas extension types. For methods requiring `
arguments, strings can be specified as indicated. See the respective
documentation sections for more on each type.
-+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
-| Kind of Data | Data Type | Scalar | Array | String Aliases | Documentation |
-+===================+===========================+====================+===============================+=========================================+===============================+
-| tz-aware datetime | :class:`DatetimeTZDtype` | :class:`Timestamp` | :class:`arrays.DatetimeArray` | ``'datetime64[ns, <tz>]'`` | :ref:`timeseries.timezone` |
-+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
-| Categorical | :class:`CategoricalDtype` | (none) | :class:`Categorical` | ``'category'`` | :ref:`categorical` |
-+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
-| period | :class:`PeriodDtype` | :class:`Period` | :class:`arrays.PeriodArray` | ``'period[<freq>]'``, | :ref:`timeseries.periods` |
-| (time spans) | | | | ``'Period[<freq>]'`` | |
-+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
-| sparse | :class:`SparseDtype` | (none) | :class:`arrays.SparseArray` | ``'Sparse'``, ``'Sparse[int]'``, | :ref:`sparse` |
-| | | | | ``'Sparse[float]'`` | |
-+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
-| intervals | :class:`IntervalDtype` | :class:`Interval` | :class:`arrays.IntervalArray` | ``'interval'``, ``'Interval'``, | :ref:`advanced.intervalindex` |
-| | | | | ``'Interval[<numpy_dtype>]'``, | |
-| | | | | ``'Interval[datetime64[ns, <tz>]]'``, | |
-| | | | | ``'Interval[timedelta64[<freq>]]'`` | |
-+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
-| nullable integer + :class:`Int64Dtype`, ... | (none) | :class:`arrays.IntegerArray` | ``'Int8'``, ``'Int16'``, ``'Int32'``, | :ref:`integer_na` |
-| | | | | ``'Int64'``, ``'UInt8'``, ``'UInt16'``, | |
-| | | | | ``'UInt32'``, ``'UInt64'`` | |
-+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
-| Strings | :class:`StringDtype` | :class:`str` | :class:`arrays.StringArray` | ``'string'`` | :ref:`text` |
-+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
-| Boolean (with NA) | :class:`BooleanDtype` | :class:`bool` | :class:`arrays.BooleanArray` | ``'boolean'`` | :ref:`api.arrays.bool` |
-+-------------------+---------------------------+--------------------+-------------------------------+-----------------------------------------+-------------------------------+
++-------------------------------------------------+---------------------------+--------------------+-------------------------------+----------------------------------------+
+| Kind of Data | Data Type | Scalar | Array | String Aliases |
++=================================================+===============+===========+========+===========+===============================+========================================+
+| :ref:`tz-aware datetime <timeseries.timezone>` | :class:`DatetimeTZDtype` | :class:`Timestamp` | :class:`arrays.DatetimeArray` | ``'datetime64[ns, <tz>]'`` |
+| | | | | |
++-------------------------------------------------+---------------+-----------+--------------------+-------------------------------+----------------------------------------+
+| :ref:`Categorical <categorical>` | :class:`CategoricalDtype` | (none) | :class:`Categorical` | ``'category'`` |
++-------------------------------------------------+---------------------------+--------------------+-------------------------------+----------------------------------------+
+| :ref:`period (time spans) <timeseries.periods>` | :class:`PeriodDtype` | :class:`Period` | :class:`arrays.PeriodArray` | ``'period[<freq>]'``, |
+| | | | ``'Period[<freq>]'`` | |
++-------------------------------------------------+---------------------------+--------------------+-------------------------------+----------------------------------------+
+| :ref:`sparse <sparse>` | :class:`SparseDtype` | (none) | :class:`arrays.SparseArray` | ``'Sparse'``, ``'Sparse[int]'``, |
+| | | | | ``'Sparse[float]'`` |
++-------------------------------------------------+---------------------------+--------------------+-------------------------------+----------------------------------------+
+| :ref:`intervals <advanced.intervalindex>` | :class:`IntervalDtype` | :class:`Interval` | :class:`arrays.IntervalArray` | ``'interval'``, ``'Interval'``, |
+| | | | | ``'Interval[<numpy_dtype>]'``, |
+| | | | | ``'Interval[datetime64[ns, <tz>]]'``, |
+| | | | | ``'Interval[timedelta64[<freq>]]'`` |
++-------------------------------------------------+---------------------------+--------------------+-------------------------------+----------------------------------------+
+| :ref:`nullable integer <integer_na>` | :class:`Int64Dtype`, ... | (none) | :class:`arrays.IntegerArray` | ``'Int8'``, ``'Int16'``, ``'Int32'``, |
+| | | | | ``'Int64'``, ``'UInt8'``, ``'UInt16'``,|
+| | | | | ``'UInt32'``, ``'UInt64'`` |
++-------------------------------------------------+---------------------------+--------------------+-------------------------------+----------------------------------------+
+| :ref:`Strings <text>` | :class:`StringDtype` | :class:`str` | :class:`arrays.StringArray` | ``'string'`` |
++-------------------------------------------------+---------------------------+--------------------+-------------------------------+----------------------------------------+
+| :ref:`Boolean (with NA) <api.arrays.bool>` | :class:`BooleanDtype` | :class:`bool` | :class:`arrays.BooleanArray` | ``'boolean'`` |
++-------------------------------------------------+---------------------------+--------------------+-------------------------------+----------------------------------------+
pandas has two ways to store strings.
| Transferring data from the 6th column to the first.
This should resolve issue #44379
| https://api.github.com/repos/pandas-dev/pandas/pulls/44427 | 2021-11-13T18:34:43Z | 2021-11-25T17:23:36Z | 2021-11-25T17:23:36Z | 2021-11-25T17:23:50Z |
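The string aliases listed in the table are accepted anywhere a dtype is expected; a few examples (assuming pandas ≥ 1.0, where the nullable and string types are available):

```python
import pandas as pd

ints = pd.array([1, 2, None], dtype="Int64")          # nullable integer
strs = pd.array(["a", None], dtype="string")          # StringDtype
cats = pd.Series(["x", "y", "x"], dtype="category")   # CategoricalDtype
```

Missing entries in the nullable arrays come back as `pd.NA` rather than `np.nan`.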
TYP: improve typing for DataFrame.to_string | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b88c97b8e988d..2f99785674a1a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -989,15 +989,13 @@ def __repr__(self) -> str:
"""
Return a string representation for a particular DataFrame.
"""
- buf = StringIO("")
if self._info_repr():
+ buf = StringIO()
self.info(buf=buf)
return buf.getvalue()
repr_params = fmt.get_dataframe_repr_params()
- self.to_string(buf=buf, **repr_params)
-
- return buf.getvalue()
+ return self.to_string(**repr_params)
def _repr_html_(self) -> str | None:
"""
@@ -1006,7 +1004,7 @@ def _repr_html_(self) -> str | None:
Mainly for IPython notebook.
"""
if self._info_repr():
- buf = StringIO("")
+ buf = StringIO()
self.info(buf=buf)
# need to escape the <class>, should be the first line.
val = buf.getvalue().replace("<", r"<", 1)
@@ -1043,6 +1041,56 @@ def _repr_html_(self) -> str | None:
else:
return None
+ @overload
+ def to_string(
+ self,
+ buf: None = ...,
+ columns: Sequence[str] | None = ...,
+ col_space: int | list[int] | dict[Hashable, int] | None = ...,
+ header: bool | Sequence[str] = ...,
+ index: bool = ...,
+ na_rep: str = ...,
+ formatters: fmt.FormattersType | None = ...,
+ float_format: fmt.FloatFormatType | None = ...,
+ sparsify: bool | None = ...,
+ index_names: bool = ...,
+ justify: str | None = ...,
+ max_rows: int | None = ...,
+ max_cols: int | None = ...,
+ show_dimensions: bool = ...,
+ decimal: str = ...,
+ line_width: int | None = ...,
+ min_rows: int | None = ...,
+ max_colwidth: int | None = ...,
+ encoding: str | None = ...,
+ ) -> str:
+ ...
+
+ @overload
+ def to_string(
+ self,
+ buf: FilePathOrBuffer[str],
+ columns: Sequence[str] | None = ...,
+ col_space: int | list[int] | dict[Hashable, int] | None = ...,
+ header: bool | Sequence[str] = ...,
+ index: bool = ...,
+ na_rep: str = ...,
+ formatters: fmt.FormattersType | None = ...,
+ float_format: fmt.FloatFormatType | None = ...,
+ sparsify: bool | None = ...,
+ index_names: bool = ...,
+ justify: str | None = ...,
+ max_rows: int | None = ...,
+ max_cols: int | None = ...,
+ show_dimensions: bool = ...,
+ decimal: str = ...,
+ line_width: int | None = ...,
+ min_rows: int | None = ...,
+ max_colwidth: int | None = ...,
+ encoding: str | None = ...,
+ ) -> None:
+ ...
+
@Substitution(
header_type="bool or sequence of strings",
header="Write out the column names. If a list of strings "
@@ -1058,7 +1106,7 @@ def to_string(
self,
buf: FilePathOrBuffer[str] | None = None,
columns: Sequence[str] | None = None,
- col_space: int | None = None,
+ col_space: int | list[int] | dict[Hashable, int] | None = None,
header: bool | Sequence[str] = True,
index: bool = True,
na_rep: str = "NaN",
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/44426 | 2021-11-13T18:33:54Z | 2021-11-18T09:38:11Z | 2021-11-18T09:38:11Z | 2021-11-18T10:09:52Z |
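The overload pattern added above — the return type depends on whether `buf` is passed — in a self-contained form (the `render` function is an illustrative stand-in for `to_string`):

```python
from io import StringIO
from typing import Optional, overload


@overload
def render(buf: None = ...) -> str:
    ...


@overload
def render(buf: StringIO) -> None:
    ...


def render(buf: Optional[StringIO] = None) -> Optional[str]:
    """Return the text when no buffer is given, else write into the buffer."""
    text = "rendered"
    if buf is None:
        return text
    buf.write(text)
    return None
```

A type checker now narrows `render()` to `str`, which is what lets `__repr__` drop the intermediate `StringIO` and return `self.to_string(**repr_params)` directly.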
BUG: Using boolean keys to select a column (GH44322) | diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index a593a03de5c25..02e2523d265e4 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -605,7 +605,7 @@ Indexing
- Bug in :meth:`Series.__setitem__` with an integer dtype other than ``int64`` setting with a ``range`` object unnecessarily upcasting to ``int64`` (:issue:`44261`)
- Bug in :meth:`Series.__setitem__` with a boolean mask indexer setting a listlike value of length 1 incorrectly broadcasting that value (:issue:`44265`)
- Bug in :meth:`DataFrame.loc.__setitem__` and :meth:`DataFrame.iloc.__setitem__` with mixed dtypes sometimes failing to operate in-place (:issue:`44345`)
--
+- Bug in :meth:`DataFrame.loc.__getitem__` incorrectly raising ``KeyError`` when selecting a single column with a boolean key (:issue:`44322`).
Missing
^^^^^^^
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 91f1415178471..7d12a27aed84a 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -994,7 +994,7 @@ def _validate_key(self, key, axis: int):
# slice of labels (where start-end in labels)
# slice of integers (only if in the labels)
# boolean not in slice and with boolean index
- if isinstance(key, bool) and not is_bool_dtype(self.obj.index):
+ if isinstance(key, bool) and not is_bool_dtype(self.obj._get_axis(axis)):
raise KeyError(
f"{key}: boolean label can not be used without a boolean index"
)
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 63d1568ed4d43..a07928b40ad0e 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -178,6 +178,26 @@ def test_column_types_consistent(self):
)
tm.assert_frame_equal(df, expected)
+ @pytest.mark.parametrize(
+ "obj, key, exp",
+ [
+ (
+ DataFrame([[1]], columns=Index([False])),
+ IndexSlice[:, False],
+ Series([1], name=False),
+ ),
+ (Series([1], index=Index([False])), False, [1]),
+ (DataFrame([[1]], index=Index([False])), False, Series([1], name=False)),
+ ],
+ )
+ def test_loc_getitem_single_boolean_arg(self, obj, key, exp):
+ # GH 44322
+ res = obj.loc[key]
+ if isinstance(exp, (DataFrame, Series)):
+ tm.assert_equal(res, exp)
+ else:
+ assert res == exp
+
class TestLoc2:
# TODO: better name, just separating out things that rely on base class
| - [x] closes #44322
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44425 | 2021-11-13T12:51:42Z | 2021-11-15T18:22:56Z | 2021-11-15T18:22:55Z | 2021-11-15T18:22:59Z |
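The fix above swaps `self.obj.index` for `self.obj._get_axis(axis)` so a boolean key is validated against the axis actually being indexed, not unconditionally against axis 0. A minimal stdlib sketch of that axis-aware check — `validate_bool_key` and the tuple-of-label-lists representation are illustrative assumptions, not pandas API:

```python
def validate_bool_key(key, axes, axis):
    # Check the key against the axis actually being indexed (axes[axis]),
    # not always against axis 0 -- the essence of the fix above.
    labels = axes[axis]
    if isinstance(key, bool) and not all(isinstance(lab, bool) for lab in labels):
        raise KeyError(
            f"{key}: boolean label can not be used without a boolean index"
        )


axes = ([0, 1, 2], [False])  # (index, columns); the columns hold a boolean label

validate_bool_key(False, axes, axis=1)  # columns are boolean: accepted

try:
    validate_bool_key(False, axes, axis=0)  # index is integer: rejected
    raised = False
except KeyError:
    raised = True
assert raised
```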
DOC: whatsnew for the improvement to warning messages | diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index 8db9be21ca4ef..94606e049018e 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -15,6 +15,31 @@ including other versions of pandas.
Enhancements
~~~~~~~~~~~~
+.. _whatsnew_140.enhancements.warning_lineno:
+
+Improved warning messages
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Previously, warning messages may have pointed to lines within the pandas library. Running the script ``setting_with_copy_warning.py``
+
+.. code-block:: python
+
+ import pandas as pd
+
+ df = pd.DataFrame({'a': [1, 2, 3]})
+ df[:2].loc[:, 'a'] = 5
+
+with pandas 1.3 resulted in::
+
+ .../site-packages/pandas/core/indexing.py:1951: SettingWithCopyWarning:
+ A value is trying to be set on a copy of a slice from a DataFrame.
+
+This made it difficult to determine where the warning was being generated from. Now pandas will inspect the call stack, reporting the first line outside of the pandas library that gave rise to the warning. The output of the above script is now::
+
+ setting_with_copy_warning.py:4: SettingWithCopyWarning:
+ A value is trying to be set on a copy of a slice from a DataFrame.
+
+
.. _whatsnew_140.enhancements.numeric_index:
More flexible numeric dtypes for indexes
| - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
The use of find_stack_level is a significant improvement for users, so I thought it should go in the whatsnew. It appears to me that the ipython directive in sphinx will suppress any warning message, so I've just made it a literal block.
cc @phofl
| https://api.github.com/repos/pandas-dev/pandas/pulls/44419 | 2021-11-12T23:07:11Z | 2021-11-13T17:21:03Z | 2021-11-13T17:21:03Z | 2021-11-13T17:34:57Z |
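pandas' actual `find_stack_level` lives in `pandas.util._exceptions` and walks the call stack to find the first frame outside the library. A simplified stdlib sketch of the idea — the explicit `pkg_dir` parameter is an assumption for illustration; the real helper derives the directory from `pandas.__file__`:

```python
import inspect


def find_stack_level(pkg_dir: str) -> int:
    # Count frames, starting from the current one, whose source file lives
    # inside pkg_dir; the first frame outside it belongs to user code, and
    # that count is the stacklevel to pass to warnings.warn so the warning
    # is reported at the user's line rather than inside the library.
    frame = inspect.currentframe()
    n = 0
    while frame is not None:
        if frame.f_code.co_filename.startswith(pkg_dir):
            n += 1
            frame = frame.f_back
        else:
            break
    return max(n, 1)  # warnings.warn requires stacklevel >= 1
```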
DOC: Add how=cross description to join | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b01de5dec610d..212bb63693d56 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -9155,6 +9155,11 @@ def join(
* inner: form intersection of calling frame's index (or column if
on is specified) with `other`'s index, preserving the order
of the calling's one.
+ * cross: creates the cartesian product from both frames, preserves the order
+ of the left keys.
+
+ .. versionadded:: 1.2.0
+
lsuffix : str, default ''
Suffix to use from left frame's overlapping columns.
rsuffix : str, default ''
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/44418 | 2021-11-12T22:44:01Z | 2021-11-13T17:05:35Z | 2021-11-13T17:05:35Z | 2021-11-13T18:48:29Z |
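`how="cross"` is implemented inside pandas, but the cartesian-product semantics the docstring describes can be sketched with the standard library alone (the sample rows are illustrative):

```python
from itertools import product

left = [("a", 1), ("b", 2)]
right = [("x", 10), ("y", 20)]

# how="cross" pairs every left row with every right row,
# preserving the order of the left keys.
cross = [lrow + rrow for lrow, rrow in product(left, right)]

assert len(cross) == len(left) * len(right)
assert cross[0] == ("a", 1, "x", 10)
```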
BUG: DataFrame.astype(series) with duplicate columns | diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index 59b164c156d79..8964f5e3ffad2 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -714,6 +714,7 @@ Styler
Other
^^^^^
+- Bug in :meth:`DataFrame.astype` with non-unique columns and a :class:`Series` ``dtype`` argument (:issue:`44417`)
- Bug in :meth:`CustomBusinessMonthBegin.__add__` (:meth:`CustomBusinessMonthEnd.__add__`) not applying the extra ``offset`` parameter when beginning (end) of the target month is already a business day (:issue:`41356`)
- Bug in :meth:`RangeIndex.union` with another ``RangeIndex`` with matching (even) ``step`` and starts differing by strictly less than ``step / 2`` (:issue:`44019`)
- Bug in :meth:`RangeIndex.difference` with ``sort=None`` and ``step<0`` failing to sort (:issue:`44085`)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 6b51456006021..eb0c5a236c2da 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -5826,14 +5826,22 @@ def astype(
"Only a column name can be used for the "
"key in a dtype mappings argument."
)
+
+ # GH#44417 cast to Series so we can use .iat below, which will be
+ # robust in case we have non-unique columns
+ from pandas import Series
+
+ dtype_ser = Series(dtype, dtype=object)
+ dtype_ser = dtype_ser.reindex(self.columns, fill_value=None, copy=False)
+
results = []
- for col_name, col in self.items():
- if col_name in dtype:
- results.append(
- col.astype(dtype=dtype[col_name], copy=copy, errors=errors)
- )
+ for i, (col_name, col) in enumerate(self.items()):
+ cdt = dtype_ser.iat[i]
+ if isna(cdt):
+ res_col = col.copy() if copy else col
else:
- results.append(col.copy() if copy else col)
+ res_col = col.astype(dtype=cdt, copy=copy, errors=errors)
+ results.append(res_col)
elif is_extension_array_dtype(dtype) and self.ndim > 1:
# GH 18099/22869: columnwise conversion to extension dtype
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 3c45f7263265c..b8354e800753d 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -992,7 +992,7 @@ def _wrap_applied_output(
result = self.obj._constructor(
index=self.grouper.result_index, columns=data.columns
)
- result = result.astype(data.dtypes.to_dict(), copy=False)
+ result = result.astype(data.dtypes, copy=False)
return result
# GH12824
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index 9f1f953cecc7e..e5e07761fd755 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -261,6 +261,26 @@ def test_astype_duplicate_col(self):
expected = concat([a1_str, b, a2_str], axis=1)
tm.assert_frame_equal(result, expected)
+ def test_astype_duplicate_col_series_arg(self):
+ # GH#44417
+ vals = np.random.randn(3, 4)
+ df = DataFrame(vals, columns=["A", "B", "C", "A"])
+ dtypes = df.dtypes
+ dtypes.iloc[0] = str
+ dtypes.iloc[2] = "Float64"
+
+ result = df.astype(dtypes)
+ expected = DataFrame(
+ {
+ 0: vals[:, 0].astype(str),
+ 1: vals[:, 1],
+ 2: pd.array(vals[:, 2], dtype="Float64"),
+ 3: vals[:, 3],
+ }
+ )
+ expected.columns = df.columns
+ tm.assert_frame_equal(result, expected)
+
@pytest.mark.parametrize(
"dtype",
[
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 203d8abb465d0..f632da9616124 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -2031,6 +2031,16 @@ def get_result():
tm.assert_equal(result, expected)
+def test_empty_groupby_apply_nonunique_columns():
+ # GH#44417
+ df = DataFrame(np.random.randn(0, 4))
+ df[3] = df[3].astype(np.int64)
+ df.columns = [0, 1, 2, 0]
+ gb = df.groupby(df[1])
+ res = gb.apply(lambda x: x)
+ assert (res.dtypes == df.dtypes).all()
+
+
def test_tuple_as_grouping():
# https://github.com/pandas-dev/pandas/issues/18314
df = DataFrame(
| - [ ] closes #xxxx
- [x] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44417 | 2021-11-12T22:24:42Z | 2021-11-14T02:12:10Z | 2021-11-14T02:12:10Z | 2021-11-14T06:03:30Z |
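The fix replaces a by-name dict lookup inside the loop with a positional mapping, so each occurrence of a duplicated column label receives its dtype. The idea, sketched without pandas (the column names and mapping are illustrative):

```python
columns = ["A", "B", "C", "A"]  # note the duplicate "A"
dtype_map = {"A": str, "C": float}

# Resolving the dtype per position (rather than by dict lookup while
# iterating columns) handles every occurrence of a duplicated label;
# None marks columns whose dtype should be left unchanged.
per_column = [dtype_map.get(name) for name in columns]

assert per_column == [str, None, float, str]
```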
ENH: Use find_stack_level | diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py
index c9f7fd43c1050..05cd3a3a72257 100644
--- a/pandas/_testing/asserters.py
+++ b/pandas/_testing/asserters.py
@@ -11,6 +11,7 @@
)
from pandas._libs.missing import is_matching_na
import pandas._libs.testing as _testing
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
is_bool,
@@ -106,7 +107,7 @@ def assert_almost_equal(
"is deprecated and will be removed in a future version. "
"You can stop passing 'check_less_precise' to silence this warning.",
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
# https://github.com/python/mypy/issues/7642
# error: Argument 1 to "_get_tol_from_less_precise" has incompatible
@@ -340,7 +341,7 @@ def _get_ilevel_values(index, level):
"is deprecated and will be removed in a future version. "
"You can stop passing 'check_less_precise' to silence this warning.",
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
# https://github.com/python/mypy/issues/7642
# error: Argument 1 to "_get_tol_from_less_precise" has incompatible
@@ -818,7 +819,7 @@ def assert_extension_array_equal(
"is deprecated and will be removed in a future version. "
"You can stop passing 'check_less_precise' to silence this warning.",
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
rtol = atol = _get_tol_from_less_precise(check_less_precise)
@@ -964,7 +965,7 @@ def assert_series_equal(
"is deprecated and will be removed in a future version. "
"You can stop passing 'check_less_precise' to silence this warning.",
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
rtol = atol = _get_tol_from_less_precise(check_less_precise)
@@ -1247,7 +1248,7 @@ def assert_frame_equal(
"is deprecated and will be removed in a future version. "
"You can stop passing 'check_less_precise' to silence this warning.",
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
rtol = atol = _get_tol_from_less_precise(check_less_precise)
diff --git a/pandas/io/common.py b/pandas/io/common.py
index be6577e646ac3..12c7afc8ee2e4 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -49,6 +49,7 @@
import_lzma,
)
from pandas.compat._optional import import_optional_dependency
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import is_file_like
@@ -270,7 +271,7 @@ def _get_filepath_or_buffer(
warnings.warn(
"compression has no effect when passing a non-binary object as input.",
RuntimeWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
compression_method = None
diff --git a/pandas/io/date_converters.py b/pandas/io/date_converters.py
index f079a25f69fec..ef60afa195234 100644
--- a/pandas/io/date_converters.py
+++ b/pandas/io/date_converters.py
@@ -4,6 +4,7 @@
import numpy as np
from pandas._libs.tslibs import parsing
+from pandas.util._exceptions import find_stack_level
def parse_date_time(date_col, time_col):
@@ -18,7 +19,7 @@ def parse_date_time(date_col, time_col):
Use pd.to_datetime(date_col + " " + time_col).to_pydatetime() instead to get a Numpy array.
""", # noqa: E501
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
date_col = _maybe_cast(date_col)
time_col = _maybe_cast(time_col)
@@ -38,7 +39,7 @@ def parse_date_fields(year_col, month_col, day_col):
np.array([s.to_pydatetime() for s in ser]) instead to get a Numpy array.
""", # noqa: E501
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
year_col = _maybe_cast(year_col)
@@ -63,7 +64,7 @@ def parse_all_fields(year_col, month_col, day_col, hour_col, minute_col, second_
np.array([s.to_pydatetime() for s in ser]) instead to get a Numpy array.
""", # noqa: E501
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
year_col = _maybe_cast(year_col)
@@ -89,7 +90,7 @@ def generic_parser(parse_func, *cols):
Use pd.to_datetime instead.
""",
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
N = _check_columns(cols)
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index e543c9161a26e..1caf334f9607e 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -833,7 +833,7 @@ def __new__(
warnings.warn(
"Use of **kwargs is deprecated, use engine_kwargs instead.",
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
# only switch class if generic(ExcelWriter)
@@ -868,7 +868,7 @@ def __new__(
"deprecated and will also raise a warning, it can "
"be globally set and the warning suppressed.",
FutureWarning,
- stacklevel=4,
+ stacklevel=find_stack_level(),
)
cls = get_writer(engine)
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 339585810bec1..6374f52f6964b 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -32,6 +32,7 @@
ParserError,
ParserWarning,
)
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.cast import astype_nansafe
from pandas.core.dtypes.common import (
@@ -558,7 +559,7 @@ def _convert_to_ndarrays(
f"for column {c} - only the converter will be used."
),
ParserWarning,
- stacklevel=7,
+ stacklevel=find_stack_level(),
)
try:
@@ -830,7 +831,7 @@ def _check_data_length(self, columns: list[str], data: list[ArrayLike]) -> None:
"Length of header or names does not match length of data. This leads "
"to a loss of data with index_col=False.",
ParserWarning,
- stacklevel=6,
+ stacklevel=find_stack_level(),
)
def _evaluate_usecols(self, usecols, names):
diff --git a/pandas/io/parsers/c_parser_wrapper.py b/pandas/io/parsers/c_parser_wrapper.py
index 352dd998dda0f..db750cded45e5 100644
--- a/pandas/io/parsers/c_parser_wrapper.py
+++ b/pandas/io/parsers/c_parser_wrapper.py
@@ -10,6 +10,7 @@
FilePathOrBuffer,
)
from pandas.errors import DtypeWarning
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
is_categorical_dtype,
@@ -387,7 +388,7 @@ def _concatenate_chunks(chunks: list[dict[int, ArrayLike]]) -> dict:
f"Specify dtype option on import or set low_memory=False."
]
)
- warnings.warn(warning_message, DtypeWarning, stacklevel=8)
+ warnings.warn(warning_message, DtypeWarning, stacklevel=find_stack_level())
return result
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index b0e868b260369..4d596aa2f3fa6 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -24,6 +24,7 @@
EmptyDataError,
ParserError,
)
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import is_integer
from pandas.core.dtypes.inference import is_dict_like
@@ -555,7 +556,7 @@ def _handle_usecols(
"Defining usecols with out of bounds indices is deprecated "
"and will raise a ParserError in a future version.",
FutureWarning,
- stacklevel=8,
+ stacklevel=find_stack_level(),
)
col_indices = self.usecols
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index 6d3cc84a31d05..6fb9497dbc1d6 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -1041,7 +1041,7 @@ def _clean_options(self, options, engine):
"engine='python'."
),
ParserWarning,
- stacklevel=5,
+ stacklevel=find_stack_level(),
)
index_col = options["index_col"]
@@ -1573,7 +1573,9 @@ def _merge_with_dialect_properties(
conflict_msgs.append(msg)
if conflict_msgs:
- warnings.warn("\n\n".join(conflict_msgs), ParserWarning, stacklevel=2)
+ warnings.warn(
+ "\n\n".join(conflict_msgs), ParserWarning, stacklevel=find_stack_level()
+ )
kwds[param] = dialect_val
return kwds
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 8c8e9b9feeb80..0e886befb5f2f 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -45,6 +45,7 @@
from pandas.compat.pickle_compat import patch_pickle
from pandas.errors import PerformanceWarning
from pandas.util._decorators import cache_readonly
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
ensure_object,
@@ -2190,7 +2191,9 @@ def update_info(self, info):
# frequency/name just warn
if key in ["freq", "index_name"]:
ws = attribute_conflict_doc % (key, existing_value, value)
- warnings.warn(ws, AttributeConflictWarning, stacklevel=6)
+ warnings.warn(
+ ws, AttributeConflictWarning, stacklevel=find_stack_level()
+ )
# reset
idx[key] = None
@@ -3080,7 +3083,7 @@ def write_array(
pass
else:
ws = performance_doc % (inferred_type, key, items)
- warnings.warn(ws, PerformanceWarning, stacklevel=7)
+ warnings.warn(ws, PerformanceWarning, stacklevel=find_stack_level())
vlarr = self._handle.create_vlarray(self.group, key, _tables().ObjectAtom())
vlarr.append(value)
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index ec5262ee3a04c..867ce52cbde6f 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -28,6 +28,7 @@
from pandas._typing import DtypeArg
from pandas.compat._optional import import_optional_dependency
from pandas.errors import AbstractMethodError
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
is_datetime64tz_dtype,
@@ -1159,7 +1160,7 @@ def _sqlalchemy_type(self, col):
"the 'timedelta' type is not supported, and will be "
"written as integer values (ns frequency) to the database.",
UserWarning,
- stacklevel=8,
+ stacklevel=find_stack_level(),
)
return BigInteger
elif col_type == "floating":
@@ -1886,7 +1887,7 @@ def _create_table_setup(self):
pat = re.compile(r"\s+")
column_names = [col_name for col_name, _, _ in column_names_and_types]
if any(map(pat.search, column_names)):
- warnings.warn(_SAFE_NAMES_WARNING, stacklevel=6)
+ warnings.warn(_SAFE_NAMES_WARNING, stacklevel=find_stack_level())
escape = _get_valid_sqlite_name
@@ -1948,7 +1949,7 @@ def _sql_type_name(self, col):
"the 'timedelta' type is not supported, and will be "
"written as integer values (ns frequency) to the database.",
UserWarning,
- stacklevel=8,
+ stacklevel=find_stack_level(),
)
col_type = "integer"
diff --git a/pandas/plotting/_matplotlib/tools.py b/pandas/plotting/_matplotlib/tools.py
index 9679e79d8c4ba..5314a61191d78 100644
--- a/pandas/plotting/_matplotlib/tools.py
+++ b/pandas/plotting/_matplotlib/tools.py
@@ -13,6 +13,8 @@
import matplotlib.ticker as ticker
import numpy as np
+from pandas.util._exceptions import find_stack_level
+
from pandas.core.dtypes.common import is_list_like
from pandas.core.dtypes.generic import (
ABCDataFrame,
@@ -233,7 +235,7 @@ def create_subplots(
"When passing multiple axes, sharex and sharey "
"are ignored. These settings must be specified when creating axes.",
UserWarning,
- stacklevel=4,
+ stacklevel=find_stack_level(),
)
if ax.size == naxes:
fig = ax.flat[0].get_figure()
@@ -256,7 +258,7 @@ def create_subplots(
"To output multiple subplots, the figure containing "
"the passed axes is being cleared.",
UserWarning,
- stacklevel=4,
+ stacklevel=find_stack_level(),
)
fig.clear()
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index c2d7f7b3f716c..fc01771507888 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -29,6 +29,7 @@
from pandas._libs.tslibs.parsing import get_rule_month
from pandas._typing import npt
from pandas.util._decorators import cache_readonly
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
is_datetime64_dtype,
@@ -116,7 +117,7 @@ def get_offset(name: str) -> DateOffset:
"get_offset is deprecated and will be removed in a future version, "
"use to_offset instead.",
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
return _get_offset(name)
diff --git a/pandas/util/_validators.py b/pandas/util/_validators.py
index f8bd1ec7bc96a..ee54b1b2074cb 100644
--- a/pandas/util/_validators.py
+++ b/pandas/util/_validators.py
@@ -12,6 +12,8 @@
import numpy as np
+from pandas.util._exceptions import find_stack_level
+
from pandas.core.dtypes.common import (
is_bool,
is_integer,
@@ -339,7 +341,7 @@ def validate_axis_style_args(data, args, kwargs, arg_name, method_name):
"positional arguments for 'index' or 'columns' will raise "
"a 'TypeError'."
)
- warnings.warn(msg, FutureWarning, stacklevel=4)
+ warnings.warn(msg, FutureWarning, stacklevel=find_stack_level())
out[data._get_axis_name(0)] = args[0]
out[data._get_axis_name(1)] = args[1]
else:
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index af9fe4846b27d..0ab59a202149d 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -1,5 +1,7 @@
import warnings
+from pandas.util._exceptions import find_stack_level
+
from pandas._testing import * # noqa
warnings.warn(
@@ -8,5 +10,5 @@
"public API at pandas.testing instead."
),
FutureWarning,
- stacklevel=2,
+ stacklevel=find_stack_level(),
)
| Part of #44347
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
| https://api.github.com/repos/pandas-dev/pandas/pulls/44416 | 2021-11-12T21:24:20Z | 2021-11-13T17:04:23Z | 2021-11-13T17:04:23Z | 2021-11-13T21:16:16Z |
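The `stacklevel` argument being tuned throughout this diff controls which frame a warning is attributed to. A small stdlib demonstration — `library_helper` is a hypothetical stand-in for a library-internal function:

```python
import warnings


def library_helper():
    # stacklevel=2 attributes the warning to the line that called
    # library_helper, not to this warnings.warn call itself.
    warnings.warn("library_helper is deprecated", FutureWarning, stacklevel=2)
    return 42


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = library_helper()

assert result == 42
assert caught[0].category is FutureWarning
```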
TST: de-duplicate assert_slices_equivalent | diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index c2c55a4060f7a..4f9ef2c3c3ffa 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -82,6 +82,7 @@
assert_extension_array_equal,
assert_frame_equal,
assert_index_equal,
+ assert_indexing_slices_equivalent,
assert_interval_array_equal,
assert_is_sorted,
assert_is_valid_plot_return_object,
diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py
index c9f7fd43c1050..82253b73a824f 100644
--- a/pandas/_testing/asserters.py
+++ b/pandas/_testing/asserters.py
@@ -1444,3 +1444,17 @@ def is_extension_array_dtype_and_needs_i8_conversion(left_dtype, right_dtype) ->
Related to issue #37609
"""
return is_extension_array_dtype(left_dtype) and needs_i8_conversion(right_dtype)
+
+
+def assert_indexing_slices_equivalent(ser: Series, l_slc: slice, i_slc: slice):
+ """
+ Check that ser.iloc[i_slc] matches ser.loc[l_slc] and, if applicable,
+ ser[l_slc].
+ """
+ expected = ser.iloc[i_slc]
+
+ assert_series_equal(ser.loc[l_slc], expected)
+
+ if not ser.index.is_integer():
+ # For integer indices, .loc and plain getitem are position-based.
+ assert_series_equal(ser[l_slc], expected)
diff --git a/pandas/tests/indexing/multiindex/test_slice.py b/pandas/tests/indexing/multiindex/test_slice.py
index 42edaa2fe6c3a..55d45a21d643a 100644
--- a/pandas/tests/indexing/multiindex/test_slice.py
+++ b/pandas/tests/indexing/multiindex/test_slice.py
@@ -702,32 +702,30 @@ def test_per_axis_per_level_setitem(self):
tm.assert_frame_equal(df, expected)
def test_multiindex_label_slicing_with_negative_step(self):
- s = Series(
+ ser = Series(
np.arange(20), MultiIndex.from_product([list("abcde"), np.arange(4)])
)
SLC = pd.IndexSlice
- def assert_slices_equivalent(l_slc, i_slc):
- tm.assert_series_equal(s.loc[l_slc], s.iloc[i_slc])
- tm.assert_series_equal(s[l_slc], s.iloc[i_slc])
+ tm.assert_indexing_slices_equivalent(ser, SLC[::-1], SLC[::-1])
- assert_slices_equivalent(SLC[::-1], SLC[::-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC["d"::-1], SLC[15::-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[("d",)::-1], SLC[15::-1])
- assert_slices_equivalent(SLC["d"::-1], SLC[15::-1])
- assert_slices_equivalent(SLC[("d",)::-1], SLC[15::-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[:"d":-1], SLC[:11:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[:("d",):-1], SLC[:11:-1])
- assert_slices_equivalent(SLC[:"d":-1], SLC[:11:-1])
- assert_slices_equivalent(SLC[:("d",):-1], SLC[:11:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC["d":"b":-1], SLC[15:3:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[("d",):"b":-1], SLC[15:3:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC["d":("b",):-1], SLC[15:3:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[("d",):("b",):-1], SLC[15:3:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC["b":"d":-1], SLC[:0])
- assert_slices_equivalent(SLC["d":"b":-1], SLC[15:3:-1])
- assert_slices_equivalent(SLC[("d",):"b":-1], SLC[15:3:-1])
- assert_slices_equivalent(SLC["d":("b",):-1], SLC[15:3:-1])
- assert_slices_equivalent(SLC[("d",):("b",):-1], SLC[15:3:-1])
- assert_slices_equivalent(SLC["b":"d":-1], SLC[:0])
-
- assert_slices_equivalent(SLC[("c", 2)::-1], SLC[10::-1])
- assert_slices_equivalent(SLC[:("c", 2):-1], SLC[:9:-1])
- assert_slices_equivalent(SLC[("e", 0):("c", 2):-1], SLC[16:9:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[("c", 2)::-1], SLC[10::-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[:("c", 2):-1], SLC[:9:-1])
+ tm.assert_indexing_slices_equivalent(
+ ser, SLC[("e", 0):("c", 2):-1], SLC[16:9:-1]
+ )
def test_multiindex_slice_first_level(self):
# GH 12697
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 7c7e9f79a77ae..2805c8877ed78 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -709,21 +709,17 @@ def run_tests(df, rhs, right_loc, right_iloc):
def test_str_label_slicing_with_negative_step(self):
SLC = pd.IndexSlice
- def assert_slices_equivalent(l_slc, i_slc):
- tm.assert_series_equal(s.loc[l_slc], s.iloc[i_slc])
-
- if not idx.is_integer:
- # For integer indices, .loc and plain getitem are position-based.
- tm.assert_series_equal(s[l_slc], s.iloc[i_slc])
- tm.assert_series_equal(s.loc[l_slc], s.iloc[i_slc])
-
for idx in [_mklbl("A", 20), np.arange(20) + 100, np.linspace(100, 150, 20)]:
idx = Index(idx)
- s = Series(np.arange(20), index=idx)
- assert_slices_equivalent(SLC[idx[9] :: -1], SLC[9::-1])
- assert_slices_equivalent(SLC[: idx[9] : -1], SLC[:8:-1])
- assert_slices_equivalent(SLC[idx[13] : idx[9] : -1], SLC[13:8:-1])
- assert_slices_equivalent(SLC[idx[9] : idx[13] : -1], SLC[:0])
+ ser = Series(np.arange(20), index=idx)
+ tm.assert_indexing_slices_equivalent(ser, SLC[idx[9] :: -1], SLC[9::-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[: idx[9] : -1], SLC[:8:-1])
+ tm.assert_indexing_slices_equivalent(
+ ser, SLC[idx[13] : idx[9] : -1], SLC[13:8:-1]
+ )
+ tm.assert_indexing_slices_equivalent(
+ ser, SLC[idx[9] : idx[13] : -1], SLC[:0]
+ )
def test_slice_with_zero_step_raises(self, indexer_sl, frame_or_series):
obj = frame_or_series(np.arange(20), index=_mklbl("A", 20))
diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
index 6c3587c7eeada..8a34882b1e5d4 100644
--- a/pandas/tests/series/indexing/test_indexing.py
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -338,26 +338,19 @@ def test_slice_with_zero_step_raises(index, frame_or_series, indexer_sli):
],
)
def test_slice_with_negative_step(index):
- def assert_slices_equivalent(l_slc, i_slc):
- expected = ts.iloc[i_slc]
-
- tm.assert_series_equal(ts[l_slc], expected)
- tm.assert_series_equal(ts.loc[l_slc], expected)
-
keystr1 = str(index[9])
keystr2 = str(index[13])
- box = type(index[0])
- ts = Series(np.arange(20), index)
+ ser = Series(np.arange(20), index)
SLC = IndexSlice
- for key in [keystr1, box(keystr1)]:
- assert_slices_equivalent(SLC[key::-1], SLC[9::-1])
- assert_slices_equivalent(SLC[:key:-1], SLC[:8:-1])
+ for key in [keystr1, index[9]]:
+ tm.assert_indexing_slices_equivalent(ser, SLC[key::-1], SLC[9::-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[:key:-1], SLC[:8:-1])
- for key2 in [keystr2, box(keystr2)]:
- assert_slices_equivalent(SLC[key2:key:-1], SLC[13:8:-1])
- assert_slices_equivalent(SLC[key:key2:-1], SLC[0:0:-1])
+ for key2 in [keystr2, index[13]]:
+ tm.assert_indexing_slices_equivalent(ser, SLC[key2:key:-1], SLC[13:8:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[key:key2:-1], SLC[0:0:-1])
def test_tuple_index():
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
Also fixes one of the usages that uses `.is_integer` instead of `.is_integer()` | https://api.github.com/repos/pandas-dev/pandas/pulls/44415 | 2021-11-12T19:50:19Z | 2021-11-13T17:05:21Z | 2021-11-13T17:05:21Z | 2021-11-13T17:19:28Z |
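The shared `assert_indexing_slices_equivalent` helper compares a label slice against a positional expectation. The negative-step positional slices it is fed behave like plain Python slicing, which can be checked without pandas:

```python
data = list(range(20))

# Negative-step slices run "backwards": start must come after stop.
assert data[13:8:-1] == [13, 12, 11, 10, 9]

# When start precedes stop with a negative step, the result is empty,
# mirroring SLC[idx[9]:idx[13]:-1] -> SLC[:0] in the tests above.
assert data[9:13:-1] == []
```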
TST: collect/share Index tests | diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index 389bf56ab6035..bb1a1bc72116d 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -44,6 +44,19 @@
class TestDataFrameSetItem:
+ def test_setitem_str_subclass(self):
+ # GH#37366
+ class mystring(str):
+ pass
+
+ data = ["2020-10-22 01:21:00+00:00"]
+ index = DatetimeIndex(data)
+ df = DataFrame({"a": [1]}, index=index)
+ df["b"] = 2
+ df[mystring("c")] = 3
+ expected = DataFrame({"a": [1], "b": [2], mystring("c"): [3]}, index=index)
+ tm.assert_equal(df, expected)
+
@pytest.mark.parametrize("dtype", ["int32", "int64", "float32", "float64"])
def test_setitem_dtype(self, dtype, float_frame):
arr = np.random.randn(len(float_frame))
diff --git a/pandas/tests/indexes/base_class/test_formats.py b/pandas/tests/indexes/base_class/test_formats.py
index f07b06acbfbdb..9053d45dee623 100644
--- a/pandas/tests/indexes/base_class/test_formats.py
+++ b/pandas/tests/indexes/base_class/test_formats.py
@@ -122,6 +122,14 @@ def test_repr_summary(self):
assert len(result) < 200
assert "..." in result
+ def test_summary_bug(self):
+ # GH#3869
+ ind = Index(["{other}%s", "~:{range}:0"], name="A")
+ result = ind._summary()
+ # shouldn't be formatted accidentally.
+ assert "~:{range}:0" in result
+ assert "{other}%s" in result
+
def test_index_repr_bool_nan(self):
# GH32146
arr = Index([True, False, np.nan], dtype=object)
@@ -132,3 +140,9 @@ def test_index_repr_bool_nan(self):
exp2 = repr(arr)
out2 = "Index([True, False, nan], dtype='object')"
assert out2 == exp2
+
+ def test_format_different_scalar_lengths(self):
+ # GH#35439
+ idx = Index(["aaaaaaaaa", "b"])
+ expected = ["aaaaaaaaa", "b"]
+ assert idx.format() == expected
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 33d2558613baf..a5ee743b5cd9a 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -69,26 +69,6 @@ def test_pickle_compat_construction(self):
with pytest.raises(TypeError, match=msg):
self._index_cls()
- @pytest.mark.parametrize("name", [None, "new_name"])
- def test_to_frame(self, name, simple_index):
- # see GH-15230, GH-22580
- idx = simple_index
-
- if name:
- idx_name = name
- else:
- idx_name = idx.name or 0
-
- df = idx.to_frame(name=idx_name)
-
- assert df.index is idx
- assert len(df.columns) == 1
- assert df.columns[0] == idx_name
- assert df[idx_name].values is not idx.values
-
- df = idx.to_frame(index=False, name=idx_name)
- assert df.index is not idx
-
def test_shift(self, simple_index):
# GH8083 test the base class for shift
@@ -226,46 +206,6 @@ def test_repr_max_seq_item_setting(self, simple_index):
repr(idx)
assert "..." not in str(idx)
- def test_copy_name(self, index):
- # gh-12309: Check that the "name" argument
- # passed at initialization is honored.
- if isinstance(index, MultiIndex):
- return
-
- first = type(index)(index, copy=True, name="mario")
- second = type(first)(first, copy=False)
-
- # Even though "copy=False", we want a new object.
- assert first is not second
-
- # Not using tm.assert_index_equal() since names differ.
- assert index.equals(first)
-
- assert first.name == "mario"
- assert second.name == "mario"
-
- s1 = Series(2, index=first)
- s2 = Series(3, index=second[:-1])
-
- if not isinstance(index, CategoricalIndex):
- # See gh-13365
- s3 = s1 * s2
- assert s3.index.name == "mario"
-
- def test_copy_name2(self, index):
- # gh-35592
- if isinstance(index, MultiIndex):
- return
-
- assert index.copy(name="mario").name == "mario"
-
- with pytest.raises(ValueError, match="Length of new names must be 1, got 2"):
- index.copy(name=["mario", "luigi"])
-
- msg = f"{type(index).__name__}.name must be a hashable type"
- with pytest.raises(TypeError, match=msg):
- index.copy(name=[["mario"]])
-
def test_ensure_copied_data(self, index):
# Check the "copy" argument of each Index.__new__ is honoured
# GH12309
diff --git a/pandas/tests/indexes/datetimes/test_formats.py b/pandas/tests/indexes/datetimes/test_formats.py
index 36046aaeacaae..197038dbadaf7 100644
--- a/pandas/tests/indexes/datetimes/test_formats.py
+++ b/pandas/tests/indexes/datetimes/test_formats.py
@@ -254,3 +254,20 @@ def test_dti_custom_business_summary_dateutil(self):
pd.bdate_range(
"1/1/2005", "1/1/2009", freq="C", tz=dateutil.tz.tzutc()
)._summary()
+
+
+class TestFormat:
+ def test_format_with_name_time_info(self):
+ # bug I fixed 12/20/2011
+ dates = pd.date_range("2011-01-01 04:00:00", periods=10, name="something")
+
+ formatted = dates.format(name=True)
+ assert formatted[0] == "something"
+
+ def test_format_datetime_with_time(self):
+ dti = DatetimeIndex([datetime(2012, 2, 7), datetime(2012, 2, 7, 23)])
+
+ result = dti.format()
+ expected = ["2012-02-07 00:00:00", "2012-02-07 23:00:00"]
+ assert len(result) == 2
+ assert result == expected
diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py
index c3152b77d39df..beca71969dfcd 100644
--- a/pandas/tests/indexes/datetimes/test_indexing.py
+++ b/pandas/tests/indexes/datetimes/test_indexing.py
@@ -21,25 +21,12 @@
)
import pandas._testing as tm
-from pandas.tseries.offsets import (
- BDay,
- CDay,
-)
+from pandas.tseries.frequencies import to_offset
START, END = datetime(2009, 1, 1), datetime(2010, 1, 1)
class TestGetItem:
- def test_ellipsis(self):
- # GH#21282
- idx = date_range(
- "2011-01-01", "2011-01-31", freq="D", tz="Asia/Tokyo", name="idx"
- )
-
- result = idx[...]
- assert result.equals(idx)
- assert result is not idx
-
def test_getitem_slice_keeps_name(self):
# GH4226
st = Timestamp("2013-07-01 00:00:00", tz="America/Los_Angeles")
@@ -88,44 +75,17 @@ def test_getitem(self):
tm.assert_index_equal(result, expected)
assert result.freq == expected.freq
- def test_dti_business_getitem(self):
- rng = bdate_range(START, END)
- smaller = rng[:5]
- exp = DatetimeIndex(rng.view(np.ndarray)[:5], freq="B")
- tm.assert_index_equal(smaller, exp)
- assert smaller.freq == exp.freq
-
- assert smaller.freq == rng.freq
-
- sliced = rng[::5]
- assert sliced.freq == BDay() * 5
-
- fancy_indexed = rng[[4, 3, 2, 1, 0]]
- assert len(fancy_indexed) == 5
- assert isinstance(fancy_indexed, DatetimeIndex)
- assert fancy_indexed.freq is None
-
- # 32-bit vs. 64-bit platforms
- assert rng[4] == rng[np.int_(4)]
-
- def test_dti_business_getitem_matplotlib_hackaround(self):
- rng = bdate_range(START, END)
- with tm.assert_produces_warning(FutureWarning):
- # GH#30588 multi-dimensional indexing deprecated
- values = rng[:, None]
- expected = rng.values[:, None]
- tm.assert_numpy_array_equal(values, expected)
-
- def test_dti_custom_getitem(self):
- rng = bdate_range(START, END, freq="C")
+ @pytest.mark.parametrize("freq", ["B", "C"])
+ def test_dti_business_getitem(self, freq):
+ rng = bdate_range(START, END, freq=freq)
smaller = rng[:5]
- exp = DatetimeIndex(rng.view(np.ndarray)[:5], freq="C")
+ exp = DatetimeIndex(rng.view(np.ndarray)[:5], freq=freq)
tm.assert_index_equal(smaller, exp)
assert smaller.freq == exp.freq
assert smaller.freq == rng.freq
sliced = rng[::5]
- assert sliced.freq == CDay() * 5
+ assert sliced.freq == to_offset(freq) * 5
fancy_indexed = rng[[4, 3, 2, 1, 0]]
assert len(fancy_indexed) == 5
@@ -135,8 +95,9 @@ def test_dti_custom_getitem(self):
# 32-bit vs. 64-bit platforms
assert rng[4] == rng[np.int_(4)]
- def test_dti_custom_getitem_matplotlib_hackaround(self):
- rng = bdate_range(START, END, freq="C")
+ @pytest.mark.parametrize("freq", ["B", "C"])
+ def test_dti_business_getitem_matplotlib_hackaround(self, freq):
+ rng = bdate_range(START, END, freq=freq)
with tm.assert_produces_warning(FutureWarning):
# GH#30588 multi-dimensional indexing deprecated
values = rng[:, None]
@@ -255,6 +216,12 @@ def test_where_tz(self):
class TestTake:
+ def test_take_nan_first_datetime(self):
+ index = DatetimeIndex([pd.NaT, Timestamp("20130101"), Timestamp("20130102")])
+ result = index.take([-1, 0, 1])
+ expected = DatetimeIndex([index[-1], index[0], index[1]])
+ tm.assert_index_equal(result, expected)
+
def test_take(self):
# GH#10295
idx1 = date_range("2011-01-01", "2011-01-31", freq="D", name="idx")
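The hunk above swaps hard-coded `BDay() * 5` / `CDay() * 5` assertions for `to_offset(freq) * 5`, which is what lets one parametrized test cover both the business ("B") and custom-business ("C") frequencies. A minimal sketch of the equivalence it relies on (assuming pandas is importable):

```python
from pandas.tseries.frequencies import to_offset
from pandas.tseries.offsets import BDay, CDay

# to_offset maps a frequency alias string to the matching offset
# object, so a single `to_offset(freq) * 5` expression stands in for
# both the BDay and CDay branches of the pre-refactor tests:
assert to_offset("B") * 5 == BDay() * 5
assert to_offset("C") * 5 == CDay() * 5
```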
diff --git a/pandas/tests/indexes/interval/test_indexing.py b/pandas/tests/indexes/interval/test_indexing.py
index 8df8eef69e9c9..f12f32724b9e1 100644
--- a/pandas/tests/indexes/interval/test_indexing.py
+++ b/pandas/tests/indexes/interval/test_indexing.py
@@ -11,6 +11,7 @@
Interval,
IntervalIndex,
NaT,
+ Series,
Timedelta,
date_range,
timedelta_range,
@@ -523,3 +524,37 @@ def test_putmask_td64(self):
result = idx.putmask(mask, idx[-1])
expected = IntervalIndex([idx[-1]] * 3 + list(idx[3:]))
tm.assert_index_equal(result, expected)
+
+
+class TestGetValue:
+ @pytest.mark.parametrize("key", [[5], (2, 3)])
+ def test_get_value_non_scalar_errors(self, key):
+ # GH#31117
+ idx = IntervalIndex.from_tuples([(1, 3), (2, 4), (3, 5), (7, 10), (3, 10)])
+ ser = Series(range(len(idx)), index=idx)
+
+ msg = str(key)
+ with pytest.raises(InvalidIndexError, match=msg):
+ with tm.assert_produces_warning(FutureWarning):
+ idx.get_value(ser, key)
+
+
+class TestContains:
+ # .__contains__, not .contains
+
+ def test_contains_dunder(self):
+
+ index = IntervalIndex.from_arrays([0, 1], [1, 2], closed="right")
+
+ # __contains__ requires perfect matches to intervals.
+ assert 0 not in index
+ assert 1 not in index
+ assert 2 not in index
+
+ assert Interval(0, 1, closed="right") in index
+ assert Interval(0, 2, closed="right") not in index
+ assert Interval(0, 0.5, closed="right") not in index
+ assert Interval(3, 5, closed="right") not in index
+ assert Interval(-1, 0, closed="left") not in index
+ assert Interval(0, 1, closed="left") not in index
+ assert Interval(0, 1, closed="both") not in index
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index 321d1aa34b9af..843885832690f 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -4,8 +4,6 @@
import numpy as np
import pytest
-from pandas.errors import InvalidIndexError
-
import pandas as pd
from pandas import (
Index,
@@ -500,23 +498,6 @@ def test_contains_method(self):
):
i.contains(Interval(0, 1))
- def test_contains_dunder(self):
-
- index = IntervalIndex.from_arrays([0, 1], [1, 2], closed="right")
-
- # __contains__ requires perfect matches to intervals.
- assert 0 not in index
- assert 1 not in index
- assert 2 not in index
-
- assert Interval(0, 1, closed="right") in index
- assert Interval(0, 2, closed="right") not in index
- assert Interval(0, 0.5, closed="right") not in index
- assert Interval(3, 5, closed="right") not in index
- assert Interval(-1, 0, closed="left") not in index
- assert Interval(0, 1, closed="left") not in index
- assert Interval(0, 1, closed="both") not in index
-
def test_dropna(self, closed):
expected = IntervalIndex.from_tuples([(0.0, 1.0), (1.0, 2.0)], closed=closed)
@@ -908,24 +889,6 @@ def test_is_all_dates(self):
year_2017_index = IntervalIndex([year_2017])
assert not year_2017_index._is_all_dates
- @pytest.mark.parametrize("key", [[5], (2, 3)])
- def test_get_value_non_scalar_errors(self, key):
- # GH 31117
- idx = IntervalIndex.from_tuples([(1, 3), (2, 4), (3, 5), (7, 10), (3, 10)])
- s = pd.Series(range(len(idx)), index=idx)
-
- msg = str(key)
- with pytest.raises(InvalidIndexError, match=msg):
- with tm.assert_produces_warning(FutureWarning):
- idx.get_value(s, key)
-
- @pytest.mark.parametrize("closed", ["left", "right", "both"])
- def test_pickle_round_trip_closed(self, closed):
- # https://github.com/pandas-dev/pandas/issues/35658
- idx = IntervalIndex.from_tuples([(1, 2), (2, 3)], closed=closed)
- result = tm.round_trip_pickle(idx)
- tm.assert_index_equal(result, idx)
-
def test_dir():
# GH#27571 dir(interval_index) should not raise
diff --git a/pandas/tests/indexes/interval/test_pickle.py b/pandas/tests/indexes/interval/test_pickle.py
new file mode 100644
index 0000000000000..308a90e72eab5
--- /dev/null
+++ b/pandas/tests/indexes/interval/test_pickle.py
@@ -0,0 +1,13 @@
+import pytest
+
+from pandas import IntervalIndex
+import pandas._testing as tm
+
+
+class TestPickle:
+ @pytest.mark.parametrize("closed", ["left", "right", "both"])
+ def test_pickle_round_trip_closed(self, closed):
+ # https://github.com/pandas-dev/pandas/issues/35658
+ idx = IntervalIndex.from_tuples([(1, 2), (2, 3)], closed=closed)
+ result = tm.round_trip_pickle(idx)
+ tm.assert_index_equal(result, idx)
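For context on the helper used in the new pickle test file: `tm.round_trip_pickle` serializes an object and reads it straight back so the test can compare the reconstruction against the original. A simplified stdlib-only stand-in (the real pandas helper additionally goes through a temporary file path):

```python
import pickle

# Hypothetical stand-in for pandas._testing.round_trip_pickle:
# dump the object to bytes and immediately load it back.
def round_trip_pickle(obj):
    return pickle.loads(pickle.dumps(obj))

original = {"closed": "left", "tuples": [(1, 2), (2, 3)]}
assert round_trip_pickle(original) == original
```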
diff --git a/pandas/tests/indexes/multi/test_compat.py b/pandas/tests/indexes/multi/test_compat.py
index d2b5a595b8454..cbb4ae0b0d09b 100644
--- a/pandas/tests/indexes/multi/test_compat.py
+++ b/pandas/tests/indexes/multi/test_compat.py
@@ -96,10 +96,3 @@ def test_inplace_mutation_resets_values():
assert "_values" not in mi2._cache
tm.assert_almost_equal(mi2.values, new_values)
assert "_values" in mi2._cache
-
-
-def test_pickle_compat_construction():
- # this is testing for pickle compat
- # need an object to create with
- with pytest.raises(TypeError, match="Must pass both levels and codes"):
- MultiIndex()
diff --git a/pandas/tests/indexes/multi/test_pickle.py b/pandas/tests/indexes/multi/test_pickle.py
new file mode 100644
index 0000000000000..1d8b721404421
--- /dev/null
+++ b/pandas/tests/indexes/multi/test_pickle.py
@@ -0,0 +1,10 @@
+import pytest
+
+from pandas import MultiIndex
+
+
+def test_pickle_compat_construction():
+ # this is testing for pickle compat
+ # need an object to create with
+ with pytest.raises(TypeError, match="Must pass both levels and codes"):
+ MultiIndex()
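The `pytest.raises(..., match=...)` idiom used throughout these moved tests treats `match` as a regular expression searched against `str(exc)`. A minimal sketch with a hypothetical constructor standing in for `MultiIndex`:

```python
import pytest

# Hypothetical stand-in mirroring MultiIndex's no-argument failure mode.
def construct(levels=None, codes=None):
    if levels is None or codes is None:
        raise TypeError("Must pass both levels and codes")
    return (levels, codes)

# `match` is re.search'd against the exception message, so a plain
# substring (with no regex metacharacters) works as-is:
with pytest.raises(TypeError, match="Must pass both levels and codes"):
    construct()
```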
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index 1b5e64bca03a0..df2f114e73df2 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -52,14 +52,6 @@ def non_comparable_idx(request):
class TestGetItem:
- def test_ellipsis(self):
- # GH#21282
- idx = period_range("2011-01-01", "2011-01-31", freq="D", name="idx")
-
- result = idx[...]
- assert result.equals(idx)
- assert result is not idx
-
def test_getitem_slice_keeps_name(self):
idx = period_range("20010101", periods=10, freq="D", name="bob")
assert idx.name == idx[1:].name
diff --git a/pandas/tests/indexes/test_any_index.py b/pandas/tests/indexes/test_any_index.py
index f7dafd78a801f..91679959e7979 100644
--- a/pandas/tests/indexes/test_any_index.py
+++ b/pandas/tests/indexes/test_any_index.py
@@ -137,6 +137,12 @@ def test_pickle_preserves_name(self, index):
class TestIndexing:
+ def test_getitem_ellipsis(self, index):
+ # GH#21282
+ result = index[...]
+ assert result.equals(index)
+ assert result is not index
+
def test_slice_keeps_name(self, index):
assert index.name == index[1:].name
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 7f9a5c0b50595..59ec66ecc1fe9 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -1,8 +1,5 @@
from collections import defaultdict
-from datetime import (
- datetime,
- timedelta,
-)
+from datetime import datetime
from io import StringIO
import math
import re
@@ -10,10 +7,7 @@
import numpy as np
import pytest
-from pandas.compat import (
- IS64,
- np_datetime64_compat,
-)
+from pandas.compat import IS64
from pandas.util._test_decorators import async_mark
import pandas as pd
@@ -27,7 +21,6 @@
RangeIndex,
Series,
TimedeltaIndex,
- Timestamp,
date_range,
period_range,
)
@@ -219,91 +212,6 @@ def test_constructor_simple_new(self, vals, dtype):
result = index._simple_new(index.values, dtype)
tm.assert_index_equal(result, index)
- @pytest.mark.parametrize(
- "vals",
- [
- [1, 2, 3],
- np.array([1, 2, 3]),
- np.array([1, 2, 3], dtype=int),
- # below should coerce
- [1.0, 2.0, 3.0],
- np.array([1.0, 2.0, 3.0], dtype=float),
- ],
- )
- def test_constructor_dtypes_to_int64(self, vals):
- index = Index(vals, dtype=int)
- assert isinstance(index, Int64Index)
-
- @pytest.mark.parametrize(
- "vals",
- [
- [1, 2, 3],
- [1.0, 2.0, 3.0],
- np.array([1.0, 2.0, 3.0]),
- np.array([1, 2, 3], dtype=int),
- np.array([1.0, 2.0, 3.0], dtype=float),
- ],
- )
- def test_constructor_dtypes_to_float64(self, vals):
- index = Index(vals, dtype=float)
- assert isinstance(index, Float64Index)
-
- @pytest.mark.parametrize(
- "vals",
- [
- [1, 2, 3],
- np.array([1, 2, 3], dtype=int),
- np.array(
- [np_datetime64_compat("2011-01-01"), np_datetime64_compat("2011-01-02")]
- ),
- [datetime(2011, 1, 1), datetime(2011, 1, 2)],
- ],
- )
- def test_constructor_dtypes_to_categorical(self, vals):
- index = Index(vals, dtype="category")
- assert isinstance(index, CategoricalIndex)
-
- @pytest.mark.parametrize("cast_index", [True, False])
- @pytest.mark.parametrize(
- "vals",
- [
- Index(
- np.array(
- [
- np_datetime64_compat("2011-01-01"),
- np_datetime64_compat("2011-01-02"),
- ]
- )
- ),
- Index([datetime(2011, 1, 1), datetime(2011, 1, 2)]),
- ],
- )
- def test_constructor_dtypes_to_datetime(self, cast_index, vals):
- if cast_index:
- index = Index(vals, dtype=object)
- assert isinstance(index, Index)
- assert index.dtype == object
- else:
- index = Index(vals)
- assert isinstance(index, DatetimeIndex)
-
- @pytest.mark.parametrize("cast_index", [True, False])
- @pytest.mark.parametrize(
- "vals",
- [
- np.array([np.timedelta64(1, "D"), np.timedelta64(1, "D")]),
- [timedelta(1), timedelta(1)],
- ],
- )
- def test_constructor_dtypes_to_timedelta(self, cast_index, vals):
- if cast_index:
- index = Index(vals, dtype=object)
- assert isinstance(index, Index)
- assert index.dtype == object
- else:
- index = Index(vals)
- assert isinstance(index, TimedeltaIndex)
-
@pytest.mark.filterwarnings("ignore:Passing keywords other:FutureWarning")
@pytest.mark.parametrize("attr", ["values", "asi8"])
@pytest.mark.parametrize("klass", [Index, DatetimeIndex])
@@ -726,20 +634,6 @@ def test_is_all_dates(self, index, expected):
def test_summary(self, index):
index._summary()
- def test_summary_bug(self):
- # GH3869`
- ind = Index(["{other}%s", "~:{range}:0"], name="A")
- result = ind._summary()
- # shouldn't be formatted accidentally.
- assert "~:{range}:0" in result
- assert "{other}%s" in result
-
- def test_format_different_scalar_lengths(self):
- # GH35439
- idx = Index(["aaaaaaaaa", "b"])
- expected = ["aaaaaaaaa", "b"]
- assert idx.format() == expected
-
def test_format_bug(self):
# GH 14626
# windows has different precision on datetime.datetime.now (it doesn't
@@ -767,21 +661,6 @@ def test_format_missing(self, vals, nulls_fixture):
assert formatted == expected
assert index[3] is nulls_fixture
- def test_format_with_name_time_info(self):
- # bug I fixed 12/20/2011
- dates = date_range("2011-01-01 04:00:00", periods=10, name="something")
-
- formatted = dates.format(name=True)
- assert formatted[0] == "something"
-
- def test_format_datetime_with_time(self):
- t = Index([datetime(2012, 2, 7), datetime(2012, 2, 7, 23)])
-
- result = t.format()
- expected = ["2012-02-07 00:00:00", "2012-02-07 23:00:00"]
- assert len(result) == 2
- assert result == expected
-
@pytest.mark.parametrize("op", ["any", "all"])
def test_logical_compat(self, op, simple_index):
index = simple_index
@@ -1129,12 +1008,6 @@ def test_outer_join_sort(self):
tm.assert_index_equal(result, expected)
- def test_nan_first_take_datetime(self):
- index = Index([pd.NaT, Timestamp("20130101"), Timestamp("20130102")])
- result = index.take([-1, 0, 1])
- expected = Index([index[-1], index[0], index[1]])
- tm.assert_index_equal(result, expected)
-
def test_take_fill_value(self):
# GH 12631
index = Index(list("ABC"), name="xxx")
diff --git a/pandas/tests/indexes/test_common.py b/pandas/tests/indexes/test_common.py
index ed9243a5ba8d0..1592c34b48dd8 100644
--- a/pandas/tests/indexes/test_common.py
+++ b/pandas/tests/indexes/test_common.py
@@ -1,7 +1,7 @@
"""
Collection of tests asserting things that should be true for
-any index subclass. Makes use of the `indices` fixture defined
-in pandas/tests/indexes/conftest.py.
+any index subclass except for MultiIndex. Makes use of the `index_flat`
+fixture defined in pandas/conftest.py.
"""
import re
@@ -29,6 +29,26 @@
class TestCommon:
+ @pytest.mark.parametrize("name", [None, "new_name"])
+ def test_to_frame(self, name, index_flat):
+ # see GH#15230, GH#22580
+ idx = index_flat
+
+ if name:
+ idx_name = name
+ else:
+ idx_name = idx.name or 0
+
+ df = idx.to_frame(name=idx_name)
+
+ assert df.index is idx
+ assert len(df.columns) == 1
+ assert df.columns[0] == idx_name
+ assert df[idx_name].values is not idx.values
+
+ df = idx.to_frame(index=False, name=idx_name)
+ assert df.index is not idx
+
def test_droplevel(self, index):
# GH 21115
if isinstance(index, MultiIndex):
@@ -126,6 +146,46 @@ def test_copy_and_deepcopy(self, index_flat):
new_copy = index.copy(deep=True, name="banana")
assert new_copy.name == "banana"
+ def test_copy_name(self, index_flat):
+ # GH#12309: Check that the "name" argument
+ # passed at initialization is honored.
+ index = index_flat
+
+ first = type(index)(index, copy=True, name="mario")
+ second = type(first)(first, copy=False)
+
+ # Even though "copy=False", we want a new object.
+ assert first is not second
+ tm.assert_index_equal(first, second)
+
+ # Not using tm.assert_index_equal() since names differ.
+ assert index.equals(first)
+
+ assert first.name == "mario"
+ assert second.name == "mario"
+
+ # TODO: belongs in series arithmetic tests?
+ s1 = pd.Series(2, index=first)
+ s2 = pd.Series(3, index=second[:-1])
+ # See GH#13365
+ s3 = s1 * s2
+ assert s3.index.name == "mario"
+
+ def test_copy_name2(self, index_flat):
+ # GH#35592
+ index = index_flat
+ if isinstance(index, MultiIndex):
+ return
+
+ assert index.copy(name="mario").name == "mario"
+
+ with pytest.raises(ValueError, match="Length of new names must be 1, got 2"):
+ index.copy(name=["mario", "luigi"])
+
+ msg = f"{type(index).__name__}.name must be a hashable type"
+ with pytest.raises(TypeError, match=msg):
+ index.copy(name=[["mario"]])
+
def test_unique_level(self, index_flat):
# don't test a MultiIndex here (as its tested separated)
index = index_flat
diff --git a/pandas/tests/indexes/test_index_new.py b/pandas/tests/indexes/test_index_new.py
index 5c5ec7219d2d7..deeaffaf5b9cc 100644
--- a/pandas/tests/indexes/test_index_new.py
+++ b/pandas/tests/indexes/test_index_new.py
@@ -1,11 +1,17 @@
"""
Tests for the Index constructor conducting inference.
"""
+from datetime import (
+ datetime,
+ timedelta,
+)
from decimal import Decimal
import numpy as np
import pytest
+from pandas.compat import np_datetime64_compat
+
from pandas.core.dtypes.common import is_unsigned_integer_dtype
from pandas import (
@@ -27,6 +33,7 @@
)
import pandas._testing as tm
from pandas.core.api import (
+ Float64Index,
Int64Index,
UInt64Index,
)
@@ -232,6 +239,91 @@ def test_constructor_int_dtype_nan_raises(self, dtype):
with pytest.raises(ValueError, match=msg):
Index(data, dtype=dtype)
+ @pytest.mark.parametrize(
+ "vals",
+ [
+ [1, 2, 3],
+ np.array([1, 2, 3]),
+ np.array([1, 2, 3], dtype=int),
+ # below should coerce
+ [1.0, 2.0, 3.0],
+ np.array([1.0, 2.0, 3.0], dtype=float),
+ ],
+ )
+ def test_constructor_dtypes_to_int64(self, vals):
+ index = Index(vals, dtype=int)
+ assert isinstance(index, Int64Index)
+
+ @pytest.mark.parametrize(
+ "vals",
+ [
+ [1, 2, 3],
+ [1.0, 2.0, 3.0],
+ np.array([1.0, 2.0, 3.0]),
+ np.array([1, 2, 3], dtype=int),
+ np.array([1.0, 2.0, 3.0], dtype=float),
+ ],
+ )
+ def test_constructor_dtypes_to_float64(self, vals):
+ index = Index(vals, dtype=float)
+ assert isinstance(index, Float64Index)
+
+ @pytest.mark.parametrize(
+ "vals",
+ [
+ [1, 2, 3],
+ np.array([1, 2, 3], dtype=int),
+ np.array(
+ [np_datetime64_compat("2011-01-01"), np_datetime64_compat("2011-01-02")]
+ ),
+ [datetime(2011, 1, 1), datetime(2011, 1, 2)],
+ ],
+ )
+ def test_constructor_dtypes_to_categorical(self, vals):
+ index = Index(vals, dtype="category")
+ assert isinstance(index, CategoricalIndex)
+
+ @pytest.mark.parametrize("cast_index", [True, False])
+ @pytest.mark.parametrize(
+ "vals",
+ [
+ Index(
+ np.array(
+ [
+ np_datetime64_compat("2011-01-01"),
+ np_datetime64_compat("2011-01-02"),
+ ]
+ )
+ ),
+ Index([datetime(2011, 1, 1), datetime(2011, 1, 2)]),
+ ],
+ )
+ def test_constructor_dtypes_to_datetime(self, cast_index, vals):
+ if cast_index:
+ index = Index(vals, dtype=object)
+ assert isinstance(index, Index)
+ assert index.dtype == object
+ else:
+ index = Index(vals)
+ assert isinstance(index, DatetimeIndex)
+
+ @pytest.mark.parametrize("cast_index", [True, False])
+ @pytest.mark.parametrize(
+ "vals",
+ [
+ np.array([np.timedelta64(1, "D"), np.timedelta64(1, "D")]),
+ [timedelta(1), timedelta(1)],
+ ],
+ )
+ def test_constructor_dtypes_to_timedelta(self, cast_index, vals):
+ if cast_index:
+ index = Index(vals, dtype=object)
+ assert isinstance(index, Index)
+ assert index.dtype == object
+ else:
+ index = Index(vals)
+ assert isinstance(index, TimedeltaIndex)
+
class TestIndexConstructorUnwrapping:
# Test passing different arraylike values to pd.Index
diff --git a/pandas/tests/indexes/timedeltas/test_indexing.py b/pandas/tests/indexes/timedeltas/test_indexing.py
index 66fdaa2778600..0c2f8d0103ceb 100644
--- a/pandas/tests/indexes/timedeltas/test_indexing.py
+++ b/pandas/tests/indexes/timedeltas/test_indexing.py
@@ -21,14 +21,6 @@
class TestGetItem:
- def test_ellipsis(self):
- # GH#21282
- idx = timedelta_range("1 day", "31 day", freq="D", name="idx")
-
- result = idx[...]
- assert result.equals(idx)
- assert result is not idx
-
def test_getitem_slice_keeps_name(self):
# GH#4226
tdi = timedelta_range("1d", "5d", freq="H", name="timebucket")
diff --git a/pandas/tests/indexing/test_datetime.py b/pandas/tests/indexing/test_datetime.py
index e46eed05caa86..332ab02255911 100644
--- a/pandas/tests/indexing/test_datetime.py
+++ b/pandas/tests/indexing/test_datetime.py
@@ -130,7 +130,7 @@ def test_nanosecond_getitem_setitem_with_tz(self):
expected = DataFrame(-1, index=index, columns=["a"])
tm.assert_frame_equal(result, expected)
- def test_getitem_millisecond_resolution(self, frame_or_series):
+ def test_getitem_str_slice_millisecond_resolution(self, frame_or_series):
# GH#33589
keys = [
@@ -152,16 +152,3 @@ def test_getitem_millisecond_resolution(self, frame_or_series):
],
)
tm.assert_equal(result, expected)
-
- def test_str_subclass(self):
- # GH 37366
- class mystring(str):
- pass
-
- data = ["2020-10-22 01:21:00+00:00"]
- index = pd.DatetimeIndex(data)
- df = DataFrame({"a": [1]}, index=index)
- df["b"] = 2
- df[mystring("c")] = 3
- expected = DataFrame({"a": [1], "b": [2], mystring("c"): [3]}, index=index)
- tm.assert_equal(df, expected)
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 2805c8877ed78..6a9ece738952d 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -323,9 +323,9 @@ def test_dups_fancy_indexing3(self):
def test_duplicate_int_indexing(self, indexer_sl):
# GH 17347
- s = Series(range(3), index=[1, 1, 3])
- expected = s[1]
- result = indexer_sl(s)[[1]]
+ ser = Series(range(3), index=[1, 1, 3])
+ expected = Series(range(2), index=[1, 1])
+ result = indexer_sl(ser)[[1]]
tm.assert_series_equal(result, expected)
def test_indexing_mixed_frame_bug(self):
@@ -653,13 +653,6 @@ def test_loc_setitem_fullindex_views(self):
df.loc[df.index] = df.loc[df.index]
tm.assert_frame_equal(df, df2)
- def test_float_index_at_iat(self):
- s = Series([1, 2, 3], index=[0.1, 0.2, 0.3])
- for el, item in s.items():
- assert s.at[el] == item
- for i in range(len(s)):
- assert s.iat[i] == i + 1
-
def test_rhs_alignment(self):
# GH8258, tests that both rows & columns are aligned to what is
# assigned to. covers both uniform data-type & multi-type cases
@@ -963,7 +956,11 @@ def test_extension_array_cross_section():
def test_extension_array_cross_section_converts():
# all numeric columns -> numeric series
df = DataFrame(
- {"A": pd.array([1, 2], dtype="Int64"), "B": np.array([1, 2])}, index=["a", "b"]
+ {
+ "A": pd.array([1, 2], dtype="Int64"),
+ "B": np.array([1, 2], dtype="int64"),
+ },
+ index=["a", "b"],
)
result = df.loc["a"]
expected = Series([1, 1], dtype="Int64", index=["A", "B"], name="a")
@@ -983,10 +980,3 @@ def test_extension_array_cross_section_converts():
result = df.iloc[0]
tm.assert_series_equal(result, expected)
-
-
-def test_getitem_object_index_float_string():
- # GH 17286
- s = Series([1] * 4, index=Index(["a", "b", "c", 1.0]))
- assert s["a"] == 1
- assert s[1.0] == 1
diff --git a/pandas/tests/indexing/test_scalar.py b/pandas/tests/indexing/test_scalar.py
index bf262e6755289..bcb76fb078e74 100644
--- a/pandas/tests/indexing/test_scalar.py
+++ b/pandas/tests/indexing/test_scalar.py
@@ -77,6 +77,13 @@ def _check(f, func, values=False):
class TestAtAndiAT:
# at and iat tests that don't need Base class
+ def test_float_index_at_iat(self):
+ ser = Series([1, 2, 3], index=[0.1, 0.2, 0.3])
+ for el, item in ser.items():
+ assert ser.at[el] == item
+ for i in range(len(ser)):
+ assert ser.iat[i] == i + 1
+
def test_at_iat_coercion(self):
# as timestamp is not a tuple!
diff --git a/pandas/tests/series/indexing/test_getitem.py b/pandas/tests/series/indexing/test_getitem.py
index 03b1c512f9053..4c17917b949ca 100644
--- a/pandas/tests/series/indexing/test_getitem.py
+++ b/pandas/tests/series/indexing/test_getitem.py
@@ -36,6 +36,12 @@
class TestSeriesGetitemScalars:
+ def test_getitem_object_index_float_string(self):
+ # GH#17286
+ ser = Series([1] * 4, index=Index(["a", "b", "c", 1.0]))
+ assert ser["a"] == 1
+ assert ser[1.0] == 1
+
def test_getitem_float_keys_tuple_values(self):
# see GH#13509
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44413 | 2021-11-12T18:00:43Z | 2021-11-14T02:05:39Z | 2021-11-14T02:05:39Z | 2021-11-14T15:12:55Z |
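The diff in this row repeatedly collapses duplicated per-subclass tests (e.g. the three copies of `test_ellipsis`) into one fixture-driven test such as `test_getitem_ellipsis`. A minimal pytest sketch of that consolidation pattern, using a hypothetical parametrized fixture in place of pandas' `index` / `index_flat` fixtures:

```python
import pytest

# One parametrized fixture replaces N copy-pasted per-class tests:
# every param runs through the same test body below.
@pytest.fixture(params=[[1, 2, 3], ["a", "b", "c"], [1.5, 2.5]])
def index(request):
    return list(request.param)

def test_getitem_ellipsis(index):
    # Mirrors the consolidated GH#21282 check: a full slice returns an
    # equal but distinct object (lists copy under full slicing).
    result = index[:]
    assert result == index
    assert result is not index
```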
REF: EA quantile logic to EA._quantile | diff --git a/pandas/core/array_algos/quantile.py b/pandas/core/array_algos/quantile.py
index a1b40acc2558e..261d19ade080f 100644
--- a/pandas/core/array_algos/quantile.py
+++ b/pandas/core/array_algos/quantile.py
@@ -1,7 +1,5 @@
from __future__ import annotations
-from typing import TYPE_CHECKING
-
import numpy as np
from pandas._typing import (
@@ -9,7 +7,6 @@
npt,
)
-from pandas.core.dtypes.common import is_sparse
from pandas.core.dtypes.missing import (
isna,
na_value_for_dtype,
@@ -17,9 +14,6 @@
from pandas.core.nanops import nanpercentile
-if TYPE_CHECKING:
- from pandas.core.arrays import ExtensionArray
-
def quantile_compat(
values: ArrayLike, qs: npt.NDArray[np.float64], interpolation: str
@@ -40,23 +34,12 @@ def quantile_compat(
if isinstance(values, np.ndarray):
fill_value = na_value_for_dtype(values.dtype, compat=False)
mask = isna(values)
- return _quantile_with_mask(values, mask, fill_value, qs, interpolation)
+ return quantile_with_mask(values, mask, fill_value, qs, interpolation)
else:
- # In general we don't want to import from arrays here;
- # this is temporary pending discussion in GH#41428
- from pandas.core.arrays import BaseMaskedArray
-
- if isinstance(values, BaseMaskedArray):
- # e.g. IntegerArray, does not implement _from_factorized
- out = _quantile_ea_fallback(values, qs, interpolation)
-
- else:
- out = _quantile_ea_compat(values, qs, interpolation)
+ return values._quantile(qs, interpolation)
- return out
-
-def _quantile_with_mask(
+def quantile_with_mask(
values: np.ndarray,
mask: np.ndarray,
fill_value,
@@ -114,82 +97,3 @@ def _quantile_with_mask(
result = result.T
return result
-
-
-def _quantile_ea_compat(
- values: ExtensionArray, qs: npt.NDArray[np.float64], interpolation: str
-) -> ExtensionArray:
- """
- ExtensionArray compatibility layer for _quantile_with_mask.
-
- We pretend that an ExtensionArray with shape (N,) is actually (1, N,)
- for compatibility with non-EA code.
-
- Parameters
- ----------
- values : ExtensionArray
- qs : np.ndarray[float64]
- interpolation: str
-
- Returns
- -------
- ExtensionArray
- """
- # TODO(EA2D): make-believe not needed with 2D EAs
- orig = values
-
- # asarray needed for Sparse, see GH#24600
- mask = np.asarray(values.isna())
- mask = np.atleast_2d(mask)
-
- arr, fill_value = values._values_for_factorize()
- arr = np.atleast_2d(arr)
-
- result = _quantile_with_mask(arr, mask, fill_value, qs, interpolation)
-
- if not is_sparse(orig.dtype):
- # shape[0] should be 1 as long as EAs are 1D
-
- if orig.ndim == 2:
- # i.e. DatetimeArray
- result = type(orig)._from_factorized(result, orig)
-
- else:
- assert result.shape == (1, len(qs)), result.shape
- result = type(orig)._from_factorized(result[0], orig)
-
- # error: Incompatible return value type (got "ndarray", expected "ExtensionArray")
- return result # type: ignore[return-value]
-
-
-def _quantile_ea_fallback(
- values: ExtensionArray, qs: npt.NDArray[np.float64], interpolation: str
-) -> ExtensionArray:
- """
- quantile compatibility for ExtensionArray subclasses that do not
- implement `_from_factorized`, e.g. IntegerArray.
-
- Notes
- -----
- We assume that all impacted cases are 1D-only.
- """
- mask = np.atleast_2d(np.asarray(values.isna()))
- npvalues = np.atleast_2d(np.asarray(values))
-
- res = _quantile_with_mask(
- npvalues,
- mask=mask,
- fill_value=values.dtype.na_value,
- qs=qs,
- interpolation=interpolation,
- )
- assert res.ndim == 2
- assert res.shape[0] == 1
- res = res[0]
- try:
- out = type(values)._from_sequence(res, dtype=values.dtype)
- except TypeError:
- # GH#42626: not able to safely cast Int64
- # for floating point output
- out = np.atleast_2d(np.asarray(res, dtype=np.float64))
- return out
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index 9d534a5a8d815..21f83f8373586 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -53,6 +53,7 @@
unique,
value_counts,
)
+from pandas.core.array_algos.quantile import quantile_with_mask
from pandas.core.array_algos.transforms import shift
from pandas.core.arrays.base import ExtensionArray
from pandas.core.construction import extract_array
@@ -463,6 +464,30 @@ def value_counts(self, dropna: bool = True):
index = Index(index_arr, name=result.index.name)
return Series(result._values, index=index, name=result.name)
+ def _quantile(
+ self: NDArrayBackedExtensionArrayT,
+ qs: npt.NDArray[np.float64],
+ interpolation: str,
+ ) -> NDArrayBackedExtensionArrayT:
+ # TODO: disable for Categorical if not ordered?
+
+ # asarray needed for Sparse, see GH#24600
+ mask = np.asarray(self.isna())
+ mask = np.atleast_2d(mask)
+
+ arr = np.atleast_2d(self._ndarray)
+ # TODO: something NDArrayBacked-specific instead of _values_for_factorize[1]?
+ fill_value = self._values_for_factorize()[1]
+
+ res_values = quantile_with_mask(arr, mask, fill_value, qs, interpolation)
+
+ result = type(self)._from_factorized(res_values, self)
+ if self.ndim == 1:
+ assert result.shape == (1, len(qs)), result.shape
+ result = result[0]
+
+ return result
+
# ------------------------------------------------------------------------
# numpy-like methods
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index a64aef64ab49f..d07c1eb398b9a 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -75,6 +75,7 @@
isin,
unique,
)
+from pandas.core.array_algos.quantile import quantile_with_mask
from pandas.core.sorting import (
nargminmax,
nargsort,
@@ -1494,6 +1495,41 @@ def _empty(cls, shape: Shape, dtype: ExtensionDtype):
)
return result
+ def _quantile(
+ self: ExtensionArrayT, qs: npt.NDArray[np.float64], interpolation: str
+ ) -> ExtensionArrayT:
+ """
+ Compute the quantiles of self for each quantile in `qs`.
+
+ Parameters
+ ----------
+ qs : np.ndarray[float64]
+ interpolation: str
+
+ Returns
+ -------
+ same type as self
+ """
+ # asarray needed for Sparse, see GH#24600
+ mask = np.asarray(self.isna())
+ mask = np.atleast_2d(mask)
+
+ arr = np.atleast_2d(np.asarray(self))
+ fill_value = np.nan
+
+ res_values = quantile_with_mask(arr, mask, fill_value, qs, interpolation)
+
+ if self.ndim == 2:
+ # i.e. DatetimeArray
+ result = type(self)._from_sequence(res_values)
+
+ else:
+ # shape[0] should be 1 as long as EAs are 1D
+ assert res_values.shape == (1, len(qs)), res_values.shape
+ result = type(self)._from_sequence(res_values[0])
+
+ return result
+
def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
if any(
isinstance(other, (ABCSeries, ABCIndex, ABCDataFrame)) for other in inputs
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index b334a167d3824..9d98bd8045006 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -65,6 +65,7 @@
take,
)
from pandas.core.array_algos import masked_reductions
+from pandas.core.array_algos.quantile import quantile_with_mask
from pandas.core.arraylike import OpsMixin
from pandas.core.arrays import ExtensionArray
from pandas.core.indexers import check_array_indexer
@@ -692,6 +693,38 @@ def equals(self, other) -> bool:
right = other._data[~other._mask]
return array_equivalent(left, right, dtype_equal=True)
+ def _quantile(
+ self: BaseMaskedArrayT, qs: npt.NDArray[np.float64], interpolation: str
+ ) -> BaseMaskedArrayT:
+ """
+ Dispatch to quantile_with_mask, needed because we do not have
+ _from_factorized.
+
+ Notes
+ -----
+ We assume that all impacted cases are 1D-only.
+ """
+ mask = np.atleast_2d(np.asarray(self.isna()))
+ npvalues = np.atleast_2d(np.asarray(self))
+
+ res = quantile_with_mask(
+ npvalues,
+ mask=mask,
+ fill_value=self.dtype.na_value,
+ qs=qs,
+ interpolation=interpolation,
+ )
+ assert res.ndim == 2
+ assert res.shape[0] == 1
+ res = res[0]
+ try:
+ out = type(self)._from_sequence(res, dtype=self.dtype)
+ except TypeError:
+ # GH#42626: not able to safely cast Int64
+ # for floating point output
+ out = np.asarray(res, dtype=np.float64)
+ return out
+
def _reduce(self, name: str, *, skipna: bool = True, **kwargs):
if name in {"any", "all"}:
return getattr(self, name)(skipna=skipna, **kwargs)
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index c054710a01f75..9b2e391966070 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -863,6 +863,12 @@ def value_counts(self, dropna: bool = True) -> Series:
keys = Index(keys)
return Series(counts, index=keys)
+ def _quantile(self, qs: npt.NDArray[np.float64], interpolation: str):
+ # Special case: the returned array isn't _really_ sparse, so we don't
+ # wrap it in a SparseArray
+ result = super()._quantile(qs, interpolation)
+ return np.asarray(result)
+
# --------
# Indexing
# --------
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 550bc4ac56d4b..3654f77825ab4 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1310,6 +1310,9 @@ def quantile(
assert is_list_like(qs) # caller is responsible for this
result = quantile_compat(self.values, np.asarray(qs._values), interpolation)
+ # ensure_block_shape needed for cases where we start with EA and result
+ # is ndarray, e.g. IntegerArray, SparseArray
+ result = ensure_block_shape(result, ndim=2)
return new_block_2d(result, placement=self._mgr_locs)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44412 | 2021-11-12T17:33:53Z | 2021-11-28T19:25:22Z | 2021-11-28T19:25:22Z | 2021-11-28T19:38:51Z |
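The `_quantile` methods added in this diff all funnel into `quantile_with_mask`. A minimal sketch of that idea in plain NumPy, for the 1-D case only; the function name and signature here are illustrative stand-ins, not pandas' actual implementation:

```python
import numpy as np

def quantile_with_mask_sketch(values, mask, qs, fill_value=np.nan):
    """Illustrative stand-in for a masked quantile computation (1-D case)."""
    vals = values.astype(np.float64)  # copy, so the caller's array is untouched
    vals[mask] = np.nan               # masked entries are ignored
    if mask.all():
        # all values missing: return the fill value for every requested quantile
        return np.full(len(qs), fill_value)
    return np.nanquantile(vals, qs)

# [1, <NA>, <NA>, 4] -> median of the unmasked values 1 and 4
print(quantile_with_mask_sketch(
    np.array([1.0, 2.0, 3.0, 4.0]),
    np.array([False, True, True, False]),
    [0.5],
))  # [2.5]
```

This also mirrors the `try`/`except TypeError` concern in `masked.py`: when the result is fractional (2.5 here), it cannot be safely cast back to an integer dtype, so the diff falls back to a float64 ndarray.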
WARN: Add FutureWarning for `DataFrame.to_latex` | diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst
index 36b591c3c3142..3d3ec53948a01 100644
--- a/doc/source/whatsnew/v1.2.0.rst
+++ b/doc/source/whatsnew/v1.2.0.rst
@@ -150,6 +150,7 @@ and a short caption (:issue:`36267`).
The keyword ``position`` has been added to set the position.
.. ipython:: python
+ :okwarning:
data = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
table = data.to_latex(position='ht')
@@ -161,6 +162,7 @@ one can optionally provide a tuple ``(full_caption, short_caption)``
to add a short caption macro.
.. ipython:: python
+ :okwarning:
data = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
table = data.to_latex(caption=('the full long caption', 'short caption'))
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index 1f656f267783f..462e0fd139a94 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -465,6 +465,7 @@ Other Deprecations
- Deprecated the 'errors' keyword argument in :meth:`Series.where`, :meth:`DataFrame.where`, :meth:`Series.mask`, and meth:`DataFrame.mask`; in a future version the argument will be removed (:issue:`44294`)
- Deprecated :meth:`PeriodIndex.astype` to ``datetime64[ns]`` or ``DatetimeTZDtype``, use ``obj.to_timestamp(how).tz_localize(dtype.tz)`` instead (:issue:`44398`)
- Deprecated :meth:`DateOffset.apply`, use ``offset + other`` instead (:issue:`44522`)
+- A deprecation warning is now shown for :meth:`DataFrame.to_latex` indicating the arguments signature may change and emulate more the arguments to :meth:`.Styler.to_latex` in future versions (:issue:`44411`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 888376ea8e1dc..601b8dcd504d6 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3272,6 +3272,7 @@ def to_latex(
{returns}
See Also
--------
+ Styler.to_latex : Render a DataFrame to LaTeX with conditional formatting.
DataFrame.to_string : Render a DataFrame to a console-friendly
tabular output.
DataFrame.to_html : Render a DataFrame as an HTML table.
@@ -3281,7 +3282,7 @@ def to_latex(
>>> df = pd.DataFrame(dict(name=['Raphael', 'Donatello'],
... mask=['red', 'purple'],
... weapon=['sai', 'bo staff']))
- >>> print(df.to_latex(index=False)) # doctest: +NORMALIZE_WHITESPACE
+ >>> print(df.to_latex(index=False)) # doctest: +SKIP
\begin{{tabular}}{{lll}}
\toprule
name & mask & weapon \\
@@ -3291,6 +3292,15 @@ def to_latex(
\bottomrule
\end{{tabular}}
"""
+ msg = (
+ "In future versions `DataFrame.to_latex` is expected to utilise the base "
+ "implementation of `Styler.to_latex` for formatting and rendering. "
+ "The arguments signature may therefore change. It is recommended instead "
+ "to use `DataFrame.style.to_latex` which also contains additional "
+ "functionality."
+ )
+ warnings.warn(msg, FutureWarning, stacklevel=find_stack_level())
+
# Get defaults from the pandas config
if self.ndim == 1:
self = self.to_frame()
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index b288fafd8f7f6..bb80bd12c1958 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -265,6 +265,7 @@ def test_repr_column_name_unicode_truncation_bug(self):
with option_context("display.max_columns", 20):
assert "StringCol" in repr(df)
+ @pytest.mark.filterwarnings("ignore::FutureWarning")
def test_latex_repr(self):
result = r"""\begin{tabular}{llll}
\toprule
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index d9bd8f6809c73..ab0199dca3f24 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -3298,6 +3298,7 @@ def test_repr_html_ipython_config(ip):
assert not result.error_in_exec
+@pytest.mark.filterwarnings("ignore:In future versions `DataFrame.to_latex`")
@pytest.mark.parametrize("method", ["to_string", "to_html", "to_latex"])
@pytest.mark.parametrize(
"encoding, data",
@@ -3319,7 +3320,8 @@ def test_filepath_or_buffer_arg(
):
getattr(df, method)(buf=filepath_or_buffer, encoding=encoding)
elif encoding == "foo":
- with tm.assert_produces_warning(None):
+ expected_warning = FutureWarning if method == "to_latex" else None
+ with tm.assert_produces_warning(expected_warning):
with pytest.raises(LookupError, match="unknown encoding"):
getattr(df, method)(buf=filepath_or_buffer, encoding=encoding)
else:
@@ -3328,6 +3330,7 @@ def test_filepath_or_buffer_arg(
assert_filepath_or_buffer_equals(expected)
+@pytest.mark.filterwarnings("ignore::FutureWarning")
@pytest.mark.parametrize("method", ["to_string", "to_html", "to_latex"])
def test_filepath_or_buffer_bad_arg_raises(float_frame, method):
msg = "buf is not a file name and it has no write method"
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index 10c8ccae67fb2..01bc94bf594d9 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -19,6 +19,8 @@
RowStringConverter,
)
+pytestmark = pytest.mark.filterwarnings("ignore::FutureWarning")
+
def _dedent(string):
"""Dedent without new line in the beginning.
@@ -1514,3 +1516,15 @@ def test_get_strrow_multindex_multicolumn(self, row_num, expected):
)
assert row_string_converter.get_strrow(row_num=row_num) == expected
+
+ def test_future_warning(self):
+ df = DataFrame([[1]])
+ msg = (
+ "In future versions `DataFrame.to_latex` is expected to utilise the base "
+ "implementation of `Styler.to_latex` for formatting and rendering. "
+ "The arguments signature may therefore change. It is recommended instead "
+ "to use `DataFrame.style.to_latex` which also contains additional "
+ "functionality."
+ )
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ df.to_latex()
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 5f1256c4e5ba3..a782f8dbbc76d 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -348,6 +348,7 @@ def test_read_fspath_all(self, reader, module, path, datapath):
else:
tm.assert_frame_equal(result, expected)
+ @pytest.mark.filterwarnings("ignore:In future versions `DataFrame.to_latex`")
@pytest.mark.parametrize(
"writer_name, writer_kwargs, module",
[
diff --git a/pandas/tests/series/test_repr.py b/pandas/tests/series/test_repr.py
index d3ff7f4dc7b4c..de34caa7b4387 100644
--- a/pandas/tests/series/test_repr.py
+++ b/pandas/tests/series/test_repr.py
@@ -196,6 +196,7 @@ def test_timeseries_repr_object_dtype(self):
ts2 = ts.iloc[np.random.randint(0, len(ts) - 1, 400)]
repr(ts2).splitlines()[-1]
+ @pytest.mark.filterwarnings("ignore::FutureWarning")
def test_latex_repr(self):
result = r"""\begin{tabular}{ll}
\toprule
| Instead of #41648, which performs the refactor and adds the warning in the same PR, I am proposing to add only the warning for 1.4.0; in 2.0 there will then be scope to make breaking changes and refactor the argument signature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/44411 | 2021-11-12T17:25:26Z | 2021-11-24T06:19:59Z | 2021-11-24T06:19:58Z | 2022-11-18T22:08:11Z |
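The warning mechanics added in `generic.py` follow the standard `warnings.warn(..., FutureWarning)` pattern, which the test suite then asserts or filters. A self-contained sketch of emitting and catching such a deprecation warning; `to_latex_stub` is a stand-in function, not pandas code:

```python
import warnings

def to_latex_stub():
    # Emit the deprecation notice first, then render as before.
    warnings.warn(
        "In future versions `DataFrame.to_latex` is expected to utilise the "
        "base implementation of `Styler.to_latex` for formatting and rendering.",
        FutureWarning,
        stacklevel=2,
    )
    return r"\begin{tabular}{l} x \\ \end{tabular}"

# Capture the warning the same way the added test_future_warning does
# (tm.assert_produces_warning is a wrapper over this stdlib machinery).
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    rendered = to_latex_stub()

assert any(issubclass(w.category, FutureWarning) for w in caught)
```

The `@pytest.mark.filterwarnings("ignore::FutureWarning")` marks sprinkled through the diff are the complementary operation: they suppress this category for tests that call `to_latex` incidentally.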
BUG: read_csv raising if parse_dates is used with MultiIndex columns | diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index d1e209adb1b8f..d3df785c23544 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -574,6 +574,7 @@ I/O
- Bug in :func:`json_normalize` where multi-character ``sep`` parameter is incorrectly prefixed to every key (:issue:`43831`)
- Bug in :func:`read_csv` with :code:`float_precision="round_trip"` which did not skip initial/trailing whitespace (:issue:`43713`)
- Bug in dumping/loading a :class:`DataFrame` with ``yaml.dump(frame)`` (:issue:`42748`)
+- Bug in :func:`read_csv` raising ``ValueError`` when ``parse_dates`` was used with ``MultiIndex`` columns (:issue:`8991`)
-
Period
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 339585810bec1..ba39b6a933a81 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -259,7 +259,8 @@ def _validate_parse_dates_presence(self, columns: list[str]) -> None:
# ParseDates = Union[DateGroups, List[DateGroups],
# Dict[ColReference, DateGroups]]
cols_needed = itertools.chain.from_iterable(
- col if is_list_like(col) else [col] for col in self.parse_dates
+ col if is_list_like(col) and not isinstance(col, tuple) else [col]
+ for col in self.parse_dates
)
else:
cols_needed = []
@@ -1091,7 +1092,7 @@ def _isindex(colspec):
if isinstance(parse_spec, list):
# list of column lists
for colspec in parse_spec:
- if is_scalar(colspec):
+ if is_scalar(colspec) or isinstance(colspec, tuple):
if isinstance(colspec, int) and colspec not in data_dict:
colspec = orig_names[colspec]
if _isindex(colspec):
@@ -1146,7 +1147,11 @@ def _try_convert_dates(parser: Callable, colspec, data_dict, columns):
else:
colnames.append(c)
- new_name = "_".join([str(x) for x in colnames])
+ new_name: tuple | str
+ if all(isinstance(x, tuple) for x in colnames):
+ new_name = tuple(map("_".join, zip(*colnames)))
+ else:
+ new_name = "_".join([str(x) for x in colnames])
to_parse = [np.asarray(data_dict[c]) for c in colnames if c in data_dict]
new_col = parser(*to_parse)
diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py
index c8bea9592e82a..470440290016d 100644
--- a/pandas/tests/io/parser/test_parse_dates.py
+++ b/pandas/tests/io/parser/test_parse_dates.py
@@ -1732,6 +1732,39 @@ def test_date_parser_and_names(all_parsers):
tm.assert_frame_equal(result, expected)
+@skip_pyarrow
+def test_date_parser_multiindex_columns(all_parsers):
+ parser = all_parsers
+ data = """a,b
+1,2
+2019-12-31,6"""
+ result = parser.read_csv(StringIO(data), parse_dates=[("a", "1")], header=[0, 1])
+ expected = DataFrame({("a", "1"): Timestamp("2019-12-31"), ("b", "2"): [6]})
+ tm.assert_frame_equal(result, expected)
+
+
+@skip_pyarrow
+@pytest.mark.parametrize(
+ "parse_spec, col_name",
+ [
+ ([[("a", "1"), ("b", "2")]], ("a_b", "1_2")),
+ ({("foo", "1"): [("a", "1"), ("b", "2")]}, ("foo", "1")),
+ ],
+)
+def test_date_parser_multiindex_columns_combine_cols(all_parsers, parse_spec, col_name):
+ parser = all_parsers
+ data = """a,b,c
+1,2,3
+2019-12,-31,6"""
+ result = parser.read_csv(
+ StringIO(data),
+ parse_dates=parse_spec,
+ header=[0, 1],
+ )
+ expected = DataFrame({col_name: Timestamp("2019-12-31"), ("c", "3"): [6]})
+ tm.assert_frame_equal(result, expected)
+
+
@skip_pyarrow
def test_date_parser_usecols_thousands(all_parsers):
# GH#39365
| - [x] closes #8991
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44408 | 2021-11-12T14:12:49Z | 2021-11-14T02:17:45Z | 2021-11-14T02:17:44Z | 2021-12-15T10:35:54Z |
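The core of the fix is the new-name branch in `_try_convert_dates`: when every combined column label is a tuple (a `MultiIndex` level-set), the labels are joined level-wise instead of flattened to one string. Extracted as a standalone sketch with a hypothetical helper name:

```python
def combine_parsed_col_names(colnames):
    # Level-wise join for MultiIndex (tuple) labels, flat "_" join otherwise,
    # mirroring the new_name branch added in base_parser.py.
    if all(isinstance(x, tuple) for x in colnames):
        return tuple(map("_".join, zip(*colnames)))
    return "_".join(str(x) for x in colnames)

print(combine_parsed_col_names([("a", "1"), ("b", "2")]))  # ('a_b', '1_2')
print(combine_parsed_col_names(["a", "b"]))                # a_b
```

This is exactly the expectation encoded in the added test: parsing `[[("a", "1"), ("b", "2")]]` produces the combined column `("a_b", "1_2")`.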
TST: Add nulls fixture to duplicates categorical na test | diff --git a/pandas/tests/series/methods/test_drop_duplicates.py b/pandas/tests/series/methods/test_drop_duplicates.py
index f72d85337df8e..8b5557ab6e85f 100644
--- a/pandas/tests/series/methods/test_drop_duplicates.py
+++ b/pandas/tests/series/methods/test_drop_duplicates.py
@@ -2,7 +2,6 @@
import pytest
from pandas import (
- NA,
Categorical,
Series,
)
@@ -225,11 +224,13 @@ def test_drop_duplicates_categorical_bool(self, ordered):
assert return_value is None
tm.assert_series_equal(sc, tc[~expected])
- def test_drop_duplicates_categorical_bool_na(self):
+ def test_drop_duplicates_categorical_bool_na(self, nulls_fixture):
# GH#44351
ser = Series(
Categorical(
- [True, False, True, False, NA], categories=[True, False], ordered=True
+ [True, False, True, False, nulls_fixture],
+ categories=[True, False],
+ ordered=True,
)
)
result = ser.drop_duplicates()
diff --git a/pandas/tests/series/methods/test_duplicated.py b/pandas/tests/series/methods/test_duplicated.py
index c61492168da63..1c547ee99efed 100644
--- a/pandas/tests/series/methods/test_duplicated.py
+++ b/pandas/tests/series/methods/test_duplicated.py
@@ -2,7 +2,6 @@
import pytest
from pandas import (
- NA,
Categorical,
Series,
)
@@ -39,11 +38,13 @@ def test_duplicated_nan_none(keep, expected):
tm.assert_series_equal(result, expected)
-def test_duplicated_categorical_bool_na():
+def test_duplicated_categorical_bool_na(nulls_fixture):
# GH#44351
ser = Series(
Categorical(
- [True, False, True, False, NA], categories=[True, False], ordered=True
+ [True, False, True, False, nulls_fixture],
+ categories=[True, False],
+ ordered=True,
)
)
result = ser.duplicated()
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
cc @jorisvandenbossche as a follow up | https://api.github.com/repos/pandas-dev/pandas/pulls/44407 | 2021-11-12T13:17:30Z | 2021-11-12T14:55:43Z | 2021-11-12T14:55:43Z | 2021-11-12T15:16:11Z |
TYP: Typ part of python_parser | diff --git a/pandas/io/parsers/arrow_parser_wrapper.py b/pandas/io/parsers/arrow_parser_wrapper.py
index 9fbeeb74901ef..98d1315c6212c 100644
--- a/pandas/io/parsers/arrow_parser_wrapper.py
+++ b/pandas/io/parsers/arrow_parser_wrapper.py
@@ -110,7 +110,12 @@ def _finalize_output(self, frame: DataFrame) -> DataFrame:
multi_index_named = False
frame.columns = self.names
# we only need the frame not the names
- frame.columns, frame = self._do_date_conversions(frame.columns, frame)
+ # error: Incompatible types in assignment (expression has type
+ # "Union[List[Union[Union[str, int, float, bool], Union[Period, Timestamp,
+ # Timedelta, Any]]], Index]", variable has type "Index") [assignment]
+ frame.columns, frame = self._do_date_conversions( # type: ignore[assignment]
+ frame.columns, frame
+ )
if self.index_col is not None:
for i, item in enumerate(self.index_col):
if is_integer(item):
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 4f5ba3460a3c8..5d03529654b0d 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -10,10 +10,13 @@
Any,
Callable,
DefaultDict,
+ Hashable,
Iterable,
+ Mapping,
Sequence,
cast,
final,
+ overload,
)
import warnings
@@ -56,6 +59,7 @@
from pandas.core.dtypes.dtypes import CategoricalDtype
from pandas.core.dtypes.missing import isna
+from pandas import DataFrame
from pandas.core import algorithms
from pandas.core.arrays import Categorical
from pandas.core.indexes.api import (
@@ -241,7 +245,7 @@ def _open_handles(
errors=kwds.get("encoding_errors", "strict"),
)
- def _validate_parse_dates_presence(self, columns: list[str]) -> Iterable:
+ def _validate_parse_dates_presence(self, columns: Sequence[Hashable]) -> Iterable:
"""
Check if parse_dates are in columns.
@@ -337,11 +341,24 @@ def _should_parse_dates(self, i: int) -> bool:
@final
def _extract_multi_indexer_columns(
- self, header, index_names, passed_names: bool = False
+ self,
+ header,
+ index_names: list | None,
+ passed_names: bool = False,
):
"""
- extract and return the names, index_names, col_names
- header is a list-of-lists returned from the parsers
+ Extract and return the names, index_names, col_names if the column
+ names are a MultiIndex.
+
+ Parameters
+ ----------
+ header: list of lists
+ The header rows
+ index_names: list, optional
+ The names of the future index
+ passed_names: bool, default False
+ A flag specifying if names where passed
+
"""
if len(header) < 2:
return header[0], index_names, None, passed_names
@@ -400,7 +417,7 @@ def extract(r):
return names, index_names, col_names, passed_names
@final
- def _maybe_dedup_names(self, names):
+ def _maybe_dedup_names(self, names: Sequence[Hashable]) -> Sequence[Hashable]:
# see gh-7160 and gh-9424: this helps to provide
# immediate alleviation of the duplicate names
# issue and appears to be satisfactory to users,
@@ -408,7 +425,7 @@ def _maybe_dedup_names(self, names):
# would be nice!
if self.mangle_dupe_cols:
names = list(names) # so we can index
- counts: DefaultDict[int | str | tuple, int] = defaultdict(int)
+ counts: DefaultDict[Hashable, int] = defaultdict(int)
is_potential_mi = _is_potential_multi_index(names, self.index_col)
for i, col in enumerate(names):
@@ -418,6 +435,8 @@ def _maybe_dedup_names(self, names):
counts[col] = cur_count + 1
if is_potential_mi:
+ # for mypy
+ assert isinstance(col, tuple)
col = col[:-1] + (f"{col[-1]}.{cur_count}",)
else:
col = f"{col}.{cur_count}"
@@ -572,7 +591,7 @@ def _agg_index(self, index, try_parse_dates: bool = True) -> Index:
@final
def _convert_to_ndarrays(
self,
- dct: dict,
+ dct: Mapping,
na_values,
na_fvalues,
verbose: bool = False,
@@ -664,7 +683,7 @@ def _convert_to_ndarrays(
@final
def _set_noconvert_dtype_columns(
- self, col_indices: list[int], names: list[int | str | tuple]
+ self, col_indices: list[int], names: Sequence[Hashable]
) -> set[int]:
"""
Set the columns that should not undergo dtype conversions.
@@ -848,7 +867,27 @@ def _cast_types(self, values, cast_type, column):
) from err
return values
- def _do_date_conversions(self, names, data):
+ @overload
+ def _do_date_conversions(
+ self,
+ names: Index,
+ data: DataFrame,
+ ) -> tuple[Sequence[Hashable] | Index, DataFrame]:
+ ...
+
+ @overload
+ def _do_date_conversions(
+ self,
+ names: Sequence[Hashable],
+ data: Mapping[Hashable, ArrayLike],
+ ) -> tuple[Sequence[Hashable], Mapping[Hashable, ArrayLike]]:
+ ...
+
+ def _do_date_conversions(
+ self,
+ names: Sequence[Hashable] | Index,
+ data: Mapping[Hashable, ArrayLike] | DataFrame,
+ ) -> tuple[Sequence[Hashable] | Index, Mapping[Hashable, ArrayLike] | DataFrame]:
# returns data, columns
if self.parse_dates is not None:
@@ -864,7 +903,11 @@ def _do_date_conversions(self, names, data):
return names, data
- def _check_data_length(self, columns: list[str], data: list[ArrayLike]) -> None:
+ def _check_data_length(
+ self,
+ columns: Sequence[Hashable],
+ data: Sequence[ArrayLike],
+ ) -> None:
"""Checks if length of data is equal to length of column names.
One set of trailing commas is allowed. self.index_col not False
diff --git a/pandas/io/parsers/c_parser_wrapper.py b/pandas/io/parsers/c_parser_wrapper.py
index e96df3b3f3782..05c963f2d2552 100644
--- a/pandas/io/parsers/c_parser_wrapper.py
+++ b/pandas/io/parsers/c_parser_wrapper.py
@@ -279,7 +279,7 @@ def read(self, nrows=None):
data_tups = sorted(data.items())
data = {k: v for k, (i, v) in zip(names, data_tups)}
- names, data = self._do_date_conversions(names, data)
+ names, date_data = self._do_date_conversions(names, data)
else:
# rename dict keys
@@ -302,13 +302,13 @@ def read(self, nrows=None):
data = {k: v for k, (i, v) in zip(names, data_tups)}
- names, data = self._do_date_conversions(names, data)
- index, names = self._make_index(data, alldata, names)
+ names, date_data = self._do_date_conversions(names, data)
+ index, names = self._make_index(date_data, alldata, names)
# maybe create a mi on the columns
names = self._maybe_make_multi_index_columns(names, self.col_names)
- return index, names, data
+ return index, names, date_data
def _filter_usecols(self, names):
# hackish
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index 08f8d49dcdf1a..2d1433a8f21c8 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -10,7 +10,10 @@
import sys
from typing import (
DefaultDict,
+ Hashable,
Iterator,
+ Mapping,
+ Sequence,
cast,
)
import warnings
@@ -19,6 +22,7 @@
import pandas._libs.lib as lib
from pandas._typing import (
+ ArrayLike,
FilePath,
ReadCsvBuffer,
Scalar,
@@ -110,9 +114,10 @@ def __init__(
# Get columns in two steps: infer from data, then
# infer column indices from self.usecols if it is specified.
self._col_indices: list[int] | None = None
+ columns: list[list[Scalar | None]]
try:
(
- self.columns,
+ columns,
self.num_original_columns,
self.unnamed_cols,
) = self._infer_columns()
@@ -123,18 +128,19 @@ def __init__(
# Now self.columns has the set of columns that we will process.
# The original set is stored in self.original_columns.
# error: Cannot determine type of 'index_names'
+ self.columns: list[Hashable]
(
self.columns,
self.index_names,
self.col_names,
_,
) = self._extract_multi_indexer_columns(
- self.columns,
+ columns,
self.index_names, # type: ignore[has-type]
)
# get popped off for index
- self.orig_names: list[int | str | tuple] = list(self.columns)
+ self.orig_names: list[Hashable] = list(self.columns)
# needs to be cleaned/refactored
# multiple date column thing turning into a real spaghetti factory
@@ -172,7 +178,7 @@ def __init__(
)
self.num = re.compile(regex)
- def _make_reader(self, f):
+ def _make_reader(self, f) -> None:
sep = self.delimiter
if sep is None or len(sep) == 1:
@@ -238,7 +244,7 @@ def _read():
# TextIOWrapper, mmap, None]")
self.data = reader # type: ignore[assignment]
- def read(self, rows=None):
+ def read(self, rows: int | None = None):
try:
content = self._get_lines(rows)
except StopIteration:
@@ -251,7 +257,7 @@ def read(self, rows=None):
# done with first read, next time raise StopIteration
self._first_chunk = False
- columns = list(self.orig_names)
+ columns: Sequence[Hashable] = list(self.orig_names)
if not len(content): # pragma: no cover
# DataFrame with the right metadata, even though it's length 0
names = self._maybe_dedup_names(self.orig_names)
@@ -275,14 +281,17 @@ def read(self, rows=None):
alldata = self._rows_to_cols(content)
data, columns = self._exclude_implicit_index(alldata)
- data = self._convert_data(data)
- columns, data = self._do_date_conversions(columns, data)
+ conv_data = self._convert_data(data)
+ columns, conv_data = self._do_date_conversions(columns, conv_data)
- index, columns = self._make_index(data, alldata, columns, indexnamerow)
+ index, columns = self._make_index(conv_data, alldata, columns, indexnamerow)
- return index, columns, data
+ return index, columns, conv_data
- def _exclude_implicit_index(self, alldata):
+ def _exclude_implicit_index(
+ self,
+ alldata: list[np.ndarray],
+ ) -> tuple[Mapping[Hashable, np.ndarray], Sequence[Hashable]]:
names = self._maybe_dedup_names(self.orig_names)
offset = 0
@@ -304,7 +313,10 @@ def get_chunk(self, size=None):
size = self.chunksize # type: ignore[attr-defined]
return self.read(rows=size)
- def _convert_data(self, data):
+ def _convert_data(
+ self,
+ data: Mapping[Hashable, np.ndarray],
+ ) -> Mapping[Hashable, ArrayLike]:
# apply converters
clean_conv = self._clean_mapping(self.converters)
clean_dtypes = self._clean_mapping(self.dtype)
@@ -336,11 +348,13 @@ def _convert_data(self, data):
clean_dtypes,
)
- def _infer_columns(self):
+ def _infer_columns(
+ self,
+ ) -> tuple[list[list[Scalar | None]], int, set[Scalar | None]]:
names = self.names
num_original_columns = 0
clear_buffer = True
- unnamed_cols: set[str | int | None] = set()
+ unnamed_cols: set[Scalar | None] = set()
self._header_line = None
if self.header is not None:
@@ -355,7 +369,7 @@ def _infer_columns(self):
have_mi_columns = False
header = [header]
- columns: list[list[int | str | None]] = []
+ columns: list[list[Scalar | None]] = []
for level, hr in enumerate(header):
try:
line = self._buffered_line()
@@ -384,7 +398,7 @@ def _infer_columns(self):
line = self.names[:]
- this_columns: list[int | str | None] = []
+ this_columns: list[Scalar | None] = []
this_unnamed_cols = []
for i, c in enumerate(line):
@@ -447,6 +461,7 @@ def _infer_columns(self):
if clear_buffer:
self._clear_buffer()
+ first_line: list[Scalar] | None
if names is not None:
# Read first row after header to check if data are longer
try:
@@ -522,10 +537,10 @@ def _infer_columns(self):
def _handle_usecols(
self,
- columns: list[list[str | int | None]],
- usecols_key: list[str | int | None],
+ columns: list[list[Scalar | None]],
+ usecols_key: list[Scalar | None],
num_original_columns: int,
- ):
+ ) -> list[list[Scalar | None]]:
"""
Sets self._col_indices
@@ -578,7 +593,7 @@ def _buffered_line(self):
else:
return self._next_line()
- def _check_for_bom(self, first_row):
+ def _check_for_bom(self, first_row: list[Scalar]) -> list[Scalar]:
"""
Checks whether the file begins with the BOM character.
If it does, remove it. In addition, if there is quoting
@@ -609,6 +624,7 @@ def _check_for_bom(self, first_row):
return first_row
first_row_bom = first_row[0]
+ new_row: str
if len(first_row_bom) > 1 and first_row_bom[1] == self.quotechar:
start = 2
@@ -627,9 +643,11 @@ def _check_for_bom(self, first_row):
# No quotation so just remove BOM from first element
new_row = first_row_bom[1:]
- return [new_row] + first_row[1:]
- def _is_line_empty(self, line):
+ new_row_list: list[Scalar] = [new_row]
+ return new_row_list + first_row[1:]
+
+ def _is_line_empty(self, line: list[Scalar]) -> bool:
"""
Check if a line is empty or not.
@@ -644,7 +662,7 @@ def _is_line_empty(self, line):
"""
return not line or all(not x for x in line)
- def _next_line(self):
+ def _next_line(self) -> list[Scalar]:
if isinstance(self.data, list):
while self.skipfunc(self.pos):
self.pos += 1
@@ -698,7 +716,7 @@ def _next_line(self):
self.buf.append(line)
return line
- def _alert_malformed(self, msg, row_num):
+ def _alert_malformed(self, msg: str, row_num: int) -> None:
"""
Alert a user about a malformed row, depending on value of
`self.on_bad_lines` enum.
@@ -708,10 +726,12 @@ def _alert_malformed(self, msg, row_num):
Parameters
----------
- msg : The error message to display.
- row_num : The row number where the parsing error occurred.
- Because this row number is displayed, we 1-index,
- even though we 0-index internally.
+ msg: str
+ The error message to display.
+ row_num: int
+ The row number where the parsing error occurred.
+ Because this row number is displayed, we 1-index,
+ even though we 0-index internally.
"""
if self.on_bad_lines == self.BadLineHandleMethod.ERROR:
raise ParserError(msg)
@@ -719,7 +739,7 @@ def _alert_malformed(self, msg, row_num):
base = f"Skipping line {row_num}: "
sys.stderr.write(base + msg + "\n")
- def _next_iter_line(self, row_num):
+ def _next_iter_line(self, row_num: int) -> list[Scalar] | None:
"""
Wrapper around iterating through `self.data` (CSV source).
@@ -729,12 +749,16 @@ def _next_iter_line(self, row_num):
Parameters
----------
- row_num : The row number of the line being parsed.
+ row_num: int
+ The row number of the line being parsed.
"""
try:
# assert for mypy, data is Iterator[str] or None, would error in next
assert self.data is not None
- return next(self.data)
+ line = next(self.data)
+ # for mypy
+ assert isinstance(line, list)
+ return line
except csv.Error as e:
if (
self.on_bad_lines == self.BadLineHandleMethod.ERROR
@@ -763,7 +787,7 @@ def _next_iter_line(self, row_num):
self._alert_malformed(msg, row_num)
return None
- def _check_comments(self, lines):
+ def _check_comments(self, lines: list[list[Scalar]]) -> list[list[Scalar]]:
if self.comment is None:
return lines
ret = []
@@ -784,19 +808,19 @@ def _check_comments(self, lines):
ret.append(rl)
return ret
- def _remove_empty_lines(self, lines):
+ def _remove_empty_lines(self, lines: list[list[Scalar]]) -> list[list[Scalar]]:
"""
Iterate through the lines and remove any that are
either empty or contain only one whitespace value
Parameters
----------
- lines : array-like
+ lines : list of list of Scalars
The array of lines that we are to filter.
Returns
-------
- filtered_lines : array-like
+ filtered_lines : list of list of Scalars
The same array of lines with the "empty" ones removed.
"""
ret = []
@@ -810,7 +834,7 @@ def _remove_empty_lines(self, lines):
ret.append(line)
return ret
- def _check_thousands(self, lines):
+ def _check_thousands(self, lines: list[list[Scalar]]) -> list[list[Scalar]]:
if self.thousands is None:
return lines
@@ -818,7 +842,9 @@ def _check_thousands(self, lines):
lines=lines, search=self.thousands, replace=""
)
- def _search_replace_num_columns(self, lines, search, replace):
+ def _search_replace_num_columns(
+ self, lines: list[list[Scalar]], search: str, replace: str
+ ) -> list[list[Scalar]]:
ret = []
for line in lines:
rl = []
@@ -835,7 +861,7 @@ def _search_replace_num_columns(self, lines, search, replace):
ret.append(rl)
return ret
- def _check_decimal(self, lines):
+ def _check_decimal(self, lines: list[list[Scalar]]) -> list[list[Scalar]]:
if self.decimal == parser_defaults["decimal"]:
return lines
@@ -843,12 +869,12 @@ def _check_decimal(self, lines):
lines=lines, search=self.decimal, replace="."
)
- def _clear_buffer(self):
+ def _clear_buffer(self) -> None:
self.buf = []
_implicit_index = False
- def _get_index_name(self, columns):
+ def _get_index_name(self, columns: list[Hashable]):
"""
Try several cases to get lines:
@@ -863,6 +889,7 @@ def _get_index_name(self, columns):
orig_names = list(columns)
columns = list(columns)
+ line: list[Scalar] | None
if self._header_line is not None:
line = self._header_line
else:
@@ -871,6 +898,7 @@ def _get_index_name(self, columns):
except StopIteration:
line = None
+ next_line: list[Scalar] | None
try:
next_line = self._next_line()
except StopIteration:
@@ -917,7 +945,7 @@ def _get_index_name(self, columns):
return index_name, orig_names, columns
- def _rows_to_cols(self, content):
+ def _rows_to_cols(self, content: list[list[Scalar]]) -> list[np.ndarray]:
col_len = self.num_original_columns
if self._implicit_index:
@@ -1000,7 +1028,7 @@ def _rows_to_cols(self, content):
]
return zipped_content
- def _get_lines(self, rows=None):
+ def _get_lines(self, rows: int | None = None):
lines = self.buf
new_rows = None
| @simonjayhawkins I am wondering about mypy with overloads.
The overload for ``_do_date_conversions`` could be more specific, e.g.
```
@overload
def _do_date_conversions(
self,
names: list[Scalar | tuple],
data: dict[Scalar | tuple, ArrayLike] | dict[Scalar | tuple, np.ndarray],
) -> tuple[
list[Scalar | tuple],
dict[Scalar | tuple, ArrayLike] | dict[Scalar | tuple, np.ndarray],
]:
...
```
could be transformed to
```
@overload
def _do_date_conversions(
self,
names: list[Scalar | tuple],
data: dict[Scalar | tuple, ArrayLike],
) -> tuple[
list[Scalar | tuple],
dict[Scalar | tuple, ArrayLike],
]:
...
@overload
def _do_date_conversions(
self,
names: list[Scalar | tuple],
data: dict[Scalar | tuple, np.ndarray],
) -> tuple[
list[Scalar | tuple],
dict[Scalar | tuple, np.ndarray],
]:
...
```
But in this case mypy complains about:
`` error: Overloaded function signature 3 will never be matched: signature 2's parameter type(s) are the same or broader [misc]``
On the other side, if typing this only with
```
@overload
def _do_date_conversions(
self,
names: list[Scalar | tuple],
data: dict[Scalar | tuple, ArrayLike],
) -> tuple[
list[Scalar | tuple],
dict[Scalar | tuple, ArrayLike],
]:
...
```
and passing a ``dict[Scalar | tuple, np.ndarray]`` mypy complains with
```
pandas/io/parsers/python_parser.py:283: error: Argument 2 to "_do_date_conversions" of "ParserBase" has incompatible type "Dict[Union[Union[Union[str, int, float, bool], Union[Period, Timestamp, Timedelta, Any]], Tuple[Any, ...]], ndarray[Any, Any]]"; expected "Dict[Union[Union[Union[str, int, float, bool], Union[Period, Timestamp, Timedelta, Any]], Tuple[Any, ...]], Union[ExtensionArray, ndarray[Any, Any]]]" [arg-type]
pandas/io/parsers/python_parser.py:283: note: "Dict" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance
pandas/io/parsers/python_parser.py:283: note: Consider using "Mapping" instead, which is covariant in the value type
```
This looks inconsistent. Shouldn't the overload accept the distinction between ``ArrayLike`` and ``np.ndarray`` inside dicts or lists?
Technically we could probably use ``Mapping``, but we would lose some strictness in this case: the object is always a ``dict``, and we know that if we pass only ``np.ndarray`` values we will get them back in return.
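A minimal stdlib sketch of the variance issue mypy is pointing at (the names here are illustrative, not pandas code): ``Mapping`` is covariant in its value type, while ``Dict`` is invariant, which is exactly why the narrower ``Dict[..., ndarray]`` argument is rejected against a ``Dict[..., ArrayLike]`` parameter.

```python
from typing import Dict, Mapping, Union

Number = Union[int, float]

def total_mapping(data: Mapping[str, Number]) -> float:
    # Mapping is covariant in its value type, so a Dict[str, int]
    # argument is accepted by mypy without complaint
    return float(sum(data.values()))

def total_dict(data: Dict[str, Number]) -> float:
    # Dict is invariant in its value type: passing a Dict[str, int]
    # here triggers the same [arg-type] error quoted above
    return float(sum(data.values()))

ints: Dict[str, int] = {"a": 1, "b": 2}
result_mapping = total_mapping(ints)  # fine at runtime and for mypy
result_dict = total_dict(ints)        # fine at runtime, rejected by mypy
```

Both calls behave identically at runtime; the difference is purely in what mypy accepts, which mirrors the trade-off described above between ``Mapping``'s flexibility and ``dict``'s strictness.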
On another topic: Should we use an alias for ``Scalar | tuple``? We need this throughout the code to indicate column names | https://api.github.com/repos/pandas-dev/pandas/pulls/44406 | 2021-11-12T13:06:14Z | 2021-11-28T23:52:59Z | 2021-11-28T23:52:59Z | 2021-11-29T11:38:45Z |
BUG: .get_indexer_non_unique() must return an array of ints (#44084) | diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index d1e209adb1b8f..2d70f361ba9cd 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -515,7 +515,7 @@ Strings
Interval
^^^^^^^^
--
+- Bug in :meth:`IntervalIndex.get_indexer_non_unique` returning boolean mask instead of array of integers for a non unique and non monotonic index (:issue:`44084`)
-
Indexing
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 5791f89828ca3..885c922d1ee0f 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -727,6 +727,8 @@ def _get_indexer_pointwise(
if isinstance(locs, slice):
# Only needed for get_indexer_non_unique
locs = np.arange(locs.start, locs.stop, locs.step, dtype="intp")
+ elif not self.is_unique and not self.is_monotonic:
+ locs = np.where(locs)[0]
locs = np.array(locs, ndmin=1)
except KeyError:
missing.append(i)
diff --git a/pandas/tests/indexes/interval/test_indexing.py b/pandas/tests/indexes/interval/test_indexing.py
index 8df8eef69e9c9..75f7c69ce5300 100644
--- a/pandas/tests/indexes/interval/test_indexing.py
+++ b/pandas/tests/indexes/interval/test_indexing.py
@@ -8,8 +8,10 @@
from pandas import (
NA,
CategoricalIndex,
+ Index,
Interval,
IntervalIndex,
+ MultiIndex,
NaT,
Timedelta,
date_range,
@@ -373,6 +375,31 @@ def test_get_indexer_with_nans(self):
expected = np.array([0, 1], dtype=np.intp)
tm.assert_numpy_array_equal(result, expected)
+ def test_get_index_non_unique_non_monotonic(self):
+ # GH#44084 (root cause)
+ index = IntervalIndex.from_tuples(
+ [(0.0, 1.0), (1.0, 2.0), (0.0, 1.0), (1.0, 2.0)]
+ )
+
+ result, _ = index.get_indexer_non_unique([Interval(1.0, 2.0)])
+ expected = np.array([1, 3], dtype=np.intp)
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_get_indexer_multiindex_with_intervals(self):
+ # GH#44084 (MultiIndex case as reported)
+ interval_index = IntervalIndex.from_tuples(
+ [(2.0, 3.0), (0.0, 1.0), (1.0, 2.0)], name="interval"
+ )
+ foo_index = Index([1, 2, 3], name="foo")
+
+ multi_index = MultiIndex.from_product([foo_index, interval_index])
+
+ result = multi_index.get_level_values("interval").get_indexer_for(
+ [Interval(0.0, 1.0)]
+ )
+ expected = np.array([1, 4, 7], dtype=np.intp)
+ tm.assert_numpy_array_equal(result, expected)
+
class TestSliceLocs:
def test_slice_locs_with_interval(self):
| GH#44084 boils down to the following.
According to the docs, `.get_indexer_non_unique()` is supposed to return
"integers from 0 to n - 1 indicating that the index at these positions matches
the corresponding target values". However, for an index that is non-unique and
non-monotonic it returns a boolean mask. That is because it uses `.get_loc()`,
which returns a boolean mask for non-unique, non-monotonic indexes.
This patch catches that case and converts the boolean mask from `.get_loc()`
into the corresponding array of integers when the index is not unique and not
monotonic.
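The conversion the patch performs, `np.where(locs)[0]`, turns a boolean mask into integer positions; a pure-Python sketch of that step (not pandas code):

```python
def mask_to_positions(mask):
    # pure-Python analogue of np.where(mask)[0]: keep the index of
    # every True entry, yielding integer positions instead of booleans
    return [i for i, hit in enumerate(mask) if hit]

# .get_loc() on a non-unique, non-monotonic index returns a mask like this
mask = [False, True, False, True]
positions = mask_to_positions(mask)  # [1, 3]
```

This matches the expected result in the added test, where `Interval(1.0, 2.0)` occupies positions 1 and 3 of the index.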
- [x] closes #44084
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44404 | 2021-11-12T09:32:22Z | 2021-11-14T02:26:44Z | 2021-11-14T02:26:43Z | 2021-11-16T12:53:04Z |
Backport PR #44356 on branch 1.3.x (Fixed regression in Series.duplicated for categorical dtype with bool categories) | diff --git a/doc/source/whatsnew/v1.3.5.rst b/doc/source/whatsnew/v1.3.5.rst
index 589092c0dd7e3..951b05b65c81b 100644
--- a/doc/source/whatsnew/v1.3.5.rst
+++ b/doc/source/whatsnew/v1.3.5.rst
@@ -16,6 +16,7 @@ Fixed regressions
~~~~~~~~~~~~~~~~~
- Fixed regression in :meth:`Series.equals` when comparing floats with dtype object to None (:issue:`44190`)
- Fixed performance regression in :func:`read_csv` (:issue:`44106`)
+- Fixed regression in :meth:`Series.duplicated` and :meth:`Series.drop_duplicates` when Series has :class:`Categorical` dtype with boolean categories (:issue:`44351`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 3ab0350f23c5a..eb8a1dc5f0e73 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -139,7 +139,7 @@ def _ensure_data(values: ArrayLike) -> tuple[np.ndarray, DtypeObj]:
# i.e. all-bool Categorical, BooleanArray
try:
return np.asarray(values).astype("uint8", copy=False), values.dtype
- except TypeError:
+ except (TypeError, ValueError):
# GH#42107 we have pd.NAs present
return np.asarray(values), values.dtype
diff --git a/pandas/tests/series/methods/test_drop_duplicates.py b/pandas/tests/series/methods/test_drop_duplicates.py
index 7eb51f8037792..f72d85337df8e 100644
--- a/pandas/tests/series/methods/test_drop_duplicates.py
+++ b/pandas/tests/series/methods/test_drop_duplicates.py
@@ -2,6 +2,7 @@
import pytest
from pandas import (
+ NA,
Categorical,
Series,
)
@@ -224,6 +225,20 @@ def test_drop_duplicates_categorical_bool(self, ordered):
assert return_value is None
tm.assert_series_equal(sc, tc[~expected])
+ def test_drop_duplicates_categorical_bool_na(self):
+ # GH#44351
+ ser = Series(
+ Categorical(
+ [True, False, True, False, NA], categories=[True, False], ordered=True
+ )
+ )
+ result = ser.drop_duplicates()
+ expected = Series(
+ Categorical([True, False, np.nan], categories=[True, False], ordered=True),
+ index=[0, 1, 4],
+ )
+ tm.assert_series_equal(result, expected)
+
def test_drop_duplicates_pos_args_deprecation():
# GH#41485
diff --git a/pandas/tests/series/methods/test_duplicated.py b/pandas/tests/series/methods/test_duplicated.py
index 5cc297913e851..c61492168da63 100644
--- a/pandas/tests/series/methods/test_duplicated.py
+++ b/pandas/tests/series/methods/test_duplicated.py
@@ -1,7 +1,11 @@
import numpy as np
import pytest
-from pandas import Series
+from pandas import (
+ NA,
+ Categorical,
+ Series,
+)
import pandas._testing as tm
@@ -33,3 +37,15 @@ def test_duplicated_nan_none(keep, expected):
result = ser.duplicated(keep=keep)
tm.assert_series_equal(result, expected)
+
+
+def test_duplicated_categorical_bool_na():
+ # GH#44351
+ ser = Series(
+ Categorical(
+ [True, False, True, False, NA], categories=[True, False], ordered=True
+ )
+ )
+ result = ser.duplicated()
+ expected = Series([False, False, True, True, False])
+ tm.assert_series_equal(result, expected)
| Backport PR #44356 | https://api.github.com/repos/pandas-dev/pandas/pulls/44402 | 2021-11-12T08:33:01Z | 2021-11-12T09:46:04Z | 2021-11-12T09:46:04Z | 2021-11-12T09:51:40Z |
BUG: DataFrame.stack with EA columns | diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index 8732e1c397ce5..d1e209adb1b8f 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -623,6 +623,8 @@ Reshaping
- Bug in :func:`crosstab` would fail when inputs are lists or tuples (:issue:`44076`)
- Bug in :meth:`DataFrame.append` failing to retain ``index.name`` when appending a list of :class:`Series` objects (:issue:`44109`)
- Fixed metadata propagation in :meth:`Dataframe.apply` method, consequently fixing the same issue for :meth:`Dataframe.transform`, :meth:`Dataframe.nunique` and :meth:`Dataframe.mode` (:issue:`28283`)
+- Bug in :meth:`DataFrame.stack` with ``ExtensionDtype`` columns incorrectly raising (:issue:`43561`)
+-
Sparse
^^^^^^
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index 9c7107ab40644..6c6b14653df75 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -745,13 +745,15 @@ def _convert_level_number(level_num, columns):
if frame._is_homogeneous_type and is_extension_array_dtype(
frame.dtypes.iloc[0]
):
+ # TODO(EA2D): won't need special case, can go through .values
+ # paths below (might change to ._values)
dtype = this[this.columns[loc]].dtypes.iloc[0]
subset = this[this.columns[loc]]
value_slice = dtype.construct_array_type()._concat_same_type(
[x._values for _, x in subset.items()]
)
- N, K = this.shape
+ N, K = subset.shape
idx = np.arange(N * K).reshape(K, N).T.ravel()
value_slice = value_slice.take(idx)
diff --git a/pandas/tests/frame/test_stack_unstack.py b/pandas/tests/frame/test_stack_unstack.py
index 404baecdfecac..62512249dabfc 100644
--- a/pandas/tests/frame/test_stack_unstack.py
+++ b/pandas/tests/frame/test_stack_unstack.py
@@ -2099,3 +2099,27 @@ def test_stack_unsorted(self):
result = DF.stack(["VAR", "TYP"]).sort_index()
expected = DF.sort_index(axis=1).stack(["VAR", "TYP"]).sort_index()
tm.assert_series_equal(result, expected)
+
+ def test_stack_nullable_dtype(self):
+ # GH#43561
+ columns = MultiIndex.from_product(
+ [["54511", "54515"], ["r", "t_mean"]], names=["station", "element"]
+ )
+ index = Index([1, 2, 3], name="time")
+
+ arr = np.array([[50, 226, 10, 215], [10, 215, 9, 220], [305, 232, 111, 220]])
+ df = DataFrame(arr, columns=columns, index=index, dtype=pd.Int64Dtype())
+
+ result = df.stack("station")
+
+ expected = df.astype(np.int64).stack("station").astype(pd.Int64Dtype())
+ tm.assert_frame_equal(result, expected)
+
+ # non-homogeneous case
+ df[df.columns[0]] = df[df.columns[0]].astype(pd.Float64Dtype())
+ result = df.stack("station")
+
+ # TODO(EA2D): we get object dtype because DataFrame.values can't
+ # be an EA
+ expected = df.astype(object).stack("station")
+ tm.assert_frame_equal(result, expected)
| - [x] closes #43561
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44401 | 2021-11-12T00:00:02Z | 2021-11-12T03:11:32Z | 2021-11-12T03:11:32Z | 2021-11-12T17:19:01Z |
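The `idx = np.arange(N * K).reshape(K, N).T.ravel()` line in the diff above builds an interleaving take-order over the concatenated column values; a pure-Python sketch of what it computes (illustrative only):

```python
def interleave_order(N, K):
    # same result as np.arange(N * K).reshape(K, N).T.ravel():
    # walk the N rows, and for each row pick its value from all K columns
    return [row + col * N for row in range(N) for col in range(K)]

# with N=3 rows stacked over K=2 columns, the concatenated values are
# reordered so that each row's column values end up adjacent
order = interleave_order(3, 2)  # [0, 3, 1, 4, 2, 5]
```

The bug fixed here was that `N` and `K` were taken from the full frame (`this.shape`) rather than from the homogeneous `subset` actually being stacked, so the computed order was wrong whenever the two shapes differed.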
CLN: Refactor extract multiindex header call | diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 8cdcc05f60266..339585810bec1 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -314,14 +314,14 @@ def _should_parse_dates(self, i: int) -> bool:
@final
def _extract_multi_indexer_columns(
- self, header, index_names, col_names, passed_names: bool = False
+ self, header, index_names, passed_names: bool = False
):
"""
extract and return the names, index_names, col_names
header is a list-of-lists returned from the parsers
"""
if len(header) < 2:
- return header[0], index_names, col_names, passed_names
+ return header[0], index_names, None, passed_names
# the names are the tuples of the header that are not the index cols
# 0 is the name of the index, assuming index_col is a list of column
diff --git a/pandas/io/parsers/c_parser_wrapper.py b/pandas/io/parsers/c_parser_wrapper.py
index 32ca3aaeba6cc..352dd998dda0f 100644
--- a/pandas/io/parsers/c_parser_wrapper.py
+++ b/pandas/io/parsers/c_parser_wrapper.py
@@ -78,25 +78,18 @@ def __init__(self, src: FilePathOrBuffer, **kwds):
if self._reader.header is None:
self.names = None
else:
- if len(self._reader.header) > 1:
- # we have a multi index in the columns
- # error: Cannot determine type of 'names'
- # error: Cannot determine type of 'index_names'
- # error: Cannot determine type of 'col_names'
- (
- self.names, # type: ignore[has-type]
- self.index_names,
- self.col_names,
- passed_names,
- ) = self._extract_multi_indexer_columns(
- self._reader.header,
- self.index_names, # type: ignore[has-type]
- self.col_names, # type: ignore[has-type]
- passed_names,
- )
- else:
- # error: Cannot determine type of 'names'
- self.names = list(self._reader.header[0]) # type: ignore[has-type]
+ # error: Cannot determine type of 'names'
+ # error: Cannot determine type of 'index_names'
+ (
+ self.names, # type: ignore[has-type]
+ self.index_names,
+ self.col_names,
+ passed_names,
+ ) = self._extract_multi_indexer_columns(
+ self._reader.header,
+ self.index_names, # type: ignore[has-type]
+ passed_names,
+ )
# error: Cannot determine type of 'names'
if self.names is None: # type: ignore[has-type]
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index af253fc062632..b0e868b260369 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -117,24 +117,16 @@ def __init__(self, f: FilePathOrBuffer | list, **kwds):
# Now self.columns has the set of columns that we will process.
# The original set is stored in self.original_columns.
- if len(self.columns) > 1:
- # we are processing a multi index column
- # error: Cannot determine type of 'index_names'
- # error: Cannot determine type of 'col_names'
- (
- self.columns,
- self.index_names,
- self.col_names,
- _,
- ) = self._extract_multi_indexer_columns(
- self.columns,
- self.index_names, # type: ignore[has-type]
- self.col_names, # type: ignore[has-type]
- )
- # Update list of original names to include all indices.
- self.num_original_columns = len(self.columns)
- else:
- self.columns = self.columns[0]
+ # error: Cannot determine type of 'index_names'
+ (
+ self.columns,
+ self.index_names,
+ self.col_names,
+ _,
+ ) = self._extract_multi_indexer_columns(
+ self.columns,
+ self.index_names, # type: ignore[has-type]
+ )
# get popped off for index
self.orig_names: list[int | str | tuple] = list(self.columns)
| I am in the process of typing parts of the parser modules and stumbled across this function. The check I removed is already performed inside the function, and ``self.col_names`` is always None when entering, so there is no need to pass it at all. | https://api.github.com/repos/pandas-dev/pandas/pulls/44399 | 2021-11-11T21:13:29Z | 2021-11-12T03:10:37Z | 2021-11-12T03:10:37Z | 2021-11-12T08:34:12Z |
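The shape of this simplification, moving the single-level guard into the helper so every call site is unconditional, in a stripped-down sketch (names are illustrative, not the pandas implementation):

```python
def extract_columns(header):
    # the len(header) < 2 fast path now lives inside the helper,
    # mirroring _extract_multi_indexer_columns, so callers no longer
    # need their own "is this a MultiIndex header?" branch
    if len(header) < 2:
        return header[0], None  # columns, col_names
    # multi-level case: zip the header rows into per-column tuples
    return list(zip(*header)), None

# both shapes go through the same unconditional call
single = extract_columns([["a", "b"]])
multi = extract_columns([["a", "b"], ["x", "y"]])
```

Pushing the guard into the callee removes duplicated branching from both the C and Python parser wrappers, at the cost of one trivial length check per call.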
DEPR: PeriodIndex.astype(dt64) | diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index 99a66c7e5454b..8a8c1208b7b89 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -400,6 +400,8 @@ Other Deprecations
- Deprecated casting behavior when setting timezone-aware value(s) into a timezone-aware :class:`Series` or :class:`DataFrame` column when the timezones do not match. Previously this cast to object dtype. In a future version, the values being inserted will be converted to the series or column's existing timezone (:issue:`37605`)
- Deprecated casting behavior when passing an item with mismatched-timezone to :meth:`DatetimeIndex.insert`, :meth:`DatetimeIndex.putmask`, :meth:`DatetimeIndex.where` :meth:`DatetimeIndex.fillna`, :meth:`Series.mask`, :meth:`Series.where`, :meth:`Series.fillna`, :meth:`Series.shift`, :meth:`Series.replace`, :meth:`Series.reindex` (and :class:`DataFrame` column analogues). In the past this has cast to object dtype. In a future version, these will cast the passed item to the index or series's timezone (:issue:`37605`)
- Deprecated the 'errors' keyword argument in :meth:`Series.where`, :meth:`DataFrame.where`, :meth:`Series.mask`, and meth:`DataFrame.mask`; in a future version the argument will be removed (:issue:`44294`)
+- Deprecated :meth:`PeriodIndex.astype` to ``datetime64[ns]`` or ``DatetimeTZDtype``, use ``obj.to_timestamp(how).tz_localize(dtype.tz)`` instead (:issue:`44398`)
+-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index fd5b5bb7396af..1db476065a5c8 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -25,6 +25,7 @@
DtypeObj,
)
from pandas.util._decorators import doc
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
is_datetime64_any_dtype,
@@ -353,6 +354,14 @@ def astype(self, dtype, copy: bool = True, how=lib.no_default):
if is_datetime64_any_dtype(dtype):
# 'how' is index-specific, isn't part of the EA interface.
+ # GH#44398 deprecate astype(dt64), matching Series behavior
+ warnings.warn(
+ f"Converting {type(self).__name__} to DatetimeIndex with "
+ "'astype' is deprecated and will raise in a future version. "
+ "Use `obj.to_timestamp(how).tz_localize(dtype.tz)` instead.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
tz = getattr(dtype, "tz", None)
return self.to_timestamp(how=how).tz_localize(tz)
diff --git a/pandas/tests/indexes/period/methods/test_astype.py b/pandas/tests/indexes/period/methods/test_astype.py
index e2340a2db02f7..c44f2efed1fcc 100644
--- a/pandas/tests/indexes/period/methods/test_astype.py
+++ b/pandas/tests/indexes/period/methods/test_astype.py
@@ -164,7 +164,10 @@ def test_period_astype_to_timestamp(self):
assert res.freq == exp.freq
exp = DatetimeIndex(["2011-01-01", "2011-02-01", "2011-03-01"], tz="US/Eastern")
- res = pi.astype("datetime64[ns, US/Eastern]")
+ msg = "Use `obj.to_timestamp"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ # GH#44398
+ res = pi.astype("datetime64[ns, US/Eastern]")
tm.assert_index_equal(res, exp)
assert res.freq == exp.freq
diff --git a/pandas/tests/indexes/test_common.py b/pandas/tests/indexes/test_common.py
index ed9243a5ba8d0..28be474b28de1 100644
--- a/pandas/tests/indexes/test_common.py
+++ b/pandas/tests/indexes/test_common.py
@@ -332,6 +332,9 @@ def test_astype_preserves_name(self, index, dtype):
):
# This astype is deprecated in favor of tz_localize
warn = FutureWarning
+ elif isinstance(index, PeriodIndex) and dtype == "datetime64[ns]":
+ # Deprecated in favor of to_timestamp GH#44398
+ warn = FutureWarning
try:
# Some of these conversions cannot succeed so we use a try / except
with tm.assert_produces_warning(warn):
| - [ ] closes #xxxx
- [x] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
Match the Series/EA behavior | https://api.github.com/repos/pandas-dev/pandas/pulls/44398 | 2021-11-11T21:07:33Z | 2021-11-14T03:20:32Z | 2021-11-14T03:20:32Z | 2021-12-24T17:09:40Z |
ENH: Support timespec argument in Timestamp.isoformat() | diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index 8732e1c397ce5..a6751c486f25b 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -184,6 +184,7 @@ Other enhancements
- :meth:`DataFrame.dropna` now accepts a single label as ``subset`` along with array-like (:issue:`41021`)
- :meth:`read_excel` now accepts a ``decimal`` argument that allow the user to specify the decimal point when parsing string columns to numeric (:issue:`14403`)
- :meth:`.GroupBy.mean` now supports `Numba <http://numba.pydata.org/>`_ execution with the ``engine`` keyword (:issue:`43731`)
+- :meth:`Timestamp.isoformat`, now handles the ``timespec`` argument from the base :class:``datetime`` class (:issue:`26131`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index 2aebf75ba35d4..09bfc4527a428 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -295,7 +295,7 @@ cdef class _NaT(datetime):
def __str__(self) -> str:
return "NaT"
- def isoformat(self, sep="T") -> str:
+ def isoformat(self, sep: str = "T", timespec: str = "auto") -> str:
# This allows Timestamp(ts.isoformat()) to always correctly roundtrip.
return "NaT"
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 613da5a691736..28b8158548ca8 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -737,9 +737,42 @@ cdef class _Timestamp(ABCTimestamp):
# -----------------------------------------------------------------
# Rendering Methods
- def isoformat(self, sep: str = "T") -> str:
- base = super(_Timestamp, self).isoformat(sep=sep)
- if self.nanosecond == 0:
+ def isoformat(self, sep: str = "T", timespec: str = "auto") -> str:
+ """
+ Return the time formatted according to ISO.
+
+ The full format looks like 'YYYY-MM-DD HH:MM:SS.mmmmmmnnn'.
+ By default, the fractional part is omitted if self.microsecond == 0
+ and self.nanosecond == 0.
+
+ If self.tzinfo is not None, the UTC offset is also attached, giving
+ a full format of 'YYYY-MM-DD HH:MM:SS.mmmmmmnnn+HH:MM'.
+
+ Parameters
+ ----------
+ sep : str, default 'T'
+ String used as the separator between the date and time.
+
+ timespec : str, default 'auto'
+ Specifies the number of additional terms of the time to include.
+ The valid values are 'auto', 'hours', 'minutes', 'seconds',
+ 'milliseconds', 'microseconds', and 'nanoseconds'.
+
+ Returns
+ -------
+ str
+
+ Examples
+ --------
+ >>> ts = pd.Timestamp('2020-03-14T15:32:52.192548651')
+ >>> ts.isoformat()
+ '2020-03-14T15:32:52.192548651'
+ >>> ts.isoformat(timespec='microseconds')
+ '2020-03-14T15:32:52.192548'
+ """
+ base_ts = "microseconds" if timespec == "nanoseconds" else timespec
+ base = super(_Timestamp, self).isoformat(sep=sep, timespec=base_ts)
+ if self.nanosecond == 0 and timespec != "nanoseconds":
return base
if self.tzinfo is not None:
@@ -747,10 +780,11 @@ cdef class _Timestamp(ABCTimestamp):
else:
base1, base2 = base, ""
- if self.microsecond != 0:
- base1 += f"{self.nanosecond:03d}"
- else:
- base1 += f".{self.nanosecond:09d}"
+ if timespec == "nanoseconds" or (timespec == "auto" and self.nanosecond):
+ if self.microsecond:
+ base1 += f"{self.nanosecond:03d}"
+ else:
+ base1 += f".{self.nanosecond:09d}"
return base1 + base2
diff --git a/pandas/tests/scalar/test_nat.py b/pandas/tests/scalar/test_nat.py
index 21ed57813b60d..b9718249b38c8 100644
--- a/pandas/tests/scalar/test_nat.py
+++ b/pandas/tests/scalar/test_nat.py
@@ -182,6 +182,7 @@ def test_nat_methods_nat(method):
def test_nat_iso_format(get_nat):
# see gh-12300
assert get_nat("NaT").isoformat() == "NaT"
+ assert get_nat("NaT").isoformat(timespec="nanoseconds") == "NaT"
@pytest.mark.parametrize(
@@ -325,6 +326,10 @@ def test_nat_doc_strings(compare):
klass, method = compare
klass_doc = getattr(klass, method).__doc__
+ # Ignore differences with Timestamp.isoformat() as they're intentional
+ if klass == Timestamp and method == "isoformat":
+ return
+
nat_doc = getattr(NaT, method).__doc__
assert klass_doc == nat_doc
diff --git a/pandas/tests/scalar/timestamp/test_formats.py b/pandas/tests/scalar/timestamp/test_formats.py
new file mode 100644
index 0000000000000..71dbf3539bdb2
--- /dev/null
+++ b/pandas/tests/scalar/timestamp/test_formats.py
@@ -0,0 +1,71 @@
+import pytest
+
+from pandas import Timestamp
+
+ts_no_ns = Timestamp(
+ year=2019,
+ month=5,
+ day=18,
+ hour=15,
+ minute=17,
+ second=8,
+ microsecond=132263,
+)
+ts_ns = Timestamp(
+ year=2019,
+ month=5,
+ day=18,
+ hour=15,
+ minute=17,
+ second=8,
+ microsecond=132263,
+ nanosecond=123,
+)
+ts_ns_tz = Timestamp(
+ year=2019,
+ month=5,
+ day=18,
+ hour=15,
+ minute=17,
+ second=8,
+ microsecond=132263,
+ nanosecond=123,
+ tz="UTC",
+)
+ts_no_us = Timestamp(
+ year=2019,
+ month=5,
+ day=18,
+ hour=15,
+ minute=17,
+ second=8,
+ microsecond=0,
+ nanosecond=123,
+)
+
+
+@pytest.mark.parametrize(
+ "ts, timespec, expected_iso",
+ [
+ (ts_no_ns, "auto", "2019-05-18T15:17:08.132263"),
+ (ts_no_ns, "seconds", "2019-05-18T15:17:08"),
+ (ts_no_ns, "nanoseconds", "2019-05-18T15:17:08.132263000"),
+ (ts_ns, "auto", "2019-05-18T15:17:08.132263123"),
+ (ts_ns, "hours", "2019-05-18T15"),
+ (ts_ns, "minutes", "2019-05-18T15:17"),
+ (ts_ns, "seconds", "2019-05-18T15:17:08"),
+ (ts_ns, "milliseconds", "2019-05-18T15:17:08.132"),
+ (ts_ns, "microseconds", "2019-05-18T15:17:08.132263"),
+ (ts_ns, "nanoseconds", "2019-05-18T15:17:08.132263123"),
+ (ts_ns_tz, "auto", "2019-05-18T15:17:08.132263123+00:00"),
+ (ts_ns_tz, "hours", "2019-05-18T15+00:00"),
+ (ts_ns_tz, "minutes", "2019-05-18T15:17+00:00"),
+ (ts_ns_tz, "seconds", "2019-05-18T15:17:08+00:00"),
+ (ts_ns_tz, "milliseconds", "2019-05-18T15:17:08.132+00:00"),
+ (ts_ns_tz, "microseconds", "2019-05-18T15:17:08.132263+00:00"),
+ (ts_ns_tz, "nanoseconds", "2019-05-18T15:17:08.132263123+00:00"),
+ (ts_no_us, "auto", "2019-05-18T15:17:08.000000123"),
+ ],
+)
+def test_isoformat(ts, timespec, expected_iso):
+ assert ts.isoformat(timespec=timespec) == expected_iso
| - [x] closes #26131
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
This is an update of PR #38550. I added support for "nanoseconds" as an argument, expanded the test cases, and addressed most of the comments in the original PR. | https://api.github.com/repos/pandas-dev/pandas/pulls/44397 | 2021-11-11T20:30:25Z | 2021-11-14T03:19:08Z | 2021-11-14T03:19:08Z | 2021-11-14T03:19:18Z |
TST: parametrize arithmetic tests | diff --git a/pandas/tests/arithmetic/common.py b/pandas/tests/arithmetic/common.py
index af70cdfe538bb..f3173e8f0eb57 100644
--- a/pandas/tests/arithmetic/common.py
+++ b/pandas/tests/arithmetic/common.py
@@ -11,7 +11,26 @@
array,
)
import pandas._testing as tm
-from pandas.core.arrays import PandasArray
+from pandas.core.arrays import (
+ BooleanArray,
+ PandasArray,
+)
+
+
+def assert_cannot_add(left, right, msg="cannot add"):
+ """
+ Helper to assert that left and right cannot be added.
+
+ Parameters
+ ----------
+ left : object
+ right : object
+ msg : str, default "cannot add"
+ """
+ with pytest.raises(TypeError, match=msg):
+ left + right
+ with pytest.raises(TypeError, match=msg):
+ right + left
def assert_invalid_addsub_type(left, right, msg=None):
@@ -79,21 +98,29 @@ def xbox2(x):
# just exclude PandasArray[bool]
if isinstance(x, PandasArray):
return x._ndarray
+ if isinstance(x, BooleanArray):
+ # NB: we are assuming no pd.NAs for now
+ return x.astype(bool)
return x
+ # rev_box: box to use for reversed comparisons
+ rev_box = xbox
+ if isinstance(right, Index) and isinstance(left, Series):
+ rev_box = np.array
+
result = xbox2(left == right)
expected = xbox(np.zeros(result.shape, dtype=np.bool_))
tm.assert_equal(result, expected)
result = xbox2(right == left)
- tm.assert_equal(result, expected)
+ tm.assert_equal(result, rev_box(expected))
result = xbox2(left != right)
tm.assert_equal(result, ~expected)
result = xbox2(right != left)
- tm.assert_equal(result, ~expected)
+ tm.assert_equal(result, rev_box(~expected))
msg = "|".join(
[
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index bff461dbc7038..87bbdfb3c808f 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -41,6 +41,7 @@
)
from pandas.core.ops import roperator
from pandas.tests.arithmetic.common import (
+ assert_cannot_add,
assert_invalid_addsub_type,
assert_invalid_comparison,
get_upcast_box,
@@ -99,6 +100,7 @@ def test_dt64arr_cmp_scalar_invalid(self, other, tz_naive_fixture, box_with_arra
@pytest.mark.parametrize(
"other",
[
+ # GH#4968 invalid date/int comparisons
list(range(10)),
np.arange(10),
np.arange(10).astype(np.float32),
@@ -111,13 +113,14 @@ def test_dt64arr_cmp_scalar_invalid(self, other, tz_naive_fixture, box_with_arra
pd.period_range("1971-01-01", freq="D", periods=10).astype(object),
],
)
- def test_dt64arr_cmp_arraylike_invalid(self, other, tz_naive_fixture):
- # We don't parametrize this over box_with_array because listlike
- # other plays poorly with assert_invalid_comparison reversed checks
+ def test_dt64arr_cmp_arraylike_invalid(
+ self, other, tz_naive_fixture, box_with_array
+ ):
tz = tz_naive_fixture
dta = date_range("1970-01-01", freq="ns", periods=10, tz=tz)._data
- assert_invalid_comparison(dta, other, tm.to_array)
+ obj = tm.box_expected(dta, box_with_array)
+ assert_invalid_comparison(obj, other, box_with_array)
def test_dt64arr_cmp_mixed_invalid(self, tz_naive_fixture):
tz = tz_naive_fixture
@@ -215,18 +218,6 @@ def test_nat_comparisons(
tm.assert_series_equal(result, expected)
- def test_comparison_invalid(self, tz_naive_fixture, box_with_array):
- # GH#4968
- # invalid date/int comparisons
- tz = tz_naive_fixture
- ser = Series(range(5))
- ser2 = Series(date_range("20010101", periods=5, tz=tz))
-
- ser = tm.box_expected(ser, box_with_array)
- ser2 = tm.box_expected(ser2, box_with_array)
-
- assert_invalid_comparison(ser, ser2, box_with_array)
-
@pytest.mark.parametrize(
"data",
[
@@ -315,8 +306,8 @@ def test_timestamp_compare_series(self, left, right):
tm.assert_series_equal(result, expected)
# Compare to NaT with series containing NaT
- expected = left_f(s_nat, Timestamp("nat"))
- result = right_f(Timestamp("nat"), s_nat)
+ expected = left_f(s_nat, NaT)
+ result = right_f(NaT, s_nat)
tm.assert_series_equal(result, expected)
def test_dt64arr_timestamp_equality(self, box_with_array):
@@ -832,17 +823,6 @@ def test_dt64arr_add_timedeltalike_scalar(
result = rng + two_hours
tm.assert_equal(result, expected)
- def test_dt64arr_iadd_timedeltalike_scalar(
- self, tz_naive_fixture, two_hours, box_with_array
- ):
- tz = tz_naive_fixture
-
- rng = date_range("2000-01-01", "2000-02-01", tz=tz)
- expected = date_range("2000-01-01 02:00", "2000-02-01 02:00", tz=tz)
-
- rng = tm.box_expected(rng, box_with_array)
- expected = tm.box_expected(expected, box_with_array)
-
rng += two_hours
tm.assert_equal(rng, expected)
@@ -860,17 +840,6 @@ def test_dt64arr_sub_timedeltalike_scalar(
result = rng - two_hours
tm.assert_equal(result, expected)
- def test_dt64arr_isub_timedeltalike_scalar(
- self, tz_naive_fixture, two_hours, box_with_array
- ):
- tz = tz_naive_fixture
-
- rng = date_range("2000-01-01", "2000-02-01", tz=tz)
- expected = date_range("1999-12-31 22:00", "2000-01-31 22:00", tz=tz)
-
- rng = tm.box_expected(rng, box_with_array)
- expected = tm.box_expected(expected, box_with_array)
-
rng -= two_hours
tm.assert_equal(rng, expected)
@@ -1071,21 +1040,14 @@ def test_dt64arr_add_dt64ndarray_raises(self, tz_naive_fixture, box_with_array):
dt64vals = dti.values
dtarr = tm.box_expected(dti, box_with_array)
- msg = "cannot add"
- with pytest.raises(TypeError, match=msg):
- dtarr + dt64vals
- with pytest.raises(TypeError, match=msg):
- dt64vals + dtarr
+ assert_cannot_add(dtarr, dt64vals)
def test_dt64arr_add_timestamp_raises(self, box_with_array):
# GH#22163 ensure DataFrame doesn't cast Timestamp to i8
idx = DatetimeIndex(["2011-01-01", "2011-01-02"])
+ ts = idx[0]
idx = tm.box_expected(idx, box_with_array)
- msg = "cannot add"
- with pytest.raises(TypeError, match=msg):
- idx + Timestamp("2011-01-01")
- with pytest.raises(TypeError, match=msg):
- Timestamp("2011-01-01") + idx
+ assert_cannot_add(idx, ts)
# -------------------------------------------------------------
# Other Invalid Addition/Subtraction
@@ -1267,13 +1229,12 @@ def test_dti_add_tick_tzaware(self, tz_aware_fixture, box_with_array):
dates = tm.box_expected(dates, box_with_array)
expected = tm.box_expected(expected, box_with_array)
- # TODO: parametrize over the scalar being added? radd? sub?
- offset = dates + pd.offsets.Hour(5)
- tm.assert_equal(offset, expected)
- offset = dates + np.timedelta64(5, "h")
- tm.assert_equal(offset, expected)
- offset = dates + timedelta(hours=5)
- tm.assert_equal(offset, expected)
+ # TODO: sub?
+ for scalar in [pd.offsets.Hour(5), np.timedelta64(5, "h"), timedelta(hours=5)]:
+ offset = dates + scalar
+ tm.assert_equal(offset, expected)
+ offset = scalar + dates
+ tm.assert_equal(offset, expected)
# -------------------------------------------------------------
# RelativeDelta DateOffsets
@@ -1941,8 +1902,7 @@ def test_dt64_mul_div_numeric_invalid(self, one, dt64_series):
one / dt64_series
# TODO: parametrize over box
- @pytest.mark.parametrize("op", ["__add__", "__radd__", "__sub__", "__rsub__"])
- def test_dt64_series_add_intlike(self, tz_naive_fixture, op):
+ def test_dt64_series_add_intlike(self, tz_naive_fixture):
# GH#19123
tz = tz_naive_fixture
dti = DatetimeIndex(["2016-01-02", "2016-02-03", "NaT"], tz=tz)
@@ -1950,21 +1910,16 @@ def test_dt64_series_add_intlike(self, tz_naive_fixture, op):
other = Series([20, 30, 40], dtype="uint8")
- method = getattr(ser, op)
msg = "|".join(
[
"Addition/subtraction of integers and integer-arrays",
"cannot subtract .* from ndarray",
]
)
- with pytest.raises(TypeError, match=msg):
- method(1)
- with pytest.raises(TypeError, match=msg):
- method(other)
- with pytest.raises(TypeError, match=msg):
- method(np.array(other))
- with pytest.raises(TypeError, match=msg):
- method(pd.Index(other))
+ assert_invalid_addsub_type(ser, 1, msg)
+ assert_invalid_addsub_type(ser, other, msg)
+ assert_invalid_addsub_type(ser, np.array(other), msg)
+ assert_invalid_addsub_type(ser, pd.Index(other), msg)
# -------------------------------------------------------------
# Timezone-Centric Tests
@@ -2062,7 +2017,9 @@ def test_dti_add_intarray_tick(self, int_holder, freq):
dti = date_range("2016-01-01", periods=2, freq=freq)
other = int_holder([4, -1])
- msg = "Addition/subtraction of integers|cannot subtract DatetimeArray from"
+ msg = "|".join(
+ ["Addition/subtraction of integers", "cannot subtract DatetimeArray from"]
+ )
assert_invalid_addsub_type(dti, other, msg)
@pytest.mark.parametrize("freq", ["W", "M", "MS", "Q"])
@@ -2072,7 +2029,9 @@ def test_dti_add_intarray_non_tick(self, int_holder, freq):
dti = date_range("2016-01-01", periods=2, freq=freq)
other = int_holder([4, -1])
- msg = "Addition/subtraction of integers|cannot subtract DatetimeArray from"
+ msg = "|".join(
+ ["Addition/subtraction of integers", "cannot subtract DatetimeArray from"]
+ )
assert_invalid_addsub_type(dti, other, msg)
@pytest.mark.parametrize("int_holder", [np.array, pd.Index])
@@ -2222,10 +2181,7 @@ def test_add_datetimelike_and_dtarr(self, box_with_array, addend, tz):
dtarr = tm.box_expected(dti, box_with_array)
msg = "cannot add DatetimeArray and"
- with pytest.raises(TypeError, match=msg):
- dtarr + addend
- with pytest.raises(TypeError, match=msg):
- addend + dtarr
+ assert_cannot_add(dtarr, addend, msg)
# -------------------------------------------------------------
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index 9932adccdbaf2..3bf5fdb257c2a 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -29,6 +29,7 @@
UInt64Index,
)
from pandas.core.computation import expressions as expr
+from pandas.tests.arithmetic.common import assert_invalid_comparison
@pytest.fixture(params=[Index, Series, tm.to_array])
@@ -84,25 +85,13 @@ def test_operator_series_comparison_zerorank(self):
expected = 0.0 > Series([1, 2, 3])
tm.assert_series_equal(result, expected)
- def test_df_numeric_cmp_dt64_raises(self):
+ def test_df_numeric_cmp_dt64_raises(self, box_with_array):
# GH#8932, GH#22163
ts = pd.Timestamp.now()
- df = pd.DataFrame({"x": range(5)})
+ obj = np.array(range(5))
+ obj = tm.box_expected(obj, box_with_array)
- msg = (
- "'[<>]' not supported between instances of 'numpy.ndarray' and 'Timestamp'"
- )
- with pytest.raises(TypeError, match=msg):
- df > ts
- with pytest.raises(TypeError, match=msg):
- df < ts
- with pytest.raises(TypeError, match=msg):
- ts < df
- with pytest.raises(TypeError, match=msg):
- ts > df
-
- assert not (df == ts).any().any()
- assert (df != ts).all().all()
+ assert_invalid_comparison(obj, ts, box_with_array)
def test_compare_invalid(self):
# GH#8058
diff --git a/pandas/tests/arithmetic/test_object.py b/pandas/tests/arithmetic/test_object.py
index 9a586fd553428..3069868ebb677 100644
--- a/pandas/tests/arithmetic/test_object.py
+++ b/pandas/tests/arithmetic/test_object.py
@@ -21,17 +21,15 @@
class TestObjectComparisons:
- def test_comparison_object_numeric_nas(self):
+ def test_comparison_object_numeric_nas(self, comparison_op):
ser = Series(np.random.randn(10), dtype=object)
shifted = ser.shift(2)
- ops = ["lt", "le", "gt", "ge", "eq", "ne"]
- for op in ops:
- func = getattr(operator, op)
+ func = comparison_op
- result = func(ser, shifted)
- expected = func(ser.astype(float), shifted.astype(float))
- tm.assert_series_equal(result, expected)
+ result = func(ser, shifted)
+ expected = func(ser.astype(float), shifted.astype(float))
+ tm.assert_series_equal(result, expected)
def test_object_comparisons(self):
ser = Series(["a", "b", np.nan, "c", "a"])
@@ -141,11 +139,13 @@ def test_objarr_radd_str_invalid(self, dtype, data, box_with_array):
ser = Series(data, dtype=dtype)
ser = tm.box_expected(ser, box_with_array)
- msg = (
- "can only concatenate str|"
- "did not contain a loop with signature matching types|"
- "unsupported operand type|"
- "must be str"
+ msg = "|".join(
+ [
+ "can only concatenate str",
+ "did not contain a loop with signature matching types",
+ "unsupported operand type",
+ "must be str",
+ ]
)
with pytest.raises(TypeError, match=msg):
"foo_" + ser
@@ -159,7 +159,9 @@ def test_objarr_add_invalid(self, op, box_with_array):
obj_ser.name = "objects"
obj_ser = tm.box_expected(obj_ser, box)
- msg = "can only concatenate str|unsupported operand type|must be str"
+ msg = "|".join(
+ ["can only concatenate str", "unsupported operand type", "must be str"]
+ )
with pytest.raises(Exception, match=msg):
op(obj_ser, 1)
with pytest.raises(Exception, match=msg):
diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py
index f8814a33292ec..f4404a3483e6f 100644
--- a/pandas/tests/arithmetic/test_period.py
+++ b/pandas/tests/arithmetic/test_period.py
@@ -26,6 +26,7 @@
from pandas.core import ops
from pandas.core.arrays import TimedeltaArray
from pandas.tests.arithmetic.common import (
+ assert_invalid_addsub_type,
assert_invalid_comparison,
get_upcast_box,
)
@@ -39,6 +40,20 @@ class TestPeriodArrayLikeComparisons:
# DataFrame/Series/PeriodIndex/PeriodArray. Ideally all comparison
# tests will eventually end up here.
+ @pytest.mark.parametrize("other", ["2017", Period("2017", freq="D")])
+ def test_eq_scalar(self, other, box_with_array):
+
+ idx = PeriodIndex(["2017", "2017", "2018"], freq="D")
+ idx = tm.box_expected(idx, box_with_array)
+ xbox = get_upcast_box(idx, other, True)
+
+ expected = np.array([True, True, False])
+ expected = tm.box_expected(expected, xbox)
+
+ result = idx == other
+
+ tm.assert_equal(result, expected)
+
def test_compare_zerodim(self, box_with_array):
# GH#26689 make sure we unbox zero-dimensional arrays
@@ -54,9 +69,20 @@ def test_compare_zerodim(self, box_with_array):
tm.assert_equal(result, expected)
@pytest.mark.parametrize(
- "scalar", ["foo", Timestamp.now(), Timedelta(days=4), 9, 9.5]
+ "scalar",
+ [
+ "foo",
+ Timestamp.now(),
+ Timedelta(days=4),
+ 9,
+ 9.5,
+ 2000, # specifically don't consider 2000 to match Period("2000", "D")
+ False,
+ None,
+ ],
)
def test_compare_invalid_scalar(self, box_with_array, scalar):
+ # GH#28980
# comparison with scalar that cannot be interpreted as a Period
pi = period_range("2000", periods=4)
parr = tm.box_expected(pi, box_with_array)
@@ -70,6 +96,11 @@ def test_compare_invalid_scalar(self, box_with_array, scalar):
np.arange(4),
np.arange(4).astype(np.float64),
list(range(4)),
+ # match Period semantics by not treating integers as Periods
+ [2000, 2001, 2002, 2003],
+ np.arange(2000, 2004),
+ np.arange(2000, 2004).astype(object),
+ pd.Index([2000, 2001, 2002, 2003]),
],
)
def test_compare_invalid_listlike(self, box_with_array, other):
@@ -138,68 +169,27 @@ def test_compare_object_dtype(self, box_with_array, other_box):
class TestPeriodIndexComparisons:
# TODO: parameterize over boxes
- @pytest.mark.parametrize("other", ["2017", Period("2017", freq="D")])
- def test_eq(self, other):
- idx = PeriodIndex(["2017", "2017", "2018"], freq="D")
- expected = np.array([True, True, False])
- result = idx == other
-
- tm.assert_numpy_array_equal(result, expected)
-
- @pytest.mark.parametrize(
- "other",
- [
- 2017,
- [2017, 2017, 2017],
- np.array([2017, 2017, 2017]),
- np.array([2017, 2017, 2017], dtype=object),
- pd.Index([2017, 2017, 2017]),
- ],
- )
- def test_eq_integer_disallowed(self, other):
- # match Period semantics by not treating integers as Periods
-
- idx = PeriodIndex(["2017", "2017", "2018"], freq="D")
- expected = np.array([False, False, False])
- result = idx == other
-
- tm.assert_numpy_array_equal(result, expected)
- msg = "|".join(
- [
- "not supported between instances of 'Period' and 'int'",
- r"Invalid comparison between dtype=period\[D\] and ",
- ]
- )
- with pytest.raises(TypeError, match=msg):
- idx < other
- with pytest.raises(TypeError, match=msg):
- idx > other
- with pytest.raises(TypeError, match=msg):
- idx <= other
- with pytest.raises(TypeError, match=msg):
- idx >= other
-
def test_pi_cmp_period(self):
idx = period_range("2007-01", periods=20, freq="M")
+ per = idx[10]
- result = idx < idx[10]
+ result = idx < per
exp = idx.values < idx.values[10]
tm.assert_numpy_array_equal(result, exp)
# Tests Period.__richcmp__ against ndarray[object, ndim=2]
- result = idx.values.reshape(10, 2) < idx[10]
+ result = idx.values.reshape(10, 2) < per
tm.assert_numpy_array_equal(result, exp.reshape(10, 2))
# Tests Period.__richcmp__ against ndarray[object, ndim=0]
- result = idx < np.array(idx[10])
+ result = idx < np.array(per)
tm.assert_numpy_array_equal(result, exp)
# TODO: moved from test_datetime64; de-duplicate with version below
def test_parr_cmp_period_scalar2(self, box_with_array):
pi = period_range("2000-01-01", periods=10, freq="D")
- val = Period("2000-01-04", freq="D")
-
+ val = pi[3]
expected = [x > val for x in pi]
ser = tm.box_expected(pi, box_with_array)
@@ -326,23 +316,24 @@ def test_parr_cmp_pi_mismatched_freq(self, freq, box_with_array):
@pytest.mark.parametrize("freq", ["M", "2M", "3M"])
def test_pi_cmp_nat(self, freq):
idx1 = PeriodIndex(["2011-01", "2011-02", "NaT", "2011-05"], freq=freq)
+ per = idx1[1]
- result = idx1 > Period("2011-02", freq=freq)
+ result = idx1 > per
exp = np.array([False, False, False, True])
tm.assert_numpy_array_equal(result, exp)
- result = Period("2011-02", freq=freq) < idx1
+ result = per < idx1
tm.assert_numpy_array_equal(result, exp)
- result = idx1 == Period("NaT", freq=freq)
+ result = idx1 == pd.NaT
exp = np.array([False, False, False, False])
tm.assert_numpy_array_equal(result, exp)
- result = Period("NaT", freq=freq) == idx1
+ result = pd.NaT == idx1
tm.assert_numpy_array_equal(result, exp)
- result = idx1 != Period("NaT", freq=freq)
+ result = idx1 != pd.NaT
exp = np.array([True, True, True, True])
tm.assert_numpy_array_equal(result, exp)
- result = Period("NaT", freq=freq) != idx1
+ result = pd.NaT != idx1
tm.assert_numpy_array_equal(result, exp)
idx2 = PeriodIndex(["2011-02", "2011-01", "2011-04", "NaT"], freq=freq)
@@ -475,28 +466,29 @@ def test_pi_comp_period(self):
idx = PeriodIndex(
["2011-01", "2011-02", "2011-03", "2011-04"], freq="M", name="idx"
)
+ per = idx[2]
- f = lambda x: x == Period("2011-03", freq="M")
+ f = lambda x: x == per
exp = np.array([False, False, True, False], dtype=np.bool_)
self._check(idx, f, exp)
- f = lambda x: Period("2011-03", freq="M") == x
+ f = lambda x: per == x
self._check(idx, f, exp)
- f = lambda x: x != Period("2011-03", freq="M")
+ f = lambda x: x != per
exp = np.array([True, True, False, True], dtype=np.bool_)
self._check(idx, f, exp)
- f = lambda x: Period("2011-03", freq="M") != x
+ f = lambda x: per != x
self._check(idx, f, exp)
- f = lambda x: Period("2011-03", freq="M") >= x
+ f = lambda x: per >= x
exp = np.array([True, True, True, False], dtype=np.bool_)
self._check(idx, f, exp)
- f = lambda x: x > Period("2011-03", freq="M")
+ f = lambda x: x > per
exp = np.array([False, False, False, True], dtype=np.bool_)
self._check(idx, f, exp)
- f = lambda x: Period("2011-03", freq="M") >= x
+ f = lambda x: per >= x
exp = np.array([True, True, True, False], dtype=np.bool_)
self._check(idx, f, exp)
@@ -504,11 +496,12 @@ def test_pi_comp_period_nat(self):
idx = PeriodIndex(
["2011-01", "NaT", "2011-03", "2011-04"], freq="M", name="idx"
)
+ per = idx[2]
- f = lambda x: x == Period("2011-03", freq="M")
+ f = lambda x: x == per
exp = np.array([False, False, True, False], dtype=np.bool_)
self._check(idx, f, exp)
- f = lambda x: Period("2011-03", freq="M") == x
+ f = lambda x: per == x
self._check(idx, f, exp)
f = lambda x: x == pd.NaT
@@ -517,10 +510,10 @@ def test_pi_comp_period_nat(self):
f = lambda x: pd.NaT == x
self._check(idx, f, exp)
- f = lambda x: x != Period("2011-03", freq="M")
+ f = lambda x: x != per
exp = np.array([True, True, False, True], dtype=np.bool_)
self._check(idx, f, exp)
- f = lambda x: Period("2011-03", freq="M") != x
+ f = lambda x: per != x
self._check(idx, f, exp)
f = lambda x: x != pd.NaT
@@ -529,11 +522,11 @@ def test_pi_comp_period_nat(self):
f = lambda x: pd.NaT != x
self._check(idx, f, exp)
- f = lambda x: Period("2011-03", freq="M") >= x
+ f = lambda x: per >= x
exp = np.array([True, False, True, False], dtype=np.bool_)
self._check(idx, f, exp)
- f = lambda x: x < Period("2011-03", freq="M")
+ f = lambda x: x < per
exp = np.array([True, False, False, False], dtype=np.bool_)
self._check(idx, f, exp)
@@ -696,20 +689,6 @@ def test_sub_n_gt_1_offsets(self, offset, kwd_name, n):
# -------------------------------------------------------------
# Invalid Operations
- @pytest.mark.parametrize("other", [3.14, np.array([2.0, 3.0])])
- @pytest.mark.parametrize("op", [operator.add, ops.radd, operator.sub, ops.rsub])
- def test_parr_add_sub_float_raises(self, op, other, box_with_array):
- dti = pd.DatetimeIndex(["2011-01-01", "2011-01-02"], freq="D")
- pi = dti.to_period("D")
- pi = tm.box_expected(pi, box_with_array)
- msg = (
- r"unsupported operand type\(s\) for [+-]: .* and .*|"
- "Concatenation operation is not implemented for NumPy arrays"
- )
-
- with pytest.raises(TypeError, match=msg):
- op(pi, other)
-
@pytest.mark.parametrize(
"other",
[
@@ -723,6 +702,8 @@ def test_parr_add_sub_float_raises(self, op, other, box_with_array):
pd.date_range("2016-01-01", periods=3, freq="S")._data,
pd.date_range("2016-01-01", periods=3, tz="Asia/Tokyo")._data,
# Miscellaneous invalid types
+ 3.14,
+ np.array([2.0, 3.0, 4.0]),
],
)
def test_parr_add_sub_invalid(self, other, box_with_array):
@@ -730,11 +711,15 @@ def test_parr_add_sub_invalid(self, other, box_with_array):
rng = period_range("1/1/2000", freq="D", periods=3)
rng = tm.box_expected(rng, box_with_array)
- msg = (
- r"(:?cannot add PeriodArray and .*)"
- r"|(:?cannot subtract .* from (:?a\s)?.*)"
- r"|(:?unsupported operand type\(s\) for \+: .* and .*)"
+ msg = "|".join(
+ [
+ r"(:?cannot add PeriodArray and .*)",
+ r"(:?cannot subtract .* from (:?a\s)?.*)",
+ r"(:?unsupported operand type\(s\) for \+: .* and .*)",
+ r"unsupported operand type\(s\) for [+-]: .* and .*",
+ ]
)
+ assert_invalid_addsub_type(rng, other, msg)
with pytest.raises(TypeError, match=msg):
rng + other
with pytest.raises(TypeError, match=msg):
@@ -1034,9 +1019,11 @@ def test_pi_add_timedeltalike_minute_gt1(self, three_days):
result = rng - other
tm.assert_index_equal(result, expected)
- msg = (
- r"(:?bad operand type for unary -: 'PeriodArray')"
- r"|(:?cannot subtract PeriodArray from timedelta64\[[hD]\])"
+ msg = "|".join(
+ [
+ r"(:?bad operand type for unary -: 'PeriodArray')",
+ r"(:?cannot subtract PeriodArray from timedelta64\[[hD]\])",
+ ]
)
with pytest.raises(TypeError, match=msg):
other - rng
@@ -1261,7 +1248,7 @@ def test_parr_add_sub_object_array(self):
class TestPeriodSeriesArithmetic:
- def test_ops_series_timedelta(self):
+ def test_parr_add_timedeltalike_scalar(self, three_days, box_with_array):
# GH#13043
ser = Series(
[Period("2015-01-01", freq="D"), Period("2015-01-02", freq="D")],
@@ -1270,21 +1257,18 @@ def test_ops_series_timedelta(self):
assert ser.dtype == "Period[D]"
expected = Series(
- [Period("2015-01-02", freq="D"), Period("2015-01-03", freq="D")],
+ [Period("2015-01-04", freq="D"), Period("2015-01-05", freq="D")],
name="xxx",
)
- result = ser + Timedelta("1 days")
- tm.assert_series_equal(result, expected)
-
- result = Timedelta("1 days") + ser
- tm.assert_series_equal(result, expected)
+ obj = tm.box_expected(ser, box_with_array)
+ expected = tm.box_expected(expected, box_with_array)
- result = ser + pd.tseries.offsets.Day()
- tm.assert_series_equal(result, expected)
+ result = obj + three_days
+ tm.assert_equal(result, expected)
- result = pd.tseries.offsets.Day() + ser
- tm.assert_series_equal(result, expected)
+ result = three_days + obj
+ tm.assert_equal(result, expected)
def test_ops_series_period(self):
# GH#13043
@@ -1368,9 +1352,13 @@ def test_parr_ops_errors(self, ng, func, box_with_array):
["2011-01", "2011-02", "2011-03", "2011-04"], freq="M", name="idx"
)
obj = tm.box_expected(idx, box_with_array)
- msg = (
- r"unsupported operand type\(s\)|can only concatenate|"
- r"must be str|object to str implicitly"
+ msg = "|".join(
+ [
+ r"unsupported operand type\(s\)",
+ "can only concatenate",
+ r"must be str",
+ "object to str implicitly",
+ ]
)
with pytest.raises(TypeError, match=msg):
@@ -1544,11 +1532,3 @@ def test_pi_sub_period_nat(self):
exp = TimedeltaIndex([np.nan, np.nan, np.nan, np.nan], name="idx")
tm.assert_index_equal(idx - Period("NaT", freq="M"), exp)
tm.assert_index_equal(Period("NaT", freq="M") - idx, exp)
-
- @pytest.mark.parametrize("scalars", ["a", False, 1, 1.0, None])
- def test_comparison_operations(self, scalars):
- # GH 28980
- expected = Series([False, False])
- s = Series([Period("2019"), Period("2020")], dtype="period[A-DEC]")
- result = s == scalars
- tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index 86980ad42766e..8078e8c90a2bf 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -84,11 +84,6 @@ def test_compare_timedelta64_zerodim(self, box_with_array):
expected = tm.box_expected(expected, xbox)
tm.assert_equal(res, expected)
- msg = "Invalid comparison between dtype"
- with pytest.raises(TypeError, match=msg):
- # zero-dim of wrong dtype should still raise
- tdi >= np.array(4)
-
@pytest.mark.parametrize(
"td_scalar",
[
@@ -120,6 +115,7 @@ def test_compare_timedeltalike_scalar(self, box_with_array, td_scalar):
Timestamp.now().to_datetime64(),
Timestamp.now().to_pydatetime(),
Timestamp.now().date(),
+ np.array(4), # zero-dim mismatched dtype
],
)
def test_td64_comparisons_invalid(self, box_with_array, invalid):
@@ -146,17 +142,18 @@ def test_td64_comparisons_invalid(self, box_with_array, invalid):
pd.period_range("1971-01-01", freq="D", periods=10).astype(object),
],
)
- def test_td64arr_cmp_arraylike_invalid(self, other):
+ def test_td64arr_cmp_arraylike_invalid(self, other, box_with_array):
# We don't parametrize this over box_with_array because listlike
# other plays poorly with assert_invalid_comparison reversed checks
rng = timedelta_range("1 days", periods=10)._data
- assert_invalid_comparison(rng, other, tm.to_array)
+ rng = tm.box_expected(rng, box_with_array)
+ assert_invalid_comparison(rng, other, box_with_array)
def test_td64arr_cmp_mixed_invalid(self):
rng = timedelta_range("1 days", periods=5)._data
-
other = np.array([0, 1, 2, rng[3], Timestamp.now()])
+
result = rng == other
expected = np.array([False, False, False, True, False])
tm.assert_numpy_array_equal(result, expected)
@@ -1623,10 +1620,7 @@ def test_td64arr_div_td64_scalar(self, m, unit, box_with_array):
box = box_with_array
xbox = np.ndarray if box is pd.array else box
- startdate = Series(pd.date_range("2013-01-01", "2013-01-03"))
- enddate = Series(pd.date_range("2013-03-01", "2013-03-03"))
-
- ser = enddate - startdate
+ ser = Series([Timedelta(days=59)] * 3)
ser[2] = np.nan
flat = ser
ser = tm.box_expected(ser, box)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44395 | 2021-11-11T18:15:36Z | 2021-11-11T21:02:19Z | 2021-11-11T21:02:18Z | 2021-11-11T21:08:49Z |
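The diff in the row above repeatedly replaces pairs of `pytest.raises(TypeError)` blocks (one for `obj + other`, one for `other + obj`) with shared helpers such as `assert_cannot_add`. A minimal pure-Python sketch of that helper pattern, with hypothetical names (the real pandas helpers live in `pandas/tests/arithmetic/common.py` and take extra arguments):

```python
def assert_cannot_add_sketch(left, right, exc=TypeError):
    # Hypothetical sketch: check that addition raises in BOTH orderings,
    # replacing two near-identical pytest.raises blocks with one call.
    for a, b in ((left, right), (right, left)):
        try:
            a + b
        except exc:
            continue
        raise AssertionError(f"{a!r} + {b!r} did not raise {exc.__name__}")


class NoAdd:
    # Toy stand-in for an object that refuses addition, like a
    # DatetimeArray being added to another datetime-like.
    def __add__(self, other):
        raise TypeError("cannot add")

    __radd__ = __add__


assert_cannot_add_sketch(NoAdd(), 5)  # passes silently: both orderings raise
print("ok")
```

Consolidating the forward and reflected checks into one helper is what lets the tests above parametrize over `box_with_array` without duplicating the `with pytest.raises(...)` boilerplate per box.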
fix documentation on options in read_csv | diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index 49c2b28207ed5..6d3cc84a31d05 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -86,7 +86,7 @@
delimiters are prone to ignoring quoted data. Regex example: ``'\r\t'``.
delimiter : str, default ``None``
Alias for sep.
-header : int, list of int, default 'infer'
+header : int, list of int, None, default 'infer'
Row number(s) to use as the column names, and the start of the
data. Default behavior is to infer the column names: if no names
are passed the behavior is identical to ``header=0`` and column
| Noticed this because pylance (typing) flagged header=None as invalid. Don't know how it will propagate to the typings.
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44391 | 2021-11-11T13:09:51Z | 2021-11-11T14:15:28Z | 2021-11-11T14:15:28Z | 2021-11-11T14:15:32Z |
BUG: handle NaNs in FloatingArray.equals | diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst
index 8db9be21ca4ef..466c8b21e89bf 100644
--- a/doc/source/whatsnew/v1.4.0.rst
+++ b/doc/source/whatsnew/v1.4.0.rst
@@ -692,6 +692,7 @@ Other
- Bug in :meth:`RangeIndex.difference` with ``sort=None`` and ``step<0`` failing to sort (:issue:`44085`)
- Bug in :meth:`Series.to_frame` and :meth:`Index.to_frame` ignoring the ``name`` argument when ``name=None`` is explicitly passed (:issue:`44212`)
- Bug in :meth:`Series.replace` and :meth:`DataFrame.replace` with ``value=None`` and ExtensionDtypes (:issue:`44270`)
+- Bug in :meth:`FloatingArray.equals` failing to consider two arrays equal if they contain ``np.nan`` values (:issue:`44382`)
-
.. ***DO NOT USE THIS SECTION***
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index b11b11ded2f22..1797f1aff4235 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -627,6 +627,21 @@ def value_counts(self, dropna: bool = True) -> Series:
return Series(counts, index=index)
+ @doc(ExtensionArray.equals)
+ def equals(self, other) -> bool:
+ if type(self) != type(other):
+ return False
+ if other.dtype != self.dtype:
+ return False
+
+ # GH#44382 if e.g. self[1] is np.nan and other[1] is pd.NA, we are NOT
+ # equal.
+ return np.array_equal(self._mask, other._mask) and np.array_equal(
+ self._data[~self._mask],
+ other._data[~other._mask],
+ equal_nan=True,
+ )
+
def _reduce(self, name: str, *, skipna: bool = True, **kwargs):
if name in {"any", "all"}:
return getattr(self, name)(skipna=skipna, **kwargs)
diff --git a/pandas/tests/arrays/floating/test_comparison.py b/pandas/tests/arrays/floating/test_comparison.py
index c4163c25ae74d..a429649f1ce1d 100644
--- a/pandas/tests/arrays/floating/test_comparison.py
+++ b/pandas/tests/arrays/floating/test_comparison.py
@@ -1,7 +1,9 @@
+import numpy as np
import pytest
import pandas as pd
import pandas._testing as tm
+from pandas.core.arrays import FloatingArray
from pandas.tests.arrays.masked_shared import (
ComparisonOps,
NumericOps,
@@ -34,3 +36,30 @@ def test_equals():
a1 = pd.array([1, 2, None], dtype="Float64")
a2 = pd.array([1, 2, None], dtype="Float32")
assert a1.equals(a2) is False
+
+
+def test_equals_nan_vs_na():
+ # GH#44382
+
+ mask = np.zeros(3, dtype=bool)
+ data = np.array([1.0, np.nan, 3.0], dtype=np.float64)
+
+ left = FloatingArray(data, mask)
+ assert left.equals(left)
+ tm.assert_extension_array_equal(left, left)
+
+ assert left.equals(left.copy())
+ assert left.equals(FloatingArray(data.copy(), mask.copy()))
+
+ mask2 = np.array([False, True, False], dtype=bool)
+ data2 = np.array([1.0, 2.0, 3.0], dtype=np.float64)
+ right = FloatingArray(data2, mask2)
+ assert right.equals(right)
+ tm.assert_extension_array_equal(right, right)
+
+ assert not left.equals(right)
+
+ # with mask[1] = True, the only difference is data[1], which should
+ # not matter for equals
+ mask[1] = True
+ assert left.equals(right)
| - [x] closes #44382
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44390 | 2021-11-11T03:13:17Z | 2021-11-13T17:21:41Z | 2021-11-13T17:21:40Z | 2021-11-13T19:24:15Z |
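The fix in the row above makes `MaskedArray.equals` distinguish masked entries (`pd.NA`) from literal `np.nan` values: the masks must match exactly, and the unmasked data is compared with `np.array_equal(..., equal_nan=True)`. A stdlib-only sketch of those semantics (the helper name and list-based representation are illustrative, not the pandas implementation):

```python
import math


def masked_equals(data1, mask1, data2, mask2):
    # Sketch of the GH#44382 semantics: two masked arrays are equal iff
    # their masks are identical and their unmasked values match, with
    # NaN == NaN treated as equal (like np.array_equal's equal_nan=True).
    if mask1 != mask2:
        # A masked slot (pd.NA) is never equal to an unmasked np.nan.
        return False
    for v1, v2, m in zip(data1, data2, mask1):
        if m:
            continue  # masked slots are ignored once the masks agree
        if math.isnan(v1) and math.isnan(v2):
            continue  # NaN matches NaN in the data
        if v1 != v2:
            return False
    return True


nan = float("nan")
# np.nan in the same unmasked slot on both sides: equal
print(masked_equals([1.0, nan], [False, False], [1.0, nan], [False, False]))  # True
# np.nan on one side vs. a masked slot (pd.NA) on the other: masks differ
print(masked_equals([1.0, nan], [False, False], [1.0, 2.0], [False, True]))   # False
```

This mirrors the test in the diff: once `mask[1]` is set on both sides, the differing `data[1]` values no longer matter, so the arrays compare equal.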
Backport PR #44388 on branch 1.3.x (CI: Use conda-forge to create Python 3.10 env) | diff --git a/.github/workflows/sdist.yml b/.github/workflows/sdist.yml
index 78506e3cb61ce..d167397fd09f9 100644
--- a/.github/workflows/sdist.yml
+++ b/.github/workflows/sdist.yml
@@ -53,6 +53,7 @@ jobs:
- uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: pandas-sdist
+ channels: conda-forge
python-version: '${{ matrix.python-version }}'
- name: Install pandas from sdist
| Backport PR #44388: CI: Use conda-forge to create Python 3.10 env | https://api.github.com/repos/pandas-dev/pandas/pulls/44389 | 2021-11-11T02:48:36Z | 2021-11-11T03:50:32Z | 2021-11-11T03:50:32Z | 2021-11-11T03:50:32Z |
CI: Use conda-forge to create Python 3.10 env | diff --git a/.github/workflows/sdist.yml b/.github/workflows/sdist.yml
index 7692dc522522f..92a9f2a5fb97c 100644
--- a/.github/workflows/sdist.yml
+++ b/.github/workflows/sdist.yml
@@ -53,6 +53,7 @@ jobs:
- uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: pandas-sdist
+ channels: conda-forge
python-version: '${{ matrix.python-version }}'
- name: Install pandas from sdist
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
Anaconda messed up their recipe :(. Fixes the sdist job. | https://api.github.com/repos/pandas-dev/pandas/pulls/44388 | 2021-11-11T01:40:10Z | 2021-11-11T02:48:10Z | 2021-11-11T02:48:10Z | 2021-11-11T03:01:13Z |
ENH: implement EA._putmask | diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index 8deeb44f65188..674379f6d65f8 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -310,7 +310,7 @@ def _wrap_reduction_result(self, axis: int | None, result):
# ------------------------------------------------------------------------
# __array_function__ methods
- def putmask(self, mask: np.ndarray, value) -> None:
+ def _putmask(self, mask: npt.NDArray[np.bool_], value) -> None:
"""
Analogue to np.putmask(self, mask, value)
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 70841197761a9..a64aef64ab49f 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -1409,6 +1409,33 @@ def insert(self: ExtensionArrayT, loc: int, item) -> ExtensionArrayT:
return type(self)._concat_same_type([self[:loc], item_arr, self[loc:]])
+ def _putmask(self, mask: npt.NDArray[np.bool_], value) -> None:
+ """
+ Analogue to np.putmask(self, mask, value)
+
+ Parameters
+ ----------
+ mask : np.ndarray[bool]
+ value : scalar or listlike
+ If listlike, must be arraylike with same length as self.
+
+ Returns
+ -------
+ None
+
+ Notes
+ -----
+ Unlike np.putmask, we do not repeat listlike values with mismatched length.
+ 'value' should either be a scalar or an arraylike with the same length
+ as self.
+ """
+ if is_list_like(value):
+ val = value[mask]
+ else:
+ val = value
+
+ self[mask] = val
+
def _where(
self: ExtensionArrayT, mask: npt.NDArray[np.bool_], value
) -> ExtensionArrayT:
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index d5718d59bf8b0..01bf5ec0633b5 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -36,6 +36,7 @@
PositionalIndexer,
ScalarIndexer,
SequenceIndexer,
+ npt,
)
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender
@@ -1482,15 +1483,15 @@ def to_tuples(self, na_tuple=True) -> np.ndarray:
# ---------------------------------------------------------------------
- def putmask(self, mask: np.ndarray, value) -> None:
+ def _putmask(self, mask: npt.NDArray[np.bool_], value) -> None:
value_left, value_right = self._validate_setitem_value(value)
if isinstance(self._left, np.ndarray):
np.putmask(self._left, mask, value_left)
np.putmask(self._right, mask, value_right)
else:
- self._left.putmask(mask, value_left)
- self._right.putmask(mask, value_right)
+ self._left._putmask(mask, value_left)
+ self._right._putmask(mask, value_right)
def insert(self: IntervalArrayT, loc: int, item: Interval) -> IntervalArrayT:
"""
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index ba7dde7d2a4d8..2514702b036dd 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -4444,8 +4444,7 @@ def _join_non_unique(
if isinstance(join_array, np.ndarray):
np.putmask(join_array, mask, right)
else:
- # error: "ExtensionArray" has no attribute "putmask"
- join_array.putmask(mask, right) # type: ignore[attr-defined]
+ join_array._putmask(mask, right)
join_index = self._wrap_joined_index(join_array, other)
@@ -5051,8 +5050,7 @@ def putmask(self, mask, value) -> Index:
else:
# Note: we use the original value here, not converted, as
# _validate_fill_value is not idempotent
- # error: "ExtensionArray" has no attribute "putmask"
- values.putmask(mask, value) # type: ignore[attr-defined]
+ values._putmask(mask, value)
return self._shallow_copy(values)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 2589015e0f0b1..66a40b962e183 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1415,15 +1415,13 @@ def putmask(self, mask, new) -> list[Block]:
new_values = self.values
- if isinstance(new, (np.ndarray, ExtensionArray)) and len(new) == len(mask):
- new = new[mask]
-
if mask.ndim == new_values.ndim + 1:
# TODO(EA2D): unnecessary with 2D EAs
mask = mask.reshape(new_values.shape)
try:
- new_values[mask] = new
+ # Caller is responsible for ensuring matching lengths
+ new_values._putmask(mask, new)
except TypeError:
if not is_interval_dtype(self.dtype):
# Discussion about what we want to support in the general
@@ -1704,7 +1702,7 @@ def putmask(self, mask, new) -> list[Block]:
return self.coerce_to_target_dtype(new).putmask(mask, new)
arr = self.values
- arr.T.putmask(mask, new)
+ arr.T._putmask(mask, new)
return [self]
def where(self, other, cond) -> list[Block]:
| Broken off from #43930 | https://api.github.com/repos/pandas-dev/pandas/pulls/44387 | 2021-11-11T00:42:15Z | 2021-11-11T17:50:22Z | 2021-11-11T17:50:22Z | 2021-11-11T17:53:41Z |
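The `_putmask` docstring in the diff above notes that, unlike `np.putmask`, listlike values are not cyclically repeated: a listlike `value` must have the same length as `self` and is aligned positionally via `value[mask]`. A minimal NumPy sketch of the difference (`putmask_no_repeat` is a hypothetical stand-in for the `EA._putmask` contract, not pandas code):

```python
import numpy as np

def putmask_no_repeat(arr, mask, value):
    # Hypothetical stand-in mirroring the EA._putmask contract above:
    # a listlike 'value' is aligned positionally (value[mask]) instead
    # of being cyclically repeated the way np.putmask does.
    if isinstance(value, (list, np.ndarray)):
        value = np.asarray(value)[mask]
    arr[mask] = value
    return arr

base = np.array([1, 2, 3, 4])
mask = np.array([True, False, True, False])

# np.putmask repeats a too-short replacement cyclically:
a = base.copy()
np.putmask(a, mask, np.array([10, 20]))        # a == [10, 2, 10, 4]

# the _putmask-style contract takes value[mask] from a same-length array:
b = putmask_no_repeat(base.copy(), mask, [10, 20, 30, 40])  # b == [10, 2, 30, 4]
```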
TST: make get_upcast_box more flexible | diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index e8283a222d86a..c2c55a4060f7a 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -259,7 +259,7 @@ def box_expected(expected, box_cls, transpose=True):
expected = DatetimeArray(expected)
elif box_cls is TimedeltaArray:
expected = TimedeltaArray(expected)
- elif box_cls is np.ndarray:
+ elif box_cls is np.ndarray or box_cls is np.array:
expected = np.array(expected)
elif box_cls is to_array:
expected = to_array(expected)
diff --git a/pandas/tests/arithmetic/common.py b/pandas/tests/arithmetic/common.py
index 6f4e35ad4dfb2..af70cdfe538bb 100644
--- a/pandas/tests/arithmetic/common.py
+++ b/pandas/tests/arithmetic/common.py
@@ -34,26 +34,29 @@ def assert_invalid_addsub_type(left, right, msg=None):
right - left
-def get_expected_box(box):
+def get_upcast_box(left, right, is_cmp: bool = False):
"""
- Get the box to use for 'expected' in a comparison operation.
- """
- if box in [Index, array]:
- return np.ndarray
- return box
-
+ Get the box to use for 'expected' in an arithmetic or comparison operation.
-def get_upcast_box(box, vector):
- """
- Given two box-types, find the one that takes priority.
+ Parameters
+ ----------
+ left : Any
+ right : Any
+ is_cmp : bool, default False
+ Whether the operation is a comparison method.
"""
- if box is DataFrame or isinstance(vector, DataFrame):
+
+ if isinstance(left, DataFrame) or isinstance(right, DataFrame):
return DataFrame
- if box is Series or isinstance(vector, Series):
+ if isinstance(left, Series) or isinstance(right, Series):
+ if is_cmp and isinstance(left, Index):
+ # Index does not defer for comparisons
+ return np.array
return Series
- if box is Index or isinstance(vector, Index):
+ if isinstance(left, Index) or isinstance(right, Index):
+ if is_cmp:
+ return np.array
return Index
- return box
+ return tm.to_array
def assert_invalid_comparison(left, right, box):
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index 82f1e60f0aea5..44a70d3933b66 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -43,7 +43,6 @@
from pandas.tests.arithmetic.common import (
assert_invalid_addsub_type,
assert_invalid_comparison,
- get_expected_box,
get_upcast_box,
)
@@ -60,12 +59,12 @@ def test_compare_zerodim(self, tz_naive_fixture, box_with_array):
# Test comparison with zero-dimensional array is unboxed
tz = tz_naive_fixture
box = box_with_array
- xbox = get_expected_box(box)
dti = date_range("20130101", periods=3, tz=tz)
other = np.array(dti.to_numpy()[0])
dtarr = tm.box_expected(dti, box)
+ xbox = get_upcast_box(dtarr, other, True)
result = dtarr <= other
expected = np.array([True, False, False])
expected = tm.box_expected(expected, xbox)
@@ -147,12 +146,12 @@ def test_dt64arr_nat_comparison(self, tz_naive_fixture, box_with_array):
# GH#22242, GH#22163 DataFrame considered NaT == ts incorrectly
tz = tz_naive_fixture
box = box_with_array
- xbox = get_expected_box(box)
ts = Timestamp.now(tz)
ser = Series([ts, NaT])
obj = tm.box_expected(ser, box)
+ xbox = get_upcast_box(obj, ts, True)
expected = Series([True, False], dtype=np.bool_)
expected = tm.box_expected(expected, xbox)
@@ -244,10 +243,9 @@ def test_nat_comparisons_scalar(self, dtype, data, box_with_array):
# on older numpys (since they check object identity)
return
- xbox = get_expected_box(box)
-
left = Series(data, dtype=dtype)
left = tm.box_expected(left, box)
+ xbox = get_upcast_box(left, NaT, True)
expected = [False, False, False]
expected = tm.box_expected(expected, xbox)
@@ -323,10 +321,10 @@ def test_timestamp_compare_series(self, left, right):
def test_dt64arr_timestamp_equality(self, box_with_array):
# GH#11034
- xbox = get_expected_box(box_with_array)
ser = Series([Timestamp("2000-01-29 01:59:00"), Timestamp("2000-01-30"), NaT])
ser = tm.box_expected(ser, box_with_array)
+ xbox = get_upcast_box(ser, ser, True)
result = ser != ser
expected = tm.box_expected([False, False, True], xbox)
@@ -417,13 +415,12 @@ def test_dti_cmp_nat(self, dtype, box_with_array):
# on older numpys (since they check object identity)
return
- xbox = get_expected_box(box_with_array)
-
left = DatetimeIndex([Timestamp("2011-01-01"), NaT, Timestamp("2011-01-03")])
right = DatetimeIndex([NaT, NaT, Timestamp("2011-01-03")])
left = tm.box_expected(left, box_with_array)
right = tm.box_expected(right, box_with_array)
+ xbox = get_upcast_box(left, right, True)
lhs, rhs = left, right
if dtype is object:
@@ -642,12 +639,11 @@ def test_scalar_comparison_tzawareness(
self, comparison_op, other, tz_aware_fixture, box_with_array
):
op = comparison_op
- box = box_with_array
tz = tz_aware_fixture
dti = date_range("2016-01-01", periods=2, tz=tz)
- xbox = get_expected_box(box)
dtarr = tm.box_expected(dti, box_with_array)
+ xbox = get_upcast_box(dtarr, other, True)
if op in [operator.eq, operator.ne]:
exbool = op is operator.ne
expected = np.array([exbool, exbool], dtype=bool)
@@ -2421,14 +2417,13 @@ def test_dti_addsub_offset_arraylike(
self, tz_naive_fixture, names, op, index_or_series
):
# GH#18849, GH#19744
- box = pd.Index
other_box = index_or_series
tz = tz_naive_fixture
dti = date_range("2017-01-01", periods=2, tz=tz, name=names[0])
other = other_box([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)], name=names[1])
- xbox = get_upcast_box(box, other)
+ xbox = get_upcast_box(dti, other)
with tm.assert_produces_warning(PerformanceWarning):
res = op(dti, other)
@@ -2448,7 +2443,7 @@ def test_dti_addsub_object_arraylike(
dti = date_range("2017-01-01", periods=2, tz=tz)
dtarr = tm.box_expected(dti, box_with_array)
other = other_box([pd.offsets.MonthEnd(), Timedelta(days=4)])
- xbox = get_upcast_box(box_with_array, other)
+ xbox = get_upcast_box(dtarr, other)
expected = DatetimeIndex(["2017-01-31", "2017-01-06"], tz=tz_naive_fixture)
expected = tm.box_expected(expected, xbox)
diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py
index 41c2cb2cc4f1e..f8814a33292ec 100644
--- a/pandas/tests/arithmetic/test_period.py
+++ b/pandas/tests/arithmetic/test_period.py
@@ -27,7 +27,7 @@
from pandas.core.arrays import TimedeltaArray
from pandas.tests.arithmetic.common import (
assert_invalid_comparison,
- get_expected_box,
+ get_upcast_box,
)
# ------------------------------------------------------------------
@@ -41,12 +41,13 @@ class TestPeriodArrayLikeComparisons:
def test_compare_zerodim(self, box_with_array):
# GH#26689 make sure we unbox zero-dimensional arrays
- xbox = get_expected_box(box_with_array)
pi = period_range("2000", periods=4)
other = np.array(pi.to_numpy()[0])
pi = tm.box_expected(pi, box_with_array)
+ xbox = get_upcast_box(pi, other, True)
+
result = pi <= other
expected = np.array([True, False, False, False])
expected = tm.box_expected(expected, xbox)
@@ -78,11 +79,11 @@ def test_compare_invalid_listlike(self, box_with_array, other):
@pytest.mark.parametrize("other_box", [list, np.array, lambda x: x.astype(object)])
def test_compare_object_dtype(self, box_with_array, other_box):
- xbox = get_expected_box(box_with_array)
pi = period_range("2000", periods=5)
parr = tm.box_expected(pi, box_with_array)
other = other_box(pi)
+ xbox = get_upcast_box(parr, other, True)
expected = np.array([True, True, True, True, True])
expected = tm.box_expected(expected, xbox)
@@ -195,14 +196,15 @@ def test_pi_cmp_period(self):
# TODO: moved from test_datetime64; de-duplicate with version below
def test_parr_cmp_period_scalar2(self, box_with_array):
- xbox = get_expected_box(box_with_array)
-
pi = period_range("2000-01-01", periods=10, freq="D")
val = Period("2000-01-04", freq="D")
+
expected = [x > val for x in pi]
ser = tm.box_expected(pi, box_with_array)
+ xbox = get_upcast_box(ser, val, True)
+
expected = tm.box_expected(expected, xbox)
result = ser > val
tm.assert_equal(result, expected)
@@ -216,11 +218,10 @@ def test_parr_cmp_period_scalar2(self, box_with_array):
@pytest.mark.parametrize("freq", ["M", "2M", "3M"])
def test_parr_cmp_period_scalar(self, freq, box_with_array):
# GH#13200
- xbox = get_expected_box(box_with_array)
-
base = PeriodIndex(["2011-01", "2011-02", "2011-03", "2011-04"], freq=freq)
base = tm.box_expected(base, box_with_array)
per = Period("2011-02", freq=freq)
+ xbox = get_upcast_box(base, per, True)
exp = np.array([False, True, False, False])
exp = tm.box_expected(exp, xbox)
@@ -255,14 +256,14 @@ def test_parr_cmp_period_scalar(self, freq, box_with_array):
@pytest.mark.parametrize("freq", ["M", "2M", "3M"])
def test_parr_cmp_pi(self, freq, box_with_array):
# GH#13200
- xbox = get_expected_box(box_with_array)
-
base = PeriodIndex(["2011-01", "2011-02", "2011-03", "2011-04"], freq=freq)
base = tm.box_expected(base, box_with_array)
# TODO: could also box idx?
idx = PeriodIndex(["2011-02", "2011-01", "2011-03", "2011-05"], freq=freq)
+ xbox = get_upcast_box(base, idx, True)
+
exp = np.array([False, False, True, False])
exp = tm.box_expected(exp, xbox)
tm.assert_equal(base == idx, exp)
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index b8fa6c79b1b93..86980ad42766e 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -1542,13 +1542,13 @@ def test_tdi_mul_float_series(self, box_with_array):
)
def test_tdi_rmul_arraylike(self, other, box_with_array):
box = box_with_array
- xbox = get_upcast_box(box, other)
tdi = TimedeltaIndex(["1 Day"] * 10)
- expected = timedelta_range("1 days", "10 days")
- expected._data.freq = None
+ expected = timedelta_range("1 days", "10 days")._with_freq(None)
tdi = tm.box_expected(tdi, box)
+ xbox = get_upcast_box(tdi, other)
+
expected = tm.box_expected(expected, xbox)
result = other * tdi
@@ -2000,7 +2000,6 @@ def test_td64arr_rmul_numeric_array(
):
# GH#4521
# divide/multiply by integers
- xbox = get_upcast_box(box_with_array, vector)
tdser = Series(["59 Days", "59 Days", "NaT"], dtype="m8[ns]")
vector = vector.astype(any_real_numpy_dtype)
@@ -2008,6 +2007,8 @@ def test_td64arr_rmul_numeric_array(
expected = Series(["1180 Days", "1770 Days", "NaT"], dtype="timedelta64[ns]")
tdser = tm.box_expected(tdser, box_with_array)
+ xbox = get_upcast_box(tdser, vector)
+
expected = tm.box_expected(expected, xbox)
result = tdser * vector
@@ -2026,7 +2027,6 @@ def test_td64arr_div_numeric_array(
):
# GH#4521
# divide/multiply by integers
- xbox = get_upcast_box(box_with_array, vector)
tdser = Series(["59 Days", "59 Days", "NaT"], dtype="m8[ns]")
vector = vector.astype(any_real_numpy_dtype)
@@ -2034,6 +2034,7 @@ def test_td64arr_div_numeric_array(
expected = Series(["2.95D", "1D 23H 12m", "NaT"], dtype="timedelta64[ns]")
tdser = tm.box_expected(tdser, box_with_array)
+ xbox = get_upcast_box(tdser, vector)
expected = tm.box_expected(expected, xbox)
result = tdser / vector
@@ -2085,7 +2086,7 @@ def test_td64arr_mul_int_series(self, box_with_array, names):
)
tdi = tm.box_expected(tdi, box)
- xbox = get_upcast_box(box, ser)
+ xbox = get_upcast_box(tdi, ser)
expected = tm.box_expected(expected, xbox)
@@ -2117,9 +2118,8 @@ def test_float_series_rdiv_td64arr(self, box_with_array, names):
name=xname,
)
- xbox = get_upcast_box(box, ser)
-
tdi = tm.box_expected(tdi, box)
+ xbox = get_upcast_box(tdi, ser)
expected = tm.box_expected(expected, xbox)
result = ser.__rtruediv__(tdi)
| Will make it easier to parametrize these tests | https://api.github.com/repos/pandas-dev/pandas/pulls/44385 | 2021-11-10T20:23:09Z | 2021-11-11T00:39:48Z | 2021-11-11T00:39:48Z | 2021-11-11T00:41:29Z |
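The rewritten test helper above resolves the expected result container from the actual operands rather than from box types: DataFrame outranks Series, which outranks Index, and comparisons against an Index produce a plain ndarray because Index does not defer for comparisons. A simplified re-statement of that priority logic (using plain `np.array`/`isinstance` checks, not the `tm.to_array` fallback of the real helper):

```python
import numpy as np
import pandas as pd

def get_upcast_box(left, right, is_cmp=False):
    # Higher-priority pandas container wins; comparison methods
    # involving an Index return a bare ndarray.
    if isinstance(left, pd.DataFrame) or isinstance(right, pd.DataFrame):
        return pd.DataFrame
    if isinstance(left, pd.Series) or isinstance(right, pd.Series):
        if is_cmp and isinstance(left, pd.Index):
            return np.array  # Index does not defer for comparisons
        return pd.Series
    if isinstance(left, pd.Index) or isinstance(right, pd.Index):
        if is_cmp:
            return np.array
        return pd.Index
    return np.array  # stand-in for tm.to_array in the real helper
```

For example, `get_upcast_box(pd.Series([1]), pd.Index([1]))` resolves to `Series` for arithmetic, while `get_upcast_box(pd.Index([1]), pd.Series([1]), is_cmp=True)` resolves to `np.array`.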
TST/COMPAT: update csv test to infer time with pyarrow>=6.0 | diff --git a/pandas/compat/pyarrow.py b/pandas/compat/pyarrow.py
index 9bf7139769baa..f9b9409317774 100644
--- a/pandas/compat/pyarrow.py
+++ b/pandas/compat/pyarrow.py
@@ -12,9 +12,11 @@
pa_version_under3p0 = _palv < Version("3.0.0")
pa_version_under4p0 = _palv < Version("4.0.0")
pa_version_under5p0 = _palv < Version("5.0.0")
+ pa_version_under6p0 = _palv < Version("6.0.0")
except ImportError:
pa_version_under1p0 = True
pa_version_under2p0 = True
pa_version_under3p0 = True
pa_version_under4p0 = True
pa_version_under5p0 = True
+ pa_version_under6p0 = True
diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py
index 17c107814995c..c8bea9592e82a 100644
--- a/pandas/tests/io/parser/test_parse_dates.py
+++ b/pandas/tests/io/parser/test_parse_dates.py
@@ -26,6 +26,7 @@
is_platform_windows,
np_array_datetime64_compat,
)
+from pandas.compat.pyarrow import pa_version_under6p0
import pandas as pd
from pandas import (
@@ -431,6 +432,11 @@ def test_date_col_as_index_col(all_parsers):
columns=["X0", "X2", "X3", "X4", "X5", "X6", "X7"],
index=index,
)
+ if parser.engine == "pyarrow" and not pa_version_under6p0:
+ # https://github.com/pandas-dev/pandas/issues/44231
+ # pyarrow 6.0 starts to infer time type
+ expected["X2"] = pd.to_datetime("1970-01-01" + expected["X2"]).dt.time
+
tm.assert_frame_equal(result, expected)
| Closes #44231 | https://api.github.com/repos/pandas-dev/pandas/pulls/44381 | 2021-11-10T08:47:32Z | 2021-11-10T19:25:11Z | 2021-11-10T19:25:10Z | 2021-11-10T19:25:49Z |
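The compat shim above precomputes one `pa_version_under*p0` boolean per floor version at import time, with a failed import setting every flag to `True` so version-gated tests degrade safely. A stdlib-only sketch of that pattern (`version_flags` is a hypothetical helper for illustration, not pandas API):

```python
def version_flags(installed):
    # One boolean per floor version; installed=None models an
    # unimportable library, which makes every guard True.
    floors = ("1.0.0", "2.0.0", "3.0.0", "4.0.0", "5.0.0", "6.0.0")
    if installed is None:
        return {f: True for f in floors}

    def parse(v):
        return tuple(int(p) for p in v.split("."))

    return {f: parse(installed) < parse(f) for f in floors}

flags = version_flags("6.0.1")
# flags["6.0.0"] is False, so a test can switch to the pyarrow>=6.0
# expectation (time inference) exactly as the diff above does.
```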
REF/TST: collect index tests | diff --git a/pandas/tests/indexes/datetimelike_/test_is_monotonic.py b/pandas/tests/indexes/datetimelike_/test_is_monotonic.py
new file mode 100644
index 0000000000000..22247c982edbc
--- /dev/null
+++ b/pandas/tests/indexes/datetimelike_/test_is_monotonic.py
@@ -0,0 +1,46 @@
+from pandas import (
+ Index,
+ NaT,
+ date_range,
+)
+
+
+def test_is_monotonic_with_nat():
+ # GH#31437
+ # PeriodIndex.is_monotonic should behave analogously to DatetimeIndex,
+ # in particular never be monotonic when we have NaT
+ dti = date_range("2016-01-01", periods=3)
+ pi = dti.to_period("D")
+ tdi = Index(dti.view("timedelta64[ns]"))
+
+ for obj in [pi, pi._engine, dti, dti._engine, tdi, tdi._engine]:
+ if isinstance(obj, Index):
+ # i.e. not Engines
+ assert obj.is_monotonic
+ assert obj.is_monotonic_increasing
+ assert not obj.is_monotonic_decreasing
+ assert obj.is_unique
+
+ dti1 = dti.insert(0, NaT)
+ pi1 = dti1.to_period("D")
+ tdi1 = Index(dti1.view("timedelta64[ns]"))
+
+ for obj in [pi1, pi1._engine, dti1, dti1._engine, tdi1, tdi1._engine]:
+ if isinstance(obj, Index):
+ # i.e. not Engines
+ assert not obj.is_monotonic
+ assert not obj.is_monotonic_increasing
+ assert not obj.is_monotonic_decreasing
+ assert obj.is_unique
+
+ dti2 = dti.insert(3, NaT)
+ pi2 = dti2.to_period("H")
+ tdi2 = Index(dti2.view("timedelta64[ns]"))
+
+ for obj in [pi2, pi2._engine, dti2, dti2._engine, tdi2, tdi2._engine]:
+ if isinstance(obj, Index):
+ # i.e. not Engines
+ assert not obj.is_monotonic
+ assert not obj.is_monotonic_increasing
+ assert not obj.is_monotonic_decreasing
+ assert obj.is_unique
diff --git a/pandas/tests/indexes/datetimes/methods/test_isocalendar.py b/pandas/tests/indexes/datetimes/methods/test_isocalendar.py
new file mode 100644
index 0000000000000..128a8b3e10eb3
--- /dev/null
+++ b/pandas/tests/indexes/datetimes/methods/test_isocalendar.py
@@ -0,0 +1,20 @@
+from pandas import (
+ DataFrame,
+ DatetimeIndex,
+)
+import pandas._testing as tm
+
+
+def test_isocalendar_returns_correct_values_close_to_new_year_with_tz():
+ # GH#6538: Check that DatetimeIndex and its TimeStamp elements
+ # return the same weekofyear accessor close to new year w/ tz
+ dates = ["2013/12/29", "2013/12/30", "2013/12/31"]
+ dates = DatetimeIndex(dates, tz="Europe/Brussels")
+ result = dates.isocalendar()
+ expected_data_frame = DataFrame(
+ [[2013, 52, 7], [2014, 1, 1], [2014, 1, 2]],
+ columns=["year", "week", "day"],
+ index=dates,
+ dtype="UInt32",
+ )
+ tm.assert_frame_equal(result, expected_data_frame)
diff --git a/pandas/tests/indexes/datetimes/test_asof.py b/pandas/tests/indexes/datetimes/test_asof.py
index c794aefc6a48b..7adc400302cb9 100644
--- a/pandas/tests/indexes/datetimes/test_asof.py
+++ b/pandas/tests/indexes/datetimes/test_asof.py
@@ -1,8 +1,12 @@
+from datetime import timedelta
+
from pandas import (
Index,
Timestamp,
date_range,
+ isna,
)
+import pandas._testing as tm
class TestAsOf:
@@ -12,3 +16,16 @@ def test_asof_partial(self):
result = index.asof("2010-02")
assert result == expected
assert not isinstance(result, Index)
+
+ def test_asof(self):
+ index = tm.makeDateIndex(100)
+
+ dt = index[0]
+ assert index.asof(dt) == dt
+ assert isna(index.asof(dt - timedelta(1)))
+
+ dt = index[-1]
+ assert index.asof(dt + timedelta(1)) == dt
+
+ dt = index[0].to_pydatetime()
+ assert isinstance(index.asof(dt), Timestamp)
diff --git a/pandas/tests/indexes/datetimes/test_freq_attr.py b/pandas/tests/indexes/datetimes/test_freq_attr.py
new file mode 100644
index 0000000000000..f5821a316358d
--- /dev/null
+++ b/pandas/tests/indexes/datetimes/test_freq_attr.py
@@ -0,0 +1,61 @@
+import pytest
+
+from pandas import (
+ DatetimeIndex,
+ date_range,
+)
+
+from pandas.tseries.offsets import (
+ BDay,
+ DateOffset,
+ Day,
+ Hour,
+)
+
+
+class TestFreq:
+ def test_freq_setter_errors(self):
+ # GH#20678
+ idx = DatetimeIndex(["20180101", "20180103", "20180105"])
+
+ # setting with an incompatible freq
+ msg = (
+ "Inferred frequency 2D from passed values does not conform to "
+ "passed frequency 5D"
+ )
+ with pytest.raises(ValueError, match=msg):
+ idx._data.freq = "5D"
+
+ # setting with non-freq string
+ with pytest.raises(ValueError, match="Invalid frequency"):
+ idx._data.freq = "foo"
+
+ @pytest.mark.parametrize("values", [["20180101", "20180103", "20180105"], []])
+ @pytest.mark.parametrize("freq", ["2D", Day(2), "2B", BDay(2), "48H", Hour(48)])
+ @pytest.mark.parametrize("tz", [None, "US/Eastern"])
+ def test_freq_setter(self, values, freq, tz):
+ # GH#20678
+ idx = DatetimeIndex(values, tz=tz)
+
+ # can set to an offset, converting from string if necessary
+ idx._data.freq = freq
+ assert idx.freq == freq
+ assert isinstance(idx.freq, DateOffset)
+
+ # can reset to None
+ idx._data.freq = None
+ assert idx.freq is None
+
+ def test_freq_view_safe(self):
+ # Setting the freq for one DatetimeIndex shouldn't alter the freq
+ # for another that views the same data
+
+ dti = date_range("2016-01-01", periods=5)
+ dta = dti._data
+
+ dti2 = DatetimeIndex(dta)._with_freq(None)
+ assert dti2.freq is None
+
+ # Original was not altered
+ assert dti.freq == "D"
+ assert dta.freq == "D"
diff --git a/pandas/tests/indexes/datetimes/test_misc.py b/pandas/tests/indexes/datetimes/test_misc.py
index f0757d0ba555e..44c353315562a 100644
--- a/pandas/tests/indexes/datetimes/test_misc.py
+++ b/pandas/tests/indexes/datetimes/test_misc.py
@@ -297,21 +297,6 @@ def test_week_and_weekofyear_are_deprecated():
idx.weekofyear
-def test_isocalendar_returns_correct_values_close_to_new_year_with_tz():
- # GH 6538: Check that DatetimeIndex and its TimeStamp elements
- # return the same weekofyear accessor close to new year w/ tz
- dates = ["2013/12/29", "2013/12/30", "2013/12/31"]
- dates = DatetimeIndex(dates, tz="Europe/Brussels")
- result = dates.isocalendar()
- expected_data_frame = pd.DataFrame(
- [[2013, 52, 7], [2014, 1, 1], [2014, 1, 2]],
- columns=["year", "week", "day"],
- index=dates,
- dtype="UInt32",
- )
- tm.assert_frame_equal(result, expected_data_frame)
-
-
def test_add_timedelta_preserves_freq():
# GH#37295 should hold for any DTI with freq=None or Tick freq
tz = "Canada/Eastern"
diff --git a/pandas/tests/indexes/datetimes/test_ops.py b/pandas/tests/indexes/datetimes/test_ops.py
index 7df94b5820e5d..d6ef4198fad2e 100644
--- a/pandas/tests/indexes/datetimes/test_ops.py
+++ b/pandas/tests/indexes/datetimes/test_ops.py
@@ -6,43 +6,17 @@
from pandas.compat import IS64
from pandas import (
- DateOffset,
DatetimeIndex,
Index,
- Series,
bdate_range,
date_range,
)
import pandas._testing as tm
-from pandas.tseries.offsets import (
- BDay,
- Day,
- Hour,
-)
-
START, END = datetime(2009, 1, 1), datetime(2010, 1, 1)
class TestDatetimeIndexOps:
- def test_ops_properties_basic(self, datetime_series):
-
- # sanity check that the behavior didn't change
- # GH#7206
- for op in ["year", "day", "second", "weekday"]:
- msg = f"'Series' object has no attribute '{op}'"
- with pytest.raises(AttributeError, match=msg):
- getattr(datetime_series, op)
-
- # attribute access should still work!
- s = Series({"year": 2000, "month": 1, "day": 10})
- assert s.year == 2000
- assert s.month == 1
- assert s.day == 10
- msg = "'Series' object has no attribute 'weekday'"
- with pytest.raises(AttributeError, match=msg):
- s.weekday
-
@pytest.mark.parametrize(
"freq,expected",
[
@@ -74,72 +48,28 @@ def test_infer_freq(self, freq_sample):
tm.assert_index_equal(idx, result)
assert result.freq == freq_sample
- @pytest.mark.parametrize("values", [["20180101", "20180103", "20180105"], []])
- @pytest.mark.parametrize("freq", ["2D", Day(2), "2B", BDay(2), "48H", Hour(48)])
- @pytest.mark.parametrize("tz", [None, "US/Eastern"])
- def test_freq_setter(self, values, freq, tz):
- # GH 20678
- idx = DatetimeIndex(values, tz=tz)
-
- # can set to an offset, converting from string if necessary
- idx._data.freq = freq
- assert idx.freq == freq
- assert isinstance(idx.freq, DateOffset)
-
- # can reset to None
- idx._data.freq = None
- assert idx.freq is None
-
- def test_freq_setter_errors(self):
- # GH 20678
- idx = DatetimeIndex(["20180101", "20180103", "20180105"])
-
- # setting with an incompatible freq
- msg = (
- "Inferred frequency 2D from passed values does not conform to "
- "passed frequency 5D"
- )
- with pytest.raises(ValueError, match=msg):
- idx._data.freq = "5D"
-
- # setting with non-freq string
- with pytest.raises(ValueError, match="Invalid frequency"):
- idx._data.freq = "foo"
-
- def test_freq_view_safe(self):
- # Setting the freq for one DatetimeIndex shouldn't alter the freq
- # for another that views the same data
-
- dti = date_range("2016-01-01", periods=5)
- dta = dti._data
-
- dti2 = DatetimeIndex(dta)._with_freq(None)
- assert dti2.freq is None
-
- # Original was not altered
- assert dti.freq == "D"
- assert dta.freq == "D"
-
+@pytest.mark.parametrize("freq", ["B", "C"])
class TestBusinessDatetimeIndex:
- def setup_method(self, method):
- self.rng = bdate_range(START, END)
+ @pytest.fixture
+ def rng(self, freq):
+ return bdate_range(START, END, freq=freq)
- def test_comparison(self):
- d = self.rng[10]
+ def test_comparison(self, rng):
+ d = rng[10]
- comp = self.rng > d
+ comp = rng > d
assert comp[11]
assert not comp[9]
- def test_copy(self):
- cp = self.rng.copy()
+ def test_copy(self, rng):
+ cp = rng.copy()
repr(cp)
- tm.assert_index_equal(cp, self.rng)
+ tm.assert_index_equal(cp, rng)
- def test_identical(self):
- t1 = self.rng.copy()
- t2 = self.rng.copy()
+ def test_identical(self, rng):
+ t1 = rng.copy()
+ t2 = rng.copy()
assert t1.identical(t2)
# name
@@ -153,20 +83,3 @@ def test_identical(self):
t2v = Index(t2.values)
assert t1.equals(t2v)
assert not t1.identical(t2v)
-
-
-class TestCustomDatetimeIndex:
- def setup_method(self, method):
- self.rng = bdate_range(START, END, freq="C")
-
- def test_comparison(self):
- d = self.rng[10]
-
- comp = self.rng > d
- assert comp[11]
- assert not comp[9]
-
- def test_copy(self):
- cp = self.rng.copy()
- repr(cp)
- tm.assert_index_equal(cp, self.rng)
diff --git a/pandas/tests/indexes/period/test_freq_attr.py b/pandas/tests/indexes/period/test_freq_attr.py
new file mode 100644
index 0000000000000..3bf3e700e5e72
--- /dev/null
+++ b/pandas/tests/indexes/period/test_freq_attr.py
@@ -0,0 +1,21 @@
+import pytest
+
+from pandas import (
+ offsets,
+ period_range,
+)
+import pandas._testing as tm
+
+
+class TestFreq:
+ def test_freq_setter_deprecated(self):
+ # GH#20678
+ idx = period_range("2018Q1", periods=4, freq="Q")
+
+ # no warning for getter
+ with tm.assert_produces_warning(None):
+ idx.freq
+
+ # warning for setter
+ with pytest.raises(AttributeError, match="can't set attribute"):
+ idx.freq = offsets.Day()
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index a7dad4e7f352c..f07107e9d3277 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -38,12 +38,6 @@ def index(self, request):
def test_pickle_compat_construction(self):
super().test_pickle_compat_construction()
- @pytest.mark.parametrize("freq", ["D", "M", "A"])
- def test_pickle_round_trip(self, freq):
- idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.NaN], freq=freq)
- result = tm.round_trip_pickle(idx)
- tm.assert_index_equal(result, idx)
-
def test_where(self):
# This is handled in test_indexing
pass
@@ -307,13 +301,6 @@ def test_with_multi_index(self):
assert isinstance(s.index.values[0][0], Period)
- def test_pickle_freq(self):
- # GH2891
- prng = period_range("1/1/2011", "1/1/2012", freq="M")
- new_prng = tm.round_trip_pickle(prng)
- assert new_prng.freq == offsets.MonthEnd()
- assert new_prng.freqstr == "M"
-
def test_map(self):
# test_map_dictlike generally tests
@@ -341,47 +328,6 @@ def test_maybe_convert_timedelta():
pi._maybe_convert_timedelta(offset)
-def test_is_monotonic_with_nat():
- # GH#31437
- # PeriodIndex.is_monotonic should behave analogously to DatetimeIndex,
- # in particular never be monotonic when we have NaT
- dti = date_range("2016-01-01", periods=3)
- pi = dti.to_period("D")
- tdi = Index(dti.view("timedelta64[ns]"))
-
- for obj in [pi, pi._engine, dti, dti._engine, tdi, tdi._engine]:
- if isinstance(obj, Index):
- # i.e. not Engines
- assert obj.is_monotonic
- assert obj.is_monotonic_increasing
- assert not obj.is_monotonic_decreasing
- assert obj.is_unique
-
- dti1 = dti.insert(0, NaT)
- pi1 = dti1.to_period("D")
- tdi1 = Index(dti1.view("timedelta64[ns]"))
-
- for obj in [pi1, pi1._engine, dti1, dti1._engine, tdi1, tdi1._engine]:
- if isinstance(obj, Index):
- # i.e. not Engines
- assert not obj.is_monotonic
- assert not obj.is_monotonic_increasing
- assert not obj.is_monotonic_decreasing
- assert obj.is_unique
-
- dti2 = dti.insert(3, NaT)
- pi2 = dti2.to_period("H")
- tdi2 = Index(dti2.view("timedelta64[ns]"))
-
- for obj in [pi2, pi2._engine, dti2, dti2._engine, tdi2, tdi2._engine]:
- if isinstance(obj, Index):
- # i.e. not Engines
- assert not obj.is_monotonic
- assert not obj.is_monotonic_increasing
- assert not obj.is_monotonic_decreasing
- assert obj.is_unique
-
-
@pytest.mark.parametrize("array", [True, False])
def test_dunder_array(array):
obj = PeriodIndex(["2000-01-01", "2001-01-01"], freq="D")
diff --git a/pandas/tests/indexes/period/test_pickle.py b/pandas/tests/indexes/period/test_pickle.py
new file mode 100644
index 0000000000000..82f906d1e361f
--- /dev/null
+++ b/pandas/tests/indexes/period/test_pickle.py
@@ -0,0 +1,26 @@
+import numpy as np
+import pytest
+
+from pandas import (
+ NaT,
+ PeriodIndex,
+ period_range,
+)
+import pandas._testing as tm
+
+from pandas.tseries import offsets
+
+
+class TestPickle:
+ @pytest.mark.parametrize("freq", ["D", "M", "A"])
+ def test_pickle_round_trip(self, freq):
+ idx = PeriodIndex(["2016-05-16", "NaT", NaT, np.NaN], freq=freq)
+ result = tm.round_trip_pickle(idx)
+ tm.assert_index_equal(result, idx)
+
+ def test_pickle_freq(self):
+ # GH#2891
+ prng = period_range("1/1/2011", "1/1/2012", freq="M")
+ new_prng = tm.round_trip_pickle(prng)
+ assert new_prng.freq == offsets.MonthEnd()
+ assert new_prng.freqstr == "M"
diff --git a/pandas/tests/indexes/period/test_ops.py b/pandas/tests/indexes/period/test_resolution.py
similarity index 56%
rename from pandas/tests/indexes/period/test_ops.py
rename to pandas/tests/indexes/period/test_resolution.py
index 9ebe44fb16c8d..7ecbde75cfa47 100644
--- a/pandas/tests/indexes/period/test_ops.py
+++ b/pandas/tests/indexes/period/test_resolution.py
@@ -1,10 +1,9 @@
import pytest
import pandas as pd
-import pandas._testing as tm
-class TestPeriodIndexOps:
+class TestResolution:
@pytest.mark.parametrize(
"freq,expected",
[
@@ -22,15 +21,3 @@ class TestPeriodIndexOps:
def test_resolution(self, freq, expected):
idx = pd.period_range(start="2013-04-01", periods=30, freq=freq)
assert idx.resolution == expected
-
- def test_freq_setter_deprecated(self):
- # GH 20678
- idx = pd.period_range("2018Q1", periods=4, freq="Q")
-
- # no warning for getter
- with tm.assert_produces_warning(None):
- idx.freq
-
- # warning for setter
- with pytest.raises(AttributeError, match="can't set attribute"):
- idx.freq = pd.offsets.Day()
diff --git a/pandas/tests/indexes/test_any_index.py b/pandas/tests/indexes/test_any_index.py
index 39a1ddcbc8a6a..f7dafd78a801f 100644
--- a/pandas/tests/indexes/test_any_index.py
+++ b/pandas/tests/indexes/test_any_index.py
@@ -84,6 +84,13 @@ def test_is_type_compatible_deprecation(index):
index.is_type_compatible(index.inferred_type)
+def test_is_mixed_deprecated(index):
+ # GH#32922
+ msg = "Index.is_mixed is deprecated"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ index.is_mixed()
+
+
class TestConversion:
def test_to_series(self, index):
# assert that we are creating a copy of the index
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 50be69fb93d7c..7f9a5c0b50595 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -29,7 +29,6 @@
TimedeltaIndex,
Timestamp,
date_range,
- isna,
period_range,
)
import pandas._testing as tm
@@ -395,15 +394,6 @@ def test_constructor_empty_special(self, empty, klass):
assert isinstance(empty, klass)
assert not len(empty)
- def test_constructor_overflow_int64(self):
- # see gh-15832
- msg = (
- "The elements provided in the data cannot "
- "all be casted to the dtype int64"
- )
- with pytest.raises(OverflowError, match=msg):
- Index([np.iinfo(np.uint64).max - 1], dtype="int64")
-
@pytest.mark.parametrize(
"index",
[
@@ -502,18 +492,6 @@ def test_is_(self):
ind2 = Index(arr, copy=False)
assert not ind1.is_(ind2)
- @pytest.mark.parametrize("index", ["datetime"], indirect=True)
- def test_asof(self, index):
- d = index[0]
- assert index.asof(d) == d
- assert isna(index.asof(d - timedelta(1)))
-
- d = index[-1]
- assert index.asof(d + timedelta(1)) == d
-
- d = index[0].to_pydatetime()
- assert isinstance(index.asof(d), Timestamp)
-
def test_asof_numeric_vs_bool_raises(self):
left = Index([1, 2, 3])
right = Index([True, False])
@@ -699,12 +677,6 @@ def test_append_empty_preserve_name(self, name, expected):
result = left.append(right)
assert result.name == expected
- def test_is_mixed_deprecated(self, simple_index):
- # GH#32922
- index = simple_index
- with tm.assert_produces_warning(FutureWarning):
- index.is_mixed()
-
@pytest.mark.parametrize(
"index, expected",
[
diff --git a/pandas/tests/indexes/test_index_new.py b/pandas/tests/indexes/test_index_new.py
index 293aa6dd57124..5c5ec7219d2d7 100644
--- a/pandas/tests/indexes/test_index_new.py
+++ b/pandas/tests/indexes/test_index_new.py
@@ -272,3 +272,14 @@ def __array__(self, dtype=None) -> np.ndarray:
expected = Index(array)
result = Index(ArrayLike(array))
tm.assert_index_equal(result, expected)
+
+
+class TestIndexConstructionErrors:
+ def test_constructor_overflow_int64(self):
+ # see GH#15832
+ msg = (
+ "The elements provided in the data cannot "
+ "all be casted to the dtype int64"
+ )
+ with pytest.raises(OverflowError, match=msg):
+ Index([np.iinfo(np.uint64).max - 1], dtype="int64")
diff --git a/pandas/tests/indexes/timedeltas/test_freq_attr.py b/pandas/tests/indexes/timedeltas/test_freq_attr.py
new file mode 100644
index 0000000000000..39b9c11aa833c
--- /dev/null
+++ b/pandas/tests/indexes/timedeltas/test_freq_attr.py
@@ -0,0 +1,61 @@
+import pytest
+
+from pandas import TimedeltaIndex
+
+from pandas.tseries.offsets import (
+ DateOffset,
+ Day,
+ Hour,
+)
+
+
+class TestFreq:
+ @pytest.mark.parametrize("values", [["0 days", "2 days", "4 days"], []])
+ @pytest.mark.parametrize("freq", ["2D", Day(2), "48H", Hour(48)])
+ def test_freq_setter(self, values, freq):
+ # GH#20678
+ idx = TimedeltaIndex(values)
+
+ # can set to an offset, converting from string if necessary
+ idx._data.freq = freq
+ assert idx.freq == freq
+ assert isinstance(idx.freq, DateOffset)
+
+ # can reset to None
+ idx._data.freq = None
+ assert idx.freq is None
+
+ def test_freq_setter_errors(self):
+ # GH#20678
+ idx = TimedeltaIndex(["0 days", "2 days", "4 days"])
+
+ # setting with an incompatible freq
+ msg = (
+ "Inferred frequency 2D from passed values does not conform to "
+ "passed frequency 5D"
+ )
+ with pytest.raises(ValueError, match=msg):
+ idx._data.freq = "5D"
+
+ # setting with a non-fixed frequency
+ msg = r"<2 \* BusinessDays> is a non-fixed frequency"
+ with pytest.raises(ValueError, match=msg):
+ idx._data.freq = "2B"
+
+ # setting with non-freq string
+ with pytest.raises(ValueError, match="Invalid frequency"):
+ idx._data.freq = "foo"
+
+ def test_freq_view_safe(self):
+ # Setting the freq for one TimedeltaIndex shouldn't alter the freq
+ # for another that views the same data
+
+ tdi = TimedeltaIndex(["0 days", "2 days", "4 days"], freq="2D")
+ tda = tdi._data
+
+ tdi2 = TimedeltaIndex(tda)._with_freq(None)
+ assert tdi2.freq is None
+
+ # Original was not altered
+ assert tdi.freq == "2D"
+ assert tda.freq == "2D"
diff --git a/pandas/tests/indexes/timedeltas/test_indexing.py b/pandas/tests/indexes/timedeltas/test_indexing.py
index fc8abb83ed302..66fdaa2778600 100644
--- a/pandas/tests/indexes/timedeltas/test_indexing.py
+++ b/pandas/tests/indexes/timedeltas/test_indexing.py
@@ -340,3 +340,17 @@ def test_slice_invalid_str_with_timedeltaindex(
indexer_sl(obj)[:"foo"]
with pytest.raises(TypeError, match=msg):
indexer_sl(obj)[tdi[0] : "foo"]
+
+
+class TestContains:
+ def test_contains_nonunique(self):
+ # GH#9512
+ for vals in (
+ [0, 1, 0],
+ [0, 0, -1],
+ [0, -1, -1],
+ ["00:01:00", "00:01:00", "00:02:00"],
+ ["00:01:00", "00:01:00", "00:00:01"],
+ ):
+ idx = TimedeltaIndex(vals)
+ assert idx[0] in idx
diff --git a/pandas/tests/indexes/timedeltas/test_ops.py b/pandas/tests/indexes/timedeltas/test_ops.py
index f5d601bcfbcd1..f6013baf86edc 100644
--- a/pandas/tests/indexes/timedeltas/test_ops.py
+++ b/pandas/tests/indexes/timedeltas/test_ops.py
@@ -1,86 +1,14 @@
-import pytest
-
from pandas import (
TimedeltaIndex,
timedelta_range,
)
import pandas._testing as tm
-from pandas.tseries.offsets import (
- DateOffset,
- Day,
- Hour,
-)
-
class TestTimedeltaIndexOps:
- def test_nonunique_contains(self):
- # GH 9512
- for idx in map(
- TimedeltaIndex,
- (
- [0, 1, 0],
- [0, 0, -1],
- [0, -1, -1],
- ["00:01:00", "00:01:00", "00:02:00"],
- ["00:01:00", "00:01:00", "00:00:01"],
- ),
- ):
- assert idx[0] in idx
-
def test_infer_freq(self, freq_sample):
# GH#11018
idx = timedelta_range("1", freq=freq_sample, periods=10)
result = TimedeltaIndex(idx.asi8, freq="infer")
tm.assert_index_equal(idx, result)
assert result.freq == freq_sample
-
- @pytest.mark.parametrize("values", [["0 days", "2 days", "4 days"], []])
- @pytest.mark.parametrize("freq", ["2D", Day(2), "48H", Hour(48)])
- def test_freq_setter(self, values, freq):
- # GH 20678
- idx = TimedeltaIndex(values)
-
- # can set to an offset, converting from string if necessary
- idx._data.freq = freq
- assert idx.freq == freq
- assert isinstance(idx.freq, DateOffset)
-
- # can reset to None
- idx._data.freq = None
- assert idx.freq is None
-
- def test_freq_setter_errors(self):
- # GH 20678
- idx = TimedeltaIndex(["0 days", "2 days", "4 days"])
-
- # setting with an incompatible freq
- msg = (
- "Inferred frequency 2D from passed values does not conform to "
- "passed frequency 5D"
- )
- with pytest.raises(ValueError, match=msg):
- idx._data.freq = "5D"
-
- # setting with a non-fixed frequency
- msg = r"<2 \* BusinessDays> is a non-fixed frequency"
- with pytest.raises(ValueError, match=msg):
- idx._data.freq = "2B"
-
- # setting with non-freq string
- with pytest.raises(ValueError, match="Invalid frequency"):
- idx._data.freq = "foo"
-
- def test_freq_view_safe(self):
- # Setting the freq for one TimedeltaIndex shouldn't alter the freq
- # for another that views the same data
-
- tdi = TimedeltaIndex(["0 days", "2 days", "4 days"], freq="2D")
- tda = tdi._data
-
- tdi2 = TimedeltaIndex(tda)._with_freq(None)
- assert tdi2.freq is None
-
- # Original was not altered
- assert tdi.freq == "2D"
- assert tda.freq == "2D"
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index aaf98e46f2f09..4e4eb89328540 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -191,3 +191,20 @@ def test_unknown_attribute(self):
msg = "'Series' object has no attribute 'foo'"
with pytest.raises(AttributeError, match=msg):
ser.foo
+
+ def test_datetime_series_no_datelike_attrs(self, datetime_series):
+ # GH#7206
+ for op in ["year", "day", "second", "weekday"]:
+ msg = f"'Series' object has no attribute '{op}'"
+ with pytest.raises(AttributeError, match=msg):
+ getattr(datetime_series, op)
+
+ def test_series_datetimelike_attribute_access(self):
+ # attribute access should still work!
+ ser = Series({"year": 2000, "month": 1, "day": 10})
+ assert ser.year == 2000
+ assert ser.month == 1
+ assert ser.day == 10
+ msg = "'Series' object has no attribute 'weekday'"
+ with pytest.raises(AttributeError, match=msg):
+ ser.weekday
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44377 | 2021-11-09T23:05:55Z | 2021-11-11T23:42:16Z | 2021-11-11T23:42:16Z | 2021-11-11T23:51:23Z |
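The pickle tests moved by the PR above rely on `tm.round_trip_pickle`, which serializes to a temporary file and reads it back. A minimal standalone sketch of the same round trip (in-memory via `pickle.dumps`/`pickle.loads` rather than a temp file — that simplification is an assumption of this sketch, not what the pandas helper does):

```python
import pickle

import pandas as pd
from pandas.tseries import offsets

# Build a monthly PeriodIndex and round-trip it through pickle,
# mirroring test_pickle_freq in the diff above (GH#2891).
prng = pd.period_range("1/1/2011", "1/1/2012", freq="M")
new_prng = pickle.loads(pickle.dumps(prng))

# The frequency metadata survives the round trip.
assert new_prng.freq == offsets.MonthEnd()
assert new_prng.freqstr == "M"
assert new_prng.equals(prng)
```

The point of asserting `freq` and `freqstr` separately is that a lossy serializer could reconstruct the values while dropping the attached offset object.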
CLN: misplaced indexing tests | diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index 44a70d3933b66..bff461dbc7038 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -359,6 +359,39 @@ def test_dt64arr_timestamp_equality(self, box_with_array):
expected = tm.box_expected([False, False, False], xbox)
tm.assert_equal(result, expected)
+ @pytest.mark.parametrize(
+ "datetimelike",
+ [
+ Timestamp("20130101"),
+ datetime(2013, 1, 1),
+ np.datetime64("2013-01-01T00:00", "ns"),
+ ],
+ )
+ @pytest.mark.parametrize(
+ "op,expected",
+ [
+ (operator.lt, [True, False, False, False]),
+ (operator.le, [True, True, False, False]),
+ (operator.eq, [False, True, False, False]),
+ (operator.gt, [False, False, False, True]),
+ ],
+ )
+ def test_dt64_compare_datetime_scalar(self, datetimelike, op, expected):
+ # GH#17965, test for ability to compare datetime64[ns] columns
+ # to datetimelike
+ ser = Series(
+ [
+ Timestamp("20120101"),
+ Timestamp("20130101"),
+ np.nan,
+ Timestamp("20130103"),
+ ],
+ name="A",
+ )
+ result = op(ser, datetimelike)
+ expected = Series(expected, name="A")
+ tm.assert_series_equal(result, expected)
+
class TestDatetimeIndexComparisons:
diff --git a/pandas/tests/indexes/datetimes/test_partial_slicing.py b/pandas/tests/indexes/datetimes/test_partial_slicing.py
index 896c43db5e356..2f32f9e18311d 100644
--- a/pandas/tests/indexes/datetimes/test_partial_slicing.py
+++ b/pandas/tests/indexes/datetimes/test_partial_slicing.py
@@ -1,7 +1,6 @@
""" test partial slicing on Series/Frame """
from datetime import datetime
-import operator
import numpy as np
import pytest
@@ -412,40 +411,6 @@ def test_loc_datetime_length_one(self):
result = df.loc["2016-10-01T00:00:00":]
tm.assert_frame_equal(result, df)
- @pytest.mark.parametrize(
- "datetimelike",
- [
- Timestamp("20130101"),
- datetime(2013, 1, 1),
- np.datetime64("2013-01-01T00:00", "ns"),
- ],
- )
- @pytest.mark.parametrize(
- "op,expected",
- [
- (operator.lt, [True, False, False, False]),
- (operator.le, [True, True, False, False]),
- (operator.eq, [False, True, False, False]),
- (operator.gt, [False, False, False, True]),
- ],
- )
- def test_selection_by_datetimelike(self, datetimelike, op, expected):
- # GH issue #17965, test for ability to compare datetime64[ns] columns
- # to datetimelike
- df = DataFrame(
- {
- "A": [
- Timestamp("20120101"),
- Timestamp("20130101"),
- np.nan,
- Timestamp("20130103"),
- ]
- }
- )
- result = op(df.A, datetimelike)
- expected = Series(expected, name="A")
- tm.assert_series_equal(result, expected)
-
@pytest.mark.parametrize(
"start",
[
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index dfa750bf933a0..1b5e64bca03a0 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -205,6 +205,7 @@ def test_getitem_seconds(self):
# GH7116
# these show deprecations as we are trying
# to slice with non-integer indexers
+ # FIXME: don't leave commented-out
# with pytest.raises(IndexError):
# idx[v]
continue
@@ -814,12 +815,6 @@ def test_get_value(self):
result2 = idx2.get_value(input2, p1)
tm.assert_series_equal(result2, expected2)
- def test_loc_str(self):
- # https://github.com/pandas-dev/pandas/issues/33964
- index = period_range(start="2000", periods=20, freq="B")
- series = Series(range(20), index=index)
- assert series.loc["2000-01-14"] == 9
-
@pytest.mark.parametrize("freq", ["H", "D"])
def test_get_value_datetime_hourly(self, freq):
# get_loc and get_value should treat datetime objects symmetrically
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index e6c31d22e626f..a7dad4e7f352c 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -211,7 +211,7 @@ def _check_all_fields(self, periodindex):
]
periods = list(periodindex)
- s = Series(periodindex)
+ ser = Series(periodindex)
for field in fields:
field_idx = getattr(periodindex, field)
@@ -219,10 +219,10 @@ def _check_all_fields(self, periodindex):
for x, val in zip(periods, field_idx):
assert getattr(x, field) == val
- if len(s) == 0:
+ if len(ser) == 0:
continue
- field_s = getattr(s.dt, field)
+ field_s = getattr(ser.dt, field)
assert len(periodindex) == len(field_s)
for x, val in zip(periods, field_s):
assert getattr(x, field) == val
diff --git a/pandas/tests/indexes/timedeltas/test_ops.py b/pandas/tests/indexes/timedeltas/test_ops.py
index 2a5051b2982bb..f5d601bcfbcd1 100644
--- a/pandas/tests/indexes/timedeltas/test_ops.py
+++ b/pandas/tests/indexes/timedeltas/test_ops.py
@@ -1,8 +1,6 @@
-import numpy as np
import pytest
from pandas import (
- Series,
TimedeltaIndex,
timedelta_range,
)
@@ -30,15 +28,6 @@ def test_nonunique_contains(self):
):
assert idx[0] in idx
- def test_unknown_attribute(self):
- # see gh-9680
- tdi = timedelta_range(start=0, periods=10, freq="1s")
- ts = Series(np.random.normal(size=10), index=tdi)
- assert "foo" not in ts.__dict__.keys()
- msg = "'Series' object has no attribute 'foo'"
- with pytest.raises(AttributeError, match=msg):
- ts.foo
-
def test_infer_freq(self, freq_sample):
# GH#11018
idx = timedelta_range("1", freq=freq_sample, periods=10)
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index b0aa05371271b..ed9b5cc0850b9 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -2941,3 +2941,9 @@ def test_loc_set_multiple_items_in_multiple_new_columns(self):
)
tm.assert_frame_equal(df, expected)
+
+ def test_getitem_loc_str_periodindex(self):
+ # GH#33964
+ index = pd.period_range(start="2000", periods=20, freq="B")
+ series = Series(range(20), index=index)
+ assert series.loc["2000-01-14"] == 9
diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
index d77f831bee8bc..6c3587c7eeada 100644
--- a/pandas/tests/series/indexing/test_indexing.py
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -377,17 +377,3 @@ def test_frozenset_index():
assert s[idx1] == 2
s[idx1] = 3
assert s[idx1] == 3
-
-
-def test_boolean_index():
- # GH18579
- s1 = Series([1, 2, 3], index=[4, 5, 6])
- s2 = Series([1, 3, 2], index=s1 == 2)
- tm.assert_series_equal(Series([1, 3, 2], [False, True, False]), s2)
-
-
-def test_index_ndim_gt_1_raises():
- # GH18579
- df = DataFrame([[1, 2], [3, 4], [5, 6]], index=[3, 6, 9])
- with pytest.raises(ValueError, match="Index data must be 1-dimensional"):
- Series([1, 3, 2], index=df)
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index b49c209a59a06..aaf98e46f2f09 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -182,3 +182,12 @@ def test_inspect_getmembers(self):
ser = Series(dtype=object)
with tm.assert_produces_warning(None):
inspect.getmembers(ser)
+
+ def test_unknown_attribute(self):
+ # GH#9680
+ tdi = pd.timedelta_range(start=0, periods=10, freq="1s")
+ ser = Series(np.random.normal(size=10), index=tdi)
+ assert "foo" not in ser.__dict__.keys()
+ msg = "'Series' object has no attribute 'foo'"
+ with pytest.raises(AttributeError, match=msg):
+ ser.foo
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 2c33284df18c5..1b488b4cf0b77 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -154,6 +154,12 @@ def test_constructor(self, datetime_series):
with pytest.raises(NotImplementedError, match=msg):
Series(m)
+ def test_constructor_index_ndim_gt_1_raises(self):
+ # GH#18579
+ df = DataFrame([[1, 2], [3, 4], [5, 6]], index=[3, 6, 9])
+ with pytest.raises(ValueError, match="Index data must be 1-dimensional"):
+ Series([1, 3, 2], index=df)
+
@pytest.mark.parametrize("input_class", [list, dict, OrderedDict])
def test_constructor_empty(self, input_class):
with tm.assert_produces_warning(FutureWarning):
@@ -276,6 +282,15 @@ def test_constructor_list_like(self):
result = Series(obj, index=[0, 1, 2])
tm.assert_series_equal(result, expected)
+ def test_constructor_boolean_index(self):
+ # GH#18579
+ s1 = Series([1, 2, 3], index=[4, 5, 6])
+
+ index = s1 == 2
+ result = Series([1, 3, 2], index=index)
+ expected = Series([1, 3, 2], index=[False, True, False])
+ tm.assert_series_equal(result, expected)
+
@pytest.mark.parametrize("dtype", ["bool", "int32", "int64", "float64"])
def test_constructor_index_dtype(self, dtype):
# GH 17088
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/44375 | 2021-11-09T22:17:05Z | 2021-11-11T13:40:52Z | 2021-11-11T13:40:52Z | 2021-11-11T15:12:14Z |
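The comparison test relocated by the PR above (from the partial-slicing tests into `test_datetime64.py`) checks that a `datetime64[ns]` Series containing a missing value compares elementwise against a datetime-like scalar. A standalone sketch of the behavior being exercised:

```python
import operator

import numpy as np
import pandas as pd

# A datetime64[ns] Series with one missing value (np.nan becomes NaT).
ser = pd.Series(
    [
        pd.Timestamp("20120101"),
        pd.Timestamp("20130101"),
        np.nan,
        pd.Timestamp("20130103"),
    ],
    name="A",
)

# NaT compares False under every ordering/equality operator,
# so only the first two entries satisfy <=.
result = operator.le(ser, pd.Timestamp("20130101"))
assert result.tolist() == [True, True, False, False]
```

The same pattern holds for `operator.lt`, `operator.eq`, and `operator.gt` with the expected lists parametrized in the moved test.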
TST: Make tests for groupby median/mean more strict on dtype | diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index 3c402480ea2ec..e5870a206f419 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -394,8 +394,7 @@ def test_median_empty_bins(observed):
result = df.groupby(bins, observed=observed).median()
expected = df.groupby(bins, observed=observed).agg(lambda x: x.median())
- # TODO: GH 41137
- tm.assert_frame_equal(result, expected, check_dtype=False)
+ tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py
index 8436c2db445ee..34e8e2ac3e84a 100644
--- a/pandas/tests/resample/test_datetime_index.py
+++ b/pandas/tests/resample/test_datetime_index.py
@@ -1692,8 +1692,6 @@ def f(data, add_arg):
df = DataFrame({"A": 1, "B": 2}, index=date_range("2017", periods=10))
result = df.groupby("A").resample("D").agg(f, multiplier).astype(float)
expected = df.groupby("A").resample("D").mean().multiply(multiplier)
- # TODO: GH 41137
- expected = expected.astype("float64")
tm.assert_frame_equal(result, expected)
| - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
Followup to #41139. | https://api.github.com/repos/pandas-dev/pandas/pulls/44374 | 2021-11-09T22:06:34Z | 2021-11-11T00:17:10Z | 2021-11-11T00:17:09Z | 2021-11-11T22:27:28Z |
TST: split/collect partial tests | diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py
index c487777fc339e..82d55a7bf7189 100644
--- a/pandas/tests/indexing/test_partial.py
+++ b/pandas/tests/indexing/test_partial.py
@@ -22,6 +22,213 @@
import pandas._testing as tm
+class TestEmptyFrameSetitemExpansion:
+ def test_empty_frame_setitem_index_name_retained(self):
+ # GH#31368 empty frame has non-None index.name -> retained
+ df = DataFrame({}, index=pd.RangeIndex(0, name="df_index"))
+ series = Series(1.23, index=pd.RangeIndex(4, name="series_index"))
+
+ df["series"] = series
+ expected = DataFrame(
+ {"series": [1.23] * 4}, index=pd.RangeIndex(4, name="df_index")
+ )
+
+ tm.assert_frame_equal(df, expected)
+
+ def test_empty_frame_setitem_index_name_inherited(self):
+ # GH#36527 empty frame has None index.name -> not retained
+ df = DataFrame()
+ series = Series(1.23, index=pd.RangeIndex(4, name="series_index"))
+ df["series"] = series
+ expected = DataFrame(
+ {"series": [1.23] * 4}, index=pd.RangeIndex(4, name="series_index")
+ )
+ tm.assert_frame_equal(df, expected)
+
+ def test_loc_setitem_zerolen_series_columns_align(self):
+ # columns will align
+ df = DataFrame(columns=["A", "B"])
+ df.loc[0] = Series(1, index=range(4))
+ expected = DataFrame(columns=["A", "B"], index=[0], dtype=np.float64)
+ tm.assert_frame_equal(df, expected)
+
+ # columns will align
+ df = DataFrame(columns=["A", "B"])
+ df.loc[0] = Series(1, index=["B"])
+
+ exp = DataFrame([[np.nan, 1]], columns=["A", "B"], index=[0], dtype="float64")
+ tm.assert_frame_equal(df, exp)
+
+ def test_loc_setitem_zerolen_list_length_must_match_columns(self):
+ # list-like must conform
+ df = DataFrame(columns=["A", "B"])
+
+ msg = "cannot set a row with mismatched columns"
+ with pytest.raises(ValueError, match=msg):
+ df.loc[0] = [1, 2, 3]
+
+ df = DataFrame(columns=["A", "B"])
+ df.loc[3] = [6, 7] # length matches len(df.columns) --> OK!
+
+ exp = DataFrame([[6, 7]], index=[3], columns=["A", "B"], dtype=np.int64)
+ tm.assert_frame_equal(df, exp)
+
+ def test_partial_set_empty_frame(self):
+
+ # partially set with an empty object
+ # frame
+ df = DataFrame()
+
+ msg = "cannot set a frame with no defined columns"
+
+ with pytest.raises(ValueError, match=msg):
+ df.loc[1] = 1
+
+ with pytest.raises(ValueError, match=msg):
+ df.loc[1] = Series([1], index=["foo"])
+
+ msg = "cannot set a frame with no defined index and a scalar"
+ with pytest.raises(ValueError, match=msg):
+ df.loc[:, 1] = 1
+
+ def test_partial_set_empty_frame2(self):
+ # these work as they don't really change
+ # anything but the index
+ # GH#5632
+ expected = DataFrame(columns=["foo"], index=Index([], dtype="object"))
+
+ df = DataFrame(index=Index([], dtype="object"))
+ df["foo"] = Series([], dtype="object")
+
+ tm.assert_frame_equal(df, expected)
+
+ df = DataFrame()
+ df["foo"] = Series(df.index)
+
+ tm.assert_frame_equal(df, expected)
+
+ df = DataFrame()
+ df["foo"] = df.index
+
+ tm.assert_frame_equal(df, expected)
+
+ def test_partial_set_empty_frame3(self):
+ expected = DataFrame(columns=["foo"], index=Index([], dtype="int64"))
+ expected["foo"] = expected["foo"].astype("float64")
+
+ df = DataFrame(index=Index([], dtype="int64"))
+ df["foo"] = []
+
+ tm.assert_frame_equal(df, expected)
+
+ df = DataFrame(index=Index([], dtype="int64"))
+ df["foo"] = Series(np.arange(len(df)), dtype="float64")
+
+ tm.assert_frame_equal(df, expected)
+
+ def test_partial_set_empty_frame4(self):
+ df = DataFrame(index=Index([], dtype="int64"))
+ df["foo"] = range(len(df))
+
+ expected = DataFrame(columns=["foo"], index=Index([], dtype="int64"))
+ # range is int-dtype-like, so we get int64 dtype
+ expected["foo"] = expected["foo"].astype("int64")
+ tm.assert_frame_equal(df, expected)
+
+ def test_partial_set_empty_frame5(self):
+ df = DataFrame()
+ tm.assert_index_equal(df.columns, Index([], dtype=object))
+ df2 = DataFrame()
+ df2[1] = Series([1], index=["foo"])
+ df.loc[:, 1] = Series([1], index=["foo"])
+ tm.assert_frame_equal(df, DataFrame([[1]], index=["foo"], columns=[1]))
+ tm.assert_frame_equal(df, df2)
+
+ def test_partial_set_empty_frame_no_index(self):
+ # no index to start
+ expected = DataFrame({0: Series(1, index=range(4))}, columns=["A", "B", 0])
+
+ df = DataFrame(columns=["A", "B"])
+ df[0] = Series(1, index=range(4))
+ df.dtypes
+ str(df)
+ tm.assert_frame_equal(df, expected)
+
+ df = DataFrame(columns=["A", "B"])
+ df.loc[:, 0] = Series(1, index=range(4))
+ df.dtypes
+ str(df)
+ tm.assert_frame_equal(df, expected)
+
+ def test_partial_set_empty_frame_row(self):
+ # GH#5720, GH#5744
+ # don't create rows when empty
+ expected = DataFrame(columns=["A", "B", "New"], index=Index([], dtype="int64"))
+ expected["A"] = expected["A"].astype("int64")
+ expected["B"] = expected["B"].astype("float64")
+ expected["New"] = expected["New"].astype("float64")
+
+ df = DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
+ y = df[df.A > 5]
+ y["New"] = np.nan
+ tm.assert_frame_equal(y, expected)
+
+ expected = DataFrame(columns=["a", "b", "c c", "d"])
+ expected["d"] = expected["d"].astype("int64")
+ df = DataFrame(columns=["a", "b", "c c"])
+ df["d"] = 3
+ tm.assert_frame_equal(df, expected)
+ tm.assert_series_equal(df["c c"], Series(name="c c", dtype=object))
+
+ # reindex columns is ok
+ df = DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
+ y = df[df.A > 5]
+ result = y.reindex(columns=["A", "B", "C"])
+ expected = DataFrame(columns=["A", "B", "C"], index=Index([], dtype="int64"))
+ expected["A"] = expected["A"].astype("int64")
+ expected["B"] = expected["B"].astype("float64")
+ expected["C"] = expected["C"].astype("float64")
+ tm.assert_frame_equal(result, expected)
+
+ def test_partial_set_empty_frame_set_series(self):
+ # GH#5756
+ # setting with empty Series
+ df = DataFrame(Series(dtype=object))
+ expected = DataFrame({0: Series(dtype=object)})
+ tm.assert_frame_equal(df, expected)
+
+ df = DataFrame(Series(name="foo", dtype=object))
+ expected = DataFrame({"foo": Series(dtype=object)})
+ tm.assert_frame_equal(df, expected)
+
+ def test_partial_set_empty_frame_empty_copy_assignment(self):
+ # GH#5932
+ # copy on empty with assignment fails
+ df = DataFrame(index=[0])
+ df = df.copy()
+ df["a"] = 0
+ expected = DataFrame(0, index=[0], columns=["a"])
+ tm.assert_frame_equal(df, expected)
+
+ def test_partial_set_empty_frame_empty_consistencies(self):
+ # GH#6171
+ # consistency on empty frames
+ df = DataFrame(columns=["x", "y"])
+ df["x"] = [1, 2]
+ expected = DataFrame({"x": [1, 2], "y": [np.nan, np.nan]})
+ tm.assert_frame_equal(df, expected, check_dtype=False)
+
+ df = DataFrame(columns=["x", "y"])
+ df["x"] = ["1", "2"]
+ expected = DataFrame({"x": ["1", "2"], "y": [np.nan, np.nan]}, dtype=object)
+ tm.assert_frame_equal(df, expected)
+
+ df = DataFrame(columns=["x", "y"])
+ df.loc[0, "x"] = 1
+ expected = DataFrame({"x": [1], "y": [np.nan]})
+ tm.assert_frame_equal(df, expected, check_dtype=False)
+
+
class TestPartialSetting:
def test_partial_setting(self):
@@ -61,8 +268,7 @@ def test_partial_setting(self):
with pytest.raises(IndexError, match=msg):
s.iat[3] = 5.0
- # ## frame ##
-
+ def test_partial_setting_frame(self):
df_orig = DataFrame(
np.arange(6).reshape(3, 2), columns=["A", "B"], dtype="int64"
)
@@ -166,33 +372,6 @@ def test_partial_setting_mixed_dtype(self):
df.loc[2] = df.loc[1]
tm.assert_frame_equal(df, expected)
- # columns will align
- df = DataFrame(columns=["A", "B"])
- df.loc[0] = Series(1, index=range(4))
- expected = DataFrame(columns=["A", "B"], index=[0], dtype=np.float64)
- tm.assert_frame_equal(df, expected)
-
- # columns will align
- # TODO: it isn't great that this behavior depends on consolidation
- df = DataFrame(columns=["A", "B"])._consolidate()
- df.loc[0] = Series(1, index=["B"])
-
- exp = DataFrame([[np.nan, 1]], columns=["A", "B"], index=[0], dtype="float64")
- tm.assert_frame_equal(df, exp)
-
- # list-like must conform
- df = DataFrame(columns=["A", "B"])
-
- msg = "cannot set a row with mismatched columns"
- with pytest.raises(ValueError, match=msg):
- df.loc[0] = [1, 2, 3]
-
- df = DataFrame(columns=["A", "B"])
- df.loc[3] = [6, 7]
-
- exp = DataFrame([[6, 7]], index=[3], columns=["A", "B"], dtype=np.int64)
- tm.assert_frame_equal(df, exp)
-
def test_series_partial_set(self):
# partial set with new index
# Regression from GH4825
@@ -352,6 +531,7 @@ def test_setitem_with_expansion_numeric_into_datetimeindex(self, key):
ex_index = Index(list(orig.index) + [key], dtype=object, name=orig.index.name)
ex_data = np.concatenate([orig.values, df.iloc[[0]].values], axis=0)
expected = DataFrame(ex_data, index=ex_index, columns=orig.columns)
+
tm.assert_frame_equal(df, expected)
def test_partial_set_invalid(self):
@@ -369,162 +549,6 @@ def test_partial_set_invalid(self):
tm.assert_index_equal(df.index, Index(orig.index.tolist() + ["a"]))
assert df.index.dtype == "object"
- def test_partial_set_empty_frame(self):
-
- # partially set with an empty object
- # frame
- df = DataFrame()
-
- msg = "cannot set a frame with no defined columns"
-
- with pytest.raises(ValueError, match=msg):
- df.loc[1] = 1
-
- with pytest.raises(ValueError, match=msg):
- df.loc[1] = Series([1], index=["foo"])
-
- msg = "cannot set a frame with no defined index and a scalar"
- with pytest.raises(ValueError, match=msg):
- df.loc[:, 1] = 1
-
- def test_partial_set_empty_frame2(self):
- # these work as they don't really change
- # anything but the index
- # GH5632
- expected = DataFrame(columns=["foo"], index=Index([], dtype="object"))
-
- df = DataFrame(index=Index([], dtype="object"))
- df["foo"] = Series([], dtype="object")
-
- tm.assert_frame_equal(df, expected)
-
- df = DataFrame()
- df["foo"] = Series(df.index)
-
- tm.assert_frame_equal(df, expected)
-
- df = DataFrame()
- df["foo"] = df.index
-
- tm.assert_frame_equal(df, expected)
-
- def test_partial_set_empty_frame3(self):
- expected = DataFrame(columns=["foo"], index=Index([], dtype="int64"))
- expected["foo"] = expected["foo"].astype("float64")
-
- df = DataFrame(index=Index([], dtype="int64"))
- df["foo"] = []
-
- tm.assert_frame_equal(df, expected)
-
- df = DataFrame(index=Index([], dtype="int64"))
- df["foo"] = Series(np.arange(len(df)), dtype="float64")
-
- tm.assert_frame_equal(df, expected)
-
- def test_partial_set_empty_frame4(self):
- df = DataFrame(index=Index([], dtype="int64"))
- df["foo"] = range(len(df))
-
- expected = DataFrame(columns=["foo"], index=Index([], dtype="int64"))
- # range is int-dtype-like, so we get int64 dtype
- expected["foo"] = expected["foo"].astype("int64")
- tm.assert_frame_equal(df, expected)
-
- def test_partial_set_empty_frame5(self):
- df = DataFrame()
- tm.assert_index_equal(df.columns, Index([], dtype=object))
- df2 = DataFrame()
- df2[1] = Series([1], index=["foo"])
- df.loc[:, 1] = Series([1], index=["foo"])
- tm.assert_frame_equal(df, DataFrame([[1]], index=["foo"], columns=[1]))
- tm.assert_frame_equal(df, df2)
-
- def test_partial_set_empty_frame_no_index(self):
- # no index to start
- expected = DataFrame({0: Series(1, index=range(4))}, columns=["A", "B", 0])
-
- df = DataFrame(columns=["A", "B"])
- df[0] = Series(1, index=range(4))
- df.dtypes
- str(df)
- tm.assert_frame_equal(df, expected)
-
- df = DataFrame(columns=["A", "B"])
- df.loc[:, 0] = Series(1, index=range(4))
- df.dtypes
- str(df)
- tm.assert_frame_equal(df, expected)
-
- def test_partial_set_empty_frame_row(self):
- # GH5720, GH5744
- # don't create rows when empty
- expected = DataFrame(columns=["A", "B", "New"], index=Index([], dtype="int64"))
- expected["A"] = expected["A"].astype("int64")
- expected["B"] = expected["B"].astype("float64")
- expected["New"] = expected["New"].astype("float64")
-
- df = DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
- y = df[df.A > 5]
- y["New"] = np.nan
- tm.assert_frame_equal(y, expected)
- # tm.assert_frame_equal(y,expected)
-
- expected = DataFrame(columns=["a", "b", "c c", "d"])
- expected["d"] = expected["d"].astype("int64")
- df = DataFrame(columns=["a", "b", "c c"])
- df["d"] = 3
- tm.assert_frame_equal(df, expected)
- tm.assert_series_equal(df["c c"], Series(name="c c", dtype=object))
-
- # reindex columns is ok
- df = DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
- y = df[df.A > 5]
- result = y.reindex(columns=["A", "B", "C"])
- expected = DataFrame(columns=["A", "B", "C"], index=Index([], dtype="int64"))
- expected["A"] = expected["A"].astype("int64")
- expected["B"] = expected["B"].astype("float64")
- expected["C"] = expected["C"].astype("float64")
- tm.assert_frame_equal(result, expected)
-
- def test_partial_set_empty_frame_set_series(self):
- # GH 5756
- # setting with empty Series
- df = DataFrame(Series(dtype=object))
- expected = DataFrame({0: Series(dtype=object)})
- tm.assert_frame_equal(df, expected)
-
- df = DataFrame(Series(name="foo", dtype=object))
- expected = DataFrame({"foo": Series(dtype=object)})
- tm.assert_frame_equal(df, expected)
-
- def test_partial_set_empty_frame_empty_copy_assignment(self):
- # GH 5932
- # copy on empty with assignment fails
- df = DataFrame(index=[0])
- df = df.copy()
- df["a"] = 0
- expected = DataFrame(0, index=[0], columns=["a"])
- tm.assert_frame_equal(df, expected)
-
- def test_partial_set_empty_frame_empty_consistencies(self):
- # GH 6171
- # consistency on empty frames
- df = DataFrame(columns=["x", "y"])
- df["x"] = [1, 2]
- expected = DataFrame({"x": [1, 2], "y": [np.nan, np.nan]})
- tm.assert_frame_equal(df, expected, check_dtype=False)
-
- df = DataFrame(columns=["x", "y"])
- df["x"] = ["1", "2"]
- expected = DataFrame({"x": ["1", "2"], "y": [np.nan, np.nan]}, dtype=object)
- tm.assert_frame_equal(df, expected)
-
- df = DataFrame(columns=["x", "y"])
- df.loc[0, "x"] = 1
- expected = DataFrame({"x": [1], "y": [np.nan]})
- tm.assert_frame_equal(df, expected, check_dtype=False)
-
@pytest.mark.parametrize(
"idx,labels,expected_idx",
[
@@ -584,14 +608,14 @@ def test_loc_with_list_of_strings_representing_datetimes_missing_value(
self, idx, labels
):
# GH 11278
- s = Series(range(20), index=idx)
+ ser = Series(range(20), index=idx)
df = DataFrame(range(20), index=idx)
msg = r"not in index"
with pytest.raises(KeyError, match=msg):
- s.loc[labels]
+ ser.loc[labels]
with pytest.raises(KeyError, match=msg):
- s[labels]
+ ser[labels]
with pytest.raises(KeyError, match=msg):
df.loc[labels]
@@ -628,37 +652,18 @@ def test_loc_with_list_of_strings_representing_datetimes_not_matched_type(
self, idx, labels, msg
):
# GH 11278
- s = Series(range(20), index=idx)
+ ser = Series(range(20), index=idx)
df = DataFrame(range(20), index=idx)
with pytest.raises(KeyError, match=msg):
- s.loc[labels]
+ ser.loc[labels]
with pytest.raises(KeyError, match=msg):
- s[labels]
+ ser[labels]
with pytest.raises(KeyError, match=msg):
df.loc[labels]
- def test_index_name_empty(self):
- # GH 31368
- df = DataFrame({}, index=pd.RangeIndex(0, name="df_index"))
- series = Series(1.23, index=pd.RangeIndex(4, name="series_index"))
-
- df["series"] = series
- expected = DataFrame(
- {"series": [1.23] * 4}, index=pd.RangeIndex(4, name="df_index")
- )
-
- tm.assert_frame_equal(df, expected)
-
- # GH 36527
- df = DataFrame()
- series = Series(1.23, index=pd.RangeIndex(4, name="series_index"))
- df["series"] = series
- expected = DataFrame(
- {"series": [1.23] * 4}, index=pd.RangeIndex(4, name="series_index")
- )
- tm.assert_frame_equal(df, expected)
+class TestStringSlicing:
def test_slice_irregular_datetime_index_with_nan(self):
# GH36953
index = pd.to_datetime(["2012-01-01", "2012-01-02", "2012-01-03", None])
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit) for how to run them
- [ ] whatsnew entry
url: https://api.github.com/repos/pandas-dev/pandas/pulls/44372
created_at: 2021-11-09T20:06:53Z
closed_at: 2021-11-11T17:51:02Z
merged_at: 2021-11-11T17:51:02Z
updated_at: 2021-11-11T17:53:16Z
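For context, a minimal sketch (not part of the PR itself) of the `.loc` behavior that the renamed `ser` tests above exercise: selecting a list of labels where any label is missing from a `DatetimeIndex` raises `KeyError`. The dates used here are illustrative assumptions, not values from the PR.

```python
import pandas as pd

# Illustrative index; these dates are assumptions, not taken from the PR.
idx = pd.date_range("2012-01-01", periods=3)
ser = pd.Series(range(3), index=idx)

try:
    # "2012-01-09" is absent from the index, so list-based .loc raises KeyError
    ser.loc[["2012-01-01", "2012-01-09"]]
except KeyError as exc:
    print("KeyError:", exc)
```

Since pandas 1.0, passing list-likes with any missing labels to `.loc` is an error rather than a reindex, which is what the `pytest.raises(KeyError, match=msg)` blocks in the diff assert.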