| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
DOC: Fixing more doc warnings and wrong .. code-block :: directive (space before colon) | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index d4a2945f8e3a5..3aa5a10807fd0 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -162,6 +162,14 @@ if [[ -z "$CHECK" || "$CHECK" == "patterns" ]]; then
# invgrep -R --include '*.py' -E '[[:space:]] pytest.raises' pandas/tests
# RET=$(($RET + $?)) ; echo $MSG "DONE"
+ MSG='Check for wrong space after code-block directive and before colon (".. code-block ::" instead of ".. code-block::")' ; echo $MSG
+ invgrep -R --include="*.rst" ".. code-block ::" doc/source
+ RET=$(($RET + $?)) ; echo $MSG "DONE"
+
+ MSG='Check for wrong space after ipython directive and before colon (".. ipython ::" instead of ".. ipython::")' ; echo $MSG
+ invgrep -R --include="*.rst" ".. ipython ::" doc/source
+ RET=$(($RET + $?)) ; echo $MSG "DONE"
+
MSG='Check that no file in the repo contains tailing whitespaces' ; echo $MSG
set -o pipefail
if [[ "$AZURE" == "true" ]]; then
diff --git a/doc/source/api/arrays.rst b/doc/source/api/arrays.rst
index d8ce2ab7bf73e..a727c3a2c292a 100644
--- a/doc/source/api/arrays.rst
+++ b/doc/source/api/arrays.rst
@@ -195,7 +195,7 @@ Methods
A collection of timedeltas may be stored in a :class:`TimedeltaArray`.
-.. autosumarry::
+.. autosummary::
:toctree: generated/
arrays.TimedeltaArray
diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index 424ea7370849c..94bec5c5bc83d 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -115,7 +115,7 @@ Series is ndarray-like
``Series`` acts very similarly to a ``ndarray``, and is a valid argument to most NumPy functions.
However, operations such as slicing will also slice the index.
-.. ipython :: python
+.. ipython:: python
s[0]
s[:3]
@@ -171,7 +171,7 @@ Series is dict-like
A Series is like a fixed-size dict in that you can get and set values by index
label:
-.. ipython :: python
+.. ipython:: python
s['a']
s['e'] = 12.
diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index add1a4e587240..3fe416c48f670 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -743,9 +743,9 @@ Selecting Random Samples
A random selection of rows or columns from a Series, DataFrame, or Panel with the :meth:`~DataFrame.sample` method. The method will sample rows by default, and accepts a specific number of rows/columns to return, or a fraction of rows.
-.. ipython :: python
+.. ipython:: python
- s = pd.Series([0,1,2,3,4,5])
+ s = pd.Series([0, 1, 2, 3, 4, 5])
# When no arguments are passed, returns 1 row.
s.sample()
@@ -759,9 +759,9 @@ A random selection of rows or columns from a Series, DataFrame, or Panel with th
By default, ``sample`` will return each row at most once, but one can also sample with replacement
using the ``replace`` option:
-.. ipython :: python
+.. ipython:: python
- s = pd.Series([0,1,2,3,4,5])
+ s = pd.Series([0, 1, 2, 3, 4, 5])
# Without replacement (default):
s.sample(n=6, replace=False)
@@ -774,9 +774,9 @@ By default, each row has an equal probability of being selected, but if you want
to have different probabilities, you can pass the ``sample`` function sampling weights as
``weights``. These weights can be a list, a NumPy array, or a Series, but they must be of the same length as the object you are sampling. Missing values will be treated as a weight of zero, and inf values are not allowed. If weights do not sum to 1, they will be re-normalized by dividing all weights by the sum of the weights. For example:
-.. ipython :: python
+.. ipython:: python
- s = pd.Series([0,1,2,3,4,5])
+ s = pd.Series([0, 1, 2, 3, 4, 5])
example_weights = [0, 0, 0.2, 0.2, 0.2, 0.4]
s.sample(n=3, weights=example_weights)
@@ -788,23 +788,24 @@ When applied to a DataFrame, you can use a column of the DataFrame as sampling w
(provided you are sampling rows and not columns) by simply passing the name of the column
as a string.
-.. ipython :: python
+.. ipython:: python
- df2 = pd.DataFrame({'col1':[9,8,7,6], 'weight_column':[0.5, 0.4, 0.1, 0]})
- df2.sample(n = 3, weights = 'weight_column')
+ df2 = pd.DataFrame({'col1': [9, 8, 7, 6],
+ 'weight_column': [0.5, 0.4, 0.1, 0]})
+ df2.sample(n=3, weights='weight_column')
``sample`` also allows users to sample columns instead of rows using the ``axis`` argument.
-.. ipython :: python
+.. ipython:: python
- df3 = pd.DataFrame({'col1':[1,2,3], 'col2':[2,3,4]})
+ df3 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
df3.sample(n=1, axis=1)
Finally, one can also set a seed for ``sample``'s random number generator using the ``random_state`` argument, which will accept either an integer (as a seed) or a NumPy RandomState object.
-.. ipython :: python
+.. ipython:: python
- df4 = pd.DataFrame({'col1':[1,2,3], 'col2':[2,3,4]})
+ df4 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
# With a given seed, the sample will always draw the same rows.
df4.sample(n=2, random_state=2)
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 2149ee7fb46d9..dd1cde0bdff73 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -578,7 +578,7 @@ Duplicate names parsing
If the file or header contains duplicate names, pandas will by default
distinguish between them so as to prevent overwriting data:
-.. ipython :: python
+.. ipython:: python
data = ('a,b,a\n'
'0,1,2\n'
@@ -590,7 +590,7 @@ which modifies a series of duplicate columns 'X', ..., 'X' to become
'X', 'X.1', ..., 'X.N'. If ``mangle_dupe_cols=False``, duplicate data can
arise:
-.. code-block :: python
+.. code-block:: ipython
In [2]: data = 'a,b,a\n0,1,2\n3,4,5'
In [3]: pd.read_csv(StringIO(data), mangle_dupe_cols=False)
@@ -602,7 +602,7 @@ arise:
To prevent users from encountering this problem with duplicate data, a ``ValueError``
exception is raised if ``mangle_dupe_cols != True``:
-.. code-block :: python
+.. code-block:: ipython
In [2]: data = 'a,b,a\n0,1,2\n3,4,5'
In [3]: pd.read_csv(StringIO(data), mangle_dupe_cols=False)
diff --git a/doc/source/whatsnew/v0.12.0.rst b/doc/source/whatsnew/v0.12.0.rst
index 4413fc5cec2a9..b2dd8229c91f3 100644
--- a/doc/source/whatsnew/v0.12.0.rst
+++ b/doc/source/whatsnew/v0.12.0.rst
@@ -191,7 +191,7 @@ I/O Enhancements
You can use ``pd.read_html()`` to read the output from ``DataFrame.to_html()`` like so
- .. ipython :: python
+ .. ipython:: python
:okwarning:
df = pd.DataFrame({'a': range(3), 'b': list('abc')})
@@ -296,7 +296,7 @@ Other Enhancements
For example you can do
- .. ipython :: python
+ .. ipython:: python
df = pd.DataFrame({'a': list('ab..'), 'b': [1, 2, 3, 4]})
df.replace(regex=r'\s*\.\s*', value=np.nan)
@@ -306,7 +306,7 @@ Other Enhancements
Regular string replacement still works as expected. For example, you can do
- .. ipython :: python
+ .. ipython:: python
df.replace('.', np.nan)
diff --git a/doc/source/whatsnew/v0.15.0.rst b/doc/source/whatsnew/v0.15.0.rst
index cd0a7b0e3c339..7b9a8ba082411 100644
--- a/doc/source/whatsnew/v0.15.0.rst
+++ b/doc/source/whatsnew/v0.15.0.rst
@@ -1015,7 +1015,7 @@ Other:
.. ipython:: python
- business_dates = date_range(start='4/1/2014', end='6/30/2014', freq='B')
+ business_dates = pd.date_range(start='4/1/2014', end='6/30/2014', freq='B')
df = pd.DataFrame(1, index=business_dates, columns=['a', 'b'])
# get the first, 4th, and last date index for each month
df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
diff --git a/doc/source/whatsnew/v0.16.0.rst b/doc/source/whatsnew/v0.16.0.rst
index 7ae17febe168d..f082bf656f23c 100644
--- a/doc/source/whatsnew/v0.16.0.rst
+++ b/doc/source/whatsnew/v0.16.0.rst
@@ -51,7 +51,7 @@ to be inserted (for example, a ``Series`` or NumPy array), or a function
of one argument to be called on the ``DataFrame``. The new values are inserted,
and the entire DataFrame (with all original and new columns) is returned.
-.. ipython :: python
+.. ipython:: python
iris = pd.read_csv('data/iris.data')
iris.head()
@@ -61,10 +61,10 @@ and the entire DataFrame (with all original and new columns) is returned.
Above was an example of inserting a precomputed value. We can also pass in
a function to be evaluated.
-.. ipython :: python
+.. ipython:: python
- iris.assign(sepal_ratio = lambda x: (x['SepalWidth'] /
- x['SepalLength'])).head()
+ iris.assign(sepal_ratio=lambda x: (x['SepalWidth']
+ / x['SepalLength'])).head()
The power of ``assign`` comes when used in chains of operations. For example,
we can limit the DataFrame to just those with a Sepal Length greater than 5,
diff --git a/doc/source/whatsnew/v0.16.1.rst b/doc/source/whatsnew/v0.16.1.rst
index cfd7218e11157..7621cb9c1e27c 100644
--- a/doc/source/whatsnew/v0.16.1.rst
+++ b/doc/source/whatsnew/v0.16.1.rst
@@ -181,9 +181,9 @@ total number or rows or columns. It also has options for sampling with or withou
for passing in a column for weights for non-uniform sampling, and for setting seed values to
facilitate replication. (:issue:`2419`)
-.. ipython :: python
+.. ipython:: python
- example_series = Series([0,1,2,3,4,5])
+ example_series = pd.Series([0, 1, 2, 3, 4, 5])
# When no arguments are passed, returns 1
example_series.sample()
@@ -207,9 +207,10 @@ facilitate replication. (:issue:`2419`)
When applied to a DataFrame, one may pass the name of a column to specify sampling weights
when sampling from rows.
-.. ipython :: python
+.. ipython:: python
- df = DataFrame({'col1':[9,8,7,6], 'weight_column':[0.5, 0.4, 0.1, 0]})
+ df = pd.DataFrame({'col1': [9, 8, 7, 6],
+ 'weight_column': [0.5, 0.4, 0.1, 0]})
df.sample(n=3, weights='weight_column')
diff --git a/doc/source/whatsnew/v0.17.0.rst b/doc/source/whatsnew/v0.17.0.rst
index 4884d99d8fc91..6bde4f1b9cf99 100644
--- a/doc/source/whatsnew/v0.17.0.rst
+++ b/doc/source/whatsnew/v0.17.0.rst
@@ -84,9 +84,9 @@ The new implementation allows for having a single-timezone across all rows, with
.. ipython:: python
- df = DataFrame({'A': date_range('20130101', periods=3),
- 'B': date_range('20130101', periods=3, tz='US/Eastern'),
- 'C': date_range('20130101', periods=3, tz='CET')})
+ df = pd.DataFrame({'A': pd.date_range('20130101', periods=3),
+ 'B': pd.date_range('20130101', periods=3, tz='US/Eastern'),
+ 'C': pd.date_range('20130101', periods=3, tz='CET')})
df
df.dtypes
@@ -442,17 +442,18 @@ Other enhancements
- Added a ``DataFrame.round`` method to round the values to a variable number of decimal places (:issue:`10568`).
- .. ipython :: python
+ .. ipython:: python
- df = pd.DataFrame(np.random.random([3, 3]), columns=['A', 'B', 'C'],
- index=['first', 'second', 'third'])
+ df = pd.DataFrame(np.random.random([3, 3]),
+ columns=['A', 'B', 'C'],
+ index=['first', 'second', 'third'])
df
df.round(2)
df.round({'A': 0, 'C': 2})
- ``drop_duplicates`` and ``duplicated`` now accept a ``keep`` keyword to target first, last, and all duplicates. The ``take_last`` keyword is deprecated, see :ref:`here <whatsnew_0170.deprecations>` (:issue:`6511`, :issue:`8505`)
- .. ipython :: python
+ .. ipython:: python
s = pd.Series(['A', 'B', 'C', 'A', 'B', 'D'])
s.drop_duplicates()
@@ -630,13 +631,13 @@ Of course you can coerce this as well.
.. ipython:: python
- to_datetime(['2009-07-31', 'asd'], errors='coerce')
+ pd.to_datetime(['2009-07-31', 'asd'], errors='coerce')
To keep the previous behavior, you can use ``errors='ignore'``:
.. ipython:: python
- to_datetime(['2009-07-31', 'asd'], errors='ignore')
+ pd.to_datetime(['2009-07-31', 'asd'], errors='ignore')
Furthermore, ``pd.to_timedelta`` has gained a similar API, of ``errors='raise'|'ignore'|'coerce'``, and the ``coerce`` keyword
has been deprecated in favor of ``errors='coerce'``.
@@ -655,13 +656,13 @@ Previous Behavior:
.. code-block:: ipython
- In [1]: Timestamp('2012Q2')
+ In [1]: pd.Timestamp('2012Q2')
Traceback
...
ValueError: Unable to parse 2012Q2
# Results in today's date.
- In [2]: Timestamp('2014')
+ In [2]: pd.Timestamp('2014')
Out [2]: 2014-08-12 00:00:00
v0.17.0 can parse them as below. It works on ``DatetimeIndex`` also.
@@ -670,9 +671,9 @@ New Behavior:
.. ipython:: python
- Timestamp('2012Q2')
- Timestamp('2014')
- DatetimeIndex(['2012Q2', '2014'])
+ pd.Timestamp('2012Q2')
+ pd.Timestamp('2014')
+ pd.DatetimeIndex(['2012Q2', '2014'])
.. note::
@@ -681,8 +682,8 @@ New Behavior:
.. ipython:: python
import pandas.tseries.offsets as offsets
- Timestamp.now()
- Timestamp.now() + offsets.DateOffset(years=1)
+ pd.Timestamp.now()
+ pd.Timestamp.now() + offsets.DateOffset(years=1)
Changes to Index Comparisons
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -739,7 +740,7 @@ Boolean comparisons of a ``Series`` vs ``None`` will now be equivalent to compar
.. ipython:: python
- s = Series(range(3))
+ s = pd.Series(range(3))
s.iloc[1] = None
s
@@ -807,11 +808,6 @@ Previous Behavior:
New Behavior:
-.. ipython:: python
- :suppress:
-
- import os
-
.. ipython:: python
df_with_missing.to_hdf('file.h5',
@@ -824,6 +820,7 @@ New Behavior:
.. ipython:: python
:suppress:
+ import os
os.remove('file.h5')
See the :ref:`docs <io.hdf5>` for more details.
@@ -876,7 +873,7 @@ Changes to ``Categorical.unique``
- unordered category: values and categories are sorted by appearance order.
- ordered category: values are sorted by appearance order, categories keep existing order.
-.. ipython :: python
+.. ipython:: python
cat = pd.Categorical(['C', 'A', 'B', 'C'],
categories=['A', 'B', 'C'],
@@ -899,7 +896,7 @@ an integer, resulting in ``header=0`` for ``False`` and ``header=1`` for ``True`
A ``bool`` input to ``header`` will now raise a ``TypeError``
-.. code-block :: python
+.. code-block:: ipython
In [29]: df = pd.read_csv('data.csv', header=False)
TypeError: Passing a bool to header is invalid. Use header=None for no header or
@@ -984,10 +981,12 @@ Removal of prior version deprecations/changes
- Removal of ``colSpace`` parameter from ``DataFrame.to_string()``, in favor of ``col_space``, circa 0.8.0 version.
- Removal of automatic time-series broadcasting (:issue:`2304`)
- .. ipython :: python
+ .. ipython:: python
np.random.seed(1234)
- df = DataFrame(np.random.randn(5,2),columns=list('AB'),index=date_range('20130101',periods=5))
+ df = DataFrame(np.random.randn(5, 2),
+ columns=list('AB'),
+ index=date_range('20130101', periods=5))
df
Previously
@@ -1008,9 +1007,9 @@ Removal of prior version deprecations/changes
Current
- .. ipython :: python
+ .. ipython:: python
- df.add(df.A,axis='index')
+ df.add(df.A, axis='index')
- Remove ``table`` keyword in ``HDFStore.put/append``, in favor of using ``format=`` (:issue:`4645`)
diff --git a/doc/source/whatsnew/v0.17.1.rst b/doc/source/whatsnew/v0.17.1.rst
index ddde96c9f598d..233414dae957d 100644
--- a/doc/source/whatsnew/v0.17.1.rst
+++ b/doc/source/whatsnew/v0.17.1.rst
@@ -56,7 +56,7 @@ Here's a quick example:
.. ipython:: python
np.random.seed(123)
- df = DataFrame(np.random.randn(10, 5), columns=list('abcde'))
+ df = pd.DataFrame(np.random.randn(10, 5), columns=list('abcde'))
html = df.style.background_gradient(cmap='viridis', low=.5)
We can render the HTML to get the following table.
@@ -84,7 +84,7 @@ Enhancements
.. ipython:: python
- df = DataFrame({'A': ['foo'] * 1000}) # noqa: F821
+ df = pd.DataFrame({'A': ['foo'] * 1000}) # noqa: F821
df['B'] = df['A'].astype('category')
# shows the '+' as we have object dtypes
diff --git a/doc/source/whatsnew/v0.18.0.rst b/doc/source/whatsnew/v0.18.0.rst
index e9d4891df70c5..9ff6ad7188f5a 100644
--- a/doc/source/whatsnew/v0.18.0.rst
+++ b/doc/source/whatsnew/v0.18.0.rst
@@ -324,7 +324,7 @@ Timedeltas
.. ipython:: python
- t = timedelta_range('1 days 2 hr 13 min 45 us', periods=3, freq='d')
+ t = pd.timedelta_range('1 days 2 hr 13 min 45 us', periods=3, freq='d')
t
t.round('10min')
@@ -810,8 +810,8 @@ performed with the ``Resampler`` objects with :meth:`~Resampler.backfill`,
.. ipython:: python
- s = pd.Series(np.arange(5,dtype='int64'),
- index=date_range('2010-01-01', periods=5, freq='Q'))
+ s = pd.Series(np.arange(5, dtype='int64'),
+ index=pd.date_range('2010-01-01', periods=5, freq='Q'))
s
Previously
diff --git a/doc/source/whatsnew/v0.19.0.rst b/doc/source/whatsnew/v0.19.0.rst
index 38208e9ff4cba..00d0d202d56cc 100644
--- a/doc/source/whatsnew/v0.19.0.rst
+++ b/doc/source/whatsnew/v0.19.0.rst
@@ -1160,7 +1160,7 @@ from ``n`` for the second, and so on, so that, when concatenated, they are ident
the result of calling :func:`read_csv` without the ``chunksize=`` argument
(:issue:`12185`).
-.. ipython :: python
+.. ipython:: python
data = 'A,B\n0,1\n2,3\n4,5\n6,7'
@@ -1178,7 +1178,7 @@ the result of calling :func:`read_csv` without the ``chunksize=`` argument
**New behavior**:
-.. ipython :: python
+.. ipython:: python
pd.concat(pd.read_csv(StringIO(data), chunksize=2))
| Using `.. code-block ::` with a space before the colon wasn't making the blocks be validated for flake8 issues. Same for `ipython` directive.
Making sure the space is not present, and fixing flake8 errors. | https://api.github.com/repos/pandas-dev/pandas/pulls/24650 | 2019-01-06T01:30:26Z | 2019-01-06T16:03:49Z | 2019-01-06T16:03:48Z | 2019-01-07T14:39:46Z |
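The `invgrep` checks added to `ci/code_checks.sh` above can be mimicked outside the pandas CI; a minimal Python sketch of the same idea, scanning `.rst` files for directives with a stray space before the double colon (the directory layout and directive list here are illustrative, not taken from the PR):

```python
import tempfile
from pathlib import Path

# Directives that must not have a space before the double colon,
# mirroring the two patterns the CI check greps for.
BAD_DIRECTIVES = [".. code-block ::", ".. ipython ::"]

def find_bad_directives(root: Path) -> list[tuple[Path, int, str]]:
    """Return (file, line number, line) for every offending .rst line."""
    hits = []
    for rst in root.rglob("*.rst"):
        for lineno, line in enumerate(rst.read_text().splitlines(), start=1):
            if any(bad in line for bad in BAD_DIRECTIVES):
                hits.append((rst, lineno, line))
    return hits

# Tiny self-contained demo: one correct directive, one offending one.
root = Path(tempfile.mkdtemp())
(root / "good.rst").write_text(".. code-block:: python\n\n   print('ok')\n")
(root / "bad.rst").write_text(".. ipython :: python\n\n   s[0]\n")

for path, lineno, line in find_bad_directives(root):
    print(f"{path.name}:{lineno}: {line}")
```

Like the CI check, a non-empty result would be treated as a failure; the real script accumulates the grep exit status into `RET` instead of printing matches.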
DOC: whatsnew & linked edits | diff --git a/doc/source/advanced.rst b/doc/source/advanced.rst
index 9981310b4a6fb..68f17a68784c9 100644
--- a/doc/source/advanced.rst
+++ b/doc/source/advanced.rst
@@ -921,7 +921,7 @@ If you need integer based selection, you should use ``iloc``:
dfir.iloc[0:5]
-.. _advanced.intervallindex:
+.. _advanced.intervalindex:
IntervalIndex
~~~~~~~~~~~~~
diff --git a/doc/source/api/arrays.rst b/doc/source/api/arrays.rst
index d8ce2ab7bf73e..d8724e55980b9 100644
--- a/doc/source/api/arrays.rst
+++ b/doc/source/api/arrays.rst
@@ -330,13 +330,13 @@ a :class:`pandas.api.types.CategoricalDtype`.
:toctree: generated/
:template: autosummary/class_without_autosummary.rst
- api.types.CategoricalDtype
+ CategoricalDtype
.. autosummary::
:toctree: generated/
- api.types.CategoricalDtype.categories
- api.types.CategoricalDtype.ordered
+ CategoricalDtype.categories
+ CategoricalDtype.ordered
Categorical data can be stored in a :class:`pandas.Categorical`
diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 13681485d2f69..7c06288c01221 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -64,7 +64,7 @@ NumPy's type system to add support for custom arrays
(see :ref:`basics.dtypes`).
To get the actual data inside a :class:`Index` or :class:`Series`, use
-the **array** property
+the ``.array`` property
.. ipython:: python
@@ -72,11 +72,11 @@ the **array** property
s.index.array
:attr:`~Series.array` will always be an :class:`~pandas.api.extensions.ExtensionArray`.
-The exact details of what an ``ExtensionArray`` is and why pandas uses them is a bit
+The exact details of what an :class:`~pandas.api.extensions.ExtensionArray` is and why pandas uses them is a bit
beyond the scope of this introduction. See :ref:`basics.dtypes` for more.
If you know you need a NumPy array, use :meth:`~Series.to_numpy`
-or :meth:`numpy.asarray`.
+or :func:`numpy.asarray`.
.. ipython:: python
@@ -84,17 +84,17 @@ or :meth:`numpy.asarray`.
np.asarray(s)
When the Series or Index is backed by
-an :class:`~pandas.api.extension.ExtensionArray`, :meth:`~Series.to_numpy`
+an :class:`~pandas.api.extensions.ExtensionArray`, :meth:`~Series.to_numpy`
may involve copying data and coercing values. See :ref:`basics.dtypes` for more.
:meth:`~Series.to_numpy` gives some control over the ``dtype`` of the
-resulting :class:`ndarray`. For example, consider datetimes with timezones.
+resulting :class:`numpy.ndarray`. For example, consider datetimes with timezones.
NumPy doesn't have a dtype to represent timezone-aware datetimes, so there
are two possibly useful representations:
-1. An object-dtype :class:`ndarray` with :class:`Timestamp` objects, each
+1. An object-dtype :class:`numpy.ndarray` with :class:`Timestamp` objects, each
with the correct ``tz``
-2. A ``datetime64[ns]`` -dtype :class:`ndarray`, where the values have
+2. A ``datetime64[ns]`` -dtype :class:`numpy.ndarray`, where the values have
been converted to UTC and the timezone discarded
Timezones may be preserved with ``dtype=object``
@@ -106,6 +106,8 @@ Timezones may be preserved with ``dtype=object``
Or thrown away with ``dtype='datetime64[ns]'``
+.. ipython:: python
+
ser.to_numpy(dtype="datetime64[ns]")
Getting the "raw data" inside a :class:`DataFrame` is possibly a bit more
@@ -137,7 +139,7 @@ drawbacks:
1. When your Series contains an :ref:`extension type <extending.extension-types>`, it's
unclear whether :attr:`Series.values` returns a NumPy array or the extension array.
- :attr:`Series.array` will always return an ``ExtensionArray``, and will never
+ :attr:`Series.array` will always return an :class:`~pandas.api.extensions.ExtensionArray`, and will never
copy data. :meth:`Series.to_numpy` will always return a NumPy array,
potentially at the cost of copying / coercing values.
2. When your DataFrame contains a mixture of data types, :attr:`DataFrame.values` may
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 7fa386935e3f4..46a6a6da9da3a 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -164,6 +164,9 @@ See :ref:`integer_na` for more.
.. _whatsnew_0240.enhancements.array:
+Array
+^^^^^
+
A new top-level method :func:`array` has been added for creating 1-dimensional arrays (:issue:`22860`).
This can be used to create any :ref:`extension array <extending.extension-types>`, including
extension arrays registered by :ref:`3rd party libraries <ecosystem.extensions>`. See
@@ -579,6 +582,41 @@ You must pass in the ``line_terminator`` explicitly, even in this case.
...: print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
+.. _whatsnew_0240.bug_fixes.nan_with_str_dtype:
+
+Proper handling of `np.NaN` in a string data-typed column with the Python engine
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There was bug in :func:`read_excel` and :func:`read_csv` with the Python
+engine, where missing values turned to ``'nan'`` with ``dtype=str`` and
+``na_filter=True``. Now, these missing values are converted to the string
+missing indicator, ``np.nan``. (:issue:`20377`)
+
+.. ipython:: python
+ :suppress:
+
+ from pandas.compat import StringIO
+
+*Previous Behavior*:
+
+.. code-block:: ipython
+
+ In [5]: data = 'a,b,c\n1,,3\n4,5,6'
+ In [6]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
+ In [7]: df.loc[0, 'b']
+ Out[7]:
+ 'nan'
+
+*New Behavior*:
+
+.. ipython:: python
+
+ data = 'a,b,c\n1,,3\n4,5,6'
+ df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
+ df.loc[0, 'b']
+
+Notice how we now output ``np.nan`` itself instead of a stringified form of it.
+
.. _whatsnew_0240.api.timezone_offset_parsing:
Parsing Datetime Strings with Timezone Offsets
@@ -677,6 +715,9 @@ is the case with :attr:`Period.end_time`, for example
.. _whatsnew_0240.api_breaking.datetime_unique:
+Datetime w/tz and unique
+^^^^^^^^^^^^^^^^^^^^^^^^
+
The return type of :meth:`Series.unique` for datetime with timezone values has changed
from an :class:`numpy.ndarray` of :class:`Timestamp` objects to a :class:`arrays.DatetimeArray` (:issue:`24024`).
@@ -852,12 +893,6 @@ Period Subtraction
Subtraction of a ``Period`` from another ``Period`` will give a ``DateOffset``.
instead of an integer (:issue:`21314`)
-.. ipython:: python
-
- june = pd.Period('June 2018')
- april = pd.Period('April 2018')
- june - april
-
*Previous Behavior*:
.. code-block:: ipython
@@ -869,13 +904,16 @@ instead of an integer (:issue:`21314`)
In [4]: june - april
Out [4]: 2
-Similarly, subtraction of a ``Period`` from a ``PeriodIndex`` will now return
-an ``Index`` of ``DateOffset`` objects instead of an ``Int64Index``
+*New Behavior*:
.. ipython:: python
- pi = pd.period_range('June 2018', freq='M', periods=3)
- pi - pi[0]
+ june = pd.Period('June 2018')
+ april = pd.Period('April 2018')
+ june - april
+
+Similarly, subtraction of a ``Period`` from a ``PeriodIndex`` will now return
+an ``Index`` of ``DateOffset`` objects instead of an ``Int64Index``
*Previous Behavior*:
@@ -886,6 +924,13 @@ an ``Index`` of ``DateOffset`` objects instead of an ``Int64Index``
In [3]: pi - pi[0]
Out[3]: Int64Index([0, 1, 2], dtype='int64')
+*New Behavior*:
+
+.. ipython:: python
+
+ pi = pd.period_range('June 2018', freq='M', periods=3)
+ pi - pi[0]
+
.. _whatsnew_0240.api.timedelta64_subtract_nan:
@@ -902,12 +947,6 @@ all-``NaT``. This is for compatibility with ``TimedeltaIndex`` and
df = pd.DataFrame([pd.Timedelta(days=1)])
df
-.. code-block:: ipython
-
- In [2]: df - np.nan
- ...
- TypeError: unsupported operand type(s) for -: 'TimedeltaIndex' and 'float'
-
*Previous Behavior*:
.. code-block:: ipython
@@ -919,6 +958,14 @@ all-``NaT``. This is for compatibility with ``TimedeltaIndex`` and
0
0 NaT
+*New Behavior*:
+
+.. code-block:: ipython
+
+ In [2]: df - np.nan
+ ...
+ TypeError: unsupported operand type(s) for -: 'TimedeltaIndex' and 'float'
+
.. _whatsnew_0240.api.dataframe_cmp_broadcasting:
DataFrame Comparison Operations Broadcasting Changes
@@ -935,13 +982,16 @@ The affected cases are:
- a list or tuple with length matching the number of rows in the :class:`DataFrame` will now raise ``ValueError`` instead of operating column-by-column (:issue:`22880`.
- a list or tuple with length matching the number of columns in the :class:`DataFrame` will now operate row-by-row instead of raising ``ValueError`` (:issue:`22880`).
+.. ipython:: python
+
+ arr = np.arange(6).reshape(3, 2)
+ df = pd.DataFrame(arr)
+ df
+
*Previous Behavior*:
.. code-block:: ipython
- In [3]: arr = np.arange(6).reshape(3, 2)
- In [4]: df = pd.DataFrame(arr)
-
In [5]: df == arr[[0], :]
...: # comparison previously broadcast where arithmetic would raise
Out[5]:
@@ -979,13 +1029,6 @@ The affected cases are:
*New Behavior*:
-.. ipython:: python
- :okexcept:
-
- arr = np.arange(6).reshape(3, 2)
- df = pd.DataFrame(arr)
- df
-
.. ipython:: python
# Comparison operations and arithmetic operations both broadcast.
@@ -1018,12 +1061,16 @@ DataFrame Arithmetic Operations Broadcasting Changes
``np.ndarray`` objects now broadcast in the same way as ``np.ndarray``
broadcast. (:issue:`23000`)
+.. ipython:: python
+
+ arr = np.arange(6).reshape(3, 2)
+ df = pd.DataFrame(arr)
+ df
+
*Previous Behavior*:
.. code-block:: ipython
- In [3]: arr = np.arange(6).reshape(3, 2)
- In [4]: df = pd.DataFrame(arr)
In [5]: df + arr[[0], :] # 1 row, 2 columns
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
@@ -1033,12 +1080,6 @@ broadcast. (:issue:`23000`)
*New Behavior*:
-.. ipython:: python
-
- arr = np.arange(6).reshape(3, 2)
- df = pd.DataFrame(arr)
- df
-
.. ipython:: python
df + arr[[0], :] # 1 row, 2 columns
@@ -1050,41 +1091,50 @@ broadcast. (:issue:`23000`)
ExtensionType Changes
^^^^^^^^^^^^^^^^^^^^^
-:class:`pandas.api.extensions.ExtensionDtype` **Equality and Hashability**
+ **Equality and Hashability**
Pandas now requires that extension dtypes be hashable. The base class implements
a default ``__eq__`` and ``__hash__``. If you have a parametrized dtype, you should
update the ``ExtensionDtype._metadata`` tuple to match the signature of your
``__init__`` method. See :class:`pandas.api.extensions.ExtensionDtype` for more (:issue:`22476`).
-**Other changes**
+**Reshaping changes**
- :meth:`~pandas.api.types.ExtensionArray.dropna` has been added (:issue:`21185`)
- :meth:`~pandas.api.types.ExtensionArray.repeat` has been added (:issue:`24349`)
+- The ``ExtensionArray`` constructor, ``_from_sequence`` now take the keyword arg ``copy=False`` (:issue:`21185`)
+- :meth:`pandas.api.extensions.ExtensionArray.shift` added as part of the basic ``ExtensionArray`` interface (:issue:`22387`).
+- :meth:`~pandas.api.types.ExtensionArray.searchsorted` has been added (:issue:`24350`)
+- Support for reduction operations such as ``sum``, ``mean`` via opt-in base class method override (:issue:`22762`)
+- :func:`ExtensionArray.isna` is allowed to return an ``ExtensionArray`` (:issue:`22325`).
+
+**Dtype changes**
+
- ``ExtensionDtype`` has gained the ability to instantiate from string dtypes, e.g. ``decimal`` would instantiate a registered ``DecimalDtype``; furthermore
the ``ExtensionDtype`` has gained the method ``construct_array_type`` (:issue:`21185`)
-- :meth:`~pandas.api.types.ExtensionArray.searchsorted` has been added (:issue:`24350`)
-- An ``ExtensionArray`` with a boolean dtype now works correctly as a boolean indexer. :meth:`pandas.api.types.is_bool_dtype` now properly considers them boolean (:issue:`22326`)
- Added ``ExtensionDtype._is_numeric`` for controlling whether an extension dtype is considered numeric (:issue:`22290`).
-- The ``ExtensionArray`` constructor, ``_from_sequence`` now take the keyword arg ``copy=False`` (:issue:`21185`)
+- Added :meth:`pandas.api.types.register_extension_dtype` to register an extension type with pandas (:issue:`22664`)
+- Updated the ``.type`` attribute for ``PeriodDtype``, ``DatetimeTZDtype``, and ``IntervalDtype`` to be instances of the dtype (``Period``, ``Timestamp``, and ``Interval`` respectively) (:issue:`22938`)
+
+**Other changes**
+
+- A default repr for :class:`pandas.api.extensions.ExtensionArray` is now provided (:issue:`23601`).
+- An ``ExtensionArray`` with a boolean dtype now works correctly as a boolean indexer. :meth:`pandas.api.types.is_bool_dtype` now properly considers them boolean (:issue:`22326`)
+
+**Bug Fixes**
+
- Bug in :meth:`Series.get` for ``Series`` using ``ExtensionArray`` and integer index (:issue:`21257`)
-- :meth:`pandas.api.extensions.ExtensionArray.shift` added as part of the basic ``ExtensionArray`` interface (:issue:`22387`).
- :meth:`~Series.shift` now dispatches to :meth:`ExtensionArray.shift` (:issue:`22386`)
- :meth:`Series.combine()` works correctly with :class:`~pandas.api.extensions.ExtensionArray` inside of :class:`Series` (:issue:`20825`)
- :meth:`Series.combine()` with scalar argument now works for any function type (:issue:`21248`)
- :meth:`Series.astype` and :meth:`DataFrame.astype` now dispatch to :meth:`ExtensionArray.astype` (:issue:`21185:`).
- Slicing a single row of a ``DataFrame`` with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (:issue:`22784`)
-- Added :meth:`pandas.api.types.register_extension_dtype` to register an extension type with pandas (:issue:`22664`)
- Bug when concatenating multiple ``Series`` with different extension dtypes not casting to object dtype (:issue:`22994`)
- Series backed by an ``ExtensionArray`` now work with :func:`util.hash_pandas_object` (:issue:`23066`)
-- Updated the ``.type`` attribute for ``PeriodDtype``, ``DatetimeTZDtype``, and ``IntervalDtype`` to be instances of the dtype (``Period``, ``Timestamp``, and ``Interval`` respectively) (:issue:`22938`)
-- :func:`ExtensionArray.isna` is allowed to return an ``ExtensionArray`` (:issue:`22325`).
-- Support for reduction operations such as ``sum``, ``mean`` via opt-in base class method override (:issue:`22762`)
- :meth:`DataFrame.stack` no longer converts to object dtype for DataFrames where each column has the same extension dtype. The output Series will have the same dtype as the columns (:issue:`23077`).
- :meth:`Series.unstack` and :meth:`DataFrame.unstack` no longer convert extension arrays to object-dtype ndarrays. Each column in the output ``DataFrame`` will now have the same dtype as the input (:issue:`23077`).
- Bug when grouping :meth:`Dataframe.groupby()` and aggregating on ``ExtensionArray`` it was not returning the actual ``ExtensionArray`` dtype (:issue:`23227`).
- Bug in :func:`pandas.merge` when merging on an extension array-backed column (:issue:`23020`).
-- A default repr for :class:`pandas.api.extensions.ExtensionArray` is now provided (:issue:`23601`).
.. _whatsnew_0240.api.incompatibilities:
@@ -1184,19 +1234,18 @@ Datetimelike API Changes
- :class:`PeriodIndex` subtraction of another ``PeriodIndex`` will now return an object-dtype :class:`Index` of :class:`DateOffset` objects instead of raising a ``TypeError`` (:issue:`20049`)
- :func:`cut` and :func:`qcut` now returns a :class:`DatetimeIndex` or :class:`TimedeltaIndex` bins when the input is datetime or timedelta dtype respectively and ``retbins=True`` (:issue:`19891`)
- :meth:`DatetimeIndex.to_period` and :meth:`Timestamp.to_period` will issue a warning when timezone information will be lost (:issue:`21333`)
+- :class:`DatetimeIndex` now accepts :class:`Int64Index` arguments as epoch timestamps (:issue:`20997`)
+- :meth:`PeriodIndex.tz_convert` and :meth:`PeriodIndex.tz_localize` have been removed (:issue:`21781`)
.. _whatsnew_0240.api.other:
Other API Changes
^^^^^^^^^^^^^^^^^
-- :class:`DatetimeIndex` now accepts :class:`Int64Index` arguments as epoch timestamps (:issue:`20997`)
- Accessing a level of a ``MultiIndex`` with a duplicate name (e.g. in
- :meth:`~MultiIndex.get_level_values`) now raises a ``ValueError`` instead of
- a ``KeyError`` (:issue:`21678`).
+ :meth:`~MultiIndex.get_level_values`) now raises a ``ValueError`` instead of a ``KeyError`` (:issue:`21678`).
- Invalid construction of ``IntervalDtype`` will now always raise a ``TypeError`` rather than a ``ValueError`` if the subdtype is invalid (:issue:`21185`)
- Trying to reindex a ``DataFrame`` with a non unique ``MultiIndex`` now raises a ``ValueError`` instead of an ``Exception`` (:issue:`21770`)
-- :meth:`PeriodIndex.tz_convert` and :meth:`PeriodIndex.tz_localize` have been removed (:issue:`21781`)
- :class:`Index` subtraction will attempt to operate element-wise instead of raising ``TypeError`` (:issue:`19369`)
- :class:`pandas.io.formats.style.Styler` supports a ``number-format`` property when using :meth:`~pandas.io.formats.style.Styler.to_excel` (:issue:`22015`)
- :meth:`DataFrame.corr` and :meth:`Series.corr` now raise a ``ValueError`` along with a helpful error message instead of a ``KeyError`` when supplied with an invalid method (:issue:`22298`)
@@ -1432,13 +1481,6 @@ Performance Improvements
- Improved performance of :class:`Period` constructor, additionally benefitting ``PeriodArray`` and ``PeriodIndex`` creation (:issue:`24084` and :issue:`24118`)
- Improved performance of tz-aware :class:`DatetimeArray` binary operations (:issue:`24491`)
-.. _whatsnew_0240.docs:
-
-Documentation Changes
-~~~~~~~~~~~~~~~~~~~~~
-
--
-
.. _whatsnew_0240.bug_fixes:
Bug Fixes
@@ -1658,44 +1700,6 @@ MultiIndex
I/O
^^^
-- Bug where integer categorical data would be formatted as floats if ``NaN`` values were present (:issue:`19214`)
-
-
-.. _whatsnew_0240.bug_fixes.nan_with_str_dtype:
-
-Proper handling of `np.NaN` in a string data-typed column with the Python engine
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-There was bug in :func:`read_excel` and :func:`read_csv` with the Python
-engine, where missing values turned to ``'nan'`` with ``dtype=str`` and
-``na_filter=True``. Now, these missing values are converted to the string
-missing indicator, ``np.nan``. (:issue:`20377`)
-
-.. ipython:: python
- :suppress:
-
- from pandas.compat import StringIO
-
-*Previous Behavior*:
-
-.. code-block:: ipython
-
- In [5]: data = 'a,b,c\n1,,3\n4,5,6'
- In [6]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
- In [7]: df.loc[0, 'b']
- Out[7]:
- 'nan'
-
-*New Behavior*:
-
-.. ipython:: python
-
- data = 'a,b,c\n1,,3\n4,5,6'
- df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
- df.loc[0, 'b']
-
-Notice how we now instead output ``np.nan`` itself instead of a stringified form of it.
-
- Bug in :func:`read_csv` in which a column specified with ``CategoricalDtype`` of boolean categories was not being correctly coerced from string values to booleans (:issue:`20498`)
- Bug in :meth:`DataFrame.to_sql` when writing timezone aware data (``datetime64[ns, tz]`` dtype) would raise a ``TypeError`` (:issue:`9086`)
- Bug in :meth:`DataFrame.to_sql` where a naive :class:`DatetimeIndex` would be written as ``TIMESTAMP WITH TIMEZONE`` type in supported databases, e.g. PostgreSQL (:issue:`23510`)
@@ -1711,11 +1715,11 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form
- :func:`read_sas()` will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (:issue:`16615`)
- Bug in :func:`read_sas()` in which an incorrect error was raised on an invalid file format. (:issue:`24548`)
- Bug in :meth:`detect_client_encoding` where potential ``IOError`` goes unhandled when importing in a mod_wsgi process due to restricted access to stdout. (:issue:`21552`)
-- Bug in :func:`to_html()` with ``index=False`` misses truncation indicators (...) on truncated DataFrame (:issue:`15019`, :issue:`22783`)
-- Bug in :func:`to_html()` with ``index=False`` when both columns and row index are ``MultiIndex`` (:issue:`22579`)
-- Bug in :func:`to_html()` with ``index_names=False`` displaying index name (:issue:`22747`)
-- Bug in :func:`to_html()` with ``header=False`` not displaying row index names (:issue:`23788`)
-- Bug in :func:`to_html()` with ``sparsify=False`` that caused it to raise ``TypeError`` (:issue:`22887`)
+- Bug in :func:`DataFrame.to_html()` with ``index=False`` misses truncation indicators (...) on truncated DataFrame (:issue:`15019`, :issue:`22783`)
+- Bug in :func:`DataFrame.to_html()` with ``index=False`` when both columns and row index are ``MultiIndex`` (:issue:`22579`)
+- Bug in :func:`DataFrame.to_html()` with ``index_names=False`` displaying index name (:issue:`22747`)
+- Bug in :func:`DataFrame.to_html()` with ``header=False`` not displaying row index names (:issue:`23788`)
+- Bug in :func:`DataFrame.to_html()` with ``sparsify=False`` that caused it to raise ``TypeError`` (:issue:`22887`)
- Bug in :func:`DataFrame.to_string()` that broke column alignment when ``index=False`` and width of first column's values is greater than the width of first column's header (:issue:`16839`, :issue:`13032`)
- Bug in :func:`DataFrame.to_string()` that caused representations of :class:`DataFrame` to not take up the whole window (:issue:`22984`)
- Bug in :func:`DataFrame.to_csv` where a single level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (:issue:`19589`).
@@ -1838,7 +1842,6 @@ Other
^^^^^
- Bug where C variables were declared with external linkage causing import errors if certain other C libraries were imported before Pandas. (:issue:`24113`)
-- Require at least 0.28.2 version of ``cython`` to support read-only memoryviews (:issue:`21688`)
.. _whatsnew_0.24.0.contributors:
| https://api.github.com/repos/pandas-dev/pandas/pulls/24649 | 2019-01-05T21:53:34Z | 2019-01-06T15:52:32Z | 2019-01-06T15:52:32Z | 2019-01-06T15:52:32Z | |
CI/TST: Check that unittest.mock is not being used in testing | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index da1b035cf3ed2..d4a2945f8e3a5 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -148,6 +148,11 @@ if [[ -z "$CHECK" || "$CHECK" == "patterns" ]]; then
invgrep -R --exclude=*.pyc --exclude=testing.py --exclude=test_util.py assert_raises_regex pandas
RET=$(($RET + $?)) ; echo $MSG "DONE"
+ # Check for the following code in testing: `unittest.mock`, `mock.Mock()` or `mock.patch`
+ MSG='Check that unittest.mock is not used (pytest builtin monkeypatch fixture should be used instead)' ; echo $MSG
+ invgrep -r -E --include '*.py' '(unittest(\.| import )mock|mock\.Mock\(\)|mock\.patch)' pandas/tests/
+ RET=$(($RET + $?)) ; echo $MSG "DONE"
+
# Check that we use pytest.raises only as a context manager
#
# For any flake8-compliant code, the only way this regex gets
| xref https://github.com/pandas-dev/pandas/pull/24624#issuecomment-451658866 | https://api.github.com/repos/pandas-dev/pandas/pulls/24648 | 2019-01-05T21:52:31Z | 2019-01-05T22:47:22Z | 2019-01-05T22:47:22Z | 2020-09-17T00:59:13Z |
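The new pattern can be sanity-checked outside CI with Python's `re` module — a rough stand-in for the `grep -E` call, since the ERE added above happens to be valid Python regex syntax as well (a sketch; `invgrep` additionally inverts the exit status):

```python
import re

# Python translation of the extended regex added to ci/code_checks.sh
pattern = re.compile(r"(unittest(\.| import )mock|mock\.Mock\(\)|mock\.patch)")

flagged = [
    "from unittest import mock",
    "import unittest.mock",
    "m = mock.Mock()",
    "with mock.patch('os.getcwd'):",
]
clean = [
    "def test_foo(monkeypatch):",
    "    monkeypatch.setattr('os.getcwd', lambda: '/tmp')",
]
assert all(pattern.search(line) for line in flagged)
assert not any(pattern.search(line) for line in clean)
```

Note that `monkeypatch` itself never trips the check: the pattern requires a literal `mock.` prefix before `Mock()`/`patch`.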
PERF: 10x speedup in Series/DataFrame construction for lists of ints | diff --git a/asv_bench/benchmarks/ctors.py b/asv_bench/benchmarks/ctors.py
index 7c78fe7e7a177..9082b4186bfa4 100644
--- a/asv_bench/benchmarks/ctors.py
+++ b/asv_bench/benchmarks/ctors.py
@@ -41,7 +41,7 @@ def list_of_lists_with_none(arr):
class SeriesConstructors(object):
- param_names = ["data_fmt", "with_index"]
+ param_names = ["data_fmt", "with_index", "dtype"]
params = [[no_change,
list,
list_of_str,
@@ -52,15 +52,19 @@ class SeriesConstructors(object):
list_of_lists,
list_of_tuples_with_none,
list_of_lists_with_none],
- [False, True]]
+ [False, True],
+ ['float', 'int']]
- def setup(self, data_fmt, with_index):
+ def setup(self, data_fmt, with_index, dtype):
N = 10**4
- arr = np.random.randn(N)
+ if dtype == 'float':
+ arr = np.random.randn(N)
+ else:
+ arr = np.arange(N)
self.data = data_fmt(arr)
self.index = np.arange(N) if with_index else None
- def time_series_constructor(self, data_fmt, with_index):
+ def time_series_constructor(self, data_fmt, with_index, dtype):
Series(self.data, index=self.index)
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 6e6d35f00725c..85eb6c3421222 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -2011,7 +2011,8 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
floats[i] = <float64_t>val
complexes[i] = <double complex>val
if not seen.null_:
- seen.saw_int(int(val))
+ val = int(val)
+ seen.saw_int(val)
if ((seen.uint_ and seen.sint_) or
val > oUINT64_MAX or val < oINT64_MIN):
| This PR is a minor tweak to the `int64`/`uint64` overflow fix added in https://github.com/pandas-dev/pandas/pull/18624
Simply casting to an `int` after doing a typecheck is sufficient for the compiler to generate a 10x speedup:
```
$ asv compare upstream/master HEAD --sort ratio -s
Benchmarks that have improved:
before after ratio
[f074abef] [80641ddf]
<series_list_int_speedup~1> <series_list_int_speedup>
failed 7.39±0s n/a strings.Dummies.time_get_dummies
- 61.7±3ms 11.7±0.5ms 0.19 ctors.SeriesConstructors.time_series_constructor(<function arr_dict>, True, 'int')
- 63.0±2ms 11.1±0.3ms 0.18 ctors.SeriesConstructors.time_series_constructor(<function arr_dict>, False, 'int')
- 55.8±2ms 5.37±0.2ms 0.10 ctors.SeriesConstructors.time_series_constructor(<class 'list'>, True, 'int')
- 55.3±5ms 4.84±0.2ms 0.09 ctors.SeriesConstructors.time_series_constructor(<class 'list'>, False, 'int')
```
This is how `maybe_convert_numeric()` already handles `int`s, so this just brings `maybe_convert_object()` back into alignment.
I believe this would yield a similar speedup for `DataFrame`s, but we don't have any benchmarks explicitly testing that. However, the `get_dummies()` benchmark involves expanding to a `DataFrame` and gets a speedup of similar magnitude (not visible above because it previously timed out after 30s).
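The user-visible path being sped up is plain `Series` construction from a Python list of ints; a minimal sketch of what that path must still get right after the cast tweak (assuming numpy and a recent pandas are importable):

```python
import numpy as np
import pandas as pd

# Constructing from a plain list of Python ints goes through
# lib.maybe_convert_objects, the function patched here; the rebound
# `val = int(val)` must not change the inferred dtype.
s = pd.Series(list(range(10_000)))
assert s.dtype == np.dtype("int64")

# Values past the int64 range still fall back to uint64 as before,
# exercising the overflow comparisons just below the patched lines.
big = pd.Series([2**63])
assert big.dtype == np.dtype("uint64")
```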
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24647 | 2019-01-05T21:31:56Z | 2019-01-06T15:48:43Z | 2019-01-06T15:48:42Z | 2019-01-06T15:48:43Z |
Repr for Integer and Pandas Dtypes | diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index f8f87ff1c96f1..b3dde6bf2bd93 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -36,6 +36,11 @@ class _IntegerDtype(ExtensionDtype):
type = None
na_value = np.nan
+ def __repr__(self):
+ sign = 'U' if self.is_unsigned_integer else ''
+ return "{sign}Int{size}Dtype()".format(sign=sign,
+ size=8 * self.itemsize)
+
@cache_readonly
def is_signed_integer(self):
return self.kind == 'i'
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index b1dc77e65eee8..47517782e2bbf 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -38,8 +38,12 @@ def __init__(self, dtype):
self._name = dtype.name
self._type = dtype.type
+ def __repr__(self):
+ return "PandasDtype({!r})".format(self.name)
+
@property
def numpy_dtype(self):
+ """The NumPy dtype this PandasDtype wraps."""
return self._dtype
@property
@@ -72,6 +76,7 @@ def kind(self):
@property
def itemsize(self):
+ """The element size of this data-type object."""
return self._dtype.itemsize
diff --git a/pandas/tests/arrays/test_integer.py b/pandas/tests/arrays/test_integer.py
index 173f9707e76c2..09298bb5cd08d 100644
--- a/pandas/tests/arrays/test_integer.py
+++ b/pandas/tests/arrays/test_integer.py
@@ -57,6 +57,20 @@ def test_dtypes(dtype):
assert dtype.name is not None
+@pytest.mark.parametrize('dtype, expected', [
+ (Int8Dtype(), 'Int8Dtype()'),
+ (Int16Dtype(), 'Int16Dtype()'),
+ (Int32Dtype(), 'Int32Dtype()'),
+ (Int64Dtype(), 'Int64Dtype()'),
+ (UInt8Dtype(), 'UInt8Dtype()'),
+ (UInt16Dtype(), 'UInt16Dtype()'),
+ (UInt32Dtype(), 'UInt32Dtype()'),
+ (UInt64Dtype(), 'UInt64Dtype()'),
+])
+def test_repr_dtype(dtype, expected):
+ assert repr(dtype) == expected
+
+
def test_repr_array():
result = repr(integer_array([1, None, 3]))
expected = (
diff --git a/pandas/tests/arrays/test_numpy.py b/pandas/tests/arrays/test_numpy.py
index b17e509c24e71..a77f1f8a7b3d1 100644
--- a/pandas/tests/arrays/test_numpy.py
+++ b/pandas/tests/arrays/test_numpy.py
@@ -71,6 +71,17 @@ def test_is_boolean(dtype, expected):
assert dtype._is_boolean is expected
+def test_repr():
+ dtype = PandasDtype(np.dtype("int64"))
+ assert repr(dtype) == "PandasDtype('int64')"
+
+
+def test_constructor_from_string():
+ result = PandasDtype.construct_from_string("int64")
+ expected = PandasDtype(np.dtype("int64"))
+ assert result == expected
+
+
# ----------------------------------------------------------------------------
# Construction
| ```python
>>> pd.PandasDtype("int64")
PandasDtype('int64')
>>> pd.Int32Dtype()
Int32Dtype()
>>> pd.UInt8Dtype()
UInt8Dtype()
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/24646 | 2019-01-05T20:17:41Z | 2019-01-05T21:07:35Z | 2019-01-05T21:07:35Z | 2019-01-05T21:07:39Z |
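The repr logic added to `_IntegerDtype` is small enough to sketch standalone — the `Fake*` classes below are hypothetical stand-ins mirroring the `itemsize`/`is_unsigned_integer` attributes the real dtype subclasses expose:

```python
class FakeIntDtype:
    # stand-ins for attributes defined on the real _IntegerDtype subclasses
    itemsize = 4
    is_unsigned_integer = False

    def __repr__(self):
        # same formatting as the diff: optional 'U' sign plus bit width
        sign = 'U' if self.is_unsigned_integer else ''
        return "{sign}Int{size}Dtype()".format(sign=sign,
                                               size=8 * self.itemsize)

class FakeUInt8Dtype(FakeIntDtype):
    itemsize = 1
    is_unsigned_integer = True

assert repr(FakeIntDtype()) == 'Int32Dtype()'
assert repr(FakeUInt8Dtype()) == 'UInt8Dtype()'
```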
CLN: remove redundant mac wheel-build code | diff --git a/setup.py b/setup.py
index 6cd359b281b56..7ba4f5ba399d0 100755
--- a/setup.py
+++ b/setup.py
@@ -401,20 +401,6 @@ def run(self):
cmdclass.update({'clean': CleanCommand,
'build': build})
-try:
- from wheel.bdist_wheel import bdist_wheel
-
- class BdistWheel(bdist_wheel):
- def get_tag(self):
- tag = bdist_wheel.get_tag(self)
- repl = 'macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64'
- if tag[2] == 'macosx_10_6_intel':
- tag = (tag[0], tag[1], repl)
- return tag
- cmdclass['bdist_wheel'] = BdistWheel
-except ImportError:
- pass
-
if cython:
suffix = '.pyx'
cmdclass['build_ext'] = CheckingBuildExt
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
this code is redundant for wheels built using https://github.com/MacPython/pandas-wheels, as the line https://github.com/matthew-brett/multibuild/blob/266d88fe1e474748cc8a3823dc3934bf55e76383/osx_utils.sh#L306 adds tags for 10.9 and 10.10.
Note: circleci fails as I forgot to turn it off on my fork, after it was turned off upstream | https://api.github.com/repos/pandas-dev/pandas/pulls/24644 | 2019-01-05T16:43:45Z | 2019-01-05T18:32:17Z | 2019-01-05T18:32:17Z | 2019-01-05T22:21:07Z |
Make DTA/TDA/PA return NotImplemented on comparisons | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index e6fbc6d1f4b15..b858dc0a8d54a 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -19,7 +19,8 @@
is_extension_type, is_float_dtype, is_int64_dtype, is_object_dtype,
is_period_dtype, is_string_dtype, is_timedelta64_dtype, pandas_dtype)
from pandas.core.dtypes.dtypes import DatetimeTZDtype
-from pandas.core.dtypes.generic import ABCIndexClass, ABCPandasArray, ABCSeries
+from pandas.core.dtypes.generic import (
+ ABCDataFrame, ABCIndexClass, ABCPandasArray, ABCSeries)
from pandas.core.dtypes.missing import isna
from pandas.core import ops
@@ -96,9 +97,8 @@ def _dt_array_cmp(cls, op):
nat_result = True if opname == '__ne__' else False
def wrapper(self, other):
- # TODO: return NotImplemented for Series / Index and let pandas unbox
- # Right now, returning NotImplemented for Index fails because we
- # go into the index implementation, which may be a bug?
+ if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
+ return NotImplemented
other = lib.item_from_zerodim(other)
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 34bb03b249c21..513bd7223e880 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -17,7 +17,8 @@
_TD_DTYPE, ensure_object, is_datetime64_dtype, is_float_dtype,
is_list_like, is_period_dtype, pandas_dtype)
from pandas.core.dtypes.dtypes import PeriodDtype
-from pandas.core.dtypes.generic import ABCIndexClass, ABCPeriodIndex, ABCSeries
+from pandas.core.dtypes.generic import (
+ ABCDataFrame, ABCIndexClass, ABCPeriodIndex, ABCSeries)
from pandas.core.dtypes.missing import isna, notna
import pandas.core.algorithms as algos
@@ -48,17 +49,13 @@ def _period_array_cmp(cls, op):
def wrapper(self, other):
op = getattr(self.asi8, opname)
- # We want to eventually defer to the Series or PeriodIndex (which will
- # return here with an unboxed PeriodArray). But before we do that,
- # we do a bit of validation on type (Period) and freq, so that our
- # error messages are sensible
+
+ if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
+ return NotImplemented
+
if is_list_like(other) and len(other) != len(self):
raise ValueError("Lengths must match")
- not_implemented = isinstance(other, (ABCSeries, ABCIndexClass))
- if not_implemented:
- other = other._values
-
if isinstance(other, Period):
self._check_compatible_with(other)
@@ -66,8 +63,6 @@ def wrapper(self, other):
elif isinstance(other, cls):
self._check_compatible_with(other)
- if not_implemented:
- return NotImplemented
result = op(other.asi8)
mask = self._isnan | other._isnan
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index ab9986b5bff69..624305ec4303d 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -64,6 +64,9 @@ def _td_array_cmp(cls, op):
nat_result = True if opname == '__ne__' else False
def wrapper(self, other):
+ if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
+ return NotImplemented
+
if _is_convertible_to_td(other) or other is NaT:
try:
other = Timedelta(other)
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 5a8809f754385..aa7332472fc07 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -109,6 +109,11 @@ def _create_comparison_method(cls, op):
Create a comparison method that dispatches to ``cls.values``.
"""
def wrapper(self, other):
+ if isinstance(other, ABCSeries):
+ # the arrays defer to Series for comparison ops but the indexes
+ # don't, so we have to unwrap here.
+ other = other._values
+
result = op(self._data, maybe_unwrap_index(other))
return result
diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py
index cdacd4b42d683..92f209b94f00d 100644
--- a/pandas/tests/arithmetic/test_period.py
+++ b/pandas/tests/arithmetic/test_period.py
@@ -152,7 +152,10 @@ def test_parr_cmp_pi_mismatched_freq_raises(self, freq, box_with_array):
# TODO: Could parametrize over boxes for idx?
idx = PeriodIndex(['2011', '2012', '2013', '2014'], freq='A')
- with pytest.raises(IncompatibleFrequency, match=msg):
+ rev_msg = (r'Input has different freq=(M|2M|3M) from '
+ r'PeriodArray\(freq=A-DEC\)')
+ idx_msg = rev_msg if box_with_array is tm.to_array else msg
+ with pytest.raises(IncompatibleFrequency, match=idx_msg):
base <= idx
# Different frequency
@@ -164,7 +167,10 @@ def test_parr_cmp_pi_mismatched_freq_raises(self, freq, box_with_array):
Period('2011', freq='4M') >= base
idx = PeriodIndex(['2011', '2012', '2013', '2014'], freq='4M')
- with pytest.raises(IncompatibleFrequency, match=msg):
+ rev_msg = (r'Input has different freq=(M|2M|3M) from '
+ r'PeriodArray\(freq=4M\)')
+ idx_msg = rev_msg if box_with_array is tm.to_array else msg
+ with pytest.raises(IncompatibleFrequency, match=idx_msg):
base <= idx
@pytest.mark.parametrize('freq', ['M', '2M', '3M'])
| Before implementing a boilerplate decorator like in #24282, going through to standardize the affected behaviors. | https://api.github.com/repos/pandas-dev/pandas/pulls/24643 | 2019-01-05T16:08:27Z | 2019-01-05T21:54:32Z | 2019-01-05T21:54:32Z | 2019-01-05T21:59:16Z |
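The reason the arrays now return `NotImplemented` for Series/Index/DataFrame operands is Python's binary-op protocol: the interpreter then retries with the other operand's method, letting the higher-level box unwrap and re-dispatch. A toy sketch, with hypothetical `Array`/`Box` classes standing in for ExtensionArray and Series:

```python
class Array:
    """Stand-in for a datetime-like ExtensionArray."""
    def __init__(self, data):
        self.data = data

    def __eq__(self, other):
        if isinstance(other, Box):
            # defer: Python retries with Box.__eq__(other, self)
            return NotImplemented
        return Array([x == y for x, y in zip(self.data, other.data)])


class Box:
    """Stand-in for Series/Index wrapping an Array."""
    def __init__(self, array):
        self.array = array

    def __eq__(self, other):
        if isinstance(other, Box):
            other = other.array  # unbox, as Series comparison ops do
        return Box(self.array == other)


result = Array([1, 2, 3]) == Box(Array([1, 0, 3]))
# the Array deferred, so the Box ran the comparison and re-boxed it
assert isinstance(result, Box)
assert result.array.data == [True, False, True]
```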
DOC: fix warnings in docstrings examples for deprecated functions | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index e6fbc6d1f4b15..3d5312ff1ed49 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -766,8 +766,8 @@ def tz_convert(self, tz):
With the `tz` parameter, we can change the DatetimeIndex
to other time zones:
- >>> dti = pd.DatetimeIndex(start='2014-08-01 09:00',
- ... freq='H', periods=3, tz='Europe/Berlin')
+ >>> dti = pd.date_range(start='2014-08-01 09:00',
+ ... freq='H', periods=3, tz='Europe/Berlin')
>>> dti
DatetimeIndex(['2014-08-01 09:00:00+02:00',
@@ -784,8 +784,8 @@ def tz_convert(self, tz):
With the ``tz=None``, we can remove the timezone (after converting
to UTC if necessary):
- >>> dti = pd.DatetimeIndex(start='2014-08-01 09:00',freq='H',
- ... periods=3, tz='Europe/Berlin')
+ >>> dti = pd.date_range(start='2014-08-01 09:00',freq='H',
+ ... periods=3, tz='Europe/Berlin')
>>> dti
DatetimeIndex(['2014-08-01 09:00:00+02:00',
@@ -1037,8 +1037,8 @@ def normalize(self):
Examples
--------
- >>> idx = pd.DatetimeIndex(start='2014-08-01 10:00', freq='H',
- ... periods=3, tz='Asia/Calcutta')
+ >>> idx = pd.date_range(start='2014-08-01 10:00', freq='H',
+ ... periods=3, tz='Asia/Calcutta')
>>> idx
DatetimeIndex(['2014-08-01 10:00:00+05:30',
'2014-08-01 11:00:00+05:30',
@@ -1164,7 +1164,7 @@ def month_name(self, locale=None):
Examples
--------
- >>> idx = pd.DatetimeIndex(start='2018-01', freq='M', periods=3)
+ >>> idx = pd.date_range(start='2018-01', freq='M', periods=3)
>>> idx
DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31'],
dtype='datetime64[ns]', freq='M')
@@ -1200,7 +1200,7 @@ def day_name(self, locale=None):
Examples
--------
- >>> idx = pd.DatetimeIndex(start='2018-01-01', freq='D', periods=3)
+ >>> idx = pd.date_range(start='2018-01-01', freq='D', periods=3)
>>> idx
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03'],
dtype='datetime64[ns]', freq='D')
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d78a19dea9490..d271081aeaa51 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4433,8 +4433,8 @@ def _reindex_multi(self, axes, copy, fill_value):
num_legs num_wings
dog 4 0
hawk 2 2
- >>> df.reindex_axis(['num_wings', 'num_legs', 'num_heads'],
- ... axis='columns')
+ >>> df.reindex(['num_wings', 'num_legs', 'num_heads'],
+ ... axis='columns')
num_wings num_legs num_heads
dog 0 4 NaN
hawk 2 2 NaN
@@ -7352,7 +7352,7 @@ def clip_upper(self, threshold, axis=None, inplace=False):
4 5
dtype: int64
- >>> s.clip_upper(3)
+ >>> s.clip(upper=3)
0 1
1 2
2 3
@@ -7360,11 +7360,11 @@ def clip_upper(self, threshold, axis=None, inplace=False):
4 3
dtype: int64
- >>> t = [5, 4, 3, 2, 1]
- >>> t
+ >>> elemwise_thresholds = [5, 4, 3, 2, 1]
+ >>> elemwise_thresholds
[5, 4, 3, 2, 1]
- >>> s.clip_upper(t)
+ >>> s.clip(upper=elemwise_thresholds)
0 1
1 2
2 3
@@ -7428,7 +7428,7 @@ def clip_lower(self, threshold, axis=None, inplace=False):
Series single threshold clipping:
>>> s = pd.Series([5, 6, 7, 8, 9])
- >>> s.clip_lower(8)
+ >>> s.clip(lower=8)
0 8
1 8
2 8
@@ -7440,7 +7440,7 @@ def clip_lower(self, threshold, axis=None, inplace=False):
should be the same length as the Series.
>>> elemwise_thresholds = [4, 8, 7, 2, 5]
- >>> s.clip_lower(elemwise_thresholds)
+ >>> s.clip(lower=elemwise_thresholds)
0 5
1 8
2 7
@@ -7457,7 +7457,7 @@ def clip_lower(self, threshold, axis=None, inplace=False):
1 3 4
2 5 6
- >>> df.clip_lower(3)
+ >>> df.clip(lower=3)
A B
0 3 3
1 3 4
@@ -7466,7 +7466,7 @@ def clip_lower(self, threshold, axis=None, inplace=False):
Or to an array of values. By default, `threshold` should be the same
shape as the DataFrame.
- >>> df.clip_lower(np.array([[3, 4], [2, 2], [6, 2]]))
+ >>> df.clip(lower=np.array([[3, 4], [2, 2], [6, 2]]))
A B
0 3 4
1 3 4
@@ -7476,13 +7476,13 @@ def clip_lower(self, threshold, axis=None, inplace=False):
`threshold` should be the same length as the axis specified by
`axis`.
- >>> df.clip_lower([3, 3, 5], axis='index')
+ >>> df.clip(lower=[3, 3, 5], axis='index')
A B
0 3 3
1 3 4
2 5 6
- >>> df.clip_lower([4, 5], axis='columns')
+ >>> df.clip(lower=[4, 5], axis='columns')
A B
0 4 5
1 4 5
| - [x] closes #24525
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
I have updated all the `FutureWarning` cases; let me know if the `PerformanceWarning` and `DtypeWarning` shown below also need a fix.
```zsh
python scripts/validate_docstrings.py --errors GL06,GL07,GL09,SS04,PR03,PR05,EX04
scripts/validate_docstrings.py:764: PerformanceWarning: indexing past lexsort depth may impact performance.
errs, wrns, examples_errs = get_validation_data(doc)
scripts/validate_docstrings.py:764: PerformanceWarning: indexing past lexsort depth may impact performance.
errs, wrns, examples_errs = get_validation_data(doc)
scripts/validate_docstrings.py:524: DtypeWarning: Columns (0) have mixed types. Specify dtype option on import or set low_memory=False.
runner.run(test, out=f.write)
```
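The replacement calls in the updated docstrings behave identically to the deprecated methods — a quick check (assuming pandas is importable):

```python
import pandas as pd

s = pd.Series([5, 6, 7, 8, 9])

# clip(lower=...) is the non-deprecated spelling of clip_lower(...)
assert s.clip(lower=8).tolist() == [8, 8, 8, 8, 9]

# element-wise thresholds, as in the rewritten docstring example
assert s.clip(lower=[4, 8, 7, 2, 5]).tolist() == [5, 8, 7, 8, 9]

# and clip(upper=...) replaces clip_upper(...)
assert pd.Series([1, 2, 3, 4, 5]).clip(upper=3).tolist() == [1, 2, 3, 3, 3]
```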
| https://api.github.com/repos/pandas-dev/pandas/pulls/24642 | 2019-01-05T14:42:34Z | 2019-01-05T17:31:42Z | 2019-01-05T17:31:42Z | 2019-01-05T23:58:06Z |
Fix 32-bit builds by correctly using intp_t instead of int64_t for numpy.searchsorted result, part 2 (#24621) | diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index 7c9c2cafd1afb..6aa02ca1e5421 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -1290,7 +1290,8 @@ def is_date_array_normalized(int64_t[:] stamps, object tz=None):
cdef:
Py_ssize_t i, n = len(stamps)
ndarray[int64_t] trans
- int64_t[:] deltas, pos
+ int64_t[:] deltas
+ intp_t[:] pos
npy_datetimestruct dts
int64_t local_val, delta
str typ
| xref #24613 | https://api.github.com/repos/pandas-dev/pandas/pulls/24640 | 2019-01-05T13:55:17Z | 2019-01-05T14:48:33Z | 2019-01-05T14:48:33Z | 2019-01-05T14:48:33Z |
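The underlying issue is that numpy's `searchsorted` returns platform-sized `intp` indices rather than `int64`, so on 32-bit builds assigning the result to an `int64_t[:]` memoryview fails — hence typing `pos` as `intp_t[:]` in the diff. A quick numpy illustration:

```python
import numpy as np

trans = np.array([10, 20, 30], dtype=np.int64)
stamps = np.array([5, 15, 25], dtype=np.int64)

# searchsorted indices come back as intp (32-bit on 32-bit builds),
# regardless of the dtype of the arrays being searched
pos = trans.searchsorted(stamps, side='right')
assert pos.dtype == np.intp
assert pos.tolist() == [0, 1, 2]
```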
CLN: Parameterize test cases | diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py
index 01aa8e8ccc1ee..7dfc21562cc5d 100644
--- a/pandas/tests/plotting/test_converter.py
+++ b/pandas/tests/plotting/test_converter.py
@@ -239,29 +239,17 @@ def test_conversion_outofbounds_datetime(self):
xp = converter.dates.date2num(values[0])
assert rs == xp
- def test_time_formatter(self):
+ @pytest.mark.parametrize('time,format_expected', [
+ (0, '00:00'), # time2num(datetime.time.min)
+ (86399.999999, '23:59:59.999999'), # time2num(datetime.time.max)
+ (90000, '01:00'),
+ (3723, '01:02:03'),
+ (39723.2, '11:02:03.200')
+ ])
+ def test_time_formatter(self, time, format_expected):
# issue 18478
-
- # time2num(datetime.time.min)
- rs = self.tc(0)
- xp = '00:00'
- assert rs == xp
-
- # time2num(datetime.time.max)
- rs = self.tc(86399.999999)
- xp = '23:59:59.999999'
- assert rs == xp
-
- # some other times
- rs = self.tc(90000)
- xp = '01:00'
- assert rs == xp
- rs = self.tc(3723)
- xp = '01:02:03'
- assert rs == xp
- rs = self.tc(39723.2)
- xp = '11:02:03.200'
- assert rs == xp
+ result = self.tc(time)
+ assert result == format_expected
def test_dateindex_conversion(self):
decimals = 9
| - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/24639 | 2019-01-05T13:43:40Z | 2019-01-05T14:56:03Z | 2019-01-05T14:56:03Z | 2019-01-05T16:09:18Z |
DOC: parallel safe | diff --git a/doc/sphinxext/contributors.py b/doc/sphinxext/contributors.py
index 8c9fa5bc961d1..179ba19a0908a 100644
--- a/doc/sphinxext/contributors.py
+++ b/doc/sphinxext/contributors.py
@@ -46,4 +46,8 @@ def run(self):
def setup(app):
app.add_directive('contributors', ContributorsDirective)
- return {'version': '0.1'}
+ return {
+ 'version': '0.1',
+ 'parallel_read_safe': True,
+ 'parallel_write_safe': True,
+ }
| @datapythonista this may have been why the parallel option wasn't helping you. | https://api.github.com/repos/pandas-dev/pandas/pulls/24638 | 2019-01-05T12:54:42Z | 2019-01-05T14:55:34Z | 2019-01-05T14:55:34Z | 2019-01-06T00:07:01Z |
REF: io/formats/html.py (and io/formats/format.py) | diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 9dc2692f276e3..f8ee9c273fd59 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -778,7 +778,7 @@ def space_format(x, y):
for i, (col, x) in enumerate(zip(columns,
fmt_columns))]
- if self.show_index_names and self.has_index_names:
+ if self.show_row_idx_names:
for x in str_columns:
x.append('')
@@ -793,22 +793,33 @@ def has_index_names(self):
def has_column_names(self):
return _has_names(self.frame.columns)
+ @property
+ def show_row_idx_names(self):
+ return all((self.has_index_names,
+ self.index,
+ self.show_index_names))
+
+ @property
+ def show_col_idx_names(self):
+ return all((self.has_column_names,
+ self.show_index_names,
+ self.header))
+
def _get_formatted_index(self, frame):
# Note: this is only used by to_string() and to_latex(), not by
# to_html().
index = frame.index
columns = frame.columns
-
- show_index_names = self.show_index_names and self.has_index_names
- show_col_names = (self.show_index_names and self.has_column_names)
-
fmt = self._get_formatter('__index__')
if isinstance(index, ABCMultiIndex):
- fmt_index = index.format(sparsify=self.sparsify, adjoin=False,
- names=show_index_names, formatter=fmt)
+ fmt_index = index.format(
+ sparsify=self.sparsify, adjoin=False,
+ names=self.show_row_idx_names, formatter=fmt)
else:
- fmt_index = [index.format(name=show_index_names, formatter=fmt)]
+ fmt_index = [index.format(
+ name=self.show_row_idx_names, formatter=fmt)]
+
fmt_index = [tuple(_make_fixed_width(list(x), justify='left',
minimum=(self.col_space or 0),
adj=self.adj)) for x in fmt_index]
@@ -816,7 +827,7 @@ def _get_formatted_index(self, frame):
adjoined = self.adj.adjoin(1, *fmt_index).split('\n')
# empty space for columns
- if show_col_names:
+ if self.show_col_idx_names:
col_header = ['{x}'.format(x=x)
for x in self._get_column_name_list()]
else:
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index 390c3f3d5c709..90f1dbe704806 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -45,23 +45,11 @@ def __init__(self, formatter, classes=None, notebook=False, border=None,
@property
def show_row_idx_names(self):
- return all((self.fmt.has_index_names,
- self.fmt.index,
- self.fmt.show_index_names))
+ return self.fmt.show_row_idx_names
@property
def show_col_idx_names(self):
- # see gh-22579
- # Column misalignment also occurs for
- # a standard index when the columns index is named.
- # Determine if ANY column names need to be displayed
- # since if the row index is not displayed a column of
- # blank cells need to be included before the DataFrame values.
- # TODO: refactor to add show_col_idx_names property to
- # DataFrameFormatter
- return all((self.fmt.has_column_names,
- self.fmt.show_index_names,
- self.fmt.header))
+ return self.fmt.show_col_idx_names
@property
def row_levels(self):
@@ -184,14 +172,28 @@ def write_style(self):
template = dedent('\n'.join((template_first,
template_mid,
template_last)))
- if self.notebook:
- self.write(template)
+ self.write(template)
def write_result(self, buf):
- indent = 0
- id_section = ""
- frame = self.frame
+ if self.notebook:
+ self.write('<div>')
+ self.write_style()
+
+ self._write_table()
+
+ if self.should_show_dimensions:
+ by = chr(215) if compat.PY3 else unichr(215) # ×
+ self.write(u('<p>{rows} rows {by} {cols} columns</p>')
+ .format(rows=len(self.frame),
+ by=by,
+ cols=len(self.frame.columns)))
+ if self.notebook:
+ self.write('</div>')
+
+ buffer_put_lines(buf, self.elements)
+
+ def _write_table(self, indent=0):
_classes = ['dataframe'] # Default class.
use_mathjax = get_option("display.html.use_mathjax")
if not use_mathjax:
@@ -204,33 +206,21 @@ def write_result(self, buf):
.format(typ=type(self.classes)))
_classes.extend(self.classes)
- if self.notebook:
- self.write('<div>')
-
- self.write_style()
-
- if self.table_id is not None:
+ if self.table_id is None:
+ id_section = ""
+ else:
id_section = ' id="{table_id}"'.format(table_id=self.table_id)
+
self.write('<table border="{border}" class="{cls}"{id_section}>'
.format(border=self.border, cls=' '.join(_classes),
id_section=id_section), indent)
- indent += self.indent_delta
- indent = self._write_header(indent)
- indent = self._write_body(indent)
+ if self.fmt.header or self.show_row_idx_names:
+ self._write_header(indent + self.indent_delta)
- self.write('</table>', indent)
- if self.should_show_dimensions:
- by = chr(215) if compat.PY3 else unichr(215) # ×
- self.write(u('<p>{rows} rows {by} {cols} columns</p>')
- .format(rows=len(frame),
- by=by,
- cols=len(frame.columns)))
+ self._write_body(indent + self.indent_delta)
- if self.notebook:
- self.write('</div>')
-
- buffer_put_lines(buf, self.elements)
+ self.write('</table>', indent)
def _write_col_header(self, indent):
truncate_h = self.fmt.truncate_h
@@ -359,41 +349,29 @@ def _write_row_header(self, indent):
self.write_tr(row, indent, self.indent_delta, header=True)
def _write_header(self, indent):
- if not (self.fmt.header or self.show_row_idx_names):
- # write nothing
- return indent
-
self.write('<thead>', indent)
- indent += self.indent_delta
if self.fmt.header:
- self._write_col_header(indent)
+ self._write_col_header(indent + self.indent_delta)
if self.show_row_idx_names:
- self._write_row_header(indent)
+ self._write_row_header(indent + self.indent_delta)
- indent -= self.indent_delta
self.write('</thead>', indent)
- return indent
-
def _write_body(self, indent):
self.write('<tbody>', indent)
- indent += self.indent_delta
-
fmt_values = {i: self.fmt._format_col(i) for i in range(self.ncols)}
# write values
if self.fmt.index and isinstance(self.frame.index, ABCMultiIndex):
- self._write_hierarchical_rows(fmt_values, indent)
+ self._write_hierarchical_rows(
+ fmt_values, indent + self.indent_delta)
else:
- self._write_regular_rows(fmt_values, indent)
+ self._write_regular_rows(
+ fmt_values, indent + self.indent_delta)
- indent -= self.indent_delta
self.write('</tbody>', indent)
- indent -= self.indent_delta
-
- return indent
def _write_regular_rows(self, fmt_values, indent):
truncate_h = self.fmt.truncate_h
| - [ n/a] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ n/a] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24637 | 2019-01-05T12:43:27Z | 2019-01-05T14:55:15Z | 2019-01-05T14:55:15Z | 2019-01-05T21:05:31Z |
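The refactor above replaces mutate-and-return indent bookkeeping (`indent += self.indent_delta` ... `return indent`) with helpers that receive `indent + self.indent_delta` directly, so each caller stays at its own level. A minimal standalone sketch of that pattern — a hypothetical `Writer` class, not the actual pandas `HTMLFormatter`:

```python
class Writer:
    """Toy HTML writer illustrating indent passed by value, never mutated."""

    def __init__(self, indent_delta=2):
        self.indent_delta = indent_delta
        self.elements = []

    def write(self, s, indent=0):
        self.elements.append(' ' * indent + s)

    def _write_body(self, rows, indent):
        # the helper works at the level it is given; the caller's
        # indent variable is never reassigned
        self.write('<tbody>', indent)
        for row in rows:
            self.write('<tr><td>{}</td></tr>'.format(row),
                       indent + self.indent_delta)
        self.write('</tbody>', indent)

    def write_table(self, rows, indent=0):
        self.write('<table>', indent)
        self._write_body(rows, indent + self.indent_delta)
        self.write('</table>', indent)
```

With this shape there is no `indent -= self.indent_delta` to forget, which is the class of bookkeeping bug the refactor removes.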
Fix DeprecationWarning: invalid escape sequence in versioneer.py | diff --git a/versioneer.py b/versioneer.py
index 2725fe98641a4..01adaa248dbd4 100644
--- a/versioneer.py
+++ b/versioneer.py
@@ -464,7 +464,9 @@ def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False):
print("unable to run %s (error)" % dispcmd)
return None
return stdout
-LONG_VERSION_PY['git'] = '''
+
+
+LONG_VERSION_PY['git'] = r'''
# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by githubs download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
| Hello,
This is a little patch to fix a `DeprecationWarning: invalid escape sequence` in `versioneer.py`.
- [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24636 | 2019-01-05T12:19:13Z | 2019-01-05T17:52:34Z | 2019-01-05T17:52:34Z | 2019-01-05T17:58:37Z |
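The `r'''` prefix works because raw string literals keep their backslashes literal, so Python never tries to interpret sequences like `\d` as escapes. A small sketch that surfaces the warning the patch silences — the helper name is ours; note that on Python 3.12+ the warning category is `SyntaxWarning` rather than `DeprecationWarning`, but the message text is the same:

```python
import warnings

def has_invalid_escape(src):
    """Compile `src` and report whether an invalid-escape warning fired."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        compile(src, "<example>", "exec")
    return any("invalid escape sequence" in str(w.message) for w in caught)

# s = '\d'  -> warns; s = r'\d' -> silent
```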
DOC: Fix flake8 errors on whatsnew v0.15* | diff --git a/doc/source/whatsnew/v0.15.0.rst b/doc/source/whatsnew/v0.15.0.rst
index 6f74f0393d123..36f2c9013219b 100644
--- a/doc/source/whatsnew/v0.15.0.rst
+++ b/doc/source/whatsnew/v0.15.0.rst
@@ -5,11 +5,6 @@ v0.15.0 (October 18, 2014)
{{ header }}
-.. ipython:: python
- :suppress:
-
- from pandas import * # noqa F401, F403
-
This is a major release from 0.14.1 and includes a small number of API changes, several new features,
enhancements, and performance improvements along with a large number of bug fixes. We recommend that all
@@ -77,7 +72,8 @@ For full docs, see the :ref:`categorical introduction <categorical>` and the
.. ipython:: python
:okwarning:
- df = DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
+ df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6],
+ "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']})
df["grade"] = df["raw_grade"].astype("category")
df["grade"]
@@ -86,7 +82,8 @@ For full docs, see the :ref:`categorical introduction <categorical>` and the
df["grade"].cat.categories = ["very good", "good", "very bad"]
# Reorder the categories and simultaneously add the missing categories
- df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
+ df["grade"] = df["grade"].cat.set_categories(["very bad", "bad",
+ "medium", "good", "very good"])
df["grade"]
df.sort_values("grade")
df.groupby("grade").size()
@@ -123,7 +120,7 @@ This type is very similar to how ``Timestamp`` works for ``datetimes``. It is a
.. code-block:: ipython
# Timedelta accessor
- In [9]: tds = Timedelta('31 days 5 min 3 sec')
+ In [9]: tds = pd.Timedelta('31 days 5 min 3 sec')
In [10]: tds.minutes
Out[10]: 5L
@@ -151,22 +148,22 @@ Construct a scalar
.. ipython:: python
- Timedelta('1 days 06:05:01.00003')
- Timedelta('15.5us')
- Timedelta('1 hour 15.5us')
+ pd.Timedelta('1 days 06:05:01.00003')
+ pd.Timedelta('15.5us')
+ pd.Timedelta('1 hour 15.5us')
# negative Timedeltas have this string repr
# to be more consistent with datetime.timedelta conventions
- Timedelta('-1us')
+ pd.Timedelta('-1us')
# a NaT
- Timedelta('nan')
+ pd.Timedelta('nan')
Access fields for a ``Timedelta``
.. ipython:: python
- td = Timedelta('1 hour 3m 15.5us')
+ td = pd.Timedelta('1 hour 3m 15.5us')
td.seconds
td.microseconds
td.nanoseconds
@@ -177,26 +174,26 @@ Construct a ``TimedeltaIndex``
:suppress:
import datetime
- from datetime import timedelta
.. ipython:: python
- TimedeltaIndex(['1 days','1 days, 00:00:05',
- np.timedelta64(2,'D'),timedelta(days=2,seconds=2)])
+ pd.TimedeltaIndex(['1 days', '1 days, 00:00:05',
+ np.timedelta64(2, 'D'),
+ datetime.timedelta(days=2, seconds=2)])
Constructing a ``TimedeltaIndex`` with a regular range
.. ipython:: python
- timedelta_range('1 days',periods=5,freq='D')
- timedelta_range(start='1 days',end='2 days',freq='30T')
+ pd.timedelta_range('1 days', periods=5, freq='D')
+ pd.timedelta_range(start='1 days', end='2 days', freq='30T')
You can now use a ``TimedeltaIndex`` as the index of a pandas object
.. ipython:: python
- s = Series(np.arange(5),
- index=timedelta_range('1 days',periods=5,freq='s'))
+ s = pd.Series(np.arange(5),
+ index=pd.timedelta_range('1 days', periods=5, freq='s'))
s
You can select with partial string selections
@@ -210,9 +207,9 @@ Finally, the combination of ``TimedeltaIndex`` with ``DatetimeIndex`` allow cert
.. ipython:: python
- tdi = TimedeltaIndex(['1 days',pd.NaT,'2 days'])
+ tdi = pd.TimedeltaIndex(['1 days', pd.NaT, '2 days'])
tdi.tolist()
- dti = date_range('20130101',periods=3)
+ dti = pd.date_range('20130101', periods=3)
dti.tolist()
(dti + tdi).tolist()
@@ -235,9 +232,8 @@ A new display option ``display.memory_usage`` (see :ref:`options`) sets the defa
dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]',
'complex128', 'object', 'bool']
n = 5000
- data = dict([ (t, np.random.randint(100, size=n).astype(t))
- for t in dtypes])
- df = DataFrame(data)
+ data = {t: np.random.randint(100, size=n).astype(t) for t in dtypes}
+ df = pd.DataFrame(data)
df['categorical'] = df['object'].astype('category')
df.info()
@@ -260,7 +256,7 @@ This will return a Series, indexed like the existing Series. See the :ref:`docs
.. ipython:: python
# datetime
- s = Series(date_range('20130101 09:10:12',periods=4))
+ s = pd.Series(pd.date_range('20130101 09:10:12', periods=4))
s
s.dt.hour
s.dt.second
@@ -271,7 +267,7 @@ This enables nice expressions like this:
.. ipython:: python
- s[s.dt.day==2]
+ s[s.dt.day == 2]
You can easily produce tz aware transformations:
@@ -292,7 +288,7 @@ The ``.dt`` accessor works for period and timedelta dtypes.
.. ipython:: python
# period
- s = Series(period_range('20130101',periods=4,freq='D'))
+ s = pd.Series(pd.period_range('20130101', periods=4, freq='D'))
s
s.dt.year
s.dt.day
@@ -300,7 +296,7 @@ The ``.dt`` accessor works for period and timedelta dtypes.
.. ipython:: python
# timedelta
- s = Series(timedelta_range('1 day 00:00:05',periods=4,freq='s'))
+ s = pd.Series(pd.timedelta_range('1 day 00:00:05', periods=4, freq='s'))
s
s.dt.days
s.dt.seconds
@@ -318,11 +314,12 @@ Timezone handling improvements
.. ipython:: python
:okwarning:
- ts = Timestamp('2014-08-01 09:00', tz='US/Eastern')
+ ts = pd.Timestamp('2014-08-01 09:00', tz='US/Eastern')
ts
ts.tz_localize(None)
- didx = DatetimeIndex(start='2014-08-01 09:00', freq='H', periods=10, tz='US/Eastern')
+ didx = pd.DatetimeIndex(start='2014-08-01 09:00', freq='H',
+ periods=10, tz='US/Eastern')
didx
didx.tz_localize(None)
@@ -353,11 +350,11 @@ Rolling/Expanding Moments improvements
.. ipython:: python
- s = Series([10, 11, 12, 13])
+ s = pd.Series([10, 11, 12, 13])
.. code-block:: ipython
- In [15]: rolling_min(s, window=10, min_periods=5)
+ In [15]: pd.rolling_min(s, window=10, min_periods=5)
ValueError: min_periods (5) must be <= window (4)
New behavior
@@ -386,7 +383,7 @@ Rolling/Expanding Moments improvements
.. code-block:: ipython
- In [7]: rolling_sum(Series(range(4)), window=3, min_periods=0, center=True)
+ In [7]: pd.rolling_sum(Series(range(4)), window=3, min_periods=0, center=True)
Out[7]:
0 1
1 3
@@ -398,7 +395,8 @@ Rolling/Expanding Moments improvements
.. code-block:: ipython
- In [7]: rolling_sum(Series(range(4)), window=3, min_periods=0, center=True)
+ In [7]: pd.rolling_sum(pd.Series(range(4)), window=3,
+ ....: min_periods=0, center=True)
Out[7]:
0 1
1 3
@@ -412,13 +410,13 @@ Rolling/Expanding Moments improvements
.. ipython:: python
- s = Series([10.5, 8.8, 11.4, 9.7, 9.3])
+ s = pd.Series([10.5, 8.8, 11.4, 9.7, 9.3])
Behavior prior to 0.15.0:
.. code-block:: ipython
- In [39]: rolling_window(s, window=3, win_type='triang', center=True)
+ In [39]: pd.rolling_window(s, window=3, win_type='triang', center=True)
Out[39]:
0 NaN
1 6.583333
@@ -461,7 +459,7 @@ Rolling/Expanding Moments improvements
.. ipython:: python
- s = Series([1, None, None, None, 2, 3])
+ s = pd.Series([1, None, None, None, 2, 3])
.. code-block:: ipython
@@ -503,21 +501,23 @@ Rolling/Expanding Moments improvements
.. code-block:: ipython
- In [7]: pd.ewma(Series([None, 1., 8.]), com=2.)
+ In [7]: pd.ewma(pd.Series([None, 1., 8.]), com=2.)
Out[7]:
0 NaN
1 1.0
2 5.2
dtype: float64
- In [8]: pd.ewma(Series([1., None, 8.]), com=2., ignore_na=True) # pre-0.15.0 behavior
+ In [8]: pd.ewma(pd.Series([1., None, 8.]), com=2.,
+ ....: ignore_na=True) # pre-0.15.0 behavior
Out[8]:
0 1.0
1 1.0
2 5.2
dtype: float64
- In [9]: pd.ewma(Series([1., None, 8.]), com=2., ignore_na=False) # new default
+ In [9]: pd.ewma(pd.Series([1., None, 8.]), com=2.,
+ ....: ignore_na=False) # new default
Out[9]:
0 1.000000
1 1.000000
@@ -554,7 +554,7 @@ Rolling/Expanding Moments improvements
.. ipython:: python
- s = Series([1., 2., 0., 4.])
+ s = pd.Series([1., 2., 0., 4.])
.. code-block:: ipython
@@ -612,8 +612,8 @@ Improvements in the sql io module
.. code-block:: python
- df.to_sql('table', engine, schema='other_schema')
- pd.read_sql_table('table', engine, schema='other_schema')
+ df.to_sql('table', engine, schema='other_schema') # noqa F821
+ pd.read_sql_table('table', engine, schema='other_schema') # noqa F821
- Added support for writing ``NaN`` values with ``to_sql`` (:issue:`2754`).
- Added support for writing datetime64 columns with ``to_sql`` for all database flavors (:issue:`7103`).
@@ -668,7 +668,7 @@ Other notable API changes:
.. ipython:: python
- df = DataFrame([['a'],['b']],index=[1,2])
+ df = pd.DataFrame([['a'], ['b']], index=[1, 2])
df
In prior versions there was a difference in these two constructs:
@@ -687,13 +687,13 @@ Other notable API changes:
.. code-block:: ipython
- In [3]: df.loc[[1,3]]
+ In [3]: df.loc[[1, 3]]
Out[3]:
0
1 a
3 NaN
- In [4]: df.loc[[1,3],:]
+ In [4]: df.loc[[1, 3], :]
Out[4]:
0
1 a
@@ -703,10 +703,10 @@ Other notable API changes:
.. ipython:: python
- p = Panel(np.arange(2*3*4).reshape(2,3,4),
- items=['ItemA','ItemB'],
- major_axis=[1,2,3],
- minor_axis=['A','B','C','D'])
+ p = pd.Panel(np.arange(2 * 3 * 4).reshape(2, 3, 4),
+ items=['ItemA', 'ItemB'],
+ major_axis=[1, 2, 3],
+ minor_axis=['A', 'B', 'C', 'D'])
p
The following would raise ``KeyError`` prior to 0.15.0:
@@ -725,15 +725,16 @@ Other notable API changes:
.. ipython:: python
:okexcept:
- s = Series(np.arange(3,dtype='int64'),
- index=MultiIndex.from_product([['A'],['foo','bar','baz']],
- names=['one','two'])
- ).sort_index()
+ s = pd.Series(np.arange(3, dtype='int64'),
+ index=pd.MultiIndex.from_product([['A'],
+ ['foo', 'bar', 'baz']],
+ names=['one', 'two'])
+ ).sort_index()
s
try:
- s.loc[['D']]
+ s.loc[['D']]
except KeyError as e:
- print("KeyError: " + str(e))
+ print("KeyError: " + str(e))
- Assigning values to ``None`` now considers the dtype when choosing an 'empty' value (:issue:`7941`).
@@ -743,7 +744,7 @@ Other notable API changes:
.. ipython:: python
- s = Series([1, 2, 3])
+ s = pd.Series([1, 2, 3])
s.loc[0] = None
s
@@ -754,7 +755,7 @@ Other notable API changes:
.. ipython:: python
- s = Series(["a", "b", "c"])
+ s = pd.Series(["a", "b", "c"])
s.loc[0] = None
s
@@ -764,7 +765,7 @@ Other notable API changes:
.. ipython:: python
- s = Series([1, 2, 3])
+ s = pd.Series([1, 2, 3])
s2 = s
s += 1.5
@@ -816,9 +817,9 @@ Other notable API changes:
.. ipython:: python
- i = date_range('1/1/2011', periods=3, freq='10s', tz = 'US/Eastern')
+ i = pd.date_range('1/1/2011', periods=3, freq='10s', tz='US/Eastern')
i
- df = DataFrame( {'a' : i } )
+ df = pd.DataFrame({'a': i})
df
df.dtypes
@@ -837,7 +838,7 @@ Other notable API changes:
.. code-block:: python
- In [1]: df = DataFrame(np.arange(0,9), columns=['count'])
+ In [1]: df = pd.DataFrame(np.arange(0, 9), columns=['count'])
In [2]: df['group'] = 'b'
@@ -855,8 +856,8 @@ Other notable API changes:
.. ipython:: python
- df = DataFrame([[True, 1],[False, 2]],
- columns=["female","fitness"])
+ df = pd.DataFrame([[True, 1], [False, 2]],
+ columns=["female", "fitness"])
df
df.dtypes
@@ -916,18 +917,18 @@ Deprecations
.. code-block:: python
# +
- Index(['a','b','c']) + Index(['b','c','d'])
+ pd.Index(['a', 'b', 'c']) + pd.Index(['b', 'c', 'd'])
# should be replaced by
- Index(['a','b','c']).union(Index(['b','c','d']))
+ pd.Index(['a', 'b', 'c']).union(pd.Index(['b', 'c', 'd']))
.. code-block:: python
# -
- Index(['a','b','c']) - Index(['b','c','d'])
+ pd.Index(['a', 'b', 'c']) - pd.Index(['b', 'c', 'd'])
# should be replaced by
- Index(['a','b','c']).difference(Index(['b','c','d']))
+ pd.Index(['a', 'b', 'c']).difference(pd.Index(['b', 'c', 'd']))
- The ``infer_types`` argument to :func:`~pandas.read_html` now has no
effect and is deprecated (:issue:`7762`, :issue:`7032`).
@@ -979,10 +980,10 @@ Other:
.. ipython:: python
- df = DataFrame({'catA': ['foo', 'foo', 'bar'] * 8,
- 'catB': ['a', 'b', 'c', 'd'] * 6,
- 'numC': np.arange(24),
- 'numD': np.arange(24.) + .5})
+ df = pd.DataFrame({'catA': ['foo', 'foo', 'bar'] * 8,
+ 'catB': ['a', 'b', 'c', 'd'] * 6,
+ 'numC': np.arange(24),
+ 'numD': np.arange(24.) + .5})
df.describe(include=["object"])
df.describe(include=["number", "object"], exclude=["float"])
@@ -1002,7 +1003,7 @@ Other:
.. ipython:: python
- df = DataFrame({'A': ['a', 'b', 'a'], 'B': ['c', 'c', 'b'],
+ df = pd.DataFrame({'A': ['a', 'b', 'a'], 'B': ['c', 'c', 'b'],
'C': [1, 2, 3]})
pd.get_dummies(df)
@@ -1015,7 +1016,7 @@ Other:
.. ipython:: python
business_dates = date_range(start='4/1/2014', end='6/30/2014', freq='B')
- df = DataFrame(1, index=business_dates, columns=['a', 'b'])
+ df = pd.DataFrame(1, index=business_dates, columns=['a', 'b'])
# get the first, 4th, and last date index for each month
df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
@@ -1025,14 +1026,14 @@ Other:
.. ipython:: python
- idx = pd.period_range('2014-07-01 09:00', periods=5, freq='H')
- idx
- idx + pd.offsets.Hour(2)
- idx + Timedelta('120m')
+ idx = pd.period_range('2014-07-01 09:00', periods=5, freq='H')
+ idx
+ idx + pd.offsets.Hour(2)
+ idx + pd.Timedelta('120m')
- idx = pd.period_range('2014-07', periods=5, freq='M')
- idx
- idx + pd.offsets.MonthEnd(3)
+ idx = pd.period_range('2014-07', periods=5, freq='M')
+ idx
+ idx + pd.offsets.MonthEnd(3)
- Added experimental compatibility with ``openpyxl`` for versions >= 2.0. The ``DataFrame.to_excel``
method ``engine`` keyword now recognizes ``openpyxl1`` and ``openpyxl2``
@@ -1051,18 +1052,19 @@ Other:
.. ipython:: python
- idx = MultiIndex.from_product([['a'], range(3), list("pqr")], names=['foo', 'bar', 'baz'])
+ idx = pd.MultiIndex.from_product([['a'], range(3), list("pqr")],
+ names=['foo', 'bar', 'baz'])
idx.set_names('qux', level=0)
- idx.set_names(['qux','corge'], level=[0,1])
- idx.set_levels(['a','b','c'], level='bar')
- idx.set_levels([['a','b','c'],[1,2,3]], level=[1,2])
+ idx.set_names(['qux', 'corge'], level=[0, 1])
+ idx.set_levels(['a', 'b', 'c'], level='bar')
+ idx.set_levels([['a', 'b', 'c'], [1, 2, 3]], level=[1, 2])
- ``Index.isin`` now supports a ``level`` argument to specify which index level
to use for membership tests (:issue:`7892`, :issue:`7890`)
.. code-block:: ipython
- In [1]: idx = MultiIndex.from_product([[0, 1], ['a', 'b', 'c']])
+ In [1]: idx = pd.MultiIndex.from_product([[0, 1], ['a', 'b', 'c']])
In [2]: idx.values
Out[2]: array([(0, 'a'), (0, 'b'), (0, 'c'), (1, 'a'), (1, 'b'), (1, 'c')], dtype=object)
@@ -1074,7 +1076,7 @@ Other:
.. ipython:: python
- idx = Index([1, 2, 3, 4, 1, 2])
+ idx = pd.Index([1, 2, 3, 4, 1, 2])
idx
idx.duplicated()
idx.drop_duplicates()
diff --git a/doc/source/whatsnew/v0.15.1.rst b/doc/source/whatsnew/v0.15.1.rst
index be7cf04bcdd68..1091944cb056f 100644
--- a/doc/source/whatsnew/v0.15.1.rst
+++ b/doc/source/whatsnew/v0.15.1.rst
@@ -5,11 +5,6 @@ v0.15.1 (November 9, 2014)
{{ header }}
-.. ipython:: python
- :suppress:
-
- from pandas import * # noqa F401, F403
-
This is a minor bug-fix release from 0.15.0 and includes a small number of API changes, several new features,
enhancements, and performance improvements along with a large number of bug fixes. We recommend that all
@@ -28,7 +23,7 @@ API changes
.. ipython:: python
- s = Series(date_range('20130101',periods=5,freq='D'))
+ s = pd.Series(pd.date_range('20130101', periods=5, freq='D'))
s.iloc[2] = np.nan
s
@@ -56,12 +51,12 @@ API changes
.. ipython:: python
- np.random.seed(2718281)
- df = pd.DataFrame(np.random.randint(0, 100, (10, 2)),
- columns=['jim', 'joe'])
- df.head()
+ np.random.seed(2718281)
+ df = pd.DataFrame(np.random.randint(0, 100, (10, 2)),
+ columns=['jim', 'joe'])
+ df.head()
- ts = pd.Series(5 * np.random.randint(0, 3, 10))
+ ts = pd.Series(5 * np.random.randint(0, 3, 10))
previous behavior:
@@ -156,9 +151,9 @@ API changes
In [17]: from pandas.io.data import Options
- In [18]: aapl = Options('aapl','yahoo')
+ In [18]: aapl = Options('aapl', 'yahoo')
- In [19]: aapl.get_call_data().iloc[0:5,0:1]
+ In [19]: aapl.get_call_data().iloc[0:5, 0:1]
Out[19]:
Last
Strike Expiry Type Symbol
@@ -183,7 +178,7 @@ API changes
datetime.date(2016, 1, 15),
datetime.date(2017, 1, 20)]
- In [21]: aapl.get_near_stock_price(expiry=aapl.expiry_dates[0:3]).iloc[0:5,0:1]
+ In [21]: aapl.get_near_stock_price(expiry=aapl.expiry_dates[0:3]).iloc[0:5, 0:1]
Out[21]:
Last
Strike Expiry Type Symbol
@@ -233,7 +228,8 @@ Enhancements
.. ipython:: python
- dfi = DataFrame(1,index=pd.MultiIndex.from_product([['a'],range(1000)]),columns=['A'])
+ dfi = pd.DataFrame(1, index=pd.MultiIndex.from_product([['a'],
+ range(1000)]), columns=['A'])
previous behavior:
diff --git a/doc/source/whatsnew/v0.15.2.rst b/doc/source/whatsnew/v0.15.2.rst
index 437dd3f8d3df6..dabdcd1ab76c3 100644
--- a/doc/source/whatsnew/v0.15.2.rst
+++ b/doc/source/whatsnew/v0.15.2.rst
@@ -5,11 +5,6 @@ v0.15.2 (December 12, 2014)
{{ header }}
-.. ipython:: python
- :suppress:
-
- from pandas import * # noqa F401, F403
-
This is a minor release from 0.15.1 and includes a large number of bug fixes
along with several new features, enhancements, and performance improvements.
@@ -79,7 +74,7 @@ API changes
.. ipython:: python
- data = pd.DataFrame({'x':[1, 2, 3]})
+ data = pd.DataFrame({'x': [1, 2, 3]})
data.y = 2
data['y'] = [2, 4, 6]
data
@@ -154,7 +149,7 @@ Other enhancements:
.. code-block:: python
from sqlalchemy.types import String
- data.to_sql('data_dtype', engine, dtype={'Col_1': String})
+ data.to_sql('data_dtype', engine, dtype={'Col_1': String}) # noqa F821
- ``Series.all`` and ``Series.any`` now support the ``level`` and ``skipna`` parameters (:issue:`8302`):
diff --git a/setup.cfg b/setup.cfg
index 6c076eed580dd..3b7d1da9a2b02 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -46,9 +46,6 @@ ignore = E402, # module level import not at top of file
E711, # comparison to none should be 'if cond is none:'
exclude =
- doc/source/whatsnew/v0.15.0.rst
- doc/source/whatsnew/v0.15.1.rst
- doc/source/whatsnew/v0.15.2.rst
doc/source/basics.rst
doc/source/contributing_docstring.rst
doc/source/enhancingperf.rst
| - [ ] closes #24239 | https://api.github.com/repos/pandas-dev/pandas/pulls/24635 | 2019-01-05T11:10:07Z | 2019-01-05T14:54:55Z | 2019-01-05T14:54:55Z | 2019-01-05T15:03:43Z |
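Beyond flake8, malformed directives such as `.. code-block ::` (a space before the colons) are a common rst pitfall: the directive silently renders as an ordinary comment. A small Python sketch of the kind of pattern check CI performs with grep — the function name and regex are ours, not pandas code:

```python
import re

# a space before '::' turns '.. code-block ::' / '.. ipython ::'
# into an ignored rst comment instead of a directive
BAD_DIRECTIVE = re.compile(r"\.\. +(code-block|ipython) +::")

def find_bad_directives(text):
    """Return 1-based line numbers containing a malformed directive."""
    return [no for no, line in enumerate(text.splitlines(), start=1)
            if BAD_DIRECTIVE.search(line)]
```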
TST: Fixed timezone issues post DatetimeArray refactor | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 7fa386935e3f4..2c8f3ad07b639 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1553,6 +1553,7 @@ Timezones
- Bug in :func:`to_datetime` where ``utc=True`` was not respected when specifying a ``unit`` and ``errors='ignore'`` (:issue:`23758`)
- Bug in :func:`to_datetime` where ``utc=True`` was not respected when passing a :class:`Timestamp` (:issue:`24415`)
- Bug in :meth:`DataFrame.any` returns wrong value when ``axis=1`` and the data is of datetimelike type (:issue:`23070`)
+- Bug in :meth:`DatetimeIndex.to_period` where a timezone aware index was converted to UTC first before creating :class:`PeriodIndex` (:issue:`22905`)
Offsets
^^^^^^^
@@ -1802,6 +1803,9 @@ Reshaping
- Constructing a DataFrame with an index argument that wasn't already an instance of :class:`~pandas.core.Index` was broken (:issue:`22227`).
- Bug in :class:`DataFrame` prevented list subclasses to be used to construction (:issue:`21226`)
- Bug in :func:`DataFrame.unstack` and :func:`DataFrame.pivot_table` returning a misleading error message when the resulting DataFrame has more elements than int32 can handle. Now, the error message is improved, pointing towards the actual problem (:issue:`20601`)
+- Bug in :func:`DataFrame.unstack` where a ``ValueError`` was raised when unstacking timezone aware values (:issue:`18338`)
+- Bug in :func:`DataFrame.stack` where timezone aware values were converted to timezone naive values (:issue:`19420`)
+- Bug in :func:`merge_asof` where a ``TypeError`` was raised when ``by_col`` were timezone aware values (:issue:`21184`)
.. _whatsnew_0240.bug_fixes.sparse:
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index 362650714418f..f2f6944a21e03 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -936,3 +936,36 @@ def test_unstack_fill_frame_object():
index=list('xyz')
)
assert_frame_equal(result, expected)
+
+
+def test_unstack_timezone_aware_values():
+ # GH 18338
+ df = pd.DataFrame({
+ 'timestamp': [
+ pd.Timestamp('2017-08-27 01:00:00.709949+0000', tz='UTC')],
+ 'a': ['a'],
+ 'b': ['b'],
+ 'c': ['c'],
+ }, columns=['timestamp', 'a', 'b', 'c'])
+ result = df.set_index(['a', 'b']).unstack()
+ expected = pd.DataFrame([[pd.Timestamp('2017-08-27 01:00:00.709949+0000',
+ tz='UTC'),
+ 'c']],
+ index=pd.Index(['a'], name='a'),
+ columns=pd.MultiIndex(
+ levels=[['timestamp', 'c'], ['b']],
+ codes=[[0, 1], [0, 0]],
+ names=[None, 'b']))
+ assert_frame_equal(result, expected)
+
+
+def test_stack_timezone_aware_values():
+ # GH 19420
+ ts = pd.date_range(freq="D", start="20180101", end="20180103",
+ tz="America/New_York")
+ df = pd.DataFrame({"A": ts}, index=["a", "b", "c"])
+ result = df.stack()
+ expected = pd.Series(ts,
+ index=pd.MultiIndex(levels=[['a', 'b', 'c'], ['A']],
+ codes=[[0, 1, 2], [0, 0, 0]]))
+ assert_series_equal(result, expected)
diff --git a/pandas/tests/indexes/datetimes/test_astype.py b/pandas/tests/indexes/datetimes/test_astype.py
index c03b8afbe79bf..784d1ca6fb82c 100644
--- a/pandas/tests/indexes/datetimes/test_astype.py
+++ b/pandas/tests/indexes/datetimes/test_astype.py
@@ -293,6 +293,15 @@ def test_to_period_tz(self, tz):
tm.assert_index_equal(result, expected)
+ @pytest.mark.parametrize('tz', ['Etc/GMT-1', 'Etc/GMT+1'])
+ def test_to_period_tz_utc_offset_consistency(self, tz):
+ # GH 22905
+ ts = pd.date_range('1/1/2000', '2/1/2000', tz='Etc/GMT-1')
+ with tm.assert_produces_warning(UserWarning):
+ result = ts.to_period()[0]
+ expected = ts[0].to_period()
+ assert result == expected
+
def test_to_period_nofreq(self):
idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-04'])
with pytest.raises(ValueError):
diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py
index 1483654daa99e..1d1d7d48adaab 100644
--- a/pandas/tests/reshape/merge/test_merge_asof.py
+++ b/pandas/tests/reshape/merge/test_merge_asof.py
@@ -1022,3 +1022,17 @@ def test_merge_on_nans(self, func, side):
merge_asof(df_null, df, on='a')
else:
merge_asof(df, df_null, on='a')
+
+ def test_merge_by_col_tz_aware(self):
+ # GH 21184
+ left = pd.DataFrame(
+ {'by_col': pd.DatetimeIndex(['2018-01-01']).tz_localize('UTC'),
+ 'on_col': [2], 'values': ['a']})
+ right = pd.DataFrame(
+ {'by_col': pd.DatetimeIndex(['2018-01-01']).tz_localize('UTC'),
+ 'on_col': [1], 'values': ['b']})
+ result = pd.merge_asof(left, right, by='by_col', on='on_col')
+ expected = pd.DataFrame([
+ [pd.Timestamp('2018-01-01', tz='UTC'), 2, 'a', 'b']
+ ], columns=['by_col', 'on_col', 'values_x', 'values_y'])
+ assert_frame_equal(result, expected)
| - [x] closes #22905
- [x] closes #18338
- [x] closes #19420
- [x] closes #21184
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24634 | 2019-01-05T08:27:33Z | 2019-01-06T15:53:49Z | 2019-01-06T15:53:48Z | 2019-01-06T18:04:29Z |
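The `to_period` test added above asserts that converting a tz-aware index as a whole agrees with converting each element. A runnable sketch of that invariant — assumes pandas >= 0.24 (where the fix landed); the `UserWarning` about dropping timezone information is expected and suppressed:

```python
import warnings
import pandas as pd

idx = pd.date_range('2000-01-01', periods=3, freq='D', tz='Etc/GMT-1')
with warnings.catch_warnings():
    # to_period warns that timezone information will be lost
    warnings.simplefilter('ignore')
    vectorized = idx.to_period('D')
    elementwise = [ts.to_period('D') for ts in idx]
# post-fix, both paths use local wall time rather than converting
# to UTC first, so the periods match element-wise
```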
DOC: update applymap docstring in pandas/core/frame.py | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index a50def7357826..9c9720522189d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6501,6 +6501,14 @@ def applymap(self, func):
--------
DataFrame.apply : Apply a function along input axis of DataFrame.
+ Notes
+ -----
+ In the current implementation applymap calls `func` twice on the
+ first column/row to decide whether it can take a fast or slow
+ code path. This can lead to unexpected behavior if `func` has
+ side-effects, as they will take effect twice for the first
+ column/row.
+
Examples
--------
>>> df = pd.DataFrame([[1, 2.12], [3.356, 4.567]])
| - [x] closes #24612
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/24633 | 2019-01-05T04:02:53Z | 2019-01-05T14:54:34Z | 2019-01-05T14:54:34Z | 2019-01-05T15:37:42Z |
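The caveat documented above can be observed directly by counting invocations: `func` may run more times than there are elements. A minimal sketch — the exact number of extra calls is an implementation detail, so only a lower bound is safe to assert; in pandas 2.1+ `applymap` is deprecated in favour of `DataFrame.map`, which the sketch prefers when available:

```python
import pandas as pd

calls = []

def trace(x):
    # record every invocation so we can count them afterwards
    calls.append(x)
    return x

df = pd.DataFrame([[1, 2.12], [3.356, 4.567]])
apply_elementwise = df.map if hasattr(df, 'map') else df.applymap
apply_elementwise(trace)
# len(calls) can exceed df.size because the first column/row is
# evaluated twice to choose a fast or slow code path -- hence the
# warning about side-effecting functions in the docstring
```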
CI: unify environment creation | diff --git a/.travis.yml b/.travis.yml
index 529f1221899dc..022e11b7db950 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -82,15 +82,10 @@ before_install:
install:
- echo "install start"
- ci/prep_cython_cache.sh
- - ci/install_travis.sh
+ - ci/setup_env.sh
- ci/submit_cython_cache.sh
- echo "install done"
-before_script:
- - ci/install_db_travis.sh
- - export DISPLAY=":99.0"
- - ci/before_script_travis.sh
-
script:
- echo "script start"
- source activate pandas-dev
diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 9b1b17b453af3..eee38dadfab90 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -39,9 +39,8 @@ jobs:
- script: |
export PATH=$HOME/miniconda3/bin:$PATH
sudo apt-get install -y libc6-dev-i386
- ci/incremental/install_miniconda.sh
- ci/incremental/setup_conda_environment.sh
- displayName: 'Set up environment'
+ ci/setup_env.sh
+ displayName: 'Setup environment and build pandas'
condition: true
# Do not require pandas
@@ -59,13 +58,6 @@ jobs:
displayName: 'Dependencies consistency'
condition: true
- - script: |
- export PATH=$HOME/miniconda3/bin:$PATH
- source activate pandas-dev
- ci/incremental/build.sh
- displayName: 'Build'
- condition: true
-
# Require pandas
- script: |
export PATH=$HOME/miniconda3/bin:$PATH
diff --git a/ci/azure/posix.yml b/ci/azure/posix.yml
index 7119054cf2f53..f53e284c221c6 100644
--- a/ci/azure/posix.yml
+++ b/ci/azure/posix.yml
@@ -50,17 +50,9 @@ jobs:
steps:
- script: |
if [ "$(uname)" == "Linux" ]; then sudo apt-get install -y libc6-dev-i386 $EXTRA_APT; fi
- echo "Installing Miniconda"
- ci/incremental/install_miniconda.sh
- export PATH=$HOME/miniconda3/bin:$PATH
- echo "Setting up Conda environment"
- ci/incremental/setup_conda_environment.sh
- displayName: 'Before Install'
- - script: |
- export PATH=$HOME/miniconda3/bin:$PATH
- source activate pandas-dev
- ci/incremental/build.sh
- displayName: 'Build'
+ echo "Creating Environment"
+ ci/setup_env.sh
+ displayName: 'Setup environment and build pandas'
- script: |
export PATH=$HOME/miniconda3/bin:$PATH
source activate pandas-dev
diff --git a/ci/before_script_travis.sh b/ci/before_script_travis.sh
deleted file mode 100755
index 0b3939b1906a2..0000000000000
--- a/ci/before_script_travis.sh
+++ /dev/null
@@ -1,11 +0,0 @@
-#!/bin/bash
-
-echo "inside $0"
-
-if [ "${TRAVIS_OS_NAME}" == "linux" ]; then
- sh -e /etc/init.d/xvfb start
- sleep 3
-fi
-
-# Never fail because bad things happened here.
-true
diff --git a/ci/incremental/build.sh b/ci/incremental/build.sh
deleted file mode 100755
index 05648037935a3..0000000000000
--- a/ci/incremental/build.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/bash
-
-# Make sure any error below is reported as such
-set -v -e
-
-echo "[building extensions]"
-python setup.py build_ext -q --inplace
-python -m pip install -e .
-
-echo
-echo "[show environment]"
-conda list
-
-echo
-echo "[done]"
-exit 0
diff --git a/ci/incremental/install_miniconda.sh b/ci/incremental/install_miniconda.sh
deleted file mode 100755
index a47dfdb324b34..0000000000000
--- a/ci/incremental/install_miniconda.sh
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/bin/bash
-
-set -v -e
-
-# Install Miniconda
-unamestr=`uname`
-if [[ "$unamestr" == 'Linux' ]]; then
- if [[ "$BITS32" == "yes" ]]; then
- wget -q https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86.sh -O miniconda.sh
- else
- wget -q https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
- fi
-elif [[ "$unamestr" == 'Darwin' ]]; then
- wget -q https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -O miniconda.sh
-else
- echo Error
-fi
-chmod +x miniconda.sh
-./miniconda.sh -b
diff --git a/ci/incremental/setup_conda_environment.sh b/ci/incremental/setup_conda_environment.sh
deleted file mode 100755
index f174c17a614d8..0000000000000
--- a/ci/incremental/setup_conda_environment.sh
+++ /dev/null
@@ -1,52 +0,0 @@
-#!/bin/bash
-
-set -v -e
-
-CONDA_INSTALL="conda install -q -y"
-PIP_INSTALL="pip install -q"
-
-
-# Deactivate any environment
-source deactivate
-# Display root environment (for debugging)
-conda list
-# Clean up any left-over from a previous build
-# (note workaround for https://github.com/conda/conda/issues/2679:
-# `conda env remove` issue)
-conda remove --all -q -y -n pandas-dev
-
-echo
-echo "[create env]"
-time conda env create -q --file="${ENV_FILE}" || exit 1
-
-set +v
-source activate pandas-dev
-set -v
-
-# remove any installed pandas package
-# w/o removing anything else
-echo
-echo "[removing installed pandas]"
-conda remove pandas -y --force || true
-pip uninstall -y pandas || true
-
-echo
-echo "[no installed pandas]"
-conda list pandas
-
-if [ -n "$LOCALE_OVERRIDE" ]; then
- sudo locale-gen "$LOCALE_OVERRIDE"
-fi
-
-# # Install the compiler toolchain
-# if [[ $(uname) == Linux ]]; then
-# if [[ "$CONDA_SUBDIR" == "linux-32" || "$BITS32" == "yes" ]] ; then
-# $CONDA_INSTALL gcc_linux-32 gxx_linux-32
-# else
-# $CONDA_INSTALL gcc_linux-64 gxx_linux-64
-# fi
-# elif [[ $(uname) == Darwin ]]; then
-# $CONDA_INSTALL clang_osx-64 clangxx_osx-64
-# # Install llvm-openmp and intel-openmp on OSX too
-# $CONDA_INSTALL llvm-openmp intel-openmp
-# fi
diff --git a/ci/install_db_travis.sh b/ci/install_db_travis.sh
deleted file mode 100755
index e4e6d7a5a9b85..0000000000000
--- a/ci/install_db_travis.sh
+++ /dev/null
@@ -1,13 +0,0 @@
-#!/bin/bash
-
-if [ "${TRAVIS_OS_NAME}" != "linux" ]; then
- echo "not using dbs on non-linux"
- exit 0
-fi
-
-echo "installing dbs"
-mysql -e 'create database pandas_nosetest;'
-psql -c 'create database pandas_nosetest;' -U postgres
-
-echo "done"
-exit 0
diff --git a/ci/install_travis.sh b/ci/install_travis.sh
deleted file mode 100755
index d1a940f119228..0000000000000
--- a/ci/install_travis.sh
+++ /dev/null
@@ -1,109 +0,0 @@
-#!/bin/bash
-
-# edit the locale file if needed
-function edit_init()
-{
- if [ -n "$LOCALE_OVERRIDE" ]; then
- echo "[Adding locale to the first line of pandas/__init__.py]"
- rm -f pandas/__init__.pyc
- sedc="3iimport locale\nlocale.setlocale(locale.LC_ALL, '$LOCALE_OVERRIDE')\n"
- sed -i "$sedc" pandas/__init__.py
- echo "[head -4 pandas/__init__.py]"
- head -4 pandas/__init__.py
- echo
- fi
-}
-
-echo
-echo "[install_travis]"
-edit_init
-
-home_dir=$(pwd)
-echo
-echo "[home_dir]: $home_dir"
-
-# install miniconda
-MINICONDA_DIR="$HOME/miniconda3"
-
-echo
-echo "[Using clean Miniconda install]"
-
-if [ -d "$MINICONDA_DIR" ]; then
- rm -rf "$MINICONDA_DIR"
-fi
-
-# install miniconda
-if [ "${TRAVIS_OS_NAME}" == "osx" ]; then
- time wget http://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -q -O miniconda.sh || exit 1
-else
- time wget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -q -O miniconda.sh || exit 1
-fi
-time bash miniconda.sh -b -p "$MINICONDA_DIR" || exit 1
-
-echo
-echo "[show conda]"
-which conda
-
-echo
-echo "[update conda]"
-conda config --set ssl_verify false || exit 1
-conda config --set quiet true --set always_yes true --set changeps1 false || exit 1
-conda update -q conda
-
-# Useful for debugging any issues with conda
-conda info -a || exit 1
-
-# set the compiler cache to work
-echo
-if [ -z "$NOCACHE" ] && [ "${TRAVIS_OS_NAME}" == "linux" ]; then
- echo "[Using ccache]"
- export PATH=/usr/lib/ccache:/usr/lib64/ccache:$PATH
- gcc=$(which gcc)
- echo "[gcc]: $gcc"
- ccache=$(which ccache)
- echo "[ccache]: $ccache"
- export CC='ccache gcc'
-elif [ -z "$NOCACHE" ] && [ "${TRAVIS_OS_NAME}" == "osx" ]; then
- echo "[Install ccache]"
- brew install ccache > /dev/null 2>&1
- echo "[Using ccache]"
- export PATH=/usr/local/opt/ccache/libexec:$PATH
- gcc=$(which gcc)
- echo "[gcc]: $gcc"
- ccache=$(which ccache)
- echo "[ccache]: $ccache"
-else
- echo "[Not using ccache]"
-fi
-
-echo
-echo "[create env]"
-
-# create our environment
-time conda env create -q --file="${ENV_FILE}" || exit 1
-
-source activate pandas-dev
-
-# remove any installed pandas package
-# w/o removing anything else
-echo
-echo "[removing installed pandas]"
-conda remove pandas -y --force
-pip uninstall -y pandas
-
-echo
-echo "[no installed pandas]"
-conda list pandas
-pip list --format columns |grep pandas
-
-# build and install
-echo "[running setup.py develop]"
-python setup.py develop || exit 1
-
-echo
-echo "[show environment]"
-conda list
-
-echo
-echo "[done]"
-exit 0
diff --git a/ci/setup_env.sh b/ci/setup_env.sh
new file mode 100755
index 0000000000000..414a5c8705ee9
--- /dev/null
+++ b/ci/setup_env.sh
@@ -0,0 +1,135 @@
+#!/bin/bash -e
+
+
+# edit the locale file if needed
+if [ -n "$LOCALE_OVERRIDE" ]; then
+ echo "Adding locale to the first line of pandas/__init__.py"
+ rm -f pandas/__init__.pyc
+ SEDC="3iimport locale\nlocale.setlocale(locale.LC_ALL, '$LOCALE_OVERRIDE')\n"
+ sed -i "$SEDC" pandas/__init__.py
+ echo "[head -4 pandas/__init__.py]"
+ head -4 pandas/__init__.py
+ echo
+ sudo locale-gen "$LOCALE_OVERRIDE"
+fi
+
+MINICONDA_DIR="$HOME/miniconda3"
+
+
+if [ -d "$MINICONDA_DIR" ]; then
+ echo
+ echo "rm -rf "$MINICONDA_DIR""
+ rm -rf "$MINICONDA_DIR"
+fi
+
+echo "Install Miniconda"
+UNAME_OS=$(uname)
+if [[ "$UNAME_OS" == 'Linux' ]]; then
+ if [[ "$BITS32" == "yes" ]]; then
+ CONDA_OS="Linux-x86"
+ else
+ CONDA_OS="Linux-x86_64"
+ fi
+elif [[ "$UNAME_OS" == 'Darwin' ]]; then
+ CONDA_OS="MacOSX-x86_64"
+else
+ echo "OS $UNAME_OS not supported"
+ exit 1
+fi
+
+wget -q "https://repo.continuum.io/miniconda/Miniconda3-latest-$CONDA_OS.sh" -O miniconda.sh
+chmod +x miniconda.sh
+./miniconda.sh -b
+
+export PATH=$MINICONDA_DIR/bin:$PATH
+
+echo
+echo "which conda"
+which conda
+
+echo
+echo "update conda"
+conda config --set ssl_verify false
+conda config --set quiet true --set always_yes true --set changeps1 false
+conda update -n base conda
+
+echo "conda info -a"
+conda info -a
+
+echo
+echo "set the compiler cache to work"
+if [ -z "$NOCACHE" ] && [ "${TRAVIS_OS_NAME}" == "linux" ]; then
+ echo "Using ccache"
+ export PATH=/usr/lib/ccache:/usr/lib64/ccache:$PATH
+ GCC=$(which gcc)
+ echo "gcc: $GCC"
+ CCACHE=$(which ccache)
+ echo "ccache: $CCACHE"
+ export CC='ccache gcc'
+elif [ -z "$NOCACHE" ] && [ "${TRAVIS_OS_NAME}" == "osx" ]; then
+ echo "Install ccache"
+ brew install ccache > /dev/null 2>&1
+ echo "Using ccache"
+ export PATH=/usr/local/opt/ccache/libexec:$PATH
+ gcc=$(which gcc)
+ echo "gcc: $gcc"
+ CCACHE=$(which ccache)
+ echo "ccache: $CCACHE"
+else
+ echo "Not using ccache"
+fi
+
+echo "source deactivate"
+source deactivate
+
+echo "conda list (root environment)"
+conda list
+
+# Clean up any left-over from a previous build
+# (note workaround for https://github.com/conda/conda/issues/2679:
+# `conda env remove` issue)
+conda remove --all -q -y -n pandas-dev
+
+echo
+echo "conda env create -q --file=${ENV_FILE}"
+time conda env create -q --file="${ENV_FILE}"
+
+echo "activate pandas-dev"
+source activate pandas-dev
+
+echo
+echo "remove any installed pandas package"
+echo "w/o removing anything else"
+conda remove pandas -y --force || true
+pip uninstall -y pandas || true
+
+echo
+echo "conda list pandas"
+conda list pandas
+
+# Make sure any error below is reported as such
+
+echo "Build extensions and install pandas"
+python setup.py build_ext -q --inplace
+python -m pip install -e .
+
+echo
+echo "conda list"
+conda list
+
+# Install DB for Linux
+export DISPLAY=":99."
+if [ ${TRAVIS_OS_NAME} == "linux" ]; then
+ echo "installing dbs"
+ mysql -e 'create database pandas_nosetest;'
+ psql -c 'create database pandas_nosetest;' -U postgres
+
+ echo
+ echo "sh -e /etc/init.d/xvfb start"
+ sh -e /etc/init.d/xvfb start
+ sleep 3
+else
+ echo "not using dbs on non-linux"
+fi
+
+echo "done"
| closes #24498
closes #23923
| https://api.github.com/repos/pandas-dev/pandas/pulls/24632 | 2019-01-05T03:58:29Z | 2019-04-05T00:42:58Z | 2019-04-05T00:42:57Z | 2019-04-15T18:30:45Z |
DOC: Improve DataFrame.align docstring | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d0555bd2e44b1..a8b90e413ebad 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8345,8 +8345,17 @@ def ranker(data):
fill_value : scalar, default np.NaN
Value to use for missing values. Defaults to NaN, but can be any
"compatible" value
- method : str, default None
+ method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
+ Method to use for filling holes in reindexed Series
+ pad / ffill: propagate last valid observation forward to next valid
+ backfill / bfill: use NEXT valid observation to fill gap
limit : int, default None
+ If method is specified, this is the maximum number of consecutive
+ NaN values to forward/backward fill. In other words, if there is
+ a gap with more than this number of consecutive NaNs, it will only
+ be partially filled. If method is not specified, this is the
+ maximum number of entries along the entire axis where NaNs will be
+ filled. Must be greater than 0 if not None.
fill_axis : %(axes_single_arg)s, default 0
Filling axis, method and limit
broadcast_axis : %(axes_single_arg)s, default None
| The docstring for `DataFrame.align` currently has no descriptions for the `method` or `limit` arguments (which both refer to the same arguments of `fillna`), so this PR adds those descriptions.
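For reference, a quick sketch of what the two arguments do in practice (toy data, not taken from the PR):

```python
import numpy as np
import pandas as pd

left = pd.DataFrame({"a": [1.0, 2.0, 4.0]}, index=[0, 1, 2])
right = pd.DataFrame({"a": [3.0]}, index=[1])

# Outer-join alignment reindexes `right` onto the union index [0, 1, 2],
# introducing NaNs; method='ffill' then propagates the last valid value
# forward (limit would cap how many consecutive NaNs get filled).
l2, r2 = left.align(right, join="outer", method="ffill")
# r2["a"] is [NaN, 3.0, 3.0]: index 0 has nothing before it to fill from,
# index 2 is forward-filled from index 1.
```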
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.align.html | https://api.github.com/repos/pandas-dev/pandas/pulls/24631 | 2019-01-05T03:37:42Z | 2019-01-05T14:54:06Z | 2019-01-05T14:54:06Z | 2019-01-05T15:08:15Z |
Have Categorical ops defer to DataFrame; broken off of #24282 | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 3be87c4cabaf0..09dd857182592 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1353,6 +1353,7 @@ Categorical
- Bug in many methods of the ``.str``-accessor, which always failed on calling the ``CategoricalIndex.str`` constructor (:issue:`23555`, :issue:`23556`)
- Bug in :meth:`Series.where` losing the categorical dtype for categorical data (:issue:`24077`)
- Bug in :meth:`Categorical.apply` where ``NaN`` values could be handled unpredictably. They now remain unchanged (:issue:`24241`)
+- Bug in :class:`Categorical` comparison methods incorrectly raising ``ValueError`` when operating against a :class:`DataFrame` (:issue:`24630`)
Datetimelike
^^^^^^^^^^^^
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 969add2d3efef..ceab3d0f53a3b 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -23,7 +23,7 @@
is_timedelta64_dtype)
from pandas.core.dtypes.dtypes import CategoricalDtype
from pandas.core.dtypes.generic import (
- ABCCategoricalIndex, ABCIndexClass, ABCSeries)
+ ABCCategoricalIndex, ABCDataFrame, ABCIndexClass, ABCSeries)
from pandas.core.dtypes.inference import is_hashable
from pandas.core.dtypes.missing import isna, notna
@@ -59,9 +59,11 @@ def f(self, other):
# results depending whether categories are the same or not is kind of
# insane, so be a bit stricter here and use the python3 idea of
# comparing only things of equal type.
- if isinstance(other, ABCSeries):
+ if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
return NotImplemented
+ other = lib.item_from_zerodim(other)
+
if not self.ordered:
if op in ['__lt__', '__gt__', '__le__', '__ge__']:
raise TypeError("Unordered Categoricals can only compare "
@@ -105,7 +107,6 @@ def f(self, other):
#
# With cat[0], for example, being ``np.int64(1)`` by the time it gets
# into this function would become ``np.array(1)``.
- other = lib.item_from_zerodim(other)
if is_scalar(other):
if other in self.categories:
i = self.categories.get_loc(other)
diff --git a/pandas/tests/arrays/categorical/test_operators.py b/pandas/tests/arrays/categorical/test_operators.py
index 9304df58bba95..b2965bbcc456a 100644
--- a/pandas/tests/arrays/categorical/test_operators.py
+++ b/pandas/tests/arrays/categorical/test_operators.py
@@ -1,4 +1,5 @@
# -*- coding: utf-8 -*-
+import operator
import numpy as np
import pytest
@@ -113,9 +114,34 @@ def test_comparisons(self):
res = cat_rev > "b"
tm.assert_numpy_array_equal(res, exp)
+ # check that zero-dim array gets unboxed
+ res = cat_rev > np.array("b")
+ tm.assert_numpy_array_equal(res, exp)
+
class TestCategoricalOps(object):
+ def test_compare_frame(self):
+ # GH#24282 check that Categorical.__cmp__(DataFrame) defers to frame
+ data = ["a", "b", 2, "a"]
+ cat = Categorical(data)
+
+ df = DataFrame(cat)
+
+ for op in [operator.eq, operator.ne, operator.ge,
+ operator.gt, operator.le, operator.lt]:
+ with pytest.raises(ValueError):
+ # alignment raises unless we transpose
+ op(cat, df)
+
+ result = cat == df.T
+ expected = DataFrame([[True, True, True, True]])
+ tm.assert_frame_equal(result, expected)
+
+ result = cat[::-1] != df.T
+ expected = DataFrame([[False, True, True, False]])
+ tm.assert_frame_equal(result, expected)
+
def test_datetime_categorical_comparison(self):
dt_cat = Categorical(date_range('2014-01-01', periods=3), ordered=True)
tm.assert_numpy_array_equal(dt_cat > dt_cat[0],
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
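The deferral mechanism the patch relies on, reduced to toy classes (the names here are invented for illustration, not pandas internals): returning `NotImplemented` from a comparison makes Python retry the operation on the other operand, letting the container handle alignment itself.

```python
class ToyCategorical:
    def __init__(self, data):
        self.data = list(data)

    def __eq__(self, other):
        if isinstance(other, ToyFrame):
            # Defer: Python will now call ToyFrame.__eq__(self) instead.
            return NotImplemented
        return [x == other for x in self.data]


class ToyFrame:
    def __eq__(self, other):
        # In real pandas this is where DataFrame does its own alignment.
        return "ToyFrame handled the comparison"


assert (ToyCategorical(["a", "b"]) == "a") == [True, False]
assert (ToyCategorical(["a", "b"]) == ToyFrame()) == "ToyFrame handled the comparison"
```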
| https://api.github.com/repos/pandas-dev/pandas/pulls/24630 | 2019-01-04T23:54:14Z | 2019-01-05T14:53:45Z | 2019-01-05T14:53:45Z | 2019-01-05T15:38:52Z |
CLN: replace lambdas with named functions so they are labeled in asv | diff --git a/asv_bench/benchmarks/ctors.py b/asv_bench/benchmarks/ctors.py
index 198ed1c90a2e9..7c78fe7e7a177 100644
--- a/asv_bench/benchmarks/ctors.py
+++ b/asv_bench/benchmarks/ctors.py
@@ -3,17 +3,55 @@
from pandas import Series, Index, DatetimeIndex, Timestamp, MultiIndex
+def no_change(arr):
+ return arr
+
+
+def list_of_str(arr):
+ return list(arr.astype(str))
+
+
+def gen_of_str(arr):
+ return (x for x in arr.astype(str))
+
+
+def arr_dict(arr):
+ return dict(zip(range(len(arr)), arr))
+
+
+def list_of_tuples(arr):
+ return [(i, -i) for i in arr]
+
+
+def gen_of_tuples(arr):
+ return ((i, -i) for i in arr)
+
+
+def list_of_lists(arr):
+ return [[i, -i] for i in arr]
+
+
+def list_of_tuples_with_none(arr):
+ return [(i, -i) for i in arr][:-1] + [None]
+
+
+def list_of_lists_with_none(arr):
+ return [[i, -i] for i in arr][:-1] + [None]
+
+
class SeriesConstructors(object):
param_names = ["data_fmt", "with_index"]
- params = [[lambda x: x,
+ params = [[no_change,
list,
- lambda arr: list(arr.astype(str)),
- lambda arr: dict(zip(range(len(arr)), arr)),
- lambda arr: [(i, -i) for i in arr],
- lambda arr: [[i, -i] for i in arr],
- lambda arr: ([(i, -i) for i in arr][:-1] + [None]),
- lambda arr: ([[i, -i] for i in arr][:-1] + [None])],
+ list_of_str,
+ gen_of_str,
+ arr_dict,
+ list_of_tuples,
+ gen_of_tuples,
+ list_of_lists,
+ list_of_tuples_with_none,
+ list_of_lists_with_none],
[False, True]]
def setup(self, data_fmt, with_index):
| This complements a PR in `asv` (https://github.com/airspeed-velocity/asv/pull/771), but would be worthwhile on its own.
`asv` compares benchmarks based on the `repr()` of the parameters, which causes issues when the parameter is an object that doesn't override the default `repr()` (such as functions, especially lambda functions):
```
Benchmarks that have stayed the same:
before after ratio
[24ab22f7] [d7cef344]
n/a 997±3μs n/a ctors.SeriesConstructors.time_series_constructor(<function SeriesConstructors.<lambda> at 0x7f0deb0800d0>, False)
n/a 1.05±0ms n/a ctors.SeriesConstructors.time_series_constructor(<function SeriesConstructors.<lambda> at 0x7f0deb0800d0>, True)
1.04±0.01ms n/a n/a ctors.SeriesConstructors.time_series_constructor(<function SeriesConstructors.<lambda> at 0x7fd9d0bbc598>, False)
1.06±0.02ms n/a n/a ctors.SeriesConstructors.time_series_constructor(<function SeriesConstructors.<lambda> at 0x7fd9d0bbc598>, True)
```
As the memory address component of the name changes between runs, `asv` fails to line up the comparisons. This has to be fixed upstream.
However, passing `lambda` functions is particularly unhelpful, as we have no idea which of the six such objects are represented in the example above.
This PR simply gives each function a descriptive name:
```
asv continuous -b SeriesConstructors v0.20.0..HEAD
[...]
[ 50.00%] · For pandas commit 20c2a780 <asv_change> (round 2/2):
[ 50.00%] ·· Benchmarking conda-py3.6-Cython-matplotlib-numexpr-numpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt
[ 75.00%] ··· ctors.SeriesConstructors.time_series_constructor ok
[ 75.00%] ··· ======================================================= ============= =============
-- with_index
------------------------------------------------------- ---------------------------
data_fmt False True
======================================================= ============= =============
<function no_change at 0x7efcd7b67c80> 74.0±0.4μs 131±3μs
list 1.15±0.03ms 1.20±0.01ms
<function list_of_str at 0x7efcd7b67d08> 688±9μs 741±6μs
<function arr_dict at 0x7efcd7b67b70> 4.49±0.01ms 4.71±0.02ms
<function list_of_tuples at 0x7efcd7b679d8> 1.02±0.01ms 1.10±0.01ms
<function list_of_lists at 0x7efcd7b67a60> 1.04±0.03ms 1.09±0.04ms
<function list_of_tuples_with_none at 0x7efcd7b67ae8> 1.02±0.02ms 1.08±0.02ms
<function list_of_lists_with_none at 0x7efcd7b676a8> 1.05±0.01ms 1.10±0.01ms
======================================================= ============= =============
[ 75.00%] · For pandas commit 91753873 <maybe_convert_objects_int_overflow_fix~1> (round 2/2):
[ 75.00%] ·· Building for conda-py3.6-Cython-matplotlib-numexpr-numpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt.....
[ 75.00%] ·· Benchmarking conda-py3.6-Cython-matplotlib-numexpr-numpy-openpyxl-pytables-pytest-scipy-sqlalchemy-xlrd-xlsxwriter-xlwt
[100.00%] ··· ctors.SeriesConstructors.time_series_constructor ok
[100.00%] ··· ======================================================= ============= =============
-- with_index
------------------------------------------------------- ---------------------------
data_fmt False True
======================================================= ============= =============
<function no_change at 0x7efcd7b67c80> 72.9±0.9μs 126±1μs
list 1.10±0ms 1.16±0.02ms
<function list_of_str at 0x7efcd7b67d08> 699±10μs 760±3μs
<function arr_dict at 0x7efcd7b67b70> 4.54±0.08ms 4.71±0.04ms
<function list_of_tuples at 0x7efcd7b679d8> 1.01±0.01ms 1.08±0.02ms
<function list_of_lists at 0x7efcd7b67a60> 1.01±0.01ms 1.07±0.01ms
<function list_of_tuples_with_none at 0x7efcd7b67ae8> 1.02±0.01ms 1.09±0.02ms
<function list_of_lists_with_none at 0x7efcd7b676a8> 1.05±0ms 1.08±0.01ms
======================================================= ============= =============
BENCHMARKS NOT SIGNIFICANTLY CHANGED.
```
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
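The reason named functions are identifiable where lambdas are not can be seen directly (a standalone sketch, not benchmark code): every lambda reports the same `__name__`, so nothing in its `repr()` distinguishes it across runs except the memory address.

```python
def no_change(arr):
    return arr

anonymous = lambda arr: arr

# A named function carries its identity in both __name__ and repr();
# a lambda only shows the generic '<lambda>' plus an unstable address.
assert no_change.__name__ == "no_change"
assert anonymous.__name__ == "<lambda>"
assert "no_change" in repr(no_change)
assert "<lambda>" in repr(anonymous)
```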
| https://api.github.com/repos/pandas-dev/pandas/pulls/24629 | 2019-01-04T22:37:20Z | 2019-01-05T14:53:23Z | 2019-01-05T14:53:22Z | 2019-01-05T14:53:25Z |
catch complex nan in util.is_nan, de-dup+optimize libmissing, tests | diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx
index e922a5d1c3b27..229edbac4992d 100644
--- a/pandas/_libs/missing.pyx
+++ b/pandas/_libs/missing.pyx
@@ -12,7 +12,9 @@ cimport pandas._libs.util as util
from pandas._libs.tslibs.np_datetime cimport (
get_timedelta64_value, get_datetime64_value)
-from pandas._libs.tslibs.nattype cimport checknull_with_nat, c_NaT
+from pandas._libs.tslibs.nattype cimport (
+ checknull_with_nat, c_NaT as NaT, is_null_datetimelike)
+
cdef float64_t INF = <float64_t>np.inf
cdef float64_t NEGINF = -INF
@@ -20,25 +22,6 @@ cdef float64_t NEGINF = -INF
cdef int64_t NPY_NAT = util.get_nat()
-cdef inline bint _check_all_nulls(object val):
- """ utility to check if a value is any type of null """
- res: bint
-
- if isinstance(val, (float, complex)):
- res = val != val
- elif val is c_NaT:
- res = 1
- elif val is None:
- res = 1
- elif util.is_datetime64_object(val):
- res = get_datetime64_value(val) == NPY_NAT
- elif util.is_timedelta64_object(val):
- res = get_timedelta64_value(val) == NPY_NAT
- else:
- res = 0
- return res
-
-
cpdef bint checknull(object val):
"""
Return boolean describing of the input is NA-like, defined here as any
@@ -62,18 +45,7 @@ cpdef bint checknull(object val):
The difference between `checknull` and `checknull_old` is that `checknull`
does *not* consider INF or NEGINF to be NA.
"""
- if util.is_float_object(val) or util.is_complex_object(val):
- return val != val # and val != INF and val != NEGINF
- elif util.is_datetime64_object(val):
- return get_datetime64_value(val) == NPY_NAT
- elif val is c_NaT:
- return True
- elif util.is_timedelta64_object(val):
- return get_timedelta64_value(val) == NPY_NAT
- elif util.is_array(val):
- return False
- else:
- return val is None or util.is_nan(val)
+ return is_null_datetimelike(val, inat_is_null=False)
cpdef bint checknull_old(object val):
@@ -101,18 +73,11 @@ cpdef bint checknull_old(object val):
The difference between `checknull` and `checknull_old` is that `checknull`
does *not* consider INF or NEGINF to be NA.
"""
- if util.is_float_object(val) or util.is_complex_object(val):
- return val != val or val == INF or val == NEGINF
- elif util.is_datetime64_object(val):
- return get_datetime64_value(val) == NPY_NAT
- elif val is c_NaT:
+ if checknull(val):
return True
- elif util.is_timedelta64_object(val):
- return get_timedelta64_value(val) == NPY_NAT
- elif util.is_array(val):
- return False
- else:
- return val is None or util.is_nan(val)
+ elif util.is_float_object(val) or util.is_complex_object(val):
+ return val == INF or val == NEGINF
+ return False
cdef inline bint _check_none_nan_inf_neginf(object val):
@@ -128,7 +93,7 @@ cdef inline bint _check_none_nan_inf_neginf(object val):
cpdef ndarray[uint8_t] isnaobj(ndarray arr):
"""
Return boolean mask denoting which elements of a 1-D array are na-like,
- according to the criteria defined in `_check_all_nulls`:
+ according to the criteria defined in `checknull`:
- None
- nan
- NaT
@@ -154,7 +119,7 @@ cpdef ndarray[uint8_t] isnaobj(ndarray arr):
result = np.empty(n, dtype=np.uint8)
for i in range(n):
val = arr[i]
- result[i] = _check_all_nulls(val)
+ result[i] = checknull(val)
return result.view(np.bool_)
@@ -189,7 +154,7 @@ def isnaobj_old(ndarray arr):
result = np.zeros(n, dtype=np.uint8)
for i in range(n):
val = arr[i]
- result[i] = val is c_NaT or _check_none_nan_inf_neginf(val)
+ result[i] = val is NaT or _check_none_nan_inf_neginf(val)
return result.view(np.bool_)
@@ -299,7 +264,7 @@ cdef inline bint is_null_datetime64(v):
if checknull_with_nat(v):
return True
elif util.is_datetime64_object(v):
- return v.view('int64') == NPY_NAT
+ return get_datetime64_value(v) == NPY_NAT
return False
@@ -309,7 +274,7 @@ cdef inline bint is_null_timedelta64(v):
if checknull_with_nat(v):
return True
elif util.is_timedelta64_object(v):
- return v.view('int64') == NPY_NAT
+ return get_timedelta64_value(v) == NPY_NAT
return False
diff --git a/pandas/_libs/tslibs/nattype.pxd b/pandas/_libs/tslibs/nattype.pxd
index ee8d5ca3d861c..dae5bdc3f93b1 100644
--- a/pandas/_libs/tslibs/nattype.pxd
+++ b/pandas/_libs/tslibs/nattype.pxd
@@ -17,4 +17,4 @@ cdef _NaT c_NaT
cdef bint checknull_with_nat(object val)
-cpdef bint is_null_datetimelike(object val)
+cpdef bint is_null_datetimelike(object val, bint inat_is_null=*)
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index df083f27ad653..a55d15a7c4e85 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -14,6 +14,8 @@ cimport numpy as cnp
from numpy cimport int64_t
cnp.import_array()
+from pandas._libs.tslibs.np_datetime cimport (
+ get_datetime64_value, get_timedelta64_value)
cimport pandas._libs.tslibs.util as util
from pandas._libs.tslibs.util cimport (
get_nat, is_integer_object, is_float_object, is_datetime64_object,
@@ -686,26 +688,30 @@ cdef inline bint checknull_with_nat(object val):
return val is None or util.is_nan(val) or val is c_NaT
-cpdef bint is_null_datetimelike(object val):
+cpdef bint is_null_datetimelike(object val, bint inat_is_null=True):
"""
Determine if we have a null for a timedelta/datetime (or integer versions)
Parameters
----------
val : object
+ inat_is_null : bool, default True
+ Whether to treat integer iNaT value as null
Returns
-------
null_datetimelike : bool
"""
- if val is None or util.is_nan(val):
+ if val is None:
return True
elif val is c_NaT:
return True
+ elif util.is_float_object(val) or util.is_complex_object(val):
+ return val != val
elif util.is_timedelta64_object(val):
- return val.view('int64') == NPY_NAT
+ return get_timedelta64_value(val) == NPY_NAT
elif util.is_datetime64_object(val):
- return val.view('int64') == NPY_NAT
- elif util.is_integer_object(val):
+ return get_datetime64_value(val) == NPY_NAT
+ elif inat_is_null and util.is_integer_object(val):
return val == NPY_NAT
return False
diff --git a/pandas/_libs/tslibs/util.pxd b/pandas/_libs/tslibs/util.pxd
index 0ba61fcc58f46..ef7065a44f18b 100644
--- a/pandas/_libs/tslibs/util.pxd
+++ b/pandas/_libs/tslibs/util.pxd
@@ -215,7 +215,8 @@ cdef inline bint is_offset_object(object val):
cdef inline bint is_nan(object val):
"""
- Check if val is a Not-A-Number float, including float('NaN') and np.nan.
+ Check if val is a Not-A-Number float or complex, including
+ float('NaN') and np.nan.
Parameters
----------
@@ -225,4 +226,4 @@ cdef inline bint is_nan(object val):
-------
is_nan : bool
"""
- return is_float_object(val) and val != val
+ return (is_float_object(val) or is_complex_object(val)) and val != val
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index 965e5e000d026..d913d2ad299ce 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -1,13 +1,14 @@
# -*- coding: utf-8 -*-
from datetime import datetime
+from decimal import Decimal
from warnings import catch_warnings, filterwarnings, simplefilter
import numpy as np
import pytest
from pandas._libs import missing as libmissing
-from pandas._libs.tslib import iNaT
+from pandas._libs.tslibs import iNaT, is_null_datetimelike
from pandas.compat import u
from pandas.core.dtypes.common import is_scalar
@@ -392,3 +393,106 @@ def test_empty_like(self):
expected = np.array([True])
self._check_behavior(arr, expected)
+
+
+m8_units = ['as', 'ps', 'ns', 'us', 'ms', 's',
+ 'm', 'h', 'D', 'W', 'M', 'Y']
+
+na_vals = [
+ None,
+ NaT,
+ float('NaN'),
+ complex('NaN'),
+ np.nan,
+ np.float64('NaN'),
+ np.float32('NaN'),
+ np.complex64(np.nan),
+ np.complex128(np.nan),
+ np.datetime64('NaT'),
+ np.timedelta64('NaT'),
+] + [
+ np.datetime64('NaT', unit) for unit in m8_units
+] + [
+ np.timedelta64('NaT', unit) for unit in m8_units
+]
+
+inf_vals = [
+ float('inf'),
+ float('-inf'),
+ complex('inf'),
+ complex('-inf'),
+ np.inf,
+ np.NINF,
+]
+
+int_na_vals = [
+ # Values that match iNaT, which we treat as null in specific cases
+ np.int64(NaT.value),
+ int(NaT.value),
+]
+
+sometimes_na_vals = [
+ Decimal('NaN'),
+]
+
+never_na_vals = [
+ # float/complex values that when viewed as int64 match iNaT
+ -0.0,
+ np.float64('-0.0'),
+ -0j,
+ np.complex64(-0j),
+]
+
+
+class TestLibMissing(object):
+ def test_checknull(self):
+ for value in na_vals:
+ assert libmissing.checknull(value)
+
+ for value in inf_vals:
+ assert not libmissing.checknull(value)
+
+ for value in int_na_vals:
+ assert not libmissing.checknull(value)
+
+ for value in sometimes_na_vals:
+ assert not libmissing.checknull(value)
+
+ for value in never_na_vals:
+ assert not libmissing.checknull(value)
+
+ def checknull_old(self):
+ for value in na_vals:
+ assert libmissing.checknull_old(value)
+
+ for value in inf_vals:
+ assert libmissing.checknull_old(value)
+
+ for value in int_na_vals:
+ assert not libmissing.checknull_old(value)
+
+ for value in sometimes_na_vals:
+ assert not libmissing.checknull_old(value)
+
+ for value in never_na_vals:
+ assert not libmissing.checknull_old(value)
+
+ def test_is_null_datetimelike(self):
+ for value in na_vals:
+ assert is_null_datetimelike(value)
+ assert is_null_datetimelike(value, False)
+
+ for value in inf_vals:
+ assert not is_null_datetimelike(value)
+ assert not is_null_datetimelike(value, False)
+
+ for value in int_na_vals:
+ assert is_null_datetimelike(value)
+ assert not is_null_datetimelike(value, False)
+
+ for value in sometimes_na_vals:
+ assert not is_null_datetimelike(value)
+ assert not is_null_datetimelike(value, False)
+
+ for value in never_na_vals:
+ assert not is_null_datetimelike(value)
| gets rid of is_null_datelike_scalar, which would, among other things, treat `np.float64('-0.0')` as `iNaT`.
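The collision is easy to reproduce with NumPy alone: IEEE 754 `-0.0` has only the sign bit set, and that bit pattern reinterpreted as `int64` is exactly the `iNaT` sentinel.

```python
import numpy as np

iNaT = np.iinfo(np.int64).min  # -9223372036854775808, pandas' NaT sentinel

# -0.0 is 0x8000000000000000 as raw bits, i.e. -2**63 when viewed as int64,
# so any check comparing the int64 view against iNaT mistakes negative
# zero for a null value. Positive zero is all-zero bits and is unaffected.
assert np.float64(-0.0).view("int64") == iNaT
assert np.float64(0.0).view("int64") == 0
```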
Some overlap with #24619; merge conflicts should be small or zero.
- [x] closes #24607
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
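The other half of the change, catching complex NaN, follows from NaN's self-inequality holding for complex values too (plain Python/NumPy, outside the Cython code):

```python
import numpy as np

vals = [float("nan"), complex("nan"), np.complex64(np.nan), np.complex128(np.nan)]

# The `val != val` test is true for complex NaNs as well, so widening
# is_nan's type check from float-only to float-or-complex is enough.
assert all(v != v for v in vals)
assert complex(1, 0) == complex(1, 0)  # ordinary complex values still compare equal
```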
| https://api.github.com/repos/pandas-dev/pandas/pulls/24628 | 2019-01-04T22:33:26Z | 2019-01-05T17:52:55Z | 2019-01-05T17:52:55Z | 2019-01-05T17:58:51Z |
Array api docs | diff --git a/doc/source/10min.rst b/doc/source/10min.rst
index a7557e6e1d1c2..972b562cfebba 100644
--- a/doc/source/10min.rst
+++ b/doc/source/10min.rst
@@ -657,7 +657,7 @@ Categoricals
------------
pandas can include categorical data in a ``DataFrame``. For full docs, see the
-:ref:`categorical introduction <categorical>` and the :ref:`API documentation <api.categorical>`.
+:ref:`categorical introduction <categorical>` and the :ref:`API documentation <api.arrays.categorical>`.
.. ipython:: python
diff --git a/doc/source/api/arrays.rst b/doc/source/api/arrays.rst
new file mode 100644
index 0000000000000..d8ce2ab7bf73e
--- /dev/null
+++ b/doc/source/api/arrays.rst
@@ -0,0 +1,401 @@
+{{ header }}
+
+.. _api.arrays:
+
+=============
+Pandas Arrays
+=============
+
+.. currentmodule:: pandas
+
+For most data types, pandas uses NumPy arrays as the concrete
+objects contained with a :class:`Index`, :class:`Series`, or
+:class:`DataFrame`.
+
+For some data types, pandas extends NumPy's type system.
+
+=================== ========================= ================== =============================
+Kind of Data Pandas Data Type Scalar Array
+=================== ========================= ================== =============================
+TZ-aware datetime :class:`DatetimeTZDtype` :class:`Timestamp` :ref:`api.arrays.datetime`
+Timedeltas (none) :class:`Timedelta` :ref:`api.arrays.timedelta`
+Period (time spans) :class:`PeriodDtype` :class:`Period` :ref:`api.arrays.period`
+Intervals :class:`IntervalDtype` :class:`Interval` :ref:`api.arrays.interval`
+Nullable Integer :class:`Int64Dtype`, ... (none) :ref:`api.arrays.integer_na`
+Categorical :class:`CategoricalDtype` (none) :ref:`api.arrays.categorical`
+Sparse :class:`SparseDtype` (none) :ref:`api.arrays.sparse`
+=================== ========================= ================== =============================
+
+Pandas and third-party libraries can extend NumPy's type system (see :ref:`extending.extension-types`).
+The top-level :meth:`array` method can be used to create a new array, which may be
+stored in a :class:`Series`, :class:`Index`, or as a column in a :class:`DataFrame`.
+
+.. autosummary::
+ :toctree: generated/
+
+ array
+
+.. _api.arrays.datetime:
+
+Datetime Data
+-------------
+
+NumPy cannot natively represent timezone-aware datetimes. Pandas supports this
+with the :class:`arrays.DatetimeArray` extension array, which can hold timezone-naive
+or timezone-aware values.
+
+:class:`Timestamp`, a subclass of :class:`datetime.datetime`, is pandas'
+scalar type for timezone-naive or timezone-aware datetime data.
+
+.. autosummary::
+ :toctree: generated/
+
+ Timestamp
+
+Properties
+~~~~~~~~~~
+.. autosummary::
+ :toctree: generated/
+
+ Timestamp.asm8
+ Timestamp.day
+ Timestamp.dayofweek
+ Timestamp.dayofyear
+ Timestamp.days_in_month
+ Timestamp.daysinmonth
+ Timestamp.fold
+ Timestamp.hour
+ Timestamp.is_leap_year
+ Timestamp.is_month_end
+ Timestamp.is_month_start
+ Timestamp.is_quarter_end
+ Timestamp.is_quarter_start
+ Timestamp.is_year_end
+ Timestamp.is_year_start
+ Timestamp.max
+ Timestamp.microsecond
+ Timestamp.min
+ Timestamp.minute
+ Timestamp.month
+ Timestamp.nanosecond
+ Timestamp.quarter
+ Timestamp.resolution
+ Timestamp.second
+ Timestamp.tz
+ Timestamp.tzinfo
+ Timestamp.value
+ Timestamp.week
+ Timestamp.weekofyear
+ Timestamp.year
+
+Methods
+~~~~~~~
+.. autosummary::
+ :toctree: generated/
+
+ Timestamp.astimezone
+ Timestamp.ceil
+ Timestamp.combine
+ Timestamp.ctime
+ Timestamp.date
+ Timestamp.day_name
+ Timestamp.dst
+ Timestamp.floor
+ Timestamp.freq
+ Timestamp.freqstr
+ Timestamp.fromordinal
+ Timestamp.fromtimestamp
+ Timestamp.isocalendar
+ Timestamp.isoformat
+ Timestamp.isoweekday
+ Timestamp.month_name
+ Timestamp.normalize
+ Timestamp.now
+ Timestamp.replace
+ Timestamp.round
+ Timestamp.strftime
+ Timestamp.strptime
+ Timestamp.time
+ Timestamp.timestamp
+ Timestamp.timetuple
+ Timestamp.timetz
+ Timestamp.to_datetime64
+ Timestamp.to_julian_date
+ Timestamp.to_period
+ Timestamp.to_pydatetime
+ Timestamp.today
+ Timestamp.toordinal
+ Timestamp.tz_convert
+ Timestamp.tz_localize
+ Timestamp.tzname
+ Timestamp.utcfromtimestamp
+ Timestamp.utcnow
+ Timestamp.utcoffset
+ Timestamp.utctimetuple
+ Timestamp.weekday
+
+A collection of timestamps may be stored in a :class:`arrays.DatetimeArray`.
+For timezone-aware data, the ``.dtype`` of a ``DatetimeArray`` is a
+:class:`DatetimeTZDtype`. For timezone-naive data, ``np.dtype("datetime64[ns]")``
+is used.
+
+If the data are tz-aware, then every value in the array must have the same timezone.
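A minimal sketch (not part of the diff) of the dtype distinction described above — a tz-aware Series is backed by a ``DatetimeArray`` whose dtype is a ``DatetimeTZDtype``:

```python
import pandas as pd

# Timezone-aware datetime data gets an extension dtype, not a numpy dtype.
idx = pd.date_range("2019-01-04", periods=2, tz="UTC")
s = pd.Series(idx)
print(s.dtype)  # datetime64[ns, UTC]
```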
+
+.. autosummary::
+ :toctree: generated/
+
+ arrays.DatetimeArray
+ DatetimeTZDtype
+
+.. _api.arrays.timedelta:
+
+Timedelta Data
+--------------
+
+NumPy can natively represent timedeltas. Pandas provides :class:`Timedelta`
+for symmetry with :class:`Timestamp`.
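As a quick illustrative example (editorial addition) of the ``Timedelta`` scalar mentioned above:

```python
import pandas as pd

# Timedelta mirrors Timestamp's construction style for durations.
td = pd.Timedelta(days=1, hours=2)
print(td.total_seconds())  # 93600.0
```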
+
+.. autosummary::
+ :toctree: generated/
+
+ Timedelta
+
+Properties
+~~~~~~~~~~
+.. autosummary::
+ :toctree: generated/
+
+ Timedelta.asm8
+ Timedelta.components
+ Timedelta.days
+ Timedelta.delta
+ Timedelta.freq
+ Timedelta.is_populated
+ Timedelta.max
+ Timedelta.microseconds
+ Timedelta.min
+ Timedelta.nanoseconds
+ Timedelta.resolution
+ Timedelta.seconds
+ Timedelta.value
+ Timedelta.view
+
+Methods
+~~~~~~~
+.. autosummary::
+ :toctree: generated/
+
+ Timedelta.ceil
+ Timedelta.floor
+ Timedelta.isoformat
+ Timedelta.round
+ Timedelta.to_pytimedelta
+ Timedelta.to_timedelta64
+ Timedelta.total_seconds
+
+A collection of timedeltas may be stored in a :class:`TimedeltaArray`.
+
+.. autosummary::
+ :toctree: generated/
+
+ arrays.TimedeltaArray
+
+.. _api.arrays.period:
+
+Timespan Data
+-------------
+
+Pandas represents spans of times as :class:`Period` objects.
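A short sketch (not part of the diff) of the ``Period`` span type described above:

```python
import pandas as pd

# A Period covers a whole span of time; its bounds are Timestamps.
p = pd.Period("2019-01", freq="M")
print(p.start_time)  # 2019-01-01 00:00:00
print(p.end_time)    # 2019-01-31 23:59:59.999999999
```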
+
+Period
+------
+.. autosummary::
+ :toctree: generated/
+
+ Period
+
+Properties
+~~~~~~~~~~
+.. autosummary::
+ :toctree: generated/
+
+ Period.day
+ Period.dayofweek
+ Period.dayofyear
+ Period.days_in_month
+ Period.daysinmonth
+ Period.end_time
+ Period.freq
+ Period.freqstr
+ Period.hour
+ Period.is_leap_year
+ Period.minute
+ Period.month
+ Period.ordinal
+ Period.quarter
+ Period.qyear
+ Period.second
+ Period.start_time
+ Period.week
+ Period.weekday
+ Period.weekofyear
+ Period.year
+
+Methods
+~~~~~~~
+.. autosummary::
+ :toctree: generated/
+
+ Period.asfreq
+ Period.now
+ Period.strftime
+ Period.to_timestamp
+
+A collection of periods may be stored in a :class:`arrays.PeriodArray`.
+Every period in a ``PeriodArray`` must have the same ``freq``.
+
+.. autosummary::
+ :toctree: generated/
+
+   arrays.PeriodArray
+ PeriodDtype
+
+.. _api.arrays.interval:
+
+Interval Data
+-------------
+
+Arbitrary intervals can be represented as :class:`Interval` objects.
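For illustration (editorial addition), ``Interval`` membership follows the ``closed`` side:

```python
import pandas as pd

# closed="right" means the left endpoint is open, the right is closed.
iv = pd.Interval(0, 5, closed="right")
print(2 in iv)  # True
print(0 in iv)  # False
```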
+
+.. autosummary::
+ :toctree: generated/
+
+ Interval
+
+Properties
+~~~~~~~~~~
+.. autosummary::
+ :toctree: generated/
+
+ Interval.closed
+ Interval.closed_left
+ Interval.closed_right
+ Interval.left
+ Interval.length
+ Interval.mid
+ Interval.open_left
+ Interval.open_right
+ Interval.overlaps
+ Interval.right
+
+A collection of intervals may be stored in an :class:`IntervalArray`.
+
+.. autosummary::
+ :toctree: generated/
+
+ IntervalArray
+ IntervalDtype
+
+.. _api.arrays.integer_na:
+
+Nullable Integer
+----------------
+
+:class:`numpy.ndarray` cannot natively represent integer data with missing values.
+Pandas provides this through :class:`arrays.IntegerArray`.
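A minimal sketch (not part of the diff) of the nullable-integer behaviour described above:

```python
import pandas as pd

# The "Int64" extension dtype holds integers alongside missing values;
# reductions skip the missing entries by default.
s = pd.Series([1, 2, None], dtype="Int64")
print(s.sum())  # 3
```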
+
+.. autosummary::
+ :toctree: generated/
+
+ arrays.IntegerArray
+ Int8Dtype
+ Int16Dtype
+ Int32Dtype
+ Int64Dtype
+ UInt8Dtype
+ UInt16Dtype
+ UInt32Dtype
+ UInt64Dtype
+
+.. _api.arrays.categorical:
+
+Categorical Data
+----------------
+
+Pandas defines a custom data type for representing data that can take only a
+limited, fixed set of values. The dtype of a ``Categorical`` can be described by
+a :class:`pandas.api.types.CategoricalDtype`.
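As an illustrative example (editorial addition), a ``CategoricalDtype`` can carry both the allowed categories and an ordering:

```python
import pandas as pd
from pandas.api.types import CategoricalDtype

# The dtype object fixes the category set and declares it ordered.
dtype = CategoricalDtype(categories=["low", "mid", "high"], ordered=True)
s = pd.Series(["low", "high"], dtype=dtype)
print(s.min())  # low
```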
+
+.. autosummary::
+ :toctree: generated/
+ :template: autosummary/class_without_autosummary.rst
+
+ api.types.CategoricalDtype
+
+.. autosummary::
+ :toctree: generated/
+
+ api.types.CategoricalDtype.categories
+ api.types.CategoricalDtype.ordered
+
+Categorical data can be stored in a :class:`pandas.Categorical`
+
+.. autosummary::
+ :toctree: generated/
+ :template: autosummary/class_without_autosummary.rst
+
+ Categorical
+
+The alternative :meth:`Categorical.from_codes` constructor can be used when you
+have the categories and integer codes already:
+
+.. autosummary::
+ :toctree: generated/
+
+ Categorical.from_codes
+
+The dtype information is available on the ``Categorical``
+
+.. autosummary::
+ :toctree: generated/
+
+ Categorical.dtype
+ Categorical.categories
+ Categorical.ordered
+ Categorical.codes
+
+``np.asarray(categorical)`` works by implementing the array interface. Be aware that this converts
+the Categorical back to a NumPy array, so categories and order information are not preserved!
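A quick sketch (not part of the diff) of the conversion pitfall noted above:

```python
import numpy as np
import pandas as pd

# Converting to ndarray drops the dtype metadata: the result is a plain
# object array with no category set and no ordering.
cat = pd.Categorical(["b", "a", "b"], ordered=True)
arr = np.asarray(cat)
print(arr.dtype)  # object
```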
+
+.. autosummary::
+ :toctree: generated/
+
+ Categorical.__array__
+
+A ``Categorical`` can be stored in a ``Series`` or ``DataFrame``.
+To create a Series of dtype ``category``, use ``cat = s.astype(dtype)`` or
+``Series(..., dtype=dtype)`` where ``dtype`` is either
+
+* the string ``'category'``
+* an instance of :class:`~pandas.api.types.CategoricalDtype`.
+
+If the Series is of dtype ``CategoricalDtype``, ``Series.cat`` can be used to change the categorical
+data. See :ref:`api.series.cat` for more.
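For illustration (editorial addition), changing categorical data through the ``Series.cat`` accessor mentioned above:

```python
import pandas as pd

# astype("category") infers the categories; the cat accessor then
# returns a new Series with the modified category set.
s = pd.Series(["a", "b", "a"]).astype("category")
s = s.cat.add_categories(["c"])
print(list(s.cat.categories))  # ['a', 'b', 'c']
```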
+
+.. _api.arrays.sparse:
+
+Sparse Data
+-----------
+
+Data where a single value is repeated many times (e.g. ``0`` or ``NaN``) may
+be stored efficiently as a :class:`SparseArray`.
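A minimal sketch (not part of the diff) of the storage saving described above:

```python
import pandas as pd

# Only the non-fill values are physically stored; density reports the
# fraction of stored values.
sa = pd.arrays.SparseArray([0, 0, 1, 0])
print(sa.density)  # 0.25
```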
+
+.. autosummary::
+ :toctree: generated/
+
+ SparseArray
+ SparseDtype
+
+The ``Series.sparse`` accessor may be used to access sparse-specific attributes
+and methods if the :class:`Series` contains sparse values. See
+:ref:`api.series.sparse` for more.
diff --git a/doc/source/api/index.rst b/doc/source/api/index.rst
index 0bd89fc826a21..e4d118e278128 100644
--- a/doc/source/api/index.rst
+++ b/doc/source/api/index.rst
@@ -26,9 +26,9 @@ public functions related to data types in pandas.
general_functions
series
frame
+ arrays
panel
indexing
- scalars
offset_frequency
window
groupby
diff --git a/doc/source/api/scalars.rst b/doc/source/api/scalars.rst
index 662a4d5a8fcfe..e69de29bb2d1d 100644
--- a/doc/source/api/scalars.rst
+++ b/doc/source/api/scalars.rst
@@ -1,204 +0,0 @@
-{{ header }}
-
-.. _api.scalars:
-
-=======
-Scalars
-=======
-.. currentmodule:: pandas
-
-Period
-------
-.. autosummary::
- :toctree: generated/
-
- Period
-
-Properties
-~~~~~~~~~~
-.. autosummary::
- :toctree: generated/
-
- Period.day
- Period.dayofweek
- Period.dayofyear
- Period.days_in_month
- Period.daysinmonth
- Period.end_time
- Period.freq
- Period.freqstr
- Period.hour
- Period.is_leap_year
- Period.minute
- Period.month
- Period.ordinal
- Period.quarter
- Period.qyear
- Period.second
- Period.start_time
- Period.week
- Period.weekday
- Period.weekofyear
- Period.year
-
-Methods
-~~~~~~~
-.. autosummary::
- :toctree: generated/
-
- Period.asfreq
- Period.now
- Period.strftime
- Period.to_timestamp
-
-Timestamp
----------
-.. autosummary::
- :toctree: generated/
-
- Timestamp
-
-Properties
-~~~~~~~~~~
-.. autosummary::
- :toctree: generated/
-
- Timestamp.asm8
- Timestamp.day
- Timestamp.dayofweek
- Timestamp.dayofyear
- Timestamp.days_in_month
- Timestamp.daysinmonth
- Timestamp.fold
- Timestamp.hour
- Timestamp.is_leap_year
- Timestamp.is_month_end
- Timestamp.is_month_start
- Timestamp.is_quarter_end
- Timestamp.is_quarter_start
- Timestamp.is_year_end
- Timestamp.is_year_start
- Timestamp.max
- Timestamp.microsecond
- Timestamp.min
- Timestamp.minute
- Timestamp.month
- Timestamp.nanosecond
- Timestamp.quarter
- Timestamp.resolution
- Timestamp.second
- Timestamp.tz
- Timestamp.tzinfo
- Timestamp.value
- Timestamp.week
- Timestamp.weekofyear
- Timestamp.year
-
-Methods
-~~~~~~~
-.. autosummary::
- :toctree: generated/
-
- Timestamp.astimezone
- Timestamp.ceil
- Timestamp.combine
- Timestamp.ctime
- Timestamp.date
- Timestamp.day_name
- Timestamp.dst
- Timestamp.floor
- Timestamp.freq
- Timestamp.freqstr
- Timestamp.fromordinal
- Timestamp.fromtimestamp
- Timestamp.isocalendar
- Timestamp.isoformat
- Timestamp.isoweekday
- Timestamp.month_name
- Timestamp.normalize
- Timestamp.now
- Timestamp.replace
- Timestamp.round
- Timestamp.strftime
- Timestamp.strptime
- Timestamp.time
- Timestamp.timestamp
- Timestamp.timetuple
- Timestamp.timetz
- Timestamp.to_datetime64
- Timestamp.to_julian_date
- Timestamp.to_period
- Timestamp.to_pydatetime
- Timestamp.today
- Timestamp.toordinal
- Timestamp.tz_convert
- Timestamp.tz_localize
- Timestamp.tzname
- Timestamp.utcfromtimestamp
- Timestamp.utcnow
- Timestamp.utcoffset
- Timestamp.utctimetuple
- Timestamp.weekday
-
-Interval
---------
-.. autosummary::
- :toctree: generated/
-
- Interval
-
-Properties
-~~~~~~~~~~
-.. autosummary::
- :toctree: generated/
-
- Interval.closed
- Interval.closed_left
- Interval.closed_right
- Interval.left
- Interval.length
- Interval.mid
- Interval.open_left
- Interval.open_right
- Interval.overlaps
- Interval.right
-
-Timedelta
----------
-.. autosummary::
- :toctree: generated/
-
- Timedelta
-
-Properties
-~~~~~~~~~~
-.. autosummary::
- :toctree: generated/
-
- Timedelta.asm8
- Timedelta.components
- Timedelta.days
- Timedelta.delta
- Timedelta.freq
- Timedelta.is_populated
- Timedelta.max
- Timedelta.microseconds
- Timedelta.min
- Timedelta.nanoseconds
- Timedelta.resolution
- Timedelta.seconds
- Timedelta.value
- Timedelta.view
-
-Methods
-~~~~~~~
-.. autosummary::
- :toctree: generated/
-
- Timedelta.ceil
- Timedelta.floor
- Timedelta.isoformat
- Timedelta.round
- Timedelta.to_pytimedelta
- Timedelta.to_timedelta64
- Timedelta.total_seconds
diff --git a/doc/source/api/series.rst b/doc/source/api/series.rst
index 7d5e6037b012a..f1238e9d3f2c3 100644
--- a/doc/source/api/series.rst
+++ b/doc/source/api/series.rst
@@ -278,14 +278,34 @@ Time series-related
Series.tshift
Series.slice_shift
+Accessors
+---------
+
+Pandas provides dtype-specific methods under various accessors.
+These are separate namespaces within :class:`Series` that only apply
+to specific data types.
+
+=========================== =================================
+Data Type Accessor
+=========================== =================================
+Datetime, Timedelta, Period :ref:`dt <api.series.dt>`
+String :ref:`str <api.series.str>`
+Categorical :ref:`cat <api.series.cat>`
+Sparse :ref:`sparse <api.series.sparse>`
+=========================== =================================
+
+.. _api.series.dt:
+
Datetimelike Properties
------------------------
+~~~~~~~~~~~~~~~~~~~~~~~
+
``Series.dt`` can be used to access the values of the series as
datetimelike and return several properties.
These can be accessed like ``Series.dt.<property>``.
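As an illustrative example (editorial addition) of the ``Series.dt`` accessor described above:

```python
import pandas as pd

# dt exposes datetimelike properties of the values, element-wise.
s = pd.Series(pd.date_range("2019-01-04", periods=3))
print(list(s.dt.day))  # [4, 5, 6]
```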
Datetime Properties
-~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^
+
.. autosummary::
:toctree: generated/
:template: autosummary/accessor_attribute.rst
@@ -320,7 +340,8 @@ Datetime Properties
Series.dt.freq
Datetime Methods
-~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^
+
.. autosummary::
:toctree: generated/
:template: autosummary/accessor_method.rst
@@ -338,7 +359,8 @@ Datetime Methods
Series.dt.day_name
Period Properties
-~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^
+
.. autosummary::
:toctree: generated/
:template: autosummary/accessor_attribute.rst
@@ -348,7 +370,8 @@ Period Properties
Series.dt.end_time
Timedelta Properties
-~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^
+
.. autosummary::
:toctree: generated/
:template: autosummary/accessor_attribute.rst
@@ -360,7 +383,8 @@ Timedelta Properties
Series.dt.components
Timedelta Methods
-~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^
+
.. autosummary::
:toctree: generated/
:template: autosummary/accessor_method.rst
@@ -368,8 +392,12 @@ Timedelta Methods
Series.dt.to_pytimedelta
Series.dt.total_seconds
+
+.. _api.series.str:
+
String handling
----------------
+~~~~~~~~~~~~~~~
+
``Series.str`` can be used to access the values of the series as
strings and apply several methods to it. These can be accessed like
``Series.str.<function/property>``.
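A quick sketch (not part of the diff) of the ``Series.str`` accessor usage described above:

```python
import pandas as pd

# str methods apply element-wise; .str[0] indexes into each split list.
s = pd.Series(["a_b", "c_d"])
print(list(s.str.split("_").str[0]))  # ['a', 'c']
```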
@@ -445,82 +473,13 @@ strings and apply several methods to it. These can be accessed like
Series.dt
Index.str
-.. _api.arrays:
-
-Arrays
-------
-Pandas and third-party libraries can extend NumPy's type system (see :ref:`extending.extension-types`).
-
-.. autosummary::
- :toctree: generated/
-
- array
-
-.. _api.categorical:
-
-Categorical
-~~~~~~~~~~~
-
-Pandas defines a custom data type for representing data that can take only a
-limited, fixed set of values. The dtype of a ``Categorical`` can be described by
-a :class:`pandas.api.types.CategoricalDtype`.
-
-.. autosummary::
- :toctree: generated/
- :template: autosummary/class_without_autosummary.rst
-
- api.types.CategoricalDtype
-
-.. autosummary::
- :toctree: generated/
-
- api.types.CategoricalDtype.categories
- api.types.CategoricalDtype.ordered
-
-Categorical data can be stored in a :class:`pandas.Categorical`
-
-.. autosummary::
- :toctree: generated/
- :template: autosummary/class_without_autosummary.rst
-
- Categorical
-
-The alternative :meth:`Categorical.from_codes` constructor can be used when you
-have the categories and integer codes already:
-
-.. autosummary::
- :toctree: generated/
-
- Categorical.from_codes
-
-The dtype information is available on the ``Categorical``
-
-.. autosummary::
- :toctree: generated/
+.. _api.series.cat:
- Categorical.dtype
- Categorical.categories
- Categorical.ordered
- Categorical.codes
-
-``np.asarray(categorical)`` works by implementing the array interface. Be aware, that this converts
-the Categorical back to a NumPy array, so categories and order information is not preserved!
-
-.. autosummary::
- :toctree: generated/
-
- Categorical.__array__
-
-A ``Categorical`` can be stored in a ``Series`` or ``DataFrame``.
-To create a Series of dtype ``category``, use ``cat = s.astype(dtype)`` or
-``Series(..., dtype=dtype)`` where ``dtype`` is either
-
-* the string ``'category'``
-* an instance of :class:`~pandas.api.types.CategoricalDtype`.
+Categorical Accessor
+~~~~~~~~~~~~~~~~~~~~
-If the Series is of dtype ``CategoricalDtype``, ``Series.cat`` can be used to change the categorical
-data. This accessor is similar to the ``Series.dt`` or ``Series.str`` and has the
-following usable methods and properties:
+Categorical-dtype specific methods and attributes are available under
+the ``Series.cat`` accessor.
.. autosummary::
:toctree: generated/
@@ -543,6 +502,31 @@ following usable methods and properties:
Series.cat.as_ordered
Series.cat.as_unordered
+
+.. _api.series.sparse:
+
+Sparse Accessor
+~~~~~~~~~~~~~~~
+
+Sparse-dtype specific methods and attributes are provided under the
+``Series.sparse`` accessor.
+
+.. autosummary::
+ :toctree: generated/
+ :template: autosummary/accessor_attribute.rst
+
+ Series.sparse.npoints
+ Series.sparse.density
+ Series.sparse.fill_value
+ Series.sparse.sp_values
+
+.. autosummary::
+ :toctree: generated/
+
+ Series.sparse.from_coo
+ Series.sparse.to_coo
+
+
Plotting
--------
``Series.plot`` is both a callable method and a namespace attribute for
@@ -594,25 +578,13 @@ Serialization / IO / Conversion
Series.to_clipboard
Series.to_latex
+
Sparse
------
+
.. autosummary::
:toctree: generated/
SparseSeries.to_coo
SparseSeries.from_coo
-.. autosummary::
- :toctree: generated/
- :template: autosummary/accessor_attribute.rst
-
- Series.sparse.npoints
- Series.sparse.density
- Series.sparse.fill_value
- Series.sparse.sp_values
-
-.. autosummary::
- :toctree: generated/
-
- Series.sparse.from_coo
- Series.sparse.to_coo
diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 73ae26150b946..13681485d2f69 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -1947,7 +1947,7 @@ documentation sections for more on each type.
=================== ========================= ================== ============================= =============================
Kind of Data Data Type Scalar Array Documentation
=================== ========================= ================== ============================= =============================
-tz-aware datetime :class:`DatetimeArray` :class:`Timestamp` :class:`arrays.DatetimeArray` :ref:`timeseries.timezone`
+tz-aware datetime :class:`DatetimeTZDtype` :class:`Timestamp` :class:`arrays.DatetimeArray` :ref:`timeseries.timezone`
Categorical :class:`CategoricalDtype` (none) :class:`Categorical` :ref:`categorical`
period (time spans) :class:`PeriodDtype` :class:`Period` :class:`arrays.PeriodArray` :ref:`timeseries.periods`
sparse :class:`SparseDtype` (none) :class:`arrays.SparseArray` :ref:`sparse`
diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst
index 68e39e68220a7..a6315c548b382 100644
--- a/doc/source/categorical.rst
+++ b/doc/source/categorical.rst
@@ -34,7 +34,7 @@ The categorical data type is useful in the following cases:
* As a signal to other Python libraries that this column should be treated as a categorical
variable (e.g. to use suitable statistical methods or plot types).
-See also the :ref:`API docs on categoricals<api.categorical>`.
+See also the :ref:`API docs on categoricals<api.arrays.categorical>`.
.. _categorical.objectcreation:
diff --git a/doc/source/comparison_with_r.rst b/doc/source/comparison_with_r.rst
index a0143d717105c..dfd388125708e 100644
--- a/doc/source/comparison_with_r.rst
+++ b/doc/source/comparison_with_r.rst
@@ -512,7 +512,7 @@ In pandas this is accomplished with ``pd.cut`` and ``astype("category")``:
pd.Series([1, 2, 3, 2, 2, 3]).astype("category")
For more details and examples see :ref:`categorical introduction <categorical>` and the
-:ref:`API documentation <api.categorical>`. There is also a documentation regarding the
+:ref:`API documentation <api.arrays.categorical>`. There is also a documentation regarding the
:ref:`differences to R's factor <categorical.rfactor>`.
diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index a37aa2644a805..953f40d1afebe 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -1351,7 +1351,7 @@ important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the :ref:`Categorical
introduction <categorical>` and the
-:ref:`API documentation <api.categorical>`.)
+:ref:`API documentation <api.arrays.categorical>`.)
.. ipython:: python
diff --git a/doc/source/whatsnew/v0.15.0.rst b/doc/source/whatsnew/v0.15.0.rst
index 6f74f0393d123..420125afd29a4 100644
--- a/doc/source/whatsnew/v0.15.0.rst
+++ b/doc/source/whatsnew/v0.15.0.rst
@@ -72,7 +72,7 @@ methods to manipulate. Thanks to Jan Schulz for much of this API/implementation.
:issue:`8075`, :issue:`8076`, :issue:`8143`, :issue:`8453`, :issue:`8518`).
For full docs, see the :ref:`categorical introduction <categorical>` and the
-:ref:`API documentation <api.categorical>`.
+:ref:`API documentation <api.arrays.categorical>`.
.. ipython:: python
:okwarning:
@@ -101,7 +101,7 @@ For full docs, see the :ref:`categorical introduction <categorical>` and the
- The ``Categorical.labels`` attribute was renamed to ``Categorical.codes`` and is read
only. If you want to manipulate codes, please use one of the
- :ref:`API methods on Categoricals <api.categorical>`.
+ :ref:`API methods on Categoricals <api.arrays.categorical>`.
- The ``Categorical.levels`` attribute is renamed to ``Categorical.categories``.
diff --git a/pandas/core/arrays/array_.py b/pandas/core/arrays/array_.py
index 9b2240eb62906..32c08e40b8033 100644
--- a/pandas/core/arrays/array_.py
+++ b/pandas/core/arrays/array_.py
@@ -47,13 +47,13 @@ def array(data, # type: Sequence[object]
Currently, pandas will infer an extension dtype for sequences of
============================== =====================================
- scalar type Array Type
- ============================= =====================================
- * :class:`pandas.Interval` :class:`pandas.IntervalArray`
- * :class:`pandas.Period` :class:`pandas.arrays.PeriodArray`
- * :class:`datetime.datetime` :class:`pandas.arrays.DatetimeArray`
- * :class:`datetime.timedelta` :class:`pandas.arrays.TimedeltaArray`
- ============================= =====================================
+ Scalar Type Array Type
+ ============================== =====================================
+ :class:`pandas.Interval` :class:`pandas.IntervalArray`
+ :class:`pandas.Period` :class:`pandas.arrays.PeriodArray`
+ :class:`datetime.datetime` :class:`pandas.arrays.DatetimeArray`
+ :class:`datetime.timedelta` :class:`pandas.arrays.TimedeltaArray`
+ ============================== =====================================
For all other cases, NumPy's usual inference rules will be used.
diff --git a/pandas/core/arrays/sparse.py b/pandas/core/arrays/sparse.py
index 7c8f58c9a3203..6114e578dc90f 100644
--- a/pandas/core/arrays/sparse.py
+++ b/pandas/core/arrays/sparse.py
@@ -56,19 +56,19 @@ class SparseDtype(ExtensionDtype):
----------
dtype : str, ExtensionDtype, numpy.dtype, type, default numpy.float64
The dtype of the underlying array storing the non-fill value values.
- fill_value : scalar, optional.
+ fill_value : scalar, optional
The scalar value not stored in the SparseArray. By default, this
depends on `dtype`.
- ========== ==========
- dtype na_value
- ========== ==========
- float ``np.nan``
- int ``0``
- bool ``False``
- datetime64 ``pd.NaT``
+ =========== ==========
+ dtype na_value
+ =========== ==========
+ float ``np.nan``
+ int ``0``
+ bool ``False``
+ datetime64 ``pd.NaT``
timedelta64 ``pd.NaT``
- ========== ==========
+ =========== ==========
The default value may be overridden by specifying a `fill_value`.
"""
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index a50def7357826..7659f0696008b 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6966,6 +6966,11 @@ def corr(self, method='pearson', min_periods=1):
-------
y : DataFrame
+ See Also
+ --------
+ DataFrame.corrwith
+ Series.corr
+
Examples
--------
>>> histogram_intersection = lambda a, b: np.minimum(a, b
@@ -6976,11 +6981,6 @@ def corr(self, method='pearson', min_periods=1):
dogs cats
dogs 1.0 0.3
cats 0.3 1.0
-
- See Also
- -------
- DataFrame.corrwith
- Series.corr
"""
numeric_df = self._get_numeric_data()
cols = numeric_df.columns
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d0555bd2e44b1..b3c14bac91f17 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -9223,7 +9223,10 @@ def _tz_convert(ax, tz):
def tz_localize(self, tz, axis=0, level=None, copy=True,
ambiguous='raise', nonexistent='raise'):
"""
- Localize tz-naive TimeSeries to target time zone.
+ Localize tz-naive index of a Series or DataFrame to target time zone.
+
+ This operation localizes the Index. To localize the values in a
+ timezone-naive Series, use :meth:`Series.dt.tz_localize`.
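For illustration (editorial addition) of the index-vs-values distinction the reworded docstring draws:

```python
import pandas as pd

# tz_localize on a Series/DataFrame localizes the *index*, leaving the
# values untouched; Series.dt.tz_localize would localize datetime values.
s = pd.Series([1, 2], index=pd.date_range("2019-01-04", periods=2))
localized = s.tz_localize("UTC")
print(localized.index.tz)  # UTC
```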
Parameters
----------
@@ -9250,10 +9253,9 @@ def tz_localize(self, tz, axis=0, level=None, copy=True,
- 'NaT' will return NaT where there are ambiguous times
- 'raise' will raise an AmbiguousTimeError if there are ambiguous
times
- nonexistent : 'shift_forward', 'shift_backward, 'NaT', timedelta,
- default 'raise'
+ nonexistent : str, default 'raise'
A nonexistent time does not exist in a particular timezone
- where clocks moved forward due to DST.
+ where clocks moved forward due to DST. Valid values are:
- 'shift_forward' will shift the nonexistent time forward to the
closest existing time
@@ -9268,6 +9270,8 @@ def tz_localize(self, tz, axis=0, level=None, copy=True,
Returns
-------
+ Series or DataFrame
+ Same type as the input.
Raises
------
| Closes
https://github.com/pandas-dev/pandas/issues/24066
cc @datapythonista @jorisvandenbossche there are a few closely related changes here.
* removed api.scalars. I think it's more useful to present "Here are all pandas extensions", and then sub-divide into array / scalar / dtype. I think arrays are the most prominent of these, so I I've named the file `api/arrays.rst`
* Moved `pd.array` to api/arrays.rst
* Moved Categorical from api/series.rst to api/arrays.rst
* Re-written the accessor section and moved / left those in api/series.rst
* Added DatetimeArray (the original point of this PR)
---
I have quite a few warnings to clean up, but wanted a +1 on the general reorganization before putting more time into it. | https://api.github.com/repos/pandas-dev/pandas/pulls/24626 | 2019-01-04T21:30:36Z | 2019-01-05T14:52:43Z | 2019-01-05T14:52:43Z | 2019-01-05T14:52:47Z |
remove eadata | diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 082a314facdd6..5a8809f754385 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -73,34 +73,30 @@ class DatetimeIndexOpsMixin(ExtensionOpsMixin):
DatetimeLikeArrayMixin._maybe_mask_results)
__iter__ = ea_passthrough(DatetimeLikeArrayMixin.__iter__)
- @property
- def _eadata(self):
- return self._data
-
@property
def freq(self):
"""
Return the frequency object if it is set, otherwise None.
"""
- return self._eadata.freq
+ return self._data.freq
@freq.setter
def freq(self, value):
- # validation is handled by _eadata setter
- self._eadata.freq = value
+ # validation is handled by _data setter
+ self._data.freq = value
@property
def freqstr(self):
"""
Return the frequency object as a string if it is set, otherwise None.
"""
- return self._eadata.freqstr
+ return self._data.freqstr
def unique(self, level=None):
if level is not None:
self._validate_index_level(level)
- result = self._eadata.unique()
+ result = self._data.unique()
# Note: if `self` is already unique, then self.unique() should share
# a `freq` with self. If not already unique, then self.freq must be
@@ -113,7 +109,7 @@ def _create_comparison_method(cls, op):
Create a comparison method that dispatches to ``cls.values``.
"""
def wrapper(self, other):
- result = op(self._eadata, maybe_unwrap_index(other))
+ result = op(self._data, maybe_unwrap_index(other))
return result
wrapper.__doc__ = op.__doc__
@@ -122,7 +118,7 @@ def wrapper(self, other):
@property
def _ndarray_values(self):
- return self._eadata._ndarray_values
+ return self._data._ndarray_values
# ------------------------------------------------------------------------
# Abstract data attributes
@@ -131,12 +127,12 @@ def _ndarray_values(self):
def values(self):
# type: () -> np.ndarray
# Note: PeriodArray overrides this to return an ndarray of objects.
- return self._eadata._data
+ return self._data._data
@property
@Appender(DatetimeLikeArrayMixin.asi8.__doc__)
def asi8(self):
- return self._eadata.asi8
+ return self._data.asi8
# ------------------------------------------------------------------------
@@ -485,7 +481,7 @@ def _add_datetimelike_methods(cls):
def __add__(self, other):
# dispatch to ExtensionArray implementation
- result = self._eadata.__add__(maybe_unwrap_index(other))
+ result = self._data.__add__(maybe_unwrap_index(other))
return wrap_arithmetic_op(self, other, result)
cls.__add__ = __add__
@@ -497,13 +493,13 @@ def __radd__(self, other):
def __sub__(self, other):
# dispatch to ExtensionArray implementation
- result = self._eadata.__sub__(maybe_unwrap_index(other))
+ result = self._data.__sub__(maybe_unwrap_index(other))
return wrap_arithmetic_op(self, other, result)
cls.__sub__ = __sub__
def __rsub__(self, other):
- result = self._eadata.__rsub__(maybe_unwrap_index(other))
+ result = self._data.__rsub__(maybe_unwrap_index(other))
return wrap_arithmetic_op(self, other, result)
cls.__rsub__ = __rsub__
@@ -534,7 +530,6 @@ def repeat(self, repeats, axis=None):
nv.validate_repeat(tuple(), dict(axis=axis))
freq = self.freq if is_period_dtype(self) else None
return self._shallow_copy(self.asi8.repeat(repeats), freq=freq)
- # TODO: dispatch to _eadata
@Appender(_index_shared_docs['where'] % _index_doc_kwargs)
def where(self, cond, other=None):
@@ -599,10 +594,10 @@ def astype(self, dtype, copy=True):
# Ensure that self.astype(self.dtype) is self
return self
- new_values = self._eadata.astype(dtype, copy=copy)
+ new_values = self._data.astype(dtype, copy=copy)
# pass copy=False because any copying will be done in the
- # _eadata.astype call above
+ # _data.astype call above
return Index(new_values,
dtype=new_values.dtype, name=self.name, copy=False)
@@ -637,7 +632,7 @@ def shift(self, periods, freq=None):
Index.shift : Shift values of Index.
PeriodIndex.shift : Shift values of PeriodIndex.
"""
- result = self._eadata._time_shift(periods, freq=freq)
+ result = self._data._time_shift(periods, freq=freq)
return type(self)(result, name=self.name)
@@ -675,9 +670,6 @@ def maybe_unwrap_index(obj):
unwrapped object
"""
if isinstance(obj, ABCIndexClass):
- if isinstance(obj, DatetimeIndexOpsMixin):
- # i.e. PeriodIndex/DatetimeIndex/TimedeltaIndex
- return obj._eadata
return obj._data
return obj
@@ -712,16 +704,16 @@ def _delegate_class(self):
raise AbstractMethodError
def _delegate_property_get(self, name, *args, **kwargs):
- result = getattr(self._eadata, name)
+ result = getattr(self._data, name)
if name not in self._raw_properties:
result = Index(result, name=self.name)
return result
def _delegate_property_set(self, name, value, *args, **kwargs):
- setattr(self._eadata, name, value)
+ setattr(self._data, name, value)
def _delegate_method(self, name, *args, **kwargs):
- result = operator.methodcaller(name, *args, **kwargs)(self._eadata)
+ result = operator.methodcaller(name, *args, **kwargs)(self._data)
if name not in self._raw_methods:
result = Index(result, name=self.name)
return result
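The ``_delegate_method`` helper above hinges on ``operator.methodcaller``; a stripped-down sketch of that delegation pattern (illustrative only, not pandas code — ``Wrapper`` is a hypothetical stand-in):

```python
import operator

class Wrapper:
    """Toy stand-in for an Index delegating to its backing array."""

    def __init__(self, data):
        self._data = data

    def _delegate_method(self, name, *args, **kwargs):
        # methodcaller builds a callable that invokes `name` with the
        # given arguments on whatever object it is applied to.
        return operator.methodcaller(name, *args, **kwargs)(self._data)

w = Wrapper("abc")
print(w._delegate_method("upper"))  # ABC
```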
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index f396f081267b3..0201827d2f886 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -341,12 +341,12 @@ def _simple_new(cls, values, name=None, freq=None, tz=None, dtype=None):
@property
def dtype(self):
- return self._eadata.dtype
+ return self._data.dtype
@property
def tz(self):
# GH 18595
- return self._eadata.tz
+ return self._data.tz
@tz.setter
def tz(self, value):
@@ -475,7 +475,7 @@ def union(self, other):
if isinstance(result, DatetimeIndex):
# TODO: we shouldn't be setting attributes like this;
# in all the tests this equality already holds
- result._eadata._dtype = this.dtype
+ result._data._dtype = this.dtype
if (result.freq is None and
(this.freq is not None or other.freq is not None)):
result.freq = to_offset(result.inferred_freq)
@@ -508,7 +508,7 @@ def union_many(self, others):
if isinstance(this, DatetimeIndex):
# TODO: we shouldn't be setting attributes like this;
# in all the tests this equality already holds
- this._eadata._dtype = dtype
+ this._data._dtype = dtype
return this
def _can_fast_union(self, other):
@@ -643,7 +643,7 @@ def intersection(self, other):
def _get_time_micros(self):
values = self.asi8
if self.tz is not None and not timezones.is_utc(self.tz):
- values = self._eadata._local_timestamps()
+ values = self._data._local_timestamps()
return fields.get_time_micros(values)
def to_series(self, keep_tz=None, index=None, name=None):
@@ -1139,7 +1139,7 @@ def offset(self, value):
self.freq = value
def __getitem__(self, key):
- result = self._eadata.__getitem__(key)
+ result = self._data.__getitem__(key)
if is_scalar(result):
return result
elif result.ndim > 1:
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 9301638d4f632..b9d6b8da2cada 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -36,7 +36,7 @@ def _make_wrapped_arith_op(opname):
meth = getattr(TimedeltaArray, opname)
def method(self, other):
- result = meth(self._eadata, maybe_unwrap_index(other))
+ result = meth(self._data, maybe_unwrap_index(other))
return wrap_arithmetic_op(self, other, result)
method.__name__ = opname
@@ -307,7 +307,7 @@ def _box_func(self):
return lambda x: Timedelta(x, unit='ns')
def __getitem__(self, key):
- result = self._eadata.__getitem__(key)
+ result = self._data.__getitem__(key)
if is_scalar(result):
return result
return type(self)(result, name=self.name)
@@ -321,7 +321,7 @@ def astype(self, dtype, copy=True):
# Have to repeat the check for 'timedelta64' (not ns) dtype
# so that we can return a numeric index, since pandas will return
# a TimedeltaIndex when dtype='timedelta'
- result = self._eadata.astype(dtype, copy=copy)
+ result = self._data.astype(dtype, copy=copy)
if self.hasnans:
return Index(result, name=self.name)
return Index(result.astype('i8'), name=self.name)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index de34227cda28a..b94bd80a7aee3 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -477,10 +477,7 @@ def _values(self):
"""
Return the internal repr of this data.
"""
- result = self._data.internal_values()
- if isinstance(result, DatetimeIndex):
- result = result._eadata
- return result
+ return self._data.internal_values()
def _formatting_values(self):
"""
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index 4474b06b19536..c31d7acad3111 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -1475,7 +1475,7 @@ def test_tdi_rmul_arraylike(self, other, box_with_array):
tdi = TimedeltaIndex(['1 Day'] * 10)
expected = timedelta_range('1 days', '10 days')
- expected._eadata.freq = None
+ expected._data.freq = None
tdi = tm.box_expected(tdi, box)
expected = tm.box_expected(expected, xbox)
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index 8890593b1fa9d..6ec3b97bb1450 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -21,7 +21,7 @@ def test_from_pandas_array(self):
result = DatetimeArray._from_sequence(arr, freq='infer')
- expected = pd.date_range('1970-01-01', periods=5, freq='H')._eadata
+ expected = pd.date_range('1970-01-01', periods=5, freq='H')._data
tm.assert_datetime_array_equal(result, expected)
def test_mismatched_timezone_raises(self):
diff --git a/pandas/tests/indexes/datetimes/test_astype.py b/pandas/tests/indexes/datetimes/test_astype.py
index 562be4cf85864..c03b8afbe79bf 100644
--- a/pandas/tests/indexes/datetimes/test_astype.py
+++ b/pandas/tests/indexes/datetimes/test_astype.py
@@ -318,8 +318,7 @@ def test_astype_category(self, tz):
pd.Timestamp('2000-01-02', tz=tz)])
tm.assert_index_equal(result, expected)
- # TODO: use \._data following composition changeover
- result = obj._eadata.astype('category')
+ result = obj._data.astype('category')
expected = expected.values
tm.assert_categorical_equal(result, expected)
diff --git a/pandas/tests/indexes/timedeltas/test_astype.py b/pandas/tests/indexes/timedeltas/test_astype.py
index 3f5507612c8e6..23e96dbc3d6ce 100644
--- a/pandas/tests/indexes/timedeltas/test_astype.py
+++ b/pandas/tests/indexes/timedeltas/test_astype.py
@@ -95,8 +95,7 @@ def test_astype_category(self):
pd.Timedelta('2H')])
tm.assert_index_equal(result, expected)
- # TODO: Use \._data following composition changeover
- result = obj._eadata.astype('category')
+ result = obj._data.astype('category')
expected = expected.values
tm.assert_categorical_equal(result, expected)
| Closes https://github.com/pandas-dev/pandas/issues/24565 | https://api.github.com/repos/pandas-dev/pandas/pulls/24625 | 2019-01-04T20:52:15Z | 2019-01-05T12:35:11Z | 2019-01-05T12:35:10Z | 2019-01-05T14:20:28Z |
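The `_delegate_method` helper in the diff above forwards a call to the wrapped array via `operator.methodcaller`. A minimal stdlib-only sketch of that delegation pattern (the `Wrapper` class and the list payload are illustrative, not pandas code):

```python
import operator

class Wrapper:
    """Delegates method calls to an underlying data object, mirroring the
    _delegate_method pattern in the diff above (illustrative only)."""
    def __init__(self, data):
        self._data = data

    def _delegate_method(self, name, *args, **kwargs):
        # methodcaller(name, *args)(obj) is equivalent to obj.name(*args)
        return operator.methodcaller(name, *args, **kwargs)(self._data)

w = Wrapper([3, 1, 2])
print(w._delegate_method("count", 1))  # 1
print(w._delegate_method("index", 2))  # 2
```

The advantage of `methodcaller` over `getattr(obj, name)(*args)` is that the callable can be built once and applied to whichever object holds the data.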
REF/TST: use pytest builtin monkeypatch fixture and remove mock fixture | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 30b24e00779a9..35a6b5df35ddc 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1,6 +1,5 @@
from datetime import date, time, timedelta
from decimal import Decimal
-import importlib
import os
from dateutil.tz import tzlocal, tzutc
@@ -637,20 +636,6 @@ def any_skipna_inferred_dtype(request):
return inferred_dtype, values
-@pytest.fixture
-def mock():
- """
- Fixture providing the 'mock' module.
-
- Uses 'unittest.mock' for Python 3. Attempts to import the 3rd party 'mock'
- package for Python 2, skipping if not present.
- """
- if PY3:
- return importlib.import_module("unittest.mock")
- else:
- return pytest.importorskip("mock")
-
-
@pytest.fixture(params=[getattr(pd.offsets, o) for o in pd.offsets.__all__ if
issubclass(getattr(pd.offsets, o), pd.offsets.Tick)])
def tick_classes(request):
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index f58cb362cd6d2..89662b70a39ad 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -197,7 +197,7 @@ def __contains__(self, key):
assert result is expected
-def test_is_file_like(mock):
+def test_is_file_like():
class MockFile(object):
pass
@@ -235,7 +235,6 @@ class MockFile(object):
# Iterator but no read / write attributes
data = [1, 2, 3]
assert not is_file(data)
- assert not is_file(mock.Mock())
@pytest.mark.parametrize(
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index c979894048127..d175f669703c7 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -305,16 +305,14 @@ def test_repr_non_interactive(self):
assert not has_truncated_repr(df)
assert not has_expanded_repr(df)
- def test_repr_truncates_terminal_size(self, mock):
- # https://github.com/pandas-dev/pandas/issues/21180
- # TODO: use mock fixutre.
- # This is being backported, so doing it directly here.
+ def test_repr_truncates_terminal_size(self, monkeypatch):
+ # see gh-21180
terminal_size = (118, 96)
- p1 = mock.patch('pandas.io.formats.console.get_terminal_size',
- return_value=terminal_size)
- p2 = mock.patch('pandas.io.formats.format.get_terminal_size',
- return_value=terminal_size)
+ monkeypatch.setattr('pandas.io.formats.console.get_terminal_size',
+ lambda: terminal_size)
+ monkeypatch.setattr('pandas.io.formats.format.get_terminal_size',
+ lambda: terminal_size)
index = range(5)
columns = pd.MultiIndex.from_tuples([
@@ -323,8 +321,7 @@ def test_repr_truncates_terminal_size(self, mock):
])
df = pd.DataFrame(1, index=index, columns=columns)
- with p1, p2:
- result = repr(df)
+ result = repr(df)
h1, h2 = result.split('\n')[:2]
assert 'long' in h1
@@ -334,21 +331,19 @@ def test_repr_truncates_terminal_size(self, mock):
# regular columns
df2 = pd.DataFrame({"A" * 41: [1, 2], 'B' * 41: [1, 2]})
- with p1, p2:
- result = repr(df2)
+ result = repr(df2)
assert df2.columns[0] in result.split('\n')[0]
- def test_repr_truncates_terminal_size_full(self, mock):
+ def test_repr_truncates_terminal_size_full(self, monkeypatch):
# GH 22984 ensure entire window is filled
terminal_size = (80, 24)
df = pd.DataFrame(np.random.rand(1, 7))
- p1 = mock.patch('pandas.io.formats.console.get_terminal_size',
- return_value=terminal_size)
- p2 = mock.patch('pandas.io.formats.format.get_terminal_size',
- return_value=terminal_size)
- with p1, p2:
- assert "..." not in str(df)
+ monkeypatch.setattr('pandas.io.formats.console.get_terminal_size',
+ lambda: terminal_size)
+ monkeypatch.setattr('pandas.io.formats.format.get_terminal_size',
+ lambda: terminal_size)
+ assert "..." not in str(df)
def test_repr_max_columns_max_rows(self):
term_width, term_height = get_terminal_size()
diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index 2dc4c578102bb..b1547181350bc 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -1814,13 +1814,16 @@ class InvalidBuffer(object):
parser.read_csv(InvalidBuffer())
-def test_invalid_file_buffer_mock(all_parsers, mock):
+def test_invalid_file_buffer_mock(all_parsers):
# see gh-15337
parser = all_parsers
msg = "Invalid file path or buffer object type"
+ class Foo():
+ pass
+
with pytest.raises(ValueError, match=msg):
- parser.read_csv(mock.Mock())
+ parser.read_csv(Foo())
def test_valid_file_buffer_seems_invalid(all_parsers):
diff --git a/pandas/tests/io/test_gcs.py b/pandas/tests/io/test_gcs.py
index 12b082c3d4099..ec0631e748dfc 100644
--- a/pandas/tests/io/test_gcs.py
+++ b/pandas/tests/io/test_gcs.py
@@ -17,43 +17,51 @@ def test_is_gcs_url():
@td.skip_if_no('gcsfs')
-def test_read_csv_gcs(mock):
+def test_read_csv_gcs(monkeypatch):
df1 = DataFrame({'int': [1, 3], 'float': [2.0, np.nan], 'str': ['t', 's'],
'dt': date_range('2018-06-18', periods=2)})
- with mock.patch('gcsfs.GCSFileSystem') as MockFileSystem:
- instance = MockFileSystem.return_value
- instance.open.return_value = StringIO(df1.to_csv(index=False))
- df2 = read_csv('gs://test/test.csv', parse_dates=['dt'])
+
+ class MockGCSFileSystem():
+ def open(*args):
+ return StringIO(df1.to_csv(index=False))
+
+ monkeypatch.setattr('gcsfs.GCSFileSystem', MockGCSFileSystem)
+ df2 = read_csv('gs://test/test.csv', parse_dates=['dt'])
assert_frame_equal(df1, df2)
@td.skip_if_no('gcsfs')
-def test_to_csv_gcs(mock):
+def test_to_csv_gcs(monkeypatch):
df1 = DataFrame({'int': [1, 3], 'float': [2.0, np.nan], 'str': ['t', 's'],
'dt': date_range('2018-06-18', periods=2)})
- with mock.patch('gcsfs.GCSFileSystem') as MockFileSystem:
- s = StringIO()
- instance = MockFileSystem.return_value
- instance.open.return_value = s
+ s = StringIO()
+
+ class MockGCSFileSystem():
+ def open(*args):
+ return s
- df1.to_csv('gs://test/test.csv', index=True)
- df2 = read_csv(StringIO(s.getvalue()), parse_dates=['dt'], index_col=0)
+ monkeypatch.setattr('gcsfs.GCSFileSystem', MockGCSFileSystem)
+ df1.to_csv('gs://test/test.csv', index=True)
+ df2 = read_csv(StringIO(s.getvalue()), parse_dates=['dt'], index_col=0)
assert_frame_equal(df1, df2)
@td.skip_if_no('gcsfs')
-def test_gcs_get_filepath_or_buffer(mock):
+def test_gcs_get_filepath_or_buffer(monkeypatch):
df1 = DataFrame({'int': [1, 3], 'float': [2.0, np.nan], 'str': ['t', 's'],
'dt': date_range('2018-06-18', periods=2)})
- with mock.patch('pandas.io.gcs.get_filepath_or_buffer') as MockGetFilepath:
- MockGetFilepath.return_value = (StringIO(df1.to_csv(index=False)),
- None, None, False)
- df2 = read_csv('gs://test/test.csv', parse_dates=['dt'])
+
+ def mock_get_filepath_or_buffer(*args, **kwargs):
+ return (StringIO(df1.to_csv(index=False)),
+ None, None, False)
+
+ monkeypatch.setattr('pandas.io.gcs.get_filepath_or_buffer',
+ mock_get_filepath_or_buffer)
+ df2 = read_csv('gs://test/test.csv', parse_dates=['dt'])
assert_frame_equal(df1, df2)
- assert MockGetFilepath.called
@pytest.mark.skipif(td.safe_import('gcsfs'),
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 436ccef48ae12..0e7672f4e2f9d 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -2988,21 +2988,21 @@ def test_secondary_axis_font_size(self, method):
self._check_ticks_props(axes=ax.right_ax,
ylabelsize=fontsize)
- def test_misc_bindings(self, mock):
+ def test_misc_bindings(self, monkeypatch):
df = pd.DataFrame(randn(10, 10), columns=list('abcdefghij'))
- p1 = mock.patch('pandas.plotting._misc.scatter_matrix',
- return_value=2)
- p2 = mock.patch('pandas.plotting._misc.andrews_curves',
- return_value=2)
- p3 = mock.patch('pandas.plotting._misc.parallel_coordinates',
- return_value=2)
- p4 = mock.patch('pandas.plotting._misc.radviz',
- return_value=2)
- with p1, p2, p3, p4:
- assert df.plot.scatter_matrix() == 2
- assert df.plot.andrews_curves('a') == 2
- assert df.plot.parallel_coordinates('a') == 2
- assert df.plot.radviz('a') == 2
+ monkeypatch.setattr('pandas.plotting._misc.scatter_matrix',
+ lambda x: 2)
+ monkeypatch.setattr('pandas.plotting._misc.andrews_curves',
+ lambda x, y: 2)
+ monkeypatch.setattr('pandas.plotting._misc.parallel_coordinates',
+ lambda x, y: 2)
+ monkeypatch.setattr('pandas.plotting._misc.radviz',
+ lambda x, y: 2)
+
+ assert df.plot.scatter_matrix() == 2
+ assert df.plot.andrews_curves('a') == 2
+ assert df.plot.parallel_coordinates('a') == 2
+ assert df.plot.radviz('a') == 2
def _generate_4_axes_via_gridspec():
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 39f8f2f44fda0..1e223c20f55b7 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -878,18 +878,18 @@ def test_custom_business_day_freq(self):
_check_plot_works(s.plot)
- def test_misc_bindings(self, mock):
+ def test_misc_bindings(self, monkeypatch):
s = Series(randn(10))
- p1 = mock.patch('pandas.plotting._misc.lag_plot',
- return_value=2)
- p2 = mock.patch('pandas.plotting._misc.autocorrelation_plot',
- return_value=2)
- p3 = mock.patch('pandas.plotting._misc.bootstrap_plot',
- return_value=2)
- with p1, p2, p3:
- assert s.plot.lag() == 2
- assert s.plot.autocorrelation() == 2
- assert s.plot.bootstrap() == 2
+ monkeypatch.setattr('pandas.plotting._misc.lag_plot',
+ lambda x: 2)
+ monkeypatch.setattr('pandas.plotting._misc.autocorrelation_plot',
+ lambda x: 2)
+ monkeypatch.setattr('pandas.plotting._misc.bootstrap_plot',
+ lambda x: 2)
+
+ assert s.plot.lag() == 2
+ assert s.plot.autocorrelation() == 2
+ assert s.plot.bootstrap() == 2
@pytest.mark.xfail
def test_plot_accessor_updates_on_inplace(self):
| - [n/a ] xref https://github.com/pandas-dev/pandas/pull/24557#issuecomment-451174496
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [n/a ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24624 | 2019-01-04T19:39:01Z | 2019-01-05T14:52:20Z | 2019-01-05T14:52:20Z | 2019-01-05T21:04:24Z |
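The PR above replaces `mock.patch` with pytest's built-in `monkeypatch` fixture. What `monkeypatch.setattr` does can be approximated with plain `setattr` plus restoration; a rough stdlib-only sketch (the `Console` class is a stand-in, not a real pandas module):

```python
class Console:
    """Stand-in for a module whose function the tests need to override."""
    @staticmethod
    def get_terminal_size():
        return (80, 24)

class SimpleMonkeyPatch:
    """Rough sketch of what pytest's monkeypatch fixture does: setattr
    records the original value so undo() can restore it after the test."""
    def __init__(self):
        self._saved = []

    def setattr(self, target, name, value):
        self._saved.append((target, name, getattr(target, name)))
        setattr(target, name, value)

    def undo(self):
        while self._saved:
            target, name, original = self._saved.pop()
            setattr(target, name, original)

mp = SimpleMonkeyPatch()
mp.setattr(Console, "get_terminal_size", lambda: (118, 96))
patched = Console.get_terminal_size()    # (118, 96) while patched
mp.undo()
restored = Console.get_terminal_size()   # (80, 24) again after undo
```

The real fixture undoes all patches automatically at test teardown, which is why the `with p1, p2:` context managers disappear from the diffs above.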
ensure DatetimeTZBlock always gets a DatetimeArray | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f88114e1c9e20..5f2fc17b08ac6 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -2240,24 +2240,11 @@ class DatetimeTZBlock(ExtensionBlock, DatetimeBlock):
is_datetimetz = True
is_extension = True
- def __init__(self, values, placement, ndim=2, dtype=None):
- # XXX: This will end up calling _maybe_coerce_values twice
- # when dtype is not None. It's relatively cheap (just an isinstance)
- # but it'd nice to avoid.
- #
- # If we can remove dtype from __init__, and push that conversion
- # push onto the callers, then we can remove this entire __init__
- # and just use DatetimeBlock's.
- if dtype is not None:
- values = self._maybe_coerce_values(values, dtype=dtype)
- super(DatetimeTZBlock, self).__init__(values, placement=placement,
- ndim=ndim)
-
@property
def _holder(self):
return DatetimeArray
- def _maybe_coerce_values(self, values, dtype=None):
+ def _maybe_coerce_values(self, values):
"""Input validation for values passed to __init__. Ensure that
we have datetime64TZ, coercing if necessary.
@@ -2265,19 +2252,14 @@ def _maybe_coerce_values(self, values, dtype=None):
-----------
values : array-like
Must be convertible to datetime64
- dtype : string or DatetimeTZDtype, optional
- Does a shallow copy to this tz
Returns
-------
- values : ndarray[datetime64ns]
+ values : DatetimeArray
"""
if not isinstance(values, self._holder):
values = self._holder(values)
- if dtype is not None:
- values = type(values)(values, dtype=dtype)
-
if values.tz is None:
raise ValueError("cannot create a DatetimeTZBlock without a tz")
@@ -3087,8 +3069,9 @@ def make_block(values, placement, klass=None, ndim=None, dtype=None,
klass = get_block_type(values, dtype)
elif klass is DatetimeTZBlock and not is_datetime64tz_dtype(values):
- return klass(values, ndim=ndim,
- placement=placement, dtype=dtype)
+ # TODO: This is no longer hit internally; does it need to be retained
+ # for e.g. pyarrow?
+ values = DatetimeArray(values, dtype)
return klass(values, ndim=ndim, placement=placement)
diff --git a/pandas/io/packers.py b/pandas/io/packers.py
index e6d18d5d4193a..b83eab7d0eba0 100644
--- a/pandas/io/packers.py
+++ b/pandas/io/packers.py
@@ -53,14 +53,15 @@
BadMove as _BadMove, move_into_mutable_buffer as _move_into_mutable_buffer)
from pandas.core.dtypes.common import (
- is_categorical_dtype, is_object_dtype, needs_i8_conversion, pandas_dtype)
+ is_categorical_dtype, is_datetime64tz_dtype, is_object_dtype,
+ needs_i8_conversion, pandas_dtype)
from pandas import ( # noqa:F401
Categorical, CategoricalIndex, DataFrame, DatetimeIndex, Float64Index,
Index, Int64Index, Interval, IntervalIndex, MultiIndex, NaT, Panel, Period,
PeriodIndex, RangeIndex, Series, TimedeltaIndex, Timestamp)
from pandas.core import internals
-from pandas.core.arrays import IntervalArray, PeriodArray
+from pandas.core.arrays import DatetimeArray, IntervalArray, PeriodArray
from pandas.core.arrays.sparse import BlockIndex, IntIndex
from pandas.core.generic import NDFrame
from pandas.core.internals import BlockManager, _safe_reshape, make_block
@@ -651,6 +652,12 @@ def create_block(b):
placement = b[u'locs']
else:
placement = axes[0].get_indexer(b[u'items'])
+
+ if is_datetime64tz_dtype(b[u'dtype']):
+ assert isinstance(values, np.ndarray), type(values)
+ assert values.dtype == 'M8[ns]', values.dtype
+ values = DatetimeArray(values, dtype=b[u'dtype'])
+
return make_block(values=values,
klass=getattr(internals, b[u'klass']),
placement=placement,
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24622 | 2019-01-04T18:39:56Z | 2019-01-05T14:51:59Z | 2019-01-05T14:51:59Z | 2019-01-05T15:39:37Z |
Fix 32-bit builds by correctly using intp_t instead of int64_t for numpy.searchsorted result | diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index 7f06784062d1a..7c9c2cafd1afb 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -3,7 +3,7 @@ import cython
import numpy as np
cimport numpy as cnp
-from numpy cimport uint8_t, int64_t, int32_t, ndarray
+from numpy cimport uint8_t, int64_t, int32_t, intp_t, ndarray
cnp.import_array()
import pytz
@@ -639,7 +639,7 @@ cdef inline int64_t[:] _tz_convert_dst(int64_t[:] values, tzinfo tz,
cdef:
Py_ssize_t n = len(values)
Py_ssize_t i
- int64_t[:] pos
+ intp_t[:] pos
int64_t[:] result = np.empty(n, dtype=np.int64)
ndarray[int64_t] trans
int64_t[:] deltas
| See discussion on #24613
- [x] closes #24613
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24621 | 2019-01-04T17:39:42Z | 2019-01-04T18:37:59Z | 2019-01-04T18:37:59Z | 2019-01-04T18:41:45Z |
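The bug fixed above comes from `numpy.searchsorted` returning indices of platform-dependent `intp` dtype (32-bit on 32-bit builds), so a Cython buffer typed `int64_t[:]` cannot accept the result there. A quick check of the return dtype, with inputs shaped like the transition arrays in the diff:

```python
import numpy as np

# Transition times and values, analogous to the tz-conversion code above.
trans = np.array([0, 100, 200], dtype=np.int64)
values = np.array([50, 150, 250], dtype=np.int64)

# searchsorted returns indices of dtype intp -- 32-bit on 32-bit builds,
# 64-bit on 64-bit builds -- regardless of the input arrays' dtype.
pos = trans.searchsorted(values, side="right") - 1

assert pos.dtype == np.intp   # not necessarily int64
print(list(pos))              # [0, 1, 2]
```

Typing the receiving buffer as `intp_t[:]`, as the diff does, matches this on every platform.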
CLN: _try_coerce_result, redundant dtypes.missing method | diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx
index c7f06bc5d7d4f..e922a5d1c3b27 100644
--- a/pandas/_libs/missing.pyx
+++ b/pandas/_libs/missing.pyx
@@ -12,8 +12,7 @@ cimport pandas._libs.util as util
from pandas._libs.tslibs.np_datetime cimport (
get_timedelta64_value, get_datetime64_value)
-from pandas._libs.tslibs.nattype cimport checknull_with_nat
-from pandas._libs.tslibs.nattype import NaT
+from pandas._libs.tslibs.nattype cimport checknull_with_nat, c_NaT
cdef float64_t INF = <float64_t>np.inf
cdef float64_t NEGINF = -INF
@@ -27,7 +26,7 @@ cdef inline bint _check_all_nulls(object val):
if isinstance(val, (float, complex)):
res = val != val
- elif val is NaT:
+ elif val is c_NaT:
res = 1
elif val is None:
res = 1
@@ -67,7 +66,7 @@ cpdef bint checknull(object val):
return val != val # and val != INF and val != NEGINF
elif util.is_datetime64_object(val):
return get_datetime64_value(val) == NPY_NAT
- elif val is NaT:
+ elif val is c_NaT:
return True
elif util.is_timedelta64_object(val):
return get_timedelta64_value(val) == NPY_NAT
@@ -106,7 +105,7 @@ cpdef bint checknull_old(object val):
return val != val or val == INF or val == NEGINF
elif util.is_datetime64_object(val):
return get_datetime64_value(val) == NPY_NAT
- elif val is NaT:
+ elif val is c_NaT:
return True
elif util.is_timedelta64_object(val):
return get_timedelta64_value(val) == NPY_NAT
@@ -190,7 +189,7 @@ def isnaobj_old(ndarray arr):
result = np.zeros(n, dtype=np.uint8)
for i in range(n):
val = arr[i]
- result[i] = val is NaT or _check_none_nan_inf_neginf(val)
+ result[i] = val is c_NaT or _check_none_nan_inf_neginf(val)
return result.view(np.bool_)
diff --git a/pandas/_libs/tslibs/__init__.py b/pandas/_libs/tslibs/__init__.py
index c7765a2c2b89c..38401cab57f5d 100644
--- a/pandas/_libs/tslibs/__init__.py
+++ b/pandas/_libs/tslibs/__init__.py
@@ -2,7 +2,7 @@
# flake8: noqa
from .conversion import normalize_date, localize_pydatetime, tz_convert_single
-from .nattype import NaT, iNaT
+from .nattype import NaT, iNaT, is_null_datetimelike
from .np_datetime import OutOfBoundsDatetime
from .period import Period, IncompatibleFrequency
from .timestamps import Timestamp
diff --git a/pandas/_libs/tslibs/nattype.pxd b/pandas/_libs/tslibs/nattype.pxd
index f649518e969be..ee8d5ca3d861c 100644
--- a/pandas/_libs/tslibs/nattype.pxd
+++ b/pandas/_libs/tslibs/nattype.pxd
@@ -17,4 +17,4 @@ cdef _NaT c_NaT
cdef bint checknull_with_nat(object val)
-cdef bint is_null_datetimelike(object val)
+cpdef bint is_null_datetimelike(object val)
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index 604599f895476..df083f27ad653 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -686,7 +686,7 @@ cdef inline bint checknull_with_nat(object val):
return val is None or util.is_nan(val) or val is c_NaT
-cdef inline bint is_null_datetimelike(object val):
+cpdef bint is_null_datetimelike(object val):
"""
Determine if we have a null for a timedelta/datetime (or integer versions)
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index b22cb1050f140..3c6d3f212342b 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -10,9 +10,9 @@
_NS_DTYPE, _TD_DTYPE, ensure_object, is_bool_dtype, is_complex_dtype,
is_datetime64_dtype, is_datetime64tz_dtype, is_datetimelike,
is_datetimelike_v_numeric, is_dtype_equal, is_extension_array_dtype,
- is_float_dtype, is_integer, is_integer_dtype, is_object_dtype,
- is_period_dtype, is_scalar, is_string_dtype, is_string_like_dtype,
- is_timedelta64_dtype, needs_i8_conversion, pandas_dtype)
+ is_float_dtype, is_integer_dtype, is_object_dtype, is_period_dtype,
+ is_scalar, is_string_dtype, is_string_like_dtype, is_timedelta64_dtype,
+ needs_i8_conversion, pandas_dtype)
from .generic import (
ABCDatetimeArray, ABCExtensionArray, ABCGeneric, ABCIndexClass,
ABCMultiIndex, ABCSeries, ABCTimedeltaArray)
@@ -339,22 +339,6 @@ def notna(obj):
notnull = notna
-def is_null_datelike_scalar(other):
- """ test whether the object is a null datelike, e.g. Nat
- but guard against passing a non-scalar """
- if other is NaT or other is None:
- return True
- elif is_scalar(other):
-
- # a timedelta
- if hasattr(other, 'dtype'):
- return other.view('i8') == iNaT
- elif is_integer(other) and other == iNaT:
- return True
- return isna(other)
- return False
-
-
def _isna_compat(arr, fill_value=np.nan):
"""
Parameters
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f88114e1c9e20..721215538af37 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -8,7 +8,7 @@
import numpy as np
from pandas._libs import internals as libinternals, lib, tslib, tslibs
-from pandas._libs.tslibs import Timedelta, conversion
+from pandas._libs.tslibs import Timedelta, conversion, is_null_datetimelike
import pandas.compat as compat
from pandas.compat import range, zip
from pandas.util._validators import validate_bool_kwarg
@@ -31,7 +31,7 @@
ABCDataFrame, ABCDatetimeIndex, ABCExtensionArray, ABCIndexClass,
ABCSeries)
from pandas.core.dtypes.missing import (
- _isna_compat, array_equivalent, is_null_datelike_scalar, isna, notna)
+ _isna_compat, array_equivalent, isna, notna)
import pandas.core.algorithms as algos
from pandas.core.arrays import (
@@ -2077,10 +2077,6 @@ def get_values(self, dtype=None):
return values
return self.values
- @property
- def asi8(self):
- return self.values.view('i8')
-
class DatetimeBlock(DatetimeLikeBlockMixin, Block):
__slots__ = ()
@@ -2162,7 +2158,7 @@ def _try_coerce_args(self, values, other):
if isinstance(other, bool):
raise TypeError
- elif is_null_datelike_scalar(other):
+ elif is_null_datetimelike(other):
other = tslibs.iNaT
elif isinstance(other, (datetime, np.datetime64, date)):
other = self._box_func(other)
@@ -2175,18 +2171,16 @@ def _try_coerce_args(self, values, other):
else:
# coercion issues
# let higher levels handle
- raise TypeError
+ raise TypeError(other)
return values, other
def _try_coerce_result(self, result):
""" reverse of try_coerce_args """
if isinstance(result, np.ndarray):
- if result.dtype.kind in ['i', 'f', 'O']:
- try:
- result = result.astype('M8[ns]')
- except ValueError:
- pass
+ if result.dtype.kind in ['i', 'f']:
+ result = result.astype('M8[ns]')
+
elif isinstance(result, (np.integer, np.float, np.datetime64)):
result = self._box_func(result)
return result
@@ -2364,8 +2358,7 @@ def _try_coerce_args(self, values, other):
# add the tz back
other = self._holder(other, dtype=self.dtype)
- elif (is_null_datelike_scalar(other) or
- (lib.is_scalar(other) and isna(other))):
+ elif is_null_datetimelike(other):
other = tslibs.iNaT
elif isinstance(other, self._holder):
if other.tz != self.values.tz:
@@ -2380,17 +2373,19 @@ def _try_coerce_args(self, values, other):
raise ValueError("incompatible or non tz-aware value")
other = other.value
else:
- raise TypeError
+ raise TypeError(other)
return values, other
def _try_coerce_result(self, result):
""" reverse of try_coerce_args """
if isinstance(result, np.ndarray):
- if result.dtype.kind in ['i', 'f', 'O']:
+ if result.dtype.kind in ['i', 'f']:
result = result.astype('M8[ns]')
+
elif isinstance(result, (np.integer, np.float, np.datetime64)):
result = self._box_func(result)
+
if isinstance(result, np.ndarray):
# allow passing of > 1dim if its trivial
@@ -2531,20 +2526,16 @@ def _try_coerce_args(self, values, other):
if isinstance(other, bool):
raise TypeError
- elif is_null_datelike_scalar(other):
+ elif is_null_datetimelike(other):
other = tslibs.iNaT
- elif isinstance(other, Timedelta):
- other = other.value
- elif isinstance(other, timedelta):
- other = Timedelta(other).value
- elif isinstance(other, np.timedelta64):
+ elif isinstance(other, (timedelta, np.timedelta64)):
other = Timedelta(other).value
elif hasattr(other, 'dtype') and is_timedelta64_dtype(other):
other = other.astype('i8', copy=False).view('i8')
else:
# coercion issues
# let higher levels handle
- raise TypeError
+ raise TypeError(other)
return values, other
@@ -2552,11 +2543,13 @@ def _try_coerce_result(self, result):
""" reverse of try_coerce_args / try_operate """
if isinstance(result, np.ndarray):
mask = isna(result)
- if result.dtype.kind in ['i', 'f', 'O']:
+ if result.dtype.kind in ['i', 'f']:
result = result.astype('m8[ns]')
result[mask] = tslibs.iNaT
+
elif isinstance(result, (np.integer, np.float)):
result = self._box_func(result)
+
return result
def should_store(self, value):
diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py
index 4ea4531c53c72..db4d3e876dec5 100644
--- a/pandas/core/sparse/series.py
+++ b/pandas/core/sparse/series.py
@@ -16,9 +16,9 @@
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution
-from pandas.core.dtypes.common import is_scalar
+from pandas.core.dtypes.common import is_integer, is_scalar
from pandas.core.dtypes.generic import ABCSeries, ABCSparseSeries
-from pandas.core.dtypes.missing import is_integer, isna, notna
+from pandas.core.dtypes.missing import isna, notna
from pandas.core import generic
from pandas.core.arrays import SparseArray
diff --git a/pandas/tests/tslibs/test_api.py b/pandas/tests/tslibs/test_api.py
index fb9355dfed645..de937d1a4c526 100644
--- a/pandas/tests/tslibs/test_api.py
+++ b/pandas/tests/tslibs/test_api.py
@@ -23,6 +23,7 @@ def test_namespace():
api = ['NaT',
'iNaT',
+ 'is_null_datetimelike',
'OutOfBoundsDatetime',
'Period',
'IncompatibleFrequency',
| Started this branch with the idea of making DatetimeBlock, TimedeltaBlock, DatetimeTZBlock define _try_coerce_result using self._holder._from_sequence (which I still think is worthwhile, cc @TomAugspurger ) and got side-tracked. This ends up just being some cleanup and removal of a redundant method.
Tiny perf bump expected from optimizing NaT checks in _libs.missing. Further improvements/cleanups may be possible pending #24607.
The object-dtype cases in try_coerce_result are no longer hit following #24606. | https://api.github.com/repos/pandas-dev/pandas/pulls/24619 | 2019-01-04T16:29:56Z | 2019-01-05T14:51:37Z | 2019-01-05T14:51:37Z | 2019-01-05T15:40:36Z |
REF/TST: Add more pytest idiom to test_to_html.py | diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 889b903088afa..554cfd306e2a7 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -38,564 +38,565 @@ def expected_html(datapath, name):
return html.rstrip()
-class TestToHTML(object):
-
- def test_to_html_with_col_space(self):
- def check_with_width(df, col_space):
- # check that col_space affects HTML generation
- # and be very brittle about it.
- html = df.to_html(col_space=col_space)
- hdrs = [x for x in html.split(r"\n") if re.search(r"<th[>\s]", x)]
- assert len(hdrs) > 0
- for h in hdrs:
- assert "min-width" in h
- assert str(col_space) in h
-
- df = DataFrame(np.random.random(size=(1, 3)))
-
- check_with_width(df, 30)
- check_with_width(df, 50)
-
- def test_to_html_with_empty_string_label(self):
- # GH 3547, to_html regards empty string labels as repeated labels
- data = {'c1': ['a', 'b'], 'c2': ['a', ''], 'data': [1, 2]}
- df = DataFrame(data).set_index(['c1', 'c2'])
- result = df.to_html()
- assert "rowspan" not in result
-
- def test_to_html_unicode(self, datapath):
- df = DataFrame({u('\u03c3'): np.arange(10.)})
- expected = expected_html(datapath, 'unicode_1')
- assert df.to_html() == expected
- df = DataFrame({'A': [u('\u03c3')]})
- expected = expected_html(datapath, 'unicode_2')
- assert df.to_html() == expected
-
- def test_to_html_decimal(self, datapath):
- # GH 12031
- df = DataFrame({'A': [6.0, 3.1, 2.2]})
- result = df.to_html(decimal=',')
- expected = expected_html(datapath, 'gh12031_expected_output')
- assert result == expected
-
- def test_to_html_escaped(self, datapath):
- a = 'str<ing1 &'
- b = 'stri>ng2 &'
-
- test_dict = {'co<l1': {a: "<type 'str'>",
- b: "<type 'str'>"},
- 'co>l2': {a: "<type 'str'>",
- b: "<type 'str'>"}}
- result = DataFrame(test_dict).to_html()
- expected = expected_html(datapath, 'escaped')
- assert result == expected
-
- def test_to_html_escape_disabled(self, datapath):
- a = 'str<ing1 &'
- b = 'stri>ng2 &'
-
- test_dict = {'co<l1': {a: "<b>bold</b>",
- b: "<b>bold</b>"},
- 'co>l2': {a: "<b>bold</b>",
- b: "<b>bold</b>"}}
- result = DataFrame(test_dict).to_html(escape=False)
- expected = expected_html(datapath, 'escape_disabled')
- assert result == expected
-
- def test_to_html_multiindex_index_false(self, datapath):
- # GH 8452
- df = DataFrame({
- 'a': range(2),
- 'b': range(3, 5),
- 'c': range(5, 7),
- 'd': range(3, 5)
- })
- df.columns = MultiIndex.from_product([['a', 'b'], ['c', 'd']])
- result = df.to_html(index=False)
- expected = expected_html(datapath, 'gh8452_expected_output')
- assert result == expected
-
+@pytest.fixture(params=['mixed', 'empty'])
+def biggie_df_fixture(request):
+ """Fixture for a big mixed Dataframe and an empty Dataframe"""
+ if request.param == 'mixed':
+ df = DataFrame({'A': np.random.randn(200),
+ 'B': tm.makeStringIndex(200)},
+ index=lrange(200))
+ df.loc[:20, 'A'] = np.nan
+ df.loc[:20, 'B'] = np.nan
+ return df
+ elif request.param == 'empty':
+ df = DataFrame(index=np.arange(200))
+ return df
+
+
+@pytest.fixture(params=fmt._VALID_JUSTIFY_PARAMETERS)
+def justify(request):
+ return request.param
+
+
+@pytest.mark.parametrize('col_space', [30, 50])
+def test_to_html_with_col_space(col_space):
+ df = DataFrame(np.random.random(size=(1, 3)))
+ # check that col_space affects HTML generation
+ # and be very brittle about it.
+ result = df.to_html(col_space=col_space)
+ hdrs = [x for x in result.split(r"\n") if re.search(r"<th[>\s]", x)]
+ assert len(hdrs) > 0
+ for h in hdrs:
+ assert "min-width" in h
+ assert str(col_space) in h
+
+
+def test_to_html_with_empty_string_label():
+ # GH 3547, to_html regards empty string labels as repeated labels
+ data = {'c1': ['a', 'b'], 'c2': ['a', ''], 'data': [1, 2]}
+ df = DataFrame(data).set_index(['c1', 'c2'])
+ result = df.to_html()
+ assert "rowspan" not in result
+
+
+@pytest.mark.parametrize('df,expected', [
+ (DataFrame({u('\u03c3'): np.arange(10.)}), 'unicode_1'),
+ (DataFrame({'A': [u('\u03c3')]}), 'unicode_2')
+])
+def test_to_html_unicode(df, expected, datapath):
+ expected = expected_html(datapath, expected)
+ result = df.to_html()
+ assert result == expected
+
+
+def test_to_html_decimal(datapath):
+ # GH 12031
+ df = DataFrame({'A': [6.0, 3.1, 2.2]})
+ result = df.to_html(decimal=',')
+ expected = expected_html(datapath, 'gh12031_expected_output')
+ assert result == expected
+
+
+@pytest.mark.parametrize('kwargs,string,expected', [
+ (dict(), "<type 'str'>", 'escaped'),
+ (dict(escape=False), "<b>bold</b>", 'escape_disabled')
+])
+def test_to_html_escaped(kwargs, string, expected, datapath):
+ a = 'str<ing1 &'
+ b = 'stri>ng2 &'
+
+ test_dict = {'co<l1': {a: string,
+ b: string},
+ 'co>l2': {a: string,
+ b: string}}
+ result = DataFrame(test_dict).to_html(**kwargs)
+ expected = expected_html(datapath, expected)
+ assert result == expected
+
+
+@pytest.mark.parametrize('index_is_named', [True, False])
+def test_to_html_multiindex_index_false(index_is_named, datapath):
+ # GH 8452
+ df = DataFrame({
+ 'a': range(2),
+ 'b': range(3, 5),
+ 'c': range(5, 7),
+ 'd': range(3, 5)
+ })
+ df.columns = MultiIndex.from_product([['a', 'b'], ['c', 'd']])
+ if index_is_named:
df.index = Index(df.index.values, name='idx')
- result = df.to_html(index=False)
- assert result == expected
-
- def test_to_html_multiindex_sparsify_false_multi_sparse(self, datapath):
- with option_context('display.multi_sparse', False):
- index = MultiIndex.from_arrays([[0, 0, 1, 1], [0, 1, 0, 1]],
- names=['foo', None])
-
- df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], index=index)
- result = df.to_html()
- expected = expected_html(
- datapath, 'multiindex_sparsify_false_multi_sparse_1')
- assert result == expected
-
- df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]],
- columns=index[::2], index=index)
- result = df.to_html()
- expected = expected_html(
- datapath, 'multiindex_sparsify_false_multi_sparse_2')
- assert result == expected
-
- def test_to_html_multiindex_sparsify(self, datapath):
- index = MultiIndex.from_arrays([[0, 0, 1, 1], [0, 1, 0, 1]],
- names=['foo', None])
-
- df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], index=index)
- result = df.to_html()
- expected = expected_html(datapath, 'multiindex_sparsify_1')
- assert result == expected
-
- df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], columns=index[::2],
- index=index)
+ result = df.to_html(index=False)
+ expected = expected_html(datapath, 'gh8452_expected_output')
+ assert result == expected
+
+
+@pytest.mark.parametrize('multi_sparse,expected', [
+ (False, 'multiindex_sparsify_false_multi_sparse_1'),
+ (False, 'multiindex_sparsify_false_multi_sparse_2'),
+ (True, 'multiindex_sparsify_1'),
+ (True, 'multiindex_sparsify_2')
+])
+def test_to_html_multiindex_sparsify(multi_sparse, expected, datapath):
+ index = MultiIndex.from_arrays([[0, 0, 1, 1], [0, 1, 0, 1]],
+ names=['foo', None])
+ df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], index=index)
+ if expected.endswith('2'):
+ df.columns = index[::2]
+ with option_context('display.multi_sparse', multi_sparse):
result = df.to_html()
- expected = expected_html(datapath, 'multiindex_sparsify_2')
- assert result == expected
-
- def test_to_html_multiindex_odd_even_truncate(self, datapath):
- # GH 14882 - Issue on truncation with odd length DataFrame
- mi = MultiIndex.from_product([[100, 200, 300],
- [10, 20, 30],
- [1, 2, 3, 4, 5, 6, 7]],
- names=['a', 'b', 'c'])
- df = DataFrame({'n': range(len(mi))}, index=mi)
- result = df.to_html(max_rows=60)
- expected = expected_html(datapath, 'gh14882_expected_output_1')
- assert result == expected
-
- # Test that ... appears in a middle level
- result = df.to_html(max_rows=56)
- expected = expected_html(datapath, 'gh14882_expected_output_2')
- assert result == expected
-
- def test_to_html_index_formatter(self, datapath):
- df = DataFrame([[0, 1], [2, 3], [4, 5], [6, 7]], columns=['foo', None],
- index=lrange(4))
-
- f = lambda x: 'abcd' [x]
- result = df.to_html(formatters={'__index__': f})
- expected = expected_html(datapath, 'index_formatter')
- assert result == expected
-
- def test_to_html_datetime64_monthformatter(self, datapath):
- months = [datetime(2016, 1, 1), datetime(2016, 2, 2)]
- x = DataFrame({'months': months})
-
- def format_func(x):
- return x.strftime('%Y-%m')
- result = x.to_html(formatters={'months': format_func})
- expected = expected_html(datapath, 'datetime64_monthformatter')
- assert result == expected
-
- def test_to_html_datetime64_hourformatter(self, datapath):
-
- x = DataFrame({'hod': pd.to_datetime(['10:10:10.100', '12:12:12.120'],
- format='%H:%M:%S.%f')})
-
- def format_func(x):
- return x.strftime('%H:%M')
- result = x.to_html(formatters={'hod': format_func})
- expected = expected_html(datapath, 'datetime64_hourformatter')
- assert result == expected
-
- def test_to_html_regression_GH6098(self):
- df = DataFrame({
- u('clé1'): [u('a'), u('a'), u('b'), u('b'), u('a')],
- u('clé2'): [u('1er'), u('2ème'), u('1er'), u('2ème'), u('1er')],
- 'données1': np.random.randn(5),
- 'données2': np.random.randn(5)})
-
- # it works
- df.pivot_table(index=[u('clé1')], columns=[u('clé2')])._repr_html_()
-
- def test_to_html_truncate(self, datapath):
- index = pd.date_range(start='20010101', freq='D', periods=20)
- df = DataFrame(index=index, columns=range(20))
- result = df.to_html(max_rows=8, max_cols=4)
- expected = expected_html(datapath, 'truncate')
- assert result == expected
-
- def test_to_html_truncate_multi_index(self, datapath):
- arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
- ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
- df = DataFrame(index=arrays, columns=arrays)
- result = df.to_html(max_rows=7, max_cols=7)
- expected = expected_html(datapath, 'truncate_multi_index')
- assert result == expected
-
- def test_to_html_truncate_multi_index_sparse_off(self, datapath):
- arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
- ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
- df = DataFrame(index=arrays, columns=arrays)
- result = df.to_html(max_rows=7, max_cols=7, sparsify=False)
- expected = expected_html(datapath, 'truncate_multi_index_sparse_off')
- assert result == expected
-
- def test_to_html_border(self):
- df = DataFrame({'A': [1, 2]})
- result = df.to_html()
- assert 'border="1"' in result
-
- def test_to_html_border_option(self):
- df = DataFrame({'A': [1, 2]})
- with option_context('display.html.border', 0):
- result = df.to_html()
- assert 'border="0"' in result
- assert 'border="0"' in df._repr_html_()
-
- def test_to_html_border_zero(self):
- df = DataFrame({'A': [1, 2]})
- result = df.to_html(border=0)
- assert 'border="0"' in result
-
- def test_display_option_warning(self):
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- pd.options.html.border
-
- def test_to_html(self):
- # big mixed
- biggie = DataFrame({'A': np.random.randn(200),
- 'B': tm.makeStringIndex(200)},
- index=lrange(200))
-
- biggie.loc[:20, 'A'] = np.nan
- biggie.loc[:20, 'B'] = np.nan
- s = biggie.to_html()
-
- buf = StringIO()
- retval = biggie.to_html(buf=buf)
- assert retval is None
- assert buf.getvalue() == s
-
- assert isinstance(s, compat.string_types)
-
- biggie.to_html(columns=['B', 'A'], col_space=17)
- biggie.to_html(columns=['B', 'A'],
- formatters={'A': lambda x: '{x:.1f}'.format(x=x)})
-
- biggie.to_html(columns=['B', 'A'], float_format=str)
- biggie.to_html(columns=['B', 'A'], col_space=12, float_format=str)
-
- frame = DataFrame(index=np.arange(200))
- frame.to_html()
-
- def test_to_html_filename(self):
- biggie = DataFrame({'A': np.random.randn(200),
- 'B': tm.makeStringIndex(200)},
- index=lrange(200))
-
- biggie.loc[:20, 'A'] = np.nan
- biggie.loc[:20, 'B'] = np.nan
- with tm.ensure_clean('test.html') as path:
- biggie.to_html(path)
- with open(path, 'r') as f:
- s = biggie.to_html()
- s2 = f.read()
- assert s == s2
-
- frame = DataFrame(index=np.arange(200))
- with tm.ensure_clean('test.html') as path:
- frame.to_html(path)
- with open(path, 'r') as f:
- assert frame.to_html() == f.read()
-
- def test_to_html_with_no_bold(self):
- x = DataFrame({'x': np.random.randn(5)})
- ashtml = x.to_html(bold_rows=False)
- assert '<strong' not in ashtml[ashtml.find("</thead>")]
-
- def test_to_html_columns_arg(self):
- frame = DataFrame(tm.getSeriesData())
- result = frame.to_html(columns=['A'])
- assert '<th>B</th>' not in result
-
- def test_to_html_multiindex(self, datapath):
- columns = MultiIndex.from_tuples(list(zip(np.arange(2).repeat(2),
- np.mod(lrange(4), 2))),
- names=['CL0', 'CL1'])
- df = DataFrame([list('abcd'), list('efgh')], columns=columns)
- result = df.to_html(justify='left')
- expected = expected_html(datapath, 'multiindex_1')
- assert result == expected
-
- columns = MultiIndex.from_tuples(list(zip(
- range(4), np.mod(
- lrange(4), 2))))
- df = DataFrame([list('abcd'), list('efgh')], columns=columns)
-
- result = df.to_html(justify='right')
- expected = expected_html(datapath, 'multiindex_2')
- assert result == expected
-
- @pytest.mark.parametrize("justify", fmt._VALID_JUSTIFY_PARAMETERS)
- def test_to_html_justify(self, justify, datapath):
- df = DataFrame({'A': [6, 30000, 2],
- 'B': [1, 2, 70000],
- 'C': [223442, 0, 1]},
- columns=['A', 'B', 'C'])
- result = df.to_html(justify=justify)
- expected = expected_html(datapath, 'justify').format(justify=justify)
- assert result == expected
-
- @pytest.mark.parametrize("justify", ["super-right", "small-left",
- "noinherit", "tiny", "pandas"])
- def test_to_html_invalid_justify(self, justify):
- # GH 17527
- df = DataFrame()
- msg = "Invalid value for justify parameter"
-
- with pytest.raises(ValueError, match=msg):
- df.to_html(justify=justify)
-
- def test_to_html_index(self, datapath):
- index = ['foo', 'bar', 'baz']
- df = DataFrame({'A': [1, 2, 3],
- 'B': [1.2, 3.4, 5.6],
- 'C': ['one', 'two', np.nan]},
- columns=['A', 'B', 'C'],
- index=index)
- expected_with_index = expected_html(datapath, 'index_1')
- assert df.to_html() == expected_with_index
-
- expected_without_index = expected_html(datapath, 'index_2')
- result = df.to_html(index=False)
- for i in index:
- assert i not in result
- assert result == expected_without_index
- df.index = Index(['foo', 'bar', 'baz'], name='idx')
- expected_with_index = expected_html(datapath, 'index_3')
- assert df.to_html() == expected_with_index
- assert df.to_html(index=False) == expected_without_index
-
- tuples = [('foo', 'car'), ('foo', 'bike'), ('bar', 'car')]
- df.index = MultiIndex.from_tuples(tuples)
-
- expected_with_index = expected_html(datapath, 'index_4')
- assert df.to_html() == expected_with_index
-
- result = df.to_html(index=False)
- for i in ['foo', 'bar', 'car', 'bike']:
- assert i not in result
- # must be the same result as normal index
- assert result == expected_without_index
-
- df.index = MultiIndex.from_tuples(tuples, names=['idx1', 'idx2'])
- expected_with_index = expected_html(datapath, 'index_5')
- assert df.to_html() == expected_with_index
- assert df.to_html(index=False) == expected_without_index
-
- def test_to_html_with_classes(self, datapath):
- df = DataFrame()
- result = df.to_html(classes="sortable draggable")
- expected = expected_html(datapath, 'with_classes')
- assert result == expected
-
- result = df.to_html(classes=["sortable", "draggable"])
- assert result == expected
-
- def test_to_html_no_index_max_rows(self, datapath):
- # GH 14998
- df = DataFrame({"A": [1, 2, 3, 4]})
- result = df.to_html(index=False, max_rows=1)
- expected = expected_html(datapath, 'gh14998_expected_output')
- assert result == expected
-
- def test_to_html_multiindex_max_cols(self, datapath):
- # GH 6131
- index = MultiIndex(levels=[['ba', 'bb', 'bc'], ['ca', 'cb', 'cc']],
- codes=[[0, 1, 2], [0, 1, 2]],
- names=['b', 'c'])
- columns = MultiIndex(levels=[['d'], ['aa', 'ab', 'ac']],
- codes=[[0, 0, 0], [0, 1, 2]],
- names=[None, 'a'])
- data = np.array(
- [[1., np.nan, np.nan], [np.nan, 2., np.nan], [np.nan, np.nan, 3.]])
- df = DataFrame(data, index, columns)
- result = df.to_html(max_cols=2)
- expected = expected_html(datapath, 'gh6131_expected_output')
- assert result == expected
-
- def test_to_html_multi_indexes_index_false(self, datapath):
- # GH 22579
- df = DataFrame({'a': range(10), 'b': range(10, 20), 'c': range(10, 20),
- 'd': range(10, 20)})
- df.columns = MultiIndex.from_product([['a', 'b'], ['c', 'd']])
- df.index = MultiIndex.from_product([['a', 'b'],
- ['c', 'd', 'e', 'f', 'g']])
- result = df.to_html(index=False)
- expected = expected_html(datapath, 'gh22579_expected_output')
- assert result == expected
-
- @pytest.mark.parametrize('index_names', [True, False])
- @pytest.mark.parametrize('header', [True, False])
- @pytest.mark.parametrize('index', [True, False])
- @pytest.mark.parametrize('column_index, column_type', [
- (Index([0, 1]), 'unnamed_standard'),
- (Index([0, 1], name='columns.name'), 'named_standard'),
- (MultiIndex.from_product([['a'], ['b', 'c']]), 'unnamed_multi'),
- (MultiIndex.from_product(
- [['a'], ['b', 'c']], names=['columns.name.0',
- 'columns.name.1']), 'named_multi')
- ])
- @pytest.mark.parametrize('row_index, row_type', [
- (Index([0, 1]), 'unnamed_standard'),
- (Index([0, 1], name='index.name'), 'named_standard'),
- (MultiIndex.from_product([['a'], ['b', 'c']]), 'unnamed_multi'),
- (MultiIndex.from_product(
- [['a'], ['b', 'c']], names=['index.name.0',
- 'index.name.1']), 'named_multi')
- ])
- def test_to_html_basic_alignment(
- self, datapath, row_index, row_type, column_index, column_type,
- index, header, index_names):
- # GH 22747, GH 22579
- df = DataFrame(np.zeros((2, 2), dtype=int),
- index=row_index, columns=column_index)
- result = df.to_html(
- index=index, header=header, index_names=index_names)
-
- if not index:
- row_type = 'none'
- elif not index_names and row_type.startswith('named'):
- row_type = 'un' + row_type
-
- if not header:
- column_type = 'none'
- elif not index_names and column_type.startswith('named'):
- column_type = 'un' + column_type
-
- filename = 'index_' + row_type + '_columns_' + column_type
- expected = expected_html(datapath, filename)
- assert result == expected
-
- @pytest.mark.parametrize('index_names', [True, False])
- @pytest.mark.parametrize('header', [True, False])
- @pytest.mark.parametrize('index', [True, False])
- @pytest.mark.parametrize('column_index, column_type', [
- (Index(np.arange(8)), 'unnamed_standard'),
- (Index(np.arange(8), name='columns.name'), 'named_standard'),
- (MultiIndex.from_product(
- [['a', 'b'], ['c', 'd'], ['e', 'f']]), 'unnamed_multi'),
- (MultiIndex.from_product(
- [['a', 'b'], ['c', 'd'], ['e', 'f']], names=['foo', None, 'baz']),
- 'named_multi')
- ])
- @pytest.mark.parametrize('row_index, row_type', [
- (Index(np.arange(8)), 'unnamed_standard'),
- (Index(np.arange(8), name='index.name'), 'named_standard'),
- (MultiIndex.from_product(
- [['a', 'b'], ['c', 'd'], ['e', 'f']]), 'unnamed_multi'),
- (MultiIndex.from_product(
- [['a', 'b'], ['c', 'd'], ['e', 'f']], names=['foo', None, 'baz']),
- 'named_multi')
- ])
- def test_to_html_alignment_with_truncation(
- self, datapath, row_index, row_type, column_index, column_type,
- index, header, index_names):
- # GH 22747, GH 22579
- df = DataFrame(np.arange(64).reshape(8, 8),
- index=row_index, columns=column_index)
- result = df.to_html(
- max_rows=4, max_cols=4,
- index=index, header=header, index_names=index_names)
-
- if not index:
- row_type = 'none'
- elif not index_names and row_type.startswith('named'):
- row_type = 'un' + row_type
-
- if not header:
- column_type = 'none'
- elif not index_names and column_type.startswith('named'):
- column_type = 'un' + column_type
-
- filename = 'trunc_df_index_' + row_type + '_columns_' + column_type
- expected = expected_html(datapath, filename)
- assert result == expected
-
- @pytest.mark.parametrize('index', [False, 0])
- def test_to_html_truncation_index_false_max_rows(self, datapath, index):
- # GH 15019
- data = [[1.764052, 0.400157],
- [0.978738, 2.240893],
- [1.867558, -0.977278],
- [0.950088, -0.151357],
- [-0.103219, 0.410599]]
- df = DataFrame(data)
- result = df.to_html(max_rows=4, index=index)
- expected = expected_html(datapath, 'gh15019_expected_output')
- assert result == expected
-
- @pytest.mark.parametrize('index', [False, 0])
- @pytest.mark.parametrize('col_index_named, expected_output', [
- (False, 'gh22783_expected_output'),
- (True, 'gh22783_named_columns_index')
- ])
- def test_to_html_truncation_index_false_max_cols(
- self, datapath, index, col_index_named, expected_output):
- # GH 22783
- data = [[1.764052, 0.400157, 0.978738, 2.240893, 1.867558],
- [-0.977278, 0.950088, -0.151357, -0.103219, 0.410599]]
- df = DataFrame(data)
- if col_index_named:
- df.columns.rename('columns.name', inplace=True)
- result = df.to_html(max_cols=4, index=index)
- expected = expected_html(datapath, expected_output)
- assert result == expected
-
- def test_to_html_notebook_has_style(self):
- df = DataFrame({"A": [1, 2, 3]})
- result = df.to_html(notebook=True)
+ expected = expected_html(datapath, expected)
+ assert result == expected
+
+
+@pytest.mark.parametrize('max_rows,expected', [
+ (60, 'gh14882_expected_output_1'),
+
+ # Test that ... appears in a middle level
+ (56, 'gh14882_expected_output_2')
+])
+def test_to_html_multiindex_odd_even_truncate(max_rows, expected, datapath):
+ # GH 14882 - Issue on truncation with odd length DataFrame
+ index = MultiIndex.from_product([[100, 200, 300],
+ [10, 20, 30],
+ [1, 2, 3, 4, 5, 6, 7]],
+ names=['a', 'b', 'c'])
+ df = DataFrame({'n': range(len(index))}, index=index)
+ result = df.to_html(max_rows=max_rows)
+ expected = expected_html(datapath, expected)
+ assert result == expected
+
+
+@pytest.mark.parametrize('df,formatters,expected', [
+ (DataFrame(
+ [[0, 1], [2, 3], [4, 5], [6, 7]],
+ columns=['foo', None], index=lrange(4)),
+    {'__index__': lambda x: 'abcd'[x]},
+ 'index_formatter'),
+
+ (DataFrame(
+ {'months': [datetime(2016, 1, 1), datetime(2016, 2, 2)]}),
+ {'months': lambda x: x.strftime('%Y-%m')},
+ 'datetime64_monthformatter'),
+
+ (DataFrame({'hod': pd.to_datetime(['10:10:10.100', '12:12:12.120'],
+ format='%H:%M:%S.%f')}),
+ {'hod': lambda x: x.strftime('%H:%M')},
+ 'datetime64_hourformatter')
+])
+def test_to_html_formatters(df, formatters, expected, datapath):
+ expected = expected_html(datapath, expected)
+ result = df.to_html(formatters=formatters)
+ assert result == expected
+
+
+def test_to_html_regression_GH6098():
+ df = DataFrame({
+ u('clé1'): [u('a'), u('a'), u('b'), u('b'), u('a')],
+ u('clé2'): [u('1er'), u('2ème'), u('1er'), u('2ème'), u('1er')],
+ 'données1': np.random.randn(5),
+ 'données2': np.random.randn(5)})
+
+ # it works
+ df.pivot_table(index=[u('clé1')], columns=[u('clé2')])._repr_html_()
+
+
+def test_to_html_truncate(datapath):
+ index = pd.date_range(start='20010101', freq='D', periods=20)
+ df = DataFrame(index=index, columns=range(20))
+ result = df.to_html(max_rows=8, max_cols=4)
+ expected = expected_html(datapath, 'truncate')
+ assert result == expected
+
+
+@pytest.mark.parametrize('sparsify,expected', [
+ (True, 'truncate_multi_index'),
+ (False, 'truncate_multi_index_sparse_off')
+])
+def test_to_html_truncate_multi_index(sparsify, expected, datapath):
+ arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
+ ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
+ df = DataFrame(index=arrays, columns=arrays)
+ result = df.to_html(max_rows=7, max_cols=7, sparsify=sparsify)
+ expected = expected_html(datapath, expected)
+ assert result == expected
+
+
+@pytest.mark.parametrize('option,result,expected', [
+ (None, lambda df: df.to_html(), '1'),
+ (None, lambda df: df.to_html(border=0), '0'),
+ (0, lambda df: df.to_html(), '0'),
+ (0, lambda df: df._repr_html_(), '0'),
+])
+def test_to_html_border(option, result, expected):
+ df = DataFrame({'A': [1, 2]})
+ if option is None:
+ result = result(df)
+ else:
+ with option_context('display.html.border', option):
+ result = result(df)
+ expected = 'border="{}"'.format(expected)
+ assert expected in result
+
+
+def test_display_option_warning():
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ pd.options.html.border
+
+
+@pytest.mark.parametrize('biggie_df_fixture', ['mixed'], indirect=True)
+def test_to_html(biggie_df_fixture):
+ # TODO: split this test
+ df = biggie_df_fixture
+ s = df.to_html()
+
+ buf = StringIO()
+ retval = df.to_html(buf=buf)
+ assert retval is None
+ assert buf.getvalue() == s
+
+ assert isinstance(s, compat.string_types)
+
+ df.to_html(columns=['B', 'A'], col_space=17)
+ df.to_html(columns=['B', 'A'],
+ formatters={'A': lambda x: '{x:.1f}'.format(x=x)})
+
+ df.to_html(columns=['B', 'A'], float_format=str)
+ df.to_html(columns=['B', 'A'], col_space=12, float_format=str)
+
+
+@pytest.mark.parametrize('biggie_df_fixture', ['empty'], indirect=True)
+def test_to_html_empty_dataframe(biggie_df_fixture):
+ df = biggie_df_fixture
+ df.to_html()
+
+
+def test_to_html_filename(biggie_df_fixture, tmpdir):
+ df = biggie_df_fixture
+ expected = df.to_html()
+ path = tmpdir.join('test.html')
+ df.to_html(path)
+ result = path.read()
+ assert result == expected
+
+
+def test_to_html_with_no_bold():
+ df = DataFrame({'x': np.random.randn(5)})
+ html = df.to_html(bold_rows=False)
+ result = html[html.find("</thead>")]
+ assert '<strong' not in result
+
+
+def test_to_html_columns_arg():
+ df = DataFrame(tm.getSeriesData())
+ result = df.to_html(columns=['A'])
+ assert '<th>B</th>' not in result
+
+
+@pytest.mark.parametrize('columns,justify,expected', [
+ (MultiIndex.from_tuples(
+ list(zip(np.arange(2).repeat(2), np.mod(lrange(4), 2))),
+ names=['CL0', 'CL1']),
+ 'left',
+ 'multiindex_1'),
+
+ (MultiIndex.from_tuples(
+ list(zip(range(4), np.mod(lrange(4), 2)))),
+ 'right',
+ 'multiindex_2')
+])
+def test_to_html_multiindex(columns, justify, expected, datapath):
+ df = DataFrame([list('abcd'), list('efgh')], columns=columns)
+ result = df.to_html(justify=justify)
+ expected = expected_html(datapath, expected)
+ assert result == expected
+
+
+def test_to_html_justify(justify, datapath):
+ df = DataFrame({'A': [6, 30000, 2],
+ 'B': [1, 2, 70000],
+ 'C': [223442, 0, 1]},
+ columns=['A', 'B', 'C'])
+ result = df.to_html(justify=justify)
+ expected = expected_html(datapath, 'justify').format(justify=justify)
+ assert result == expected
+
+
+@pytest.mark.parametrize("justify", ["super-right", "small-left",
+ "noinherit", "tiny", "pandas"])
+def test_to_html_invalid_justify(justify):
+ # GH 17527
+ df = DataFrame()
+ msg = "Invalid value for justify parameter"
+
+ with pytest.raises(ValueError, match=msg):
+ df.to_html(justify=justify)
+
+
+def test_to_html_index(datapath):
+ # TODO: split this test
+ index = ['foo', 'bar', 'baz']
+ df = DataFrame({'A': [1, 2, 3],
+ 'B': [1.2, 3.4, 5.6],
+ 'C': ['one', 'two', np.nan]},
+ columns=['A', 'B', 'C'],
+ index=index)
+ expected_with_index = expected_html(datapath, 'index_1')
+ assert df.to_html() == expected_with_index
+
+ expected_without_index = expected_html(datapath, 'index_2')
+ result = df.to_html(index=False)
+ for i in index:
+ assert i not in result
+ assert result == expected_without_index
+ df.index = Index(['foo', 'bar', 'baz'], name='idx')
+ expected_with_index = expected_html(datapath, 'index_3')
+ assert df.to_html() == expected_with_index
+ assert df.to_html(index=False) == expected_without_index
+
+ tuples = [('foo', 'car'), ('foo', 'bike'), ('bar', 'car')]
+ df.index = MultiIndex.from_tuples(tuples)
+
+ expected_with_index = expected_html(datapath, 'index_4')
+ assert df.to_html() == expected_with_index
+
+ result = df.to_html(index=False)
+ for i in ['foo', 'bar', 'car', 'bike']:
+ assert i not in result
+ # must be the same result as normal index
+ assert result == expected_without_index
+
+ df.index = MultiIndex.from_tuples(tuples, names=['idx1', 'idx2'])
+ expected_with_index = expected_html(datapath, 'index_5')
+ assert df.to_html() == expected_with_index
+ assert df.to_html(index=False) == expected_without_index
+
+
+@pytest.mark.parametrize('classes', [
+ "sortable draggable",
+ ["sortable", "draggable"]
+])
+def test_to_html_with_classes(classes, datapath):
+ df = DataFrame()
+ expected = expected_html(datapath, 'with_classes')
+ result = df.to_html(classes=classes)
+ assert result == expected
+
+
+def test_to_html_no_index_max_rows(datapath):
+ # GH 14998
+ df = DataFrame({"A": [1, 2, 3, 4]})
+ result = df.to_html(index=False, max_rows=1)
+ expected = expected_html(datapath, 'gh14998_expected_output')
+ assert result == expected
+
+
+def test_to_html_multiindex_max_cols(datapath):
+ # GH 6131
+ index = MultiIndex(levels=[['ba', 'bb', 'bc'], ['ca', 'cb', 'cc']],
+ codes=[[0, 1, 2], [0, 1, 2]],
+ names=['b', 'c'])
+ columns = MultiIndex(levels=[['d'], ['aa', 'ab', 'ac']],
+ codes=[[0, 0, 0], [0, 1, 2]],
+ names=[None, 'a'])
+ data = np.array(
+ [[1., np.nan, np.nan], [np.nan, 2., np.nan], [np.nan, np.nan, 3.]])
+ df = DataFrame(data, index, columns)
+ result = df.to_html(max_cols=2)
+ expected = expected_html(datapath, 'gh6131_expected_output')
+ assert result == expected
+
+
+def test_to_html_multi_indexes_index_false(datapath):
+ # GH 22579
+ df = DataFrame({'a': range(10), 'b': range(10, 20), 'c': range(10, 20),
+ 'd': range(10, 20)})
+ df.columns = MultiIndex.from_product([['a', 'b'], ['c', 'd']])
+ df.index = MultiIndex.from_product([['a', 'b'],
+ ['c', 'd', 'e', 'f', 'g']])
+ result = df.to_html(index=False)
+ expected = expected_html(datapath, 'gh22579_expected_output')
+ assert result == expected
+
+
+@pytest.mark.parametrize('index_names', [True, False])
+@pytest.mark.parametrize('header', [True, False])
+@pytest.mark.parametrize('index', [True, False])
+@pytest.mark.parametrize('column_index, column_type', [
+ (Index([0, 1]), 'unnamed_standard'),
+ (Index([0, 1], name='columns.name'), 'named_standard'),
+ (MultiIndex.from_product([['a'], ['b', 'c']]), 'unnamed_multi'),
+ (MultiIndex.from_product(
+ [['a'], ['b', 'c']], names=['columns.name.0',
+ 'columns.name.1']), 'named_multi')
+])
+@pytest.mark.parametrize('row_index, row_type', [
+ (Index([0, 1]), 'unnamed_standard'),
+ (Index([0, 1], name='index.name'), 'named_standard'),
+ (MultiIndex.from_product([['a'], ['b', 'c']]), 'unnamed_multi'),
+ (MultiIndex.from_product(
+ [['a'], ['b', 'c']], names=['index.name.0',
+ 'index.name.1']), 'named_multi')
+])
+def test_to_html_basic_alignment(
+ datapath, row_index, row_type, column_index, column_type,
+ index, header, index_names):
+ # GH 22747, GH 22579
+ df = DataFrame(np.zeros((2, 2), dtype=int),
+ index=row_index, columns=column_index)
+ result = df.to_html(
+ index=index, header=header, index_names=index_names)
+
+ if not index:
+ row_type = 'none'
+ elif not index_names and row_type.startswith('named'):
+ row_type = 'un' + row_type
+
+ if not header:
+ column_type = 'none'
+ elif not index_names and column_type.startswith('named'):
+ column_type = 'un' + column_type
+
+ filename = 'index_' + row_type + '_columns_' + column_type
+ expected = expected_html(datapath, filename)
+ assert result == expected
+
+
+@pytest.mark.parametrize('index_names', [True, False])
+@pytest.mark.parametrize('header', [True, False])
+@pytest.mark.parametrize('index', [True, False])
+@pytest.mark.parametrize('column_index, column_type', [
+ (Index(np.arange(8)), 'unnamed_standard'),
+ (Index(np.arange(8), name='columns.name'), 'named_standard'),
+ (MultiIndex.from_product(
+ [['a', 'b'], ['c', 'd'], ['e', 'f']]), 'unnamed_multi'),
+ (MultiIndex.from_product(
+ [['a', 'b'], ['c', 'd'], ['e', 'f']], names=['foo', None, 'baz']),
+ 'named_multi')
+])
+@pytest.mark.parametrize('row_index, row_type', [
+ (Index(np.arange(8)), 'unnamed_standard'),
+ (Index(np.arange(8), name='index.name'), 'named_standard'),
+ (MultiIndex.from_product(
+ [['a', 'b'], ['c', 'd'], ['e', 'f']]), 'unnamed_multi'),
+ (MultiIndex.from_product(
+ [['a', 'b'], ['c', 'd'], ['e', 'f']], names=['foo', None, 'baz']),
+ 'named_multi')
+])
+def test_to_html_alignment_with_truncation(
+ datapath, row_index, row_type, column_index, column_type,
+ index, header, index_names):
+ # GH 22747, GH 22579
+ df = DataFrame(np.arange(64).reshape(8, 8),
+ index=row_index, columns=column_index)
+ result = df.to_html(
+ max_rows=4, max_cols=4,
+ index=index, header=header, index_names=index_names)
+
+ if not index:
+ row_type = 'none'
+ elif not index_names and row_type.startswith('named'):
+ row_type = 'un' + row_type
+
+ if not header:
+ column_type = 'none'
+ elif not index_names and column_type.startswith('named'):
+ column_type = 'un' + column_type
+
+ filename = 'trunc_df_index_' + row_type + '_columns_' + column_type
+ expected = expected_html(datapath, filename)
+ assert result == expected
+
+
+@pytest.mark.parametrize('index', [False, 0])
+def test_to_html_truncation_index_false_max_rows(datapath, index):
+ # GH 15019
+ data = [[1.764052, 0.400157],
+ [0.978738, 2.240893],
+ [1.867558, -0.977278],
+ [0.950088, -0.151357],
+ [-0.103219, 0.410599]]
+ df = DataFrame(data)
+ result = df.to_html(max_rows=4, index=index)
+ expected = expected_html(datapath, 'gh15019_expected_output')
+ assert result == expected
+
+
+@pytest.mark.parametrize('index', [False, 0])
+@pytest.mark.parametrize('col_index_named, expected_output', [
+ (False, 'gh22783_expected_output'),
+ (True, 'gh22783_named_columns_index')
+])
+def test_to_html_truncation_index_false_max_cols(
+ datapath, index, col_index_named, expected_output):
+ # GH 22783
+ data = [[1.764052, 0.400157, 0.978738, 2.240893, 1.867558],
+ [-0.977278, 0.950088, -0.151357, -0.103219, 0.410599]]
+ df = DataFrame(data)
+ if col_index_named:
+ df.columns.rename('columns.name', inplace=True)
+ result = df.to_html(max_cols=4, index=index)
+ expected = expected_html(datapath, expected_output)
+ assert result == expected
+
+
+@pytest.mark.parametrize('notebook', [True, False])
+def test_to_html_notebook_has_style(notebook):
+ df = DataFrame({"A": [1, 2, 3]})
+ result = df.to_html(notebook=notebook)
+
+ if notebook:
assert "tbody tr th:only-of-type" in result
assert "vertical-align: middle;" in result
assert "thead th" in result
-
- def test_to_html_notebook_has_no_style(self):
- df = DataFrame({"A": [1, 2, 3]})
- result = df.to_html()
+ else:
assert "tbody tr th:only-of-type" not in result
assert "vertical-align: middle;" not in result
assert "thead th" not in result
- def test_to_html_with_index_names_false(self):
- # GH 16493
- df = DataFrame({"A": [1, 2]}, index=Index(['a', 'b'],
- name='myindexname'))
- result = df.to_html(index_names=False)
- assert 'myindexname' not in result
-
- def test_to_html_with_id(self):
- # GH 8496
- df = DataFrame({"A": [1, 2]}, index=Index(['a', 'b'],
- name='myindexname'))
- result = df.to_html(index_names=False, table_id="TEST_ID")
- assert ' id="TEST_ID"' in result
-
- def test_to_html_float_format_no_fixed_width(self, datapath):
-
- # GH 21625
- df = DataFrame({'x': [0.19999]})
- expected = expected_html(datapath, 'gh21625_expected_output')
- assert df.to_html(float_format='%.3f') == expected
-
- # GH 22270
- df = DataFrame({'x': [100.0]})
- expected = expected_html(datapath, 'gh22270_expected_output')
- assert df.to_html(float_format='%.0f') == expected
-
- @pytest.mark.parametrize("render_links, file_name", [
- (True, 'render_links_true'),
- (False, 'render_links_false'),
- ])
- def test_to_html_render_links(self, render_links, file_name, datapath):
- # GH 2679
- data = [
- [0, 'http://pandas.pydata.org/?q1=a&q2=b', 'pydata.org'],
- [0, 'www.pydata.org', 'pydata.org']
- ]
- df = DataFrame(data, columns=['foo', 'bar', None])
-
- result = df.to_html(render_links=render_links)
- expected = expected_html(datapath, file_name)
- assert result == expected
+
+def test_to_html_with_index_names_false():
+ # GH 16493
+ df = DataFrame({"A": [1, 2]}, index=Index(['a', 'b'],
+ name='myindexname'))
+ result = df.to_html(index_names=False)
+ assert 'myindexname' not in result
+
+
+def test_to_html_with_id():
+ # GH 8496
+ df = DataFrame({"A": [1, 2]}, index=Index(['a', 'b'],
+ name='myindexname'))
+ result = df.to_html(index_names=False, table_id="TEST_ID")
+ assert ' id="TEST_ID"' in result
+
+
+@pytest.mark.parametrize('value,float_format,expected', [
+ (0.19999, '%.3f', 'gh21625_expected_output'),
+ (100.0, '%.0f', 'gh22270_expected_output'),
+])
+def test_to_html_float_format_no_fixed_width(
+ value, float_format, expected, datapath):
+ # GH 21625, GH 22270
+ df = DataFrame({'x': [value]})
+ expected = expected_html(datapath, expected)
+ result = df.to_html(float_format=float_format)
+ assert result == expected
+
+
+@pytest.mark.parametrize("render_links,expected", [
+ (True, 'render_links_true'),
+ (False, 'render_links_false'),
+])
+def test_to_html_render_links(render_links, expected, datapath):
+ # GH 2679
+ data = [
+ [0, 'http://pandas.pydata.org/?q1=a&q2=b', 'pydata.org'],
+ [0, 'www.pydata.org', 'pydata.org']
+ ]
+ df = DataFrame(data, columns=['foo', 'bar', None])
+
+ result = df.to_html(render_links=render_links)
+ expected = expected_html(datapath, expected)
+ assert result == expected
| https://api.github.com/repos/pandas-dev/pandas/pulls/24609 | 2019-01-04T12:09:52Z | 2019-01-04T13:55:12Z | 2019-01-04T13:55:12Z | 2019-01-04T17:41:48Z | |
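The record above converts class-based `to_html` truncation tests into parametrized functions. As a quick illustration of the behavior under test (a sketch against current pandas, not part of the diff): truncated rows and columns are rendered as `...` placeholder cells, including when `index=False` (the GH 15019 / GH 22783 cases).

```python
import numpy as np
import pandas as pd

# Build an 8x8 frame like the one in the parametrized tests, then truncate it
df = pd.DataFrame(np.arange(64).reshape(8, 8))
html = df.to_html(max_rows=4, max_cols=4)

# Truncated rows and columns show up as "..." placeholder cells
assert "..." in html

# Truncation also works without the index column (the index=False case)
no_index = df.to_html(max_rows=4, index=False)
assert "..." in no_index
```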
REF: dispatch Series.quantile to DataFrame, remove ScalarBlock | diff --git a/pandas/core/internals/__init__.py b/pandas/core/internals/__init__.py
index 7d6aa6a42efc2..7878613a8b1b1 100644
--- a/pandas/core/internals/__init__.py
+++ b/pandas/core/internals/__init__.py
@@ -5,8 +5,7 @@
make_block, # io.pytables, io.packers
FloatBlock, IntBlock, ComplexBlock, BoolBlock, ObjectBlock,
TimeDeltaBlock, DatetimeBlock, DatetimeTZBlock,
- CategoricalBlock, ExtensionBlock, ScalarBlock,
- Block)
+ CategoricalBlock, ExtensionBlock, Block)
from .managers import ( # noqa:F401
BlockManager, SingleBlockManager,
create_block_manager_from_arrays, create_block_manager_from_blocks,
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 384676ede15f2..f88114e1c9e20 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -222,12 +222,6 @@ def make_block(self, values, placement=None, ndim=None):
return make_block(values, placement=placement, ndim=ndim)
- def make_block_scalar(self, values):
- """
- Create a ScalarBlock
- """
- return ScalarBlock(values)
-
def make_block_same_class(self, values, placement=None, ndim=None,
dtype=None):
""" Wrap given values in a block of same type as self. """
@@ -1468,13 +1462,15 @@ def quantile(self, qs, interpolation='linear', axis=0):
else:
# create the array of na_values
# 2d len(values) * len(qs)
- result = np.repeat(np.array([self._na_value] * len(qs)),
+ result = np.repeat(np.array([self.fill_value] * len(qs)),
len(values)).reshape(len(values),
len(qs))
else:
- mask = isna(self.values)
+ # asarray needed for Sparse, see GH#24600
+ # TODO: Why self.values and not values?
+ mask = np.asarray(isna(self.values))
result = nanpercentile(values, np.array(qs) * 100,
- axis=axis, na_value=self._na_value,
+ axis=axis, na_value=self.fill_value,
mask=mask, ndim=self.ndim,
interpolation=interpolation)
@@ -1490,8 +1486,6 @@ def quantile(self, qs, interpolation='linear', axis=0):
ndim = getattr(result, 'ndim', None) or 0
result = self._try_coerce_result(result)
- if lib.is_scalar(result):
- return self.make_block_scalar(result)
return make_block(result,
placement=np.arange(len(result)),
ndim=ndim)
@@ -1534,29 +1528,6 @@ def _replace_coerce(self, to_replace, value, inplace=True, regex=False,
return self
-class ScalarBlock(Block):
- """
- a scalar compat Block
- """
- __slots__ = ['_mgr_locs', 'values', 'ndim']
-
- def __init__(self, values):
- self.ndim = 0
- self.mgr_locs = [0]
- self.values = values
-
- @property
- def dtype(self):
- return type(self.values)
-
- @property
- def shape(self):
- return tuple([0])
-
- def __len__(self):
- return 0
-
-
class NonConsolidatableMixIn(object):
""" hold methods for the nonconsolidatable blocks """
_can_consolidate = False
@@ -2675,7 +2646,7 @@ def convert(self, *args, **kwargs):
if args:
raise NotImplementedError
- by_item = True if 'by_item' not in kwargs else kwargs['by_item']
+ by_item = kwargs.get('by_item', True)
new_inputs = ['coerce', 'datetime', 'numeric', 'timedelta']
new_style = False
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 0ad0a994e8a95..ab033ff4c1c4b 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -425,6 +425,10 @@ def quantile(self, axis=0, consolidate=True, transposed=False,
Block Manager (new object)
"""
+ # Series dispatches to DataFrame for quantile, which allows us to
+ # simplify some of the code here and in the blocks
+ assert self.ndim >= 2
+
if consolidate:
self._consolidate_inplace()
@@ -449,6 +453,7 @@ def get_axe(block, qs, axes):
# note that some DatetimeTZ, Categorical are always ndim==1
ndim = {b.ndim for b in blocks}
+ assert 0 not in ndim, ndim
if 2 in ndim:
@@ -474,15 +479,7 @@ def get_axe(block, qs, axes):
return self.__class__(blocks, new_axes)
- # 0 ndim
- if 0 in ndim and 1 not in ndim:
- values = np.array([b.values for b in blocks])
- if len(values) == 1:
- return values.item()
- blocks = [make_block(values, ndim=1)]
- axes = Index([ax[0] for ax in axes])
-
- # single block
+ # single block, i.e. ndim == {1}
values = _concat._concat_compat([b.values for b in blocks])
# compute the orderings of our original data
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 46ff04fdd31ae..de34227cda28a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1987,15 +1987,23 @@ def quantile(self, q=0.5, interpolation='linear'):
self._check_percentile(q)
- result = self._data.quantile(qs=q, interpolation=interpolation)
+ # We dispatch to DataFrame so that core.internals only has to worry
+ # about 2D cases.
+ df = self.to_frame()
+
+ result = df.quantile(q=q, interpolation=interpolation,
+ numeric_only=False)
+ if result.ndim == 2:
+ result = result.iloc[:, 0]
if is_list_like(q):
+ result.name = self.name
return self._constructor(result,
index=Float64Index(q),
name=self.name)
else:
# scalar
- return result
+ return result.iloc[0]
def corr(self, other, method='pearson', min_periods=None):
"""
diff --git a/pandas/tests/resample/test_base.py b/pandas/tests/resample/test_base.py
index 31199dc01b659..0efd48c25ad62 100644
--- a/pandas/tests/resample/test_base.py
+++ b/pandas/tests/resample/test_base.py
@@ -218,5 +218,5 @@ def test_resample_quantile_all_ts(series):
q = 0.75
freq = 'H'
result = s.resample(freq).quantile(q)
- expected = s.resample(freq).agg(lambda x: x.quantile(q))
+ expected = s.resample(freq).agg(lambda x: x.quantile(q)).rename(s.name)
tm.assert_series_equal(result, expected)
| ScalarBlock exists because sometimes Block.quantile needs to return a scalar. By having Series dispatch to DataFrame, we simplify quantile in internals and get to remove ScalarBlock and make_block_scalar | https://api.github.com/repos/pandas-dev/pandas/pulls/24606 | 2019-01-04T02:09:48Z | 2019-01-04T12:11:57Z | 2019-01-04T12:11:57Z | 2019-01-04T16:20:05Z |
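The user-visible contract the dispatch preserves can be sketched as follows (against current pandas; `quantile` behaves the same whether or not it routes through DataFrame internally): a scalar `q` returns a scalar, while a list-like `q` returns a Series indexed by the quantiles and keeping the original name.

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0], name="x")

# A scalar q gives back a scalar (linear interpolation by default)
assert s.quantile(0.5) == 2.5

# A list-like q gives a Series indexed by q, preserving the Series name
res = s.quantile([0.25, 0.75])
assert list(res.index) == [0.25, 0.75]
assert res.name == "x"
```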
24024 follow-up: fix incorrectly accepting iNaT in validate_fill_value | diff --git a/pandas/_libs/algos_common_helper.pxi.in b/pandas/_libs/algos_common_helper.pxi.in
index 3708deb1a4b76..7d9ba420525c8 100644
--- a/pandas/_libs/algos_common_helper.pxi.in
+++ b/pandas/_libs/algos_common_helper.pxi.in
@@ -109,8 +109,6 @@ def ensure_object(object arr):
return arr
else:
return arr.astype(np.object_)
- elif hasattr(arr, '_box_values_as_index'):
- return arr._box_values_as_index()
else:
return np.array(arr, dtype=np.object_)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 9f1491bd68684..a55e8759deedb 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -588,7 +588,7 @@ def astype(self, dtype, copy=True):
@Appender(dtl.DatetimeLikeArrayMixin._validate_fill_value.__doc__)
def _validate_fill_value(self, fill_value):
- if isna(fill_value) or fill_value == iNaT:
+ if isna(fill_value):
fill_value = iNaT
elif isinstance(fill_value, (datetime, np.datetime64)):
self._assert_tzawareness_compat(fill_value)
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index b55bad46580fe..6696d6d4ca83e 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -5,7 +5,7 @@
import numpy as np
from pandas._libs import lib, tslib, tslibs
-from pandas._libs.tslibs import OutOfBoundsDatetime, Period, iNaT
+from pandas._libs.tslibs import NaT, OutOfBoundsDatetime, Period, iNaT
from pandas.compat import PY3, string_types, text_type, to_str
from .common import (
@@ -272,7 +272,7 @@ def maybe_promote(dtype, fill_value=np.nan):
fill_value = tslibs.Timedelta(fill_value).value
elif is_datetime64tz_dtype(dtype):
if isna(fill_value):
- fill_value = iNaT
+ fill_value = NaT
elif is_extension_array_dtype(dtype) and isna(fill_value):
fill_value = dtype.na_value
elif is_float(fill_value):
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index cfca5d1b7d2cc..082a314facdd6 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -203,15 +203,6 @@ def _ensure_localized(self, arg, ambiguous='raise', nonexistent='raise',
return type(self)._simple_new(result, name=self.name)
return arg
- def _box_values_as_index(self):
- """
- Return object Index which contains boxed values.
- """
- # XXX: this is broken (not called) for PeriodIndex, which doesn't
- # define _box_values AFAICT
- from pandas.core.index import Index
- return Index(self._box_values(self.asi8), name=self.name, dtype=object)
-
def _box_values(self, values):
return self._data._box_values(values)
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index f76999a0dbc32..db88d94be1cab 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -388,6 +388,10 @@ def test_take_fill_valid(self, datetime_index, tz_naive_fixture):
# Timestamp with mismatched tz-awareness
arr.take([-1, 1], allow_fill=True, fill_value=now)
+ with pytest.raises(ValueError):
+ # require NaT, not iNaT, as it could be confused with an integer
+ arr.take([-1, 1], allow_fill=True, fill_value=pd.NaT.value)
+
def test_concat_same_type_invalid(self, datetime_index):
# different timezones
dti = datetime_index
| remove box_values_as_index
xref #23833, #23982 for overhaul of maybe_promote testing | https://api.github.com/repos/pandas-dev/pandas/pulls/24605 | 2019-01-04T00:34:26Z | 2019-01-04T12:12:37Z | 2019-01-04T12:12:37Z | 2019-01-04T16:17:51Z |
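The point of the new test can be shown through the public `pd.array` constructor (a sketch against current pandas; the exact exception type raised for an integer fill value has varied between `ValueError` and `TypeError` across versions):

```python
import pandas as pd

arr = pd.array(pd.date_range("2019-01-01", periods=3))

# pd.NaT is a valid fill value for take(..., allow_fill=True)
filled = arr.take([-1, 1], allow_fill=True, fill_value=pd.NaT)
assert pd.isna(filled[0])

# The raw sentinel integer behind NaT (iNaT) must be rejected, since it
# could be confused with an ordinary integer fill value
iNaT = -2 ** 63
try:
    arr.take([-1, 1], allow_fill=True, fill_value=iNaT)
except (TypeError, ValueError):
    raised = True
else:
    raised = False
assert raised
```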
Fixed PeriodIndex._shallow_copy for i8 | diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index b59c32bb8a9d4..5e4dd2998a3be 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -322,13 +322,9 @@ def _shallow_copy(self, values=None, **kwargs):
# this quite a bit.
values = period_array(values, freq=self.freq)
- # I don't like overloading shallow_copy with freq changes.
- # See if it's used anywhere outside of test_resample_empty_dataframe
+ # We don't allow changing `freq` in _shallow_copy.
+ validate_dtype_freq(self.dtype, kwargs.get('freq'))
attributes = self._get_attributes_dict()
- freq = kwargs.pop("freq", None)
- if freq:
- values = values.asfreq(freq)
- attributes.pop("freq", None)
attributes.update(kwargs)
if not len(values) and 'dtype' not in kwargs:
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index ee9137c264edc..25604b29f22f6 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -404,7 +404,10 @@ def _wrap_result(self, result):
if isinstance(result, ABCSeries) and result.empty:
obj = self.obj
- result.index = obj.index._shallow_copy(freq=to_offset(self.freq))
+ if isinstance(obj.index, PeriodIndex):
+ result.index = obj.index.asfreq(self.freq)
+ else:
+ result.index = obj.index._shallow_copy(freq=self.freq)
result.name = getattr(obj, 'name', None)
return result
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 53f28612305c2..464ff7aa5d58d 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -1,6 +1,7 @@
import numpy as np
import pytest
+from pandas._libs.tslibs.period import IncompatibleFrequency
import pandas.util._test_decorators as td
import pandas as pd
@@ -40,9 +41,7 @@ def test_where(self):
@pytest.mark.parametrize('use_numpy', [True, False])
@pytest.mark.parametrize('index', [
pd.period_range('2000-01-01', periods=3, freq='D'),
- pytest.param(
- pd.period_range('2001-01-01', periods=3, freq='2D'),
- marks=pytest.mark.xfail(reason='GH 24391')),
+ pd.period_range('2001-01-01', periods=3, freq='2D'),
pd.PeriodIndex(['2001-01', 'NaT', '2003-01'], freq='M')])
def test_repeat_freqstr(self, index, use_numpy):
# GH10183
@@ -117,6 +116,17 @@ def test_shallow_copy_empty(self):
tm.assert_index_equal(result, expected)
+ def test_shallow_copy_i8(self):
+ # GH-24391
+ pi = period_range("2018-01-01", periods=3, freq="2D")
+ result = pi._shallow_copy(pi.asi8, freq=pi.freq)
+ tm.assert_index_equal(result, pi)
+
+ def test_shallow_copy_changing_freq_raises(self):
+ pi = period_range("2018-01-01", periods=3, freq="2D")
+ with pytest.raises(IncompatibleFrequency, match="are different"):
+ pi._shallow_copy(pi, freq="H")
+
def test_dtype_str(self):
pi = pd.PeriodIndex([], freq='M')
assert pi.dtype_str == 'period[M]'
diff --git a/pandas/tests/resample/test_base.py b/pandas/tests/resample/test_base.py
index 0efd48c25ad62..911cd990ab881 100644
--- a/pandas/tests/resample/test_base.py
+++ b/pandas/tests/resample/test_base.py
@@ -109,7 +109,10 @@ def test_resample_empty_series_all_ts(freq, empty_series, resample_method):
result = getattr(s.resample(freq), resample_method)()
expected = s.copy()
- expected.index = s.index._shallow_copy(freq=freq)
+ if isinstance(s.index, PeriodIndex):
+ expected.index = s.index.asfreq(freq=freq)
+ else:
+ expected.index = s.index._shallow_copy(freq=freq)
assert_index_equal(result.index, expected.index)
assert result.index.freq == expected.index.freq
assert_series_equal(result, expected, check_dtype=False)
@@ -127,7 +130,10 @@ def test_resample_empty_dataframe_all_ts(empty_frame, freq, resample_method):
# GH14962
expected = Series([])
- expected.index = df.index._shallow_copy(freq=freq)
+ if isinstance(df.index, PeriodIndex):
+ expected.index = df.index.asfreq(freq=freq)
+ else:
+ expected.index = df.index._shallow_copy(freq=freq)
assert_index_equal(result.index, expected.index)
assert result.index.freq == expected.index.freq
assert_almost_equal(result, expected, check_dtype=False)
| Closes https://github.com/pandas-dev/pandas/issues/24391
cc @jschendel | https://api.github.com/repos/pandas-dev/pandas/pulls/24604 | 2019-01-03T21:55:30Z | 2019-01-04T17:57:42Z | 2019-01-04T17:57:42Z | 2019-01-04T19:53:58Z |
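After this change, frequency changes go through the explicit public `asfreq` conversion rather than being smuggled into `_shallow_copy`. A minimal sketch of that public behavior (against current pandas):

```python
import pandas as pd

pi = pd.period_range("2018-01-01", periods=3, freq="D")

# Changing the frequency is an explicit conversion via asfreq, not a
# metadata-only shallow copy; how="start" anchors each period at its start
hourly = pi.asfreq("h", how="start")
assert hourly[0] == pd.Period("2018-01-01 00:00", freq="h")
assert len(hourly) == len(pi)
```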
DOC: Update doc description for day_opt in offsets | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 0ca9410df89c0..c2f51436612a4 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -903,9 +903,13 @@ cpdef int get_day_of_month(datetime other, day_opt) except? -1:
Parameters
----------
other : datetime or Timestamp
- day_opt : 'start', 'end'
+ day_opt : 'start', 'end', 'business_start', 'business_end', or int
'start': returns 1
'end': returns last day of the month
+ 'business_start': returns the first business day of the month
+ 'business_end': returns the last business day of the month
+    int: returns the day in the month indicated by `other`, or the last
+        day of the month if the value exceeds that month's number of days.
Returns
-------
@@ -980,7 +984,7 @@ def roll_qtrday(other: datetime, n: int, month: int,
other : datetime or Timestamp
n : number of periods to increment, before adjusting for rolling
month : int reference month giving the first month of the year
- day_opt : 'start', 'end', 'business_start', 'business_end'
+ day_opt : 'start', 'end', 'business_start', 'business_end', or int
The convention to use in finding the day in a given month against
which to compare for rollforward/rollbackward decisions.
modby : int 3 for quarters, 12 for years
@@ -988,6 +992,10 @@ def roll_qtrday(other: datetime, n: int, month: int,
Returns
-------
n : int number of periods to increment
+
+ See Also
+ --------
+ get_day_of_month : Find the day in a month provided an offset.
"""
cdef:
int months_since
@@ -1022,9 +1030,16 @@ def roll_yearday(other: datetime, n: int, month: int, day_opt: object) -> int:
other : datetime or Timestamp
n : number of periods to increment, before adjusting for rolling
month : reference month giving the first month of the year
- day_opt : 'start', 'end'
- 'start': returns 1
- 'end': returns last day of the month
+ day_opt : 'start', 'end', 'business_start', 'business_end', or int
+ The day of the month to compare against that of `other` when
+ incrementing or decrementing the number of periods:
+
+ 'start': 1
+ 'end': last day of the month
+ 'business_start': first business day of the month
+ 'business_end': last business day of the month
+        int: day in the month indicated by `other`, or the last day of
+            the month if the value exceeds that month's number of days.
Returns
-------
| Follow-up to #24585. | https://api.github.com/repos/pandas-dev/pandas/pulls/24602 | 2019-01-03T20:42:12Z | 2019-01-03T21:46:01Z | 2019-01-03T21:46:01Z | 2019-01-03T21:47:00Z |
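The documented `day_opt` semantics can be mirrored with a pure-Python stand-in. Note `get_day_of_month` below is a hypothetical simplification of the Cython routine being documented: it covers only `'start'`, `'end'`, and the int case, ignoring the `'business_start'`/`'business_end'` options.

```python
import calendar
import datetime


def get_day_of_month(other: datetime.date, day_opt) -> int:
    """Simplified sketch of the documented day_opt semantics."""
    days_in_month = calendar.monthrange(other.year, other.month)[1]
    if day_opt == "start":
        return 1
    if day_opt == "end":
        return days_in_month
    if isinstance(day_opt, int):
        # an int is capped at the number of days in that month
        return min(day_opt, days_in_month)
    raise ValueError("unsupported day_opt: {!r}".format(day_opt))


d = datetime.date(2019, 2, 15)
assert get_day_of_month(d, "start") == 1
assert get_day_of_month(d, "end") == 28
assert get_day_of_month(d, 31) == 28  # capped at February's 28 days
```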
Rename DatetimeArray and TimedeltaArray | diff --git a/pandas/arrays/__init__.py b/pandas/arrays/__init__.py
index 5433d11eccff9..7d9b1b7c7a659 100644
--- a/pandas/arrays/__init__.py
+++ b/pandas/arrays/__init__.py
@@ -6,8 +6,8 @@
from pandas.core.arrays import (
IntervalArray, PeriodArray, Categorical, SparseArray, IntegerArray,
PandasArray,
- DatetimeArrayMixin as DatetimeArray,
- TimedeltaArrayMixin as TimedeltaArray,
+ DatetimeArray,
+ TimedeltaArray,
)
diff --git a/pandas/core/arrays/__init__.py b/pandas/core/arrays/__init__.py
index d6a61a26a954f..1033ce784046e 100644
--- a/pandas/core/arrays/__init__.py
+++ b/pandas/core/arrays/__init__.py
@@ -3,10 +3,10 @@
ExtensionOpsMixin,
ExtensionScalarOpsMixin)
from .categorical import Categorical # noqa
-from .datetimes import DatetimeArrayMixin # noqa
+from .datetimes import DatetimeArray # noqa
from .interval import IntervalArray # noqa
from .period import PeriodArray, period_array # noqa
-from .timedeltas import TimedeltaArrayMixin # noqa
+from .timedeltas import TimedeltaArray # noqa
from .integer import ( # noqa
IntegerArray, integer_array)
from .sparse import SparseArray # noqa
diff --git a/pandas/core/arrays/array_.py b/pandas/core/arrays/array_.py
index 4e84c62bce3d6..04842d82fca5d 100644
--- a/pandas/core/arrays/array_.py
+++ b/pandas/core/arrays/array_.py
@@ -184,8 +184,8 @@ def array(data, # type: Sequence[object]
"""
from pandas.core.arrays import (
period_array, ExtensionArray, IntervalArray, PandasArray,
- DatetimeArrayMixin,
- TimedeltaArrayMixin,
+ DatetimeArray,
+ TimedeltaArray,
)
from pandas.core.internals.arrays import extract_array
@@ -228,14 +228,14 @@ def array(data, # type: Sequence[object]
elif inferred_dtype.startswith('datetime'):
# datetime, datetime64
try:
- return DatetimeArrayMixin._from_sequence(data, copy=copy)
+ return DatetimeArray._from_sequence(data, copy=copy)
except ValueError:
# Mixture of timezones, fall back to PandasArray
pass
elif inferred_dtype.startswith('timedelta'):
# timedelta, timedelta64
- return TimedeltaArrayMixin._from_sequence(data, copy=copy)
+ return TimedeltaArray._from_sequence(data, copy=copy)
# TODO(BooleanArray): handle this type
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 8b5445bedd46c..65f9bb14158bb 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -1228,9 +1228,9 @@ def __add__(self, other):
return NotImplemented
if is_timedelta64_dtype(result) and isinstance(result, np.ndarray):
- from pandas.core.arrays import TimedeltaArrayMixin
+ from pandas.core.arrays import TimedeltaArray
# TODO: infer freq?
- return TimedeltaArrayMixin(result)
+ return TimedeltaArray(result)
return result
def __radd__(self, other):
@@ -1295,9 +1295,9 @@ def __sub__(self, other):
return NotImplemented
if is_timedelta64_dtype(result) and isinstance(result, np.ndarray):
- from pandas.core.arrays import TimedeltaArrayMixin
+ from pandas.core.arrays import TimedeltaArray
# TODO: infer freq?
- return TimedeltaArrayMixin(result)
+ return TimedeltaArray(result)
return result
def __rsub__(self, other):
@@ -1306,8 +1306,8 @@ def __rsub__(self, other):
# we need to wrap in DatetimeArray/Index and flip the operation
if not isinstance(other, DatetimeLikeArrayMixin):
# Avoid down-casting DatetimeIndex
- from pandas.core.arrays import DatetimeArrayMixin
- other = DatetimeArrayMixin(other)
+ from pandas.core.arrays import DatetimeArray
+ other = DatetimeArray(other)
return other - self
elif (is_datetime64_any_dtype(self) and hasattr(other, 'dtype') and
not is_datetime64_any_dtype(other)):
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index c428fd2e75e08..520121710cbd4 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -127,7 +127,7 @@ def wrapper(self, other):
except ValueError:
other = np.array(other, dtype=np.object_)
elif not isinstance(other, (np.ndarray, ABCIndexClass, ABCSeries,
- DatetimeArrayMixin)):
+ DatetimeArray)):
# Following Timestamp convention, __eq__ is all-False
# and __ne__ is all True, others raise TypeError.
return ops.invalid_comparison(self, other, op)
@@ -176,9 +176,9 @@ def wrapper(self, other):
return compat.set_function_name(wrapper, opname, cls)
-class DatetimeArrayMixin(dtl.DatetimeLikeArrayMixin,
- dtl.TimelikeOps,
- dtl.DatelikeOps):
+class DatetimeArray(dtl.DatetimeLikeArrayMixin,
+ dtl.TimelikeOps,
+ dtl.DatelikeOps):
"""
Pandas ExtensionArray for tz-naive or tz-aware datetime data.
@@ -718,7 +718,7 @@ def _add_delta(self, delta):
-------
result : DatetimeArray
"""
- new_values = super(DatetimeArrayMixin, self)._add_delta(delta)
+ new_values = super(DatetimeArray, self)._add_delta(delta)
return type(self)._from_sequence(new_values, tz=self.tz, freq='infer')
# -----------------------------------------------------------------
@@ -1135,10 +1135,10 @@ def to_perioddelta(self, freq):
TimedeltaArray/Index
"""
# TODO: consider privatizing (discussion in GH#23113)
- from pandas.core.arrays.timedeltas import TimedeltaArrayMixin
+ from pandas.core.arrays.timedeltas import TimedeltaArray
i8delta = self.asi8 - self.to_period(freq).to_timestamp().asi8
m8delta = i8delta.view('m8[ns]')
- return TimedeltaArrayMixin(m8delta)
+ return TimedeltaArray(m8delta)
# -----------------------------------------------------------------
# Properties - Vectorized Timestamp Properties/Methods
@@ -1610,7 +1610,7 @@ def to_julian_date(self):
) / 24.0)
-DatetimeArrayMixin._add_comparison_ops()
+DatetimeArray._add_comparison_ops()
# -------------------------------------------------------------------
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 70da02f2ba0a1..0eeb3f718734a 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -327,7 +327,7 @@ def to_timestamp(self, freq=None, how='start'):
-------
DatetimeArray/Index
"""
- from pandas.core.arrays import DatetimeArrayMixin
+ from pandas.core.arrays import DatetimeArray
how = libperiod._validate_end_alias(how)
@@ -351,7 +351,7 @@ def to_timestamp(self, freq=None, how='start'):
new_data = self.asfreq(freq, how=how)
new_data = libperiod.periodarr_to_dt64arr(new_data.asi8, base)
- return DatetimeArrayMixin._from_sequence(new_data, freq='infer')
+ return DatetimeArray._from_sequence(new_data, freq='infer')
# --------------------------------------------------------------------
# Array-like / EA-Interface Methods
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 3677d041886b3..0ccf82ebf7edd 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -102,7 +102,7 @@ def wrapper(self, other):
return compat.set_function_name(wrapper, opname, cls)
-class TimedeltaArrayMixin(dtl.DatetimeLikeArrayMixin, dtl.TimelikeOps):
+class TimedeltaArray(dtl.DatetimeLikeArrayMixin, dtl.TimelikeOps):
_typ = "timedeltaarray"
_scalar_type = Timedelta
__array_priority__ = 1000
@@ -348,7 +348,7 @@ def _add_delta(self, delta):
-------
result : TimedeltaArray
"""
- new_values = super(TimedeltaArrayMixin, self)._add_delta(delta)
+ new_values = super(TimedeltaArray, self)._add_delta(delta)
return type(self)._from_sequence(new_values, freq='infer')
def _add_datetime_arraylike(self, other):
@@ -357,15 +357,15 @@ def _add_datetime_arraylike(self, other):
"""
if isinstance(other, np.ndarray):
# At this point we have already checked that dtype is datetime64
- from pandas.core.arrays import DatetimeArrayMixin
- other = DatetimeArrayMixin(other)
+ from pandas.core.arrays import DatetimeArray
+ other = DatetimeArray(other)
# defer to implementation in DatetimeArray
return other + self
def _add_datetimelike_scalar(self, other):
# adding a timedeltaindex to a datetimelike
- from pandas.core.arrays import DatetimeArrayMixin
+ from pandas.core.arrays import DatetimeArray
assert other is not NaT
other = Timestamp(other)
@@ -373,14 +373,14 @@ def _add_datetimelike_scalar(self, other):
# In this case we specifically interpret NaT as a datetime, not
# the timedelta interpretation we would get by returning self + NaT
result = self.asi8.view('m8[ms]') + NaT.to_datetime64()
- return DatetimeArrayMixin(result)
+ return DatetimeArray(result)
i8 = self.asi8
result = checked_add_with_arr(i8, other.value,
arr_mask=self._isnan)
result = self._maybe_mask_results(result)
dtype = DatetimeTZDtype(tz=other.tz) if other.tz else _NS_DTYPE
- return DatetimeArrayMixin(result, dtype=dtype, freq=self.freq)
+ return DatetimeArray(result, dtype=dtype, freq=self.freq)
def _addsub_offset_array(self, other, op):
# Add or subtract Array-like of DateOffset objects
@@ -388,7 +388,7 @@ def _addsub_offset_array(self, other, op):
# TimedeltaIndex can only operate with a subset of DateOffset
# subclasses. Incompatible classes will raise AttributeError,
# which we re-raise as TypeError
- return super(TimedeltaArrayMixin, self)._addsub_offset_array(
+ return super(TimedeltaArray, self)._addsub_offset_array(
other, op
)
except AttributeError:
@@ -813,7 +813,7 @@ def f(x):
return result
-TimedeltaArrayMixin._add_comparison_ops()
+TimedeltaArray._add_comparison_ops()
# ---------------------------------------------------------------------
@@ -860,7 +860,7 @@ def sequence_to_td64ns(data, copy=False, unit="ns", errors="raise"):
data = np.array(data, copy=False)
elif isinstance(data, ABCSeries):
data = data._values
- elif isinstance(data, (ABCTimedeltaIndex, TimedeltaArrayMixin)):
+ elif isinstance(data, (ABCTimedeltaIndex, TimedeltaArray)):
inferred_freq = data.freq
data = data._data
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 9e2564c4f825b..ac69927d4adf1 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -572,8 +572,8 @@ def construct_array_type(cls):
-------
type
"""
- from pandas.core.arrays import DatetimeArrayMixin
- return DatetimeArrayMixin
+ from pandas.core.arrays import DatetimeArray
+ return DatetimeArray
@classmethod
def construct_from_string(cls, string):
diff --git a/pandas/core/indexes/accessors.py b/pandas/core/indexes/accessors.py
index 842fcd0680467..c43469d3c3a81 100644
--- a/pandas/core/indexes/accessors.py
+++ b/pandas/core/indexes/accessors.py
@@ -11,9 +11,7 @@
from pandas.core.accessor import PandasDelegate, delegate_names
from pandas.core.algorithms import take_1d
-from pandas.core.arrays import (
- DatetimeArrayMixin as DatetimeArray, PeriodArray,
- TimedeltaArrayMixin as TimedeltaArray)
+from pandas.core.arrays import DatetimeArray, PeriodArray, TimedeltaArray
from pandas.core.base import NoNewAttributesMixin, PandasObject
from pandas.core.indexes.datetimes import DatetimeIndex
from pandas.core.indexes.timedeltas import TimedeltaIndex
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 7d901f4656731..f396f081267b3 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -22,7 +22,7 @@
from pandas.core.accessor import delegate_names
from pandas.core.arrays.datetimes import (
- DatetimeArrayMixin as DatetimeArray, _to_M8, validate_tz_from_dtype)
+ DatetimeArray, _to_M8, validate_tz_from_dtype)
from pandas.core.base import _shared_docs
import pandas.core.common as com
from pandas.core.indexes.base import Index
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 5e8e6a423ab3f..9301638d4f632 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -17,8 +17,7 @@
from pandas.core.accessor import delegate_names
from pandas.core.arrays import datetimelike as dtl
-from pandas.core.arrays.timedeltas import (
- TimedeltaArrayMixin as TimedeltaArray, _is_convertible_to_td)
+from pandas.core.arrays.timedeltas import TimedeltaArray, _is_convertible_to_td
from pandas.core.base import _shared_docs
import pandas.core.common as com
from pandas.core.indexes.base import Index, _index_shared_docs
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 3b2c13af785d4..bd16495e472b1 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -35,8 +35,7 @@
import pandas.core.algorithms as algos
from pandas.core.arrays import (
- Categorical, DatetimeArrayMixin as DatetimeArray, ExtensionArray,
- TimedeltaArrayMixin as TimedeltaArray)
+ Categorical, DatetimeArray, ExtensionArray, TimedeltaArray)
from pandas.core.base import PandasObject
import pandas.core.common as com
from pandas.core.indexes.datetimes import DatetimeIndex
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 52b60339a7d68..46ff04fdd31ae 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1586,7 +1586,7 @@ def unique(self):
>>> pd.Series([pd.Timestamp('2016-01-01', tz='US/Eastern')
... for _ in range(3)]).unique()
- <DatetimeArrayMixin>
+ <DatetimeArray>
['2016-01-01 00:00:00-05:00']
Length: 1, dtype: datetime64[ns, US/Eastern]
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 69d735d7fdc65..5b540ee88a3f3 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -171,7 +171,7 @@ def _convert_listlike_datetimes(arg, box, format, name=None, tz=None,
- ndarray of Timestamps if box=False
"""
from pandas import DatetimeIndex
- from pandas.core.arrays import DatetimeArrayMixin as DatetimeArray
+ from pandas.core.arrays import DatetimeArray
from pandas.core.arrays.datetimes import (
maybe_convert_dtype, objects_to_datetime64ns)
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index f5c4808a09123..f20d9a54e9da3 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -1981,7 +1981,7 @@ def test_dti_sub_tdi(self, tz_naive_fixture):
result = dti - tdi
tm.assert_index_equal(result, expected)
- msg = 'cannot subtract .*TimedeltaArrayMixin'
+ msg = 'cannot subtract .*TimedeltaArray'
with pytest.raises(TypeError, match=msg):
tdi - dti
@@ -1989,7 +1989,7 @@ def test_dti_sub_tdi(self, tz_naive_fixture):
result = dti - tdi.values
tm.assert_index_equal(result, expected)
- msg = 'cannot subtract DatetimeArrayMixin from'
+ msg = 'cannot subtract DatetimeArray from'
with pytest.raises(TypeError, match=msg):
tdi.values - dti
@@ -2005,7 +2005,7 @@ def test_dti_isub_tdi(self, tz_naive_fixture):
result -= tdi
tm.assert_index_equal(result, expected)
- msg = 'cannot subtract .* from a TimedeltaArrayMixin'
+ msg = 'cannot subtract .* from a TimedeltaArray'
with pytest.raises(TypeError, match=msg):
tdi -= dti
@@ -2016,7 +2016,7 @@ def test_dti_isub_tdi(self, tz_naive_fixture):
msg = '|'.join(['cannot perform __neg__ with this index type:',
'ufunc subtract cannot use operands with types',
- 'cannot subtract DatetimeArrayMixin from'])
+ 'cannot subtract DatetimeArray from'])
with pytest.raises(TypeError, match=msg):
tdi.values -= dti
@@ -2036,9 +2036,9 @@ def test_dti_isub_tdi(self, tz_naive_fixture):
def test_add_datetimelike_and_dti(self, addend, tz):
# GH#9631
dti = DatetimeIndex(['2011-01-01', '2011-01-02']).tz_localize(tz)
- msg = ('cannot add DatetimeArrayMixin and {0}'
+ msg = ('cannot add DatetimeArray and {0}'
.format(type(addend).__name__)).replace('DatetimeIndex',
- 'DatetimeArrayMixin')
+ 'DatetimeArray')
with pytest.raises(TypeError, match=msg):
dti + addend
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 348ac4579ffb5..f76999a0dbc32 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -5,9 +5,7 @@
import pandas.compat as compat
import pandas as pd
-from pandas.core.arrays import (
- DatetimeArrayMixin as DatetimeArray, PeriodArray,
- TimedeltaArrayMixin as TimedeltaArray)
+from pandas.core.arrays import DatetimeArray, PeriodArray, TimedeltaArray
import pandas.util.testing as tm
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index 1375969c961fd..8890593b1fa9d 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -10,7 +10,7 @@
from pandas.core.dtypes.dtypes import DatetimeTZDtype
import pandas as pd
-from pandas.core.arrays import DatetimeArrayMixin as DatetimeArray
+from pandas.core.arrays import DatetimeArray
from pandas.core.arrays.datetimes import sequence_to_dt64ns
import pandas.util.testing as tm
diff --git a/pandas/tests/arrays/test_timedeltas.py b/pandas/tests/arrays/test_timedeltas.py
index 08ef27297cca5..481350640e1a6 100644
--- a/pandas/tests/arrays/test_timedeltas.py
+++ b/pandas/tests/arrays/test_timedeltas.py
@@ -4,7 +4,7 @@
import pytest
import pandas as pd
-from pandas.core.arrays import TimedeltaArrayMixin as TimedeltaArray
+from pandas.core.arrays import TimedeltaArray
import pandas.util.testing as tm
diff --git a/pandas/tests/dtypes/test_generic.py b/pandas/tests/dtypes/test_generic.py
index 96f92fccc5a71..1622088d05f4d 100644
--- a/pandas/tests/dtypes/test_generic.py
+++ b/pandas/tests/dtypes/test_generic.py
@@ -22,8 +22,8 @@ class TestABCClasses(object):
sparse_series = pd.Series([1, 2, 3]).to_sparse()
sparse_array = pd.SparseArray(np.random.randn(10))
sparse_frame = pd.SparseDataFrame({'a': [1, -1, None]})
- datetime_array = pd.core.arrays.DatetimeArrayMixin(datetime_index)
- timedelta_array = pd.core.arrays.TimedeltaArrayMixin(timedelta_index)
+ datetime_array = pd.core.arrays.DatetimeArray(datetime_index)
+ timedelta_array = pd.core.arrays.TimedeltaArray(timedelta_index)
def test_abc_types(self):
assert isinstance(pd.Index(['a', 'b', 'c']), gt.ABCIndex)
diff --git a/pandas/tests/extension/test_datetime.py b/pandas/tests/extension/test_datetime.py
index 7c4491d6edbcf..00ad35bf6a924 100644
--- a/pandas/tests/extension/test_datetime.py
+++ b/pandas/tests/extension/test_datetime.py
@@ -4,7 +4,7 @@
from pandas.core.dtypes.dtypes import DatetimeTZDtype
import pandas as pd
-from pandas.core.arrays import DatetimeArrayMixin as DatetimeArray
+from pandas.core.arrays import DatetimeArray
from pandas.tests.extension import base
@@ -129,7 +129,7 @@ def test_arith_series_with_scalar(self, data, all_arithmetic_operators):
def test_add_series_with_extension_array(self, data):
# Datetime + Datetime not implemented
s = pd.Series(data)
- msg = 'cannot add DatetimeArray(Mixin)? and DatetimeArray(Mixin)?'
+ msg = 'cannot add DatetimeArray and DatetimeArray'
with pytest.raises(TypeError, match=msg):
s + data
diff --git a/pandas/tests/indexes/datetimes/test_astype.py b/pandas/tests/indexes/datetimes/test_astype.py
index cda7a005c40c7..562be4cf85864 100644
--- a/pandas/tests/indexes/datetimes/test_astype.py
+++ b/pandas/tests/indexes/datetimes/test_astype.py
@@ -177,7 +177,7 @@ def test_astype_object_with_nat(self):
def test_astype_raises(self, dtype):
# GH 13149, GH 13209
idx = DatetimeIndex(['2016-05-16', 'NaT', NaT, np.NaN])
- msg = 'Cannot cast DatetimeArrayMixin to dtype'
+ msg = 'Cannot cast DatetimeArray to dtype'
with pytest.raises(TypeError, match=msg):
idx.astype(dtype)
diff --git a/pandas/tests/indexes/datetimes/test_construction.py b/pandas/tests/indexes/datetimes/test_construction.py
index bca99d27bda56..97de4cd98dedf 100644
--- a/pandas/tests/indexes/datetimes/test_construction.py
+++ b/pandas/tests/indexes/datetimes/test_construction.py
@@ -13,8 +13,7 @@
from pandas import (
DatetimeIndex, Index, Timestamp, date_range, datetime, offsets,
to_datetime)
-from pandas.core.arrays import (
- DatetimeArrayMixin as DatetimeArray, period_array)
+from pandas.core.arrays import DatetimeArray, period_array
import pandas.util.testing as tm
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index deb1850a8b483..50c8f8d4c1f4c 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -24,7 +24,7 @@
from pandas import (
DataFrame, DatetimeIndex, Index, NaT, Series, Timestamp, compat,
date_range, isna, to_datetime)
-from pandas.core.arrays import DatetimeArrayMixin as DatetimeArray
+from pandas.core.arrays import DatetimeArray
from pandas.core.tools import datetimes as tools
from pandas.util import testing as tm
from pandas.util.testing import assert_series_equal
diff --git a/pandas/tests/indexes/timedeltas/test_astype.py b/pandas/tests/indexes/timedeltas/test_astype.py
index ae0dbf24f048e..3f5507612c8e6 100644
--- a/pandas/tests/indexes/timedeltas/test_astype.py
+++ b/pandas/tests/indexes/timedeltas/test_astype.py
@@ -83,7 +83,7 @@ def test_astype_timedelta64(self):
def test_astype_raises(self, dtype):
# GH 13149, GH 13209
idx = TimedeltaIndex([1e14, 'NaT', NaT, np.NaN])
- msg = 'Cannot cast TimedeltaArrayMixin to dtype'
+ msg = 'Cannot cast TimedeltaArray to dtype'
with pytest.raises(TypeError, match=msg):
idx.astype(dtype)
diff --git a/pandas/tests/indexes/timedeltas/test_construction.py b/pandas/tests/indexes/timedeltas/test_construction.py
index b9bbfaff06215..76f79e86e6f11 100644
--- a/pandas/tests/indexes/timedeltas/test_construction.py
+++ b/pandas/tests/indexes/timedeltas/test_construction.py
@@ -5,7 +5,7 @@
import pandas as pd
from pandas import Timedelta, TimedeltaIndex, timedelta_range, to_timedelta
-from pandas.core.arrays import TimedeltaArrayMixin as TimedeltaArray
+from pandas.core.arrays import TimedeltaArray
import pandas.util.testing as tm
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index b9196971d2e53..7147761d23caa 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -15,8 +15,8 @@
from pandas.compat import OrderedDict, lrange
from pandas.core.arrays import (
- DatetimeArrayMixin as DatetimeArray,
- TimedeltaArrayMixin as TimedeltaArray,
+ DatetimeArray,
+ TimedeltaArray,
)
from pandas.core.internals import (SingleBlockManager,
make_block, BlockManager)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 294eae9d45bee..42e9b1f5af8ad 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -22,7 +22,7 @@
Categorical, CategoricalIndex, DatetimeIndex, Index, IntervalIndex, Series,
Timestamp, compat)
import pandas.core.algorithms as algos
-from pandas.core.arrays import DatetimeArrayMixin as DatetimeArray
+from pandas.core.arrays import DatetimeArray
import pandas.core.common as com
import pandas.util.testing as tm
from pandas.util.testing import assert_almost_equal
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index f60d73ea1b05b..657f5f193c85e 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -23,9 +23,7 @@
CategoricalIndex, DataFrame, DatetimeIndex, Index, Interval, IntervalIndex,
Panel, PeriodIndex, Series, Timedelta, TimedeltaIndex, Timestamp)
from pandas.core.accessor import PandasDelegate
-from pandas.core.arrays import (
- DatetimeArrayMixin as DatetimeArray, PandasArray,
- TimedeltaArrayMixin as TimedeltaArray)
+from pandas.core.arrays import DatetimeArray, PandasArray, TimedeltaArray
from pandas.core.base import NoNewAttributesMixin, PandasObject
from pandas.core.indexes.datetimelike import DatetimeIndexOpsMixin
import pandas.util.testing as tm
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index cc793767d3af6..1e65118194be7 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -14,7 +14,7 @@
import pandas as pd
from pandas import Series, isna
-from pandas.core.arrays import DatetimeArrayMixin as DatetimeArray
+from pandas.core.arrays import DatetimeArray
import pandas.core.nanops as nanops
import pandas.util.testing as tm
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index ebdfde2da24f8..2df43cd678764 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -36,8 +36,8 @@
IntervalIndex, MultiIndex, Panel, RangeIndex, Series, bdate_range)
from pandas.core.algorithms import take_1d
from pandas.core.arrays import (
- DatetimeArrayMixin as DatetimeArray, ExtensionArray, IntervalArray,
- PeriodArray, TimedeltaArrayMixin as TimedeltaArray, period_array)
+ DatetimeArray, ExtensionArray, IntervalArray, PeriodArray, TimedeltaArray,
+ period_array)
import pandas.core.common as com
from pandas.io.common import urlopen
| Closes #24231 | https://api.github.com/repos/pandas-dev/pandas/pulls/24601 | 2019-01-03T20:24:28Z | 2019-01-03T21:06:25Z | 2019-01-03T21:06:25Z | 2019-01-03T21:13:43Z |
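The diff above is a mechanical rename: every `DatetimeArrayMixin as DatetimeArray` (and the `Timedelta` equivalent) collapses to the bare class name. A minimal sketch of that text rewrite as a regex, using an illustrative import line rather than anything from the PR's own tooling:

```python
import re

# One import line in the old style, aliasing the Mixin class
old = "from pandas.core.arrays import DatetimeArrayMixin as DatetimeArray, PeriodArray"

# Collapse "<Name>Mixin as <Name>" down to just "<Name>";
# the backreference \1 ensures the alias matches the Mixin's base name
new = re.sub(r"(\w+)Mixin as \1", r"\1", old)
print(new)  # from pandas.core.arrays import DatetimeArray, PeriodArray
```

The same substitution applied across the files touched here would produce the one-line import forms seen in the diff.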
Fixed construction / factorization of empty PA and IA | diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 0e3c59120415d..2e7216108a23e 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -217,6 +217,11 @@ def _from_sequence(cls, scalars, dtype=None, copy=False):
@classmethod
def _from_factorized(cls, values, original):
+ if len(values) == 0:
+ # An empty array returns object-dtype here. We can't create
+ # a new IA from an (empty) object-dtype array, so turn it into the
+ # correct dtype.
+ values = values.astype(original.dtype.subtype)
return cls(values, closed=original.closed)
_interval_shared_docs['from_breaks'] = """
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 70da02f2ba0a1..6e3dc6f789cc9 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -189,6 +189,13 @@ def _from_sequence(cls, scalars, dtype=None, copy=False):
freq = dtype.freq
else:
freq = None
+
+ if isinstance(scalars, cls):
+ validate_dtype_freq(scalars.dtype, freq)
+ if copy:
+ scalars = scalars.copy()
+ return scalars
+
periods = np.asarray(scalars, dtype=object)
if copy:
periods = periods.copy()
diff --git a/pandas/tests/arrays/test_period.py b/pandas/tests/arrays/test_period.py
index 387eaa5223bbe..affe3b3854490 100644
--- a/pandas/tests/arrays/test_period.py
+++ b/pandas/tests/arrays/test_period.py
@@ -225,8 +225,7 @@ def test_sub_period():
def test_where_different_freq_raises(other):
ser = pd.Series(period_array(['2000', '2001', '2002'], freq='D'))
cond = np.array([True, False, True])
- with pytest.raises(IncompatibleFrequency,
- match="Input has different freq=H"):
+ with pytest.raises(IncompatibleFrequency, match="freq"):
ser.where(cond, other)
diff --git a/pandas/tests/extension/arrow/test_bool.py b/pandas/tests/extension/arrow/test_bool.py
index f259e66e6cc76..2ace0fadc73e9 100644
--- a/pandas/tests/extension/arrow/test_bool.py
+++ b/pandas/tests/extension/arrow/test_bool.py
@@ -44,6 +44,11 @@ class TestConstructors(BaseArrowTests, base.BaseConstructorsTests):
def test_from_dtype(self, data):
pytest.skip("GH-22666")
+ # seems like some bug in isna on empty BoolArray returning floats.
+ @pytest.mark.xfail(reason='bad is-na for empty data')
+ def test_from_sequence_from_cls(self, data):
+ super(TestConstructors, self).test_from_sequence_from_cls(data)
+
class TestReduce(base.BaseNoReduceTests):
def test_reduce_series_boolean(self):
diff --git a/pandas/tests/extension/base/constructors.py b/pandas/tests/extension/base/constructors.py
index 9c719b1304629..231a1f648f8e8 100644
--- a/pandas/tests/extension/base/constructors.py
+++ b/pandas/tests/extension/base/constructors.py
@@ -9,6 +9,14 @@
class BaseConstructorsTests(BaseExtensionTests):
+ def test_from_sequence_from_cls(self, data):
+ result = type(data)._from_sequence(data, dtype=data.dtype)
+ self.assert_extension_array_equal(result, data)
+
+ data = data[:0]
+ result = type(data)._from_sequence(data, dtype=data.dtype)
+ self.assert_extension_array_equal(result, data)
+
def test_array_from_scalars(self, data):
scalars = [data[0], data[1], data[2]]
result = data._from_sequence(scalars)
diff --git a/pandas/tests/extension/base/methods.py b/pandas/tests/extension/base/methods.py
index 2c04c4cd99801..f64df7a84b7c0 100644
--- a/pandas/tests/extension/base/methods.py
+++ b/pandas/tests/extension/base/methods.py
@@ -105,6 +105,14 @@ def test_factorize_equivalence(self, data_for_grouping, na_sentinel):
tm.assert_numpy_array_equal(l1, l2)
self.assert_extension_array_equal(u1, u2)
+ def test_factorize_empty(self, data):
+ labels, uniques = pd.factorize(data[:0])
+ expected_labels = np.array([], dtype=np.intp)
+ expected_uniques = type(data)._from_sequence([], dtype=data[:0].dtype)
+
+ tm.assert_numpy_array_equal(labels, expected_labels)
+ self.assert_extension_array_equal(uniques, expected_uniques)
+
def test_fillna_copy_frame(self, data_missing):
arr = data_missing.take([1, 1])
df = pd.DataFrame({"A": arr})
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index bd50584406312..10fd21f89c564 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -179,6 +179,9 @@ def _concat_same_type(cls, to_concat):
def _values_for_factorize(self):
frozen = self._values_for_argsort()
+ if len(frozen) == 0:
+ # _factorize_array expects 1-d array, this is a len-0 2-d array.
+ frozen = frozen.ravel()
return frozen, ()
def _values_for_argsort(self):
| Closes https://github.com/pandas-dev/pandas/issues/23933 | https://api.github.com/repos/pandas-dev/pandas/pulls/24599 | 2019-01-03T19:17:18Z | 2019-01-03T22:04:56Z | 2019-01-03T22:04:55Z | 2019-01-03T22:04:58Z |
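The new `test_factorize_empty` above pins down the factorize contract for empty input: an empty `intp` label array plus uniques of the array's own dtype, not object dtype. A dtype-free, pure-Python sketch of that contract (hypothetical helper; the real code goes through `_values_for_factorize` and the pandas dtype machinery):

```python
def factorize(values):
    """Minimal factorize: integer labels plus uniques in first-seen order."""
    seen = {}       # value -> its position in uniques
    uniques = []
    labels = []
    for v in values:
        if v not in seen:
            seen[v] = len(uniques)
            uniques.append(v)
        labels.append(seen[v])
    return labels, uniques

print(factorize(["a", "b", "a"]))  # ([0, 1, 0], ['a', 'b'])
print(factorize([]))               # ([], []) -- empty in, empty out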
DOC: Remove flake8 errors for basics.rst and contributing_docstring.rst | diff --git a/doc/source/contributing_docstring.rst b/doc/source/contributing_docstring.rst
index 7c7847a47a1a2..f7e2b42a1ccbd 100644
--- a/doc/source/contributing_docstring.rst
+++ b/doc/source/contributing_docstring.rst
@@ -457,12 +457,14 @@ For example, with a single value:
float
Random number generated.
"""
- return random.random()
+ return np.random.random()
With more than one value:
.. code-block:: python
+ import string
+
def random_letters():
"""
Generate and return a sequence of random letters.
@@ -477,8 +479,8 @@ With more than one value:
letters : str
String of random letters.
"""
- length = random.randint(1, 10)
- letters = ''.join(random.choice(string.ascii_lowercase)
+ length = np.random.randint(1, 10)
+ letters = ''.join(np.random.choice(string.ascii_lowercase)
for i in range(length))
return length, letters
@@ -499,7 +501,7 @@ If the method yields its value:
Random number generated.
"""
while True:
- yield random.random()
+ yield np.random.random()
.. _docstring.see_also:
@@ -686,8 +688,8 @@ shown:
.. code-block:: python
- import numpy as np # noqa: F401
- import pandas as pd # noqa: F401
+ import numpy as np
+ import pandas as pd
Any other module used in the examples must be explicitly imported, one per line (as
recommended in :pep:`8#imports`)
@@ -776,7 +778,7 @@ positional arguments ``head(3)``.
Examples
--------
- >>> s = pd.Series('Antelope', 'Lion', 'Zebra', numpy.nan)
+ >>> s = pd.Series('Antelope', 'Lion', 'Zebra', np.nan)
>>> s.contains(pattern='a')
0 False
1 False
@@ -834,7 +836,7 @@ positional arguments ``head(3)``.
--------
>>> import numpy as np
>>> import pandas as pd
- >>> df = pd.DataFrame(numpy.random.randn(3, 3),
+ >>> df = pd.DataFrame(np.random.randn(3, 3),
... columns=('a', 'b', 'c'))
>>> df.method(1)
21
| xref: #24173
| https://api.github.com/repos/pandas-dev/pandas/pulls/24598 | 2019-01-03T18:03:57Z | 2019-01-04T12:22:10Z | 2019-01-04T12:22:10Z | 2019-08-26T14:12:16Z |
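The doc fixes above swap bare `random`/`string` calls for `np.random` so the snippets pass the flake8 doctest checks. For reference, a runnable stdlib-only version of the same `random_letters` example (assumes nothing beyond the standard library, so it differs from the NumPy form the docs settled on):

```python
import random
import string

def random_letters():
    """Generate and return a sequence of random lowercase letters.

    Returns
    -------
    length : int
        Length of the generated string (1 to 10 inclusive).
    letters : str
        String of random lowercase ASCII letters.
    """
    length = random.randint(1, 10)
    letters = "".join(random.choice(string.ascii_lowercase)
                      for _ in range(length))
    return length, letters
```

Either spelling works at runtime; the docs standardize on `np.random` purely for consistency with the shared `import numpy as np` convention.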
REF: Simplify quantile, remove reduction from BlockManager | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index bd16495e472b1..384676ede15f2 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -42,6 +42,7 @@
from pandas.core.indexing import check_setitem_lengths
from pandas.core.internals.arrays import extract_array
import pandas.core.missing as missing
+from pandas.core.nanops import nanpercentile
from pandas.io.formats.printing import pprint_thing
@@ -1438,7 +1439,7 @@ def _unstack(self, unstacker_func, new_columns, n_rows, fill_value):
blocks = [make_block(new_values, placement=new_placement)]
return blocks, mask
- def quantile(self, qs, interpolation='linear', axis=0, axes=None):
+ def quantile(self, qs, interpolation='linear', axis=0):
"""
compute the quantiles of the
@@ -1447,94 +1448,53 @@ def quantile(self, qs, interpolation='linear', axis=0, axes=None):
qs: a scalar or list of the quantiles to be computed
interpolation: type of interpolation, default 'linear'
axis: axis to compute, default 0
- axes : BlockManager.axes
Returns
-------
- tuple of (axis, block)
-
+ Block
"""
- kw = {'interpolation': interpolation}
values = self.get_values()
values, _ = self._try_coerce_args(values, values)
- def _nanpercentile1D(values, mask, q, **kw):
- # mask is Union[ExtensionArray, ndarray]
- values = values[~mask]
-
- if len(values) == 0:
- if lib.is_scalar(q):
- return self._na_value
- else:
- return np.array([self._na_value] * len(q),
- dtype=values.dtype)
-
- return np.percentile(values, q, **kw)
-
- def _nanpercentile(values, q, axis, **kw):
-
- mask = isna(self.values)
- if not lib.is_scalar(mask) and mask.any():
- if self.ndim == 1:
- return _nanpercentile1D(values, mask, q, **kw)
- else:
- # for nonconsolidatable blocks mask is 1D, but values 2D
- if mask.ndim < values.ndim:
- mask = mask.reshape(values.shape)
- if axis == 0:
- values = values.T
- mask = mask.T
- result = [_nanpercentile1D(val, m, q, **kw) for (val, m)
- in zip(list(values), list(mask))]
- result = np.array(result, dtype=values.dtype, copy=False).T
- return result
- else:
- return np.percentile(values, q, axis=axis, **kw)
-
- from pandas import Float64Index
is_empty = values.shape[axis] == 0
- if is_list_like(qs):
- ax = Float64Index(qs)
+ orig_scalar = not is_list_like(qs)
+ if orig_scalar:
+ # make list-like, unpack later
+ qs = [qs]
- if is_empty:
- if self.ndim == 1:
- result = self._na_value
- else:
- # create the array of na_values
- # 2d len(values) * len(qs)
- result = np.repeat(np.array([self._na_value] * len(qs)),
- len(values)).reshape(len(values),
- len(qs))
+ if is_empty:
+ if self.ndim == 1:
+ result = self._na_value
else:
- result = _nanpercentile(values, np.array(qs) * 100,
- axis=axis, **kw)
-
- result = np.array(result, copy=False)
- if self.ndim > 1:
- result = result.T
-
+ # create the array of na_values
+ # 2d len(values) * len(qs)
+ result = np.repeat(np.array([self._na_value] * len(qs)),
+ len(values)).reshape(len(values),
+ len(qs))
else:
+ mask = isna(self.values)
+ result = nanpercentile(values, np.array(qs) * 100,
+ axis=axis, na_value=self._na_value,
+ mask=mask, ndim=self.ndim,
+ interpolation=interpolation)
- if self.ndim == 1:
- ax = Float64Index([qs])
- else:
- ax = axes[0]
+ result = np.array(result, copy=False)
+ if self.ndim > 1:
+ result = result.T
- if is_empty:
- if self.ndim == 1:
- result = self._na_value
- else:
- result = np.array([self._na_value] * len(self))
- else:
- result = _nanpercentile(values, qs * 100, axis=axis, **kw)
+ if orig_scalar and not lib.is_scalar(result):
+ # result could be scalar in case with is_empty and self.ndim == 1
+ assert result.shape[-1] == 1, result.shape
+ result = result[..., 0]
+ result = lib.item_from_zerodim(result)
ndim = getattr(result, 'ndim', None) or 0
result = self._try_coerce_result(result)
if lib.is_scalar(result):
- return ax, self.make_block_scalar(result)
- return ax, make_block(result,
- placement=np.arange(len(result)),
- ndim=ndim)
+ return self.make_block_scalar(result)
+ return make_block(result,
+ placement=np.arange(len(result)),
+ ndim=ndim)
def _replace_coerce(self, to_replace, value, inplace=True, regex=False,
convert=False, mask=None):
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index eba49d18431ef..0ad0a994e8a95 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -16,7 +16,7 @@
maybe_promote)
from pandas.core.dtypes.common import (
_NS_DTYPE, is_datetimelike_v_numeric, is_extension_array_dtype,
- is_extension_type, is_numeric_v_string_like, is_scalar)
+ is_extension_type, is_list_like, is_numeric_v_string_like, is_scalar)
import pandas.core.dtypes.concat as _concat
from pandas.core.dtypes.generic import ABCExtensionArray, ABCSeries
from pandas.core.dtypes.missing import isna
@@ -402,34 +402,47 @@ def apply(self, f, axes=None, filter=None, do_integrity_check=False,
bm._consolidate_inplace()
return bm
- def reduction(self, f, axis=0, consolidate=True, transposed=False,
- **kwargs):
+ def quantile(self, axis=0, consolidate=True, transposed=False,
+ interpolation='linear', qs=None, numeric_only=None):
"""
- iterate over the blocks, collect and create a new block manager.
+ Iterate over blocks applying quantile reduction.
This routine is intended for reduction type operations and
will do inference on the generated blocks.
Parameters
----------
- f: the callable or function name to operate on at the block level
axis: reduction axis, default 0
consolidate: boolean, default True. Join together blocks having same
dtype
transposed: boolean, default False
we are holding transposed data
+ interpolation : type of interpolation, default 'linear'
+ qs : a scalar or list of the quantiles to be computed
+ numeric_only : ignored
Returns
-------
Block Manager (new object)
-
"""
if consolidate:
self._consolidate_inplace()
+ def get_axe(block, qs, axes):
+ from pandas import Float64Index
+ if is_list_like(qs):
+ ax = Float64Index(qs)
+ elif block.ndim == 1:
+ ax = Float64Index([qs])
+ else:
+ ax = axes[0]
+ return ax
+
axes, blocks = [], []
for b in self.blocks:
- axe, block = getattr(b, f)(axis=axis, axes=self.axes, **kwargs)
+ block = b.quantile(axis=axis, qs=qs, interpolation=interpolation)
+
+ axe = get_axe(b, qs, axes=self.axes)
axes.append(axe)
blocks.append(block)
@@ -496,9 +509,6 @@ def isna(self, func, **kwargs):
def where(self, **kwargs):
return self.apply('where', **kwargs)
- def quantile(self, **kwargs):
- return self.reduction('quantile', **kwargs)
-
def setitem(self, **kwargs):
return self.apply('setitem', **kwargs)
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index f95c133163ddb..89e191f171f97 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -1194,3 +1194,75 @@ def f(x, y):
nanle = make_nancomp(operator.le)
naneq = make_nancomp(operator.eq)
nanne = make_nancomp(operator.ne)
+
+
+def _nanpercentile_1d(values, mask, q, na_value, interpolation):
+ """
+ Wrapper for np.percentile that skips missing values, specialized to
+ 1-dimensional case.
+
+ Parameters
+ ----------
+ values : array over which to find quantiles
+ mask : ndarray[bool]
+ locations in values that should be considered missing
+ q : scalar or array of quantile indices to find
+ na_value : scalar
+ value to return for empty or all-null values
+ interpolation : str
+
+ Returns
+ -------
+ quantiles : scalar or array
+ """
+ # mask is Union[ExtensionArray, ndarray]
+ values = values[~mask]
+
+ if len(values) == 0:
+ if lib.is_scalar(q):
+ return na_value
+ else:
+ return np.array([na_value] * len(q),
+ dtype=values.dtype)
+
+ return np.percentile(values, q, interpolation=interpolation)
+
+
+def nanpercentile(values, q, axis, na_value, mask, ndim, interpolation):
+ """
+ Wrapper for np.percentile that skips missing values.
+
+ Parameters
+ ----------
+ values : array over which to find quantiles
+ q : scalar or array of quantile indices to find
+ axis : {0, 1}
+ na_value : scalar
+ value to return for empty or all-null values
+ mask : ndarray[bool]
+ locations in values that should be considered missing
+ ndim : {1, 2}
+ interpolation : str
+
+ Returns
+ -------
+ quantiles : scalar or array
+ """
+ if not lib.is_scalar(mask) and mask.any():
+ if ndim == 1:
+ return _nanpercentile_1d(values, mask, q, na_value,
+ interpolation=interpolation)
+ else:
+ # for nonconsolidatable blocks mask is 1D, but values 2D
+ if mask.ndim < values.ndim:
+ mask = mask.reshape(values.shape)
+ if axis == 0:
+ values = values.T
+ mask = mask.T
+ result = [_nanpercentile_1d(val, m, q, na_value,
+ interpolation=interpolation)
+ for (val, m) in zip(list(values), list(mask))]
+ result = np.array(result, dtype=values.dtype, copy=False).T
+ return result
+ else:
+ return np.percentile(values, q, axis=axis, interpolation=interpolation)
| BlockManager.reduction is only ever called for quantile. Might as well remove the layer of indirection so we can simplify reduction (now renamed quantile). Most of the simplification comes in Block.quantile, since we can avoid passing around things we don't need.
Two nested functions currently defined inside Block.quantile are moved outside the closure so I don't have to double-check the namespace every time I look at them. Not sure if they belong somewhere else. | https://api.github.com/repos/pandas-dev/pandas/pulls/24597 | 2019-01-03T18:03:27Z | 2019-01-03T23:10:40Z | 2019-01-03T23:10:40Z | 2019-01-03T23:21:09Z |
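The extracted `_nanpercentile_1d` helper boils down to: drop the missing values, fall back to the NA value when nothing is left, otherwise interpolate like `np.percentile`. A pure-Python sketch of that 1-D contract (illustrative only; the real helper delegates to NumPy and also handles 2-D blocks and masks):

```python
import math

def nanpercentile_1d(values, q):
    """Percentile of `values` with NaNs skipped, using linear
    interpolation (np.percentile's default). All-missing or empty
    input yields NaN as the NA value."""
    vals = sorted(v for v in values if not math.isnan(v))
    if not vals:
        return float("nan")
    idx = (len(vals) - 1) * q / 100.0
    lo, hi = math.floor(idx), math.ceil(idx)
    return vals[lo] + (vals[hi] - vals[lo]) * (idx - lo)

print(nanpercentile_1d([1.0, float("nan"), 3.0], 50))  # 2.0
```

This mirrors why the refactor moved the logic into `nanops`: the mask/NA handling is generic and has nothing to do with `Block` internals.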
DEPR: __array__ for tz-aware Series/Index | diff --git a/doc/source/api/series.rst b/doc/source/api/series.rst
index 7d5e6037b012a..8e4c378b9fefe 100644
--- a/doc/source/api/series.rst
+++ b/doc/source/api/series.rst
@@ -26,6 +26,7 @@ Attributes
.. autosummary::
:toctree: generated/
+ Series.array
Series.values
Series.dtype
Series.ftype
@@ -58,10 +59,12 @@ Conversion
Series.convert_objects
Series.copy
Series.bool
+ Series.to_numpy
Series.to_period
Series.to_timestamp
Series.to_list
Series.get_values
+ Series.__array__
Indexing, iteration
-------------------
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 3be87c4cabaf0..f9a4a2b005045 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1227,7 +1227,7 @@ Deprecations
.. _whatsnew_0240.deprecations.datetimelike_int_ops:
Integer Addition/Subtraction with Datetimes and Timedeltas is Deprecated
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In the past, users could—in some cases—add or subtract integers or integer-dtype
arrays from :class:`Timestamp`, :class:`DatetimeIndex` and :class:`TimedeltaIndex`.
@@ -1265,6 +1265,74 @@ the object's ``freq`` attribute (:issue:`21939`, :issue:`23878`).
dti = pd.date_range('2001-01-01', periods=2, freq='7D')
dti + pd.Index([1 * dti.freq, 2 * dti.freq])
+
+.. _whatsnew_0240.deprecations.tz_aware_array:
+
+Converting Timezone-Aware Series and Index to NumPy Arrays
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The conversion from a :class:`Series` or :class:`Index` with timezone-aware
+datetime data will change to preserve timezones by default (:issue:`23569`).
+
+NumPy doesn't have a dedicated dtype for timezone-aware datetimes.
+In the past, converting a :class:`Series` or :class:`DatetimeIndex` with
+timezone-aware datetimes would convert to a NumPy array by
+
+1. converting the tz-aware data to UTC
+2. dropping the timezone-info
+3. returning a :class:`numpy.ndarray` with ``datetime64[ns]`` dtype
+
+Future versions of pandas will preserve the timezone information by returning an
+object-dtype NumPy array where each value is a :class:`Timestamp` with the correct
+timezone attached
+
+.. ipython:: python
+
+ ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
+ ser
+
+The default behavior remains the same, but issues a warning
+
+.. code-block:: python
+
+ In [8]: np.asarray(ser)
+ /bin/ipython:1: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive
+ ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray
+ with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'.
+
+ To accept the future behavior, pass 'dtype=object'.
+ To keep the old behavior, pass 'dtype="datetime64[ns]"'.
+ #!/bin/python3
+ Out[8]:
+ array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
+ dtype='datetime64[ns]')
+
+The previous or future behavior can be obtained, without any warnings, by specifying
+the ``dtype``
+
+*Previous Behavior*
+
+.. ipython:: python
+
+ np.asarray(ser, dtype='datetime64[ns]')
+
+*Future Behavior*
+
+.. ipython:: python
+
+ # New behavior
+ np.asarray(ser, dtype=object)
+
+
+Or by using :meth:`Series.to_numpy`
+
+.. ipython:: python
+
+ ser.to_numpy()
+ ser.to_numpy(dtype="datetime64[ns]")
+
+All the above applies to a :class:`DatetimeIndex` with tz-aware values as well.
+
.. _whatsnew_0240.prior_deprecations:
Removal of prior version deprecations/changes
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index a55e8759deedb..e6fbc6d1f4b15 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -524,7 +524,7 @@ def _resolution(self):
# Array-Like / EA-Interface Methods
def __array__(self, dtype=None):
- if is_object_dtype(dtype):
+ if is_object_dtype(dtype) or (dtype is None and self.tz):
return np.array(list(self), dtype=object)
elif is_int64_dtype(dtype):
return self.asi8
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index b2d72eb49d2de..bd6094596c5e1 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1020,7 +1020,7 @@ def maybe_cast_to_datetime(value, dtype, errors='raise'):
# datetime64tz is assumed to be naive which should
# be localized to the timezone.
is_dt_string = is_string_dtype(value)
- value = to_datetime(value, errors=errors)
+ value = to_datetime(value, errors=errors).array
if is_dt_string:
# Strings here are naive, so directly localize
value = value.tz_localize(dtype.tz)
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index e35ee32657509..79756d4c0cfab 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -403,6 +403,7 @@ def _hash_categories(categories, ordered=True):
from pandas.core.util.hashing import (
hash_array, _combine_hash_arrays, hash_tuples
)
+ from pandas.core.dtypes.common import is_datetime64tz_dtype, _NS_DTYPE
if len(categories) and isinstance(categories[0], tuple):
# assumes if any individual category is a tuple, then all our. ATM
@@ -420,6 +421,11 @@ def _hash_categories(categories, ordered=True):
# find a better solution
hashed = hash((tuple(categories), ordered))
return hashed
+
+ if is_datetime64tz_dtype(categories.dtype):
+ # Avoid future warning.
+ categories = categories.astype(_NS_DTYPE)
+
cat_array = hash_array(np.asarray(categories), categorize=False)
if ordered:
cat_array = np.vstack([
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index b8b73b6aab1a5..e52ab66ef9cb4 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1271,8 +1271,8 @@ def f(self, **kwargs):
def first_compat(x, axis=0):
def first(x):
+ x = x.to_numpy()
- x = np.asarray(x)
x = x[notna(x)]
if len(x) == 0:
return np.nan
@@ -1286,8 +1286,7 @@ def first(x):
def last_compat(x, axis=0):
def last(x):
-
- x = np.asarray(x)
+ x = x.to_numpy()
x = x[notna(x)]
if len(x) == 0:
return np.nan
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index f396f081267b3..ab1ac45122658 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -339,6 +339,21 @@ def _simple_new(cls, values, name=None, freq=None, tz=None, dtype=None):
# --------------------------------------------------------------------
+ def __array__(self, dtype=None):
+ if (dtype is None and isinstance(self._data, DatetimeArray)
+ and getattr(self.dtype, 'tz', None)):
+ msg = (
+ "Converting timezone-aware DatetimeArray to timezone-naive "
+ "ndarray with 'datetime64[ns]' dtype. In the future, this "
+ "will return an ndarray with 'object' dtype where each "
+ "element is a 'pandas.Timestamp' with the correct 'tz'.\n\t"
+ "To accept the future behavior, pass 'dtype=object'.\n\t"
+ "To keep the old behavior, pass 'dtype=\"datetime64[ns]\"'."
+ )
+ warnings.warn(msg, FutureWarning, stacklevel=3)
+ dtype = 'M8[ns]'
+ return np.asarray(self._data, dtype=dtype)
+
@property
def dtype(self):
return self._eadata.dtype
@@ -1114,7 +1129,6 @@ def slice_indexer(self, start=None, end=None, step=None, kind=None):
strftime = ea_passthrough(DatetimeArray.strftime)
_has_same_tz = ea_passthrough(DatetimeArray._has_same_tz)
- __array__ = ea_passthrough(DatetimeArray.__array__)
@property
def offset(self):
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 3504c6e12b896..95bf776b1f19d 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -581,7 +581,12 @@ def can_do_equal_len():
setter(item, v)
# we have an equal len ndarray/convertible to our labels
- elif np.array(value).ndim == 2:
+ # hasattr first, to avoid coercing to ndarray without reason.
+ # But we may be relying on the ndarray coercion to check ndim.
+ # Why not just convert to an ndarray earlier on if needed?
+ elif ((hasattr(value, 'ndim') and value.ndim == 2)
+ or (not hasattr(value, 'ndim') and
+ np.array(value).ndim) == 2):
# note that this coerces the dtype if we are mixed
# GH 7551
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f88114e1c9e20..4b2f93451dad0 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1447,8 +1447,20 @@ def quantile(self, qs, interpolation='linear', axis=0):
-------
Block
"""
- values = self.get_values()
- values, _ = self._try_coerce_args(values, values)
+ if self.is_datetimetz:
+ # TODO: cleanup this special case.
+ # We need to operate on i8 values for datetimetz
+ # but `Block.get_values()` returns an ndarray of objects
+ # right now. We need an API for "values to do numeric-like ops on"
+ values = self.values.asi8
+
+ # TODO: NonConsolidatableMixin shape
+ # Usual shape inconsistencies for ExtensionBlocks
+ if self.ndim > 1:
+ values = values[None, :]
+ else:
+ values = self.get_values()
+ values, _ = self._try_coerce_args(values, values)
is_empty = values.shape[axis] == 0
orig_scalar = not is_list_like(qs)
@@ -2055,10 +2067,6 @@ def _na_value(self):
def fill_value(self):
return tslibs.iNaT
- def to_dense(self):
- # TODO(DatetimeBlock): remove
- return np.asarray(self.values)
-
def get_values(self, dtype=None):
"""
return object dtype as boxed values, such as Timestamps/Timedelta
@@ -2330,6 +2338,12 @@ def get_values(self, dtype=None):
values = values.reshape(1, -1)
return values
+ def to_dense(self):
+ # we request M8[ns] dtype here, even though it discards tzinfo,
+ # as lots of code (e.g. anything using values_from_object)
+ # expects that behavior.
+ return np.asarray(self.values, dtype=_NS_DTYPE)
+
def _slice(self, slicer):
""" return a slice of my values """
if isinstance(slicer, tuple):
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 878a417b46674..7af347a141781 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -34,6 +34,7 @@
from pandas.core.indexes import base as ibase
from pandas.core.internals import (
create_block_manager_from_arrays, create_block_manager_from_blocks)
+from pandas.core.internals.arrays import extract_array
# ---------------------------------------------------------------------
# BlockManager Interface
@@ -539,7 +540,6 @@ def sanitize_array(data, index, dtype=None, copy=False,
Sanitize input data to an ndarray, copy if specified, coerce to the
dtype if specified.
"""
-
if dtype is not None:
dtype = pandas_dtype(dtype)
@@ -552,8 +552,10 @@ def sanitize_array(data, index, dtype=None, copy=False,
else:
data = data.copy()
+ data = extract_array(data, extract_numpy=True)
+
# GH#846
- if isinstance(data, (np.ndarray, Index, ABCSeries)):
+ if isinstance(data, np.ndarray):
if dtype is not None:
subarr = np.array(data, copy=False)
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 89e191f171f97..cafd3a9915fa0 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -144,7 +144,9 @@ def f(values, axis=None, skipna=True, **kwds):
def _bn_ok_dtype(dt, name):
# Bottleneck chokes on datetime64
- if (not is_object_dtype(dt) and not is_datetime_or_timedelta_dtype(dt)):
+ if (not is_object_dtype(dt) and
+ not (is_datetime_or_timedelta_dtype(dt) or
+ is_datetime64tz_dtype(dt))):
# GH 15507
# bottleneck does not properly upcast during the sum
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py
index 6f95b14993228..15df0ca2442fa 100644
--- a/pandas/core/reshape/tile.py
+++ b/pandas/core/reshape/tile.py
@@ -8,7 +8,7 @@
from pandas._libs.lib import infer_dtype
from pandas.core.dtypes.common import (
- ensure_int64, is_categorical_dtype, is_datetime64_dtype,
+ _NS_DTYPE, ensure_int64, is_categorical_dtype, is_datetime64_dtype,
is_datetime64tz_dtype, is_datetime_or_timedelta_dtype, is_integer,
is_scalar, is_timedelta64_dtype)
from pandas.core.dtypes.missing import isna
@@ -226,7 +226,10 @@ def cut(x, bins, right=True, labels=None, retbins=False, precision=3,
raise ValueError('Overlapping IntervalIndex is not accepted.')
else:
- bins = np.asarray(bins)
+ if is_datetime64tz_dtype(bins):
+ bins = np.asarray(bins, dtype=_NS_DTYPE)
+ else:
+ bins = np.asarray(bins)
bins = _convert_bin_to_numeric_type(bins, dtype)
if (np.diff(bins) < 0).any():
raise ValueError('bins must increase monotonically.')
diff --git a/pandas/core/series.py b/pandas/core/series.py
index de34227cda28a..04b8b1ed74d9c 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -21,7 +21,8 @@
is_extension_array_dtype, is_extension_type, is_hashable, is_integer,
is_iterator, is_list_like, is_scalar, is_string_like, is_timedelta64_dtype)
from pandas.core.dtypes.generic import (
- ABCDataFrame, ABCDatetimeIndex, ABCSeries, ABCSparseArray, ABCSparseSeries)
+ ABCDataFrame, ABCDatetimeArray, ABCDatetimeIndex, ABCSeries,
+ ABCSparseArray, ABCSparseSeries)
from pandas.core.dtypes.missing import (
isna, na_value_for_dtype, notna, remove_na_arraylike)
@@ -661,11 +662,66 @@ def view(self, dtype=None):
# ----------------------------------------------------------------------
# NDArray Compat
- def __array__(self, result=None):
+ def __array__(self, dtype=None):
"""
- The array interface, return my values.
- """
- return self.get_values()
+ Return the values as a NumPy array.
+
+ Users should not call this directly. Rather, it is invoked by
+ :func:`numpy.array` and :func:`numpy.asarray`.
+
+ Parameters
+ ----------
+ dtype : str or numpy.dtype, optional
+ The dtype to use for the resulting NumPy array. By default,
+ the dtype is inferred from the data.
+
+ Returns
+ -------
+ numpy.ndarray
+            The values in the series converted to a :class:`numpy.ndarray`
+ with the specified `dtype`.
+
+ See Also
+ --------
+ pandas.array : Create a new array from data.
+ Series.array : Zero-copy view to the array backing the Series.
+ Series.to_numpy : Series method for similar behavior.
+
+ Examples
+ --------
+ >>> ser = pd.Series([1, 2, 3])
+ >>> np.asarray(ser)
+ array([1, 2, 3])
+
+ For timezone-aware data, the timezones may be retained with
+ ``dtype='object'``
+
+ >>> tzser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
+ >>> np.asarray(tzser, dtype="object")
+ array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'),
+ Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')],
+ dtype=object)
+
+        Or the values may be localized to UTC and the tzinfo discarded with
+ ``dtype='datetime64[ns]'``
+
+ >>> np.asarray(tzser, dtype="datetime64[ns]") # doctest: +ELLIPSIS
+ array(['1999-12-31T23:00:00.000000000', ...],
+ dtype='datetime64[ns]')
+ """
+ if (dtype is None and isinstance(self.array, ABCDatetimeArray)
+ and getattr(self.dtype, 'tz', None)):
+ msg = (
+ "Converting timezone-aware DatetimeArray to timezone-naive "
+ "ndarray with 'datetime64[ns]' dtype. In the future, this "
+ "will return an ndarray with 'object' dtype where each "
+ "element is a 'pandas.Timestamp' with the correct 'tz'.\n\t"
+ "To accept the future behavior, pass 'dtype=object'.\n\t"
+ "To keep the old behavior, pass 'dtype=\"datetime64[ns]\"'."
+ )
+ warnings.warn(msg, FutureWarning, stacklevel=3)
+ dtype = 'M8[ns]'
+ return np.asarray(self.array, dtype)
def __array_wrap__(self, result, context=None):
"""
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index db88d94be1cab..8f8531ff97e69 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -262,11 +262,11 @@ def test_array(self, tz_naive_fixture):
arr = DatetimeArray(dti)
expected = dti.asi8.view('M8[ns]')
- result = np.array(arr)
+ result = np.array(arr, dtype='M8[ns]')
tm.assert_numpy_array_equal(result, expected)
# check that we are not making copies when setting copy=False
- result = np.array(arr, copy=False)
+ result = np.array(arr, dtype='M8[ns]', copy=False)
assert result.base is expected.base
assert result.base is not None
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index 8890593b1fa9d..72504fe09259e 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -178,6 +178,39 @@ def test_fillna_preserves_tz(self, method):
assert arr[2] is pd.NaT
assert dti[2] == pd.Timestamp('2000-01-03', tz='US/Central')
+ def test_array_interface_tz(self):
+ tz = "US/Central"
+ data = DatetimeArray(pd.date_range('2017', periods=2, tz=tz))
+ result = np.asarray(data)
+
+ expected = np.array([pd.Timestamp('2017-01-01T00:00:00', tz=tz),
+ pd.Timestamp('2017-01-02T00:00:00', tz=tz)],
+ dtype=object)
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = np.asarray(data, dtype=object)
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = np.asarray(data, dtype='M8[ns]')
+
+ expected = np.array(['2017-01-01T06:00:00',
+ '2017-01-02T06:00:00'], dtype="M8[ns]")
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_array_interface(self):
+ data = DatetimeArray(pd.date_range('2017', periods=2))
+ expected = np.array(['2017-01-01T00:00:00', '2017-01-02T00:00:00'],
+ dtype='datetime64[ns]')
+
+ result = np.asarray(data)
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = np.asarray(data, dtype=object)
+ expected = np.array([pd.Timestamp('2017-01-01T00:00:00'),
+ pd.Timestamp('2017-01-02T00:00:00')],
+ dtype=object)
+ tm.assert_numpy_array_equal(result, expected)
+
class TestSequenceToDT64NS(object):
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index 56c9395d0f802..965e5e000d026 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
from datetime import datetime
-from warnings import catch_warnings, simplefilter
+from warnings import catch_warnings, filterwarnings, simplefilter
import numpy as np
import pytest
@@ -278,17 +278,20 @@ def test_array_equivalent():
TimedeltaIndex([0, np.nan]))
assert not array_equivalent(
TimedeltaIndex([0, np.nan]), TimedeltaIndex([1, np.nan]))
- assert array_equivalent(DatetimeIndex([0, np.nan], tz='US/Eastern'),
- DatetimeIndex([0, np.nan], tz='US/Eastern'))
- assert not array_equivalent(
- DatetimeIndex([0, np.nan], tz='US/Eastern'), DatetimeIndex(
- [1, np.nan], tz='US/Eastern'))
- assert not array_equivalent(
- DatetimeIndex([0, np.nan]), DatetimeIndex(
- [0, np.nan], tz='US/Eastern'))
- assert not array_equivalent(
- DatetimeIndex([0, np.nan], tz='CET'), DatetimeIndex(
- [0, np.nan], tz='US/Eastern'))
+ with catch_warnings():
+ filterwarnings("ignore", "Converting timezone", FutureWarning)
+ assert array_equivalent(DatetimeIndex([0, np.nan], tz='US/Eastern'),
+ DatetimeIndex([0, np.nan], tz='US/Eastern'))
+ assert not array_equivalent(
+ DatetimeIndex([0, np.nan], tz='US/Eastern'), DatetimeIndex(
+ [1, np.nan], tz='US/Eastern'))
+ assert not array_equivalent(
+ DatetimeIndex([0, np.nan]), DatetimeIndex(
+ [0, np.nan], tz='US/Eastern'))
+ assert not array_equivalent(
+ DatetimeIndex([0, np.nan], tz='CET'), DatetimeIndex(
+ [0, np.nan], tz='US/Eastern'))
+
assert not array_equivalent(
DatetimeIndex([0, np.nan]), TimedeltaIndex([0, np.nan]))
diff --git a/pandas/tests/indexes/datetimes/test_datetime.py b/pandas/tests/indexes/datetimes/test_datetime.py
index e76de2ebedf67..e1ba0e1708442 100644
--- a/pandas/tests/indexes/datetimes/test_datetime.py
+++ b/pandas/tests/indexes/datetimes/test_datetime.py
@@ -392,3 +392,45 @@ def test_unique(self, arr, expected):
# GH 21737
# Ensure the underlying data is consistent
assert result[0] == expected[0]
+
+ def test_asarray_tz_naive(self):
+ # This shouldn't produce a warning.
+ idx = pd.date_range('2000', periods=2)
+ # M8[ns] by default
+ with tm.assert_produces_warning(None):
+ result = np.asarray(idx)
+
+ expected = np.array(['2000-01-01', '2000-01-02'], dtype='M8[ns]')
+ tm.assert_numpy_array_equal(result, expected)
+
+ # optionally, object
+ with tm.assert_produces_warning(None):
+ result = np.asarray(idx, dtype=object)
+
+ expected = np.array([pd.Timestamp('2000-01-01'),
+ pd.Timestamp('2000-01-02')])
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_asarray_tz_aware(self):
+ tz = 'US/Central'
+ idx = pd.date_range('2000', periods=2, tz=tz)
+ expected = np.array(['2000-01-01T06', '2000-01-02T06'], dtype='M8[ns]')
+ # We warn by default and return an ndarray[M8[ns]]
+ with tm.assert_produces_warning(FutureWarning):
+ result = np.asarray(idx)
+
+ tm.assert_numpy_array_equal(result, expected)
+
+ # Old behavior with no warning
+ with tm.assert_produces_warning(None):
+ result = np.asarray(idx, dtype="M8[ns]")
+
+ tm.assert_numpy_array_equal(result, expected)
+
+ # Future behavior with no warning
+ expected = np.array([pd.Timestamp("2000-01-01", tz=tz),
+ pd.Timestamp("2000-01-02", tz=tz)])
+ with tm.assert_produces_warning(None):
+ result = np.asarray(idx, dtype=object)
+
+ tm.assert_numpy_array_equal(result, expected)
diff --git a/pandas/tests/series/test_timeseries.py b/pandas/tests/series/test_timeseries.py
index fcb486d832c76..07808008c081c 100644
--- a/pandas/tests/series/test_timeseries.py
+++ b/pandas/tests/series/test_timeseries.py
@@ -1036,3 +1036,44 @@ def test_view_tz(self):
946879200000000000,
946965600000000000])
tm.assert_series_equal(result, expected)
+
+ def test_asarray_tz_naive(self):
+ # This shouldn't produce a warning.
+ ser = pd.Series(pd.date_range('2000', periods=2))
+ expected = np.array(['2000-01-01', '2000-01-02'], dtype='M8[ns]')
+ with tm.assert_produces_warning(None):
+ result = np.asarray(ser)
+
+ tm.assert_numpy_array_equal(result, expected)
+
+ # optionally, object
+ with tm.assert_produces_warning(None):
+ result = np.asarray(ser, dtype=object)
+
+ expected = np.array([pd.Timestamp('2000-01-01'),
+ pd.Timestamp('2000-01-02')])
+ tm.assert_numpy_array_equal(result, expected)
+
+ def test_asarray_tz_aware(self):
+ tz = 'US/Central'
+ ser = pd.Series(pd.date_range('2000', periods=2, tz=tz))
+ expected = np.array(['2000-01-01T06', '2000-01-02T06'], dtype='M8[ns]')
+ # We warn by default and return an ndarray[M8[ns]]
+ with tm.assert_produces_warning(FutureWarning):
+ result = np.asarray(ser)
+
+ tm.assert_numpy_array_equal(result, expected)
+
+ # Old behavior with no warning
+ with tm.assert_produces_warning(None):
+ result = np.asarray(ser, dtype="M8[ns]")
+
+ tm.assert_numpy_array_equal(result, expected)
+
+ # Future behavior with no warning
+ expected = np.array([pd.Timestamp("2000-01-01", tz=tz),
+ pd.Timestamp("2000-01-02", tz=tz)])
+ with tm.assert_produces_warning(None):
+ result = np.asarray(ser, dtype=object)
+
+ tm.assert_numpy_array_equal(result, expected)
 | This deprecates the current behavior when converting tz-aware Series
or Index to an ndarray. Previously, we converted to M8[ns], throwing
away the timezone information. In the future, we will return an
object-dtype array filled with Timestamps, each of which has the correct
tz.
```python
In [1]: import pandas as pd; import numpy as np
In [2]: ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
In [3]: np.asarray(ser)
/bin/ipython:1: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'.
To accept the future behavior, pass 'dtype=object'.
To keep the old behavior, pass 'dtype="datetime64[ns]"'.
#!/Users/taugspurger/Envs/pandas-dev/bin/python3
Out[3]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
dtype='datetime64[ns]')
```
xref https://github.com/pandas-dev/pandas/issues/23569
closes https://github.com/pandas-dev/pandas/issues/15750 | https://api.github.com/repos/pandas-dev/pandas/pulls/24596 | 2019-01-03T17:55:54Z | 2019-01-05T14:51:12Z | 2019-01-05T14:51:12Z | 2019-12-30T20:16:47Z |
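To silence the warning, either behavior described in this PR can be requested explicitly through the `dtype` argument. A minimal sketch (assuming a pandas version that accepts both dtypes here, per the tests added above):

```python
import numpy as np
import pandas as pd

ser = pd.Series(pd.date_range('2000', periods=2, tz='CET'))

# Old behavior, explicitly: values converted to UTC, timezone dropped.
naive = np.asarray(ser, dtype='datetime64[ns]')

# Future behavior, explicitly: object-dtype array of tz-aware Timestamps.
boxed = np.asarray(ser, dtype=object)
```

Here `naive` holds UTC-localized `datetime64[ns]` values (2000-01-01 00:00 CET becomes 1999-12-31 23:00 UTC), while `boxed` keeps each element as a `pd.Timestamp` carrying its `tz`.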
DOC: fix to_numpy explanation for tz aware data | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 8dca000dfa969..73ae26150b946 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -99,27 +99,6 @@ are two possibly useful representations:
Timezones may be preserved with ``dtype=object``
-.. ipython:: python
-
- ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
- ser.to_numpy(dtype=object)
-
-Or thrown away with ``dtype='datetime64[ns]'``
-
- ser.to_numpy(dtype="datetime64[ns]")
-
-:meth:`~Series.to_numpy` gives some control over the ``dtype`` of the
-resulting :class:`ndarray`. For example, consider datetimes with timezones.
-NumPy doesn't have a dtype to represent timezone-aware datetimes, so there
-are two possibly useful representations:
-
-1. An object-dtype :class:`ndarray` with :class:`Timestamp` objects, each
- with the correct ``tz``
-2. A ``datetime64[ns]`` -dtype :class:`ndarray`, where the values have
- been converted to UTC and the timezone discarded
-
-Timezones may be preserved with ``dtype=object``
-
.. ipython:: python
ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index a391d73b8922e..f56ad710973dd 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -2425,21 +2425,25 @@ a convert on an aware stamp.
.. note::
Using :meth:`Series.to_numpy` on a ``Series``, returns a NumPy array of the data.
- These values are converted to UTC, as NumPy does not currently support timezones (even though it is *printing* in the local timezone!).
+ NumPy does not currently support timezones (even though it is *printing* in the local timezone!),
+ therefore an object array of Timestamps is returned for timezone aware data:
.. ipython:: python
s_naive.to_numpy()
s_aware.to_numpy()
- Further note that once converted to a NumPy array these would lose the tz tenor.
+ By converting to an object array of Timestamps, it preserves the timezone
+ information. For example, when converting back to a Series:
.. ipython:: python
pd.Series(s_aware.to_numpy())
- However, these can be easily converted:
+ However, if you want an actual NumPy ``datetime64[ns]`` array (with the values
+ converted to UTC) instead of an array of objects, you can specify the
+ ``dtype`` argument:
.. ipython:: python
- pd.Series(s_aware.to_numpy()).dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
+ s_aware.to_numpy(dtype='datetime64[ns]')
diff --git a/pandas/core/base.py b/pandas/core/base.py
index c37ab48de7cb8..c02ba88ea7fda 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -899,7 +899,6 @@ def to_numpy(self, dtype=None, copy=False):
``to_numpy()`` will return a NumPy array and the categorical dtype
will be lost.
-
For NumPy dtypes, this will be a reference to the actual data stored
in this Series or Index (assuming ``copy=False``). Modifying the result
in place will modify the data stored in the Series or Index (not that
@@ -910,7 +909,7 @@ def to_numpy(self, dtype=None, copy=False):
expensive. When you need a no-copy reference to the underlying data,
:attr:`Series.array` should be used instead.
- This table lays out the different dtypes and return types of
+ This table lays out the different dtypes and default return types of
``to_numpy()`` for various dtypes within pandas.
================== ================================
@@ -920,6 +919,7 @@ def to_numpy(self, dtype=None, copy=False):
period ndarray[object] (Periods)
interval ndarray[object] (Intervals)
IntegerNA ndarray[object]
+ datetime64[ns] datetime64[ns]
datetime64[ns, tz] ndarray[object] (Timestamps)
================== ================================
| Some clean-up now `to_numpy` preserves timezone and no longer converts to UTC datetime64 by default (after #24024), the example in timeseries.rst was failing due to that. | https://api.github.com/repos/pandas-dev/pandas/pulls/24595 | 2019-01-03T16:26:52Z | 2019-01-03T21:08:46Z | 2019-01-03T21:08:46Z | 2019-01-03T22:49:17Z |
Remove unhittable methods in internals | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index d12114bd951ba..3b2c13af785d4 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -26,7 +26,7 @@
is_re, is_re_compilable, is_sparse, is_timedelta64_dtype, pandas_dtype)
import pandas.core.dtypes.concat as _concat
from pandas.core.dtypes.dtypes import (
- CategoricalDtype, DatetimeTZDtype, ExtensionDtype, PandasExtensionDtype)
+ CategoricalDtype, ExtensionDtype, PandasExtensionDtype)
from pandas.core.dtypes.generic import (
ABCDataFrame, ABCDatetimeIndex, ABCExtensionArray, ABCIndexClass,
ABCSeries)
@@ -1507,15 +1507,8 @@ def _nanpercentile(values, q, axis, **kw):
len(values)).reshape(len(values),
len(qs))
else:
-
- try:
- result = _nanpercentile(values, np.array(qs) * 100,
- axis=axis, **kw)
- except ValueError:
-
- # older numpies don't handle an array for q
- result = [_nanpercentile(values, q * 100,
- axis=axis, **kw) for q in qs]
+ result = _nanpercentile(values, np.array(qs) * 100,
+ axis=axis, **kw)
result = np.array(result, copy=False)
if self.ndim > 1:
@@ -1639,13 +1632,6 @@ def shape(self):
return (len(self.values)),
return (len(self.mgr_locs), len(self.values))
- def get_values(self, dtype=None):
- """ need to to_dense myself (and always return a ndim sized object) """
- values = self.values.to_dense()
- if values.ndim == self.ndim - 1:
- values = values.reshape((1,) + values.shape)
- return values
-
def iget(self, col):
if self.ndim == 2 and isinstance(col, tuple):
@@ -1700,49 +1686,9 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0,
new_values = self._try_coerce_result(new_values)
return [self.make_block(values=new_values)]
- def _slice(self, slicer):
- """ return a slice of my values (but densify first) """
- return self.get_values()[slicer]
-
def _try_cast_result(self, result, dtype=None):
return result
- def _unstack(self, unstacker_func, new_columns, n_rows, fill_value):
- """Return a list of unstacked blocks of self
-
- Parameters
- ----------
- unstacker_func : callable
- Partially applied unstacker.
- new_columns : Index
- All columns of the unstacked BlockManager.
- n_rows : int
- Only used in ExtensionBlock.unstack
- fill_value : int
- Only used in ExtensionBlock.unstack
-
- Returns
- -------
- blocks : list of Block
- New blocks of unstacked values.
- mask : array_like of bool
- The mask of columns of `blocks` we should keep.
- """
- # NonConsolidatable blocks can have a single item only, so we return
- # one block per item
- unstacker = unstacker_func(self.values.T)
-
- new_placement, new_values, mask = self._get_unstack_items(
- unstacker, new_columns
- )
-
- new_values = new_values.T[mask]
- new_placement = new_placement[mask]
-
- blocks = [self.make_block_same_class(vals, [place])
- for vals, place in zip(new_values, new_placement)]
- return blocks, mask
-
def _get_unstack_items(self, unstacker, new_columns):
"""
Get the placement, values, and mask for a Block unstack.
@@ -2330,11 +2276,11 @@ def to_native_types(self, slicer=None, na_rep=None, date_format=None,
i8values = i8values[..., slicer]
from pandas.io.formats.format import _get_format_datetime64_from_values
- format = _get_format_datetime64_from_values(values, date_format)
+ fmt = _get_format_datetime64_from_values(values, date_format)
result = tslib.format_array_from_datetime(
i8values.ravel(), tz=getattr(self.values, 'tz', None),
- format=format, na_rep=na_rep).reshape(i8values.shape)
+ format=fmt, na_rep=na_rep).reshape(i8values.shape)
return np.atleast_2d(result)
def should_store(self, value):
@@ -2400,8 +2346,6 @@ def _maybe_coerce_values(self, values, dtype=None):
values = self._holder(values)
if dtype is not None:
- if isinstance(dtype, compat.string_types):
- dtype = DatetimeTZDtype.construct_from_string(dtype)
values = type(values)(values, dtype=dtype)
if values.tz is None:
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index 418046e42d581..b877ed93f07a2 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -3245,9 +3245,7 @@ def test_setitem(self):
b1 = df._data.blocks[1]
b2 = df._data.blocks[2]
tm.assert_extension_array_equal(b1.values, b2.values)
- if b1.values._data.base is not None:
- # base being None suffices to assure a copy was made
- assert id(b1.values._data.base) != id(b2.values._data.base)
+ assert id(b1.values._data.base) != id(b2.values._data.base)
# with nan
df2 = df.copy()
| And one more 24024 in test cleanup | https://api.github.com/repos/pandas-dev/pandas/pulls/24594 | 2019-01-03T15:56:39Z | 2019-01-03T17:21:20Z | 2019-01-03T17:21:20Z | 2019-01-03T18:26:47Z |
DOC: hide warning from arrow about deprecated MultiIndex labels | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 967648f3a168a..2149ee7fb46d9 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -4647,6 +4647,7 @@ Write to a feather file.
Read from a feather file.
.. ipython:: python
+ :okwarning:
result = pd.read_feather('example.feather')
result
@@ -4721,6 +4722,7 @@ Write to a parquet file.
Read from a parquet file.
.. ipython:: python
+ :okwarning:
result = pd.read_parquet('example_fp.parquet', engine='fastparquet')
result = pd.read_parquet('example_pa.parquet', engine='pyarrow')
@@ -4791,6 +4793,7 @@ Partitioning Parquet files
Parquet supports partitioning of data based on the values of one or more columns.
.. ipython:: python
+ :okwarning:
df = pd.DataFrame({'a': [0, 0, 1, 1], 'b': [0, 1, 0, 1]})
df.to_parquet(fname='test', engine='pyarrow',
| This is fixed upstream in arrow (https://github.com/apache/arrow/pull/3120), but until a new release is there, avoiding the warning in our docs to have a cleaner doc build output. | https://api.github.com/repos/pandas-dev/pandas/pulls/24591 | 2019-01-03T13:24:08Z | 2019-01-03T15:19:13Z | 2019-01-03T15:19:13Z | 2019-01-03T15:58:28Z |
DOC: Bump fastparquet version | diff --git a/ci/deps/azure-macos-35.yaml b/ci/deps/azure-macos-35.yaml
index b6dc2b3c27e8d..58abbabce3d86 100644
--- a/ci/deps/azure-macos-35.yaml
+++ b/ci/deps/azure-macos-35.yaml
@@ -14,7 +14,6 @@ dependencies:
- numpy=1.12.0
- openpyxl=2.5.5
- pyarrow
- - fastparquet
- pytables
- python=3.5*
- pytz
diff --git a/ci/deps/azure-windows-36.yaml b/ci/deps/azure-windows-36.yaml
index 817aab66c65aa..7b132a134c44e 100644
--- a/ci/deps/azure-windows-36.yaml
+++ b/ci/deps/azure-windows-36.yaml
@@ -6,7 +6,7 @@ dependencies:
- blosc
- bottleneck
- boost-cpp<1.67
- - fastparquet
+ - fastparquet>=0.2.1
- matplotlib
- numexpr
- numpy=1.14*
@@ -18,7 +18,6 @@ dependencies:
- python=3.6.6
- pytz
- scipy
- - thrift=0.10*
- xlrd
- xlsxwriter
- xlwt
diff --git a/ci/deps/travis-27.yaml b/ci/deps/travis-27.yaml
index 8d14673ebde6d..0f2194e71de31 100644
--- a/ci/deps/travis-27.yaml
+++ b/ci/deps/travis-27.yaml
@@ -6,7 +6,7 @@ dependencies:
- beautifulsoup4
- bottleneck
- cython=0.28.2
- - fastparquet
+ - fastparquet>=0.2.1
- gcsfs
- html5lib
- ipython
diff --git a/ci/deps/travis-36-doc.yaml b/ci/deps/travis-36-doc.yaml
index c345af0a2983c..26f3a17432ab2 100644
--- a/ci/deps/travis-36-doc.yaml
+++ b/ci/deps/travis-36-doc.yaml
@@ -6,7 +6,7 @@ dependencies:
- beautifulsoup4
- bottleneck
- cython>=0.28.2
- - fastparquet
+ - fastparquet>=0.2.1
- gitpython
- html5lib
- hypothesis>=3.58.0
diff --git a/ci/deps/travis-36.yaml b/ci/deps/travis-36.yaml
index 1085ecd008fa6..74db888d588f4 100644
--- a/ci/deps/travis-36.yaml
+++ b/ci/deps/travis-36.yaml
@@ -7,7 +7,7 @@ dependencies:
- botocore>=1.11
- cython>=0.28.2
- dask
- - fastparquet
+ - fastparquet>=0.2.1
- gcsfs
- geopandas
- html5lib
diff --git a/doc/source/install.rst b/doc/source/install.rst
index e25c343a1cce0..fa3ff2f20b150 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -258,7 +258,7 @@ Optional Dependencies
* `xarray <http://xarray.pydata.org>`__: pandas like handling for > 2 dims, needed for converting Panels to xarray objects. Version 0.7.0 or higher is recommended.
* `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage, Version 3.4.2 or higher
* `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.7.0): necessary for feather-based storage.
-* `Apache Parquet <https://parquet.apache.org/>`__, either `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.7.0) or `fastparquet <https://fastparquet.readthedocs.io/en/latest>`__ (>= 0.1.2) for parquet-based storage. The `snappy <https://pypi.org/project/python-snappy>`__ and `brotli <https://pypi.org/project/brotlipy>`__ are available for compression support.
+* `Apache Parquet <https://parquet.apache.org/>`__, either `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.7.0) or `fastparquet <https://fastparquet.readthedocs.io/en/latest>`__ (>= 0.2.1) for parquet-based storage. The `snappy <https://pypi.org/project/python-snappy>`__ and `brotli <https://pypi.org/project/brotlipy>`__ are available for compression support.
* `SQLAlchemy <http://www.sqlalchemy.org>`__: for SQL database support. Version 0.8.1 or higher recommended. Besides SQLAlchemy, you also need a database specific driver. You can find an overview of supported drivers for each SQL dialect in the `SQLAlchemy docs <http://docs.sqlalchemy.org/en/latest/dialects/index.html>`__. Some common drivers are:
* `psycopg2 <http://initd.org/psycopg/>`__: for PostgreSQL
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 3be87c4cabaf0..3a3fde2772b29 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -474,7 +474,7 @@ If installed, we now require:
+-----------------+-----------------+----------+
| bottleneck | 1.2.0 | |
+-----------------+-----------------+----------+
-| fastparquet | 0.1.2 | |
+| fastparquet | 0.2.1 | |
+-----------------+-----------------+----------+
| matplotlib | 2.0.0 | |
+-----------------+-----------------+----------+
diff --git a/environment.yml b/environment.yml
index 42da3e31de548..a980499029478 100644
--- a/environment.yml
+++ b/environment.yml
@@ -29,7 +29,7 @@ dependencies:
- botocore>=1.11
- boto3
- bottleneck>=1.2.0
- - fastparquet>=0.1.2
+ - fastparquet>=0.2.1
- html5lib
- ipython>=5.6.0
- ipykernel
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index 4e52c35c6b1e6..a40fe0c9aa74f 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -151,9 +151,9 @@ def __init__(self):
"\nor via pip\n"
"pip install -U fastparquet"
)
- if LooseVersion(fastparquet.__version__) < '0.1.2':
+ if LooseVersion(fastparquet.__version__) < '0.2.1':
raise ImportError(
- "fastparquet >= 0.1.2 is required for parquet "
+ "fastparquet >= 0.2.1 is required for parquet "
"support\n\n"
"you can install via conda\n"
"conda install fastparquet -c conda-forge\n"
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index d985ca4eb67ea..8833c6f7813c6 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -59,15 +59,6 @@ def fp():
return 'fastparquet'
-@pytest.fixture
-def fp_lt_014():
- if not _HAVE_FASTPARQUET:
- pytest.skip("fastparquet is not installed")
- if LooseVersion(fastparquet.__version__) >= LooseVersion('0.1.4'):
- pytest.skip("fastparquet is >= 0.1.4")
- return 'fastparquet'
-
-
@pytest.fixture
def df_compat():
return pd.DataFrame({'A': [1, 2, 3], 'B': 'foo'})
@@ -510,16 +501,6 @@ def test_categorical(self, fp):
df = pd.DataFrame({'a': pd.Categorical(list('abc'))})
check_round_trip(df, fp)
- def test_datetime_tz(self, fp_lt_014):
-
- # fastparquet<0.1.4 doesn't preserve tz
- df = pd.DataFrame({'a': pd.date_range('20130101', periods=3,
- tz='US/Eastern')})
- # warns on the coercion
- with catch_warnings(record=True):
- check_round_trip(df, fp_lt_014,
- expected=df.astype('datetime64[ns]'))
-
def test_filter_row_groups(self, fp):
d = {'a': list(range(0, 3))}
df = pd.DataFrame(d)
diff --git a/requirements-dev.txt b/requirements-dev.txt
index a7aa0bacb5bd6..48bd95470d391 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -18,7 +18,7 @@ blosc
botocore>=1.11
boto3
bottleneck>=1.2.0
-fastparquet>=0.1.2
+fastparquet>=0.2.1
html5lib
ipython>=5.6.0
ipykernel
| Should we just bump the min version to 0.2.1? It's quite new, but I think reading datetime data is somewhat common :) It'll save us bug reports. | https://api.github.com/repos/pandas-dev/pandas/pulls/24590 | 2019-01-03T13:17:47Z | 2019-01-05T14:50:47Z | 2019-01-05T14:50:47Z | 2019-11-21T13:54:12Z |
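The runtime gate updated in `pandas/io/parquet.py` above is a generic minimum-version check. A self-contained sketch of the same idea — written with a tiny hand-rolled parser instead of the `distutils.version.LooseVersion` the real code uses, so it carries no deprecated imports; the function names here are illustrative, not pandas API:

```python
def parse_version(version):
    # "0.2.1" -> (0, 2, 1); enough for plain dotted release strings.
    return tuple(int(part) for part in version.split("."))


def check_min_version(installed, minimum="0.2.1", package="fastparquet"):
    # Mirror the ImportError-raising gate from the diff: refuse to
    # proceed when the installed version is older than the minimum.
    if parse_version(installed) < parse_version(minimum):
        raise ImportError(
            "{} >= {} is required for parquet support".format(package, minimum))
    return True


check_min_version("0.3.0")  # passes silently
```

Tuple comparison makes `(0, 1, 2) < (0, 2, 1)` behave numerically rather than lexicographically, which is the property version strings need (and which plain string comparison would get wrong, e.g. `"0.10" < "0.2"`).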
TST/REF: Add more pytest idiom to tests/tslib | diff --git a/pandas/tests/tslibs/test_array_to_datetime.py b/pandas/tests/tslibs/test_array_to_datetime.py
index ff8880257b225..f5b036dde2094 100644
--- a/pandas/tests/tslibs/test_array_to_datetime.py
+++ b/pandas/tests/tslibs/test_array_to_datetime.py
@@ -12,159 +12,145 @@
import pandas.util.testing as tm
-class TestParseISO8601(object):
- @pytest.mark.parametrize('date_str, exp', [
- ('2011-01-02', datetime(2011, 1, 2)),
- ('2011-1-2', datetime(2011, 1, 2)),
- ('2011-01', datetime(2011, 1, 1)),
- ('2011-1', datetime(2011, 1, 1)),
- ('2011 01 02', datetime(2011, 1, 2)),
- ('2011.01.02', datetime(2011, 1, 2)),
- ('2011/01/02', datetime(2011, 1, 2)),
- ('2011\\01\\02', datetime(2011, 1, 2)),
- ('2013-01-01 05:30:00', datetime(2013, 1, 1, 5, 30)),
- ('2013-1-1 5:30:00', datetime(2013, 1, 1, 5, 30))])
- def test_parsers_iso8601(self, date_str, exp):
- # GH#12060
- # test only the iso parser - flexibility to different
- # separators and leadings 0s
- # Timestamp construction falls back to dateutil
- actual = tslib._test_parse_iso8601(date_str)
- assert actual == exp
-
- @pytest.mark.parametrize(
- 'date_str',
- ['2011-01/02', '2011^11^11',
- '201401', '201111', '200101',
- # mixed separated and unseparated
- '2005-0101', '200501-01',
- '20010101 12:3456',
- '20010101 1234:56',
- # HHMMSS must have two digits in
- # each component if unseparated
- '20010101 1', '20010101 123',
- '20010101 12345', '20010101 12345Z',
- # wrong separator for HHMMSS
- '2001-01-01 12-34-56'])
- def test_parsers_iso8601_invalid(self, date_str):
- # separators must all match - YYYYMM not valid
- with pytest.raises(ValueError):
- tslib._test_parse_iso8601(date_str)
-
-
-class TestArrayToDatetime(object):
- def test_parsing_valid_dates(self):
- arr = np.array(['01-01-2013', '01-02-2013'], dtype=object)
- result, _ = tslib.array_to_datetime(arr)
- expected = ['2013-01-01T00:00:00.000000000-0000',
- '2013-01-02T00:00:00.000000000-0000']
- tm.assert_numpy_array_equal(
- result,
- np_array_datetime64_compat(expected, dtype='M8[ns]'))
+@pytest.mark.parametrize("data,expected", [
+ (["01-01-2013", "01-02-2013"],
+ ["2013-01-01T00:00:00.000000000-0000",
+ "2013-01-02T00:00:00.000000000-0000"]),
+ (["Mon Sep 16 2013", "Tue Sep 17 2013"],
+ ["2013-09-16T00:00:00.000000000-0000",
+ "2013-09-17T00:00:00.000000000-0000"])
+])
+def test_parsing_valid_dates(data, expected):
+ arr = np.array(data, dtype=object)
+ result, _ = tslib.array_to_datetime(arr)
+
+ expected = np_array_datetime64_compat(expected, dtype="M8[ns]")
+ tm.assert_numpy_array_equal(result, expected)
+
+
+@pytest.mark.parametrize("dt_string, expected_tz", [
+ ["01-01-2013 08:00:00+08:00", 480],
+ ["2013-01-01T08:00:00.000000000+0800", 480],
+ ["2012-12-31T16:00:00.000000000-0800", -480],
+ ["12-31-2012 23:00:00-01:00", -60]
+])
+def test_parsing_timezone_offsets(dt_string, expected_tz):
+ # All of these datetime strings with offsets are equivalent
+ # to the same datetime after the timezone offset is added.
+ arr = np.array(["01-01-2013 00:00:00"], dtype=object)
+ expected, _ = tslib.array_to_datetime(arr)
+
+ arr = np.array([dt_string], dtype=object)
+ result, result_tz = tslib.array_to_datetime(arr)
+
+ tm.assert_numpy_array_equal(result, expected)
+ assert result_tz is pytz.FixedOffset(expected_tz)
+
+
+def test_parsing_non_iso_timezone_offset():
+ dt_string = "01-01-2013T00:00:00.000000000+0000"
+ arr = np.array([dt_string], dtype=object)
+
+ result, result_tz = tslib.array_to_datetime(arr)
+ expected = np.array([np.datetime64("2013-01-01 00:00:00.000000000")])
+
+ tm.assert_numpy_array_equal(result, expected)
+ assert result_tz is pytz.FixedOffset(0)
+
+
+def test_parsing_different_timezone_offsets():
+ # see gh-17697
+ data = ["2015-11-18 15:30:00+05:30", "2015-11-18 15:30:00+06:30"]
+ data = np.array(data, dtype=object)
+
+ result, result_tz = tslib.array_to_datetime(data)
+ expected = np.array([datetime(2015, 11, 18, 15, 30,
+ tzinfo=tzoffset(None, 19800)),
+ datetime(2015, 11, 18, 15, 30,
+ tzinfo=tzoffset(None, 23400))],
+ dtype=object)
+
+ tm.assert_numpy_array_equal(result, expected)
+ assert result_tz is None
+
+
+@pytest.mark.parametrize("data", [
+ ["-352.737091", "183.575577"],
+ ["1", "2", "3", "4", "5"]
+])
+def test_number_looking_strings_not_into_datetime(data):
+ # see gh-4601
+ #
+ # These strings don't look like datetimes, so
+ # they shouldn't be attempted to be converted.
+ arr = np.array(data, dtype=object)
+ result, _ = tslib.array_to_datetime(arr, errors="ignore")
+
+ tm.assert_numpy_array_equal(result, arr)
+
+
+@pytest.mark.parametrize("invalid_date", [
+ date(1000, 1, 1),
+ datetime(1000, 1, 1),
+ "1000-01-01",
+ "Jan 1, 1000",
+ np.datetime64("1000-01-01")])
+@pytest.mark.parametrize("errors", ["coerce", "raise"])
+def test_coerce_outside_ns_bounds(invalid_date, errors):
+ arr = np.array([invalid_date], dtype="object")
+ kwargs = dict(values=arr, errors=errors)
+
+ if errors == "raise":
+ msg = "Out of bounds nanosecond timestamp"
+
+ with pytest.raises(ValueError, match=msg):
+ tslib.array_to_datetime(**kwargs)
+ else: # coerce.
+ result, _ = tslib.array_to_datetime(**kwargs)
+ expected = np.array([iNaT], dtype="M8[ns]")
- arr = np.array(['Mon Sep 16 2013', 'Tue Sep 17 2013'], dtype=object)
- result, _ = tslib.array_to_datetime(arr)
- expected = ['2013-09-16T00:00:00.000000000-0000',
- '2013-09-17T00:00:00.000000000-0000']
- tm.assert_numpy_array_equal(
- result,
- np_array_datetime64_compat(expected, dtype='M8[ns]'))
-
- @pytest.mark.parametrize('dt_string, expected_tz', [
- ['01-01-2013 08:00:00+08:00', pytz.FixedOffset(480)],
- ['2013-01-01T08:00:00.000000000+0800', pytz.FixedOffset(480)],
- ['2012-12-31T16:00:00.000000000-0800', pytz.FixedOffset(-480)],
- ['12-31-2012 23:00:00-01:00', pytz.FixedOffset(-60)]])
- def test_parsing_timezone_offsets(self, dt_string, expected_tz):
- # All of these datetime strings with offsets are equivalent
- # to the same datetime after the timezone offset is added
- arr = np.array(['01-01-2013 00:00:00'], dtype=object)
- expected, _ = tslib.array_to_datetime(arr)
-
- arr = np.array([dt_string], dtype=object)
- result, result_tz = tslib.array_to_datetime(arr)
tm.assert_numpy_array_equal(result, expected)
- assert result_tz is expected_tz
- def test_parsing_non_iso_timezone_offset(self):
- dt_string = '01-01-2013T00:00:00.000000000+0000'
- arr = np.array([dt_string], dtype=object)
- result, result_tz = tslib.array_to_datetime(arr)
- expected = np.array([np.datetime64('2013-01-01 00:00:00.000000000')])
- tm.assert_numpy_array_equal(result, expected)
- assert result_tz is pytz.FixedOffset(0)
-
- def test_parsing_different_timezone_offsets(self):
- # GH 17697
- data = ["2015-11-18 15:30:00+05:30", "2015-11-18 15:30:00+06:30"]
- data = np.array(data, dtype=object)
- result, result_tz = tslib.array_to_datetime(data)
- expected = np.array([datetime(2015, 11, 18, 15, 30,
- tzinfo=tzoffset(None, 19800)),
- datetime(2015, 11, 18, 15, 30,
- tzinfo=tzoffset(None, 23400))],
- dtype=object)
- tm.assert_numpy_array_equal(result, expected)
- assert result_tz is None
-
- def test_number_looking_strings_not_into_datetime(self):
- # GH#4601
- # These strings don't look like datetimes so they shouldn't be
- # attempted to be converted
- arr = np.array(['-352.737091', '183.575577'], dtype=object)
- result, _ = tslib.array_to_datetime(arr, errors='ignore')
- tm.assert_numpy_array_equal(result, arr)
- arr = np.array(['1', '2', '3', '4', '5'], dtype=object)
- result, _ = tslib.array_to_datetime(arr, errors='ignore')
- tm.assert_numpy_array_equal(result, arr)
+def test_coerce_outside_ns_bounds_one_valid():
+ arr = np.array(["1/1/1000", "1/1/2000"], dtype=object)
+ result, _ = tslib.array_to_datetime(arr, errors="coerce")
- @pytest.mark.parametrize('invalid_date', [
- date(1000, 1, 1),
- datetime(1000, 1, 1),
- '1000-01-01',
- 'Jan 1, 1000',
- np.datetime64('1000-01-01')])
- def test_coerce_outside_ns_bounds(self, invalid_date):
- arr = np.array([invalid_date], dtype='object')
- with pytest.raises(ValueError):
- tslib.array_to_datetime(arr, errors='raise')
-
- result, _ = tslib.array_to_datetime(arr, errors='coerce')
- expected = np.array([iNaT], dtype='M8[ns]')
- tm.assert_numpy_array_equal(result, expected)
+ expected = [iNaT, "2000-01-01T00:00:00.000000000-0000"]
+ expected = np_array_datetime64_compat(expected, dtype="M8[ns]")
- def test_coerce_outside_ns_bounds_one_valid(self):
- arr = np.array(['1/1/1000', '1/1/2000'], dtype=object)
- result, _ = tslib.array_to_datetime(arr, errors='coerce')
- expected = [iNaT,
- '2000-01-01T00:00:00.000000000-0000']
- tm.assert_numpy_array_equal(
- result,
- np_array_datetime64_compat(expected, dtype='M8[ns]'))
+ tm.assert_numpy_array_equal(result, expected)
- def test_coerce_of_invalid_datetimes(self):
- arr = np.array(['01-01-2013', 'not_a_date', '1'], dtype=object)
- # Without coercing, the presence of any invalid dates prevents
- # any values from being converted
- result, _ = tslib.array_to_datetime(arr, errors='ignore')
- tm.assert_numpy_array_equal(result, arr)
+@pytest.mark.parametrize("errors", ["ignore", "coerce"])
+def test_coerce_of_invalid_datetimes(errors):
+ arr = np.array(["01-01-2013", "not_a_date", "1"], dtype=object)
+ kwargs = dict(values=arr, errors=errors)
+ if errors == "ignore":
+ # Without coercing, the presence of any invalid
+ # dates prevents any values from being converted.
+ result, _ = tslib.array_to_datetime(**kwargs)
+ tm.assert_numpy_array_equal(result, arr)
+ else: # coerce.
# With coercing, the invalid dates becomes iNaT
- result, _ = tslib.array_to_datetime(arr, errors='coerce')
- expected = ['2013-01-01T00:00:00.000000000-0000',
+ result, _ = tslib.array_to_datetime(arr, errors="coerce")
+ expected = ["2013-01-01T00:00:00.000000000-0000",
iNaT,
iNaT]
tm.assert_numpy_array_equal(
result,
- np_array_datetime64_compat(expected, dtype='M8[ns]'))
-
- def test_to_datetime_barely_out_of_bounds(self):
- # GH#19529
- # GH#19382 close enough to bounds that dropping nanos would result
- # in an in-bounds datetime
- arr = np.array(['2262-04-11 23:47:16.854775808'], dtype=object)
- with pytest.raises(tslib.OutOfBoundsDatetime):
- tslib.array_to_datetime(arr)
+ np_array_datetime64_compat(expected, dtype="M8[ns]"))
+
+
+def test_to_datetime_barely_out_of_bounds():
+ # see gh-19382, gh-19529
+ #
+ # Close enough to bounds that dropping nanos
+ # would result in an in-bounds datetime.
+ arr = np.array(["2262-04-11 23:47:16.854775808"], dtype=object)
+ msg = "Out of bounds nanosecond timestamp: 2262-04-11 23:47:16"
+
+ with pytest.raises(tslib.OutOfBoundsDatetime, match=msg):
+ tslib.array_to_datetime(arr)
diff --git a/pandas/tests/tslibs/test_ccalendar.py b/pandas/tests/tslibs/test_ccalendar.py
index b5d562a7b5a9c..255558a80018b 100644
--- a/pandas/tests/tslibs/test_ccalendar.py
+++ b/pandas/tests/tslibs/test_ccalendar.py
@@ -2,17 +2,24 @@
from datetime import datetime
import numpy as np
+import pytest
from pandas._libs.tslibs import ccalendar
-def test_get_day_of_year():
- assert ccalendar.get_day_of_year(2001, 3, 1) == 60
- assert ccalendar.get_day_of_year(2004, 3, 1) == 61
- assert ccalendar.get_day_of_year(1907, 12, 31) == 365
- assert ccalendar.get_day_of_year(2004, 12, 31) == 366
+@pytest.mark.parametrize("date_tuple,expected", [
+ ((2001, 3, 1), 60),
+ ((2004, 3, 1), 61),
+ ((1907, 12, 31), 365), # End-of-year, non-leap year.
+ ((2004, 12, 31), 366), # End-of-year, leap year.
+])
+def test_get_day_of_year_numeric(date_tuple, expected):
+ assert ccalendar.get_day_of_year(*date_tuple) == expected
+
+def test_get_day_of_year_dt():
dt = datetime.fromordinal(1 + np.random.randint(365 * 4000))
result = ccalendar.get_day_of_year(dt.year, dt.month, dt.day)
+
expected = (dt - dt.replace(month=1, day=1)).days + 1
assert result == expected
diff --git a/pandas/tests/tslibs/test_conversion.py b/pandas/tests/tslibs/test_conversion.py
index 6bfc686ba830e..13398a69b4982 100644
--- a/pandas/tests/tslibs/test_conversion.py
+++ b/pandas/tests/tslibs/test_conversion.py
@@ -11,61 +11,58 @@
import pandas.util.testing as tm
-def compare_utc_to_local(tz_didx, utc_didx):
- f = lambda x: conversion.tz_convert_single(x, UTC, tz_didx.tz)
+def _compare_utc_to_local(tz_didx):
+ def f(x):
+ return conversion.tz_convert_single(x, UTC, tz_didx.tz)
+
result = conversion.tz_convert(tz_didx.asi8, UTC, tz_didx.tz)
- result_single = np.vectorize(f)(tz_didx.asi8)
- tm.assert_numpy_array_equal(result, result_single)
+ expected = np.vectorize(f)(tz_didx.asi8)
+
+ tm.assert_numpy_array_equal(result, expected)
-def compare_local_to_utc(tz_didx, utc_didx):
- f = lambda x: conversion.tz_convert_single(x, tz_didx.tz, UTC)
+def _compare_local_to_utc(tz_didx, utc_didx):
+ def f(x):
+ return conversion.tz_convert_single(x, tz_didx.tz, UTC)
+
result = conversion.tz_convert(utc_didx.asi8, tz_didx.tz, UTC)
- result_single = np.vectorize(f)(utc_didx.asi8)
- tm.assert_numpy_array_equal(result, result_single)
-
-
-class TestTZConvert(object):
-
- @pytest.mark.parametrize('tz', ['UTC', 'Asia/Tokyo',
- 'US/Eastern', 'Europe/Moscow'])
- def test_tz_convert_single_matches_tz_convert_hourly(self, tz):
- # US: 2014-03-09 - 2014-11-11
- # MOSCOW: 2014-10-26 / 2014-12-31
- tz_didx = date_range('2014-03-01', '2015-01-10', freq='H', tz=tz)
- utc_didx = date_range('2014-03-01', '2015-01-10', freq='H')
- compare_utc_to_local(tz_didx, utc_didx)
-
- # local tz to UTC can be differ in hourly (or higher) freqs because
- # of DST
- compare_local_to_utc(tz_didx, utc_didx)
-
- @pytest.mark.parametrize('tz', ['UTC', 'Asia/Tokyo',
- 'US/Eastern', 'Europe/Moscow'])
- @pytest.mark.parametrize('freq', ['D', 'A'])
- def test_tz_convert_single_matches_tz_convert(self, tz, freq):
- tz_didx = date_range('2000-01-01', '2020-01-01', freq=freq, tz=tz)
- utc_didx = date_range('2000-01-01', '2020-01-01', freq=freq)
- compare_utc_to_local(tz_didx, utc_didx)
- compare_local_to_utc(tz_didx, utc_didx)
-
- @pytest.mark.parametrize('arr', [
- pytest.param(np.array([], dtype=np.int64), id='empty'),
- pytest.param(np.array([iNaT], dtype=np.int64), id='all_nat')])
- def test_tz_convert_corner(self, arr):
- result = conversion.tz_convert(arr,
- timezones.maybe_get_tz('US/Eastern'),
- timezones.maybe_get_tz('Asia/Tokyo'))
- tm.assert_numpy_array_equal(result, arr)
-
-
-class TestEnsureDatetime64NS(object):
- @pytest.mark.parametrize('copy', [True, False])
- @pytest.mark.parametrize('dtype', ['M8[ns]', 'M8[s]'])
- def test_length_zero_copy(self, dtype, copy):
- arr = np.array([], dtype=dtype)
- result = conversion.ensure_datetime64ns(arr, copy=copy)
- if copy:
- assert result.base is None
- else:
- assert result.base is arr
+ expected = np.vectorize(f)(utc_didx.asi8)
+
+ tm.assert_numpy_array_equal(result, expected)
+
+
+def test_tz_convert_single_matches_tz_convert_hourly(tz_aware_fixture):
+ tz = tz_aware_fixture
+ tz_didx = date_range("2014-03-01", "2015-01-10", freq="H", tz=tz)
+ utc_didx = date_range("2014-03-01", "2015-01-10", freq="H")
+
+ _compare_utc_to_local(tz_didx)
+ _compare_local_to_utc(tz_didx, utc_didx)
+
+
+@pytest.mark.parametrize("freq", ["D", "A"])
+def test_tz_convert_single_matches_tz_convert(tz_aware_fixture, freq):
+ tz = tz_aware_fixture
+ tz_didx = date_range("2000-01-01", "2020-01-01", freq=freq, tz=tz)
+ utc_didx = date_range("2000-01-01", "2020-01-01", freq=freq)
+
+ _compare_utc_to_local(tz_didx)
+ _compare_local_to_utc(tz_didx, utc_didx)
+
+
+@pytest.mark.parametrize("arr", [
+ pytest.param(np.array([], dtype=np.int64), id="empty"),
+ pytest.param(np.array([iNaT], dtype=np.int64), id="all_nat")])
+def test_tz_convert_corner(arr):
+ result = conversion.tz_convert(arr,
+ timezones.maybe_get_tz("US/Eastern"),
+ timezones.maybe_get_tz("Asia/Tokyo"))
+ tm.assert_numpy_array_equal(result, arr)
+
+
+@pytest.mark.parametrize("copy", [True, False])
+@pytest.mark.parametrize("dtype", ["M8[ns]", "M8[s]"])
+def test_length_zero_copy(dtype, copy):
+ arr = np.array([], dtype=dtype)
+ result = conversion.ensure_datetime64ns(arr, copy=copy)
+ assert result.base is (None if copy else arr)
diff --git a/pandas/tests/tslibs/test_libfrequencies.py b/pandas/tests/tslibs/test_libfrequencies.py
index 1bf6d0596e2fe..b9b1c72dbf2e1 100644
--- a/pandas/tests/tslibs/test_libfrequencies.py
+++ b/pandas/tests/tslibs/test_libfrequencies.py
@@ -9,108 +9,92 @@
from pandas.tseries import offsets
-def assert_aliases_deprecated(freq, expected, aliases):
+@pytest.mark.parametrize("obj,expected", [
+ ("W", "DEC"),
+ (offsets.Week(), "DEC"),
+
+ ("D", "DEC"),
+ (offsets.Day(), "DEC"),
+
+ ("Q", "DEC"),
+ (offsets.QuarterEnd(startingMonth=12), "DEC"),
+
+ ("Q-JAN", "JAN"),
+ (offsets.QuarterEnd(startingMonth=1), "JAN"),
+
+ ("A-DEC", "DEC"),
+ ("Y-DEC", "DEC"),
+ (offsets.YearEnd(), "DEC"),
+
+ ("A-MAY", "MAY"),
+ ("Y-MAY", "MAY"),
+ (offsets.YearEnd(month=5), "MAY")
+])
+def test_get_rule_month(obj, expected):
+ result = get_rule_month(obj)
+ assert result == expected
+
+
+@pytest.mark.parametrize("obj,expected", [
+ ("A", 1000),
+ ("A-DEC", 1000),
+ ("A-JAN", 1001),
+
+ ("Y", 1000),
+ ("Y-DEC", 1000),
+ ("Y-JAN", 1001),
+
+ ("Q", 2000),
+ ("Q-DEC", 2000),
+ ("Q-FEB", 2002),
+
+ ("W", 4000),
+ ("W-SUN", 4000),
+ ("W-FRI", 4005),
+
+ ("Min", 8000),
+ ("ms", 10000),
+ ("US", 11000),
+ ("NS", 12000)
+])
+def test_period_str_to_code(obj, expected):
+ assert _period_str_to_code(obj) == expected
+
+
+@pytest.mark.parametrize("p1,p2,expected", [
+ # Input validation.
+ (offsets.MonthEnd(), None, False),
+ (offsets.YearEnd(), None, False),
+ (None, offsets.YearEnd(), False),
+ (None, offsets.MonthEnd(), False),
+ (None, None, False),
+
+ (offsets.YearEnd(), offsets.MonthEnd(), True),
+ (offsets.Hour(), offsets.Minute(), True),
+ (offsets.Second(), offsets.Milli(), True),
+ (offsets.Milli(), offsets.Micro(), True),
+ (offsets.Micro(), offsets.Nano(), True)
+])
+def test_super_sub_symmetry(p1, p2, expected):
+ assert is_superperiod(p1, p2) is expected
+ assert is_subperiod(p2, p1) is expected
+
+
+@pytest.mark.parametrize("freq,expected,aliases", [
+ ("D", 6000, ["DAY", "DLY", "DAILY"]),
+ ("M", 3000, ["MTH", "MONTH", "MONTHLY"]),
+ ("N", 12000, ["NANOSECOND", "NANOSECONDLY"]),
+ ("H", 7000, ["HR", "HOUR", "HRLY", "HOURLY"]),
+ ("T", 8000, ["minute", "MINUTE", "MINUTELY"]),
+ ("L", 10000, ["MILLISECOND", "MILLISECONDLY"]),
+ ("U", 11000, ["MICROSECOND", "MICROSECONDLY"]),
+ ("S", 9000, ["sec", "SEC", "SECOND", "SECONDLY"]),
+ ("B", 5000, ["BUS", "BUSINESS", "BUSINESSLY", "WEEKDAY"]),
+])
+def test_assert_aliases_deprecated(freq, expected, aliases):
assert isinstance(aliases, list)
- assert (_period_str_to_code(freq) == expected)
+ assert _period_str_to_code(freq) == expected
for alias in aliases:
with pytest.raises(ValueError, match=INVALID_FREQ_ERR_MSG):
_period_str_to_code(alias)
-
-
-def test_get_rule_month():
- result = get_rule_month('W')
- assert (result == 'DEC')
- result = get_rule_month(offsets.Week())
- assert (result == 'DEC')
-
- result = get_rule_month('D')
- assert (result == 'DEC')
- result = get_rule_month(offsets.Day())
- assert (result == 'DEC')
-
- result = get_rule_month('Q')
- assert (result == 'DEC')
- result = get_rule_month(offsets.QuarterEnd(startingMonth=12))
-
- result = get_rule_month('Q-JAN')
- assert (result == 'JAN')
- result = get_rule_month(offsets.QuarterEnd(startingMonth=1))
- assert (result == 'JAN')
-
- result = get_rule_month('A-DEC')
- assert (result == 'DEC')
- result = get_rule_month('Y-DEC')
- assert (result == 'DEC')
- result = get_rule_month(offsets.YearEnd())
- assert (result == 'DEC')
-
- result = get_rule_month('A-MAY')
- assert (result == 'MAY')
- result = get_rule_month('Y-MAY')
- assert (result == 'MAY')
- result = get_rule_month(offsets.YearEnd(month=5))
- assert (result == 'MAY')
-
-
-def test_period_str_to_code():
- assert (_period_str_to_code('A') == 1000)
- assert (_period_str_to_code('A-DEC') == 1000)
- assert (_period_str_to_code('A-JAN') == 1001)
- assert (_period_str_to_code('Y') == 1000)
- assert (_period_str_to_code('Y-DEC') == 1000)
- assert (_period_str_to_code('Y-JAN') == 1001)
-
- assert (_period_str_to_code('Q') == 2000)
- assert (_period_str_to_code('Q-DEC') == 2000)
- assert (_period_str_to_code('Q-FEB') == 2002)
-
- assert_aliases_deprecated("M", 3000, ["MTH", "MONTH", "MONTHLY"])
-
- assert (_period_str_to_code('W') == 4000)
- assert (_period_str_to_code('W-SUN') == 4000)
- assert (_period_str_to_code('W-FRI') == 4005)
-
- assert_aliases_deprecated("B", 5000, ["BUS", "BUSINESS",
- "BUSINESSLY", "WEEKDAY"])
- assert_aliases_deprecated("D", 6000, ["DAY", "DLY", "DAILY"])
- assert_aliases_deprecated("H", 7000, ["HR", "HOUR", "HRLY", "HOURLY"])
-
- assert_aliases_deprecated("T", 8000, ["minute", "MINUTE", "MINUTELY"])
- assert (_period_str_to_code('Min') == 8000)
-
- assert_aliases_deprecated("S", 9000, ["sec", "SEC", "SECOND", "SECONDLY"])
- assert_aliases_deprecated("L", 10000, ["MILLISECOND", "MILLISECONDLY"])
- assert (_period_str_to_code('ms') == 10000)
-
- assert_aliases_deprecated("U", 11000, ["MICROSECOND", "MICROSECONDLY"])
- assert (_period_str_to_code('US') == 11000)
-
- assert_aliases_deprecated("N", 12000, ["NANOSECOND", "NANOSECONDLY"])
- assert (_period_str_to_code('NS') == 12000)
-
-
-def test_is_superperiod_subperiod():
-
- # input validation
- assert not (is_superperiod(offsets.YearEnd(), None))
- assert not (is_subperiod(offsets.MonthEnd(), None))
- assert not (is_superperiod(None, offsets.YearEnd()))
- assert not (is_subperiod(None, offsets.MonthEnd()))
- assert not (is_superperiod(None, None))
- assert not (is_subperiod(None, None))
-
- assert (is_superperiod(offsets.YearEnd(), offsets.MonthEnd()))
- assert (is_subperiod(offsets.MonthEnd(), offsets.YearEnd()))
-
- assert (is_superperiod(offsets.Hour(), offsets.Minute()))
- assert (is_subperiod(offsets.Minute(), offsets.Hour()))
-
- assert (is_superperiod(offsets.Second(), offsets.Milli()))
- assert (is_subperiod(offsets.Milli(), offsets.Second()))
-
- assert (is_superperiod(offsets.Milli(), offsets.Micro()))
- assert (is_subperiod(offsets.Micro(), offsets.Milli()))
-
- assert (is_superperiod(offsets.Micro(), offsets.Nano()))
- assert (is_subperiod(offsets.Nano(), offsets.Micro()))
diff --git a/pandas/tests/tslibs/test_liboffsets.py b/pandas/tests/tslibs/test_liboffsets.py
index 388df6453634e..cb699278595e7 100644
--- a/pandas/tests/tslibs/test_liboffsets.py
+++ b/pandas/tests/tslibs/test_liboffsets.py
@@ -12,161 +12,163 @@
from pandas import Timestamp
-def test_get_lastbday():
+@pytest.fixture(params=["start", "end", "business_start", "business_end"])
+def day_opt(request):
+ return request.param
+
+
+@pytest.mark.parametrize("dt,exp_week_day,exp_last_day", [
+ (datetime(2017, 11, 30), 3, 30), # Business day.
+ (datetime(1993, 10, 31), 6, 29) # Non-business day.
+])
+def test_get_last_bday(dt, exp_week_day, exp_last_day):
+ assert dt.weekday() == exp_week_day
+ assert liboffsets.get_lastbday(dt.year, dt.month) == exp_last_day
+
+
+@pytest.mark.parametrize("dt,exp_week_day,exp_first_day", [
+ (datetime(2017, 4, 1), 5, 3), # Non-weekday.
+ (datetime(1993, 10, 1), 4, 1) # Business day.
+])
+def test_get_first_bday(dt, exp_week_day, exp_first_day):
+ assert dt.weekday() == exp_week_day
+ assert liboffsets.get_firstbday(dt.year, dt.month) == exp_first_day
+
+
+@pytest.mark.parametrize("months,day_opt,expected", [
+ (0, 15, datetime(2017, 11, 15)),
+ (0, None, datetime(2017, 11, 30)),
+ (1, "start", datetime(2017, 12, 1)),
+ (-145, "end", datetime(2005, 10, 31)),
+ (0, "business_end", datetime(2017, 11, 30)),
+ (0, "business_start", datetime(2017, 11, 1))
+])
+def test_shift_month_dt(months, day_opt, expected):
dt = datetime(2017, 11, 30)
- assert dt.weekday() == 3 # i.e. this is a business day
- assert liboffsets.get_lastbday(dt.year, dt.month) == 30
+ assert liboffsets.shift_month(dt, months, day_opt=day_opt) == expected
- dt = datetime(1993, 10, 31)
- assert dt.weekday() == 6 # i.e. this is not a business day
- assert liboffsets.get_lastbday(dt.year, dt.month) == 29
+@pytest.mark.parametrize("months,day_opt,expected", [
+ (1, "start", Timestamp("1929-06-01")),
+ (-3, "end", Timestamp("1929-02-28")),
+ (25, None, Timestamp("1931-06-5")),
+ (-1, 31, Timestamp("1929-04-30"))
+])
+def test_shift_month_ts(months, day_opt, expected):
+ ts = Timestamp("1929-05-05")
+ assert liboffsets.shift_month(ts, months, day_opt=day_opt) == expected
-def test_get_firstbday():
- dt = datetime(2017, 4, 1)
- assert dt.weekday() == 5 # i.e. not a weekday
- assert liboffsets.get_firstbday(dt.year, dt.month) == 3
-
- dt = datetime(1993, 10, 1)
- assert dt.weekday() == 4 # i.e. a business day
- assert liboffsets.get_firstbday(dt.year, dt.month) == 1
-
-
-def test_shift_month():
- dt = datetime(2017, 11, 30)
- assert liboffsets.shift_month(dt, 0, 'business_end') == dt
- assert liboffsets.shift_month(dt, 0,
- 'business_start') == datetime(2017, 11, 1)
-
- ts = Timestamp('1929-05-05')
- assert liboffsets.shift_month(ts, 1, 'start') == Timestamp('1929-06-01')
- assert liboffsets.shift_month(ts, -3, 'end') == Timestamp('1929-02-28')
-
- assert liboffsets.shift_month(ts, 25, None) == Timestamp('1931-06-5')
-
- # Try to shift to April 31, then shift back to Apr 30 to get a real date
- assert liboffsets.shift_month(ts, -1, 31) == Timestamp('1929-04-30')
+def test_shift_month_error():
dt = datetime(2017, 11, 15)
+ day_opt = "this should raise"
- assert liboffsets.shift_month(dt, 0, day_opt=None) == dt
- assert liboffsets.shift_month(dt, 0, day_opt=15) == dt
+ with pytest.raises(ValueError, match=day_opt):
+ liboffsets.shift_month(dt, 3, day_opt=day_opt)
- assert liboffsets.shift_month(dt, 1,
- day_opt='start') == datetime(2017, 12, 1)
- assert liboffsets.shift_month(dt, -145,
- day_opt='end') == datetime(2005, 10, 31)
+@pytest.mark.parametrize("other,expected", [
+ # Before March 1.
+ (datetime(2017, 2, 10), {2: 1, -7: -7, 0: 0}),
- with pytest.raises(ValueError):
- liboffsets.shift_month(dt, 3, day_opt='this should raise')
+ # After March 1.
+ (Timestamp("2014-03-15", tz="US/Eastern"), {2: 2, -7: -6, 0: 1})
+])
+@pytest.mark.parametrize("n", [2, -7, 0])
+def test_roll_yearday(other, expected, n):
+ month = 3
+ day_opt = "start" # `other` will be compared to March 1.
+ assert liboffsets.roll_yearday(other, n, month, day_opt) == expected[n]
-def test_get_day_of_month():
- # get_day_of_month is not directly exposed; we test it via roll_yearday
- dt = datetime(2017, 11, 15)
- with pytest.raises(ValueError):
- # To hit the raising case we need month == dt.month and n > 0
- liboffsets.roll_yearday(dt, n=3, month=11, day_opt='foo')
+@pytest.mark.parametrize("other,expected", [
+ # Before June 30.
+ (datetime(1999, 6, 29), {5: 4, -7: -7, 0: 0}),
+ # After June 30.
+ (Timestamp(2072, 8, 24, 6, 17, 18), {5: 5, -7: -6, 0: 1})
+])
+@pytest.mark.parametrize("n", [5, -7, 0])
+def test_roll_yearday2(other, expected, n):
+ month = 6
+ day_opt = "end" # `other` will be compared to June 30.
-def test_roll_yearday():
- # Copied from doctest examples
- month = 3
- day_opt = 'start' # `other` will be compared to March 1
- other = datetime(2017, 2, 10) # before March 1
- assert liboffsets.roll_yearday(other, 2, month, day_opt) == 1
- assert liboffsets.roll_yearday(other, -7, month, day_opt) == -7
- assert liboffsets.roll_yearday(other, 0, month, day_opt) == 0
+ assert liboffsets.roll_yearday(other, n, month, day_opt) == expected[n]
- other = Timestamp('2014-03-15', tz='US/Eastern') # after March 1
- assert liboffsets.roll_yearday(other, 2, month, day_opt) == 2
- assert liboffsets.roll_yearday(other, -7, month, day_opt) == -6
- assert liboffsets.roll_yearday(other, 0, month, day_opt) == 1
- month = 6
- day_opt = 'end' # `other` will be compared to June 30
- other = datetime(1999, 6, 29) # before June 30
- assert liboffsets.roll_yearday(other, 5, month, day_opt) == 4
- assert liboffsets.roll_yearday(other, -7, month, day_opt) == -7
- assert liboffsets.roll_yearday(other, 0, month, day_opt) == 0
-
- other = Timestamp(2072, 8, 24, 6, 17, 18) # after June 30
- assert liboffsets.roll_yearday(other, 5, month, day_opt) == 5
- assert liboffsets.roll_yearday(other, -7, month, day_opt) == -6
- assert liboffsets.roll_yearday(other, 0, month, day_opt) == 1
-
-
-def test_roll_qtrday():
- other = Timestamp(2072, 10, 1, 6, 17, 18) # Saturday
- for day_opt in ['start', 'end', 'business_start', 'business_end']:
- # as long as (other.month % 3) != (month % 3), day_opt is irrelevant
- # the `day_opt` doesn't matter.
- month = 5 # (other.month % 3) < (month % 3)
- assert roll_qtrday(other, 4, month, day_opt, modby=3) == 3
- assert roll_qtrday(other, -3, month, day_opt, modby=3) == -3
-
- month = 3 # (other.month % 3) > (month % 3)
- assert roll_qtrday(other, 4, month, day_opt, modby=3) == 4
- assert roll_qtrday(other, -3, month, day_opt, modby=3) == -2
-
- month = 2
- other = datetime(1999, 5, 31) # Monday
- # has (other.month % 3) == (month % 3)
-
- n = 2
- assert roll_qtrday(other, n, month, 'start', modby=3) == n
- assert roll_qtrday(other, n, month, 'end', modby=3) == n
- assert roll_qtrday(other, n, month, 'business_start', modby=3) == n
- assert roll_qtrday(other, n, month, 'business_end', modby=3) == n
-
- n = -1
- assert roll_qtrday(other, n, month, 'start', modby=3) == n + 1
- assert roll_qtrday(other, n, month, 'end', modby=3) == n
- assert roll_qtrday(other, n, month, 'business_start', modby=3) == n + 1
- assert roll_qtrday(other, n, month, 'business_end', modby=3) == n
-
- other = Timestamp(2072, 10, 1, 6, 17, 18) # Saturday
- month = 4 # (other.month % 3) == (month % 3)
- n = 2
- assert roll_qtrday(other, n, month, 'start', modby=3) == n
- assert roll_qtrday(other, n, month, 'end', modby=3) == n - 1
- assert roll_qtrday(other, n, month, 'business_start', modby=3) == n - 1
- assert roll_qtrday(other, n, month, 'business_end', modby=3) == n - 1
-
- n = -1
- assert roll_qtrday(other, n, month, 'start', modby=3) == n
- assert roll_qtrday(other, n, month, 'end', modby=3) == n
- assert roll_qtrday(other, n, month, 'business_start', modby=3) == n
- assert roll_qtrday(other, n, month, 'business_end', modby=3) == n
-
- other = Timestamp(2072, 10, 3, 6, 17, 18) # First businessday
- month = 4 # (other.month % 3) == (month % 3)
- n = 2
- assert roll_qtrday(other, n, month, 'start', modby=3) == n
- assert roll_qtrday(other, n, month, 'end', modby=3) == n - 1
- assert roll_qtrday(other, n, month, 'business_start', modby=3) == n
- assert roll_qtrday(other, n, month, 'business_end', modby=3) == n - 1
-
- n = -1
- assert roll_qtrday(other, n, month, 'start', modby=3) == n + 1
- assert roll_qtrday(other, n, month, 'end', modby=3) == n
- assert roll_qtrday(other, n, month, 'business_start', modby=3) == n
- assert roll_qtrday(other, n, month, 'business_end', modby=3) == n
-
-
-def test_roll_convention():
- other = 29
- before = 1
- after = 31
-
- n = 42
- assert liboffsets.roll_convention(other, n, other) == n
- assert liboffsets.roll_convention(other, n, before) == n
- assert liboffsets.roll_convention(other, n, after) == n - 1
-
- n = -4
- assert liboffsets.roll_convention(other, n, other) == n
- assert liboffsets.roll_convention(other, n, before) == n + 1
- assert liboffsets.roll_convention(other, n, after) == n
+def test_get_day_of_month_error():
+ # get_day_of_month is not directly exposed.
+ # We test it via roll_yearday.
+ dt = datetime(2017, 11, 15)
+ day_opt = "foo"
+
+ with pytest.raises(ValueError, match=day_opt):
+ # To hit the raising case we need month == dt.month and n > 0.
+ liboffsets.roll_yearday(dt, n=3, month=11, day_opt=day_opt)
+
+
+@pytest.mark.parametrize("month", [
+ 3, # (other.month % 3) < (month % 3)
+ 5 # (other.month % 3) > (month % 3)
+])
+@pytest.mark.parametrize("n", [4, -3])
+def test_roll_qtr_day_not_mod_unequal(day_opt, month, n):
+ expected = {
+ 3: {
+ -3: -2,
+ 4: 4
+ },
+ 5: {
+ -3: -3,
+ 4: 3
+ }
+ }
+
+ other = Timestamp(2072, 10, 1, 6, 17, 18) # Saturday.
+ assert roll_qtrday(other, n, month, day_opt, modby=3) == expected[month][n]
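The "not mod-equal" case above can be illustrated with a small helper: when the month remainders differ, `day_opt` is irrelevant and only the sign of the remainder difference matters. This is an illustrative sketch of that rule, not pandas' `roll_qtrday` itself:

```python
def roll_qtrday_unequal_sketch(other_month, n, month, modby=3):
    """Rolling rule when (other.month % modby) != (month % modby)."""
    months_since = other_month % modby - month % modby
    if n > 0 and months_since < 0:
        n -= 1  # rolling forward from before the anchor month
    elif n <= 0 and months_since > 0:
        n += 1  # rolling backward from after the anchor month
    return n
```

For October (`other_month=10`) this matches the expected dictionary: `month=5` gives `4 -> 3` and `-3 -> -3`, while `month=3` gives `4 -> 4` and `-3 -> -2`.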
+
+
+@pytest.mark.parametrize("other,month,exp_dict", [
+ # Monday.
+ (datetime(1999, 5, 31), 2, {
+ -1: {
+ "start": 0,
+ "business_start": 0
+ }
+ }),
+
+ # Saturday.
+ (Timestamp(2072, 10, 1, 6, 17, 18), 4, {
+ 2: {
+ "end": 1,
+ "business_end": 1,
+ "business_start": 1
+ }
+ }),
+
+ # First business day.
+ (Timestamp(2072, 10, 3, 6, 17, 18), 4, {
+ 2: {
+ "end": 1,
+ "business_end": 1
+ },
+ -1: {
+ "start": 0
+ }
+ })
+])
+@pytest.mark.parametrize("n", [2, -1])
+def test_roll_qtr_day_mod_equal(other, month, exp_dict, n, day_opt):
+ # All cases have (other.month % 3) == (month % 3).
+ expected = exp_dict.get(n, {}).get(day_opt, n)
+ assert roll_qtrday(other, n, month, day_opt, modby=3) == expected
+
+
+@pytest.mark.parametrize("n,expected", [
+ (42, {29: 42, 1: 42, 31: 41}),
+ (-4, {29: -4, 1: -3, 31: -4})
+])
+@pytest.mark.parametrize("compare", [29, 1, 31])
+def test_roll_convention(n, expected, compare):
+ assert liboffsets.roll_convention(29, n, compare) == expected[compare]
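The `roll_convention` behavior tested above can be sketched in a few lines: reduce the step count by one when the anchor day has already been passed in the current period. A hedged reconstruction from the test data, not the real implementation:

```python
def roll_convention_sketch(other, n, compare):
    """Shrink |n| by one when `other` has not yet reached (for n > 0)
    or has already passed (for n <= 0) the `compare` day."""
    if n > 0 and other < compare:
        n -= 1
    elif n <= 0 and other > compare:
        n += 1
    return n
```

With `other=29` this reproduces the expected mappings, e.g. `n=42, compare=31` gives `41` and `n=-4, compare=1` gives `-3`.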
diff --git a/pandas/tests/tslibs/test_normalize_date.py b/pandas/tests/tslibs/test_normalize_date.py
new file mode 100644
index 0000000000000..6124121b97186
--- /dev/null
+++ b/pandas/tests/tslibs/test_normalize_date.py
@@ -0,0 +1,18 @@
+# -*- coding: utf-8 -*-
+"""Tests for functions from pandas._libs.tslibs"""
+
+from datetime import date, datetime
+
+import pytest
+
+from pandas._libs import tslibs
+
+
+@pytest.mark.parametrize("value,expected", [
+ (date(2012, 9, 7), datetime(2012, 9, 7)),
+ (datetime(2012, 9, 7, 12), datetime(2012, 9, 7)),
+ (datetime(2007, 10, 1, 1, 12, 5, 10), datetime(2007, 10, 1))
+])
+def test_normalize_date(value, expected):
+ result = tslibs.normalize_date(value)
+ assert result == expected
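The normalization being tested is simply "drop the time-of-day". A plain-Python analogue of `tslibs.normalize_date` (illustrative only) looks like:

```python
from datetime import date, datetime

def normalize_date_sketch(value):
    """Return midnight of the same calendar date."""
    if isinstance(value, datetime):
        return value.replace(hour=0, minute=0, second=0, microsecond=0)
    # A bare date has no time component; promote it to a datetime.
    return datetime(value.year, value.month, value.day)
```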
diff --git a/pandas/tests/tslibs/test_parse_iso8601.py b/pandas/tests/tslibs/test_parse_iso8601.py
new file mode 100644
index 0000000000000..d1b3dee948afe
--- /dev/null
+++ b/pandas/tests/tslibs/test_parse_iso8601.py
@@ -0,0 +1,62 @@
+# -*- coding: utf-8 -*-
+from datetime import datetime
+
+import pytest
+
+from pandas._libs import tslib
+
+
+@pytest.mark.parametrize("date_str, exp", [
+ ("2011-01-02", datetime(2011, 1, 2)),
+ ("2011-1-2", datetime(2011, 1, 2)),
+ ("2011-01", datetime(2011, 1, 1)),
+ ("2011-1", datetime(2011, 1, 1)),
+ ("2011 01 02", datetime(2011, 1, 2)),
+ ("2011.01.02", datetime(2011, 1, 2)),
+ ("2011/01/02", datetime(2011, 1, 2)),
+ ("2011\\01\\02", datetime(2011, 1, 2)),
+ ("2013-01-01 05:30:00", datetime(2013, 1, 1, 5, 30)),
+ ("2013-1-1 5:30:00", datetime(2013, 1, 1, 5, 30))])
+def test_parsers_iso8601(date_str, exp):
+ # see gh-12060
+ #
+ # Test only the ISO parser - flexibility to
+ # different separators and leading zero's.
+ actual = tslib._test_parse_iso8601(date_str)
+ assert actual == exp
+
+
+@pytest.mark.parametrize("date_str", [
+ "2011-01/02",
+ "2011=11=11",
+ "201401",
+ "201111",
+ "200101",
+
+ # Mixed separated and unseparated.
+ "2005-0101",
+ "200501-01",
+ "20010101 12:3456",
+ "20010101 1234:56",
+
+ # HHMMSS must have two digits in
+ # each component if unseparated.
+ "20010101 1",
+ "20010101 123",
+ "20010101 12345",
+ "20010101 12345Z",
+])
+def test_parsers_iso8601_invalid(date_str):
+ msg = "Error parsing datetime string \"{s}\"".format(s=date_str)
+
+ with pytest.raises(ValueError, match=msg):
+ tslib._test_parse_iso8601(date_str)
+
+
+def test_parsers_iso8601_invalid_offset_invalid():
+ date_str = "2001-01-01 12-34-56"
+ msg = ("Timezone hours offset out of range "
+ "in datetime string \"{s}\"".format(s=date_str))
+
+ with pytest.raises(ValueError, match=msg):
+ tslib._test_parse_iso8601(date_str)
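The "two digits in each component if unseparated" rule noted in the comments above can be illustrated with a small regex. This helper is purely illustrative and not part of pandas:

```python
import re

# Unseparated times must supply HH, HHMM, or HHMMSS --
# whole two-digit components only.
_COMPACT_TIME = re.compile(r"\d{2}(\d{2}(\d{2})?)?")

def is_valid_compact_time(s):
    return _COMPACT_TIME.fullmatch(s) is not None
```

Under this rule `"1"`, `"123"`, and `"12345"` are rejected, matching the invalid cases above, while `"12"`, `"1234"`, and `"123456"` pass.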
diff --git a/pandas/tests/tslibs/test_parsing.py b/pandas/tests/tslibs/test_parsing.py
index 45a841cd1136d..597ec6df7389f 100644
--- a/pandas/tests/tslibs/test_parsing.py
+++ b/pandas/tests/tslibs/test_parsing.py
@@ -10,168 +10,177 @@
from pandas._libs.tslibs import parsing
from pandas._libs.tslibs.parsing import parse_time_string
-import pandas.compat as compat
import pandas.util._test_decorators as td
from pandas.util import testing as tm
-class TestParseQuarters(object):
-
- def test_parse_time_string(self):
- (date, parsed, reso) = parse_time_string('4Q1984')
- (date_lower, parsed_lower, reso_lower) = parse_time_string('4q1984')
- assert date == date_lower
- assert parsed == parsed_lower
- assert reso == reso_lower
-
- def test_parse_time_quarter_w_dash(self):
- # https://github.com/pandas-dev/pandas/issue/9688
- pairs = [('1988-Q2', '1988Q2'), ('2Q-1988', '2Q1988')]
-
- for dashed, normal in pairs:
- (date_dash, parsed_dash, reso_dash) = parse_time_string(dashed)
- (date, parsed, reso) = parse_time_string(normal)
-
- assert date_dash == date
- assert parsed_dash == parsed
- assert reso_dash == reso
-
- pytest.raises(parsing.DateParseError, parse_time_string, "-2Q1992")
- pytest.raises(parsing.DateParseError, parse_time_string, "2-Q1992")
- pytest.raises(parsing.DateParseError, parse_time_string, "4-4Q1992")
-
-
-class TestDatetimeParsingWrappers(object):
- def test_does_not_convert_mixed_integer(self):
- bad_date_strings = ('-50000', '999', '123.1234', 'm', 'T')
-
- for bad_date_string in bad_date_strings:
- assert not parsing._does_string_look_like_datetime(bad_date_string)
-
- good_date_strings = ('2012-01-01',
- '01/01/2012',
- 'Mon Sep 16, 2013',
- '01012012',
- '0101',
- '1-1')
-
- for good_date_string in good_date_strings:
- assert parsing._does_string_look_like_datetime(good_date_string)
-
- def test_parsers_quarterly_with_freq(self):
- msg = ('Incorrect quarterly string is given, quarter '
- 'must be between 1 and 4: 2013Q5')
- with pytest.raises(parsing.DateParseError, match=msg):
- parsing.parse_time_string('2013Q5')
-
- # GH 5418
- msg = ('Unable to retrieve month information from given freq: '
- 'INVLD-L-DEC-SAT')
- with pytest.raises(parsing.DateParseError, match=msg):
- parsing.parse_time_string('2013Q1', freq='INVLD-L-DEC-SAT')
-
- cases = {('2013Q2', None): datetime(2013, 4, 1),
- ('2013Q2', 'A-APR'): datetime(2012, 8, 1),
- ('2013-Q2', 'A-DEC'): datetime(2013, 4, 1)}
-
- for (date_str, freq), exp in compat.iteritems(cases):
- result, _, _ = parsing.parse_time_string(date_str, freq=freq)
- assert result == exp
-
- def test_parsers_quarter_invalid(self):
-
- cases = ['2Q 2005', '2Q-200A', '2Q-200', '22Q2005', '6Q-20', '2Q200.']
- for case in cases:
- pytest.raises(ValueError, parsing.parse_time_string, case)
-
- def test_parsers_monthfreq(self):
- cases = {'201101': datetime(2011, 1, 1, 0, 0),
- '200005': datetime(2000, 5, 1, 0, 0)}
-
- for date_str, expected in compat.iteritems(cases):
- result1, _, _ = parsing.parse_time_string(date_str, freq='M')
- assert result1 == expected
-
-
-class TestGuessDatetimeFormat(object):
-
- @td.skip_if_not_us_locale
- @pytest.mark.parametrize(
- "string, format",
- [
- ('20111230', '%Y%m%d'),
- ('2011-12-30', '%Y-%m-%d'),
- ('30-12-2011', '%d-%m-%Y'),
- ('2011-12-30 00:00:00', '%Y-%m-%d %H:%M:%S'),
- ('2011-12-30T00:00:00', '%Y-%m-%dT%H:%M:%S'),
- ('2011-12-30 00:00:00.000000',
- '%Y-%m-%d %H:%M:%S.%f')])
- def test_guess_datetime_format_with_parseable_formats(
- self, string, format):
- result = parsing._guess_datetime_format(string)
- assert result == format
-
- @pytest.mark.parametrize(
- "dayfirst, expected",
- [
- (True, "%d/%m/%Y"),
- (False, "%m/%d/%Y")])
- def test_guess_datetime_format_with_dayfirst(self, dayfirst, expected):
- ambiguous_string = '01/01/2011'
- result = parsing._guess_datetime_format(
- ambiguous_string, dayfirst=dayfirst)
- assert result == expected
-
- @td.skip_if_has_locale
- @pytest.mark.parametrize(
- "string, format",
- [
- ('30/Dec/2011', '%d/%b/%Y'),
- ('30/December/2011', '%d/%B/%Y'),
- ('30/Dec/2011 00:00:00', '%d/%b/%Y %H:%M:%S')])
- def test_guess_datetime_format_with_locale_specific_formats(
- self, string, format):
- result = parsing._guess_datetime_format(string)
- assert result == format
-
- def test_guess_datetime_format_invalid_inputs(self):
- # A datetime string must include a year, month and a day for it
- # to be guessable, in addition to being a string that looks like
- # a datetime
- invalid_dts = [
- '2013',
- '01/2013',
- '12:00:00',
- '1/1/1/1',
- 'this_is_not_a_datetime',
- '51a',
- 9,
- datetime(2011, 1, 1),
- ]
-
- for invalid_dt in invalid_dts:
- assert parsing._guess_datetime_format(invalid_dt) is None
-
- @pytest.mark.parametrize(
- "string, format",
- [
- ('2011-1-1', '%Y-%m-%d'),
- ('30-1-2011', '%d-%m-%Y'),
- ('1/1/2011', '%m/%d/%Y'),
- ('2011-1-1 00:00:00', '%Y-%m-%d %H:%M:%S'),
- ('2011-1-1 0:0:0', '%Y-%m-%d %H:%M:%S'),
- ('2011-1-3T00:00:0', '%Y-%m-%dT%H:%M:%S')])
- def test_guess_datetime_format_nopadding(self, string, format):
- # GH 11142
- result = parsing._guess_datetime_format(string)
- assert result == format
-
-
-class TestArrayToDatetime(object):
- def test_try_parse_dates(self):
- arr = np.array(['5/1/2000', '6/1/2000', '7/1/2000'], dtype=object)
-
- result = parsing.try_parse_dates(arr, dayfirst=True)
- expected = np.array([parse(d, dayfirst=True) for d in arr])
- tm.assert_numpy_array_equal(result, expected)
+def test_parse_time_string():
+ (date, parsed, reso) = parse_time_string("4Q1984")
+ (date_lower, parsed_lower, reso_lower) = parse_time_string("4q1984")
+
+ assert date == date_lower
+ assert reso == reso_lower
+ assert parsed == parsed_lower
+
+
+@pytest.mark.parametrize("dashed,normal", [
+ ("1988-Q2", "1988Q2"),
+ ("2Q-1988", "2Q1988")
+])
+def test_parse_time_quarter_with_dash(dashed, normal):
+ # see gh-9688
+ (date_dash, parsed_dash, reso_dash) = parse_time_string(dashed)
+ (date, parsed, reso) = parse_time_string(normal)
+
+ assert date_dash == date
+ assert parsed_dash == parsed
+ assert reso_dash == reso
+
+
+@pytest.mark.parametrize("dashed", [
+ "-2Q1992", "2-Q1992", "4-4Q1992"
+])
+def test_parse_time_quarter_with_dash_error(dashed):
+ msg = ("Unknown datetime string format, "
+ "unable to parse: {dashed}".format(dashed=dashed))
+
+ with pytest.raises(parsing.DateParseError, match=msg):
+ parse_time_string(dashed)
+
+
+@pytest.mark.parametrize("date_string,expected", [
+ ("123.1234", False),
+ ("-50000", False),
+ ("999", False),
+ ("m", False),
+ ("T", False),
+
+ ("Mon Sep 16, 2013", True),
+ ("2012-01-01", True),
+ ("01/01/2012", True),
+ ("01012012", True),
+ ("0101", True),
+ ("1-1", True)
+])
+def test_does_not_convert_mixed_integer(date_string, expected):
+ assert parsing._does_string_look_like_datetime(date_string) is expected
+
+
+@pytest.mark.parametrize("date_str,kwargs,msg", [
+ ("2013Q5", dict(),
+ ("Incorrect quarterly string is given, "
+ "quarter must be between 1 and 4: 2013Q5")),
+
+ # see gh-5418
+ ("2013Q1", dict(freq="INVLD-L-DEC-SAT"),
+ ("Unable to retrieve month information "
+ "from given freq: INVLD-L-DEC-SAT"))
+])
+def test_parsers_quarterly_with_freq_error(date_str, kwargs, msg):
+ with pytest.raises(parsing.DateParseError, match=msg):
+ parsing.parse_time_string(date_str, **kwargs)
+
+
+@pytest.mark.parametrize("date_str,freq,expected", [
+ ("2013Q2", None, datetime(2013, 4, 1)),
+ ("2013Q2", "A-APR", datetime(2012, 8, 1)),
+ ("2013-Q2", "A-DEC", datetime(2013, 4, 1))
+])
+def test_parsers_quarterly_with_freq(date_str, freq, expected):
+ result, _, _ = parsing.parse_time_string(date_str, freq=freq)
+ assert result == expected
+
+
+@pytest.mark.parametrize("date_str", [
+ "2Q 2005", "2Q-200A", "2Q-200",
+ "22Q2005", "2Q200.", "6Q-20"
+])
+def test_parsers_quarter_invalid(date_str):
+ if date_str == "6Q-20":
+ msg = ("Incorrect quarterly string is given, quarter "
+ "must be between 1 and 4: {date_str}".format(date_str=date_str))
+ else:
+ msg = ("Unknown datetime string format, unable "
+ "to parse: {date_str}".format(date_str=date_str))
+
+ with pytest.raises(ValueError, match=msg):
+ parsing.parse_time_string(date_str)
+
+
+@pytest.mark.parametrize("date_str,expected", [
+ ("201101", datetime(2011, 1, 1, 0, 0)),
+ ("200005", datetime(2000, 5, 1, 0, 0))
+])
+def test_parsers_month_freq(date_str, expected):
+ result, _, _ = parsing.parse_time_string(date_str, freq="M")
+ assert result == expected
+
+
+@td.skip_if_not_us_locale
+@pytest.mark.parametrize("string,fmt", [
+ ("20111230", "%Y%m%d"),
+ ("2011-12-30", "%Y-%m-%d"),
+ ("30-12-2011", "%d-%m-%Y"),
+ ("2011-12-30 00:00:00", "%Y-%m-%d %H:%M:%S"),
+ ("2011-12-30T00:00:00", "%Y-%m-%dT%H:%M:%S"),
+ ("2011-12-30 00:00:00.000000", "%Y-%m-%d %H:%M:%S.%f")
+])
+def test_guess_datetime_format_with_parseable_formats(string, fmt):
+ result = parsing._guess_datetime_format(string)
+ assert result == fmt
+
+
+@pytest.mark.parametrize("dayfirst,expected", [
+ (True, "%d/%m/%Y"),
+ (False, "%m/%d/%Y")
+])
+def test_guess_datetime_format_with_dayfirst(dayfirst, expected):
+ ambiguous_string = "01/01/2011"
+ result = parsing._guess_datetime_format(ambiguous_string,
+ dayfirst=dayfirst)
+ assert result == expected
+
+
+@td.skip_if_has_locale
+@pytest.mark.parametrize("string,fmt", [
+ ("30/Dec/2011", "%d/%b/%Y"),
+ ("30/December/2011", "%d/%B/%Y"),
+ ("30/Dec/2011 00:00:00", "%d/%b/%Y %H:%M:%S")
+])
+def test_guess_datetime_format_with_locale_specific_formats(string, fmt):
+ result = parsing._guess_datetime_format(string)
+ assert result == fmt
+
+
+@pytest.mark.parametrize("invalid_dt", [
+ "2013", "01/2013", "12:00:00", "1/1/1/1",
+ "this_is_not_a_datetime", "51a", 9,
+ datetime(2011, 1, 1)
+])
+def test_guess_datetime_format_invalid_inputs(invalid_dt):
+ # A datetime string must include a year, month and a day for it to be
+ # guessable, in addition to being a string that looks like a datetime.
+ assert parsing._guess_datetime_format(invalid_dt) is None
+
+
+@pytest.mark.parametrize("string,fmt", [
+ ("2011-1-1", "%Y-%m-%d"),
+ ("1/1/2011", "%m/%d/%Y"),
+ ("30-1-2011", "%d-%m-%Y"),
+ ("2011-1-1 0:0:0", "%Y-%m-%d %H:%M:%S"),
+ ("2011-1-3T00:00:0", "%Y-%m-%dT%H:%M:%S"),
+ ("2011-1-1 00:00:00", "%Y-%m-%d %H:%M:%S")
+])
+def test_guess_datetime_format_no_padding(string, fmt):
+ # see gh-11142
+ result = parsing._guess_datetime_format(string)
+ assert result == fmt
+
+
+def test_try_parse_dates():
+ arr = np.array(["5/1/2000", "6/1/2000", "7/1/2000"], dtype=object)
+ result = parsing.try_parse_dates(arr, dayfirst=True)
+
+ expected = np.array([parse(d, dayfirst=True) for d in arr])
+ tm.assert_numpy_array_equal(result, expected)
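The quarterly-string shapes accepted by the tests above (`1988Q2`, `1988-Q2`, `2Q1988`, `2Q-1988`) can be mirrored with a hypothetical regex-based helper. This is a sketch of the accepted grammar, not pandas' actual parser, and its error messages differ from the real ones:

```python
import re

def parse_quarter_sketch(s):
    """Return (year, quarter) for strings like 4Q1984 or 1988-Q2."""
    m = re.fullmatch(r"(\d{4})-?[Qq](\d)", s)
    if m:
        year, quarter = int(m.group(1)), int(m.group(2))
    else:
        m = re.fullmatch(r"(\d)[Qq]-?(\d{4})", s)
        if m is None:
            raise ValueError("Unknown datetime string format: " + s)
        year, quarter = int(m.group(2)), int(m.group(1))
    if not 1 <= quarter <= 4:
        raise ValueError("quarter must be between 1 and 4: " + s)
    return year, quarter
```

Dashed and undashed forms parse identically, and out-of-range quarters such as `2013Q5` raise `ValueError`, as in the tests.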
diff --git a/pandas/tests/tslibs/test_period_asfreq.py b/pandas/tests/tslibs/test_period_asfreq.py
index e5978a59bc2a1..6a9522e705318 100644
--- a/pandas/tests/tslibs/test_period_asfreq.py
+++ b/pandas/tests/tslibs/test_period_asfreq.py
@@ -1,82 +1,87 @@
# -*- coding: utf-8 -*-
+import pytest
+
from pandas._libs.tslibs.frequencies import get_freq
from pandas._libs.tslibs.period import period_asfreq, period_ordinal
-class TestPeriodFreqConversion(object):
-
- def test_intraday_conversion_factors(self):
- assert period_asfreq(1, get_freq('D'), get_freq('H'), False) == 24
- assert period_asfreq(1, get_freq('D'), get_freq('T'), False) == 1440
- assert period_asfreq(1, get_freq('D'), get_freq('S'), False) == 86400
- assert period_asfreq(1, get_freq('D'),
- get_freq('L'), False) == 86400000
- assert period_asfreq(1, get_freq('D'),
- get_freq('U'), False) == 86400000000
- assert period_asfreq(1, get_freq('D'),
- get_freq('N'), False) == 86400000000000
-
- assert period_asfreq(1, get_freq('H'), get_freq('T'), False) == 60
- assert period_asfreq(1, get_freq('H'), get_freq('S'), False) == 3600
- assert period_asfreq(1, get_freq('H'),
- get_freq('L'), False) == 3600000
- assert period_asfreq(1, get_freq('H'),
- get_freq('U'), False) == 3600000000
- assert period_asfreq(1, get_freq('H'),
- get_freq('N'), False) == 3600000000000
-
- assert period_asfreq(1, get_freq('T'), get_freq('S'), False) == 60
- assert period_asfreq(1, get_freq('T'), get_freq('L'), False) == 60000
- assert period_asfreq(1, get_freq('T'),
- get_freq('U'), False) == 60000000
- assert period_asfreq(1, get_freq('T'),
- get_freq('N'), False) == 60000000000
-
- assert period_asfreq(1, get_freq('S'), get_freq('L'), False) == 1000
- assert period_asfreq(1, get_freq('S'),
- get_freq('U'), False) == 1000000
- assert period_asfreq(1, get_freq('S'),
- get_freq('N'), False) == 1000000000
-
- assert period_asfreq(1, get_freq('L'), get_freq('U'), False) == 1000
- assert period_asfreq(1, get_freq('L'),
- get_freq('N'), False) == 1000000
-
- assert period_asfreq(1, get_freq('U'), get_freq('N'), False) == 1000
-
- def test_period_ordinal_start_values(self):
- # information for 1.1.1970
- assert period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('A')) == 0
- assert period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('M')) == 0
- assert period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('W')) == 1
- assert period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('D')) == 0
- assert period_ordinal(1970, 1, 1, 0, 0, 0, 0, 0, get_freq('B')) == 0
-
- def test_period_ordinal_week(self):
- assert period_ordinal(1970, 1, 4, 0, 0, 0, 0, 0, get_freq('W')) == 1
- assert period_ordinal(1970, 1, 5, 0, 0, 0, 0, 0, get_freq('W')) == 2
- assert period_ordinal(2013, 10, 6, 0,
- 0, 0, 0, 0, get_freq('W')) == 2284
- assert period_ordinal(2013, 10, 7, 0,
- 0, 0, 0, 0, get_freq('W')) == 2285
-
- def test_period_ordinal_business_day(self):
- # Thursday
- assert period_ordinal(2013, 10, 3, 0,
- 0, 0, 0, 0, get_freq('B')) == 11415
- # Friday
- assert period_ordinal(2013, 10, 4, 0,
- 0, 0, 0, 0, get_freq('B')) == 11416
- # Saturday
- assert period_ordinal(2013, 10, 5, 0,
- 0, 0, 0, 0, get_freq('B')) == 11417
- # Sunday
- assert period_ordinal(2013, 10, 6, 0,
- 0, 0, 0, 0, get_freq('B')) == 11417
- # Monday
- assert period_ordinal(2013, 10, 7, 0,
- 0, 0, 0, 0, get_freq('B')) == 11417
- # Tuesday
- assert period_ordinal(2013, 10, 8, 0,
- 0, 0, 0, 0, get_freq('B')) == 11418
+@pytest.mark.parametrize("freq1,freq2,expected", [
+ ("D", "H", 24),
+ ("D", "T", 1440),
+ ("D", "S", 86400),
+ ("D", "L", 86400000),
+ ("D", "U", 86400000000),
+ ("D", "N", 86400000000000),
+
+ ("H", "T", 60),
+ ("H", "S", 3600),
+ ("H", "L", 3600000),
+ ("H", "U", 3600000000),
+ ("H", "N", 3600000000000),
+
+ ("T", "S", 60),
+ ("T", "L", 60000),
+ ("T", "U", 60000000),
+ ("T", "N", 60000000000),
+
+ ("S", "L", 1000),
+ ("S", "U", 1000000),
+ ("S", "N", 1000000000),
+
+ ("L", "U", 1000),
+ ("L", "N", 1000000),
+
+ ("U", "N", 1000)
+])
+def test_intra_day_conversion_factors(freq1, freq2, expected):
+ assert period_asfreq(1, get_freq(freq1),
+ get_freq(freq2), False) == expected
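The intraday conversion factors in the table above are just ratios of nanoseconds per unit. A sketch of that relationship (illustrative, not pandas' `period_asfreq`):

```python
# Nanoseconds per unit for each intraday frequency code; the conversion
# factor between two frequencies is the ratio of these constants.
NANOS_PER_UNIT = {
    "D": 86400000000000,  # day
    "H": 3600000000000,   # hour
    "T": 60000000000,     # minute
    "S": 1000000000,      # second
    "L": 1000000,         # millisecond
    "U": 1000,            # microsecond
    "N": 1,               # nanosecond
}

def intraday_factor(freq1, freq2):
    return NANOS_PER_UNIT[freq1] // NANOS_PER_UNIT[freq2]
```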
+
+
+@pytest.mark.parametrize("freq,expected", [
+ ("A", 0),
+ ("M", 0),
+ ("W", 1),
+ ("D", 0),
+ ("B", 0)
+])
+def test_period_ordinal_start_values(freq, expected):
+ # information for Jan. 1, 1970.
+ assert period_ordinal(1970, 1, 1, 0, 0, 0,
+ 0, 0, get_freq(freq)) == expected
+
+
+@pytest.mark.parametrize("dt,expected", [
+ ((1970, 1, 4, 0, 0, 0, 0, 0), 1),
+ ((1970, 1, 5, 0, 0, 0, 0, 0), 2),
+ ((2013, 10, 6, 0, 0, 0, 0, 0), 2284),
+ ((2013, 10, 7, 0, 0, 0, 0, 0), 2285)
+])
+def test_period_ordinal_week(dt, expected):
+ args = dt + (get_freq("W"),)
+ assert period_ordinal(*args) == expected
+
+
+@pytest.mark.parametrize("day,expected", [
+ # Thursday (Oct. 3, 2013).
+ (3, 11415),
+
+ # Friday (Oct. 4, 2013).
+ (4, 11416),
+
+ # Saturday (Oct. 5, 2013).
+ (5, 11417),
+
+ # Sunday (Oct. 6, 2013).
+ (6, 11417),
+
+ # Monday (Oct. 7, 2013).
+ (7, 11417),
+
+ # Tuesday (Oct. 8, 2013).
+ (8, 11418)
+])
+def test_period_ordinal_business_day(day, expected):
+ args = (2013, 10, day, 0, 0, 0, 0, 0, get_freq("B"))
+ assert period_ordinal(*args) == expected
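The weekly ordinals expected above can be reconstructed from days since the epoch. This is a hedged sketch inferred from the test values (the week containing Jan 4, 1970 has ordinal 1), not the real `period_ordinal`:

```python
from datetime import date

def week_ordinal_sketch(year, month, day):
    """Weekly period ordinal relative to the epoch week."""
    days_since_epoch = (date(year, month, day) - date(1970, 1, 1)).days
    # Shift so week boundaries line up with the epoch, then count weeks.
    return (days_since_epoch + 3) // 7 + 1
```

This reproduces the parametrized values, e.g. Oct 6, 2013 maps to 2284 and Oct 7, 2013 to 2285.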
diff --git a/pandas/tests/tslibs/test_timedeltas.py b/pandas/tests/tslibs/test_timedeltas.py
index 50e64bb7c2082..fdc8eff80acad 100644
--- a/pandas/tests/tslibs/test_timedeltas.py
+++ b/pandas/tests/tslibs/test_timedeltas.py
@@ -5,37 +5,25 @@
from pandas._libs.tslibs.timedeltas import delta_to_nanoseconds
import pandas as pd
-
-
-def test_delta_to_nanoseconds():
- obj = np.timedelta64(14, 'D')
- result = delta_to_nanoseconds(obj)
- assert result == 14 * 24 * 3600 * 1e9
-
- obj = pd.Timedelta(minutes=-7)
- result = delta_to_nanoseconds(obj)
- assert result == -7 * 60 * 1e9
-
- obj = pd.Timedelta(minutes=-7).to_pytimedelta()
+from pandas import Timedelta
+
+
+@pytest.mark.parametrize("obj,expected", [
+ (np.timedelta64(14, "D"), 14 * 24 * 3600 * 1e9),
+ (Timedelta(minutes=-7), -7 * 60 * 1e9),
+ (Timedelta(minutes=-7).to_pytimedelta(), -7 * 60 * 1e9),
+ (pd.offsets.Nano(125), 125),
+ (1, 1),
+ (np.int64(2), 2),
+ (np.int32(3), 3)
+])
+def test_delta_to_nanoseconds(obj, expected):
result = delta_to_nanoseconds(obj)
- assert result == -7 * 60 * 1e9
+ assert result == expected
- obj = pd.offsets.Nano(125)
- result = delta_to_nanoseconds(obj)
- assert result == 125
-
- obj = 1
- result = delta_to_nanoseconds(obj)
- assert obj == 1
- obj = np.int64(2)
- result = delta_to_nanoseconds(obj)
- assert obj == 2
-
- obj = np.int32(3)
- result = delta_to_nanoseconds(obj)
- assert result == 3
+def test_delta_to_nanoseconds_error():
+ obj = np.array([123456789], dtype="m8[ns]")
- obj = np.array([123456789], dtype='m8[ns]')
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match="<(class|type) 'numpy.ndarray'>"):
delta_to_nanoseconds(obj)
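For the `timedelta` cases above, the conversion is just total seconds times 10^9. A plain-datetime analogue (the real `delta_to_nanoseconds` also handles numpy scalars, offsets, and raw integers, and rejects arrays):

```python
from datetime import timedelta

def delta_to_nanoseconds_sketch(delta):
    """Convert a timedelta (or nanosecond count) to integer nanoseconds."""
    if isinstance(delta, timedelta):
        return int(delta.total_seconds() * 1000000000)
    return int(delta)  # assume an integer count of nanoseconds
```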
diff --git a/pandas/tests/tslibs/test_timezones.py b/pandas/tests/tslibs/test_timezones.py
index 68a6c1b09b992..0255865dbdf71 100644
--- a/pandas/tests/tslibs/test_timezones.py
+++ b/pandas/tests/tslibs/test_timezones.py
@@ -10,39 +10,51 @@
from pandas import Timestamp
-@pytest.mark.parametrize('tz_name', list(pytz.common_timezones))
+@pytest.mark.parametrize("tz_name", list(pytz.common_timezones))
def test_cache_keys_are_distinct_for_pytz_vs_dateutil(tz_name):
- if tz_name == 'UTC':
- # skip utc as it's a special case in dateutil
- return
+ if tz_name == "UTC":
+ pytest.skip("UTC: special case in dateutil")
+
tz_p = timezones.maybe_get_tz(tz_name)
- tz_d = timezones.maybe_get_tz('dateutil/' + tz_name)
+ tz_d = timezones.maybe_get_tz("dateutil/" + tz_name)
+
if tz_d is None:
- # skip timezones that dateutil doesn't know about.
- return
+ pytest.skip(tz_name + ": dateutil does not know about this one")
+
assert timezones._p_tz_cache_key(tz_p) != timezones._p_tz_cache_key(tz_d)
-def test_tzlocal():
- # GH#13583
- ts = Timestamp('2011-01-01', tz=dateutil.tz.tzlocal())
+def test_tzlocal_repr():
+ # see gh-13583
+ ts = Timestamp("2011-01-01", tz=dateutil.tz.tzlocal())
assert ts.tz == dateutil.tz.tzlocal()
assert "tz='tzlocal()')" in repr(ts)
+
+def test_tzlocal_maybe_get_tz():
+ # see gh-13583
tz = timezones.maybe_get_tz('tzlocal()')
assert tz == dateutil.tz.tzlocal()
- # get offset using normal datetime for test
+
+def test_tzlocal_offset():
+ # see gh-13583
+ #
+ # Get offset using normal datetime for test.
+ ts = Timestamp("2011-01-01", tz=dateutil.tz.tzlocal())
+
offset = dateutil.tz.tzlocal().utcoffset(datetime(2011, 1, 1))
offset = offset.total_seconds() * 1000000000
- assert ts.value + offset == Timestamp('2011-01-01').value
+ assert ts.value + offset == Timestamp("2011-01-01").value
-@pytest.mark.parametrize('eastern, localize', [
- (pytz.timezone('US/Eastern'), lambda tz, x: tz.localize(x)),
- (dateutil.tz.gettz('US/Eastern'), lambda tz, x: x.replace(tzinfo=tz))])
-def test_infer_tz(eastern, localize):
- utc = pytz.utc
+
+@pytest.fixture(params=[
+ (pytz.timezone("US/Eastern"), lambda tz, x: tz.localize(x)),
+ (dateutil.tz.gettz("US/Eastern"), lambda tz, x: x.replace(tzinfo=tz))
+])
+def infer_setup(request):
+ eastern, localize = request.param
start_naive = datetime(2001, 1, 1)
end_naive = datetime(2009, 1, 1)
@@ -50,6 +62,12 @@ def test_infer_tz(eastern, localize):
start = localize(eastern, start_naive)
end = localize(eastern, end_naive)
+ return eastern, localize, start, end, start_naive, end_naive
+
+
+def test_infer_tz_compat(infer_setup):
+ eastern, _, start, end, start_naive, end_naive = infer_setup
+
assert (timezones.infer_tzinfo(start, end) is
conversion.localize_pydatetime(start_naive, eastern).tzinfo)
assert (timezones.infer_tzinfo(start, None) is
@@ -57,12 +75,27 @@ def test_infer_tz(eastern, localize):
assert (timezones.infer_tzinfo(None, end) is
conversion.localize_pydatetime(end_naive, eastern).tzinfo)
+
+def test_infer_tz_utc_localize(infer_setup):
+ _, _, start, end, start_naive, end_naive = infer_setup
+ utc = pytz.utc
+
start = utc.localize(start_naive)
end = utc.localize(end_naive)
+
assert timezones.infer_tzinfo(start, end) is utc
+
+@pytest.mark.parametrize("ordered", [True, False])
+def test_infer_tz_mismatch(infer_setup, ordered):
+ eastern, _, _, _, start_naive, end_naive = infer_setup
+ msg = "Inputs must both have the same timezone"
+
+ utc = pytz.utc
+ start = utc.localize(start_naive)
end = conversion.localize_pydatetime(end_naive, eastern)
- with pytest.raises(Exception):
- timezones.infer_tzinfo(start, end)
- with pytest.raises(Exception):
- timezones.infer_tzinfo(end, start)
+
+ args = (start, end) if ordered else (end, start)
+
+ with pytest.raises(AssertionError, match=msg):
+ timezones.infer_tzinfo(*args)
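The `infer_tzinfo` contract exercised above — return the shared tzinfo of two possibly-None datetimes, asserting agreement when both carry one — can be sketched with stdlib timezones only (illustrative, not pandas' implementation):

```python
from datetime import datetime, timedelta, timezone

def infer_tzinfo_sketch(start, end):
    """Return the common tzinfo of `start` and `end` (either may be None)."""
    start_tz = start.tzinfo if start is not None else None
    end_tz = end.tzinfo if end is not None else None
    if start_tz is not None and end_tz is not None:
        assert start_tz == end_tz, "Inputs must both have the same timezone"
    return start_tz if start_tz is not None else end_tz
```

Mismatched inputs raise `AssertionError` in either argument order, which is what the `ordered` parametrization above checks.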
diff --git a/pandas/tests/tslibs/test_tslib.py b/pandas/tests/tslibs/test_tslib.py
deleted file mode 100644
index 17bd46cd235da..0000000000000
--- a/pandas/tests/tslibs/test_tslib.py
+++ /dev/null
@@ -1,23 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Tests for functions from pandas._libs.tslibs"""
-
-from datetime import date, datetime
-
-from pandas._libs import tslibs
-
-
-def test_normalize_date():
- value = date(2012, 9, 7)
-
- result = tslibs.normalize_date(value)
- assert (result == datetime(2012, 9, 7))
-
- value = datetime(2012, 9, 7, 12)
-
- result = tslibs.normalize_date(value)
- assert (result == datetime(2012, 9, 7))
-
- value = datetime(2007, 10, 1, 1, 12, 5, 10)
-
- actual = tslibs.normalize_date(value)
- assert actual == datetime(2007, 10, 1)
 | I was planning to correct individual files, but then I realized that the fixes involved more and more files in the directory (tests were being shifted around), to the point that I just modified them all. 🙂 | https://api.github.com/repos/pandas-dev/pandas/pulls/24587 | 2019-01-03T08:47:13Z | 2019-01-03T21:14:30Z | 2019-01-03T21:14:30Z | 2019-01-03T21:46:49Z |
DOC: Correct description of day_opt in shift_month | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 11ce539d25767..0ca9410df89c0 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -847,11 +847,15 @@ def shift_month(stamp: datetime, months: int,
----------
stamp : datetime or Timestamp
months : int
- day_opt : None, 'start', 'end', or an integer
+ day_opt : None, 'start', 'end', 'business_start', 'business_end', or int
None: returned datetimelike has the same day as the input, or the
last day of the month if the new month is too short
'start': returned datetimelike has day=1
'end': returned datetimelike has day on the last day of the month
+ 'business_start': returned datetimelike has day on the first
+ business day of the month
+ 'business_end': returned datetimelike has day on the last
+ business day of the month
int: returned datetimelike has day equal to day_opt
Returns
| Doc is updated per implementation at [268150f](https://github.com/pandas-dev/pandas/blob/268150f/pandas/_libs/tslibs/offsets.pyx#L837-L891). | https://api.github.com/repos/pandas-dev/pandas/pulls/24585 | 2019-01-03T07:33:59Z | 2019-01-03T08:43:43Z | 2019-01-03T08:43:43Z | 2019-01-03T20:30:46Z |
Fix docstring templates not being filled (#24535) | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index a7f2d4fad38de..c853a30c0de79 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1920,7 +1920,7 @@ def notna(self):
Returns
-------
- filled : %(klass)s
+ filled : Index
"""
@Appender(_index_shared_docs['fillna'])
| - [x] closes #24535
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Fixes the docstring template issue
| https://api.github.com/repos/pandas-dev/pandas/pulls/24584 | 2019-01-03T07:28:22Z | 2019-01-03T12:18:45Z | 2019-01-03T12:18:45Z | 2019-01-04T06:57:12Z |
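This PR replaces an unfilled `%(klass)s` placeholder with the literal `Index`, because the shared-docstring decorator alone performs no `%` substitution. A minimal sketch of the pattern (names simplified; `Appender` here is a stripped-down stand-in for `pandas.util._decorators.Appender`, not its actual implementation):

```python
def Appender(addendum):
    # Append shared docstring text to the decorated function's docstring.
    # Note: no % substitution happens here, which is why a %(klass)s
    # placeholder left in the template would leak into the rendered docs.
    def decorate(func):
        func.__doc__ = (func.__doc__ or '') + addendum
        return func
    return decorate

_shared_docs = {}

# After the fix, the template names the concrete return type directly
# instead of carrying a %(klass)s placeholder.
_shared_docs['fillna'] = """
Fill NA/NaN values.

Returns
-------
filled : Index
"""

@Appender(_shared_docs['fillna'])
def fillna(value=None):
    pass

assert 'filled : Index' in fillna.__doc__
assert '%(klass)s' not in fillna.__doc__
```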
REF: put DatetimeBlock adjacent to DatetimeLikeBlockMixin | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 5ce5ae7186774..d12114bd951ba 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -2206,48 +2206,71 @@ def asi8(self):
return self.values.view('i8')
-class TimeDeltaBlock(DatetimeLikeBlockMixin, IntBlock):
+class DatetimeBlock(DatetimeLikeBlockMixin, Block):
__slots__ = ()
- is_timedelta = True
+ is_datetime = True
_can_hold_na = True
- is_numeric = False
def __init__(self, values, placement, ndim=None):
- if values.dtype != _TD_DTYPE:
- values = conversion.ensure_timedelta64ns(values)
- if isinstance(values, TimedeltaArray):
+ values = self._maybe_coerce_values(values)
+ super(DatetimeBlock, self).__init__(values,
+ placement=placement, ndim=ndim)
+
+ def _maybe_coerce_values(self, values):
+ """Input validation for values passed to __init__. Ensure that
+ we have datetime64ns, coercing if necessary.
+
+ Parameters
+ ----------
+ values : array-like
+ Must be convertible to datetime64
+
+ Returns
+ -------
+ values : ndarray[datetime64ns]
+
+ Overridden by DatetimeTZBlock.
+ """
+ if values.dtype != _NS_DTYPE:
+ values = conversion.ensure_datetime64ns(values)
+
+ if isinstance(values, DatetimeArray):
values = values._data
+
assert isinstance(values, np.ndarray), type(values)
- super(TimeDeltaBlock, self).__init__(values,
- placement=placement, ndim=ndim)
+ return values
- @property
- def _holder(self):
- return TimedeltaArray
+ def _astype(self, dtype, **kwargs):
+ """
+ these automatically copy, so copy=True has no effect
+ raise on an except if raise == True
+ """
+ dtype = pandas_dtype(dtype)
- @property
- def _box_func(self):
- return lambda x: Timedelta(x, unit='ns')
+ # if we are passed a datetime64[ns, tz]
+ if is_datetime64tz_dtype(dtype):
+ values = self.values
+ if getattr(values, 'tz', None) is None:
+ values = DatetimeIndex(values).tz_localize('UTC')
+ values = values.tz_convert(dtype.tz)
+ return self.make_block(values)
+
+ # delegate
+ return super(DatetimeBlock, self)._astype(dtype=dtype, **kwargs)
def _can_hold_element(self, element):
tipo = maybe_infer_dtype_type(element)
if tipo is not None:
- return issubclass(tipo.type, (np.timedelta64, np.int64))
- return is_integer(element) or isinstance(
- element, (timedelta, np.timedelta64, np.int64))
-
- def fillna(self, value, **kwargs):
-
- # allow filling with integers to be
- # interpreted as seconds
- if is_integer(value) and not isinstance(value, np.timedelta64):
- value = Timedelta(value, unit='s')
- return super(TimeDeltaBlock, self).fillna(value, **kwargs)
+ return tipo == _NS_DTYPE or tipo == np.int64
+ return (is_integer(element) or isinstance(element, datetime) or
+ isna(element))
def _try_coerce_args(self, values, other):
"""
- Coerce values and other to int64, with null values converted to
- iNaT. values is always ndarray-like, other may not be
+ Coerce values and other to dtype 'i8'. NaN and NaT convert to
+ the smallest i8, and will correctly round-trip to NaT if converted
+ back in _try_coerce_result. values is always ndarray-like, other
+ may not be
Parameters
----------
@@ -2258,19 +2281,20 @@ def _try_coerce_args(self, values, other):
-------
base-type values, base-type other
"""
+
values = values.view('i8')
if isinstance(other, bool):
raise TypeError
elif is_null_datelike_scalar(other):
other = tslibs.iNaT
- elif isinstance(other, Timedelta):
- other = other.value
- elif isinstance(other, timedelta):
- other = Timedelta(other).value
- elif isinstance(other, np.timedelta64):
- other = Timedelta(other).value
- elif hasattr(other, 'dtype') and is_timedelta64_dtype(other):
+ elif isinstance(other, (datetime, np.datetime64, date)):
+ other = self._box_func(other)
+ if getattr(other, 'tz') is not None:
+ raise TypeError("cannot coerce a Timestamp with a tz on a "
+ "naive Block")
+ other = other.asm8.view('i8')
+ elif hasattr(other, 'dtype') and is_datetime64_dtype(other):
other = other.astype('i8', copy=False).view('i8')
else:
# coercion issues
@@ -2280,549 +2304,345 @@ def _try_coerce_args(self, values, other):
return values, other
def _try_coerce_result(self, result):
- """ reverse of try_coerce_args / try_operate """
+ """ reverse of try_coerce_args """
if isinstance(result, np.ndarray):
- mask = isna(result)
if result.dtype.kind in ['i', 'f', 'O']:
- result = result.astype('m8[ns]')
- result[mask] = tslibs.iNaT
- elif isinstance(result, (np.integer, np.float)):
+ try:
+ result = result.astype('M8[ns]')
+ except ValueError:
+ pass
+ elif isinstance(result, (np.integer, np.float, np.datetime64)):
result = self._box_func(result)
return result
- def should_store(self, value):
- return (issubclass(value.dtype.type, np.timedelta64) and
- not is_extension_array_dtype(value))
+ @property
+ def _box_func(self):
+ return tslibs.Timestamp
- def to_native_types(self, slicer=None, na_rep=None, quoting=None,
- **kwargs):
+ def to_native_types(self, slicer=None, na_rep=None, date_format=None,
+ quoting=None, **kwargs):
""" convert to our native types format, slicing if desired """
values = self.values
- if slicer is not None:
- values = values[:, slicer]
- mask = isna(values)
+ i8values = self.values.view('i8')
- rvalues = np.empty(values.shape, dtype=object)
- if na_rep is None:
- na_rep = 'NaT'
- rvalues[mask] = na_rep
- imask = (~mask).ravel()
+ if slicer is not None:
+ i8values = i8values[..., slicer]
- # FIXME:
- # should use the formats.format.Timedelta64Formatter here
- # to figure what format to pass to the Timedelta
- # e.g. to not show the decimals say
- rvalues.flat[imask] = np.array([Timedelta(val)._repr_base(format='all')
- for val in values.ravel()[imask]],
- dtype=object)
- return rvalues
+ from pandas.io.formats.format import _get_format_datetime64_from_values
+ format = _get_format_datetime64_from_values(values, date_format)
- def external_values(self, dtype=None):
- return np.asarray(self.values.astype("timedelta64[ns]", copy=False))
+ result = tslib.format_array_from_datetime(
+ i8values.ravel(), tz=getattr(self.values, 'tz', None),
+ format=format, na_rep=na_rep).reshape(i8values.shape)
+ return np.atleast_2d(result)
+ def should_store(self, value):
+ return (issubclass(value.dtype.type, np.datetime64) and
+ not is_datetime64tz_dtype(value) and
+ not is_extension_array_dtype(value))
-class BoolBlock(NumericBlock):
- __slots__ = ()
- is_bool = True
- _can_hold_na = False
+ def set(self, locs, values):
+ """
+ Modify Block in-place with new item value
- def _can_hold_element(self, element):
- tipo = maybe_infer_dtype_type(element)
- if tipo is not None:
- return issubclass(tipo.type, np.bool_)
- return isinstance(element, (bool, np.bool_))
+ Returns
+ -------
+ None
+ """
+ values = conversion.ensure_datetime64ns(values, copy=False)
- def should_store(self, value):
- return (issubclass(value.dtype.type, np.bool_) and not
- is_extension_array_dtype(value))
+ self.values[locs] = values
- def replace(self, to_replace, value, inplace=False, filter=None,
- regex=False, convert=True):
- inplace = validate_bool_kwarg(inplace, 'inplace')
- to_replace_values = np.atleast_1d(to_replace)
- if not np.can_cast(to_replace_values, bool):
- return self
- return super(BoolBlock, self).replace(to_replace, value,
- inplace=inplace, filter=filter,
- regex=regex, convert=convert)
+ def external_values(self):
+ return np.asarray(self.values.astype('datetime64[ns]', copy=False))
-class ObjectBlock(Block):
+class DatetimeTZBlock(ExtensionBlock, DatetimeBlock):
+ """ implement a datetime64 block with a tz attribute """
__slots__ = ()
- is_object = True
- _can_hold_na = True
-
- def __init__(self, values, placement=None, ndim=2):
- if issubclass(values.dtype.type, compat.string_types):
- values = np.array(values, dtype=object)
+ is_datetimetz = True
+ is_extension = True
- super(ObjectBlock, self).__init__(values, ndim=ndim,
- placement=placement)
+ def __init__(self, values, placement, ndim=2, dtype=None):
+ # XXX: This will end up calling _maybe_coerce_values twice
+ # when dtype is not None. It's relatively cheap (just an isinstance)
+ # but it'd be nice to avoid.
+ #
+ # If we can remove dtype from __init__, and push that conversion
+ # onto the callers, then we can remove this entire __init__
+ # and just use DatetimeBlock's.
+ if dtype is not None:
+ values = self._maybe_coerce_values(values, dtype=dtype)
+ super(DatetimeTZBlock, self).__init__(values, placement=placement,
+ ndim=ndim)
@property
- def is_bool(self):
- """ we can be a bool if we have only bool values but are of type
- object
- """
- return lib.is_bool_array(self.values.ravel())
+ def _holder(self):
+ return DatetimeArray
- # TODO: Refactor when convert_objects is removed since there will be 1 path
- def convert(self, *args, **kwargs):
- """ attempt to coerce any object types to better types return a copy of
- the block (if copy = True) by definition we ARE an ObjectBlock!!!!!
+ def _maybe_coerce_values(self, values, dtype=None):
+ """Input validation for values passed to __init__. Ensure that
+ we have datetime64TZ, coercing if necessary.
- can return multiple blocks!
+ Parameters
+ ----------
+ values : array-like
+ Must be convertible to datetime64
+ dtype : string or DatetimeTZDtype, optional
+ Does a shallow copy to this tz
+
+ Returns
+ -------
+ values : ndarray[datetime64ns]
"""
+ if not isinstance(values, self._holder):
+ values = self._holder(values)
- if args:
- raise NotImplementedError
- by_item = True if 'by_item' not in kwargs else kwargs['by_item']
+ if dtype is not None:
+ if isinstance(dtype, compat.string_types):
+ dtype = DatetimeTZDtype.construct_from_string(dtype)
+ values = type(values)(values, dtype=dtype)
- new_inputs = ['coerce', 'datetime', 'numeric', 'timedelta']
- new_style = False
- for kw in new_inputs:
- new_style |= kw in kwargs
+ if values.tz is None:
+ raise ValueError("cannot create a DatetimeTZBlock without a tz")
- if new_style:
- fn = soft_convert_objects
- fn_inputs = new_inputs
- else:
- fn = maybe_convert_objects
- fn_inputs = ['convert_dates', 'convert_numeric',
- 'convert_timedeltas']
- fn_inputs += ['copy']
+ return values
- fn_kwargs = {key: kwargs[key] for key in fn_inputs if key in kwargs}
+ @property
+ def is_view(self):
+ """ return a boolean if I am possibly a view """
+ # check the ndarray values of the DatetimeIndex values
+ return self.values._data.base is not None
- # operate column-by-column
- def f(m, v, i):
- shape = v.shape
- values = fn(v.ravel(), **fn_kwargs)
- try:
- values = values.reshape(shape)
- values = _block_shape(values, ndim=self.ndim)
- except (AttributeError, NotImplementedError):
- pass
-
- return values
-
- if by_item and not self._is_single_block:
- blocks = self.split_and_operate(None, f, False)
- else:
- values = f(None, self.values.ravel(), None)
- blocks = [make_block(values, ndim=self.ndim,
- placement=self.mgr_locs)]
-
- return blocks
-
- def set(self, locs, values):
- """
- Modify Block in-place with new item value
-
- Returns
- -------
- None
- """
- try:
- self.values[locs] = values
- except (ValueError):
-
- # broadcasting error
- # see GH6171
- new_shape = list(values.shape)
- new_shape[0] = len(self.items)
- self.values = np.empty(tuple(new_shape), dtype=self.dtype)
- self.values.fill(np.nan)
- self.values[locs] = values
-
- def _maybe_downcast(self, blocks, downcast=None):
-
- if downcast is not None:
- return blocks
-
- # split and convert the blocks
- return _extend_blocks([b.convert(datetime=True, numeric=False)
- for b in blocks])
-
- def _can_hold_element(self, element):
- return True
-
- def _try_coerce_args(self, values, other):
- """ provide coercion to our input arguments """
-
- if isinstance(other, ABCDatetimeIndex):
- # May get a DatetimeIndex here. Unbox it.
- other = other.array
-
- if isinstance(other, DatetimeArray):
- # hit in pandas/tests/indexing/test_coercion.py
- # ::TestWhereCoercion::test_where_series_datetime64[datetime64tz]
- # when falling back to ObjectBlock.where
- other = other.astype(object)
-
- return values, other
-
- def should_store(self, value):
- return not (issubclass(value.dtype.type,
- (np.integer, np.floating, np.complexfloating,
- np.datetime64, np.bool_)) or
- # TODO(ExtensionArray): remove is_extension_type
- # when all extension arrays have been ported.
- is_extension_type(value) or
- is_extension_array_dtype(value))
-
- def replace(self, to_replace, value, inplace=False, filter=None,
- regex=False, convert=True):
- to_rep_is_list = is_list_like(to_replace)
- value_is_list = is_list_like(value)
- both_lists = to_rep_is_list and value_is_list
- either_list = to_rep_is_list or value_is_list
-
- result_blocks = []
- blocks = [self]
-
- if not either_list and is_re(to_replace):
- return self._replace_single(to_replace, value, inplace=inplace,
- filter=filter, regex=True,
- convert=convert)
- elif not (either_list or regex):
- return super(ObjectBlock, self).replace(to_replace, value,
- inplace=inplace,
- filter=filter, regex=regex,
- convert=convert)
- elif both_lists:
- for to_rep, v in zip(to_replace, value):
- result_blocks = []
- for b in blocks:
- result = b._replace_single(to_rep, v, inplace=inplace,
- filter=filter, regex=regex,
- convert=convert)
- result_blocks = _extend_blocks(result, result_blocks)
- blocks = result_blocks
- return result_blocks
-
- elif to_rep_is_list and regex:
- for to_rep in to_replace:
- result_blocks = []
- for b in blocks:
- result = b._replace_single(to_rep, value, inplace=inplace,
- filter=filter, regex=regex,
- convert=convert)
- result_blocks = _extend_blocks(result, result_blocks)
- blocks = result_blocks
- return result_blocks
-
- return self._replace_single(to_replace, value, inplace=inplace,
- filter=filter, convert=convert,
- regex=regex)
+ def copy(self, deep=True):
+ """ copy constructor """
+ values = self.values
+ if deep:
+ values = values.copy(deep=True)
+ return self.make_block_same_class(values)
- def _replace_single(self, to_replace, value, inplace=False, filter=None,
- regex=False, convert=True, mask=None):
+ def get_values(self, dtype=None):
"""
- Replace elements by the given value.
+ Returns an ndarray of values.
Parameters
----------
- to_replace : object or pattern
- Scalar to replace or regular expression to match.
- value : object
- Replacement object.
- inplace : bool, default False
- Perform inplace modification.
- filter : list, optional
- regex : bool, default False
- If true, perform regular expression substitution.
- convert : bool, default True
- If true, try to coerce any object types to better types.
- mask : array-like of bool, optional
- True indicate corresponding element is ignored.
+ dtype : np.dtype
+ Only `object`-like dtypes are respected here (not sure
+ why).
Returns
-------
- a new block, the result after replacing
- """
- inplace = validate_bool_kwarg(inplace, 'inplace')
-
- # to_replace is regex compilable
- to_rep_re = regex and is_re_compilable(to_replace)
-
- # regex is regex compilable
- regex_re = is_re_compilable(regex)
-
- # only one will survive
- if to_rep_re and regex_re:
- raise AssertionError('only one of to_replace and regex can be '
- 'regex compilable')
-
- # if regex was passed as something that can be a regex (rather than a
- # boolean)
- if regex_re:
- to_replace = regex
-
- regex = regex_re or to_rep_re
-
- # try to get the pattern attribute (compiled re) or it's a string
- try:
- pattern = to_replace.pattern
- except AttributeError:
- pattern = to_replace
-
- # if the pattern is not empty and to_replace is either a string or a
- # regex
- if regex and pattern:
- rx = re.compile(to_replace)
- else:
- # if the thing to replace is not a string or compiled regex call
- # the superclass method -> to_replace is some kind of object
- return super(ObjectBlock, self).replace(to_replace, value,
- inplace=inplace,
- filter=filter, regex=regex)
-
- new_values = self.values if inplace else self.values.copy()
-
- # deal with replacing values with objects (strings) that match but
- # whose replacement is not a string (numeric, nan, object)
- if isna(value) or not isinstance(value, compat.string_types):
-
- def re_replacer(s):
- try:
- return value if rx.search(s) is not None else s
- except TypeError:
- return s
- else:
- # value is guaranteed to be a string here, s can be either a string
- # or null if it's null it gets returned
- def re_replacer(s):
- try:
- return rx.sub(value, s)
- except TypeError:
- return s
+ values : ndarray
+ When ``dtype=object``, then and object-dtype ndarray of
+ boxed values is returned. Otherwise, an M8[ns] ndarray
+ is returned.
- f = np.vectorize(re_replacer, otypes=[self.dtype])
+ DatetimeArray is always 1-d. ``get_values`` will reshape
+ the return value to be the same dimensionality as the
+ block.
+ """
+ values = self.values
+ if is_object_dtype(dtype):
+ values = values._box_values(values._data)
- if filter is None:
- filt = slice(None)
- else:
- filt = self.mgr_locs.isin(filter).nonzero()[0]
+ values = np.asarray(values)
- if mask is None:
- new_values[filt] = f(new_values[filt])
- else:
- new_values[filt][mask] = f(new_values[filt][mask])
+ if self.ndim == 2:
+ # Ensure that our shape is correct for DataFrame.
+ # ExtensionArrays are always 1-D, even in a DataFrame when
+ # the analogous NumPy-backed column would be a 2-D ndarray.
+ values = values.reshape(1, -1)
+ return values
- # convert
- block = self.make_block(new_values)
- if convert:
- block = block.convert(by_item=True, numeric=False)
- return block
+ def _slice(self, slicer):
+ """ return a slice of my values """
+ if isinstance(slicer, tuple):
+ col, loc = slicer
+ if not com.is_null_slice(col) and col != 0:
+ raise IndexError("{0} only contains one item".format(self))
+ return self.values[loc]
+ return self.values[slicer]
- def _replace_coerce(self, to_replace, value, inplace=True, regex=False,
- convert=False, mask=None):
+ def _try_coerce_args(self, values, other):
"""
- Replace value corresponding to the given boolean array with another
- value.
+ localize and return i8 for the values
Parameters
----------
- to_replace : object or pattern
- Scalar to replace or regular expression to match.
- value : object
- Replacement object.
- inplace : bool, default False
- Perform inplace modification.
- regex : bool, default False
- If true, perform regular expression substitution.
- convert : bool, default True
- If true, try to coerce any object types to better types.
- mask : array-like of bool, optional
- True indicate corresponding element is ignored.
+ values : ndarray-like
+ other : ndarray-like or scalar
Returns
-------
- A new block if there is anything to replace or the original block.
+ base-type values, base-type other
"""
- if mask.any():
- block = super(ObjectBlock, self)._replace_coerce(
- to_replace=to_replace, value=value, inplace=inplace,
- regex=regex, convert=convert, mask=mask)
- if convert:
- block = [b.convert(by_item=True, numeric=False, copy=True)
- for b in block]
- return block
- return self
-
+ # asi8 is a view, needs copy
+ values = _block_shape(values.view("i8"), ndim=self.ndim)
-class CategoricalBlock(ExtensionBlock):
- __slots__ = ()
- is_categorical = True
- _verify_integrity = True
- _can_hold_na = True
- _concatenator = staticmethod(_concat._concat_categorical)
+ if isinstance(other, ABCSeries):
+ other = self._holder(other)
- def __init__(self, values, placement, ndim=None):
- from pandas.core.arrays.categorical import _maybe_to_categorical
+ if isinstance(other, bool):
+ raise TypeError
+ elif is_datetime64_dtype(other):
+ # add the tz back
+ other = self._holder(other, dtype=self.dtype)
- # coerce to categorical if we can
- super(CategoricalBlock, self).__init__(_maybe_to_categorical(values),
- placement=placement,
- ndim=ndim)
+ elif (is_null_datelike_scalar(other) or
+ (lib.is_scalar(other) and isna(other))):
+ other = tslibs.iNaT
+ elif isinstance(other, self._holder):
+ if other.tz != self.values.tz:
+ raise ValueError("incompatible or non tz-aware value")
+ other = _block_shape(other.asi8, ndim=self.ndim)
+ elif isinstance(other, (np.datetime64, datetime, date)):
+ other = tslibs.Timestamp(other)
+ tz = getattr(other, 'tz', None)
- @property
- def _holder(self):
- return Categorical
+ # test we can have an equal time zone
+ if tz is None or str(tz) != str(self.values.tz):
+ raise ValueError("incompatible or non tz-aware value")
+ other = other.value
+ else:
+ raise TypeError
- @property
- def array_dtype(self):
- """ the dtype to return if I want to construct this block as an
- array
- """
- return np.object_
+ return values, other
def _try_coerce_result(self, result):
""" reverse of try_coerce_args """
+ if isinstance(result, np.ndarray):
+ if result.dtype.kind in ['i', 'f', 'O']:
+ result = result.astype('M8[ns]')
+ elif isinstance(result, (np.integer, np.float, np.datetime64)):
+ result = self._box_func(result)
+ if isinstance(result, np.ndarray):
+ # allow passing of > 1dim if its trivial
- # GH12564: CategoricalBlock is 1-dim only
- # while returned results could be any dim
- if ((not is_categorical_dtype(result)) and
- isinstance(result, np.ndarray)):
- result = _block_shape(result, ndim=self.ndim)
+ if result.ndim > 1:
+ result = result.reshape(np.prod(result.shape))
+ # GH#24096 new values invalidates a frequency
+ result = self._holder._simple_new(result, freq=None,
+ tz=self.values.tz)
return result
- def to_dense(self):
- # Categorical.get_values returns a DatetimeIndex for datetime
- # categories, so we can't simply use `np.asarray(self.values)` like
- # other types.
- return self.values.get_values()
+ @property
+ def _box_func(self):
+ return lambda x: tslibs.Timestamp(x, tz=self.dtype.tz)
- def to_native_types(self, slicer=None, na_rep='', quoting=None, **kwargs):
- """ convert to our native types format, slicing if desired """
+ def diff(self, n, axis=0):
+ """1st discrete difference
- values = self.values
- if slicer is not None:
- # Categorical is always one dimension
- values = values[slicer]
- mask = isna(values)
- values = np.array(values, dtype='object')
- values[mask] = na_rep
+ Parameters
+ ----------
+ n : int, number of periods to diff
+ axis : int, axis to diff upon. default 0
- # we are expected to return a 2-d ndarray
- return values.reshape(1, len(values))
+ Return
+ ------
+ A list with a new TimeDeltaBlock.
- def concat_same_type(self, to_concat, placement=None):
+ Note
+ ----
+ The arguments here are mimicking shift so they are called correctly
+ by apply.
"""
- Concatenate list of single blocks of the same type.
+ if axis == 0:
+ # Cannot currently calculate diff across multiple blocks since this
+ # function is invoked via apply
+ raise NotImplementedError
+ new_values = (self.values - self.shift(n, axis=axis)[0].values).asi8
- Note that this CategoricalBlock._concat_same_type *may* not
- return a CategoricalBlock. When the categories in `to_concat`
- differ, this will return an object ndarray.
+ # Reshape the new_values like how algos.diff does for timedelta data
+ new_values = new_values.reshape(1, len(new_values))
+ new_values = new_values.astype('timedelta64[ns]')
+ return [TimeDeltaBlock(new_values, placement=self.mgr_locs.indexer)]
- If / when we decide we don't like that behavior:
+ def concat_same_type(self, to_concat, placement=None):
+ # need to handle concat([tz1, tz2]) here, since DatetimeArray
+ # only handles cases where all the tzs are the same.
+ # Instead of placing the condition here, it could also go into the
+ # is_uniform_join_units check, but I'm not sure what is better.
+ if len({x.dtype for x in to_concat}) > 1:
+ values = _concat._concat_datetime([x.values for x in to_concat])
+ placement = placement or slice(0, len(values), 1)
- 1. Change Categorical._concat_same_type to use union_categoricals
- 2. Delete this method.
- """
- values = self._concatenator([blk.values for blk in to_concat],
- axis=self.ndim - 1)
- # not using self.make_block_same_class as values can be object dtype
- return make_block(
- values, placement=placement or slice(0, len(values), 1),
- ndim=self.ndim)
+ if self.ndim > 1:
+ values = np.atleast_2d(values)
+ return ObjectBlock(values, ndim=self.ndim, placement=placement)
+ return super(DatetimeTZBlock, self).concat_same_type(to_concat,
+ placement)
- def where(self, other, cond, align=True, errors='raise',
- try_cast=False, axis=0, transpose=False):
- # TODO(CategoricalBlock.where):
- # This can all be deleted in favor of ExtensionBlock.where once
- # we enforce the deprecation.
- object_msg = (
- "Implicitly converting categorical to object-dtype ndarray. "
- "One or more of the values in 'other' are not present in this "
- "categorical's categories. A future version of pandas will raise "
- "a ValueError when 'other' contains different categories.\n\n"
- "To preserve the current behavior, add the new categories to "
- "the categorical before calling 'where', or convert the "
- "categorical to a different dtype."
- )
+ def fillna(self, value, limit=None, inplace=False, downcast=None):
+ # We support filling a DatetimeTZ with a `value` whose timezone
+ # is different by coercing to object.
try:
- # Attempt to do preserve categorical dtype.
- result = super(CategoricalBlock, self).where(
- other, cond, align, errors, try_cast, axis, transpose
+ return super(DatetimeTZBlock, self).fillna(
+ value, limit, inplace, downcast
+ )
+ except (ValueError, TypeError):
+ # different timezones, or a non-tz
+ return self.astype(object).fillna(
+ value, limit=limit, inplace=inplace, downcast=downcast
)
- except (TypeError, ValueError):
- warnings.warn(object_msg, FutureWarning, stacklevel=6)
- result = self.astype(object).where(other, cond, align=align,
- errors=errors,
- try_cast=try_cast,
- axis=axis, transpose=transpose)
- return result
+ def setitem(self, indexer, value):
+ # https://github.com/pandas-dev/pandas/issues/24020
+ # Need a dedicated setitem until #24020 (type promotion in setitem
+ # for extension arrays) is designed and implemented.
+ try:
+ return super(DatetimeTZBlock, self).setitem(indexer, value)
+ except (ValueError, TypeError):
+ newb = make_block(self.values.astype(object),
+ placement=self.mgr_locs,
+ klass=ObjectBlock,)
+ return newb.setitem(indexer, value)
-class DatetimeBlock(DatetimeLikeBlockMixin, Block):
+
+class TimeDeltaBlock(DatetimeLikeBlockMixin, IntBlock):
__slots__ = ()
- is_datetime = True
+ is_timedelta = True
_can_hold_na = True
+ is_numeric = False
def __init__(self, values, placement, ndim=None):
- values = self._maybe_coerce_values(values)
- super(DatetimeBlock, self).__init__(values,
- placement=placement, ndim=ndim)
-
- def _maybe_coerce_values(self, values):
- """Input validation for values passed to __init__. Ensure that
- we have datetime64ns, coercing if necessary.
-
- Parameters
- ----------
- values : array-like
- Must be convertible to datetime64
-
- Returns
- -------
- values : ndarray[datetime64ns]
-
- Overridden by DatetimeTZBlock.
- """
- if values.dtype != _NS_DTYPE:
- values = conversion.ensure_datetime64ns(values)
-
- if isinstance(values, DatetimeArray):
+ if values.dtype != _TD_DTYPE:
+ values = conversion.ensure_timedelta64ns(values)
+ if isinstance(values, TimedeltaArray):
values = values._data
-
assert isinstance(values, np.ndarray), type(values)
- return values
-
- def _astype(self, dtype, **kwargs):
- """
- these automatically copy, so copy=True has no effect
- raise on an except if raise == True
- """
- dtype = pandas_dtype(dtype)
+ super(TimeDeltaBlock, self).__init__(values,
+ placement=placement, ndim=ndim)
- # if we are passed a datetime64[ns, tz]
- if is_datetime64tz_dtype(dtype):
- values = self.values
- if getattr(values, 'tz', None) is None:
- values = DatetimeIndex(values).tz_localize('UTC')
- values = values.tz_convert(dtype.tz)
- return self.make_block(values)
+ @property
+ def _holder(self):
+ return TimedeltaArray
- # delegate
- return super(DatetimeBlock, self)._astype(dtype=dtype, **kwargs)
+ @property
+ def _box_func(self):
+ return lambda x: Timedelta(x, unit='ns')
def _can_hold_element(self, element):
tipo = maybe_infer_dtype_type(element)
if tipo is not None:
- return tipo == _NS_DTYPE or tipo == np.int64
- return (is_integer(element) or isinstance(element, datetime) or
- isna(element))
+ return issubclass(tipo.type, (np.timedelta64, np.int64))
+ return is_integer(element) or isinstance(
+ element, (timedelta, np.timedelta64, np.int64))
+
+ def fillna(self, value, **kwargs):
+
+ # allow filling with integers to be
+ # interpreted as seconds
+ if is_integer(value) and not isinstance(value, np.timedelta64):
+ value = Timedelta(value, unit='s')
+ return super(TimeDeltaBlock, self).fillna(value, **kwargs)
def _try_coerce_args(self, values, other):
"""
- Coerce values and other to dtype 'i8'. NaN and NaT convert to
- the smallest i8, and will correctly round-trip to NaT if converted
- back in _try_coerce_result. values is always ndarray-like, other
- may not be
+ Coerce values and other to int64, with null values converted to
+ iNaT. values is always ndarray-like, other may not be
Parameters
----------
@@ -2833,20 +2653,19 @@ def _try_coerce_args(self, values, other):
-------
base-type values, base-type other
"""
-
values = values.view('i8')
if isinstance(other, bool):
raise TypeError
elif is_null_datelike_scalar(other):
other = tslibs.iNaT
- elif isinstance(other, (datetime, np.datetime64, date)):
- other = self._box_func(other)
- if getattr(other, 'tz') is not None:
- raise TypeError("cannot coerce a Timestamp with a tz on a "
- "naive Block")
- other = other.asm8.view('i8')
- elif hasattr(other, 'dtype') and is_datetime64_dtype(other):
+ elif isinstance(other, Timedelta):
+ other = other.value
+ elif isinstance(other, timedelta):
+ other = Timedelta(other).value
+ elif isinstance(other, np.timedelta64):
+ other = Timedelta(other).value
+ elif hasattr(other, 'dtype') and is_timedelta64_dtype(other):
other = other.astype('i8', copy=False).view('i8')
else:
# coercion issues
@@ -2856,43 +2675,141 @@ def _try_coerce_args(self, values, other):
return values, other
def _try_coerce_result(self, result):
- """ reverse of try_coerce_args """
+ """ reverse of try_coerce_args / try_operate """
if isinstance(result, np.ndarray):
+ mask = isna(result)
if result.dtype.kind in ['i', 'f', 'O']:
- try:
- result = result.astype('M8[ns]')
- except ValueError:
- pass
- elif isinstance(result, (np.integer, np.float, np.datetime64)):
+ result = result.astype('m8[ns]')
+ result[mask] = tslibs.iNaT
+ elif isinstance(result, (np.integer, np.float)):
result = self._box_func(result)
return result
+ def should_store(self, value):
+ return (issubclass(value.dtype.type, np.timedelta64) and
+ not is_extension_array_dtype(value))
+
+ def to_native_types(self, slicer=None, na_rep=None, quoting=None,
+ **kwargs):
+ """ convert to our native types format, slicing if desired """
+
+ values = self.values
+ if slicer is not None:
+ values = values[:, slicer]
+ mask = isna(values)
+
+ rvalues = np.empty(values.shape, dtype=object)
+ if na_rep is None:
+ na_rep = 'NaT'
+ rvalues[mask] = na_rep
+ imask = (~mask).ravel()
+
+ # FIXME:
+ # should use the formats.format.Timedelta64Formatter here
+ # to figure what format to pass to the Timedelta
+ # e.g. to not show the decimals say
+ rvalues.flat[imask] = np.array([Timedelta(val)._repr_base(format='all')
+ for val in values.ravel()[imask]],
+ dtype=object)
+ return rvalues
+
+ def external_values(self, dtype=None):
+ return np.asarray(self.values.astype("timedelta64[ns]", copy=False))
+
+
+class BoolBlock(NumericBlock):
+ __slots__ = ()
+ is_bool = True
+ _can_hold_na = False
+
+ def _can_hold_element(self, element):
+ tipo = maybe_infer_dtype_type(element)
+ if tipo is not None:
+ return issubclass(tipo.type, np.bool_)
+ return isinstance(element, (bool, np.bool_))
+
+ def should_store(self, value):
+ return (issubclass(value.dtype.type, np.bool_) and not
+ is_extension_array_dtype(value))
+
+ def replace(self, to_replace, value, inplace=False, filter=None,
+ regex=False, convert=True):
+ inplace = validate_bool_kwarg(inplace, 'inplace')
+ to_replace_values = np.atleast_1d(to_replace)
+ if not np.can_cast(to_replace_values, bool):
+ return self
+ return super(BoolBlock, self).replace(to_replace, value,
+ inplace=inplace, filter=filter,
+ regex=regex, convert=convert)
+
+
+class ObjectBlock(Block):
+ __slots__ = ()
+ is_object = True
+ _can_hold_na = True
+
+ def __init__(self, values, placement=None, ndim=2):
+ if issubclass(values.dtype.type, compat.string_types):
+ values = np.array(values, dtype=object)
+
+ super(ObjectBlock, self).__init__(values, ndim=ndim,
+ placement=placement)
+
@property
- def _box_func(self):
- return tslibs.Timestamp
+ def is_bool(self):
+ """ we can be a bool if we have only bool values but are of type
+ object
+ """
+ return lib.is_bool_array(self.values.ravel())
+
+ # TODO: Refactor when convert_objects is removed since there will be 1 path
+ def convert(self, *args, **kwargs):
+ """ attempt to coerce any object types to better types return a copy of
+ the block (if copy = True) by definition we ARE an ObjectBlock!!!!!
+
+ can return multiple blocks!
+ """
+
+ if args:
+ raise NotImplementedError
+ by_item = True if 'by_item' not in kwargs else kwargs['by_item']
- def to_native_types(self, slicer=None, na_rep=None, date_format=None,
- quoting=None, **kwargs):
- """ convert to our native types format, slicing if desired """
+ new_inputs = ['coerce', 'datetime', 'numeric', 'timedelta']
+ new_style = False
+ for kw in new_inputs:
+ new_style |= kw in kwargs
- values = self.values
- i8values = self.values.view('i8')
+ if new_style:
+ fn = soft_convert_objects
+ fn_inputs = new_inputs
+ else:
+ fn = maybe_convert_objects
+ fn_inputs = ['convert_dates', 'convert_numeric',
+ 'convert_timedeltas']
+ fn_inputs += ['copy']
- if slicer is not None:
- i8values = i8values[..., slicer]
+ fn_kwargs = {key: kwargs[key] for key in fn_inputs if key in kwargs}
- from pandas.io.formats.format import _get_format_datetime64_from_values
- format = _get_format_datetime64_from_values(values, date_format)
+ # operate column-by-column
+ def f(m, v, i):
+ shape = v.shape
+ values = fn(v.ravel(), **fn_kwargs)
+ try:
+ values = values.reshape(shape)
+ values = _block_shape(values, ndim=self.ndim)
+ except (AttributeError, NotImplementedError):
+ pass
- result = tslib.format_array_from_datetime(
- i8values.ravel(), tz=getattr(self.values, 'tz', None),
- format=format, na_rep=na_rep).reshape(i8values.shape)
- return np.atleast_2d(result)
+ return values
- def should_store(self, value):
- return (issubclass(value.dtype.type, np.datetime64) and
- not is_datetime64tz_dtype(value) and
- not is_extension_array_dtype(value))
+ if by_item and not self._is_single_block:
+ blocks = self.split_and_operate(None, f, False)
+ else:
+ values = f(None, self.values.ravel(), None)
+ blocks = [make_block(values, ndim=self.ndim,
+ placement=self.mgr_locs)]
+
+ return blocks
def set(self, locs, values):
"""
@@ -2902,255 +2819,338 @@ def set(self, locs, values):
-------
None
"""
- values = conversion.ensure_datetime64ns(values, copy=False)
+ try:
+ self.values[locs] = values
+ except (ValueError):
- self.values[locs] = values
+ # broadcasting error
+ # see GH6171
+ new_shape = list(values.shape)
+ new_shape[0] = len(self.items)
+ self.values = np.empty(tuple(new_shape), dtype=self.dtype)
+ self.values.fill(np.nan)
+ self.values[locs] = values
- def external_values(self):
- return np.asarray(self.values.astype('datetime64[ns]', copy=False))
+ def _maybe_downcast(self, blocks, downcast=None):
+ if downcast is not None:
+ return blocks
-class DatetimeTZBlock(ExtensionBlock, DatetimeBlock):
- """ implement a datetime64 block with a tz attribute """
- __slots__ = ()
- is_datetimetz = True
- is_extension = True
+ # split and convert the blocks
+ return _extend_blocks([b.convert(datetime=True, numeric=False)
+ for b in blocks])
- def __init__(self, values, placement, ndim=2, dtype=None):
- # XXX: This will end up calling _maybe_coerce_values twice
- # when dtype is not None. It's relatively cheap (just an isinstance)
- # but it'd nice to avoid.
- #
- # If we can remove dtype from __init__, and push that conversion
- # push onto the callers, then we can remove this entire __init__
- # and just use DatetimeBlock's.
- if dtype is not None:
- values = self._maybe_coerce_values(values, dtype=dtype)
- super(DatetimeTZBlock, self).__init__(values, placement=placement,
- ndim=ndim)
+ def _can_hold_element(self, element):
+ return True
- @property
- def _holder(self):
- return DatetimeArray
+ def _try_coerce_args(self, values, other):
+ """ provide coercion to our input arguments """
- def _maybe_coerce_values(self, values, dtype=None):
- """Input validation for values passed to __init__. Ensure that
- we have datetime64TZ, coercing if necessary.
+ if isinstance(other, ABCDatetimeIndex):
+ # May get a DatetimeIndex here. Unbox it.
+ other = other.array
- Parametetrs
- -----------
- values : array-like
- Must be convertible to datetime64
- dtype : string or DatetimeTZDtype, optional
- Does a shallow copy to this tz
+ if isinstance(other, DatetimeArray):
+ # hit in pandas/tests/indexing/test_coercion.py
+ # ::TestWhereCoercion::test_where_series_datetime64[datetime64tz]
+ # when falling back to ObjectBlock.where
+ other = other.astype(object)
- Returns
- -------
- values : ndarray[datetime64ns]
- """
- if not isinstance(values, self._holder):
- values = self._holder(values)
+ return values, other
- if dtype is not None:
- if isinstance(dtype, compat.string_types):
- dtype = DatetimeTZDtype.construct_from_string(dtype)
- values = type(values)(values, dtype=dtype)
+ def should_store(self, value):
+ return not (issubclass(value.dtype.type,
+ (np.integer, np.floating, np.complexfloating,
+ np.datetime64, np.bool_)) or
+ # TODO(ExtensionArray): remove is_extension_type
+ # when all extension arrays have been ported.
+ is_extension_type(value) or
+ is_extension_array_dtype(value))
- if values.tz is None:
- raise ValueError("cannot create a DatetimeTZBlock without a tz")
+ def replace(self, to_replace, value, inplace=False, filter=None,
+ regex=False, convert=True):
+ to_rep_is_list = is_list_like(to_replace)
+ value_is_list = is_list_like(value)
+ both_lists = to_rep_is_list and value_is_list
+ either_list = to_rep_is_list or value_is_list
- return values
+ result_blocks = []
+ blocks = [self]
- @property
- def is_view(self):
- """ return a boolean if I am possibly a view """
- # check the ndarray values of the DatetimeIndex values
- return self.values._data.base is not None
+ if not either_list and is_re(to_replace):
+ return self._replace_single(to_replace, value, inplace=inplace,
+ filter=filter, regex=True,
+ convert=convert)
+ elif not (either_list or regex):
+ return super(ObjectBlock, self).replace(to_replace, value,
+ inplace=inplace,
+ filter=filter, regex=regex,
+ convert=convert)
+ elif both_lists:
+ for to_rep, v in zip(to_replace, value):
+ result_blocks = []
+ for b in blocks:
+ result = b._replace_single(to_rep, v, inplace=inplace,
+ filter=filter, regex=regex,
+ convert=convert)
+ result_blocks = _extend_blocks(result, result_blocks)
+ blocks = result_blocks
+ return result_blocks
- def copy(self, deep=True):
- """ copy constructor """
- values = self.values
- if deep:
- values = values.copy(deep=True)
- return self.make_block_same_class(values)
+ elif to_rep_is_list and regex:
+ for to_rep in to_replace:
+ result_blocks = []
+ for b in blocks:
+ result = b._replace_single(to_rep, value, inplace=inplace,
+ filter=filter, regex=regex,
+ convert=convert)
+ result_blocks = _extend_blocks(result, result_blocks)
+ blocks = result_blocks
+ return result_blocks
- def get_values(self, dtype=None):
+ return self._replace_single(to_replace, value, inplace=inplace,
+ filter=filter, convert=convert,
+ regex=regex)
+
+ def _replace_single(self, to_replace, value, inplace=False, filter=None,
+ regex=False, convert=True, mask=None):
"""
- Returns an ndarray of values.
+ Replace elements by the given value.
Parameters
----------
- dtype : np.dtype
- Only `object`-like dtypes are respected here (not sure
- why).
+ to_replace : object or pattern
+ Scalar to replace or regular expression to match.
+ value : object
+ Replacement object.
+ inplace : bool, default False
+ Perform inplace modification.
+ filter : list, optional
+ regex : bool, default False
+ If true, perform regular expression substitution.
+ convert : bool, default True
+ If true, try to coerce any object types to better types.
+ mask : array-like of bool, optional
+ True indicate corresponding element is ignored.
Returns
-------
- values : ndarray
- When ``dtype=object``, then and object-dtype ndarray of
- boxed values is returned. Otherwise, an M8[ns] ndarray
- is returned.
+ a new block, the result after replacing
+ """
+ inplace = validate_bool_kwarg(inplace, 'inplace')
+
+ # to_replace is regex compilable
+ to_rep_re = regex and is_re_compilable(to_replace)
+
+ # regex is regex compilable
+ regex_re = is_re_compilable(regex)
+
+ # only one will survive
+ if to_rep_re and regex_re:
+ raise AssertionError('only one of to_replace and regex can be '
+ 'regex compilable')
+
+ # if regex was passed as something that can be a regex (rather than a
+ # boolean)
+ if regex_re:
+ to_replace = regex
+
+ regex = regex_re or to_rep_re
+
+ # try to get the pattern attribute (compiled re) or it's a string
+ try:
+ pattern = to_replace.pattern
+ except AttributeError:
+ pattern = to_replace
+
+ # if the pattern is not empty and to_replace is either a string or a
+ # regex
+ if regex and pattern:
+ rx = re.compile(to_replace)
+ else:
+ # if the thing to replace is not a string or compiled regex call
+ # the superclass method -> to_replace is some kind of object
+ return super(ObjectBlock, self).replace(to_replace, value,
+ inplace=inplace,
+ filter=filter, regex=regex)
+
+ new_values = self.values if inplace else self.values.copy()
+
+ # deal with replacing values with objects (strings) that match but
+ # whose replacement is not a string (numeric, nan, object)
+ if isna(value) or not isinstance(value, compat.string_types):
+
+ def re_replacer(s):
+ try:
+ return value if rx.search(s) is not None else s
+ except TypeError:
+ return s
+ else:
+ # value is guaranteed to be a string here, s can be either a string
+ # or null if it's null it gets returned
+ def re_replacer(s):
+ try:
+ return rx.sub(value, s)
+ except TypeError:
+ return s
- DatetimeArray is always 1-d. ``get_values`` will reshape
- the return value to be the same dimensionality as the
- block.
- """
- values = self.values
- if is_object_dtype(dtype):
- values = values._box_values(values._data)
+ f = np.vectorize(re_replacer, otypes=[self.dtype])
- values = np.asarray(values)
+ if filter is None:
+ filt = slice(None)
+ else:
+ filt = self.mgr_locs.isin(filter).nonzero()[0]
- if self.ndim == 2:
- # Ensure that our shape is correct for DataFrame.
- # ExtensionArrays are always 1-D, even in a DataFrame when
- # the analogous NumPy-backed column would be a 2-D ndarray.
- values = values.reshape(1, -1)
- return values
+ if mask is None:
+ new_values[filt] = f(new_values[filt])
+ else:
+ new_values[filt][mask] = f(new_values[filt][mask])
- def _slice(self, slicer):
- """ return a slice of my values """
- if isinstance(slicer, tuple):
- col, loc = slicer
- if not com.is_null_slice(col) and col != 0:
- raise IndexError("{0} only contains one item".format(self))
- return self.values[loc]
- return self.values[slicer]
+ # convert
+ block = self.make_block(new_values)
+ if convert:
+ block = block.convert(by_item=True, numeric=False)
+ return block
- def _try_coerce_args(self, values, other):
+ def _replace_coerce(self, to_replace, value, inplace=True, regex=False,
+ convert=False, mask=None):
"""
- localize and return i8 for the values
+ Replace value corresponding to the given boolean array with another
+ value.
Parameters
----------
- values : ndarray-like
- other : ndarray-like or scalar
+ to_replace : object or pattern
+ Scalar to replace or regular expression to match.
+ value : object
+ Replacement object.
+ inplace : bool, default False
+ Perform inplace modification.
+ regex : bool, default False
+ If true, perform regular expression substitution.
+ convert : bool, default True
+ If true, try to coerce any object types to better types.
+ mask : array-like of bool, optional
+ True indicate corresponding element is ignored.
Returns
-------
- base-type values, base-type other
+ A new block if there is anything to replace or the original block.
"""
- # asi8 is a view, needs copy
- values = _block_shape(values.view("i8"), ndim=self.ndim)
+ if mask.any():
+ block = super(ObjectBlock, self)._replace_coerce(
+ to_replace=to_replace, value=value, inplace=inplace,
+ regex=regex, convert=convert, mask=mask)
+ if convert:
+ block = [b.convert(by_item=True, numeric=False, copy=True)
+ for b in block]
+ return block
+ return self
- if isinstance(other, ABCSeries):
- other = self._holder(other)
- if isinstance(other, bool):
- raise TypeError
- elif is_datetime64_dtype(other):
- # add the tz back
- other = self._holder(other, dtype=self.dtype)
+class CategoricalBlock(ExtensionBlock):
+ __slots__ = ()
+ is_categorical = True
+ _verify_integrity = True
+ _can_hold_na = True
+ _concatenator = staticmethod(_concat._concat_categorical)
- elif (is_null_datelike_scalar(other) or
- (lib.is_scalar(other) and isna(other))):
- other = tslibs.iNaT
- elif isinstance(other, self._holder):
- if other.tz != self.values.tz:
- raise ValueError("incompatible or non tz-aware value")
- other = _block_shape(other.asi8, ndim=self.ndim)
- elif isinstance(other, (np.datetime64, datetime, date)):
- other = tslibs.Timestamp(other)
- tz = getattr(other, 'tz', None)
+ def __init__(self, values, placement, ndim=None):
+ from pandas.core.arrays.categorical import _maybe_to_categorical
- # test we can have an equal time zone
- if tz is None or str(tz) != str(self.values.tz):
- raise ValueError("incompatible or non tz-aware value")
- other = other.value
- else:
- raise TypeError
+ # coerce to categorical if we can
+ super(CategoricalBlock, self).__init__(_maybe_to_categorical(values),
+ placement=placement,
+ ndim=ndim)
- return values, other
+ @property
+ def _holder(self):
+ return Categorical
+
+ @property
+ def array_dtype(self):
+ """ the dtype to return if I want to construct this block as an
+ array
+ """
+ return np.object_
def _try_coerce_result(self, result):
""" reverse of try_coerce_args """
- if isinstance(result, np.ndarray):
- if result.dtype.kind in ['i', 'f', 'O']:
- result = result.astype('M8[ns]')
- elif isinstance(result, (np.integer, np.float, np.datetime64)):
- result = tslibs.Timestamp(result, tz=self.values.tz)
- if isinstance(result, np.ndarray):
- # allow passing of > 1dim if its trivial
- if result.ndim > 1:
- result = result.reshape(np.prod(result.shape))
- # GH#24096 new values invalidates a frequency
- result = self._holder._simple_new(result, freq=None,
- tz=self.values.tz)
+ # GH12564: CategoricalBlock is 1-dim only
+ # while returned results could be any dim
+ if ((not is_categorical_dtype(result)) and
+ isinstance(result, np.ndarray)):
+ result = _block_shape(result, ndim=self.ndim)
return result
- @property
- def _box_func(self):
- return lambda x: tslibs.Timestamp(x, tz=self.dtype.tz)
+ def to_dense(self):
+ # Categorical.get_values returns a DatetimeIndex for datetime
+ # categories, so we can't simply use `np.asarray(self.values)` like
+ # other types.
+ return self.values.get_values()
- def diff(self, n, axis=0):
- """1st discrete difference
+ def to_native_types(self, slicer=None, na_rep='', quoting=None, **kwargs):
+ """ convert to our native types format, slicing if desired """
- Parameters
- ----------
- n : int, number of periods to diff
- axis : int, axis to diff upon. default 0
+ values = self.values
+ if slicer is not None:
+ # Categorical is always one dimension
+ values = values[slicer]
+ mask = isna(values)
+ values = np.array(values, dtype='object')
+ values[mask] = na_rep
- Return
- ------
- A list with a new TimeDeltaBlock.
+ # we are expected to return a 2-d ndarray
+ return values.reshape(1, len(values))
- Note
- ----
- The arguments here are mimicking shift so they are called correctly
- by apply.
+ def concat_same_type(self, to_concat, placement=None):
"""
- if axis == 0:
- # Cannot currently calculate diff across multiple blocks since this
- # function is invoked via apply
- raise NotImplementedError
- new_values = (self.values - self.shift(n, axis=axis)[0].values).asi8
+ Concatenate list of single blocks of the same type.
- # Reshape the new_values like how algos.diff does for timedelta data
- new_values = new_values.reshape(1, len(new_values))
- new_values = new_values.astype('timedelta64[ns]')
- return [TimeDeltaBlock(new_values, placement=self.mgr_locs.indexer)]
+ Note that this CategoricalBlock._concat_same_type *may* not
+ return a CategoricalBlock. When the categories in `to_concat`
+ differ, this will return an object ndarray.
- def concat_same_type(self, to_concat, placement=None):
- # need to handle concat([tz1, tz2]) here, since DatetimeArray
- # only handles cases where all the tzs are the same.
- # Instead of placing the condition here, it could also go into the
- # is_uniform_join_units check, but I'm not sure what is better.
- if len({x.dtype for x in to_concat}) > 1:
- values = _concat._concat_datetime([x.values for x in to_concat])
- placement = placement or slice(0, len(values), 1)
+ If / when we decide we don't like that behavior:
- if self.ndim > 1:
- values = np.atleast_2d(values)
- return ObjectBlock(values, ndim=self.ndim, placement=placement)
- return super(DatetimeTZBlock, self).concat_same_type(to_concat,
- placement)
+ 1. Change Categorical._concat_same_type to use union_categoricals
+ 2. Delete this method.
+ """
+ values = self._concatenator([blk.values for blk in to_concat],
+ axis=self.ndim - 1)
+ # not using self.make_block_same_class as values can be object dtype
+ return make_block(
+ values, placement=placement or slice(0, len(values), 1),
+ ndim=self.ndim)
- def fillna(self, value, limit=None, inplace=False, downcast=None):
- # We support filling a DatetimeTZ with a `value` whose timezone
- # is different by coercing to object.
+ def where(self, other, cond, align=True, errors='raise',
+ try_cast=False, axis=0, transpose=False):
+ # TODO(CategoricalBlock.where):
+ # This can all be deleted in favor of ExtensionBlock.where once
+ # we enforce the deprecation.
+ object_msg = (
+ "Implicitly converting categorical to object-dtype ndarray. "
+ "One or more of the values in 'other' are not present in this "
+ "categorical's categories. A future version of pandas will raise "
+ "a ValueError when 'other' contains different categories.\n\n"
+ "To preserve the current behavior, add the new categories to "
+ "the categorical before calling 'where', or convert the "
+ "categorical to a different dtype."
+ )
try:
- return super(DatetimeTZBlock, self).fillna(
- value, limit, inplace, downcast
- )
- except (ValueError, TypeError):
- # different timezones, or a non-tz
- return self.astype(object).fillna(
- value, limit=limit, inplace=inplace, downcast=downcast
+ # Attempt to do preserve categorical dtype.
+ result = super(CategoricalBlock, self).where(
+ other, cond, align, errors, try_cast, axis, transpose
)
-
- def setitem(self, indexer, value):
- # https://github.com/pandas-dev/pandas/issues/24020
- # Need a dedicated setitem until #24020 (type promotion in setitem
- # for extension arrays) is designed and implemented.
- try:
- return super(DatetimeTZBlock, self).setitem(indexer, value)
- except (ValueError, TypeError):
- newb = make_block(self.values.astype(object),
- placement=self.mgr_locs,
- klass=ObjectBlock,)
- return newb.setitem(indexer, value)
+ except (TypeError, ValueError):
+ warnings.warn(object_msg, FutureWarning, stacklevel=6)
+ result = self.astype(object).where(other, cond, align=align,
+ errors=errors,
+ try_cast=try_cast,
+ axis=axis, transpose=transpose)
+ return result
# -----------------------------------------------------------------
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24582 | 2019-01-03T02:53:43Z | 2019-01-03T03:45:14Z | 2019-01-03T03:45:14Z | 2019-01-03T03:46:39Z |
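The `_replace_single` path in the diff above builds a vectorized regex replacer over an object-dtype array: `np.vectorize(re_replacer, otypes=[self.dtype])` applies the substitution elementwise while letting non-string elements pass through via the `TypeError` fallback. A minimal standalone sketch of that idea (the `re_replacer` name comes from the diff; the pattern, replacement value, and sample array are illustrative, not pandas code):

```python
import re

import numpy as np

rx = re.compile(r"\d+")
value = "N"

def re_replacer(s):
    # Substitute regex matches in strings; non-string elements
    # (None, floats, ...) raise TypeError inside re and pass through.
    try:
        return rx.sub(value, s)
    except TypeError:
        return s

f = np.vectorize(re_replacer, otypes=[object])
arr = np.array(["a1", "b22", None, 3.5], dtype=object)
result = f(arr)
# → ["aN", "bN", None, 3.5]
```

Forcing `otypes=[object]` matters here: without it, `np.vectorize` would infer the output dtype from the first element and could mangle mixed results.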
Support hard-masked numpy arrays | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 7628c53cefa06..c9210a5597d48 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1537,6 +1537,7 @@ Missing
- Bug in :func:`Series.hasnans` that could be incorrectly cached and return incorrect answers if null elements are introduced after an initial call (:issue:`19700`)
- :func:`Series.isin` now treats all NaN-floats as equal also for ``np.object``-dtype. This behavior is consistent with the behavior for float64 (:issue:`22119`)
- :func:`unique` no longer mangles NaN-floats and the ``NaT``-object for ``np.object``-dtype, i.e. ``NaT`` is no longer coerced to a NaN-value and is treated as a different entity. (:issue:`22295`)
+- :func:`DataFrame` and :func:`Series` now properly handle numpy masked arrays with hardened masks. Previously, constructing a DataFrame or Series from a masked array with a hard mask would create a pandas object containing the underlying value, rather than the expected NaN. (:issue:`24574`)
MultiIndex
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index d6aa3117570af..76d3d704497b4 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -400,6 +400,7 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
mask = ma.getmaskarray(data)
if mask.any():
data, fill_value = maybe_upcast(data, copy=True)
+ data.soften_mask() # set hardmask False if it was True
data[mask] = fill_value
else:
data = data.copy()
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index b3c893c7d84be..446ad72ac4a53 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -547,6 +547,7 @@ def sanitize_array(data, index, dtype=None, copy=False,
mask = ma.getmaskarray(data)
if mask.any():
data, fill_value = maybe_upcast(data, copy=True)
+ data.soften_mask() # set hardmask False if it was True
data[mask] = fill_value
else:
data = data.copy()
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 8a5ec1a16d1df..c8b3f23db1492 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -757,6 +757,28 @@ def test_constructor_maskedarray_nonfloat(self):
assert frame['A'][1] is True
assert frame['C'][2] is False
+ def test_constructor_maskedarray_hardened(self):
+ # Check numpy masked arrays with hard masks -- from GH24574
+ mat_hard = ma.masked_all((2, 2), dtype=float).harden_mask()
+ result = pd.DataFrame(mat_hard, columns=['A', 'B'], index=[1, 2])
+ expected = pd.DataFrame({
+ 'A': [np.nan, np.nan],
+ 'B': [np.nan, np.nan]},
+ columns=['A', 'B'],
+ index=[1, 2],
+ dtype=float)
+ tm.assert_frame_equal(result, expected)
+ # Check case where mask is hard but no data are masked
+ mat_hard = ma.ones((2, 2), dtype=float).harden_mask()
+ result = pd.DataFrame(mat_hard, columns=['A', 'B'], index=[1, 2])
+ expected = pd.DataFrame({
+ 'A': [1.0, 1.0],
+ 'B': [1.0, 1.0]},
+ columns=['A', 'B'],
+ index=[1, 2],
+ dtype=float)
+ tm.assert_frame_equal(result, expected)
+
def test_constructor_mrecarray(self):
# Ensure mrecarray produces frame identical to dict of masked arrays
# from GH3479
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index f5a445e2cca9a..667065d09758b 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -451,6 +451,13 @@ def test_constructor_maskedarray(self):
datetime(2001, 1, 3)], index=index, dtype='M8[ns]')
assert_series_equal(result, expected)
+ def test_constructor_maskedarray_hardened(self):
+ # Check numpy masked arrays with hard masks -- from GH24574
+ data = ma.masked_all((3, ), dtype=float).harden_mask()
+ result = pd.Series(data)
+ expected = pd.Series([nan, nan, nan])
+ tm.assert_series_equal(result, expected)
+
def test_series_ctor_plus_datetimeindex(self):
rng = date_range('20090415', '20090519', freq='B')
data = {k: 1 for k in rng}
| - [x] closes #24574
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
For the whatsnew entry, is this reasonable?
```rst
- :func:`DataFrame` and :func:`Series` now properly handle
numpy masked arrays with hardened masks. Previously, constructing
a DataFrame or Series from a masked array with a hard mask would
create a pandas object containing the underlying value, rather than the
expected NaN. (:issue:`24574`)
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/24581 | 2019-01-03T02:42:49Z | 2019-01-04T00:24:23Z | 2019-01-04T00:24:22Z | 2019-01-04T00:39:26Z |
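The fix above hinges on a numpy behavior worth illustrating: with a hardened mask, item assignment to masked positions is silently ignored, so the pre-existing `data[mask] = fill_value` step in the constructors had no effect and the underlying value leaked through. A standalone sketch of the behavior and the `soften_mask()` remedy (plain numpy, not pandas code):

```python
import numpy as np
import numpy.ma as ma

# harden_mask() returns the array itself, so chaining works
data = ma.array([1.0, 2.0, 3.0], mask=[False, True, False]).harden_mask()
mask = ma.getmaskarray(data)

data[mask] = np.nan            # silently ignored: the mask is hard
leaked = float(data.data[1])   # underlying value is still 2.0

data.soften_mask()             # the fix: allow writes through the mask
data[mask] = np.nan
filled = float(data.data[1])   # now NaN, as the constructors expect
```

This is exactly the one-line change applied in both `DataFrame.__init__` and `sanitize_array`: soften the mask before filling masked slots with the upcast fill value.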
Fix import format at pandas/tests/io/plotting directory | diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index f41a3a10604af..4ca916a0aa4e4 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -1,25 +1,28 @@
#!/usr/bin/env python
# coding: utf-8
-import pytest
import os
import warnings
-from pandas import DataFrame, Series
-from pandas.compat import zip, iteritems
+import numpy as np
+from numpy import random
+import pytest
+
+from pandas.compat import iteritems, zip
from pandas.util._decorators import cache_readonly
-from pandas.core.dtypes.api import is_list_like
-import pandas.util.testing as tm
-from pandas.util.testing import (ensure_clean,
- assert_is_valid_plot_return_object)
import pandas.util._test_decorators as td
-import numpy as np
-from numpy import random
+from pandas.core.dtypes.api import is_list_like
+
+from pandas import DataFrame, Series
+import pandas.util.testing as tm
+from pandas.util.testing import (
+ assert_is_valid_plot_return_object, ensure_clean)
import pandas.plotting as plotting
from pandas.plotting._tools import _flatten
+
"""
This is a common base class used for various plotting tests
"""
diff --git a/pandas/tests/plotting/test_boxplot_method.py b/pandas/tests/plotting/test_boxplot_method.py
index e89584ca35d94..7d721c7de3398 100644
--- a/pandas/tests/plotting/test_boxplot_method.py
+++ b/pandas/tests/plotting/test_boxplot_method.py
@@ -1,21 +1,20 @@
# coding: utf-8
-import pytest
import itertools
import string
-from pandas import Series, DataFrame, MultiIndex
-from pandas.compat import range, lzip
-import pandas.util.testing as tm
-import pandas.util._test_decorators as td
-
import numpy as np
from numpy import random
+import pytest
-import pandas.plotting as plotting
+from pandas.compat import lzip, range
+import pandas.util._test_decorators as td
-from pandas.tests.plotting.common import (TestPlotBase, _check_plot_works)
+from pandas import DataFrame, MultiIndex, Series
+from pandas.tests.plotting.common import TestPlotBase, _check_plot_works
+import pandas.util.testing as tm
+import pandas.plotting as plotting
""" Test cases for .boxplot method """
diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py
index eed3679c5bc8c..01aa8e8ccc1ee 100644
--- a/pandas/tests/plotting/test_converter.py
+++ b/pandas/tests/plotting/test_converter.py
@@ -1,19 +1,22 @@
+from datetime import date, datetime
import subprocess
import sys
-import pytest
-from datetime import datetime, date
import numpy as np
-from pandas import Timestamp, Period, Index, date_range, Series
+import pytest
+
from pandas.compat import u
+from pandas.compat.numpy import np_datetime64_compat
+
+from pandas import Index, Period, Series, Timestamp, date_range
import pandas.core.config as cf
import pandas.util.testing as tm
-from pandas.tseries.offsets import Second, Milli, Micro, Day
-from pandas.compat.numpy import np_datetime64_compat
+
+from pandas.tseries.offsets import Day, Micro, Milli, Second
converter = pytest.importorskip('pandas.plotting._converter')
-from pandas.plotting import (register_matplotlib_converters,
- deregister_matplotlib_converters)
+from pandas.plotting import (deregister_matplotlib_converters, # isort:skip
+ register_matplotlib_converters)
def test_timtetonum_accepts_unicode():
diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index 7a28f05514dd5..c78ab41d2fae4 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -1,26 +1,25 @@
""" Test cases for time series specific (freq conversion, etc) """
-import sys
-from datetime import datetime, timedelta, date, time
+from datetime import date, datetime, time, timedelta
import pickle
+import sys
+import numpy as np
import pytest
-from pandas.compat import lrange, zip
-import numpy as np
-from pandas import Index, Series, DataFrame, NaT, isna
-from pandas.compat import PY3
-from pandas.core.indexes.datetimes import date_range, bdate_range
+from pandas.compat import PY3, lrange, zip
+import pandas.util._test_decorators as td
+
+from pandas import DataFrame, Index, NaT, Series, isna
+from pandas.core.indexes.datetimes import bdate_range, date_range
+from pandas.core.indexes.period import Period, PeriodIndex, period_range
from pandas.core.indexes.timedeltas import timedelta_range
-from pandas.tseries.offsets import DateOffset
-from pandas.core.indexes.period import period_range, Period, PeriodIndex
from pandas.core.resample import DatetimeIndex
-
-from pandas.util.testing import assert_series_equal, ensure_clean
+from pandas.tests.plotting.common import (
+ TestPlotBase, _skip_if_no_scipy_gaussian_kde)
import pandas.util.testing as tm
-import pandas.util._test_decorators as td
+from pandas.util.testing import assert_series_equal, ensure_clean
-from pandas.tests.plotting.common import (TestPlotBase,
- _skip_if_no_scipy_gaussian_kde)
+from pandas.tseries.offsets import DateOffset
@td.skip_if_no_mpl
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index cc52130a10b2e..436ccef48ae12 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -2,28 +2,29 @@
""" Test cases for DataFrame.plot """
-import pytest
+from datetime import date, datetime
import string
import warnings
-from datetime import datetime, date
+import numpy as np
+from numpy.random import rand, randn
+import pytest
-import pandas as pd
-from pandas import (Series, DataFrame, MultiIndex, PeriodIndex, date_range,
- bdate_range)
-from pandas.core.dtypes.api import is_list_like
-from pandas.compat import range, lrange, lmap, lzip, u, zip, PY3
-from pandas.io.formats.printing import pprint_thing
-import pandas.util.testing as tm
+from pandas.compat import PY3, lmap, lrange, lzip, range, u, zip
import pandas.util._test_decorators as td
-import numpy as np
-from numpy.random import rand, randn
+from pandas.core.dtypes.api import is_list_like
+import pandas as pd
+from pandas import (
+ DataFrame, MultiIndex, PeriodIndex, Series, bdate_range, date_range)
+from pandas.tests.plotting.common import (
+ TestPlotBase, _check_plot_works, _ok_for_gaussian_kde,
+ _skip_if_no_scipy_gaussian_kde)
+import pandas.util.testing as tm
+
+from pandas.io.formats.printing import pprint_thing
import pandas.plotting as plotting
-from pandas.tests.plotting.common import (TestPlotBase, _check_plot_works,
- _skip_if_no_scipy_gaussian_kde,
- _ok_for_gaussian_kde)
@td.skip_if_no_mpl
diff --git a/pandas/tests/plotting/test_groupby.py b/pandas/tests/plotting/test_groupby.py
index a7c99a06c34e9..5a5ee75928c97 100644
--- a/pandas/tests/plotting/test_groupby.py
+++ b/pandas/tests/plotting/test_groupby.py
@@ -3,13 +3,13 @@
""" Test cases for GroupBy.plot """
-from pandas import Series, DataFrame
-import pandas.util.testing as tm
-import pandas.util._test_decorators as td
-
import numpy as np
+import pandas.util._test_decorators as td
+
+from pandas import DataFrame, Series
from pandas.tests.plotting.common import TestPlotBase
+import pandas.util.testing as tm
@td.skip_if_no_mpl
diff --git a/pandas/tests/plotting/test_hist_method.py b/pandas/tests/plotting/test_hist_method.py
index 1d9942603a269..7bdbdac54f7a6 100644
--- a/pandas/tests/plotting/test_hist_method.py
+++ b/pandas/tests/plotting/test_hist_method.py
@@ -2,18 +2,18 @@
""" Test cases for .hist method """
+import numpy as np
+from numpy.random import randn
import pytest
-from pandas import Series, DataFrame
-import pandas.util.testing as tm
import pandas.util._test_decorators as td
-import numpy as np
-from numpy.random import randn
+from pandas import DataFrame, Series
+from pandas.tests.plotting.common import TestPlotBase, _check_plot_works
+import pandas.util.testing as tm
-from pandas.plotting._core import grouped_hist
from pandas.plotting._compat import _mpl_ge_2_2_0
-from pandas.tests.plotting.common import (TestPlotBase, _check_plot_works)
+from pandas.plotting._core import grouped_hist
@td.skip_if_no_mpl
diff --git a/pandas/tests/plotting/test_misc.py b/pandas/tests/plotting/test_misc.py
index de9e2a16cd15e..44b95f7d1b00b 100644
--- a/pandas/tests/plotting/test_misc.py
+++ b/pandas/tests/plotting/test_misc.py
@@ -2,19 +2,19 @@
""" Test cases for misc plot functions """
+import numpy as np
+from numpy import random
+from numpy.random import randn
import pytest
-from pandas import DataFrame
from pandas.compat import lmap
-import pandas.util.testing as tm
import pandas.util._test_decorators as td
-import numpy as np
-from numpy import random
-from numpy.random import randn
+from pandas import DataFrame
+from pandas.tests.plotting.common import TestPlotBase, _check_plot_works
+import pandas.util.testing as tm
import pandas.plotting as plotting
-from pandas.tests.plotting.common import TestPlotBase, _check_plot_works
@td.skip_if_mpl
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index b857979005f5e..39f8f2f44fda0 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -3,24 +3,24 @@
""" Test cases for Series.plot """
+from datetime import datetime
from itertools import chain
+
+import numpy as np
+from numpy.random import randn
import pytest
-from datetime import datetime
+from pandas.compat import lrange, range
+import pandas.util._test_decorators as td
import pandas as pd
-from pandas import Series, DataFrame, date_range
-from pandas.compat import range, lrange
+from pandas import DataFrame, Series, date_range
+from pandas.tests.plotting.common import (
+ TestPlotBase, _check_plot_works, _ok_for_gaussian_kde,
+ _skip_if_no_scipy_gaussian_kde)
import pandas.util.testing as tm
-import pandas.util._test_decorators as td
-
-import numpy as np
-from numpy.random import randn
import pandas.plotting as plotting
-from pandas.tests.plotting.common import (TestPlotBase, _check_plot_works,
- _skip_if_no_scipy_gaussian_kde,
- _ok_for_gaussian_kde)
@td.skip_if_no_mpl
diff --git a/setup.cfg b/setup.cfg
index a1c82304c5a72..6c076eed580dd 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -122,16 +122,6 @@ skip=
pandas/tests/api/test_api.py,
pandas/tests/tools/test_numeric.py,
pandas/tests/internals/test_internals.py,
- pandas/tests/plotting/test_datetimelike.py,
- pandas/tests/plotting/test_series.py,
- pandas/tests/plotting/test_groupby.py,
- pandas/tests/plotting/test_converter.py,
- pandas/tests/plotting/test_misc.py,
- pandas/tests/plotting/test_frame.py,
- pandas/tests/plotting/test_hist_method.py,
- pandas/tests/plotting/common.py,
- pandas/tests/plotting/test_boxplot_method.py,
- pandas/tests/plotting/test_deprecated.py,
pandas/tests/extension/test_sparse.py,
pandas/tests/extension/base/reduce.py,
pandas/tests/computation/test_compat.py,
| - [x] partial #23334
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Ran `isort --recursive pandas/tests/plotting` and then checked imports using `isort --recursive --check-only pandas/tests/plotting` | https://api.github.com/repos/pandas-dev/pandas/pulls/24580 | 2019-01-03T02:16:38Z | 2019-01-04T12:13:13Z | 2019-01-04T12:13:13Z | 2019-01-04T12:13:13Z |
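The import reorderings in these diffs follow isort's rule of sorting the lines of one import section case-insensitively by module path. A minimal illustrative sketch of that ordering (not isort's actual implementation, which also splits stdlib/third-party/local sections):

```python
def sort_import_block(lines):
    """Sort one block of import lines by module path, isort-style.

    Both "import x" and "from x import y" carry the module path at
    position 1 after splitting on whitespace.
    """
    return sorted(lines, key=lambda line: line.strip().split()[1].lower())


# The pre-fix order from the diffs above...
block = [
    "import pytest",
    "import numpy as np",
    "from numpy.random import randn",
]
# ...comes out in the post-fix order: numpy, numpy.random, pytest.
print(sort_import_block(block))
```

This matches the pattern visible in the hunks above, where `import numpy as np` and `from numpy.random import randn` move ahead of `import pytest`.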
Fix import format in the pandas/tests/arithmetic directory | diff --git a/pandas/tests/arithmetic/conftest.py b/pandas/tests/arithmetic/conftest.py
index 44e6cc664de6d..671fe69750c57 100644
--- a/pandas/tests/arithmetic/conftest.py
+++ b/pandas/tests/arithmetic/conftest.py
@@ -1,16 +1,16 @@
# -*- coding: utf-8 -*-
-import pytest
-
import numpy as np
-import pandas as pd
+import pytest
from pandas.compat import long
-import pandas.util.testing as tm
+import pandas as pd
+import pandas.util.testing as tm
# ------------------------------------------------------------------
# Helper Functions
+
def id_func(x):
if isinstance(x, tuple):
assert len(x) == 2
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index f5c4808a09123..7d01d39ae6bb5 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -2,29 +2,26 @@
# Arithmetic tests for DataFrame/Series/Index/Array classes that should
# behave identically.
# Specifically for datetime64 and datetime64tz dtypes
-import operator
from datetime import datetime, timedelta
-import warnings
from itertools import product, starmap
+import operator
+import warnings
import numpy as np
import pytest
import pytz
-import pandas as pd
-import pandas.util.testing as tm
-
-from pandas.compat.numpy import np_datetime64_compat
-from pandas.errors import PerformanceWarning, NullFrequencyError
-
from pandas._libs.tslibs.conversion import localize_pydatetime
from pandas._libs.tslibs.offsets import shift_months
+from pandas.compat.numpy import np_datetime64_compat
+from pandas.errors import NullFrequencyError, PerformanceWarning
-from pandas.core.indexes.datetimes import _to_M8
-
+import pandas as pd
from pandas import (
- Timestamp, Timedelta, Period, Series, date_range, NaT,
- DatetimeIndex, TimedeltaIndex)
+ DatetimeIndex, NaT, Period, Series, Timedelta, TimedeltaIndex, Timestamp,
+ date_range)
+from pandas.core.indexes.datetimes import _to_M8
+import pandas.util.testing as tm
def assert_all(obj):
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index c603485f6f076..7afb90978131d 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -6,20 +6,20 @@
from itertools import combinations
import operator
-import pytest
import numpy as np
-
-import pandas as pd
-import pandas.util.testing as tm
+import pytest
from pandas.compat import PY3, Iterable
-from pandas.core import ops
-from pandas import Timedelta, Series, Index, TimedeltaIndex
+import pandas as pd
+from pandas import Index, Series, Timedelta, TimedeltaIndex
+from pandas.core import ops
+import pandas.util.testing as tm
# ------------------------------------------------------------------
# Comparisons
+
class TestNumericComparisons(object):
def test_operator_series_comparison_zerorank(self):
# GH#13006
diff --git a/pandas/tests/arithmetic/test_object.py b/pandas/tests/arithmetic/test_object.py
index e9a3f4accc486..9917c45ef6d12 100644
--- a/pandas/tests/arithmetic/test_object.py
+++ b/pandas/tests/arithmetic/test_object.py
@@ -4,19 +4,18 @@
# Specifically for object dtype
import operator
-import pytest
import numpy as np
+import pytest
import pandas as pd
-import pandas.util.testing as tm
-from pandas.core import ops
-
from pandas import Series, Timestamp
-
+from pandas.core import ops
+import pandas.util.testing as tm
# ------------------------------------------------------------------
# Comparisons
+
class TestObjectComparisons(object):
def test_comparison_object_numeric_nas(self):
diff --git a/pandas/tests/arithmetic/test_period.py b/pandas/tests/arithmetic/test_period.py
index 469353042a878..cdacd4b42d683 100644
--- a/pandas/tests/arithmetic/test_period.py
+++ b/pandas/tests/arithmetic/test_period.py
@@ -7,20 +7,20 @@
import numpy as np
import pytest
-import pandas as pd
-import pandas.util.testing as tm
-
from pandas._libs.tslibs.period import IncompatibleFrequency
from pandas.errors import PerformanceWarning
+import pandas as pd
+from pandas import Period, PeriodIndex, Series, period_range
from pandas.core import ops
-from pandas import Period, PeriodIndex, period_range, Series
-from pandas.tseries.frequencies import to_offset
+import pandas.util.testing as tm
+from pandas.tseries.frequencies import to_offset
# ------------------------------------------------------------------
# Comparisons
+
class TestPeriodIndexComparisons(object):
@pytest.mark.parametrize("other", ["2017", 2017])
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index 12ed174d6cc53..4474b06b19536 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -3,17 +3,16 @@
# behave identically.
from datetime import datetime, timedelta
-import pytest
import numpy as np
-
-import pandas as pd
-import pandas.util.testing as tm
+import pytest
from pandas.errors import NullFrequencyError, PerformanceWarning
+
+import pandas as pd
from pandas import (
- timedelta_range,
- Timedelta, Timestamp, NaT, Series, TimedeltaIndex, DatetimeIndex,
- DataFrame)
+ DataFrame, DatetimeIndex, NaT, Series, Timedelta, TimedeltaIndex,
+ Timestamp, timedelta_range)
+import pandas.util.testing as tm
def get_upcast_box(box, vector):
diff --git a/setup.cfg b/setup.cfg
index c21f09f131dbd..a1c82304c5a72 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -121,12 +121,6 @@ skip=
pandas/tests/api/test_types.py,
pandas/tests/api/test_api.py,
pandas/tests/tools/test_numeric.py,
- pandas/tests/arithmetic/test_numeric.py,
- pandas/tests/arithmetic/test_object.py,
- pandas/tests/arithmetic/test_period.py,
- pandas/tests/arithmetic/test_datetime64.py,
- pandas/tests/arithmetic/conftest.py,
- pandas/tests/arithmetic/test_timedelta64.py,
pandas/tests/internals/test_internals.py,
pandas/tests/plotting/test_datetimelike.py,
pandas/tests/plotting/test_series.py,
| - [x] partial #23334
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Ran `isort --recursive pandas/tests/arithmetic` and then checked imports using `isort --recursive --check-only pandas/tests/arithmetic` | https://api.github.com/repos/pandas-dev/pandas/pulls/24579 | 2019-01-03T02:12:42Z | 2019-01-04T00:39:37Z | 2019-01-04T00:39:37Z | 2019-01-04T00:39:40Z |
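The `isort --check-only` step mentioned in these PR bodies verifies order without rewriting anything: it compares each file's imports to the sorted form and fails if they differ. A minimal illustrative sketch of that check (not isort's real algorithm):

```python
def imports_are_sorted(lines):
    """Return True if an import block is already in sorted order.

    Mirrors the idea behind `isort --check-only`: compute the sorted
    form and compare it to what is on disk, changing nothing.
    """
    module_key = lambda line: line.strip().split()[1].lower()
    return lines == sorted(lines, key=module_key)


# The pre-fix order would fail the check...
before = ["import pytest", "import numpy as np"]
# ...while the post-fix order passes it.
after = ["import numpy as np", "import pytest"]
print(imports_are_sorted(before), imports_are_sorted(after))  # prints: False True
```

In CI, the real tool signals the same pass/fail result through its exit code, which is what lets `setup.cfg` shrink its `skip=` list as each directory is brought into compliance.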
Fix import format in the pandas/tests/dtypes directory | diff --git a/pandas/tests/dtypes/test_cast.py b/pandas/tests/dtypes/test_cast.py
index fcdcf96098f16..871e71ea2e4b0 100644
--- a/pandas/tests/dtypes/test_cast.py
+++ b/pandas/tests/dtypes/test_cast.py
@@ -5,30 +5,24 @@
"""
-import pytest
-from datetime import datetime, timedelta, date
-import numpy as np
+from datetime import date, datetime, timedelta
-import pandas as pd
-from pandas import (Timedelta, Timestamp, DatetimeIndex,
- DataFrame, NaT, Period, Series)
+import numpy as np
+import pytest
from pandas.core.dtypes.cast import (
- maybe_downcast_to_dtype,
- maybe_convert_objects,
- cast_scalar_to_array,
- infer_dtype_from_scalar,
- infer_dtype_from_array,
- find_common_type,
- construct_1d_object_array_from_listlike,
+ cast_scalar_to_array, construct_1d_arraylike_from_scalar,
construct_1d_ndarray_preserving_na,
- construct_1d_arraylike_from_scalar)
+ construct_1d_object_array_from_listlike, find_common_type,
+ infer_dtype_from_array, infer_dtype_from_scalar, maybe_convert_objects,
+ maybe_downcast_to_dtype)
+from pandas.core.dtypes.common import is_dtype_equal
from pandas.core.dtypes.dtypes import (
- CategoricalDtype,
- DatetimeTZDtype,
- PeriodDtype)
-from pandas.core.dtypes.common import (
- is_dtype_equal)
+ CategoricalDtype, DatetimeTZDtype, PeriodDtype)
+
+import pandas as pd
+from pandas import (
+ DataFrame, DatetimeIndex, NaT, Period, Series, Timedelta, Timestamp)
from pandas.util import testing as tm
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 2d6d3101f7371..5fcf19b0b12e7 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -1,15 +1,16 @@
# -*- coding: utf-8 -*-
-import pytest
import numpy as np
-import pandas as pd
+import pytest
-from pandas.core.dtypes.dtypes import (DatetimeTZDtype, PeriodDtype,
- CategoricalDtype, IntervalDtype)
-from pandas.core.sparse.api import SparseDtype
+import pandas.util._test_decorators as td
import pandas.core.dtypes.common as com
-import pandas.util._test_decorators as td
+from pandas.core.dtypes.dtypes import (
+ CategoricalDtype, DatetimeTZDtype, IntervalDtype, PeriodDtype)
+
+import pandas as pd
+from pandas.core.sparse.api import SparseDtype
import pandas.util.testing as tm
diff --git a/pandas/tests/dtypes/test_concat.py b/pandas/tests/dtypes/test_concat.py
index 35623415571c0..d58f8ee3b74f1 100644
--- a/pandas/tests/dtypes/test_concat.py
+++ b/pandas/tests/dtypes/test_concat.py
@@ -1,9 +1,11 @@
# -*- coding: utf-8 -*-
import pytest
+
import pandas.core.dtypes.concat as _concat
+
from pandas import (
- Index, DatetimeIndex, PeriodIndex, TimedeltaIndex, Series, Period)
+ DatetimeIndex, Index, Period, PeriodIndex, Series, TimedeltaIndex)
@pytest.mark.parametrize('to_concat, expected', [
diff --git a/pandas/tests/dtypes/test_dtypes.py b/pandas/tests/dtypes/test_dtypes.py
index aa29473ddf130..ab52a8a81385c 100644
--- a/pandas/tests/dtypes/test_dtypes.py
+++ b/pandas/tests/dtypes/test_dtypes.py
@@ -1,24 +1,20 @@
# -*- coding: utf-8 -*-
import re
-import pytest
import numpy as np
-import pandas as pd
-from pandas import (
- Series, Categorical, CategoricalIndex, IntervalIndex, date_range)
+import pytest
-from pandas.core.dtypes.dtypes import (
- DatetimeTZDtype, PeriodDtype,
- IntervalDtype, CategoricalDtype, registry)
from pandas.core.dtypes.common import (
- is_categorical_dtype, is_categorical,
- is_datetime64tz_dtype, is_datetimetz,
- is_period_dtype, is_period,
- is_dtype_equal, is_datetime64_ns_dtype,
- is_datetime64_dtype, is_interval_dtype,
- is_datetime64_any_dtype, is_string_dtype,
- is_bool_dtype,
-)
+ is_bool_dtype, is_categorical, is_categorical_dtype,
+ is_datetime64_any_dtype, is_datetime64_dtype, is_datetime64_ns_dtype,
+ is_datetime64tz_dtype, is_datetimetz, is_dtype_equal, is_interval_dtype,
+ is_period, is_period_dtype, is_string_dtype)
+from pandas.core.dtypes.dtypes import (
+ CategoricalDtype, DatetimeTZDtype, IntervalDtype, PeriodDtype, registry)
+
+import pandas as pd
+from pandas import (
+ Categorical, CategoricalIndex, IntervalIndex, Series, date_range)
from pandas.core.sparse.api import SparseDtype
import pandas.util.testing as tm
diff --git a/pandas/tests/dtypes/test_generic.py b/pandas/tests/dtypes/test_generic.py
index 53fa482bdeaef..96f92fccc5a71 100644
--- a/pandas/tests/dtypes/test_generic.py
+++ b/pandas/tests/dtypes/test_generic.py
@@ -1,9 +1,12 @@
# -*- coding: utf-8 -*-
from warnings import catch_warnings, simplefilter
+
import numpy as np
-import pandas as pd
+
from pandas.core.dtypes import generic as gt
+
+import pandas as pd
from pandas.util import testing as tm
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index fff91991ee251..cc2aa64b98c8b 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -5,42 +5,34 @@
related to inference and not otherwise tested in types/test_common.py
"""
-from warnings import catch_warnings, simplefilter
import collections
-import re
-from datetime import datetime, date, timedelta, time
+from datetime import date, datetime, time, timedelta
from decimal import Decimal
-from numbers import Number
from fractions import Fraction
+from numbers import Number
+import re
+from warnings import catch_warnings, simplefilter
+
import numpy as np
-import pytz
import pytest
-import pandas as pd
-from pandas._libs import lib, iNaT, missing as libmissing
-from pandas import (Series, Index, DataFrame, Timedelta,
- DatetimeIndex, TimedeltaIndex, Timestamp,
- Panel, Period, Categorical, isna, Interval,
- DateOffset)
-from pandas import compat
-from pandas.compat import u, PY2, StringIO, lrange
+import pytz
+
+from pandas._libs import iNaT, lib, missing as libmissing
+from pandas.compat import PY2, StringIO, lrange, u
+import pandas.util._test_decorators as td
+
from pandas.core.dtypes import inference
from pandas.core.dtypes.common import (
- is_timedelta64_dtype,
- is_timedelta64_ns_dtype,
- is_datetime64_dtype,
- is_datetime64_ns_dtype,
- is_datetime64_any_dtype,
- is_datetime64tz_dtype,
- is_number,
- is_integer,
- is_float,
- is_bool,
- is_scalar,
- is_scipy_sparse,
- ensure_int32,
- ensure_categorical)
+ ensure_categorical, ensure_int32, is_bool, is_datetime64_any_dtype,
+ is_datetime64_dtype, is_datetime64_ns_dtype, is_datetime64tz_dtype,
+ is_float, is_integer, is_number, is_scalar, is_scipy_sparse,
+ is_timedelta64_dtype, is_timedelta64_ns_dtype)
+
+import pandas as pd
+from pandas import (
+ Categorical, DataFrame, DateOffset, DatetimeIndex, Index, Interval, Panel,
+ Period, Series, Timedelta, TimedeltaIndex, Timestamp, compat, isna)
from pandas.util import testing as tm
-import pandas.util._test_decorators as td
@pytest.fixture(params=[True, False], ids=str)
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index cb3f5933c885f..56c9395d0f802 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -1,25 +1,26 @@
# -*- coding: utf-8 -*-
-import pytest
-from warnings import catch_warnings, simplefilter
-import numpy as np
from datetime import datetime
-from pandas.util import testing as tm
+from warnings import catch_warnings, simplefilter
-import pandas as pd
-from pandas.core import config as cf
-from pandas.compat import u
+import numpy as np
+import pytest
from pandas._libs import missing as libmissing
from pandas._libs.tslib import iNaT
-from pandas import (NaT, Float64Index, Series,
- DatetimeIndex, TimedeltaIndex, date_range)
+from pandas.compat import u
+
from pandas.core.dtypes.common import is_scalar
from pandas.core.dtypes.dtypes import (
- DatetimeTZDtype, PeriodDtype, IntervalDtype)
+ DatetimeTZDtype, IntervalDtype, PeriodDtype)
from pandas.core.dtypes.missing import (
- array_equivalent, isna, notna, isnull, notnull,
- na_value_for_dtype)
+ array_equivalent, isna, isnull, na_value_for_dtype, notna, notnull)
+
+import pandas as pd
+from pandas import (
+ DatetimeIndex, Float64Index, NaT, Series, TimedeltaIndex, date_range)
+from pandas.core import config as cf
+from pandas.util import testing as tm
@pytest.mark.parametrize('notna_f', [notna, notnull])
diff --git a/setup.cfg b/setup.cfg
index 032a41df90f83..c21f09f131dbd 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -121,13 +121,6 @@ skip=
pandas/tests/api/test_types.py,
pandas/tests/api/test_api.py,
pandas/tests/tools/test_numeric.py,
- pandas/tests/dtypes/test_concat.py,
- pandas/tests/dtypes/test_generic.py,
- pandas/tests/dtypes/test_common.py,
- pandas/tests/dtypes/test_cast.py,
- pandas/tests/dtypes/test_dtypes.py,
- pandas/tests/dtypes/test_inference.py,
- pandas/tests/dtypes/test_missing.py,
pandas/tests/arithmetic/test_numeric.py,
pandas/tests/arithmetic/test_object.py,
pandas/tests/arithmetic/test_period.py,
| - [x] partial #23334
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Ran `isort --recursive pandas/tests/dtypes` and then checked imports using `isort --recursive --check-only pandas/tests/dtypes` | https://api.github.com/repos/pandas-dev/pandas/pulls/24578 | 2019-01-03T02:09:08Z | 2019-01-03T02:57:21Z | 2019-01-03T02:57:21Z | 2019-01-03T02:57:23Z |
DTA Followups - remove redundant methods | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 517c80619baea..3ca660b906f73 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -19,12 +19,11 @@
from pandas.util._validators import validate_fillna_kwargs
from pandas.core.dtypes.common import (
- is_bool_dtype, is_categorical_dtype, is_datetime64_any_dtype,
- is_datetime64_dtype, is_datetime64tz_dtype, is_datetime_or_timedelta_dtype,
- is_dtype_equal, is_extension_array_dtype, is_float_dtype, is_integer_dtype,
- is_list_like, is_object_dtype, is_offsetlike, is_period_dtype,
- is_string_dtype, is_timedelta64_dtype, is_unsigned_integer_dtype,
- needs_i8_conversion, pandas_dtype)
+ is_categorical_dtype, is_datetime64_any_dtype, is_datetime64_dtype,
+ is_datetime64tz_dtype, is_datetime_or_timedelta_dtype, is_dtype_equal,
+ is_extension_array_dtype, is_float_dtype, is_integer_dtype, is_list_like,
+ is_object_dtype, is_offsetlike, is_period_dtype, is_string_dtype,
+ is_timedelta64_dtype, is_unsigned_integer_dtype, pandas_dtype)
from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
from pandas.core.dtypes.inference import is_array_like
from pandas.core.dtypes.missing import isna
@@ -40,32 +39,6 @@
from .base import ExtensionArray, ExtensionOpsMixin
-def _make_comparison_op(cls, op):
- # TODO: share code with indexes.base version? Main difference is that
- # the block for MultiIndex was removed here.
- def cmp_method(self, other):
- if isinstance(other, ABCDataFrame):
- return NotImplemented
-
- if needs_i8_conversion(self) and needs_i8_conversion(other):
- # we may need to directly compare underlying
- # representations
- return self._evaluate_compare(other, op)
-
- # numpy will show a DeprecationWarning on invalid elementwise
- # comparisons, this will raise in the future
- with warnings.catch_warnings(record=True):
- warnings.filterwarnings("ignore", "elementwise", FutureWarning)
- with np.errstate(all='ignore'):
- result = op(self._data, np.asarray(other))
-
- return result
-
- name = '__{name}__'.format(name=op.__name__)
- # TODO: docstring?
- return compat.set_function_name(cmp_method, name, cls)
-
-
class AttributesMixin(object):
@property
@@ -1358,41 +1331,6 @@ def __isub__(self, other):
# --------------------------------------------------------------
# Comparison Methods
- # Called by _add_comparison_methods defined in ExtensionOpsMixin
- _create_comparison_method = classmethod(_make_comparison_op)
-
- def _evaluate_compare(self, other, op):
- """
- We have been called because a comparison between
- 8 aware arrays. numpy will warn about NaT comparisons
- """
- # Called by comparison methods when comparing datetimelike
- # with datetimelike
-
- if not isinstance(other, type(self)):
- # coerce to a similar object
- if not is_list_like(other):
- # scalar
- other = [other]
- elif lib.is_scalar(lib.item_from_zerodim(other)):
- # ndarray scalar
- other = [other.item()]
- other = type(self)._from_sequence(other)
-
- # compare
- result = op(self.asi8, other.asi8)
-
- # technically we could support bool dtyped Index
- # for now just return the indexing array directly
- mask = (self._isnan) | (other._isnan)
-
- filler = iNaT
- if is_bool_dtype(result):
- filler = False
-
- result[mask] = filler
- return result
-
def _ensure_localized(self, arg, ambiguous='raise', nonexistent='raise',
from_utc=False):
"""
@@ -1493,9 +1431,6 @@ def max(self, axis=None, skipna=True, *args, **kwargs):
return self._box_func(result)
-DatetimeLikeArrayMixin._add_comparison_ops()
-
-
# -------------------------------------------------------------------
# Shared Constructor Helpers
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index ea2742c5808a3..f5903e19d2c45 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -34,7 +34,7 @@
_midnight = time(0, 0)
-def _to_m8(key, tz=None):
+def _to_M8(key, tz=None):
"""
Timestamp-like => dt64
"""
@@ -96,7 +96,6 @@ def _dt_array_cmp(cls, op):
nat_result = True if opname == '__ne__' else False
def wrapper(self, other):
- meth = getattr(dtl.DatetimeLikeArrayMixin, opname)
# TODO: return NotImplemented for Series / Index and let pandas unbox
# Right now, returning NotImplemented for Index fails because we
# go into the index implementation, which may be a bug?
@@ -109,7 +108,7 @@ def wrapper(self, other):
self._assert_tzawareness_compat(other)
try:
- other = _to_m8(other, tz=self.tz)
+ other = _to_M8(other, tz=self.tz)
except ValueError:
# string that cannot be parsed to Timestamp
return ops.invalid_comparison(self, other, op)
@@ -158,7 +157,7 @@ def wrapper(self, other):
# or an object-dtype ndarray
other = type(self)._from_sequence(other)
- result = meth(self, other)
+ result = op(self.view('i8'), other.view('i8'))
o_mask = other._isnan
result = com.values_from_object(result)
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index b747e2b6b096b..6a7225acfefbf 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -36,18 +36,6 @@
_BAD_DTYPE = "dtype {dtype} cannot be converted to timedelta64[ns]"
-def _to_m8(key):
- """
- Timedelta-like => dt64
- """
- if not isinstance(key, Timedelta):
- # this also converts strings
- key = Timedelta(key)
-
- # return an type that can be compared
- return np.int64(key.value).view(_TD_DTYPE)
-
-
def _is_convertible_to_td(key):
return isinstance(key, (Tick, timedelta,
np.timedelta64, compat.string_types))
@@ -75,17 +63,15 @@ def _td_array_cmp(cls, op):
opname = '__{name}__'.format(name=op.__name__)
nat_result = True if opname == '__ne__' else False
- meth = getattr(dtl.DatetimeLikeArrayMixin, opname)
-
def wrapper(self, other):
if _is_convertible_to_td(other) or other is NaT:
try:
- other = _to_m8(other)
+ other = Timedelta(other)
except ValueError:
# failed to parse as timedelta
return ops.invalid_comparison(self, other, op)
- result = meth(self, other)
+ result = op(self.view('i8'), other.value)
if isna(other):
result.fill(nat_result)
@@ -101,7 +87,7 @@ def wrapper(self, other):
except (ValueError, TypeError):
return ops.invalid_comparison(self, other, op)
- result = meth(self, other)
+ result = op(self.view('i8'), other.view('i8'))
result = com.values_from_object(result)
o_mask = np.array(isna(other))
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 5547266ea6bab..cfca5d1b7d2cc 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -13,8 +13,8 @@
from pandas.util._decorators import Appender, cache_readonly, deprecate_kwarg
from pandas.core.dtypes.common import (
- ensure_int64, is_bool_dtype, is_dtype_equal, is_float, is_integer,
- is_list_like, is_period_dtype, is_scalar)
+ ensure_int64, is_dtype_equal, is_float, is_integer, is_list_like,
+ is_period_dtype, is_scalar)
from pandas.core.dtypes.generic import ABCIndex, ABCIndexClass, ABCSeries
from pandas.core import algorithms, ops
@@ -191,16 +191,6 @@ def wrapper(left, right):
return wrapper
- @Appender(DatetimeLikeArrayMixin._evaluate_compare.__doc__)
- def _evaluate_compare(self, other, op):
- result = self._eadata._evaluate_compare(other, op)
- if is_bool_dtype(result):
- return result
- try:
- return Index(result)
- except TypeError:
- return result
-
def _ensure_localized(self, arg, ambiguous='raise', nonexistent='raise',
from_utc=False):
# See DatetimeLikeArrayMixin._ensure_localized.__doc__
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 6d9829d4ef659..7d901f4656731 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -22,7 +22,7 @@
from pandas.core.accessor import delegate_names
from pandas.core.arrays.datetimes import (
- DatetimeArrayMixin as DatetimeArray, _to_m8, validate_tz_from_dtype)
+ DatetimeArrayMixin as DatetimeArray, _to_M8, validate_tz_from_dtype)
from pandas.core.base import _shared_docs
import pandas.core.common as com
from pandas.core.indexes.base import Index
@@ -405,7 +405,7 @@ def __setstate__(self, state):
def _convert_for_op(self, value):
""" Convert value to be insertable to ndarray """
if self._has_same_tz(value):
- return _to_m8(value)
+ return _to_M8(value)
raise ValueError('Passed item and index have different timezone')
def _maybe_update_attributes(self, attrs):
@@ -1161,7 +1161,7 @@ def searchsorted(self, value, side='left', sorter=None):
if isinstance(value, (np.ndarray, Index)):
value = np.array(value, dtype=_NS_DTYPE, copy=False)
else:
- value = _to_m8(value, tz=self.tz)
+ value = _to_M8(value, tz=self.tz)
return self.values.searchsorted(value, side=side)
@@ -1211,7 +1211,7 @@ def insert(self, loc, item):
freq = self.freq
elif (loc == len(self)) and item - self.freq == self[-1]:
freq = self.freq
- item = _to_m8(item, tz=self.tz)
+ item = _to_M8(item, tz=self.tz)
try:
new_dates = np.concatenate((self[:loc].asi8, [item.view(np.int64)],
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 0eeb7551db26f..b59c32bb8a9d4 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -276,9 +276,6 @@ def _simple_new(cls, values, name=None, freq=None, **kwargs):
result._reset_identity()
return result
- # ------------------------------------------------------------------------
- # Wrapping PeriodArray
-
# ------------------------------------------------------------------------
# Data
@@ -416,6 +413,10 @@ def _mpl_repr(self):
# how to represent ourselves to matplotlib
return self.astype(object).values
+ @property
+ def _formatter_func(self):
+ return self.array._formatter(boxed=False)
+
# ------------------------------------------------------------------------
# Indexing
@@ -496,10 +497,6 @@ def __array_wrap__(self, result, context=None):
# cannot pass _simple_new as it is
return type(self)(result, freq=self.freq, name=self.name)
- @property
- def _formatter_func(self):
- return self.array._formatter(boxed=False)
-
def asof_locs(self, where, mask):
"""
where : array of timestamps
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 241d12dd06159..5e8e6a423ab3f 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -18,7 +18,7 @@
from pandas.core.accessor import delegate_names
from pandas.core.arrays import datetimelike as dtl
from pandas.core.arrays.timedeltas import (
- TimedeltaArrayMixin as TimedeltaArray, _is_convertible_to_td, _to_m8)
+ TimedeltaArrayMixin as TimedeltaArray, _is_convertible_to_td)
from pandas.core.base import _shared_docs
import pandas.core.common as com
from pandas.core.indexes.base import Index, _index_shared_docs
@@ -614,7 +614,7 @@ def searchsorted(self, value, side='left', sorter=None):
if isinstance(value, (np.ndarray, Index)):
value = np.array(value, dtype=_TD_DTYPE, copy=False)
else:
- value = _to_m8(value)
+ value = Timedelta(value).asm8.view(_TD_DTYPE)
return self.values.searchsorted(value, side=side, sorter=sorter)
@@ -664,7 +664,7 @@ def insert(self, loc, item):
freq = self.freq
elif (loc == len(self)) and item - self.freq == self[-1]:
freq = self.freq
- item = _to_m8(item)
+ item = Timedelta(item).asm8.view(_TD_DTYPE)
try:
new_tds = np.concatenate((self[:loc].asi8, [item.view(np.int64)],
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index d4e82fe2659a0..f5c4808a09123 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -20,7 +20,7 @@
from pandas._libs.tslibs.conversion import localize_pydatetime
from pandas._libs.tslibs.offsets import shift_months
-from pandas.core.indexes.datetimes import _to_m8
+from pandas.core.indexes.datetimes import _to_M8
from pandas import (
Timestamp, Timedelta, Period, Series, date_range, NaT,
@@ -349,7 +349,7 @@ class TestDatetimeIndexComparisons(object):
def test_comparators(self, op):
index = tm.makeDateIndex(100)
element = index[len(index) // 2]
- element = _to_m8(element)
+ element = _to_M8(element)
arr = np.array(index)
arr_result = op(arr, element)
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index dbdbb0bc238a9..f60d73ea1b05b 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -400,98 +400,98 @@ def test_value_counts_unique_nunique(self):
assert o.nunique() == len(np.unique(o.values))
- def test_value_counts_unique_nunique_null(self):
+ @pytest.mark.parametrize('null_obj', [np.nan, None])
+ def test_value_counts_unique_nunique_null(self, null_obj):
- for null_obj in [np.nan, None]:
- for orig in self.objs:
- o = orig.copy()
- klass = type(o)
- values = o._ndarray_values
-
- if not self._allow_na_ops(o):
- continue
+ for orig in self.objs:
+ o = orig.copy()
+ klass = type(o)
+ values = o._ndarray_values
- # special assign to the numpy array
- if is_datetime64tz_dtype(o):
- if isinstance(o, DatetimeIndex):
- v = o.asi8
- v[0:2] = iNaT
- values = o._shallow_copy(v)
- else:
- o = o.copy()
- o[0:2] = iNaT
- values = o._values
+ if not self._allow_na_ops(o):
+ continue
- elif needs_i8_conversion(o):
- values[0:2] = iNaT
- values = o._shallow_copy(values)
+ # special assign to the numpy array
+ if is_datetime64tz_dtype(o):
+ if isinstance(o, DatetimeIndex):
+ v = o.asi8
+ v[0:2] = iNaT
+ values = o._shallow_copy(v)
else:
- values[0:2] = null_obj
- # check values has the same dtype as the original
+ o = o.copy()
+ o[0:2] = iNaT
+ values = o._values
- assert values.dtype == o.dtype
+ elif needs_i8_conversion(o):
+ values[0:2] = iNaT
+ values = o._shallow_copy(values)
+ else:
+ values[0:2] = null_obj
+ # check values has the same dtype as the original
- # create repeated values, 'n'th element is repeated by n+1
- # times
- if isinstance(o, (DatetimeIndex, PeriodIndex)):
- expected_index = o.copy()
- expected_index.name = None
+ assert values.dtype == o.dtype
- # attach name to klass
- o = klass(values.repeat(range(1, len(o) + 1)))
- o.name = 'a'
- else:
- if isinstance(o, DatetimeIndex):
- expected_index = orig._values._shallow_copy(values)
- else:
- expected_index = Index(values)
- expected_index.name = None
- o = o.repeat(range(1, len(o) + 1))
- o.name = 'a'
+ # create repeated values, 'n'th element is repeated by n+1
+ # times
+ if isinstance(o, (DatetimeIndex, PeriodIndex)):
+ expected_index = o.copy()
+ expected_index.name = None
- # check values has the same dtype as the original
- assert o.dtype == orig.dtype
- # check values correctly have NaN
- nanloc = np.zeros(len(o), dtype=np.bool)
- nanloc[:3] = True
- if isinstance(o, Index):
- tm.assert_numpy_array_equal(pd.isna(o), nanloc)
- else:
- exp = Series(nanloc, o.index, name='a')
- tm.assert_series_equal(pd.isna(o), exp)
-
- expected_s_na = Series(list(range(10, 2, -1)) + [3],
- index=expected_index[9:0:-1],
- dtype='int64', name='a')
- expected_s = Series(list(range(10, 2, -1)),
- index=expected_index[9:1:-1],
- dtype='int64', name='a')
-
- result_s_na = o.value_counts(dropna=False)
- tm.assert_series_equal(result_s_na, expected_s_na)
- assert result_s_na.index.name is None
- assert result_s_na.name == 'a'
- result_s = o.value_counts()
- tm.assert_series_equal(o.value_counts(), expected_s)
- assert result_s.index.name is None
- assert result_s.name == 'a'
-
- result = o.unique()
- if isinstance(o, Index):
- tm.assert_index_equal(result,
- Index(values[1:], name='a'))
- elif is_datetime64tz_dtype(o):
- # unable to compare NaT / nan
- tm.assert_extension_array_equal(result[1:], values[2:])
- assert result[0] is pd.NaT
+ # attach name to klass
+ o = klass(values.repeat(range(1, len(o) + 1)))
+ o.name = 'a'
+ else:
+ if isinstance(o, DatetimeIndex):
+ expected_index = orig._values._shallow_copy(values)
else:
- tm.assert_numpy_array_equal(result[1:], values[2:])
+ expected_index = Index(values)
+ expected_index.name = None
+ o = o.repeat(range(1, len(o) + 1))
+ o.name = 'a'
+
+ # check values has the same dtype as the original
+ assert o.dtype == orig.dtype
+ # check values correctly have NaN
+ nanloc = np.zeros(len(o), dtype=np.bool)
+ nanloc[:3] = True
+ if isinstance(o, Index):
+ tm.assert_numpy_array_equal(pd.isna(o), nanloc)
+ else:
+ exp = Series(nanloc, o.index, name='a')
+ tm.assert_series_equal(pd.isna(o), exp)
+
+ expected_s_na = Series(list(range(10, 2, -1)) + [3],
+ index=expected_index[9:0:-1],
+ dtype='int64', name='a')
+ expected_s = Series(list(range(10, 2, -1)),
+ index=expected_index[9:1:-1],
+ dtype='int64', name='a')
+
+ result_s_na = o.value_counts(dropna=False)
+ tm.assert_series_equal(result_s_na, expected_s_na)
+ assert result_s_na.index.name is None
+ assert result_s_na.name == 'a'
+ result_s = o.value_counts()
+ tm.assert_series_equal(o.value_counts(), expected_s)
+ assert result_s.index.name is None
+ assert result_s.name == 'a'
+
+ result = o.unique()
+ if isinstance(o, Index):
+ tm.assert_index_equal(result,
+ Index(values[1:], name='a'))
+ elif is_datetime64tz_dtype(o):
+ # unable to compare NaT / nan
+ tm.assert_extension_array_equal(result[1:], values[2:])
+ assert result[0] is pd.NaT
+ else:
+ tm.assert_numpy_array_equal(result[1:], values[2:])
- assert pd.isna(result[0])
- assert result.dtype == orig.dtype
+ assert pd.isna(result[0])
+ assert result.dtype == orig.dtype
- assert o.nunique() == 8
- assert o.nunique(dropna=False) == 9
+ assert o.nunique() == 8
+ assert o.nunique(dropna=False) == 9
@pytest.mark.parametrize('klass', [Index, Series])
def test_value_counts_inferred(self, klass):
diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py
index a938c1fe9a8fe..ac3955970587f 100644
--- a/pandas/tests/tseries/offsets/test_offsets.py
+++ b/pandas/tests/tseries/offsets/test_offsets.py
@@ -13,7 +13,7 @@
from pandas.compat import range
from pandas.compat.numpy import np_datetime64_compat
-from pandas.core.indexes.datetimes import DatetimeIndex, _to_m8, date_range
+from pandas.core.indexes.datetimes import DatetimeIndex, _to_M8, date_range
from pandas.core.series import Series
import pandas.util.testing as tm
@@ -47,9 +47,9 @@ class WeekDay(object):
####
-def test_to_m8():
+def test_to_M8():
valb = datetime(2007, 10, 1)
- valu = _to_m8(valb)
+ valu = _to_M8(valb)
assert isinstance(valu, np.datetime64)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index a6ba62bbdea1e..ebdfde2da24f8 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -1861,10 +1861,6 @@ def getCols(k):
return string.ascii_uppercase[:k]
-def getArangeMat():
- return np.arange(N * K).reshape((N, K))
-
-
# make index
def makeStringIndex(k=10, name=None):
return Index(rands_array(nchars=10, size=k), name=name)
@@ -2322,13 +2318,6 @@ def add_nans(panel):
return panel
-def add_nans_panel4d(panel4d):
- for l, label in enumerate(panel4d.labels):
- panel = panel4d[label]
- add_nans(panel)
- return panel4d
-
-
class TestSubDict(dict):
def __init__(self, *args, **kwargs):
| Both arrays.datetimes and arrays.timedeltas have a `_to_m8` function. The timedeltas one is removed since it is unnecessary; the datetimes one is given the more accurate name `_to_M8`.
A couple of unused funcs from `tm` are removed.
A test is parametrized. | https://api.github.com/repos/pandas-dev/pandas/pulls/24577 | 2019-01-03T01:23:36Z | 2019-01-03T02:25:08Z | 2019-01-03T02:25:07Z | 2019-01-03T02:31:39Z |
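For context on this record, the inlined conversion that replaces the removed timedeltas `_to_m8` helper (visible in the `searchsorted` and `insert` hunks above) can be sketched in isolation. This is a standalone sketch, not pandas internals: `_TD_DTYPE` here is assumed to be the nanosecond timedelta dtype it aliases inside pandas.

```python
import numpy as np
import pandas as pd

# Assumption: in pandas internals, _TD_DTYPE aliases the ns-resolution dtype.
_TD_DTYPE = np.dtype("m8[ns]")

# A timedelta-like scalar is coerced to a Timedelta, then viewed as a
# nanosecond numpy timedelta64 -- the same shape of conversion the PR inlines
# in place of the removed helper.
value = "1 day"
as_m8 = pd.Timedelta(value).asm8.view(_TD_DTYPE)

print(repr(as_m8), as_m8.dtype)
```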
CLN: Follow-ups to #24024 | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 8d85b84ec7507..94d716a08d9dc 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -350,9 +350,6 @@ def unique(values):
if is_extension_array_dtype(values):
# Dispatch to extension dtype's unique.
return values.unique()
- elif is_datetime64tz_dtype(values):
- # TODO: merge this check into the previous one following #24024
- return values.unique()
original = values
htable, _, values, dtype, ndtype = _get_hashtable_algo(values)
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index d233e1d09a1e9..517c80619baea 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -47,10 +47,6 @@ def cmp_method(self, other):
if isinstance(other, ABCDataFrame):
return NotImplemented
- if isinstance(other, (np.ndarray, ABCIndexClass, ABCSeries, cls)):
- if other.ndim > 0 and len(self) != len(other):
- raise ValueError('Lengths must match to compare')
-
if needs_i8_conversion(self) and needs_i8_conversion(other):
# we may need to directly compare underlying
# representations
@@ -586,10 +582,6 @@ def view(self, dtype=None):
# ------------------------------------------------------------------
# ExtensionArray Interface
- # TODO:
- # * _from_sequence
- # * argsort / _values_for_argsort
- # * _reduce
def unique(self):
result = unique1d(self.asi8)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index f42930929747d..ea2742c5808a3 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -280,8 +280,7 @@ def __init__(self, values, dtype=_NS_DTYPE, freq=None, copy=False):
)
raise ValueError(msg.format(values.dtype))
- dtype = pandas_dtype(dtype)
- _validate_dt64_dtype(dtype)
+ dtype = _validate_dt64_dtype(dtype)
if freq == "infer":
msg = (
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index e7c03de879e8a..3e782c6ef89e0 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3082,7 +3082,7 @@ def _box_item_values(self, key, values):
def _maybe_cache_changed(self, item, value):
"""The object has called back to us saying maybe it has changed.
"""
- self._data.set(item, value, check=False)
+ self._data.set(item, value)
@property
def _is_cached(self):
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index a26daba49f5d1..c702eae5da012 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -68,8 +68,7 @@ def cmp_method(self, other):
if other.ndim > 0 and len(self) != len(other):
raise ValueError('Lengths must match to compare')
- from .multi import MultiIndex
- if is_object_dtype(self) and not isinstance(self, MultiIndex):
+ if is_object_dtype(self) and not isinstance(self, ABCMultiIndex):
# don't pass MultiIndex
with np.errstate(all='ignore'):
result = ops._comp_method_OBJECT_ARRAY(op, self.values, other)
@@ -1307,8 +1306,7 @@ def set_names(self, names, level=None, inplace=False):
names=['species', 'year'])
"""
- from .multi import MultiIndex
- if level is not None and not isinstance(self, MultiIndex):
+ if level is not None and not isinstance(self, ABCMultiIndex):
raise ValueError('Level must be None for non-MultiIndex')
if level is not None and not is_list_like(level) and is_list_like(
@@ -3145,9 +3143,8 @@ def _reindex_non_unique(self, target):
@Appender(_index_shared_docs['join'])
def join(self, other, how='left', level=None, return_indexers=False,
sort=False):
- from .multi import MultiIndex
- self_is_mi = isinstance(self, MultiIndex)
- other_is_mi = isinstance(other, MultiIndex)
+ self_is_mi = isinstance(self, ABCMultiIndex)
+ other_is_mi = isinstance(other, ABCMultiIndex)
# try to figure out the join level
# GH3662
@@ -4394,8 +4391,7 @@ def groupby(self, values):
# TODO: if we are a MultiIndex, we can do better
# that converting to tuples
- from .multi import MultiIndex
- if isinstance(values, MultiIndex):
+ if isinstance(values, ABCMultiIndex):
values = values.values
values = ensure_categorical(values)
result = values._reverse_indexer()
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index daca4b5116027..5547266ea6bab 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -31,23 +31,24 @@
_index_doc_kwargs = dict(ibase._index_doc_kwargs)
-def ea_passthrough(name):
+def ea_passthrough(array_method):
"""
Make an alias for a method of the underlying ExtensionArray.
Parameters
----------
- name : str
+ array_method : method on an Array class
Returns
-------
method
"""
+
def method(self, *args, **kwargs):
- return getattr(self._eadata, name)(*args, **kwargs)
+ return array_method(self._data, *args, **kwargs)
- method.__name__ = name
- # TODO: docstrings
+ method.__name__ = array_method.__name__
+ method.__doc__ = array_method.__doc__
return method
@@ -67,9 +68,10 @@ class DatetimeIndexOpsMixin(ExtensionOpsMixin):
_resolution = cache_readonly(DatetimeLikeArrayMixin._resolution.fget)
resolution = cache_readonly(DatetimeLikeArrayMixin.resolution.fget)
- _box_values = ea_passthrough("_box_values")
- _maybe_mask_results = ea_passthrough("_maybe_mask_results")
- __iter__ = ea_passthrough("__iter__")
+ _box_values = ea_passthrough(DatetimeLikeArrayMixin._box_values)
+ _maybe_mask_results = ea_passthrough(
+ DatetimeLikeArrayMixin._maybe_mask_results)
+ __iter__ = ea_passthrough(DatetimeLikeArrayMixin.__iter__)
@property
def _eadata(self):
@@ -275,9 +277,6 @@ def sort_values(self, return_indexer=False, ascending=True):
if not ascending:
sorted_values = sorted_values[::-1]
- sorted_values = self._maybe_box_as_values(sorted_values,
- **attribs)
-
return self._simple_new(sorted_values, **attribs)
@Appender(_index_shared_docs['take'] % _index_doc_kwargs)
@@ -613,14 +612,6 @@ def _concat_same_dtype(self, to_concat, name):
new_data = type(self._values)._concat_same_type(to_concat).asi8
return self._simple_new(new_data, **attribs)
- def _maybe_box_as_values(self, values, **attribs):
- # TODO(DatetimeArray): remove
- # This is a temporary shim while PeriodArray is an ExtensoinArray,
- # but others are not. When everyone is an ExtensionArray, this can
- # be removed. Currently used in
- # - sort_values
- return values
-
@Appender(_index_shared_docs['astype'])
def astype(self, dtype, copy=True):
if is_dtype_equal(self.dtype, dtype) and copy is False:
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index a6a910f66359c..6d9829d4ef659 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -356,36 +356,6 @@ def tz(self, value):
tzinfo = tz
- @property
- def size(self):
- # TODO: Remove this when we have a DatetimeTZArray
- # Necessary to avoid recursion error since DTI._values is a DTI
- # for TZ-aware
- return self._ndarray_values.size
-
- @property
- def shape(self):
- # TODO: Remove this when we have a DatetimeTZArray
- # Necessary to avoid recursion error since DTI._values is a DTI
- # for TZ-aware
- return self._ndarray_values.shape
-
- @property
- def nbytes(self):
- # TODO: Remove this when we have a DatetimeTZArray
- # Necessary to avoid recursion error since DTI._values is a DTI
- # for TZ-aware
- return self._ndarray_values.nbytes
-
- def memory_usage(self, deep=False):
- # TODO: Remove this when we have a DatetimeTZArray
- # Necessary to avoid recursion error since DTI._values is a DTI
- # for TZ-aware
- result = self._ndarray_values.nbytes
- # include our engine hashtable
- result += self._engine.sizeof(deep=deep)
- return result
-
@cache_readonly
def _is_dates_only(self):
"""Return a boolean if we are only dates (and don't have a timezone)"""
@@ -455,11 +425,11 @@ def _mpl_repr(self):
def _format_native_types(self, na_rep='NaT', date_format=None, **kwargs):
from pandas.io.formats.format import _get_format_datetime64_from_values
- format = _get_format_datetime64_from_values(self, date_format)
+ fmt = _get_format_datetime64_from_values(self, date_format)
return libts.format_array_from_datetime(self.asi8,
tz=self.tz,
- format=format,
+ format=fmt,
na_rep=na_rep)
@property
@@ -1142,9 +1112,9 @@ def slice_indexer(self, start=None, end=None, step=None, kind=None):
is_normalized = cache_readonly(DatetimeArray.is_normalized.fget)
_resolution = cache_readonly(DatetimeArray._resolution.fget)
- strftime = ea_passthrough("strftime")
- _has_same_tz = ea_passthrough("_has_same_tz")
- __array__ = ea_passthrough("__array__")
+ strftime = ea_passthrough(DatetimeArray.strftime)
+ _has_same_tz = ea_passthrough(DatetimeArray._has_same_tz)
+ __array__ = ea_passthrough(DatetimeArray.__array__)
@property
def offset(self):
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 60059d5a43440..253ce2a28d165 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1468,9 +1468,9 @@ def to_frame(self, index=True, name=None):
# Guarantee resulting column order
result = DataFrame(
OrderedDict([
- ((level if name is None else name),
+ ((level if lvlname is None else lvlname),
self._get_level_values(level))
- for name, level in zip(idx_names, range(len(self.levels)))
+ for lvlname, level in zip(idx_names, range(len(self.levels)))
]),
copy=False
)
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 5bc76ed210edb..0eeb7551db26f 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -357,17 +357,6 @@ def func(x):
return Period._from_ordinal(ordinal=x, freq=self.freq)
return func
- def _maybe_box_as_values(self, values, **attribs):
- """Box an array of ordinals to a PeriodArray
-
- This is purely for compatibility between PeriodIndex
- and Datetime/TimedeltaIndex. Once these are all backed by
- an ExtensionArray, this can be removed
- """
- # TODO(DatetimeArray): remove
- freq = attribs['freq']
- return PeriodArray(values, freq=freq)
-
def _maybe_convert_timedelta(self, other):
"""
Convert timedelta-like input to an integer multiple of self.freq
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 3a3b9ed97c8fe..241d12dd06159 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -303,11 +303,6 @@ def _format_native_types(self, na_rep='NaT', date_format=None, **kwargs):
_is_monotonic_decreasing = Index.is_monotonic_decreasing
_is_unique = Index.is_unique
- _create_comparison_method = DatetimeIndexOpsMixin._create_comparison_method
- # TODO: make sure we have a test for name retention analogous
- # to series.test_arithmetic.test_ser_cmp_result_names;
- # also for PeriodIndex which I think may be missing one
-
@property
def _box_func(self):
return lambda x: Timedelta(x, unit='ns')
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 7845a62bb7edb..5ce5ae7186774 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -338,7 +338,7 @@ def concat_same_type(self, to_concat, placement=None):
def iget(self, i):
return self.values[i]
- def set(self, locs, values, check=False):
+ def set(self, locs, values):
"""
Modify Block in-place with new item value
@@ -2416,7 +2416,7 @@ def f(m, v, i):
return blocks
- def set(self, locs, values, check=False):
+ def set(self, locs, values):
"""
Modify Block in-place with new item value
@@ -2424,14 +2424,6 @@ def set(self, locs, values, check=False):
-------
None
"""
-
- # GH6026
- if check:
- try:
- if (self.values[locs] == values).all():
- return
- except (IndexError, ValueError):
- pass
try:
self.values[locs] = values
except (ValueError):
@@ -2902,7 +2894,7 @@ def should_store(self, value):
not is_datetime64tz_dtype(value) and
not is_extension_array_dtype(value))
- def set(self, locs, values, check=False):
+ def set(self, locs, values):
"""
Modify Block in-place with new item value
@@ -3053,8 +3045,7 @@ def _try_coerce_args(self, values, other):
elif (is_null_datelike_scalar(other) or
(lib.is_scalar(other) and isna(other))):
other = tslibs.iNaT
- elif isinstance(other, (self._holder, DatetimeArray)):
- # TODO: DatetimeArray check will be redundant after GH#24024
+ elif isinstance(other, self._holder):
if other.tz != self.values.tz:
raise ValueError("incompatible or non tz-aware value")
other = _block_shape(other.asi8, ndim=self.ndim)
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index d50f9c3e65ebd..eba49d18431ef 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1009,11 +1009,10 @@ def delete(self, item):
self._shape = None
self._rebuild_blknos_and_blklocs()
- def set(self, item, value, check=False):
+ def set(self, item, value):
"""
Set new item in-place. Does not consolidate. Adds new Block if not
contained in the current set of items
- if check, then validate that we are not setting the same data in-place
"""
# FIXME: refactor, clearly separate broadcasting & zip-like assignment
# can prob also fix the various if tests for sparse/categorical
@@ -1065,7 +1064,7 @@ def value_getitem(placement):
blk = self.blocks[blkno]
blk_locs = blklocs[val_locs.indexer]
if blk.should_store(value):
- blk.set(blk_locs, value_getitem(val_locs), check=check)
+ blk.set(blk_locs, value_getitem(val_locs))
else:
unfit_mgr_locs.append(blk.mgr_locs.as_array[blk_locs])
unfit_val_locs.append(val_locs)
diff --git a/pandas/io/packers.py b/pandas/io/packers.py
index 1e41369b00811..e6d18d5d4193a 100644
--- a/pandas/io/packers.py
+++ b/pandas/io/packers.py
@@ -584,23 +584,23 @@ def decode(obj):
dtype = dtype_for(obj[u'dtype'])
data = unconvert(obj[u'data'], dtype,
obj.get(u'compress'))
- return globals()[obj[u'klass']](data, dtype=dtype, name=obj[u'name'])
+ return Index(data, dtype=dtype, name=obj[u'name'])
elif typ == u'range_index':
- return globals()[obj[u'klass']](obj[u'start'],
- obj[u'stop'],
- obj[u'step'],
- name=obj[u'name'])
+ return RangeIndex(obj[u'start'],
+ obj[u'stop'],
+ obj[u'step'],
+ name=obj[u'name'])
elif typ == u'multi_index':
dtype = dtype_for(obj[u'dtype'])
data = unconvert(obj[u'data'], dtype,
obj.get(u'compress'))
data = [tuple(x) for x in data]
- return globals()[obj[u'klass']].from_tuples(data, names=obj[u'names'])
+ return MultiIndex.from_tuples(data, names=obj[u'names'])
elif typ == u'period_index':
data = unconvert(obj[u'data'], np.int64, obj.get(u'compress'))
d = dict(name=obj[u'name'], freq=obj[u'freq'])
freq = d.pop('freq', None)
- return globals()[obj[u'klass']](PeriodArray(data, freq), **d)
+ return PeriodIndex(PeriodArray(data, freq), **d)
elif typ == u'datetime_index':
data = unconvert(obj[u'data'], np.int64, obj.get(u'compress'))
@@ -631,11 +631,10 @@ def decode(obj):
pd_dtype = pandas_dtype(dtype)
index = obj[u'index']
- result = globals()[obj[u'klass']](unconvert(obj[u'data'], dtype,
- obj[u'compress']),
- index=index,
- dtype=pd_dtype,
- name=obj[u'name'])
+ result = Series(unconvert(obj[u'data'], dtype, obj[u'compress']),
+ index=index,
+ dtype=pd_dtype,
+ name=obj[u'name'])
return result
elif typ == u'block_manager':
@@ -671,18 +670,18 @@ def create_block(b):
return np.timedelta64(int(obj[u'data']))
# elif typ == 'sparse_series':
# dtype = dtype_for(obj['dtype'])
- # return globals()[obj['klass']](
+ # return SparseSeries(
# unconvert(obj['sp_values'], dtype, obj['compress']),
# sparse_index=obj['sp_index'], index=obj['index'],
# fill_value=obj['fill_value'], kind=obj['kind'], name=obj['name'])
# elif typ == 'sparse_dataframe':
- # return globals()[obj['klass']](
+ # return SparseDataFrame(
# obj['data'], columns=obj['columns'],
# default_fill_value=obj['default_fill_value'],
# default_kind=obj['default_kind']
# )
# elif typ == 'sparse_panel':
- # return globals()[obj['klass']](
+ # return SparsePanel(
# obj['data'], items=obj['items'],
# default_fill_value=obj['default_fill_value'],
# default_kind=obj['default_kind'])
| Grepped for TODO in the affected files and took care of the easy ones.
Addresses a flake8 complaint in MultiIndex (no idea why it is only showing up locally) | https://api.github.com/repos/pandas-dev/pandas/pulls/24573 | 2019-01-02T22:09:22Z | 2019-01-03T00:12:44Z | 2019-01-03T00:12:44Z | 2019-01-03T00:14:40Z |
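One change worth calling out from the record above: `ea_passthrough` now receives the array method object rather than its name string, so the generated index method keeps the array method's `__name__` and `__doc__`. A stripped-down sketch of the pattern — the class names here are hypothetical stand-ins, not pandas classes:

```python
def ea_passthrough(array_method):
    """Make an alias for a method of the underlying array class."""
    def method(self, *args, **kwargs):
        # Delegate to the array method, applied to the wrapped data.
        return array_method(self._data, *args, **kwargs)

    # Passing the method object (not a name string) lets us copy metadata.
    method.__name__ = array_method.__name__
    method.__doc__ = array_method.__doc__
    return method


class FakeArray:
    def total_hours(self):
        """Return a fixed number of hours."""
        return 24


class FakeIndex:
    def __init__(self):
        self._data = FakeArray()

    total_hours = ea_passthrough(FakeArray.total_hours)


idx = FakeIndex()
print(idx.total_hours())              # delegates to FakeArray.total_hours
print(FakeIndex.total_hours.__doc__)  # docstring survives the wrapping
```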
BUG: TypeError with to_html(sparsify=False) and max_cols < len(columns) | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 7628c53cefa06..826c5a795f886 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1604,6 +1604,7 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form
- Bug in :func:`to_html()` with ``index=False`` when both columns and row index are ``MultiIndex`` (:issue:`22579`)
- Bug in :func:`to_html()` with ``index_names=False`` displaying index name (:issue:`22747`)
- Bug in :func:`to_html()` with ``header=False`` not displaying row index names (:issue:`23788`)
+- Bug in :func:`to_html()` with ``sparsify=False`` that caused it to raise ``TypeError`` (:issue:`22887`)
- Bug in :func:`DataFrame.to_string()` that broke column alignment when ``index=False`` and width of first column's values is greater than the width of first column's header (:issue:`16839`, :issue:`13032`)
- Bug in :func:`DataFrame.to_string()` that caused representations of :class:`DataFrame` to not take up the whole window (:issue:`22984`)
- Bug in :func:`DataFrame.to_csv` where a single level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (:issue:`19589`).
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index 58f5364f2b523..390c3f3d5c709 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -241,7 +241,7 @@ def _write_col_header(self, indent):
# GH3547
sentinel = com.sentinel_factory()
else:
- sentinel = None
+ sentinel = False
levels = self.columns.format(sparsify=sentinel, adjoin=False,
names=False)
level_lengths = get_level_lengths(levels, sentinel)
@@ -440,9 +440,6 @@ def _write_hierarchical_rows(self, fmt_values, indent):
truncate_v = self.fmt.truncate_v
frame = self.fmt.tr_frame
nrows = len(frame)
- # TODO: after gh-22887 fixed, refactor to use class property
- # in place of row_levels
- row_levels = self.frame.index.nlevels
idx_values = frame.index.format(sparsify=False, adjoin=False,
names=False)
@@ -520,18 +517,24 @@ def _write_hierarchical_rows(self, fmt_values, indent):
row.extend(fmt_values[j][i] for j in range(self.ncols))
if truncate_h:
- row.insert(row_levels - sparse_offset +
+ row.insert(self.row_levels - sparse_offset +
self.fmt.tr_col_num, '...')
self.write_tr(row, indent, self.indent_delta, tags=tags,
nindex_levels=len(levels) - sparse_offset)
else:
+ row = []
for i in range(len(frame)):
+ if truncate_v and i == (self.fmt.tr_row_num):
+ str_sep_row = ['...'] * len(row)
+ self.write_tr(str_sep_row, indent, self.indent_delta,
+ tags=None, nindex_levels=self.row_levels)
+
idx_values = list(zip(*frame.index.format(
sparsify=False, adjoin=False, names=False)))
row = []
row.extend(idx_values[i])
row.extend(fmt_values[j][i] for j in range(self.ncols))
if truncate_h:
- row.insert(row_levels + self.fmt.tr_col_num, '...')
+ row.insert(self.row_levels + self.fmt.tr_col_num, '...')
self.write_tr(row, indent, self.indent_delta, tags=None,
nindex_levels=frame.index.nlevels)
diff --git a/pandas/tests/io/formats/data/html/truncate_multi_index_sparse_off.html b/pandas/tests/io/formats/data/html/truncate_multi_index_sparse_off.html
index 05c644dfbfe08..6a7e1b5a59e3b 100644
--- a/pandas/tests/io/formats/data/html/truncate_multi_index_sparse_off.html
+++ b/pandas/tests/io/formats/data/html/truncate_multi_index_sparse_off.html
@@ -57,6 +57,17 @@
<td>NaN</td>
<td>NaN</td>
</tr>
+ <tr>
+ <th>...</th>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
<tr>
<th>foo</th>
<th>two</th>
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index d333330c19e39..889b903088afa 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -223,7 +223,6 @@ def test_to_html_truncate_multi_index(self, datapath):
expected = expected_html(datapath, 'truncate_multi_index')
assert result == expected
- @pytest.mark.xfail(reason='GH22887 TypeError')
def test_to_html_truncate_multi_index_sparse_off(self, datapath):
arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
| - [x] closes #22887
- [x] closes #11060
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24572 | 2019-01-02T21:25:03Z | 2019-01-03T00:40:12Z | 2019-01-03T00:40:12Z | 2019-01-03T00:56:03Z |
Added Datetime & Timedelta inference to array | diff --git a/pandas/core/arrays/array_.py b/pandas/core/arrays/array_.py
index 173ed7d191ac9..4e84c62bce3d6 100644
--- a/pandas/core/arrays/array_.py
+++ b/pandas/core/arrays/array_.py
@@ -46,12 +46,14 @@ def array(data, # type: Sequence[object]
Currently, pandas will infer an extension dtype for sequences of
- ========================== ==================================
- scalar type Array Type
- ========================== ==================================
- * :class:`pandas.Interval` :class:`pandas.IntervalArray`
- * :class:`pandas.Period` :class:`pandas.arrays.PeriodArray`
- ========================== ==================================
+ ============================== =====================================
+ scalar type Array Type
+ ============================= =====================================
+ * :class:`pandas.Interval` :class:`pandas.IntervalArray`
+ * :class:`pandas.Period` :class:`pandas.arrays.PeriodArray`
+ * :class:`datetime.datetime` :class:`pandas.arrays.DatetimeArray`
+ * :class:`datetime.timedelta` :class:`pandas.arrays.TimedeltaArray`
+ ============================= =====================================
For all other cases, NumPy's usual inference rules will be used.
@@ -62,7 +64,8 @@ def array(data, # type: Sequence[object]
Returns
-------
- array : ExtensionArray
+ ExtensionArray
+ The newly created array.
Raises
------
@@ -180,7 +183,9 @@ def array(data, # type: Sequence[object]
ValueError: Cannot pass scalar '1' to 'pandas.array'.
"""
from pandas.core.arrays import (
- period_array, ExtensionArray, IntervalArray, PandasArray
+ period_array, ExtensionArray, IntervalArray, PandasArray,
+ DatetimeArrayMixin,
+ TimedeltaArrayMixin,
)
from pandas.core.internals.arrays import extract_array
@@ -220,7 +225,18 @@ def array(data, # type: Sequence[object]
# We choose to return an ndarray, rather than raising.
pass
- # TODO(DatetimeArray): handle this type
+ elif inferred_dtype.startswith('datetime'):
+ # datetime, datetime64
+ try:
+ return DatetimeArrayMixin._from_sequence(data, copy=copy)
+ except ValueError:
+ # Mixture of timezones, fall back to PandasArray
+ pass
+
+ elif inferred_dtype.startswith('timedelta'):
+ # timedelta, timedelta64
+ return TimedeltaArrayMixin._from_sequence(data, copy=copy)
+
# TODO(BooleanArray): handle this type
result = PandasArray._from_sequence(data, dtype=dtype, copy=copy)
diff --git a/pandas/tests/arrays/test_array.py b/pandas/tests/arrays/test_array.py
index 76ef85b0317ad..1d09a1f65e43f 100644
--- a/pandas/tests/arrays/test_array.py
+++ b/pandas/tests/arrays/test_array.py
@@ -1,7 +1,9 @@
+import datetime
import decimal
import numpy as np
import pytest
+import pytz
from pandas.core.dtypes.dtypes import registry
@@ -89,11 +91,51 @@ def test_array_copy():
assert np.shares_memory(a, b._ndarray) is True
+cet = pytz.timezone("CET")
+
+
@pytest.mark.parametrize('data, expected', [
+ # period
([pd.Period("2000", "D"), pd.Period("2001", "D")],
period_array(["2000", "2001"], freq="D")),
+
+ # interval
([pd.Interval(0, 1), pd.Interval(1, 2)],
pd.IntervalArray.from_breaks([0, 1, 2])),
+
+ # datetime
+ ([pd.Timestamp('2000',), pd.Timestamp('2001')],
+ pd.arrays.DatetimeArray._from_sequence(['2000', '2001'])),
+
+ ([datetime.datetime(2000, 1, 1), datetime.datetime(2001, 1, 1)],
+ pd.arrays.DatetimeArray._from_sequence(['2000', '2001'])),
+
+ (np.array([1, 2], dtype='M8[ns]'),
+ pd.arrays.DatetimeArray(np.array([1, 2], dtype='M8[ns]'))),
+
+ (np.array([1, 2], dtype='M8[us]'),
+ pd.arrays.DatetimeArray(np.array([1000, 2000], dtype='M8[ns]'))),
+
+ # datetimetz
+ ([pd.Timestamp('2000', tz='CET'), pd.Timestamp('2001', tz='CET')],
+ pd.arrays.DatetimeArray._from_sequence(
+ ['2000', '2001'], dtype=pd.DatetimeTZDtype(tz='CET'))),
+
+ ([datetime.datetime(2000, 1, 1, tzinfo=cet),
+ datetime.datetime(2001, 1, 1, tzinfo=cet)],
+ pd.arrays.DatetimeArray._from_sequence(['2000', '2001'],
+ tz=cet)),
+
+ # timedelta
+ ([pd.Timedelta('1H'), pd.Timedelta('2H')],
+ pd.arrays.TimedeltaArray._from_sequence(['1H', '2H'])),
+
+ (np.array([1, 2], dtype='m8[ns]'),
+ pd.arrays.TimedeltaArray(np.array([1, 2], dtype='m8[ns]'))),
+
+ (np.array([1, 2], dtype='m8[us]'),
+ pd.arrays.TimedeltaArray(np.array([1000, 2000], dtype='m8[ns]'))),
+
])
def test_array_inference(data, expected):
result = pd.array(data)
@@ -105,6 +147,15 @@ def test_array_inference(data, expected):
[pd.Period("2000", "D"), pd.Period("2001", "A")],
# mix of closed
[pd.Interval(0, 1, closed='left'), pd.Interval(1, 2, closed='right')],
+ # Mix of timezones
+ [pd.Timestamp("2000", tz="CET"), pd.Timestamp("2000", tz="UTC")],
+ # Mix of tz-aware and tz-naive
+ [pd.Timestamp("2000", tz="CET"), pd.Timestamp("2000")],
+ # GH-24569
+ pytest.param(
+ np.array([pd.Timestamp('2000'), pd.Timestamp('2000', tz='CET')]),
+ marks=pytest.mark.xfail(reason="bug in DTA._from_sequence")
+ ),
])
def test_array_inference_fails(data):
result = pd.array(data)
| Closes https://github.com/pandas-dev/pandas/issues/24568 | https://api.github.com/repos/pandas-dev/pandas/pulls/24571 | 2019-01-02T20:10:35Z | 2019-01-03T00:41:37Z | 2019-01-03T00:41:37Z | 2019-01-03T00:41:41Z |
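The diff above teaches `pd.array` to infer datetime-like and timedelta-like inputs into the dedicated extension arrays instead of an object-dtype `PandasArray`. A minimal sketch of the resulting behavior (assuming a pandas release that includes this change, where the arrays are exposed as `DatetimeArray`/`TimedeltaArray`):

```python
import pandas as pd

# A list of timestamps now infers to a DatetimeArray rather than an
# object-dtype PandasArray.
dt_arr = pd.array([pd.Timestamp("2000"), pd.Timestamp("2001")])
print(type(dt_arr).__name__)  # DatetimeArray

# A list of timedeltas likewise infers to a TimedeltaArray.
td_arr = pd.array([pd.Timedelta(hours=1), pd.Timedelta(hours=2)])
print(type(td_arr).__name__)  # TimedeltaArray
```

As the test additions show, a mixture of timezones cannot be represented by a single `DatetimeArray`, so inference falls back rather than producing a tz-aware array.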
TST: isort tests/sparse | diff --git a/pandas/tests/sparse/frame/conftest.py b/pandas/tests/sparse/frame/conftest.py
index f36b4e643d10b..3423260c1720a 100644
--- a/pandas/tests/sparse/frame/conftest.py
+++ b/pandas/tests/sparse/frame/conftest.py
@@ -1,8 +1,7 @@
-import pytest
-
import numpy as np
+import pytest
-from pandas import SparseDataFrame, SparseArray, DataFrame, bdate_range
+from pandas import DataFrame, SparseArray, SparseDataFrame, bdate_range
data = {'A': [np.nan, np.nan, np.nan, 0, 1, 2, 3, 4, 5, 6],
'B': [0, 1, 2, np.nan, np.nan, np.nan, 3, 4, 5, 6],
diff --git a/pandas/tests/sparse/frame/test_analytics.py b/pandas/tests/sparse/frame/test_analytics.py
index 2d9ccaa059a8c..95c1c8c453d0a 100644
--- a/pandas/tests/sparse/frame/test_analytics.py
+++ b/pandas/tests/sparse/frame/test_analytics.py
@@ -1,6 +1,7 @@
-import pytest
import numpy as np
-from pandas import SparseDataFrame, DataFrame, SparseSeries
+import pytest
+
+from pandas import DataFrame, SparseDataFrame, SparseSeries
from pandas.util import testing as tm
diff --git a/pandas/tests/sparse/frame/test_apply.py b/pandas/tests/sparse/frame/test_apply.py
index c26776ac4fd49..b5ea0a5c90e1a 100644
--- a/pandas/tests/sparse/frame/test_apply.py
+++ b/pandas/tests/sparse/frame/test_apply.py
@@ -1,8 +1,9 @@
-import pytest
import numpy as np
-from pandas import SparseDataFrame, DataFrame, Series, bdate_range
-from pandas.core.sparse.api import SparseDtype
+import pytest
+
+from pandas import DataFrame, Series, SparseDataFrame, bdate_range
from pandas.core import nanops
+from pandas.core.sparse.api import SparseDtype
from pandas.util import testing as tm
diff --git a/pandas/tests/sparse/frame/test_frame.py b/pandas/tests/sparse/frame/test_frame.py
index 21100e3c3ffeb..f908c7b263dee 100644
--- a/pandas/tests/sparse/frame/test_frame.py
+++ b/pandas/tests/sparse/frame/test_frame.py
@@ -2,25 +2,24 @@
import operator
-import pytest
-from numpy import nan
import numpy as np
-import pandas as pd
+from numpy import nan
+import pytest
-from pandas import Series, DataFrame, bdate_range, Panel
+from pandas._libs.sparse import BlockIndex, IntIndex
+from pandas.compat import lrange
from pandas.errors import PerformanceWarning
+
+import pandas as pd
+from pandas import DataFrame, Panel, Series, bdate_range, compat
from pandas.core.indexes.datetimes import DatetimeIndex
-from pandas.tseries.offsets import BDay
-from pandas.util import testing as tm
-from pandas.compat import lrange
-from pandas import compat
from pandas.core.sparse import frame as spf
-
-from pandas._libs.sparse import BlockIndex, IntIndex
from pandas.core.sparse.api import (
- SparseSeries, SparseDataFrame, SparseArray, SparseDtype
-)
+ SparseArray, SparseDataFrame, SparseDtype, SparseSeries)
from pandas.tests.frame.test_api import SharedWithSparse
+from pandas.util import testing as tm
+
+from pandas.tseries.offsets import BDay
class TestSparseDataFrame(SharedWithSparse):
diff --git a/pandas/tests/sparse/frame/test_indexing.py b/pandas/tests/sparse/frame/test_indexing.py
index e4ca3b90ff8d0..2d2a7ac278dd6 100644
--- a/pandas/tests/sparse/frame/test_indexing.py
+++ b/pandas/tests/sparse/frame/test_indexing.py
@@ -1,8 +1,8 @@
-import pytest
import numpy as np
-from pandas import SparseDataFrame, DataFrame
-from pandas.util import testing as tm
+import pytest
+from pandas import DataFrame, SparseDataFrame
+from pandas.util import testing as tm
pytestmark = pytest.mark.skip("Wrong SparseBlock initialization (GH 17386)")
diff --git a/pandas/tests/sparse/frame/test_to_csv.py b/pandas/tests/sparse/frame/test_to_csv.py
index b0243dfde8d3f..ed19872f8a7ef 100644
--- a/pandas/tests/sparse/frame/test_to_csv.py
+++ b/pandas/tests/sparse/frame/test_to_csv.py
@@ -1,5 +1,6 @@
import numpy as np
import pytest
+
from pandas import SparseDataFrame, read_csv
from pandas.util import testing as tm
diff --git a/pandas/tests/sparse/frame/test_to_from_scipy.py b/pandas/tests/sparse/frame/test_to_from_scipy.py
index e5c50e9574f90..bdb2cd022b451 100644
--- a/pandas/tests/sparse/frame/test_to_from_scipy.py
+++ b/pandas/tests/sparse/frame/test_to_from_scipy.py
@@ -1,13 +1,14 @@
-import pytest
+from distutils.version import LooseVersion
+
import numpy as np
+import pytest
+
+from pandas.core.dtypes.common import is_bool_dtype
+
import pandas as pd
-from pandas.util import testing as tm
from pandas import SparseDataFrame, SparseSeries
from pandas.core.sparse.api import SparseDtype
-from distutils.version import LooseVersion
-from pandas.core.dtypes.common import (
- is_bool_dtype,
-)
+from pandas.util import testing as tm
scipy = pytest.importorskip('scipy')
ignore_matrix_warning = pytest.mark.filterwarnings(
diff --git a/pandas/tests/sparse/series/test_indexing.py b/pandas/tests/sparse/series/test_indexing.py
index 989cf3b974560..0f4235d7cc3fe 100644
--- a/pandas/tests/sparse/series/test_indexing.py
+++ b/pandas/tests/sparse/series/test_indexing.py
@@ -1,8 +1,8 @@
-import pytest
import numpy as np
-from pandas import SparseSeries, Series
-from pandas.util import testing as tm
+import pytest
+from pandas import Series, SparseSeries
+from pandas.util import testing as tm
pytestmark = pytest.mark.skip("Wrong SparseBlock initialization (GH 17386)")
diff --git a/pandas/tests/sparse/series/test_series.py b/pandas/tests/sparse/series/test_series.py
index 225ef96581e72..7eed47d0de888 100644
--- a/pandas/tests/sparse/series/test_series.py
+++ b/pandas/tests/sparse/series/test_series.py
@@ -1,28 +1,26 @@
# pylint: disable-msg=E1101,W0612
-import operator
from datetime import datetime
+import operator
-import pytest
-
-from numpy import nan
import numpy as np
-import pandas as pd
-
+from numpy import nan
+import pytest
-from pandas import Series, DataFrame, bdate_range, isna, compat
+from pandas._libs.sparse import BlockIndex, IntIndex
+from pandas.compat import PY36, range
from pandas.errors import PerformanceWarning
-from pandas.tseries.offsets import BDay
-import pandas.util.testing as tm
import pandas.util._test_decorators as td
-from pandas.compat import range, PY36
-from pandas.core.reshape.util import cartesian_product
+import pandas as pd
+from pandas import (
+ DataFrame, Series, SparseDtype, SparseSeries, bdate_range, compat, isna)
+from pandas.core.reshape.util import cartesian_product
import pandas.core.sparse.frame as spf
-
-from pandas._libs.sparse import BlockIndex, IntIndex
-from pandas import SparseSeries, SparseDtype
from pandas.tests.series.test_api import SharedWithSparse
+import pandas.util.testing as tm
+
+from pandas.tseries.offsets import BDay
def _test_data1():
diff --git a/pandas/tests/sparse/test_combine_concat.py b/pandas/tests/sparse/test_combine_concat.py
index 92483f1e7511e..97d5aaca82778 100644
--- a/pandas/tests/sparse/test_combine_concat.py
+++ b/pandas/tests/sparse/test_combine_concat.py
@@ -1,11 +1,13 @@
# pylint: disable-msg=E1101,W0612
-import pytest
+import itertools
import numpy as np
+import pytest
+
+from pandas.errors import PerformanceWarning
+
import pandas as pd
import pandas.util.testing as tm
-from pandas.errors import PerformanceWarning
-import itertools
class TestSparseArrayConcat(object):
diff --git a/pandas/tests/sparse/test_format.py b/pandas/tests/sparse/test_format.py
index 4186f579f62f5..63018f9525b1f 100644
--- a/pandas/tests/sparse/test_format.py
+++ b/pandas/tests/sparse/test_format.py
@@ -2,13 +2,12 @@
from __future__ import print_function
import numpy as np
-import pandas as pd
-import pandas.util.testing as tm
-from pandas.compat import (is_platform_windows,
- is_platform_32bit)
-from pandas.core.config import option_context
+from pandas.compat import is_platform_32bit, is_platform_windows
+import pandas as pd
+from pandas.core.config import option_context
+import pandas.util.testing as tm
use_32bit_repr = is_platform_windows() or is_platform_32bit()
diff --git a/pandas/tests/sparse/test_indexing.py b/pandas/tests/sparse/test_indexing.py
index fb10473ec78a8..6d8c6f13cd32b 100644
--- a/pandas/tests/sparse/test_indexing.py
+++ b/pandas/tests/sparse/test_indexing.py
@@ -1,10 +1,11 @@
# pylint: disable-msg=E1101,W0612
-import pytest
import numpy as np
+import pytest
+
import pandas as pd
-import pandas.util.testing as tm
from pandas.core.sparse.api import SparseDtype
+import pandas.util.testing as tm
class TestSparseSeriesIndexing(object):
diff --git a/pandas/tests/sparse/test_pivot.py b/pandas/tests/sparse/test_pivot.py
index 0e71048f51177..af7de43ec0f8a 100644
--- a/pandas/tests/sparse/test_pivot.py
+++ b/pandas/tests/sparse/test_pivot.py
@@ -1,4 +1,5 @@
import numpy as np
+
import pandas as pd
import pandas.util.testing as tm
diff --git a/pandas/tests/sparse/test_reshape.py b/pandas/tests/sparse/test_reshape.py
index d4ba672607982..6830e40ce6533 100644
--- a/pandas/tests/sparse/test_reshape.py
+++ b/pandas/tests/sparse/test_reshape.py
@@ -1,5 +1,5 @@
-import pytest
import numpy as np
+import pytest
import pandas as pd
import pandas.util.testing as tm
diff --git a/setup.cfg b/setup.cfg
index d4cdd57e7a448..032a41df90f83 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -145,23 +145,8 @@ skip=
pandas/tests/plotting/common.py,
pandas/tests/plotting/test_boxplot_method.py,
pandas/tests/plotting/test_deprecated.py,
- pandas/tests/sparse/test_indexing.py,
pandas/tests/extension/test_sparse.py,
pandas/tests/extension/base/reduce.py,
- pandas/tests/sparse/test_reshape.py,
- pandas/tests/sparse/test_pivot.py,
- pandas/tests/sparse/test_format.py,
- pandas/tests/sparse/test_groupby.py,
- pandas/tests/sparse/test_combine_concat.py,
- pandas/tests/sparse/series/test_indexing.py,
- pandas/tests/sparse/series/test_series.py,
- pandas/tests/sparse/frame/test_indexing.py,
- pandas/tests/sparse/frame/test_to_from_scipy.py,
- pandas/tests/sparse/frame/test_to_csv.py,
- pandas/tests/sparse/frame/test_apply.py,
- pandas/tests/sparse/frame/test_analytics.py,
- pandas/tests/sparse/frame/test_frame.py,
- pandas/tests/sparse/frame/conftest.py,
pandas/tests/computation/test_compat.py,
pandas/tests/computation/test_eval.py,
pandas/types/common.py,
| xref #23334
| https://api.github.com/repos/pandas-dev/pandas/pulls/24563 | 2019-01-02T18:10:49Z | 2019-01-03T00:45:43Z | 2019-01-03T00:45:43Z | 2019-01-03T00:54:58Z |
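The `setup.cfg` hunk in this diff removes the sparse test files from isort's `skip` list once their imports have been sorted. A hypothetical minimal `[isort]` section of the same shape (option values are illustrative, not pandas' exact configuration; the real settings live in the repository's `setup.cfg`):

```ini
[isort]
known_first_party = pandas
combine_as_imports = True
# Entries are deleted from this list as files are converted, which is
# exactly what the hunk above does for pandas/tests/sparse.
skip =
    pandas/tests/computation/test_compat.py
    pandas/tests/computation/test_eval.py
```

Once a file is off the `skip` list, CI enforces sorted imports for it going forward.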
See also description formatting | diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 3a522baaa92af..3147f36dcc835 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -389,8 +389,8 @@ cdef class Interval(IntervalMixin):
See Also
--------
- IntervalArray.overlaps : The corresponding method for IntervalArray
- IntervalIndex.overlaps : The corresponding method for IntervalIndex
+ IntervalArray.overlaps : The corresponding method for IntervalArray.
+ IntervalIndex.overlaps : The corresponding method for IntervalIndex.
Examples
--------
diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 483b84940dbc8..2f4edb7de8f95 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1830,9 +1830,8 @@ cdef class _Period(object):
See Also
--------
- Period.dayofweek : Get the day of the week
-
- Period.dayofyear : Get the day of the year
+ Period.dayofweek : Get the day of the week.
+ Period.dayofyear : Get the day of the year.
Examples
--------
@@ -2189,8 +2188,8 @@ cdef class _Period(object):
See Also
--------
- Period.days_in_month : Return the days of the month
- Period.dayofyear : Return the day of the year
+ Period.days_in_month : Return the days of the month.
+ Period.dayofyear : Return the day of the year.
Examples
--------
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 20ac13ed0ef71..49187aad4f1eb 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -660,9 +660,9 @@ def str_match(arr, pat, case=True, flags=0, na=np.nan):
See Also
--------
- contains : analogous, but less strict, relying on re.search instead of
- re.match
- extract : extract matched groups
+ contains : Analogous, but less strict, relying on re.search instead of
+ re.match.
+ extract : Extract matched groups.
"""
if not case:
flags |= re.IGNORECASE
@@ -1255,13 +1255,13 @@ def str_pad(arr, width, side='left', fillchar=' '):
See Also
--------
- Series.str.rjust: Fills the left side of strings with an arbitrary
+ Series.str.rjust : Fills the left side of strings with an arbitrary
character. Equivalent to ``Series.str.pad(side='left')``.
- Series.str.ljust: Fills the right side of strings with an arbitrary
+ Series.str.ljust : Fills the right side of strings with an arbitrary
character. Equivalent to ``Series.str.pad(side='right')``.
- Series.str.center: Fills boths sides of strings with an arbitrary
+ Series.str.center : Fills boths sides of strings with an arbitrary
character. Equivalent to ``Series.str.pad(side='both')``.
- Series.str.zfill: Pad strings in the Series/Index by prepending '0'
+ Series.str.zfill : Pad strings in the Series/Index by prepending '0'
character. Equivalent to ``Series.str.pad(side='left', fillchar='0')``.
Examples
@@ -2485,7 +2485,8 @@ def rsplit(self, pat=None, n=-1, expand=False):
'side': 'first',
'return': '3 elements containing the string itself, followed by two '
'empty strings',
- 'also': 'rpartition : Split the string at the last occurrence of `sep`'
+ 'also': 'rpartition : Split the string at the last occurrence of '
+ '`sep`.'
})
@deprecate_kwarg(old_arg_name='pat', new_arg_name='sep')
def partition(self, sep=' ', expand=True):
@@ -2497,7 +2498,8 @@ def partition(self, sep=' ', expand=True):
'side': 'last',
'return': '3 elements containing two empty strings, followed by the '
'string itself',
- 'also': 'partition : Split the string at the first occurrence of `sep`'
+ 'also': 'partition : Split the string at the first occurrence of '
+ '`sep`.'
})
@deprecate_kwarg(old_arg_name='pat', new_arg_name='sep')
def rpartition(self, sep=' ', expand=True):
@@ -2593,13 +2595,13 @@ def zfill(self, width):
See Also
--------
- Series.str.rjust: Fills the left side of strings with an arbitrary
+ Series.str.rjust : Fills the left side of strings with an arbitrary
character.
- Series.str.ljust: Fills the right side of strings with an arbitrary
+ Series.str.ljust : Fills the right side of strings with an arbitrary
character.
- Series.str.pad: Fills the specified sides of strings with an arbitrary
+ Series.str.pad : Fills the specified sides of strings with an arbitrary
character.
- Series.str.center: Fills boths sides of strings with an arbitrary
+ Series.str.center : Fills boths sides of strings with an arbitrary
character.
Notes
@@ -2793,14 +2795,14 @@ def extractall(self, pat, flags=0):
@Appender(_shared_docs['find'] %
dict(side='lowest', method='find',
- also='rfind : Return highest indexes in each strings'))
+ also='rfind : Return highest indexes in each strings.'))
def find(self, sub, start=0, end=None):
result = str_find(self._parent, sub, start=start, end=end, side='left')
return self._wrap_result(result)
@Appender(_shared_docs['find'] %
dict(side='highest', method='rfind',
- also='find : Return lowest indexes in each strings'))
+ also='find : Return lowest indexes in each strings.'))
def rfind(self, sub, start=0, end=None):
result = str_find(self._parent, sub,
start=start, end=end, side='right')
@@ -2852,7 +2854,7 @@ def normalize(self, form):
@Appender(_shared_docs['index'] %
dict(side='lowest', similar='find', method='index',
- also='rindex : Return highest indexes in each strings'))
+ also='rindex : Return highest indexes in each strings.'))
def index(self, sub, start=0, end=None):
result = str_index(self._parent, sub,
start=start, end=end, side='left')
@@ -2860,7 +2862,7 @@ def index(self, sub, start=0, end=None):
@Appender(_shared_docs['index'] %
dict(side='highest', similar='rfind', method='rindex',
- also='index : Return lowest indexes in each strings'))
+ also='index : Return lowest indexes in each strings.'))
def rindex(self, sub, start=0, end=None):
result = str_index(self._parent, sub,
start=start, end=end, side='right')
| - [x] xref #23630 DOC: Fix format of the See Also descriptions
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24561 | 2019-01-02T17:47:18Z | 2019-01-04T01:10:15Z | 2019-01-04T01:10:15Z | 2019-01-04T01:20:17Z |
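The convention this diff enforces is the numpydoc "See Also" entry format: `name : description`, with a space on both sides of the colon and a terminating period on the description. A minimal sketch using a hypothetical function, checked against its own docstring:

```python
def head(n=5):
    """
    Return the first `n` rows.

    See Also
    --------
    DataFrame.tail : Returns the last `n` rows.
    """
    ...

# Every See Also entry separates name and description with " : "
# and ends the description with a period.
for line in head.__doc__.splitlines():
    line = line.strip()
    if " : " in line:
        assert line.endswith(".")
```

The pandas docstring validation script flags entries that omit the space before the colon or the trailing period, which is what most of the hunks above correct.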
REF: shift ravel in infer_dtype | diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index d6e2b9a5288f5..1124000c97875 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -623,7 +623,7 @@ def clean_index_list(obj: list):
return obj, all_arrays
# don't force numpy coerce with nan's
- inferred = infer_dtype(obj)
+ inferred = infer_dtype(obj, skipna=False)
if inferred in ['string', 'bytes', 'unicode', 'mixed', 'mixed-integer']:
return np.asarray(obj, dtype=object), 0
elif inferred in ['integer']:
@@ -1210,6 +1210,10 @@ def infer_dtype(value: object, skipna: bool=False) -> str:
values = construct_1d_object_array_from_listlike(value)
values = getattr(values, 'values', values)
+
+ # make contiguous
+ values = values.ravel()
+
if skipna:
values = values[~isnaobj(values)]
@@ -1220,9 +1224,6 @@ def infer_dtype(value: object, skipna: bool=False) -> str:
if values.dtype != np.object_:
values = values.astype('O')
- # make contiguous
- values = values.ravel()
-
n = len(values)
if n == 0:
return 'empty'
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 94d716a08d9dc..b473a7aef929e 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -165,7 +165,7 @@ def _ensure_arraylike(values):
ensure that we are arraylike if not already
"""
if not is_array_like(values):
- inferred = lib.infer_dtype(values)
+ inferred = lib.infer_dtype(values, skipna=False)
if inferred in ['mixed', 'string', 'unicode']:
if isinstance(values, tuple):
values = list(values)
@@ -202,8 +202,10 @@ def _get_hashtable_algo(values):
if ndtype == 'object':
- # its cheaper to use a String Hash Table than Object
- if lib.infer_dtype(values) in ['string']:
+ # it's cheaper to use a String Hash Table than Object; we infer
+ # including nulls because that is the only difference between
+ # StringHashTable and ObjectHashtable
+ if lib.infer_dtype(values, skipna=False) in ['string']:
ndtype = 'string'
else:
ndtype = 'object'
@@ -220,8 +222,10 @@ def _get_data_algo(values, func_map):
values, dtype, ndtype = _ensure_data(values)
if ndtype == 'object':
- # its cheaper to use a String Hash Table than Object
- if lib.infer_dtype(values) in ['string']:
+ # it's cheaper to use a String Hash Table than Object; we infer
+ # including nulls because that is the only difference between
+ # StringHashTable and ObjectHashtable
+ if lib.infer_dtype(values, skipna=False) in ['string']:
ndtype = 'string'
f = func_map.get(ndtype, func_map['object'])
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index ea2742c5808a3..281fbe14e48c5 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -1652,7 +1652,7 @@ def sequence_to_dt64ns(data, dtype=None, copy=False,
# TODO: We do not have tests specific to string-dtypes,
# also complex or categorical or other extension
copy = False
- if lib.infer_dtype(data) == 'integer':
+ if lib.infer_dtype(data, skipna=False) == 'integer':
data = data.astype(np.int64)
else:
# data comes back here as either i8 to denote UTC timestamps
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index eaec76b96a24d..af2c05bbee7c2 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -171,8 +171,8 @@ def coerce_to_array(values, dtype, mask=None, copy=False):
values = np.array(values, copy=copy)
if is_object_dtype(values):
- inferred_type = lib.infer_dtype(values)
- if inferred_type is 'mixed' and isna(values).all():
+ inferred_type = lib.infer_dtype(values, skipna=True)
+ if inferred_type == 'empty':
values = np.empty(len(values))
values.fill(np.nan)
elif inferred_type not in ['floating', 'integer',
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index b747e2b6b096b..b4b6d64b95b56 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -594,7 +594,7 @@ def __floordiv__(self, other):
elif is_object_dtype(other):
result = [self[n] // other[n] for n in range(len(self))]
result = np.array(result)
- if lib.infer_dtype(result) == 'timedelta':
+ if lib.infer_dtype(result, skipna=False) == 'timedelta':
result, _ = sequence_to_td64ns(result)
return type(self)(result)
return result
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 8f26f7ac209b1..b55bad46580fe 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -75,7 +75,8 @@ def trans(x):
if isinstance(dtype, string_types):
if dtype == 'infer':
- inferred_type = lib.infer_dtype(ensure_object(result.ravel()))
+ inferred_type = lib.infer_dtype(ensure_object(result.ravel()),
+ skipna=False)
if inferred_type == 'boolean':
dtype = 'bool'
elif inferred_type == 'integer':
@@ -460,7 +461,7 @@ def infer_dtype_from_array(arr, pandas_dtype=False):
return arr.dtype, np.asarray(arr)
# don't force numpy coerce with nan's
- inferred = lib.infer_dtype(arr)
+ inferred = lib.infer_dtype(arr, skipna=False)
if inferred in ['string', 'bytes', 'unicode',
'mixed', 'mixed-integer']:
return (np.object_, arr)
@@ -941,10 +942,11 @@ def try_timedelta(v):
# We have at least a NaT and a string
# try timedelta first to avoid spurious datetime conversions
- # e.g. '00:00:01' is a timedelta but
- # technically is also a datetime
+ # e.g. '00:00:01' is a timedelta but technically is also a datetime
value = try_timedelta(v)
- if lib.infer_dtype(value) in ['mixed']:
+ if lib.infer_dtype(value, skipna=False) in ['mixed']:
+ # cannot skip missing values, as NaT implies that the string
+ # is actually a datetime
value = try_datetime(v)
return value
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 293ce7d8e4aca..b4c769fab88ad 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -703,7 +703,8 @@ def is_datetime_arraylike(arr):
if isinstance(arr, ABCDatetimeIndex):
return True
elif isinstance(arr, (np.ndarray, ABCSeries)):
- return arr.dtype == object and lib.infer_dtype(arr) == 'datetime'
+ return (is_object_dtype(arr.dtype)
+ and lib.infer_dtype(arr, skipna=False) == 'datetime')
return getattr(arr, 'inferred_type', None) == 'datetime'
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index 21ec14ace3e44..b22cb1050f140 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -474,7 +474,7 @@ def _infer_fill_value(val):
if is_datetimelike(val):
return np.array('NaT', dtype=val.dtype)
elif is_object_dtype(val.dtype):
- dtype = lib.infer_dtype(ensure_object(val))
+ dtype = lib.infer_dtype(ensure_object(val), skipna=False)
if dtype in ['datetime', 'datetime64']:
return np.array('NaT', dtype=_NS_DTYPE)
elif dtype in ['timedelta', 'timedelta64']:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index c702eae5da012..a7f2d4fad38de 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -346,7 +346,7 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None,
# should not be coerced
# GH 11836
if is_integer_dtype(dtype):
- inferred = lib.infer_dtype(data)
+ inferred = lib.infer_dtype(data, skipna=False)
if inferred == 'integer':
data = maybe_cast_to_integer_array(data, dtype,
copy=copy)
@@ -376,7 +376,7 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None,
else:
data = data.astype(dtype)
elif is_float_dtype(dtype):
- inferred = lib.infer_dtype(data)
+ inferred = lib.infer_dtype(data, skipna=False)
if inferred == 'string':
pass
else:
@@ -414,7 +414,7 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None,
subarr = subarr.copy()
if dtype is None:
- inferred = lib.infer_dtype(subarr)
+ inferred = lib.infer_dtype(subarr, skipna=False)
if inferred == 'integer':
try:
return cls._try_convert_to_int_index(
@@ -1718,7 +1718,7 @@ def inferred_type(self):
"""
Return a string of the type inferred from the values.
"""
- return lib.infer_dtype(self)
+ return lib.infer_dtype(self, skipna=False)
@cache_readonly
def is_all_dates(self):
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 253ce2a28d165..8d26080a0361d 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2318,7 +2318,8 @@ def _partial_tup_index(self, tup, side='left'):
section = labs[start:end]
if lab not in lev:
- if not lev.is_type_compatible(lib.infer_dtype([lab])):
+ if not lev.is_type_compatible(lib.infer_dtype([lab],
+ skipna=False)):
raise TypeError('Level type mismatch: %s' % lab)
# short circuit
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index b3c893c7d84be..62e7f64518bcc 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -667,7 +667,7 @@ def sanitize_array(data, index, dtype=None, copy=False,
subarr = np.array(data, dtype=object, copy=copy)
if is_object_dtype(subarr.dtype) and dtype != 'object':
- inferred = lib.infer_dtype(subarr)
+ inferred = lib.infer_dtype(subarr, skipna=False)
if inferred == 'period':
try:
subarr = period_array(subarr)
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 130bc2b080c72..191cd5d63eea3 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -947,7 +947,8 @@ def _maybe_coerce_merge_keys(self):
continue
# let's infer and see if we are ok
- elif lib.infer_dtype(lk) == lib.infer_dtype(rk):
+ elif (lib.infer_dtype(lk, skipna=False)
+ == lib.infer_dtype(rk, skipna=False)):
continue
# Check if we are trying to merge on obviously
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py
index 21a93f7deec8b..6f95b14993228 100644
--- a/pandas/core/reshape/tile.py
+++ b/pandas/core/reshape/tile.py
@@ -416,7 +416,7 @@ def _convert_bin_to_numeric_type(bins, dtype):
------
ValueError if bins are not of a compat dtype to dtype
"""
- bins_dtype = infer_dtype(bins)
+ bins_dtype = infer_dtype(bins, skipna=False)
if is_timedelta64_dtype(dtype):
if bins_dtype in ['timedelta', 'timedelta64']:
bins = to_timedelta(bins).view(np.int64)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 3637081e09f8c..52b60339a7d68 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -875,7 +875,7 @@ def _get_with(self, key):
if isinstance(key, Index):
key_type = key.inferred_type
else:
- key_type = lib.infer_dtype(key)
+ key_type = lib.infer_dtype(key, skipna=False)
if key_type == 'integer':
if self.index.is_integer() or self.index.is_floating():
@@ -1012,7 +1012,7 @@ def _set_with(self, key, value):
if isinstance(key, Index):
key_type = key.inferred_type
else:
- key_type = lib.infer_dtype(key)
+ key_type = lib.infer_dtype(key, skipna=False)
if key_type == 'integer':
if self.index.inferred_type == 'integer':
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index b34dfddcc66e1..ef69939d6e978 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -454,7 +454,7 @@ def sort_mixed(values):
return np.concatenate([nums, np.asarray(strs, dtype=object)])
sorter = None
- if PY3 and lib.infer_dtype(values) == 'mixed-integer':
+ if PY3 and lib.infer_dtype(values, skipna=False) == 'mixed-integer':
# unorderable in py3 if mixed str/int
ordered = sort_mixed(values)
else:
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 2861f32e54e5e..5590e8f445c67 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1300,7 +1300,7 @@ def _validate_usecols_arg(usecols):
elif not is_list_like(usecols):
raise ValueError(msg)
else:
- usecols_dtype = lib.infer_dtype(usecols)
+ usecols_dtype = lib.infer_dtype(usecols, skipna=False)
if usecols_dtype not in ('empty', 'integer',
'string', 'unicode'):
raise ValueError(msg)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index cec594a13b3d3..b115529f696b8 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1952,7 +1952,7 @@ def set_atom(self, block, block_items, existing_col, min_itemsize,
return self.set_atom_complex(block)
dtype = block.dtype.name
- inferred_type = lib.infer_dtype(block.values)
+ inferred_type = lib.infer_dtype(block.values, skipna=False)
if inferred_type == 'date':
raise TypeError(
@@ -1998,7 +1998,7 @@ def set_atom_string(self, block, block_items, existing_col, min_itemsize,
data = block.values
# see if we have a valid string type
- inferred_type = lib.infer_dtype(data.ravel())
+ inferred_type = lib.infer_dtype(data.ravel(), skipna=False)
if inferred_type != 'string':
# we cannot serialize this data, so report an exception on a column
@@ -2006,7 +2006,7 @@ def set_atom_string(self, block, block_items, existing_col, min_itemsize,
for i, item in enumerate(block_items):
col = block.iget(i)
- inferred_type = lib.infer_dtype(col.ravel())
+ inferred_type = lib.infer_dtype(col.ravel(), skipna=False)
if inferred_type != 'string':
raise TypeError(
"Cannot serialize the column [%s] because\n"
@@ -2745,7 +2745,7 @@ def write_array(self, key, value, items=None):
# infer the type, warn if we have a non-string type here (for
# performance)
- inferred_type = lib.infer_dtype(value.ravel())
+ inferred_type = lib.infer_dtype(value.ravel(), skipna=False)
if empty_array:
pass
elif inferred_type == 'string':
@@ -4512,7 +4512,7 @@ def _convert_index(index, encoding=None, errors='strict', format_type=None):
if isinstance(index, MultiIndex):
raise TypeError('MultiIndex not supported here!')
- inferred_type = lib.infer_dtype(index)
+ inferred_type = lib.infer_dtype(index, skipna=False)
values = np.asarray(index)
@@ -4745,7 +4745,7 @@ def __init__(self, table, where=None, start=None, stop=None):
# see if we have a passed coordinate like
try:
- inferred = lib.infer_dtype(where)
+ inferred = lib.infer_dtype(where, skipna=False)
if inferred == 'integer' or inferred == 'boolean':
where = np.asarray(where)
if where.dtype == np.bool_:
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 0eefa85211194..2f4093e154a95 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -857,27 +857,15 @@ def _harmonize_columns(self, parse_dates=None):
except KeyError:
pass # this column not in results
- def _get_notna_col_dtype(self, col):
- """
- Infer datatype of the Series col. In case the dtype of col is 'object'
- and it contains NA values, this infers the datatype of the not-NA
- values. Needed for inserting typed data containing NULLs, GH8778.
- """
- col_for_inference = col
- if col.dtype == 'object':
- notnadata = col[~isna(col)]
- if len(notnadata):
- col_for_inference = notnadata
-
- return lib.infer_dtype(col_for_inference)
-
def _sqlalchemy_type(self, col):
dtype = self.dtype or {}
if col.name in dtype:
return self.dtype[col.name]
- col_type = self._get_notna_col_dtype(col)
+ # Infer type of column, while ignoring missing values.
+ # Needed for inserting typed data containing NULLs, GH 8778.
+ col_type = lib.infer_dtype(col, skipna=True)
from sqlalchemy.types import (BigInteger, Integer, Float,
Text, Boolean,
@@ -1374,7 +1362,10 @@ def _sql_type_name(self, col):
if col.name in dtype:
return dtype[col.name]
- col_type = self._get_notna_col_dtype(col)
+ # Infer type of column, while ignoring missing values.
+ # Needed for inserting typed data containing NULLs, GH 8778.
+ col_type = lib.infer_dtype(col, skipna=True)
+
if col_type == 'timedelta64':
warnings.warn("the 'timedelta' type is not supported, and will be "
"written as integer values (ns frequency) to the "
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index fcd99e7cdce0d..aad57fc489fb6 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -396,7 +396,7 @@ def parse_dates_safe(dates, delta=False, year=False, days=False):
to_datetime(d['year'], format='%Y').astype(np.int64))
d['days'] = days // NS_PER_DAY
- elif infer_dtype(dates) == 'datetime':
+ elif infer_dtype(dates, skipna=False) == 'datetime':
if delta:
delta = dates.values - stata_epoch
f = lambda x: \
@@ -1867,7 +1867,7 @@ def _dtype_to_default_stata_fmt(dtype, column, dta_version=114,
if force_strl:
return '%9s'
if dtype.type == np.object_:
- inferred_dtype = infer_dtype(column.dropna())
+ inferred_dtype = infer_dtype(column, skipna=True)
if not (inferred_dtype in ('string', 'unicode') or
len(column) == 0):
raise ValueError('Column `{col}` cannot be exported.\n\nOnly '
diff --git a/pandas/plotting/_converter.py b/pandas/plotting/_converter.py
index 8cab00fba3aa8..4c6b3b5132fec 100644
--- a/pandas/plotting/_converter.py
+++ b/pandas/plotting/_converter.py
@@ -246,7 +246,7 @@ def _convert_1d(values, units, axis):
return values.asfreq(axis.freq)._ndarray_values
elif isinstance(values, Index):
return values.map(lambda x: get_datevalue(x, axis.freq))
- elif lib.infer_dtype(values) == 'period':
+ elif lib.infer_dtype(values, skipna=False) == 'period':
# https://github.com/pandas-dev/pandas/issues/24304
# convert ndarray[period] -> PeriodIndex
return PeriodIndex(values, freq=axis.freq)._ndarray_values
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index d9b1b0db90562..fff91991ee251 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -342,11 +342,11 @@ def test_infer_dtype_bytes(self):
# string array of bytes
arr = np.array(list('abc'), dtype='S1')
- assert lib.infer_dtype(arr) == compare
+ assert lib.infer_dtype(arr, skipna=False) == compare
# object array of bytes
arr = arr.astype(object)
- assert lib.infer_dtype(arr) == compare
+ assert lib.infer_dtype(arr, skipna=False) == compare
# object array of bytes with missing values
assert lib.infer_dtype([b'a', np.nan, b'c'], skipna=True) == compare
@@ -530,87 +530,91 @@ def test_inferred_dtype_fixture(self, any_skipna_inferred_dtype):
# make sure the inferred dtype of the fixture is as requested
assert inferred_dtype == lib.infer_dtype(values, skipna=True)
- def test_length_zero(self):
- result = lib.infer_dtype(np.array([], dtype='i4'))
+ @pytest.mark.parametrize('skipna', [True, False])
+ def test_length_zero(self, skipna):
+ result = lib.infer_dtype(np.array([], dtype='i4'), skipna=skipna)
assert result == 'integer'
- result = lib.infer_dtype([])
+ result = lib.infer_dtype([], skipna=skipna)
assert result == 'empty'
# GH 18004
arr = np.array([np.array([], dtype=object),
np.array([], dtype=object)])
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=skipna)
assert result == 'empty'
def test_integers(self):
arr = np.array([1, 2, 3, np.int64(4), np.int32(5)], dtype='O')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'integer'
arr = np.array([1, 2, 3, np.int64(4), np.int32(5), 'foo'], dtype='O')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'mixed-integer'
arr = np.array([1, 2, 3, 4, 5], dtype='i4')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'integer'
def test_bools(self):
arr = np.array([True, False, True, True, True], dtype='O')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'boolean'
arr = np.array([np.bool_(True), np.bool_(False)], dtype='O')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'boolean'
arr = np.array([True, False, True, 'foo'], dtype='O')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'mixed'
arr = np.array([True, False, True], dtype=bool)
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'boolean'
arr = np.array([True, np.nan, False], dtype='O')
result = lib.infer_dtype(arr, skipna=True)
assert result == 'boolean'
+ result = lib.infer_dtype(arr, skipna=False)
+ assert result == 'mixed'
+
def test_floats(self):
arr = np.array([1., 2., 3., np.float64(4), np.float32(5)], dtype='O')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'floating'
arr = np.array([1, 2, 3, np.float64(4), np.float32(5), 'foo'],
dtype='O')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'mixed-integer'
arr = np.array([1, 2, 3, 4, 5], dtype='f4')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'floating'
arr = np.array([1, 2, 3, 4, 5], dtype='f8')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'floating'
def test_decimals(self):
# GH15690
arr = np.array([Decimal(1), Decimal(2), Decimal(3)])
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'decimal'
arr = np.array([1.0, 2.0, Decimal(3)])
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'mixed'
arr = np.array([Decimal(1), Decimal('NaN'), Decimal(3)])
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'decimal'
arr = np.array([Decimal(1), np.nan, Decimal(3)], dtype='O')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'decimal'
def test_string(self):
@@ -618,7 +622,7 @@ def test_string(self):
def test_unicode(self):
arr = [u'a', np.nan, u'c']
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'mixed'
arr = [u'a', np.nan, u'c']
@@ -652,135 +656,135 @@ def test_infer_dtype_datetime(self):
arr = np.array([Timestamp('2011-01-01'),
Timestamp('2011-01-02')])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
arr = np.array([np.datetime64('2011-01-01'),
np.datetime64('2011-01-01')], dtype=object)
- assert lib.infer_dtype(arr) == 'datetime64'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime64'
arr = np.array([datetime(2011, 1, 1), datetime(2012, 2, 1)])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
# starts with nan
for n in [pd.NaT, np.nan]:
arr = np.array([n, pd.Timestamp('2011-01-02')])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
arr = np.array([n, np.datetime64('2011-01-02')])
- assert lib.infer_dtype(arr) == 'datetime64'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime64'
arr = np.array([n, datetime(2011, 1, 1)])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
arr = np.array([n, pd.Timestamp('2011-01-02'), n])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
arr = np.array([n, np.datetime64('2011-01-02'), n])
- assert lib.infer_dtype(arr) == 'datetime64'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime64'
arr = np.array([n, datetime(2011, 1, 1), n])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
# different type of nat
arr = np.array([np.timedelta64('nat'),
np.datetime64('2011-01-02')], dtype=object)
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
arr = np.array([np.datetime64('2011-01-02'),
np.timedelta64('nat')], dtype=object)
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
# mixed datetime
arr = np.array([datetime(2011, 1, 1),
pd.Timestamp('2011-01-02')])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
# should be datetime?
arr = np.array([np.datetime64('2011-01-01'),
pd.Timestamp('2011-01-02')])
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
arr = np.array([pd.Timestamp('2011-01-02'),
np.datetime64('2011-01-01')])
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
arr = np.array([np.nan, pd.Timestamp('2011-01-02'), 1])
- assert lib.infer_dtype(arr) == 'mixed-integer'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed-integer'
arr = np.array([np.nan, pd.Timestamp('2011-01-02'), 1.1])
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
arr = np.array([np.nan, '2011-01-01', pd.Timestamp('2011-01-02')])
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
def test_infer_dtype_timedelta(self):
arr = np.array([pd.Timedelta('1 days'),
pd.Timedelta('2 days')])
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
arr = np.array([np.timedelta64(1, 'D'),
np.timedelta64(2, 'D')], dtype=object)
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
arr = np.array([timedelta(1), timedelta(2)])
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
# starts with nan
for n in [pd.NaT, np.nan]:
arr = np.array([n, Timedelta('1 days')])
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
arr = np.array([n, np.timedelta64(1, 'D')])
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
arr = np.array([n, timedelta(1)])
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
arr = np.array([n, pd.Timedelta('1 days'), n])
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
arr = np.array([n, np.timedelta64(1, 'D'), n])
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
arr = np.array([n, timedelta(1), n])
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
# different type of nat
arr = np.array([np.datetime64('nat'), np.timedelta64(1, 'D')],
dtype=object)
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
arr = np.array([np.timedelta64(1, 'D'), np.datetime64('nat')],
dtype=object)
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
def test_infer_dtype_period(self):
# GH 13664
arr = np.array([pd.Period('2011-01', freq='D'),
pd.Period('2011-02', freq='D')])
- assert lib.infer_dtype(arr) == 'period'
+ assert lib.infer_dtype(arr, skipna=False) == 'period'
arr = np.array([pd.Period('2011-01', freq='D'),
pd.Period('2011-02', freq='M')])
- assert lib.infer_dtype(arr) == 'period'
+ assert lib.infer_dtype(arr, skipna=False) == 'period'
# starts with nan
for n in [pd.NaT, np.nan]:
arr = np.array([n, pd.Period('2011-01', freq='D')])
- assert lib.infer_dtype(arr) == 'period'
+ assert lib.infer_dtype(arr, skipna=False) == 'period'
arr = np.array([n, pd.Period('2011-01', freq='D'), n])
- assert lib.infer_dtype(arr) == 'period'
+ assert lib.infer_dtype(arr, skipna=False) == 'period'
# different type of nat
arr = np.array([np.datetime64('nat'), pd.Period('2011-01', freq='M')],
dtype=object)
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
arr = np.array([pd.Period('2011-01', freq='M'), np.datetime64('nat')],
dtype=object)
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
@pytest.mark.parametrize(
"data",
@@ -850,60 +854,62 @@ def test_infer_datetimelike_array_nan_nat_like(self, first, second,
def test_infer_dtype_all_nan_nat_like(self):
arr = np.array([np.nan, np.nan])
- assert lib.infer_dtype(arr) == 'floating'
+ assert lib.infer_dtype(arr, skipna=False) == 'floating'
# nan and None mix are result in mixed
arr = np.array([np.nan, np.nan, None])
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=True) == 'empty'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
arr = np.array([None, np.nan, np.nan])
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=True) == 'empty'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
# pd.NaT
arr = np.array([pd.NaT])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
arr = np.array([pd.NaT, np.nan])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
arr = np.array([np.nan, pd.NaT])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
arr = np.array([np.nan, pd.NaT, np.nan])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
arr = np.array([None, pd.NaT, None])
- assert lib.infer_dtype(arr) == 'datetime'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime'
# np.datetime64(nat)
arr = np.array([np.datetime64('nat')])
- assert lib.infer_dtype(arr) == 'datetime64'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime64'
for n in [np.nan, pd.NaT, None]:
arr = np.array([n, np.datetime64('nat'), n])
- assert lib.infer_dtype(arr) == 'datetime64'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime64'
arr = np.array([pd.NaT, n, np.datetime64('nat'), n])
- assert lib.infer_dtype(arr) == 'datetime64'
+ assert lib.infer_dtype(arr, skipna=False) == 'datetime64'
arr = np.array([np.timedelta64('nat')], dtype=object)
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
for n in [np.nan, pd.NaT, None]:
arr = np.array([n, np.timedelta64('nat'), n])
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
arr = np.array([pd.NaT, n, np.timedelta64('nat'), n])
- assert lib.infer_dtype(arr) == 'timedelta'
+ assert lib.infer_dtype(arr, skipna=False) == 'timedelta'
# datetime / timedelta mixed
arr = np.array([pd.NaT, np.datetime64('nat'),
np.timedelta64('nat'), np.nan])
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
arr = np.array([np.timedelta64('nat'), np.datetime64('nat')],
dtype=object)
- assert lib.infer_dtype(arr) == 'mixed'
+ assert lib.infer_dtype(arr, skipna=False) == 'mixed'
def test_is_datetimelike_array_all_nan_nat_like(self):
arr = np.array([np.nan, pd.NaT, np.datetime64('nat')])
@@ -967,7 +973,7 @@ def test_date(self):
assert index.inferred_type == 'date'
dates = [date(2012, 1, day) for day in range(1, 20)] + [np.nan]
- result = lib.infer_dtype(dates)
+ result = lib.infer_dtype(dates, skipna=False)
assert result == 'mixed'
result = lib.infer_dtype(dates, skipna=True)
@@ -1011,8 +1017,10 @@ def test_object(self):
# GH 7431
# cannot infer more than this as only a single element
arr = np.array([None], dtype='O')
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'mixed'
+ result = lib.infer_dtype(arr, skipna=True)
+ assert result == 'empty'
def test_to_object_array_width(self):
# see gh-13320
@@ -1043,17 +1051,17 @@ def test_categorical(self):
# GH 8974
from pandas import Categorical, Series
arr = Categorical(list('abc'))
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'categorical'
- result = lib.infer_dtype(Series(arr))
+ result = lib.infer_dtype(Series(arr), skipna=False)
assert result == 'categorical'
arr = Categorical(list('abc'), categories=['cegfab'], ordered=True)
- result = lib.infer_dtype(arr)
+ result = lib.infer_dtype(arr, skipna=False)
assert result == 'categorical'
- result = lib.infer_dtype(Series(arr))
+ result = lib.infer_dtype(Series(arr), skipna=False)
assert result == 'categorical'
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index f5a445e2cca9a..a9f78096f3cd1 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -806,12 +806,12 @@ def test_constructor_with_datetime_tz(self):
s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'),
pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Pacific')])
assert s.dtype == 'datetime64[ns, US/Pacific]'
- assert lib.infer_dtype(s) == 'datetime64'
+ assert lib.infer_dtype(s, skipna=False) == 'datetime64'
s = Series([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'),
pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Eastern')])
assert s.dtype == 'object'
- assert lib.infer_dtype(s) == 'datetime'
+ assert lib.infer_dtype(s, skipna=False) == 'datetime'
# with all NaT
s = Series(pd.NaT, index=[0, 1], dtype='datetime64[ns, US/Eastern]')
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index d4ea21632edf9..7cea3be03d1a7 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -155,7 +155,7 @@ def any_allowed_skipna_inferred_dtype(request):
>>> import pandas._libs.lib as lib
>>>
>>> def test_something(any_allowed_skipna_inferred_dtype):
- ... inferred_dtype, values = any_skipna_inferred_dtype
+ ... inferred_dtype, values = any_allowed_skipna_inferred_dtype
... # will pass
... assert lib.infer_dtype(values, skipna=True) == inferred_dtype
"""
| Precursor to #24050 as requested from @jreback in review: https://github.com/pandas-dev/pandas/pull/24050/files#r243769223 / https://github.com/pandas-dev/pandas/pull/24050/files#r243769260
| https://api.github.com/repos/pandas-dev/pandas/pulls/24560 | 2019-01-02T17:39:18Z | 2019-01-03T01:46:55Z | 2019-01-03T01:46:55Z | 2019-01-03T09:56:46Z |
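The diff above threads an explicit `skipna` argument through every `infer_dtype` call. The behavioral difference the tests exercise can be sketched with the public alias `pandas.api.types.infer_dtype` (a minimal sketch; values chosen to mirror the `test_unicode` case in the diff):

```python
import numpy as np
from pandas.api.types import infer_dtype  # public alias of pandas._libs.lib.infer_dtype

arr = np.array(['a', np.nan, 'c'], dtype=object)

# With skipna=True the NaN is ignored, so the remaining values decide the dtype.
skipped = infer_dtype(arr, skipna=True)     # 'string'

# With skipna=False the NaN participates and the result degrades to 'mixed'.
unskipped = infer_dtype(arr, skipna=False)  # 'mixed'
```

Making the keyword explicit at every call site is what lets the default later flip to `skipna=True` without silently changing results.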
REF/TST: mixed use of mock/monkeypatch | diff --git a/pandas/tests/io/test_gbq.py b/pandas/tests/io/test_gbq.py
index 15f366e5e2e9e..d3569af8d7786 100644
--- a/pandas/tests/io/test_gbq.py
+++ b/pandas/tests/io/test_gbq.py
@@ -12,12 +12,6 @@
from pandas import DataFrame, compat
import pandas.util.testing as tm
-try:
- from unittest import mock
-except ImportError:
- mock = pytest.importorskip("mock")
-
-
api_exceptions = pytest.importorskip("google.api_core.exceptions")
bigquery = pytest.importorskip("google.cloud.bigquery")
service_account = pytest.importorskip("google.oauth2.service_account")
@@ -104,8 +98,10 @@ def make_mixed_dataframe_v2(test_size):
def test_read_gbq_without_dialect_warns_future_change(monkeypatch):
# Default dialect is changing to standard SQL. See:
# https://github.com/pydata/pandas-gbq/issues/195
- mock_read_gbq = mock.Mock()
- mock_read_gbq.return_value = DataFrame([[1.0]])
+
+ def mock_read_gbq(*args, **kwargs):
+ return DataFrame([[1.0]])
+
monkeypatch.setattr(pandas_gbq, 'read_gbq', mock_read_gbq)
with tm.assert_produces_warning(FutureWarning):
pd.read_gbq("SELECT 1")
| Doubt this will work, as I don't have gbq configured locally.
will need to check output of CI. | https://api.github.com/repos/pandas-dev/pandas/pulls/24557 | 2019-01-02T16:52:47Z | 2019-01-05T14:50:26Z | 2019-01-05T14:50:26Z | 2019-01-05T21:03:12Z |
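The diff swaps `mock.Mock()` for a plain function handed to pytest's `monkeypatch` fixture. Under the hood, `monkeypatch.setattr` is just an attribute swap with automatic restore at test teardown; a self-contained sketch of that mechanism (with `fake_gbq` standing in for the real `pandas_gbq` module, which is not imported here):

```python
import types

# Stand-in for the pandas_gbq module used in the test above.
fake_gbq = types.SimpleNamespace(read_gbq=lambda *a, **k: "real result")

def mock_read_gbq(*args, **kwargs):
    # Plain function replacement -- no unittest.mock dependency needed.
    return "stubbed"

# What monkeypatch.setattr does: swap the attribute, then restore it.
original = fake_gbq.read_gbq
fake_gbq.read_gbq = mock_read_gbq
try:
    result = fake_gbq.read_gbq("SELECT 1")
finally:
    fake_gbq.read_gbq = original
```

Inside an actual test, `monkeypatch.setattr(pandas_gbq, 'read_gbq', mock_read_gbq)` performs the same swap and handles the restore automatically.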
read_sas catches own error #24548 | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 826c5a795f886..3566d58f5c641 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1599,6 +1599,7 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form
- :func:`read_sas()` will parse numbers in sas7bdat-files that have width less than 8 bytes correctly. (:issue:`21616`)
- :func:`read_sas()` will correctly parse sas7bdat files with many columns (:issue:`22628`)
- :func:`read_sas()` will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (:issue:`16615`)
+- Bug in :func:`read_sas()` in which an incorrect error was raised on an invalid file format. (:issue:`24548`)
- Bug in :meth:`detect_client_encoding` where potential ``IOError`` goes unhandled when importing in a mod_wsgi process due to restricted access to stdout. (:issue:`21552`)
- Bug in :func:`to_html()` with ``index=False`` misses truncation indicators (...) on truncated DataFrame (:issue:`15019`, :issue:`22783`)
- Bug in :func:`to_html()` with ``index=False`` when both columns and row index are ``MultiIndex`` (:issue:`22579`)
diff --git a/pandas/io/sas/sasreader.py b/pandas/io/sas/sasreader.py
index 2da3775d5a6a7..9fae0da670bec 100644
--- a/pandas/io/sas/sasreader.py
+++ b/pandas/io/sas/sasreader.py
@@ -16,8 +16,8 @@ def read_sas(filepath_or_buffer, format=None, index=None, encoding=None,
filepath_or_buffer : string or file-like object
Path to the SAS file.
format : string {'xport', 'sas7bdat'} or None
- If None, file format is inferred. If 'xport' or 'sas7bdat',
- uses the corresponding format.
+ If None, file format is inferred from file extension. If 'xport' or
+ 'sas7bdat', uses the corresponding format.
index : identifier of index column, defaults to None
Identifier of column that should be used as index of the DataFrame.
encoding : string, default is None
@@ -39,16 +39,13 @@ def read_sas(filepath_or_buffer, format=None, index=None, encoding=None,
filepath_or_buffer = _stringify_path(filepath_or_buffer)
if not isinstance(filepath_or_buffer, compat.string_types):
raise ValueError(buffer_error_msg)
- try:
- fname = filepath_or_buffer.lower()
- if fname.endswith(".xpt"):
- format = "xport"
- elif fname.endswith(".sas7bdat"):
- format = "sas7bdat"
- else:
- raise ValueError("unable to infer format of SAS file")
- except ValueError:
- pass
+ fname = filepath_or_buffer.lower()
+ if fname.endswith(".xpt"):
+ format = "xport"
+ elif fname.endswith(".sas7bdat"):
+ format = "sas7bdat"
+ else:
+ raise ValueError("unable to infer format of SAS file")
if format.lower() == 'xport':
from pandas.io.sas.sas_xport import XportReader
diff --git a/pandas/tests/io/sas/test_sas.py b/pandas/tests/io/sas/test_sas.py
index 0f6342aa62ac0..34bca1e5b74a1 100644
--- a/pandas/tests/io/sas/test_sas.py
+++ b/pandas/tests/io/sas/test_sas.py
@@ -3,6 +3,7 @@
from pandas.compat import StringIO
from pandas import read_sas
+import pandas.util.testing as tm
class TestSas(object):
@@ -15,3 +16,10 @@ def test_sas_buffer_format(self):
"name, you must specify a format string")
with pytest.raises(ValueError, match=msg):
read_sas(b)
+
+ def test_sas_read_no_format_or_extension(self):
+ # see gh-24548
+ msg = ("unable to infer format of SAS file")
+ with tm.ensure_clean('test_file_no_extension') as path:
+ with pytest.raises(ValueError, match=msg):
+ read_sas(path)
| - [ ] closes #24548
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/24554 | 2019-01-02T15:47:13Z | 2019-01-03T00:44:21Z | 2019-01-03T00:44:21Z | 2019-01-03T00:44:24Z |
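The fix above removes a `try/except ValueError` that swallowed the very error it raised, so an unrecognized extension now actually surfaces. The corrected inference logic can be sketched standalone (`infer_sas_format` is a hypothetical helper name for illustration, not a pandas API):

```python
def infer_sas_format(filepath):
    """Infer the SAS file format from the file extension,
    raising instead of silently catching our own ValueError."""
    fname = filepath.lower()
    if fname.endswith(".xpt"):
        return "xport"
    if fname.endswith(".sas7bdat"):
        return "sas7bdat"
    raise ValueError("unable to infer format of SAS file")
```

With the old code, a path with no recognized extension fell through with `format` still `None`, and the subsequent `format.lower()` raised a confusing `AttributeError` rather than the intended `ValueError`.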
TST: isort tests/groupby | diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 52bfee66f94f8..62ec0555f9033 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -4,15 +4,15 @@
test .agg behavior / note that .apply is tested generally in test_groupby.py
"""
+import numpy as np
import pytest
-import numpy as np
-import pandas as pd
+from pandas.compat import OrderedDict
-from pandas import concat, DataFrame, Index, MultiIndex, Series
-from pandas.core.groupby.grouper import Grouping
+import pandas as pd
+from pandas import DataFrame, Index, MultiIndex, Series, concat
from pandas.core.base import SpecificationError
-from pandas.compat import OrderedDict
+from pandas.core.groupby.grouper import Grouping
import pandas.util.testing as tm
diff --git a/pandas/tests/groupby/aggregate/test_cython.py b/pandas/tests/groupby/aggregate/test_cython.py
index ad5968bca5c03..ad3974d5e2fb8 100644
--- a/pandas/tests/groupby/aggregate/test_cython.py
+++ b/pandas/tests/groupby/aggregate/test_cython.py
@@ -6,13 +6,12 @@
from __future__ import print_function
+import numpy as np
import pytest
-import numpy as np
import pandas as pd
-
-from pandas import (bdate_range, DataFrame, Index, Series, Timestamp,
- Timedelta, NaT)
+from pandas import (
+ DataFrame, Index, NaT, Series, Timedelta, Timestamp, bdate_range)
from pandas.core.groupby.groupby import DataError
import pandas.util.testing as tm
diff --git a/pandas/tests/groupby/aggregate/test_other.py b/pandas/tests/groupby/aggregate/test_other.py
index fca863b4d8eb0..b5214b11bddcc 100644
--- a/pandas/tests/groupby/aggregate/test_other.py
+++ b/pandas/tests/groupby/aggregate/test_other.py
@@ -6,22 +6,22 @@
from __future__ import print_function
-import pytest
from collections import OrderedDict
-
import datetime as dt
from functools import partial
import numpy as np
-import pandas as pd
+import pytest
+import pandas as pd
from pandas import (
- date_range, DataFrame, Index, MultiIndex, PeriodIndex, period_range, Series
-)
+ DataFrame, Index, MultiIndex, PeriodIndex, Series, date_range,
+ period_range)
from pandas.core.groupby.groupby import SpecificationError
-from pandas.io.formats.printing import pprint_thing
import pandas.util.testing as tm
+from pandas.io.formats.printing import pprint_thing
+
def test_agg_api():
# GH 6337
diff --git a/pandas/tests/groupby/conftest.py b/pandas/tests/groupby/conftest.py
index 657da422bf02c..cb4fe511651ee 100644
--- a/pandas/tests/groupby/conftest.py
+++ b/pandas/tests/groupby/conftest.py
@@ -1,6 +1,7 @@
-import pytest
import numpy as np
-from pandas import MultiIndex, DataFrame
+import pytest
+
+from pandas import DataFrame, MultiIndex
from pandas.util import testing as tm
diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py
index 8366f75a5795e..659d1a9cf9813 100644
--- a/pandas/tests/groupby/test_apply.py
+++ b/pandas/tests/groupby/test_apply.py
@@ -1,9 +1,11 @@
-import pytest
+from datetime import datetime
+
import numpy as np
+import pytest
+
import pandas as pd
-from datetime import datetime
+from pandas import DataFrame, Index, MultiIndex, Series, bdate_range, compat
from pandas.util import testing as tm
-from pandas import DataFrame, MultiIndex, compat, Series, bdate_range, Index
def test_apply_issues():
diff --git a/pandas/tests/groupby/test_bin_groupby.py b/pandas/tests/groupby/test_bin_groupby.py
index 9dcc13c15736f..f33df5fb0eb98 100644
--- a/pandas/tests/groupby/test_bin_groupby.py
+++ b/pandas/tests/groupby/test_bin_groupby.py
@@ -1,16 +1,17 @@
# -*- coding: utf-8 -*-
+import numpy as np
+from numpy import nan
import pytest
-from numpy import nan
-import numpy as np
+from pandas._libs import groupby, lib, reduction
from pandas.core.dtypes.common import ensure_int64
+
from pandas import Index, isna
from pandas.core.groupby.ops import generate_bins_generic
-from pandas.util.testing import assert_almost_equal
import pandas.util.testing as tm
-from pandas._libs import lib, groupby, reduction
+from pandas.util.testing import assert_almost_equal
def test_series_grouper():
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index a39600d114b89..144b64025e1c0 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -1,17 +1,19 @@
# -*- coding: utf-8 -*-
from __future__ import print_function
+
from datetime import datetime
+import numpy as np
import pytest
-import numpy as np
-import pandas as pd
from pandas.compat import PY37
-from pandas import (Index, MultiIndex, CategoricalIndex,
- DataFrame, Categorical, Series, qcut)
-from pandas.util.testing import (assert_equal,
- assert_frame_equal, assert_series_equal)
+
+import pandas as pd
+from pandas import (
+ Categorical, CategoricalIndex, DataFrame, Index, MultiIndex, Series, qcut)
import pandas.util.testing as tm
+from pandas.util.testing import (
+ assert_equal, assert_frame_equal, assert_series_equal)
def cartesian_product_for_groupers(result, args, names):
diff --git a/pandas/tests/groupby/test_counting.py b/pandas/tests/groupby/test_counting.py
index 8b9f3607d5c3e..1438de5b7e37c 100644
--- a/pandas/tests/groupby/test_counting.py
+++ b/pandas/tests/groupby/test_counting.py
@@ -4,10 +4,10 @@
import numpy as np
import pytest
-from pandas import (DataFrame, Series, MultiIndex, Timestamp, Timedelta,
- Period)
-from pandas.util.testing import (assert_series_equal, assert_frame_equal)
-from pandas.compat import (range, product as cart_product)
+from pandas.compat import product as cart_product, range
+
+from pandas import DataFrame, MultiIndex, Period, Series, Timedelta, Timestamp
+from pandas.util.testing import assert_frame_equal, assert_series_equal
class TestCounting(object):
diff --git a/pandas/tests/groupby/test_filters.py b/pandas/tests/groupby/test_filters.py
index 205b06c5b679f..8195d36b7bfe9 100644
--- a/pandas/tests/groupby/test_filters.py
+++ b/pandas/tests/groupby/test_filters.py
@@ -1,11 +1,12 @@
# -*- coding: utf-8 -*-
from __future__ import print_function
-import pytest
import numpy as np
-import pandas.util.testing as tm
-from pandas import Timestamp, DataFrame, Series
+import pytest
+
import pandas as pd
+from pandas import DataFrame, Series, Timestamp
+import pandas.util.testing as tm
def test_filter_series():
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index 310a2fb1e609d..00714c3333bde 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -1,14 +1,16 @@
-import pytest
+from string import ascii_lowercase
import numpy as np
-import pandas as pd
-from pandas import (DataFrame, Index, compat, isna,
- Series, MultiIndex, Timestamp, date_range)
+import pytest
+
+from pandas.compat import product as cart_product
from pandas.errors import UnsupportedFunctionCall
-from pandas.util import testing as tm
+
+import pandas as pd
+from pandas import (
+ DataFrame, Index, MultiIndex, Series, Timestamp, compat, date_range, isna)
import pandas.core.nanops as nanops
-from string import ascii_lowercase
-from pandas.compat import product as cart_product
+from pandas.util import testing as tm
@pytest.mark.parametrize("agg_func", ['any', 'all'])
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index e9de46bba03f1..33cfb9a06a805 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -1,26 +1,25 @@
# -*- coding: utf-8 -*-
from __future__ import print_function
-import pytest
-
+from collections import defaultdict
from datetime import datetime
from decimal import Decimal
-from pandas import (date_range, Timestamp,
- Index, MultiIndex, DataFrame, Series,
- Panel, read_csv)
-from pandas.errors import PerformanceWarning
-from pandas.util.testing import (assert_frame_equal,
- assert_series_equal, assert_almost_equal)
-from pandas.compat import (range, lrange, StringIO, lmap, lzip, map, zip,
- OrderedDict)
-from pandas import compat
-from collections import defaultdict
-import pandas.core.common as com
import numpy as np
+import pytest
+
+from pandas.compat import (
+ OrderedDict, StringIO, lmap, lrange, lzip, map, range, zip)
+from pandas.errors import PerformanceWarning
-import pandas.util.testing as tm
import pandas as pd
+from pandas import (
+ DataFrame, Index, MultiIndex, Panel, Series, Timestamp, compat, date_range,
+ read_csv)
+import pandas.core.common as com
+import pandas.util.testing as tm
+from pandas.util.testing import (
+ assert_almost_equal, assert_frame_equal, assert_series_equal)
def test_repr():
diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py
index bcf4f42d8ca5e..55d9cee0376f1 100644
--- a/pandas/tests/groupby/test_grouping.py
+++ b/pandas/tests/groupby/test_grouping.py
@@ -2,25 +2,25 @@
""" test where we are determining what we are grouping, or getting groups """
+import numpy as np
import pytest
-from pandas import (date_range, Timestamp,
- Index, MultiIndex, DataFrame, Series, CategoricalIndex)
-from pandas.util.testing import (assert_panel_equal, assert_frame_equal,
- assert_series_equal, assert_almost_equal)
-from pandas.core.groupby.grouper import Grouping
-from pandas.compat import lrange, long
+from pandas.compat import long, lrange
-from pandas import compat
-import numpy as np
-
-import pandas.util.testing as tm
import pandas as pd
-
+from pandas import (
+ CategoricalIndex, DataFrame, Index, MultiIndex, Series, Timestamp, compat,
+ date_range)
+from pandas.core.groupby.grouper import Grouping
+import pandas.util.testing as tm
+from pandas.util.testing import (
+ assert_almost_equal, assert_frame_equal, assert_panel_equal,
+ assert_series_equal)
# selection
# --------------------------------
+
class TestSelection(object):
def test_select_bad_cols(self):
diff --git a/pandas/tests/groupby/test_index_as_string.py b/pandas/tests/groupby/test_index_as_string.py
index 6afa63c31e3b6..141381f84300b 100644
--- a/pandas/tests/groupby/test_index_as_string.py
+++ b/pandas/tests/groupby/test_index_as_string.py
@@ -1,7 +1,7 @@
-import pytest
-import pandas as pd
import numpy as np
+import pytest
+import pandas as pd
from pandas.util.testing import assert_frame_equal, assert_series_equal
diff --git a/pandas/tests/groupby/test_nth.py b/pandas/tests/groupby/test_nth.py
index 4ea4b580a2c3f..255d9a8acf2d0 100644
--- a/pandas/tests/groupby/test_nth.py
+++ b/pandas/tests/groupby/test_nth.py
@@ -1,12 +1,12 @@
import numpy as np
-import pandas as pd
-from pandas import DataFrame, MultiIndex, Index, Series, isna, Timestamp
+import pytest
+
from pandas.compat import lrange
+
+import pandas as pd
+from pandas import DataFrame, Index, MultiIndex, Series, Timestamp, isna
from pandas.util.testing import (
- assert_frame_equal,
- assert_produces_warning,
- assert_series_equal)
-import pytest
+ assert_frame_equal, assert_produces_warning, assert_series_equal)
def test_first_last_nth(df):
diff --git a/pandas/tests/groupby/test_rank.py b/pandas/tests/groupby/test_rank.py
index e58e12ab83143..9b0396bb530a1 100644
--- a/pandas/tests/groupby/test_rank.py
+++ b/pandas/tests/groupby/test_rank.py
@@ -1,5 +1,6 @@
-import pytest
import numpy as np
+import pytest
+
import pandas as pd
from pandas import DataFrame, Series, concat
from pandas.util import testing as tm
diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py
index cb7b419710837..a2f2c1392b251 100644
--- a/pandas/tests/groupby/test_timegrouper.py
+++ b/pandas/tests/groupby/test_timegrouper.py
@@ -1,17 +1,17 @@
""" test with the TimeGrouper / grouping with datetimes """
-import pytest
-import pytz
-
from datetime import datetime
+
import numpy as np
from numpy import nan
+import pytest
+import pytz
+
+from pandas.compat import StringIO
import pandas as pd
-from pandas import (DataFrame, date_range, Index,
- Series, MultiIndex, Timestamp)
+from pandas import DataFrame, Index, MultiIndex, Series, Timestamp, date_range
from pandas.core.groupby.ops import BinGrouper
-from pandas.compat import StringIO
from pandas.util import testing as tm
from pandas.util.testing import assert_frame_equal, assert_series_equal
diff --git a/pandas/tests/groupby/test_transform.py b/pandas/tests/groupby/test_transform.py
index b6361b4ad76a0..465ae67fd7318 100644
--- a/pandas/tests/groupby/test_transform.py
+++ b/pandas/tests/groupby/test_transform.py
@@ -1,19 +1,19 @@
""" test with the .transform """
+import numpy as np
import pytest
-import numpy as np
-import pandas as pd
-from pandas.util import testing as tm
-from pandas import Series, DataFrame, Timestamp, MultiIndex, concat, date_range
-from pandas.core.dtypes.common import (
- ensure_platform_int, is_timedelta64_dtype)
-from pandas.compat import StringIO
from pandas._libs import groupby
+from pandas.compat import StringIO
-from pandas.util.testing import assert_frame_equal, assert_series_equal
-from pandas.core.groupby.groupby import DataError
+from pandas.core.dtypes.common import ensure_platform_int, is_timedelta64_dtype
+
+import pandas as pd
+from pandas import DataFrame, MultiIndex, Series, Timestamp, concat, date_range
from pandas.core.config import option_context
+from pandas.core.groupby.groupby import DataError
+from pandas.util import testing as tm
+from pandas.util.testing import assert_frame_equal, assert_series_equal
def assert_fp_equal(a, b):
diff --git a/pandas/tests/groupby/test_value_counts.py b/pandas/tests/groupby/test_value_counts.py
index 1434656115d18..2b5f87aa59a8d 100644
--- a/pandas/tests/groupby/test_value_counts.py
+++ b/pandas/tests/groupby/test_value_counts.py
@@ -4,13 +4,13 @@
and proper parameter handling
"""
-import pytest
-
from itertools import product
+
import numpy as np
+import pytest
+from pandas import DataFrame, MultiIndex, Series, date_range
from pandas.util import testing as tm
-from pandas import MultiIndex, DataFrame, Series, date_range
# our starting frame
diff --git a/pandas/tests/groupby/test_whitelist.py b/pandas/tests/groupby/test_whitelist.py
index a451acebcdba4..b7302b3911e58 100644
--- a/pandas/tests/groupby/test_whitelist.py
+++ b/pandas/tests/groupby/test_whitelist.py
@@ -3,10 +3,12 @@
the so-called white/black lists
"""
-import pytest
from string import ascii_lowercase
+
import numpy as np
-from pandas import DataFrame, Series, compat, date_range, Index, MultiIndex
+import pytest
+
+from pandas import DataFrame, Index, MultiIndex, Series, compat, date_range
from pandas.util import testing as tm
AGG_FUNCTIONS = ['sum', 'prod', 'min', 'max', 'median', 'mean', 'skew',
diff --git a/setup.cfg b/setup.cfg
index 59e5991914ca6..0738eae9cfd6d 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -153,25 +153,6 @@ skip=
pandas/tests/arithmetic/conftest.py,
pandas/tests/arithmetic/test_timedelta64.py,
pandas/tests/internals/test_internals.py,
- pandas/tests/groupby/test_value_counts.py,
- pandas/tests/groupby/test_filters.py,
- pandas/tests/groupby/test_nth.py,
- pandas/tests/groupby/test_timegrouper.py,
- pandas/tests/groupby/test_transform.py,
- pandas/tests/groupby/test_bin_groupby.py,
- pandas/tests/groupby/test_index_as_string.py,
- pandas/tests/groupby/test_groupby.py,
- pandas/tests/groupby/test_whitelist.py,
- pandas/tests/groupby/test_function.py,
- pandas/tests/groupby/test_apply.py,
- pandas/tests/groupby/conftest.py,
- pandas/tests/groupby/test_counting.py,
- pandas/tests/groupby/test_categorical.py,
- pandas/tests/groupby/test_grouping.py,
- pandas/tests/groupby/test_rank.py,
- pandas/tests/groupby/aggregate/test_cython.py,
- pandas/tests/groupby/aggregate/test_other.py,
- pandas/tests/groupby/aggregate/test_aggregate.py,
pandas/tests/plotting/test_datetimelike.py,
pandas/tests/plotting/test_series.py,
pandas/tests/plotting/test_groupby.py,
| xref #23334
| https://api.github.com/repos/pandas-dev/pandas/pulls/24553 | 2019-01-02T14:30:03Z | 2019-01-02T17:56:21Z | 2019-01-02T17:56:21Z | 2019-01-02T17:57:49Z |
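The import reordering in the hunks above is mechanical: the repository's isort configuration (the `force_sort_within_sections=True` setting visible in the setup.cfg hunks of these PRs) sorts plain `import` and `from … import` statements together within each section, keyed on the module path rather than the leading keyword. A rough sketch of that sort key — an approximation of isort's behavior for illustration, not its actual implementation:

```python
def isort_key(line):
    # Approximate force_sort_within_sections: drop the leading keyword so
    # "import x" and "from x import y" interleave, ordered by module path.
    for prefix in ("from ", "import "):
        if line.startswith(prefix):
            return line[len(prefix):]
    return line

# The three third-party pandas imports from the test_filters.py hunk,
# deliberately out of order:
section = [
    "import pandas.util.testing as tm",
    "from pandas import DataFrame, Series, Timestamp",
    "import pandas as pd",
]
for line in sorted(section, key=isort_key):
    print(line)
```

Sorting with this key reproduces the order the diff settles on for `test_filters.py`: `import pandas as pd`, then the `from pandas import …` line, then `import pandas.util.testing as tm`.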
DOC: fix some doc build warnings/errors | diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst
index 0c192a0aab24a..0f9726dc94816 100644
--- a/doc/source/cookbook.rst
+++ b/doc/source/cookbook.rst
@@ -1236,7 +1236,7 @@ the following Python code will read the binary file ``'binary.dat'`` into a
pandas ``DataFrame``, where each element of the struct corresponds to a column
in the frame:
-.. ipython:: python
+.. code-block:: python
names = 'count', 'avg', 'scale'
diff --git a/doc/source/integer_na.rst b/doc/source/integer_na.rst
index befcf7016f155..eb0c5e3d05863 100644
--- a/doc/source/integer_na.rst
+++ b/doc/source/integer_na.rst
@@ -2,7 +2,7 @@
{{ header }}
- .. _integer_na:
+.. _integer_na:
**************************
Nullable Integer Data Type
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 3bbd4e8410fa5..967648f3a168a 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -4880,7 +4880,7 @@ below and the SQLAlchemy `documentation <https://docs.sqlalchemy.org/en/latest/c
If you want to manage your own connections you can pass one of those instead:
-.. ipython:: python
+.. code-block:: python
with engine.connect() as conn, conn.begin():
data = pd.read_sql_table('data', conn)
diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index a9234b83c78ab..a462f01dcd14f 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -759,12 +759,7 @@ the ``dtype="Int64"``.
.. ipython:: python
- s = pd.Series(np.random.randn(5), index=[0, 2, 4, 6, 7],
- dtype="Int64")
- s > 0
- (s > 0).dtype
- crit = (s > 0).reindex(list(range(8)))
- crit
- crit.dtype
+ s = pd.Series([0, 1, np.nan, 3, 4], dtype="Int64")
+ s
See :ref:`integer_na` for more.
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 59e8fa58fd9cf..84fca37318091 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1874,7 +1874,7 @@ has multiplied span.
.. ipython:: python
- pd.PeriodIndex(start='2014-01', freq='3M', periods=4)
+ pd.period_range(start='2014-01', freq='3M', periods=4)
If ``start`` or ``end`` are ``Period`` objects, they will be used as anchor
endpoints for a ``PeriodIndex`` with frequency matching that of the
@@ -1882,8 +1882,8 @@ endpoints for a ``PeriodIndex`` with frequency matching that of the
.. ipython:: python
- pd.PeriodIndex(start=pd.Period('2017Q1', freq='Q'),
- end=pd.Period('2017Q2', freq='Q'), freq='M')
+ pd.period_range(start=pd.Period('2017Q1', freq='Q'),
+ end=pd.Period('2017Q2', freq='Q'), freq='M')
Just like ``DatetimeIndex``, a ``PeriodIndex`` can also be used to index pandas
objects:
diff --git a/doc/source/tutorials.rst b/doc/source/tutorials.rst
index f4f33a921dcce..8e23c643280c1 100644
--- a/doc/source/tutorials.rst
+++ b/doc/source/tutorials.rst
@@ -33,7 +33,7 @@ repository <http://github.com/jvns/pandas-cookbook>`_.
Learn Pandas by Hernan Rojas
----------------------------
-A set of lesson for new pandas users: `Learn pandas <https://bitbucket.org/hrojas/learn-pandas>`__.
+A set of lesson for new pandas users: https://bitbucket.org/hrojas/learn-pandas
Practical data analysis with Python
-----------------------------------
diff --git a/doc/source/whatsnew/v0.20.0.rst b/doc/source/whatsnew/v0.20.0.rst
index c78d5c1d178d2..c720e075012eb 100644
--- a/doc/source/whatsnew/v0.20.0.rst
+++ b/doc/source/whatsnew/v0.20.0.rst
@@ -189,14 +189,15 @@ URLs and paths are now inferred using their file extensions. Additionally,
support for bz2 compression in the python 2 C-engine improved (:issue:`14874`).
.. ipython:: python
- :okwarning:
url = ('https://github.com/{repo}/raw/{branch}/{path}'
.format(repo='pandas-dev/pandas',
branch='master',
path='pandas/tests/io/parser/data/salaries.csv.bz2'))
- df = pd.read_table(url, compression='infer') # default, infer compression
- df = pd.read_table(url, compression='bz2') # explicitly specify compression
+ # default, infer compression
+ df = pd.read_csv(url, sep='\t', compression='infer')
+ # explicitly specify compression
+ df = pd.read_csv(url, sep='\t', compression='bz2')
df.head(2)
.. _whatsnew_0200.enhancements.pickle_compression:
| Some follow-ups on recent doc PRs | https://api.github.com/repos/pandas-dev/pandas/pulls/24552 | 2019-01-02T14:29:31Z | 2019-01-02T17:03:17Z | 2019-01-02T17:03:17Z | 2019-01-02T17:07:46Z |
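The timeseries.rst hunks in this row swap the deprecated `PeriodIndex(start=..., end=...)` constructor form for `pd.period_range`, the supported way to build a range of periods. A minimal sketch of the replacement call, assuming pandas is installed:

```python
import pandas as pd

# pd.period_range replaces the deprecated PeriodIndex(start=...) form:
# four 3-month periods anchored at January 2014, matching the doc example.
pidx = pd.period_range(start='2014-01', freq='3M', periods=4)
print(pidx)
```

As the updated doc example notes, `period_range` also accepts `Period` objects for `start`/`end`, in which case they anchor the resulting index.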
TST: isort tests/test_* | diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 8d7fd6449b354..294eae9d45bee 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -1,28 +1,30 @@
# -*- coding: utf-8 -*-
-import numpy as np
-import pytest
-
-from numpy.random import RandomState
-from numpy import nan
from datetime import datetime
from itertools import permutations
import struct
-from pandas import (Series, Categorical, CategoricalIndex,
- Timestamp, DatetimeIndex, Index, IntervalIndex)
-import pandas as pd
-from pandas import compat
-from pandas._libs import (groupby as libgroupby, algos as libalgos,
- hashtable as ht)
+import numpy as np
+from numpy import nan
+from numpy.random import RandomState
+import pytest
+
+from pandas._libs import (
+ algos as libalgos, groupby as libgroupby, hashtable as ht)
from pandas.compat import lrange, range
-from pandas.core.arrays import DatetimeArrayMixin as DatetimeArray
+from pandas.compat.numpy import np_array_datetime64_compat
+import pandas.util._test_decorators as td
+
+from pandas.core.dtypes.dtypes import CategoricalDtype as CDT
+
+import pandas as pd
+from pandas import (
+ Categorical, CategoricalIndex, DatetimeIndex, Index, IntervalIndex, Series,
+ Timestamp, compat)
import pandas.core.algorithms as algos
+from pandas.core.arrays import DatetimeArrayMixin as DatetimeArray
import pandas.core.common as com
import pandas.util.testing as tm
-import pandas.util._test_decorators as td
-from pandas.core.dtypes.dtypes import CategoricalDtype as CDT
-from pandas.compat.numpy import np_array_datetime64_compat
from pandas.util.testing import assert_almost_equal
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index f941f2ff32fa1..85650a9b0df0d 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -1,32 +1,33 @@
# -*- coding: utf-8 -*-
from __future__ import print_function
+from datetime import datetime, timedelta
import re
import sys
-from datetime import datetime, timedelta
-import pytest
+
import numpy as np
+import pytest
-import pandas as pd
+from pandas._libs.tslib import iNaT
import pandas.compat as compat
-from pandas.core.dtypes.common import (
- is_object_dtype, is_datetime64_dtype, is_datetime64tz_dtype,
- needs_i8_conversion, is_timedelta64_dtype)
-import pandas.util.testing as tm
-from pandas import (Series, Index, DatetimeIndex, TimedeltaIndex,
- PeriodIndex, Timedelta, IntervalIndex, Interval,
- CategoricalIndex, Timestamp, DataFrame, Panel)
-from pandas.core.arrays import (
- PandasArray,
- DatetimeArrayMixin as DatetimeArray,
- TimedeltaArrayMixin as TimedeltaArray,
-)
-from pandas.compat import StringIO, PYPY, long
+from pandas.compat import PYPY, StringIO, long
from pandas.compat.numpy import np_array_datetime64_compat
+
+from pandas.core.dtypes.common import (
+ is_datetime64_dtype, is_datetime64tz_dtype, is_object_dtype,
+ is_timedelta64_dtype, needs_i8_conversion)
+
+import pandas as pd
+from pandas import (
+ CategoricalIndex, DataFrame, DatetimeIndex, Index, Interval, IntervalIndex,
+ Panel, PeriodIndex, Series, Timedelta, TimedeltaIndex, Timestamp)
from pandas.core.accessor import PandasDelegate
-from pandas.core.base import PandasObject, NoNewAttributesMixin
+from pandas.core.arrays import (
+ DatetimeArrayMixin as DatetimeArray, PandasArray,
+ TimedeltaArrayMixin as TimedeltaArray)
+from pandas.core.base import NoNewAttributesMixin, PandasObject
from pandas.core.indexes.datetimelike import DatetimeIndexOpsMixin
-from pandas._libs.tslib import iNaT
+import pandas.util.testing as tm
class CheckStringMixin(object):
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index ae46bee901ff2..18eb760e31db8 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -1,18 +1,15 @@
# -*- coding: utf-8 -*-
import collections
-import string
from functools import partial
+import string
import numpy as np
import pytest
import pandas as pd
from pandas import Series, Timestamp
-from pandas.core import (
- common as com,
- ops,
-)
+from pandas.core import common as com, ops
def test_get_callable_name():
diff --git a/pandas/tests/test_compat.py b/pandas/tests/test_compat.py
index 79d3aad493182..d1a3ee43a4623 100644
--- a/pandas/tests/test_compat.py
+++ b/pandas/tests/test_compat.py
@@ -3,11 +3,13 @@
Testing that functions from compat work as expected
"""
-import pytest
import re
-from pandas.compat import (range, zip, map, filter, lrange, lzip, lmap,
- lfilter, builtins, iterkeys, itervalues, iteritems,
- next, get_range_parameters, PY2, re_type)
+
+import pytest
+
+from pandas.compat import (
+ PY2, builtins, filter, get_range_parameters, iteritems, iterkeys,
+ itervalues, lfilter, lmap, lrange, lzip, map, next, range, re_type, zip)
class TestBuiltinIterators(object):
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index fd8e98c483f78..2cdcb948eb917 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -1,10 +1,10 @@
# -*- coding: utf-8 -*-
+import warnings
+
import pytest
import pandas as pd
-import warnings
-
class TestConfig(object):
diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py
index 1d17b514a5b67..e22b9a0ef25e3 100644
--- a/pandas/tests/test_downstream.py
+++ b/pandas/tests/test_downstream.py
@@ -2,15 +2,17 @@
"""
Testing that we work in the downstream packages
"""
+import importlib
import subprocess
import sys
-import pytest
import numpy as np # noqa
-from pandas import DataFrame
+import pytest
+
from pandas.compat import PY36
+
+from pandas import DataFrame
from pandas.util import testing as tm
-import importlib
def import_module(name):
diff --git a/pandas/tests/test_errors.py b/pandas/tests/test_errors.py
index c5ea69b5ec46f..d3b6a237a97a1 100644
--- a/pandas/tests/test_errors.py
+++ b/pandas/tests/test_errors.py
@@ -1,10 +1,11 @@
# -*- coding: utf-8 -*-
import pytest
-import pandas # noqa
-import pandas as pd
+
from pandas.errors import AbstractMethodError
+import pandas as pd # noqa
+
@pytest.mark.parametrize(
"exc", ['UnsupportedFunctionCall', 'UnsortedIndexError',
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index cc5ae9b15ba9e..f5aa0b0b3c9c8 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -1,23 +1,25 @@
# -*- coding: utf-8 -*-
from __future__ import print_function
-# pylint: disable-msg=W0612,E1101
-from warnings import catch_warnings, simplefilter
-import re
import operator
-import pytest
-
-from numpy.random import randn
+import re
+from warnings import catch_warnings, simplefilter
import numpy as np
+from numpy.random import randn
+import pytest
+from pandas import _np_version_under1p13, compat
from pandas.core.api import DataFrame, Panel
from pandas.core.computation import expressions as expr
-from pandas import compat, _np_version_under1p13
-from pandas.util.testing import (assert_almost_equal, assert_series_equal,
- assert_frame_equal, assert_panel_equal)
-from pandas.io.formats.printing import pprint_thing
import pandas.util.testing as tm
+from pandas.util.testing import (
+ assert_almost_equal, assert_frame_equal, assert_panel_equal,
+ assert_series_equal)
+
+from pandas.io.formats.printing import pprint_thing
+
+# pylint: disable-msg=W0612,E1101
_frame = DataFrame(randn(10000, 4), columns=list('ABCD'), dtype='float64')
diff --git a/pandas/tests/test_join.py b/pandas/tests/test_join.py
index af946436b55c7..5b6656de15731 100644
--- a/pandas/tests/test_join.py
+++ b/pandas/tests/test_join.py
@@ -1,9 +1,10 @@
# -*- coding: utf-8 -*-
import numpy as np
-from pandas import Index, DataFrame, Categorical, merge
from pandas._libs import join as _join
+
+from pandas import Categorical, DataFrame, Index, merge
import pandas.util.testing as tm
from pandas.util.testing import assert_almost_equal, assert_frame_equal
diff --git a/pandas/tests/test_lib.py b/pandas/tests/test_lib.py
index d0812eae80f2d..c5dcfc89faa67 100644
--- a/pandas/tests/test_lib.py
+++ b/pandas/tests/test_lib.py
@@ -1,10 +1,11 @@
# -*- coding: utf-8 -*-
+import numpy as np
import pytest
-import numpy as np
-from pandas import Index
from pandas._libs import lib, writers as libwriters
+
+from pandas import Index
import pandas.util.testing as tm
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index ce95f0f86ef7b..b5023c376dedd 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -1,23 +1,23 @@
# -*- coding: utf-8 -*-
# pylint: disable-msg=W0612,E1101,W0141
-from warnings import catch_warnings, simplefilter
import datetime
import itertools
+from warnings import catch_warnings, simplefilter
+import numpy as np
+from numpy.random import randn
import pytest
import pytz
-from numpy.random import randn
-import numpy as np
-
-from pandas.core.index import Index, MultiIndex
-from pandas import (Panel, DataFrame, Series, isna, Timestamp)
+from pandas.compat import (
+ StringIO, lrange, lzip, product as cart_product, range, u, zip)
from pandas.core.dtypes.common import is_float_dtype, is_integer_dtype
-import pandas.util.testing as tm
-from pandas.compat import (range, lrange, StringIO, lzip, u, product as
- cart_product, zip)
+
import pandas as pd
+from pandas import DataFrame, Panel, Series, Timestamp, isna
+from pandas.core.index import Index, MultiIndex
+import pandas.util.testing as tm
AGG_FUNCTIONS = ['sum', 'prod', 'min', 'max', 'median', 'mean', 'skew', 'mad',
'std', 'var', 'sem']
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 1e08914811402..cc793767d3af6 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -1,20 +1,22 @@
# -*- coding: utf-8 -*-
from __future__ import division, print_function
-import warnings
from functools import partial
+import warnings
import numpy as np
import pytest
-import pandas as pd
-import pandas.core.nanops as nanops
-import pandas.util._test_decorators as td
-import pandas.util.testing as tm
-from pandas import Series, isna
from pandas.compat.numpy import _np_version_under1p13
+import pandas.util._test_decorators as td
+
from pandas.core.dtypes.common import is_integer_dtype
+
+import pandas as pd
+from pandas import Series, isna
from pandas.core.arrays import DatetimeArrayMixin as DatetimeArray
+import pandas.core.nanops as nanops
+import pandas.util.testing as tm
use_bn = nanops._USE_BOTTLENECK
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 33f2c34400373..5539778e1d187 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1,31 +1,31 @@
# -*- coding: utf-8 -*-
# pylint: disable=W0612,E1101
-from warnings import catch_warnings, simplefilter
from datetime import datetime
import operator
-import pytest
+from warnings import catch_warnings, simplefilter
import numpy as np
+import pytest
+
+from pandas.compat import OrderedDict, StringIO, lrange, range, signature
+import pandas.util._test_decorators as td
from pandas.core.dtypes.common import is_float_dtype
-from pandas import (Series, DataFrame, Index, date_range, isna, notna,
- MultiIndex)
+
+from pandas import (
+ DataFrame, Index, MultiIndex, Series, compat, date_range, isna, notna)
from pandas.core.nanops import nanall, nanany
+import pandas.core.panel as panelm
from pandas.core.panel import Panel
+import pandas.util.testing as tm
+from pandas.util.testing import (
+ assert_almost_equal, assert_frame_equal, assert_panel_equal,
+ assert_series_equal, ensure_clean, makeCustomDataframe as mkdf,
+ makeMixedDataFrame)
from pandas.io.formats.printing import pprint_thing
-from pandas import compat
-from pandas.compat import range, lrange, StringIO, OrderedDict, signature
-
from pandas.tseries.offsets import BDay, MonthEnd
-from pandas.util.testing import (assert_panel_equal, assert_frame_equal,
- assert_series_equal, assert_almost_equal,
- ensure_clean, makeMixedDataFrame,
- makeCustomDataframe as mkdf)
-import pandas.core.panel as panelm
-import pandas.util.testing as tm
-import pandas.util._test_decorators as td
def make_test_panel():
diff --git a/pandas/tests/test_sorting.py b/pandas/tests/test_sorting.py
index 333b93dbdf580..7500cbb3cfc3a 100644
--- a/pandas/tests/test_sorting.py
+++ b/pandas/tests/test_sorting.py
@@ -1,21 +1,19 @@
-import pytest
-from itertools import product
from collections import defaultdict
-import warnings
from datetime import datetime
+from itertools import product
+import warnings
import numpy as np
from numpy import nan
+import pytest
+
+from pandas import DataFrame, MultiIndex, Series, compat, concat, merge
from pandas.core import common as com
-from pandas import DataFrame, MultiIndex, merge, concat, Series, compat
+from pandas.core.sorting import (
+ decons_group_index, get_group_index, is_int64_overflow_possible,
+ lexsort_indexer, nargsort, safe_sort)
from pandas.util import testing as tm
from pandas.util.testing import assert_frame_equal, assert_series_equal
-from pandas.core.sorting import (is_int64_overflow_possible,
- decons_group_index,
- get_group_index,
- nargsort,
- lexsort_indexer,
- safe_sort)
class TestSorting(object):
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index c5a4e9511a6ef..d4ea21632edf9 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -2,21 +2,20 @@
# pylint: disable-msg=E1101,W0612
from datetime import datetime, timedelta
-import pytest
import re
-from numpy import nan as NA
import numpy as np
+from numpy import nan as NA
from numpy.random import randint
+import pytest
-from pandas.compat import range, u, PY3
import pandas.compat as compat
-from pandas import Index, Series, DataFrame, isna, MultiIndex, notna, concat
-
-from pandas.util.testing import assert_series_equal, assert_index_equal
-import pandas.util.testing as tm
+from pandas.compat import PY3, range, u
+from pandas import DataFrame, Index, MultiIndex, Series, concat, isna, notna
import pandas.core.strings as strings
+import pandas.util.testing as tm
+from pandas.util.testing import assert_index_equal, assert_series_equal
def assert_series_or_index_equal(left, right):
diff --git a/pandas/tests/test_take.py b/pandas/tests/test_take.py
index 69150ee3c5454..c9e4ed90b1dea 100644
--- a/pandas/tests/test_take.py
+++ b/pandas/tests/test_take.py
@@ -1,13 +1,15 @@
# -*- coding: utf-8 -*-
-import re
from datetime import datetime
+import re
import numpy as np
import pytest
+
+from pandas._libs.tslib import iNaT
from pandas.compat import long
+
import pandas.core.algorithms as algos
import pandas.util.testing as tm
-from pandas._libs.tslib import iNaT
@pytest.fixture(params=[True, False])
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index b53aca2c9852b..412f70a3cb516 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -1,24 +1,26 @@
from collections import OrderedDict
+from datetime import datetime, timedelta
from itertools import product
-import pytest
import warnings
from warnings import catch_warnings
-from datetime import datetime, timedelta
-from numpy.random import randn
import numpy as np
+from numpy.random import randn
+import pytest
+
+from pandas.compat import range, zip
+from pandas.errors import UnsupportedFunctionCall
+import pandas.util._test_decorators as td
import pandas as pd
-from pandas import (Series, DataFrame, bdate_range,
- isna, notna, concat, Timestamp, Index)
-import pandas.core.window as rwindow
-import pandas.tseries.offsets as offsets
+from pandas import (
+ DataFrame, Index, Series, Timestamp, bdate_range, concat, isna, notna)
from pandas.core.base import SpecificationError
-from pandas.errors import UnsupportedFunctionCall
from pandas.core.sorting import safe_sort
+import pandas.core.window as rwindow
import pandas.util.testing as tm
-import pandas.util._test_decorators as td
-from pandas.compat import range, zip
+
+import pandas.tseries.offsets as offsets
N, K = 100, 10
diff --git a/setup.cfg b/setup.cfg
index 59e5991914ca6..26d08ba604c97 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -118,24 +118,6 @@ force_sort_within_sections=True
skip=
pandas/core/api.py,
pandas/core/frame.py,
- pandas/tests/test_errors.py,
- pandas/tests/test_base.py,
- pandas/tests/test_register_accessor.py,
- pandas/tests/test_window.py,
- pandas/tests/test_downstream.py,
- pandas/tests/test_multilevel.py,
- pandas/tests/test_common.py,
- pandas/tests/test_compat.py,
- pandas/tests/test_sorting.py,
- pandas/tests/test_algos.py,
- pandas/tests/test_expressions.py,
- pandas/tests/test_strings.py,
- pandas/tests/test_lib.py,
- pandas/tests/test_join.py,
- pandas/tests/test_panel.py,
- pandas/tests/test_take.py,
- pandas/tests/test_nanops.py,
- pandas/tests/test_config.py,
pandas/tests/api/test_types.py,
pandas/tests/api/test_api.py,
pandas/tests/tools/test_numeric.py,
| xref #23334
| https://api.github.com/repos/pandas-dev/pandas/pulls/24551 | 2019-01-02T14:24:47Z | 2019-01-02T16:39:12Z | 2019-01-02T16:39:12Z | 2019-01-02T17:04:31Z |
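Each of these isort PRs ends by deleting the freshly sorted files from the `skip` list in setup.cfg, so isort begins enforcing them in CI. Pieced together from the visible hunks, the relevant config section looks roughly like this — the full option list is an assumption, since only `force_sort_within_sections` and `skip` appear in the diffs:

```ini
[isort]
force_sort_within_sections=True
skip=
    pandas/core/api.py,
    pandas/core/frame.py,
    # ...remaining files still awaiting an isort pass; each PR trims
    # its newly-sorted files out of this list.
```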
TST: isort tests/reshape | diff --git a/pandas/tests/reshape/merge/test_join.py b/pandas/tests/reshape/merge/test_join.py
index 083ce16ef9296..8ee1e49f01ac1 100644
--- a/pandas/tests/reshape/merge/test_join.py
+++ b/pandas/tests/reshape/merge/test_join.py
@@ -1,20 +1,20 @@
# pylint: disable=E1103
from warnings import catch_warnings
-from numpy.random import randn
+
import numpy as np
+from numpy.random import randn
import pytest
-import pandas as pd
-from pandas.compat import lrange
+from pandas._libs import join as libjoin
import pandas.compat as compat
-from pandas.util.testing import assert_frame_equal
-from pandas import DataFrame, MultiIndex, Series, Index, merge, concat
+from pandas.compat import lrange
-from pandas._libs import join as libjoin
+import pandas as pd
+from pandas import DataFrame, Index, MultiIndex, Series, concat, merge
+from pandas.tests.reshape.merge.test_merge import NGROUPS, N, get_test_data
import pandas.util.testing as tm
-from pandas.tests.reshape.merge.test_merge import get_test_data, N, NGROUPS
-
+from pandas.util.testing import assert_frame_equal
a_ = np.array
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 970802e94662a..f6882e9bc8394 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -1,25 +1,27 @@
# pylint: disable=E1103
-import random
-import re
from collections import OrderedDict
from datetime import date, datetime
+import random
+import re
import numpy as np
-import pytest
from numpy import nan
+import pytest
-import pandas as pd
-import pandas.util.testing as tm
-from pandas import (Categorical, CategoricalIndex, DataFrame, DatetimeIndex,
- Float64Index, Int64Index, MultiIndex, RangeIndex,
- Series, UInt64Index)
-from pandas.api.types import CategoricalDtype as CDT
from pandas.compat import lrange
+
from pandas.core.dtypes.common import is_categorical_dtype, is_object_dtype
from pandas.core.dtypes.dtypes import CategoricalDtype
+
+import pandas as pd
+from pandas import (
+ Categorical, CategoricalIndex, DataFrame, DatetimeIndex, Float64Index,
+ Int64Index, MultiIndex, RangeIndex, Series, UInt64Index)
+from pandas.api.types import CategoricalDtype as CDT
from pandas.core.reshape.concat import concat
from pandas.core.reshape.merge import MergeError, merge
+import pandas.util.testing as tm
from pandas.util.testing import assert_frame_equal, assert_series_equal
N = 50
diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py
index 3035412d7b836..1483654daa99e 100644
--- a/pandas/tests/reshape/merge/test_merge_asof.py
+++ b/pandas/tests/reshape/merge/test_merge_asof.py
@@ -1,10 +1,9 @@
+import numpy as np
import pytest
-
import pytz
-import numpy as np
+
import pandas as pd
-from pandas import (merge_asof, read_csv,
- to_datetime, Timedelta)
+from pandas import Timedelta, merge_asof, read_csv, to_datetime
from pandas.core.reshape.merge import MergeError
from pandas.util.testing import assert_frame_equal
diff --git a/pandas/tests/reshape/merge/test_merge_ordered.py b/pandas/tests/reshape/merge/test_merge_ordered.py
index 0f8ecc6370bfd..414f46cdb296c 100644
--- a/pandas/tests/reshape/merge/test_merge_ordered.py
+++ b/pandas/tests/reshape/merge/test_merge_ordered.py
@@ -1,10 +1,10 @@
+from numpy import nan
import pytest
+
import pandas as pd
from pandas import DataFrame, merge_ordered
from pandas.util.testing import assert_frame_equal
-from numpy import nan
-
class TestMergeOrdered(object):
diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py
index 0706cb12ac5d0..051462c5e9fc6 100644
--- a/pandas/tests/reshape/test_concat.py
+++ b/pandas/tests/reshape/test_concat.py
@@ -1,27 +1,26 @@
-from warnings import catch_warnings, simplefilter
-from itertools import combinations
from collections import deque
+import datetime as dt
+from datetime import datetime
from decimal import Decimal
+from itertools import combinations
+from warnings import catch_warnings, simplefilter
-import datetime as dt
import dateutil
import numpy as np
from numpy.random import randn
+import pytest
+
+from pandas.compat import PY2, Iterable, StringIO, iteritems
-from datetime import datetime
-from pandas.compat import Iterable, StringIO, iteritems, PY2
-import pandas as pd
-from pandas import (DataFrame, concat,
- read_csv, isna, Series, date_range,
- Index, Panel, MultiIndex, Timestamp,
- DatetimeIndex, Categorical)
from pandas.core.dtypes.dtypes import CategoricalDtype
-from pandas.util import testing as tm
-from pandas.util.testing import (assert_frame_equal,
- makeCustomDataframe as mkdf)
-from pandas.tests.extension.decimal import to_decimal
-import pytest
+import pandas as pd
+from pandas import (
+ Categorical, DataFrame, DatetimeIndex, Index, MultiIndex, Panel, Series,
+ Timestamp, concat, date_range, isna, read_csv)
+from pandas.tests.extension.decimal import to_decimal
+from pandas.util import testing as tm
+from pandas.util.testing import assert_frame_equal, makeCustomDataframe as mkdf
@pytest.fixture(params=[True, False])
diff --git a/pandas/tests/reshape/test_melt.py b/pandas/tests/reshape/test_melt.py
index 8fd3ae8bb387b..6b633d7e77f52 100644
--- a/pandas/tests/reshape/test_melt.py
+++ b/pandas/tests/reshape/test_melt.py
@@ -1,17 +1,15 @@
# -*- coding: utf-8 -*-
# pylint: disable-msg=W0612,E1101
+import numpy as np
+from numpy import nan
import pytest
-from pandas import DataFrame
-import pandas as pd
-
-from numpy import nan
-import numpy as np
+from pandas.compat import range
-from pandas import melt, lreshape, wide_to_long
+import pandas as pd
+from pandas import DataFrame, lreshape, melt, wide_to_long
import pandas.util.testing as tm
-from pandas.compat import range
class TestMelt(object):
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index a2b5eacd873bb..f0d1ad57ba829 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -1,20 +1,20 @@
# -*- coding: utf-8 -*-
-from datetime import datetime, date, timedelta
+from collections import OrderedDict
+from datetime import date, datetime, timedelta
+import numpy as np
import pytest
+from pandas.compat import product, range
-import numpy as np
-
-from collections import OrderedDict
import pandas as pd
-from pandas import (DataFrame, Series, Index, MultiIndex,
- Grouper, date_range, concat, Categorical)
-from pandas.core.reshape.pivot import pivot_table, crosstab
-from pandas.compat import range, product
-import pandas.util.testing as tm
+from pandas import (
+ Categorical, DataFrame, Grouper, Index, MultiIndex, Series, concat,
+ date_range)
from pandas.api.types import CategoricalDtype as CDT
+from pandas.core.reshape.pivot import crosstab, pivot_table
+import pandas.util.testing as tm
@pytest.fixture(params=[True, False])
diff --git a/pandas/tests/reshape/test_reshape.py b/pandas/tests/reshape/test_reshape.py
index edbe70d308b96..7b544b7981c1f 100644
--- a/pandas/tests/reshape/test_reshape.py
+++ b/pandas/tests/reshape/test_reshape.py
@@ -1,22 +1,21 @@
# -*- coding: utf-8 -*-
# pylint: disable-msg=W0612,E1101
-import pytest
from collections import OrderedDict
-from pandas import DataFrame, Series
-from pandas.core.dtypes.common import is_integer_dtype
-from pandas.core.sparse.api import SparseDtype, SparseArray
-import pandas as pd
-
-from numpy import nan
import numpy as np
+from numpy import nan
+import pytest
-from pandas.util.testing import assert_frame_equal
+from pandas.compat import u
+
+from pandas.core.dtypes.common import is_integer_dtype
-from pandas import get_dummies, Categorical, Index
+import pandas as pd
+from pandas import Categorical, DataFrame, Index, Series, get_dummies
+from pandas.core.sparse.api import SparseArray, SparseDtype
import pandas.util.testing as tm
-from pandas.compat import u
+from pandas.util.testing import assert_frame_equal
class TestGetDummies(object):
diff --git a/pandas/tests/reshape/test_union_categoricals.py b/pandas/tests/reshape/test_union_categoricals.py
index 80538b0c6de4e..9b2b8bf9ed49f 100644
--- a/pandas/tests/reshape/test_union_categoricals.py
+++ b/pandas/tests/reshape/test_union_categoricals.py
@@ -1,9 +1,10 @@
+import numpy as np
import pytest
-import numpy as np
-import pandas as pd
-from pandas import Categorical, Series, CategoricalIndex
from pandas.core.dtypes.concat import union_categoricals
+
+import pandas as pd
+from pandas import Categorical, CategoricalIndex, Series
from pandas.util import testing as tm
diff --git a/pandas/tests/reshape/test_util.py b/pandas/tests/reshape/test_util.py
index e7e1626bdb2da..a8d9e7a775442 100644
--- a/pandas/tests/reshape/test_util.py
+++ b/pandas/tests/reshape/test_util.py
@@ -1,8 +1,9 @@
-import pytest
import numpy as np
-from pandas import date_range, Index
-import pandas.util.testing as tm
+import pytest
+
+from pandas import Index, date_range
from pandas.core.reshape.util import cartesian_product
+import pandas.util.testing as tm
class TestCartesianProduct(object):
diff --git a/setup.cfg b/setup.cfg
index 59e5991914ca6..15a5384bc632c 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -182,18 +182,6 @@ skip=
pandas/tests/plotting/common.py,
pandas/tests/plotting/test_boxplot_method.py,
pandas/tests/plotting/test_deprecated.py,
- pandas/tests/reshape/test_concat.py,
- pandas/tests/reshape/test_util.py,
- pandas/tests/reshape/test_reshape.py,
- pandas/tests/reshape/test_tile.py,
- pandas/tests/reshape/test_pivot.py,
- pandas/tests/reshape/test_melt.py,
- pandas/tests/reshape/test_union_categoricals.py,
- pandas/tests/reshape/merge/test_merge_index_as_string.py,
- pandas/tests/reshape/merge/test_merge.py,
- pandas/tests/reshape/merge/test_merge_asof.py,
- pandas/tests/reshape/merge/test_join.py,
- pandas/tests/reshape/merge/test_merge_ordered.py,
pandas/tests/sparse/test_indexing.py,
pandas/tests/extension/test_sparse.py,
pandas/tests/extension/base/reduce.py,
| xref #23334
| https://api.github.com/repos/pandas-dev/pandas/pulls/24550 | 2019-01-02T14:18:57Z | 2019-01-02T15:16:33Z | 2019-01-02T15:16:33Z | 2019-01-02T15:20:52Z |
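The import hunks above (and the matching `setup.cfg` removals from isort's skip list) rewrite each test module into isort's grouped order. A rough pure-Python sketch of that grouping, standard library first, then third-party, then the project's own modules, alphabetized case-insensitively within each section; `STDLIB` here is an abbreviated illustrative set, not isort's real classification table:

```python
# Abbreviated stand-in for isort's stdlib table (illustrative only).
STDLIB = {"collections", "datetime", "random", "re", "warnings"}

def section(module, first_party=("pandas",)):
    top = module.split(".")[0]
    if top in first_party:
        return 2  # first-party section comes last
    if top in STDLIB:
        return 0  # stdlib section comes first
    return 1      # everything else is treated as third-party

mods = ["pandas.compat", "numpy", "collections", "pytest", "datetime"]
ordered = sorted(mods, key=lambda m: (section(m), m.lower()))
# ordered: collections, datetime, numpy, pytest, pandas.compat
```

The real tool also keeps blank lines between sections, which is why several hunks above add or remove lone blank lines.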
DOC: exclude autogenerated c/cpp/html files from 'trailing whitespace' checks | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index d16249724127f..87a1c8ae33489 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -93,7 +93,7 @@ if [[ -z "$CHECK" || "$CHECK" == "lint" ]]; then
# this particular codebase (e.g. src/headers, src/klib, src/msgpack). However,
# we can lint all header files since they aren't "generated" like C files are.
MSG='Linting .c and .h' ; echo $MSG
- cpplint --quiet --extensions=c,h --headers=h --recursive --filter=-readability/casting,-runtime/int,-build/include_subdir pandas/_libs/src/*.h pandas/_libs/src/parser pandas/_libs/ujson pandas/_libs/tslibs/src/datetime
+ cpplint --quiet --extensions=c,h --headers=h --recursive --filter=-readability/casting,-runtime/int,-build/include_subdir pandas/_libs/src/*.h pandas/_libs/src/parser pandas/_libs/ujson pandas/_libs/tslibs/src/datetime pandas/io/msgpack pandas/_libs/*.cpp pandas/util
RET=$(($RET + $?)) ; echo $MSG "DONE"
echo "isort --version-number"
@@ -174,9 +174,10 @@ if [[ -z "$CHECK" || "$CHECK" == "patterns" ]]; then
MSG='Check that no file in the repo contains tailing whitespaces' ; echo $MSG
set -o pipefail
if [[ "$AZURE" == "true" ]]; then
- ! grep -n --exclude="*.svg" -RI "\s$" * | awk -F ":" '{print "##vso[task.logissue type=error;sourcepath=" $1 ";linenumber=" $2 ";] Tailing whitespaces found: " $3}'
+ # we exclude all c/cpp files as the c/cpp files of pandas code base are tested when Linting .c and .h files
+ ! grep -n '--exclude=*.'{svg,c,cpp,html} -RI "\s$" * | awk -F ":" '{print "##vso[task.logissue type=error;sourcepath=" $1 ";linenumber=" $2 ";] Tailing whitespaces found: " $3}'
else
- ! grep -n --exclude="*.svg" -RI "\s$" * | awk -F ":" '{print $1 ":" $2 ":Tailing whitespaces found: " $3}'
+ ! grep -n '--exclude=*.'{svg,c,cpp,html} -RI "\s$" * | awk -F ":" '{print $1 ":" $2 ":Tailing whitespaces found: " $3}'
fi
RET=$(($RET + $?)) ; echo $MSG "DONE"
fi
diff --git a/pandas/util/move.c b/pandas/util/move.c
index 62860adb1c1f6..9bb662d50cb3f 100644
--- a/pandas/util/move.c
+++ b/pandas/util/move.c
@@ -1,3 +1,12 @@
+/*
+Copyright (c) 2019, PyData Development Team
+All rights reserved.
+
+Distributed under the terms of the BSD Simplified License.
+
+The full license is in the LICENSE file, distributed with this software.
+*/
+
#include <Python.h>
#define COMPILING_IN_PY2 (PY_VERSION_HEX <= 0x03000000)
@@ -31,15 +40,13 @@ typedef struct {
static PyTypeObject stolenbuf_type; /* forward declare type */
static void
-stolenbuf_dealloc(stolenbufobject *self)
-{
+stolenbuf_dealloc(stolenbufobject *self) {
Py_DECREF(self->invalid_bytes);
PyObject_Del(self);
}
static int
-stolenbuf_getbuffer(stolenbufobject *self, Py_buffer *view, int flags)
-{
+stolenbuf_getbuffer(stolenbufobject *self, Py_buffer *view, int flags) {
return PyBuffer_FillInfo(view,
(PyObject*) self,
(void*) PyString_AS_STRING(self->invalid_bytes),
@@ -51,8 +58,8 @@ stolenbuf_getbuffer(stolenbufobject *self, Py_buffer *view, int flags)
#if COMPILING_IN_PY2
static Py_ssize_t
-stolenbuf_getreadwritebuf(stolenbufobject *self, Py_ssize_t segment, void **out)
-{
+stolenbuf_getreadwritebuf(stolenbufobject *self,
+ Py_ssize_t segment, void **out) {
if (segment != 0) {
PyErr_SetString(PyExc_SystemError,
"accessing non-existent string segment");
@@ -63,8 +70,7 @@ stolenbuf_getreadwritebuf(stolenbufobject *self, Py_ssize_t segment, void **out)
}
static Py_ssize_t
-stolenbuf_getsegcount(stolenbufobject *self, Py_ssize_t *len)
-{
+stolenbuf_getsegcount(stolenbufobject *self, Py_ssize_t *len) {
if (len) {
*len = PyString_GET_SIZE(self->invalid_bytes);
}
@@ -157,8 +163,7 @@ PyDoc_STRVAR(
however, if called through *unpacking like ``stolenbuf(*(a,))`` it would
only have the one reference (the tuple). */
static PyObject*
-move_into_mutable_buffer(PyObject *self, PyObject *bytes_rvalue)
-{
+move_into_mutable_buffer(PyObject *self, PyObject *bytes_rvalue) {
stolenbufobject *ret;
if (!PyString_CheckExact(bytes_rvalue)) {
| - [x] closes #24526
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24549 | 2019-01-02T13:39:49Z | 2019-02-08T02:51:21Z | 2019-02-08T02:51:21Z | 2019-02-08T15:00:36Z |
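The key trick in the `code_checks.sh` hunk is bash brace expansion: the single quoted prefix `'--exclude=*.'{svg,c,cpp,html}` expands into one `--exclude` flag per extension before `grep` runs. A self-contained sketch of the resulting trailing-whitespace scan (bash-specific; the temp directory and file names are illustrative, and the Azure/awk formatting from the real script is omitted):

```shell
#!/usr/bin/env bash
set -eu
tmp=$(mktemp -d)
printf 'ok\nbad \n' > "$tmp/app.py"   # trailing space: should be reported
printf 'bad \n'     > "$tmp/gen.c"    # C file: skipped as "generated"
# Brace expansion yields:
#   --exclude=*.svg --exclude=*.c --exclude=*.cpp --exclude=*.html
hits=$(grep -n '--exclude=*.'{svg,c,cpp,html} -RI "[[:space:]]$" "$tmp" || true)
echo "$hits"
```

Only `app.py` shows up in `hits`; the excluded extensions are filtered out before the pattern is even tested.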
BUG: (row) Index Name with to_html(header=False) is not displayed | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 302f2bd05ee5c..43229f3f674d0 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1600,6 +1600,7 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form
- Bug in :func:`to_html()` with ``index=False`` misses truncation indicators (...) on truncated DataFrame (:issue:`15019`, :issue:`22783`)
- Bug in :func:`to_html()` with ``index=False`` when both columns and row index are ``MultiIndex`` (:issue:`22579`)
- Bug in :func:`to_html()` with ``index_names=False`` displaying index name (:issue:`22747`)
+- Bug in :func:`to_html()` with ``header=False`` not displaying row index names (:issue:`23788`)
- Bug in :func:`DataFrame.to_string()` that broke column alignment when ``index=False`` and width of first column's values is greater than the width of first column's header (:issue:`16839`, :issue:`13032`)
- Bug in :func:`DataFrame.to_string()` that caused representations of :class:`DataFrame` to not take up the whole window (:issue:`22984`)
- Bug in :func:`DataFrame.to_csv` where a single level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (:issue:`19589`).
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index eb11dd461927b..58f5364f2b523 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -43,6 +43,12 @@ def __init__(self, formatter, classes=None, notebook=False, border=None,
self.table_id = table_id
self.render_links = render_links
+ @property
+ def show_row_idx_names(self):
+ return all((self.fmt.has_index_names,
+ self.fmt.index,
+ self.fmt.show_index_names))
+
@property
def show_col_idx_names(self):
# see gh-22579
@@ -165,9 +171,7 @@ def write_style(self):
element_props.append(('thead tr th',
'text-align',
'left'))
- if all((self.fmt.has_index_names,
- self.fmt.index,
- self.fmt.show_index_names)):
+ if self.show_row_idx_names:
element_props.append(('thead tr:last-of-type th',
'text-align',
'right'))
@@ -228,17 +232,8 @@ def write_result(self, buf):
buffer_put_lines(buf, self.elements)
- def _write_header(self, indent):
+ def _write_col_header(self, indent):
truncate_h = self.fmt.truncate_h
-
- if not self.fmt.header:
- # write nothing
- return indent
-
- self.write('<thead>', indent)
-
- indent += self.indent_delta
-
if isinstance(self.columns, ABCMultiIndex):
template = 'colspan="{span:d}" halign="left"'
@@ -357,12 +352,25 @@ def _write_header(self, indent):
self.write_tr(row, indent, self.indent_delta, header=True,
align=align)
- if all((self.fmt.has_index_names,
- self.fmt.index,
- self.fmt.show_index_names)):
- row = ([x if x is not None else '' for x in self.frame.index.names]
- + [''] * (self.ncols + (1 if truncate_h else 0)))
- self.write_tr(row, indent, self.indent_delta, header=True)
+ def _write_row_header(self, indent):
+ truncate_h = self.fmt.truncate_h
+ row = ([x if x is not None else '' for x in self.frame.index.names]
+ + [''] * (self.ncols + (1 if truncate_h else 0)))
+ self.write_tr(row, indent, self.indent_delta, header=True)
+
+ def _write_header(self, indent):
+ if not (self.fmt.header or self.show_row_idx_names):
+ # write nothing
+ return indent
+
+ self.write('<thead>', indent)
+ indent += self.indent_delta
+
+ if self.fmt.header:
+ self._write_col_header(indent)
+
+ if self.show_row_idx_names:
+ self._write_row_header(indent)
indent -= self.indent_delta
self.write('</thead>', indent)
diff --git a/pandas/tests/io/formats/data/html/index_named_multi_columns_none.html b/pandas/tests/io/formats/data/html/index_named_multi_columns_none.html
new file mode 100644
index 0000000000000..8c41d2e29f2c0
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_named_multi_columns_none.html
@@ -0,0 +1,23 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th>index.name.0</th>
+ <th>index.name.1</th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th>b</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>c</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_named_standard_columns_none.html b/pandas/tests/io/formats/data/html/index_named_standard_columns_none.html
new file mode 100644
index 0000000000000..432d8e06d5784
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_named_standard_columns_none.html
@@ -0,0 +1,21 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th>index.name</th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_none.html b/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_none.html
new file mode 100644
index 0000000000000..81da7c3619abc
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_unnamed_multi_columns_none.html
@@ -0,0 +1,15 @@
+<table border="1" class="dataframe">
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th>b</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>c</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_none.html b/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_none.html
new file mode 100644
index 0000000000000..3d958afe4a4ac
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/index_unnamed_standard_columns_none.html
@@ -0,0 +1,14 @@
+<table border="1" class="dataframe">
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>0</td>
+ <td>0</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_none.html b/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_none.html
new file mode 100644
index 0000000000000..0f262495b6c6b
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_named_multi_columns_none.html
@@ -0,0 +1,62 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th>foo</th>
+ <th></th>
+ <th>baz</th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th rowspan="2" valign="top">a</th>
+ <th rowspan="2" valign="top">c</th>
+ <th>e</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <th>...</th>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th rowspan="2" valign="top">b</th>
+ <th rowspan="2" valign="top">d</th>
+ <th>e</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>f</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_none.html b/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_none.html
new file mode 100644
index 0000000000000..d294a507dbce4
--- /dev/null
+++ b/pandas/tests/io/formats/data/html/trunc_df_index_named_standard_columns_none.html
@@ -0,0 +1,54 @@
+<table border="1" class="dataframe">
+ <thead>
+ <tr>
+ <th>index.name</th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ <th></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>0</th>
+ <td>0</td>
+ <td>1</td>
+ <td>...</td>
+ <td>6</td>
+ <td>7</td>
+ </tr>
+ <tr>
+ <th>1</th>
+ <td>8</td>
+ <td>9</td>
+ <td>...</td>
+ <td>14</td>
+ <td>15</td>
+ </tr>
+ <tr>
+ <th>...</th>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ <tr>
+ <th>6</th>
+ <td>48</td>
+ <td>49</td>
+ <td>...</td>
+ <td>54</td>
+ <td>55</td>
+ </tr>
+ <tr>
+ <th>7</th>
+ <td>56</td>
+ <td>57</td>
+ <td>...</td>
+ <td>62</td>
+ <td>63</td>
+ </tr>
+ </tbody>
+</table>
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 213eb0d5b5cb8..d333330c19e39 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -429,6 +429,7 @@ def test_to_html_multi_indexes_index_false(self, datapath):
assert result == expected
@pytest.mark.parametrize('index_names', [True, False])
+ @pytest.mark.parametrize('header', [True, False])
@pytest.mark.parametrize('index', [True, False])
@pytest.mark.parametrize('column_index, column_type', [
(Index([0, 1]), 'unnamed_standard'),
@@ -448,18 +449,21 @@ def test_to_html_multi_indexes_index_false(self, datapath):
])
def test_to_html_basic_alignment(
self, datapath, row_index, row_type, column_index, column_type,
- index, index_names):
+ index, header, index_names):
# GH 22747, GH 22579
df = DataFrame(np.zeros((2, 2), dtype=int),
index=row_index, columns=column_index)
- result = df.to_html(index=index, index_names=index_names)
+ result = df.to_html(
+ index=index, header=header, index_names=index_names)
if not index:
row_type = 'none'
elif not index_names and row_type.startswith('named'):
row_type = 'un' + row_type
- if not index_names and column_type.startswith('named'):
+ if not header:
+ column_type = 'none'
+ elif not index_names and column_type.startswith('named'):
column_type = 'un' + column_type
filename = 'index_' + row_type + '_columns_' + column_type
@@ -467,6 +471,7 @@ def test_to_html_basic_alignment(
assert result == expected
@pytest.mark.parametrize('index_names', [True, False])
+ @pytest.mark.parametrize('header', [True, False])
@pytest.mark.parametrize('index', [True, False])
@pytest.mark.parametrize('column_index, column_type', [
(Index(np.arange(8)), 'unnamed_standard'),
@@ -488,19 +493,22 @@ def test_to_html_basic_alignment(
])
def test_to_html_alignment_with_truncation(
self, datapath, row_index, row_type, column_index, column_type,
- index, index_names):
+ index, header, index_names):
# GH 22747, GH 22579
df = DataFrame(np.arange(64).reshape(8, 8),
index=row_index, columns=column_index)
- result = df.to_html(max_rows=4, max_cols=4,
- index=index, index_names=index_names)
+ result = df.to_html(
+ max_rows=4, max_cols=4,
+ index=index, header=header, index_names=index_names)
if not index:
row_type = 'none'
elif not index_names and row_type.startswith('named'):
row_type = 'un' + row_type
- if not index_names and column_type.startswith('named'):
+ if not header:
+ column_type = 'none'
+ elif not index_names and column_type.startswith('named'):
column_type = 'un' + column_type
filename = 'trunc_df_index_' + row_type + '_columns_' + column_type
| - [x] closes #23788
- [x] xref #24546
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/24547 | 2019-01-02T11:58:05Z | 2019-01-02T14:18:30Z | 2019-01-02T14:18:30Z | 2019-01-02T14:45:22Z |
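The core of the fix is the new `show_row_idx_names` property plus the split of `_write_header` into column- and row-header writers: a `<thead>` is now emitted whenever either column headers are requested or row-index names must be shown. A minimal pure-Python stand-in for that decision logic (the function name is illustrative; the real code lives as properties on `HTMLFormatter`):

```python
def needs_thead(header, has_index_names, index, show_index_names):
    # Mirrors the PR: show_row_idx_names requires all three flags, and
    # the <thead> is written if headers OR row-index names must appear.
    show_row_idx_names = all((has_index_names, index, show_index_names))
    return header or show_row_idx_names

# The reported bug: header=False used to skip the <thead> entirely,
# dropping a named row index; after the fix it is still written.
assert needs_thead(header=False, has_index_names=True,
                   index=True, show_index_names=True)
```

This is also why the new expected-output fixtures for `columns_none` still contain a `<thead>` block when the index is named, but not otherwise.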
TST: move test_non_reducing_slice_on_multiindex | diff --git a/pandas/tests/indexing/multiindex/test_slice.py b/pandas/tests/indexing/multiindex/test_slice.py
index 596fe5d564a40..fcecb2b454eb6 100644
--- a/pandas/tests/indexing/multiindex/test_slice.py
+++ b/pandas/tests/indexing/multiindex/test_slice.py
@@ -7,6 +7,7 @@
import pandas as pd
from pandas import DataFrame, Index, MultiIndex, Series, Timestamp
+from pandas.core.indexing import _non_reducing_slice
from pandas.tests.indexing.common import _mklbl
from pandas.util import testing as tm
@@ -556,3 +557,20 @@ def test_int_series_slicing(
result = ymd[5:]
expected = ymd.reindex(s.index[5:])
tm.assert_frame_equal(result, expected)
+
+ def test_non_reducing_slice_on_multiindex(self):
+ # GH 19861
+ dic = {
+ ('a', 'd'): [1, 4],
+ ('a', 'c'): [2, 3],
+ ('b', 'c'): [3, 2],
+ ('b', 'd'): [4, 1]
+ }
+ df = pd.DataFrame(dic, index=[0, 1])
+ idx = pd.IndexSlice
+ slice_ = idx[:, idx['b', 'd']]
+ tslice_ = _non_reducing_slice(slice_)
+
+ result = df.loc[tslice_]
+ expected = pd.DataFrame({('b', 'd'): [4, 1]})
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 2224c3ab9935a..03f1975c50d2a 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -812,23 +812,6 @@ def test_non_reducing_slice(self):
tslice_ = _non_reducing_slice(slice_)
assert isinstance(df.loc[tslice_], DataFrame)
- def test_non_reducing_slice_on_multiindex(self):
- # GH 19861
- dic = {
- ('a', 'd'): [1, 4],
- ('a', 'c'): [2, 3],
- ('b', 'c'): [3, 2],
- ('b', 'd'): [4, 1]
- }
- df = pd.DataFrame(dic, index=[0, 1])
- idx = pd.IndexSlice
- slice_ = idx[:, idx['b', 'd']]
- tslice_ = _non_reducing_slice(slice_)
-
- result = df.loc[tslice_]
- expected = pd.DataFrame({('b', 'd'): [4, 1]})
- tm.assert_frame_equal(result, expected)
-
def test_list_slice(self):
# like dataframe getitem
slices = [['A'], Series(['A']), np.array(['A'])]
| Follow-up to #19881 | https://api.github.com/repos/pandas-dev/pandas/pulls/24545 | 2019-01-02T09:23:52Z | 2019-01-02T12:21:10Z | 2019-01-02T12:21:10Z | 2019-01-02T13:48:12Z |
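The moved test exercises `_non_reducing_slice`, whose job is to keep `.loc` from collapsing an axis down to a Series or scalar. Conceptually (a hedged pure-Python sketch, not pandas' actual implementation): any part of the indexer that names a single label is wrapped in a one-element list, since a list selects a sub-frame while a bare label would reduce the dimension:

```python
def non_reducing(parts):
    # Wrap scalar-like labels in a list so label-based selection keeps
    # each axis; lists and slices already preserve dimensionality.
    return tuple(p if isinstance(p, (list, slice)) else [p] for p in parts)
```

In the test above, `idx[:, idx['b', 'd']]` plays the role of such an indexer, and the transformed slice still returns a `DataFrame` with the single `('b', 'd')` column.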
MAINT: Remove empty Python file | diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
deleted file mode 100644
index e69de29bb2d1d..0000000000000
| Follow-up to #23255. | https://api.github.com/repos/pandas-dev/pandas/pulls/24544 | 2019-01-02T05:47:10Z | 2019-01-02T06:25:35Z | 2019-01-02T06:25:35Z | 2019-01-02T06:27:17Z |
diff reduction for 24024 | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 3f32b7b7dcea9..8b0565a36648f 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -19,7 +19,7 @@
is_extension_type, is_float_dtype, is_int64_dtype, is_object_dtype,
is_period_dtype, is_string_dtype, is_timedelta64_dtype, pandas_dtype)
from pandas.core.dtypes.dtypes import DatetimeTZDtype
-from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries
+from pandas.core.dtypes.generic import ABCIndexClass, ABCPandasArray, ABCSeries
from pandas.core.dtypes.missing import isna
from pandas.core import ops
@@ -224,7 +224,7 @@ def _simple_new(cls, values, freq=None, tz=None):
# for compat with datetime/timedelta/period shared methods,
# we can sometimes get here with int64 values. These represent
# nanosecond UTC (or tz-naive) unix timestamps
- values = values.view('M8[ns]')
+ values = values.view(_NS_DTYPE)
assert values.dtype == 'M8[ns]', values.dtype
@@ -417,7 +417,7 @@ def tz(self):
Returns None when the array is tz-naive.
"""
# GH 18595
- return getattr(self._dtype, "tz", None)
+ return getattr(self.dtype, "tz", None)
@tz.setter
def tz(self, value):
@@ -517,10 +517,6 @@ def astype(self, dtype, copy=True):
# ----------------------------------------------------------------
# ExtensionArray Interface
- @property
- def _ndarray_values(self):
- return self._data
-
@Appender(dtl.DatetimeLikeArrayMixin._validate_fill_value.__doc__)
def _validate_fill_value(self, fill_value):
if isna(fill_value):
@@ -1568,6 +1564,8 @@ def sequence_to_dt64ns(data, dtype=None, copy=False,
copy = False
elif isinstance(data, ABCSeries):
data = data._values
+ if isinstance(data, ABCPandasArray):
+ data = data.to_numpy()
if hasattr(data, "freq"):
# i.e. DatetimeArray/Index
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 7199d88d4bde5..45a6081093aed 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -269,11 +269,6 @@ def _check_compatible_with(self, other):
def dtype(self):
return self._dtype
- @property
- def _ndarray_values(self):
- # Ordinals
- return self._data
-
@property
def freq(self):
"""
@@ -475,7 +470,6 @@ def _format_native_types(self, na_rep=u'NaT', date_format=None, **kwargs):
"""
actually format my specific types
"""
- # TODO(DatetimeArray): remove
values = self.astype(object)
if date_format:
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 719a79cf300a0..78570be8dc07f 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -369,8 +369,9 @@ def _addsub_offset_array(self, other, op):
# TimedeltaIndex can only operate with a subset of DateOffset
# subclasses. Incompatible classes will raise AttributeError,
# which we re-raise as TypeError
- return dtl.DatetimeLikeArrayMixin._addsub_offset_array(self, other,
- op)
+ return super(TimedeltaArrayMixin, self)._addsub_offset_array(
+ other, op
+ )
except AttributeError:
raise TypeError("Cannot add/subtract non-tick DateOffset to {cls}"
.format(cls=type(self).__name__))
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 25cd5cda9989c..50b2413167b32 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -55,6 +55,7 @@ class DatetimeIndexOpsMixin(ExtensionOpsMixin):
"""
common ops mixin to support a unified interface datetimelike Index
"""
+ _data = None # type: DatetimeLikeArrayMixin
# DatetimeLikeArrayMixin assumes subclasses are mutable, so these are
# properties there. They can be made into cache_readonly for Index
@@ -72,6 +73,9 @@ class DatetimeIndexOpsMixin(ExtensionOpsMixin):
@property
def freq(self):
+ """
+ Return the frequency object if it is set, otherwise None.
+ """
return self._eadata.freq
@freq.setter
@@ -81,6 +85,9 @@ def freq(self, value):
@property
def freqstr(self):
+ """
+ Return the frequency object as a string if it is set, otherwise None.
+ """
return self._eadata.freqstr
def unique(self, level=None):
@@ -111,6 +118,20 @@ def wrapper(self, other):
def _ndarray_values(self):
return self._eadata._ndarray_values
+ # ------------------------------------------------------------------------
+ # Abstract data attributes
+
+ @property
+ def values(self):
+ # type: () -> np.ndarray
+ # Note: PeriodArray overrides this to return an ndarray of objects.
+ return self._eadata._data
+
+ @property
+ @Appender(DatetimeLikeArrayMixin.asi8.__doc__)
+ def asi8(self):
+ return self._eadata.asi8
+
# ------------------------------------------------------------------------
def equals(self, other):
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 5695d3d9e67f3..690a3db28fe83 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -330,7 +330,6 @@ def _simple_new(cls, values, name=None, freq=None, tz=None, dtype=None):
result._eadata = dtarr
result.name = name
# For groupby perf. See note in indexes/base about _index_data
- # TODO: make sure this is updated correctly if edited
result._index_data = result._data
result._reset_identity()
return result
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index a915f24e3c87f..4bd8f7407500b 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -18,7 +18,6 @@
from pandas.core import common as com
from pandas.core.accessor import delegate_names
from pandas.core.algorithms import unique1d
-from pandas.core.arrays.datetimelike import DatelikeOps
from pandas.core.arrays.period import (
PeriodArray, period_array, validate_dtype_freq)
from pandas.core.base import _shared_docs
@@ -70,9 +69,9 @@ class PeriodDelegateMixin(DatetimelikeDelegateMixin):
typ='property')
@delegate_names(PeriodArray,
PeriodDelegateMixin._delegated_methods,
- typ="method")
-class PeriodIndex(DatelikeOps, DatetimeIndexOpsMixin, Int64Index,
- PeriodDelegateMixin):
+ typ="method",
+ overwrite=True)
+class PeriodIndex(DatetimeIndexOpsMixin, Int64Index, PeriodDelegateMixin):
"""
Immutable ndarray holding ordinal values indicating regular periods in
time such as particular years, quarters, months, etc.
@@ -291,20 +290,15 @@ def _eadata(self):
def values(self):
return np.asarray(self)
- @property
- def _values(self):
- return self._data
-
@property
def freq(self):
- # TODO(DatetimeArray): remove
- # Can't simply use delegate_names since our base class is defining
- # freq
return self._data.freq
@freq.setter
def freq(self, value):
value = Period._maybe_convert_freq(value)
+ # TODO: When this deprecation is enforced, PeriodIndex.freq can
+ # be removed entirely, and we'll just inherit.
msg = ('Setting {cls}.freq has been deprecated and will be '
'removed in a future version; use {cls}.asfreq instead. '
'The {cls}.freq setter is not guaranteed to work.')
@@ -897,11 +891,6 @@ def flags(self):
FutureWarning, stacklevel=2)
return self._ndarray_values.flags
- @property
- def asi8(self):
- # TODO(DatetimeArray): remove
- return self.view('i8')
-
def item(self):
"""
return the first element of the underlying data as a python
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 6206a6a615d64..0798dd6eee0c9 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -70,8 +70,8 @@ class TimedeltaDelegateMixin(DatetimelikeDelegateMixin):
@delegate_names(TimedeltaArray,
TimedeltaDelegateMixin._delegated_methods,
typ="method", overwrite=False)
-class TimedeltaIndex(DatetimeIndexOpsMixin,
- dtl.TimelikeOps, Int64Index, TimedeltaDelegateMixin):
+class TimedeltaIndex(DatetimeIndexOpsMixin, dtl.TimelikeOps, Int64Index,
+ TimedeltaDelegateMixin):
"""
Immutable ndarray of timedelta64 data, represented internally as int64, and
which can be boxed to timedelta objects
@@ -238,7 +238,6 @@ def _simple_new(cls, values, name=None, freq=None, dtype=_TD_DTYPE):
result._eadata = tdarr
result.name = name
# For groupby perf. See note in indexes/base about _index_data
- # TODO: make sure this is updated correctly if edited
result._index_data = tdarr._data
result._reset_identity()
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index c9ed2521676ad..346f56968c963 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -2165,7 +2165,7 @@ def should_store(self, value):
class DatetimeLikeBlockMixin(object):
- """Mixin class for DatetimeBlock and DatetimeTZBlock."""
+ """Mixin class for DatetimeBlock, DatetimeTZBlock, and TimedeltaBlock."""
@property
def _holder(self):
@@ -2857,15 +2857,17 @@ def to_native_types(self, slicer=None, na_rep=None, date_format=None,
""" convert to our native types format, slicing if desired """
values = self.values
+ i8values = self.values.view('i8')
+
if slicer is not None:
- values = values[..., slicer]
+ i8values = i8values[..., slicer]
from pandas.io.formats.format import _get_format_datetime64_from_values
format = _get_format_datetime64_from_values(values, date_format)
result = tslib.format_array_from_datetime(
- values.view('i8').ravel(), tz=getattr(self.values, 'tz', None),
- format=format, na_rep=na_rep).reshape(values.shape)
+ i8values.ravel(), tz=getattr(self.values, 'tz', None),
+ format=format, na_rep=na_rep).reshape(i8values.shape)
return np.atleast_2d(result)
def should_store(self, value):
@@ -3115,8 +3117,16 @@ def get_block_type(values, dtype=None):
dtype = dtype or values.dtype
vtype = dtype.type
- if is_categorical(values):
+ if is_sparse(dtype):
+ # Need this first(ish) so that Sparse[datetime] is sparse
+ cls = ExtensionBlock
+ elif is_categorical(values):
cls = CategoricalBlock
+ elif issubclass(vtype, np.datetime64):
+ assert not is_datetime64tz_dtype(values)
+ cls = DatetimeBlock
+ elif is_datetime64tz_dtype(values):
+ cls = DatetimeTZBlock
elif is_interval_dtype(dtype) or is_period_dtype(dtype):
cls = ObjectValuesExtensionBlock
elif is_extension_array_dtype(values):
@@ -3128,11 +3138,6 @@ def get_block_type(values, dtype=None):
cls = TimeDeltaBlock
elif issubclass(vtype, np.complexfloating):
cls = ComplexBlock
- elif issubclass(vtype, np.datetime64):
- assert not is_datetime64tz_dtype(values)
- cls = DatetimeBlock
- elif is_datetime64tz_dtype(values):
- cls = DatetimeTZBlock
elif issubclass(vtype, np.integer):
cls = IntBlock
elif dtype == np.bool_:
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 7cab52ddda87f..e11f0ee01e57c 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1539,17 +1539,20 @@ def wrapper(left, right):
raise TypeError("{typ} cannot perform the operation "
"{op}".format(typ=type(left).__name__, op=str_rep))
- elif (is_extension_array_dtype(left) or
- (is_extension_array_dtype(right) and not is_scalar(right))):
- # GH#22378 disallow scalar to exclude e.g. "category", "Int64"
- return dispatch_to_extension_op(op, left, right)
-
elif is_datetime64_dtype(left) or is_datetime64tz_dtype(left):
+ # Give dispatch_to_index_op a chance for tests like
+ # test_dt64_series_add_intlike, which the index dispatching handles
+ # specifically.
result = dispatch_to_index_op(op, left, right, pd.DatetimeIndex)
return construct_result(left, result,
index=left.index, name=res_name,
dtype=result.dtype)
+ elif (is_extension_array_dtype(left) or
+ (is_extension_array_dtype(right) and not is_scalar(right))):
+ # GH#22378 disallow scalar to exclude e.g. "category", "Int64"
+ return dispatch_to_extension_op(op, left, right)
+
elif is_timedelta64_dtype(left):
result = dispatch_to_index_op(op, left, right, pd.TimedeltaIndex)
return construct_result(left, result,
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index 8a833d8197381..48b64c2968219 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -16,6 +16,14 @@
class TestDatetimeArrayConstructor(object):
+ def test_from_pandas_array(self):
+ arr = pd.array(np.arange(5, dtype=np.int64)) * 3600 * 10**9
+
+ result = DatetimeArray._from_sequence(arr, freq='infer')
+
+ expected = pd.date_range('1970-01-01', periods=5, freq='H')._eadata
+ tm.assert_datetime_array_equal(result, expected)
+
def test_mismatched_timezone_raises(self):
arr = DatetimeArray(np.array(['2000-01-01T06:00:00'], dtype='M8[ns]'),
dtype=DatetimeTZDtype(tz='US/Central'))
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index a21d0104b0d04..6e006c1707604 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -3245,7 +3245,9 @@ def test_setitem(self):
b1 = df._data.blocks[1]
b2 = df._data.blocks[2]
assert b1.values.equals(b2.values)
- assert id(b1.values.values.base) != id(b2.values.values.base)
+ if b1.values.values.base is not None:
+ # base being None suffices to assure a copy was made
+ assert id(b1.values.values.base) != id(b2.values.values.base)
# with nan
df2 = df.copy()
| Docstrings, comments, and edits that are correct both before and after #24024;
the only substantive change is adding a test specifically for constructing a DatetimeArray from a PandasArray. | https://api.github.com/repos/pandas-dev/pandas/pulls/24543 | 2019-01-02T04:15:19Z | 2019-01-02T14:16:52Z | 2019-01-02T14:16:52Z | 2019-01-02T14:44:30Z
CLN: use idiomatic pandas_dtypes in pandas/dtypes/common.py | diff --git a/asv_bench/benchmarks/dtypes.py b/asv_bench/benchmarks/dtypes.py
new file mode 100644
index 0000000000000..e59154cd99965
--- /dev/null
+++ b/asv_bench/benchmarks/dtypes.py
@@ -0,0 +1,39 @@
+from pandas.api.types import pandas_dtype
+
+import numpy as np
+from .pandas_vb_common import (
+ numeric_dtypes, datetime_dtypes, string_dtypes, extension_dtypes)
+
+
+_numpy_dtypes = [np.dtype(dtype)
+ for dtype in (numeric_dtypes +
+ datetime_dtypes +
+ string_dtypes)]
+_dtypes = _numpy_dtypes + extension_dtypes
+
+
+class Dtypes(object):
+ params = (_dtypes +
+ list(map(lambda dt: dt.name, _dtypes)))
+ param_names = ['dtype']
+
+ def time_pandas_dtype(self, dtype):
+ pandas_dtype(dtype)
+
+
+class DtypesInvalid(object):
+ param_names = ['dtype']
+ params = ['scalar-string', 'scalar-int', 'list-string', 'array-string']
+ data_dict = {'scalar-string': 'foo',
+ 'scalar-int': 1,
+ 'list-string': ['foo'] * 1000,
+ 'array-string': np.array(['foo'] * 1000)}
+
+ def time_pandas_dtype_invalid(self, dtype):
+ try:
+ pandas_dtype(self.data_dict[dtype])
+ except TypeError:
+ pass
+
+
+from .pandas_vb_common import setup # noqa: F401
diff --git a/asv_bench/benchmarks/pandas_vb_common.py b/asv_bench/benchmarks/pandas_vb_common.py
index e7b25d567e03b..ab5e5fd3bfe10 100644
--- a/asv_bench/benchmarks/pandas_vb_common.py
+++ b/asv_bench/benchmarks/pandas_vb_common.py
@@ -2,6 +2,7 @@
from importlib import import_module
import numpy as np
+import pandas as pd
# Compatibility import for lib
for imp in ['pandas._libs.lib', 'pandas.lib']:
@@ -14,6 +15,15 @@
numeric_dtypes = [np.int64, np.int32, np.uint32, np.uint64, np.float32,
np.float64, np.int16, np.int8, np.uint16, np.uint8]
datetime_dtypes = [np.datetime64, np.timedelta64]
+string_dtypes = [np.object]
+extension_dtypes = [pd.Int8Dtype, pd.Int16Dtype,
+ pd.Int32Dtype, pd.Int64Dtype,
+ pd.UInt8Dtype, pd.UInt16Dtype,
+ pd.UInt32Dtype, pd.UInt64Dtype,
+ pd.CategoricalDtype,
+ pd.IntervalDtype,
+ pd.DatetimeTZDtype('ns', 'UTC'),
+ pd.PeriodDtype('D')]
def setup(*args, **kwargs):
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 3a04789b609f8..78673a607b206 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -430,7 +430,7 @@ Backwards incompatible API changes
- The column order of the resultant :class:`DataFrame` from :meth:`MultiIndex.to_frame` is now guaranteed to match the :attr:`MultiIndex.names` order. (:issue:`22420`)
- Incorrectly passing a :class:`DatetimeIndex` to :meth:`MultiIndex.from_tuples`, rather than a sequence of tuples, now raises a ``TypeError`` rather than a ``ValueError`` (:issue:`24024`)
- :func:`pd.offsets.generate_range` argument ``time_rule`` has been removed; use ``offset`` instead (:issue:`24157`)
-- In 0.23.x, pandas would raise a ``ValueError`` on a merge of a numeric column (e.g. ``int`` dtyped column) and an ``object`` dtyped column (:issue:`9780`). We have re-enabled the ability to merge ``object`` and other dtypes (:issue:`21681`)
+- In 0.23.x, pandas would raise a ``ValueError`` on a merge of a numeric column (e.g. ``int`` dtyped column) and an ``object`` dtyped column (:issue:`9780`). We have re-enabled the ability to merge ``object`` and other dtypes; pandas will still raise on a merge between a numeric and an ``object`` dtyped column that is composed only of strings (:issue:`21681`)
Percentage change on groupby
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/pandas/conftest.py b/pandas/conftest.py
index f383fb32810e7..30b24e00779a9 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -388,9 +388,14 @@ def tz_aware_fixture(request):
return request.param
+# ----------------------------------------------------------------
+# Dtypes
UNSIGNED_INT_DTYPES = ["uint8", "uint16", "uint32", "uint64"]
+UNSIGNED_EA_INT_DTYPES = ["UInt8", "UInt16", "UInt32", "UInt64"]
SIGNED_INT_DTYPES = [int, "int8", "int16", "int32", "int64"]
+SIGNED_EA_INT_DTYPES = ["Int8", "Int16", "Int32", "Int64"]
ALL_INT_DTYPES = UNSIGNED_INT_DTYPES + SIGNED_INT_DTYPES
+ALL_EA_INT_DTYPES = UNSIGNED_EA_INT_DTYPES + SIGNED_EA_INT_DTYPES
FLOAT_DTYPES = [float, "float32", "float64"]
COMPLEX_DTYPES = [complex, "complex64", "complex128"]
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index af2c05bbee7c2..f8f87ff1c96f1 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -32,6 +32,7 @@ class _IntegerDtype(ExtensionDtype):
The attributes name & type are set when these subclasses are created.
"""
name = None
+ base = None
type = None
na_value = np.nan
@@ -153,6 +154,7 @@ def coerce_to_array(values, dtype, mask=None, copy=False):
# Avoid DeprecationWarning from NumPy about np.dtype("Int64")
# https://github.com/numpy/numpy/pull/7476
dtype = dtype.lower()
+
if not issubclass(type(dtype), _IntegerDtype):
try:
dtype = _dtypes[str(np.dtype(dtype))]
@@ -655,7 +657,8 @@ def integer_arithmetic_method(self, other):
else:
name = dtype.capitalize()
classname = "{}Dtype".format(name)
- attributes_dict = {'type': getattr(np, dtype),
+ numpy_dtype = getattr(np, dtype)
+ attributes_dict = {'type': numpy_dtype,
'name': name}
dtype_type = register_extension_dtype(
type(classname, (_IntegerDtype, ), attributes_dict)
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index b55bad46580fe..a67bdffc2aeb7 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -9,9 +9,9 @@
from pandas.compat import PY3, string_types, text_type, to_str
from .common import (
- _INT64_DTYPE, _NS_DTYPE, _POSSIBLY_CAST_DTYPES, _TD_DTYPE, _string_dtypes,
- ensure_int8, ensure_int16, ensure_int32, ensure_int64, ensure_object,
- is_bool, is_bool_dtype, is_categorical_dtype, is_complex, is_complex_dtype,
+ _INT64_DTYPE, _NS_DTYPE, _POSSIBLY_CAST_DTYPES, _TD_DTYPE, ensure_int8,
+ ensure_int16, ensure_int32, ensure_int64, ensure_object, is_bool,
+ is_bool_dtype, is_categorical_dtype, is_complex, is_complex_dtype,
is_datetime64_dtype, is_datetime64_ns_dtype, is_datetime64tz_dtype,
is_datetime_or_timedelta_dtype, is_datetimelike, is_dtype_equal,
is_extension_array_dtype, is_extension_type, is_float, is_float_dtype,
@@ -544,7 +544,7 @@ def invalidate_string_dtypes(dtype_set):
"""Change string like dtypes to object for
``DataFrame.select_dtypes()``.
"""
- non_string_dtypes = dtype_set - _string_dtypes
+ non_string_dtypes = dtype_set - {np.dtype('S').type, np.dtype('<U').type}
if non_string_dtypes != dtype_set:
raise TypeError("string dtypes are not allowed, use 'object' instead")
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index b4c769fab88ad..507dacb5322a6 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -4,17 +4,15 @@
import numpy as np
from pandas._libs import algos, lib
-from pandas._libs.interval import Interval
-from pandas._libs.tslibs import Period, Timestamp, conversion
-from pandas.compat import PY3, PY36, binary_type, string_types, text_type
+from pandas._libs.tslibs import conversion
+from pandas.compat import PY3, PY36, string_types
from pandas.core.dtypes.dtypes import (
- CategoricalDtype, CategoricalDtypeType, DatetimeTZDtype, ExtensionDtype,
- IntervalDtype, PandasExtensionDtype, PeriodDtype, registry)
+ CategoricalDtype, DatetimeTZDtype, ExtensionDtype, IntervalDtype,
+ PandasExtensionDtype, PeriodDtype, registry)
from pandas.core.dtypes.generic import (
- ABCCategorical, ABCCategoricalIndex, ABCDateOffset, ABCDatetimeIndex,
- ABCIndexClass, ABCPeriodArray, ABCPeriodIndex, ABCSeries, ABCSparseArray,
- ABCSparseSeries)
+ ABCCategorical, ABCDateOffset, ABCDatetimeIndex, ABCIndexClass,
+ ABCPeriodArray, ABCPeriodIndex, ABCSeries)
from pandas.core.dtypes.inference import ( # noqa:F401
is_array_like, is_bool, is_complex, is_decimal, is_dict_like, is_file_like,
is_float, is_hashable, is_integer, is_interval, is_iterator, is_list_like,
@@ -116,6 +114,20 @@ def ensure_int64_or_float64(arr, copy=False):
return arr.astype('float64', copy=copy)
+def classes(*klasses):
+ """ evaluate if the tipo is a subclass of the klasses """
+ return lambda tipo: issubclass(tipo, klasses)
+
+
+def classes_and_not_datetimelike(*klasses):
+ """
+ evaluate if the tipo is a subclass of the klasses
+ and not a datetimelike
+ """
+ return lambda tipo: (issubclass(tipo, klasses) and
+ not issubclass(tipo, (np.datetime64, np.timedelta64)))
+
+
def is_object_dtype(arr_or_dtype):
"""
Check whether an array-like or dtype is of the object dtype.
@@ -142,11 +154,7 @@ def is_object_dtype(arr_or_dtype):
>>> is_object_dtype([1, 2, 3])
False
"""
-
- if arr_or_dtype is None:
- return False
- tipo = _get_dtype_type(arr_or_dtype)
- return issubclass(tipo, np.object_)
+ return _is_dtype_type(arr_or_dtype, classes(np.object_))
def is_sparse(arr):
@@ -420,13 +428,7 @@ def is_datetime64_dtype(arr_or_dtype):
False
"""
- if arr_or_dtype is None:
- return False
- try:
- tipo = _get_dtype_type(arr_or_dtype)
- except (TypeError, UnicodeEncodeError):
- return False
- return issubclass(tipo, np.datetime64)
+ return _is_dtype_type(arr_or_dtype, classes(np.datetime64))
def is_datetime64tz_dtype(arr_or_dtype):
@@ -495,13 +497,7 @@ def is_timedelta64_dtype(arr_or_dtype):
False
"""
- if arr_or_dtype is None:
- return False
- try:
- tipo = _get_dtype_type(arr_or_dtype)
- except (TypeError, ValueError, SyntaxError):
- return False
- return issubclass(tipo, np.timedelta64)
+ return _is_dtype_type(arr_or_dtype, classes(np.timedelta64))
def is_period_dtype(arr_or_dtype):
@@ -635,14 +631,9 @@ def is_string_dtype(arr_or_dtype):
"""
# TODO: gh-15585: consider making the checks stricter.
-
- if arr_or_dtype is None:
- return False
- try:
- dtype = _get_dtype(arr_or_dtype)
+ def condition(dtype):
return dtype.kind in ('O', 'S', 'U') and not is_period_dtype(dtype)
- except TypeError:
- return False
+ return _is_dtype(arr_or_dtype, condition)
def is_period_arraylike(arr):
@@ -832,6 +823,11 @@ def is_any_int_dtype(arr_or_dtype):
This function is internal and should not be exposed in the public API.
+ .. versionchanged:: 0.24.0
+
+    The nullable Integer dtypes (e.g. pandas.Int64Dtype) are also
+    considered integer dtypes by this function.
+
Parameters
----------
arr_or_dtype : array-like
@@ -865,10 +861,8 @@ def is_any_int_dtype(arr_or_dtype):
False
"""
- if arr_or_dtype is None:
- return False
- tipo = _get_dtype_type(arr_or_dtype)
- return issubclass(tipo, np.integer)
+ return _is_dtype_type(
+ arr_or_dtype, classes(np.integer, np.timedelta64))
def is_integer_dtype(arr_or_dtype):
@@ -877,6 +871,11 @@ def is_integer_dtype(arr_or_dtype):
    Unlike in `is_any_int_dtype`, timedelta64 instances will return False.
+ .. versionchanged:: 0.24.0
+
+    The nullable Integer dtypes (e.g. pandas.Int64Dtype) are also
+    considered integer dtypes by this function.
+
Parameters
----------
arr_or_dtype : array-like
@@ -897,6 +896,12 @@ def is_integer_dtype(arr_or_dtype):
False
>>> is_integer_dtype(np.uint64)
True
+ >>> is_integer_dtype('int8')
+ True
+ >>> is_integer_dtype('Int8')
+ True
+ >>> is_integer_dtype(pd.Int8Dtype)
+ True
>>> is_integer_dtype(np.datetime64)
False
>>> is_integer_dtype(np.timedelta64)
@@ -911,11 +916,8 @@ def is_integer_dtype(arr_or_dtype):
False
"""
- if arr_or_dtype is None:
- return False
- tipo = _get_dtype_type(arr_or_dtype)
- return (issubclass(tipo, np.integer) and
- not issubclass(tipo, (np.datetime64, np.timedelta64)))
+ return _is_dtype_type(
+ arr_or_dtype, classes_and_not_datetimelike(np.integer))
def is_signed_integer_dtype(arr_or_dtype):
@@ -924,6 +926,11 @@ def is_signed_integer_dtype(arr_or_dtype):
    Unlike in `is_any_int_dtype`, timedelta64 instances will return False.
+ .. versionchanged:: 0.24.0
+
+    The nullable Integer dtypes (e.g. pandas.Int64Dtype) are also
+    considered integer dtypes by this function.
+
Parameters
----------
arr_or_dtype : array-like
@@ -944,6 +951,12 @@ def is_signed_integer_dtype(arr_or_dtype):
False
>>> is_signed_integer_dtype(np.uint64) # unsigned
False
+ >>> is_signed_integer_dtype('int8')
+ True
+ >>> is_signed_integer_dtype('Int8')
+ True
+    >>> is_signed_integer_dtype(pd.Int8Dtype)
+ True
>>> is_signed_integer_dtype(np.datetime64)
False
>>> is_signed_integer_dtype(np.timedelta64)
@@ -960,17 +973,19 @@ def is_signed_integer_dtype(arr_or_dtype):
False
"""
- if arr_or_dtype is None:
- return False
- tipo = _get_dtype_type(arr_or_dtype)
- return (issubclass(tipo, np.signedinteger) and
- not issubclass(tipo, (np.datetime64, np.timedelta64)))
+ return _is_dtype_type(
+ arr_or_dtype, classes_and_not_datetimelike(np.signedinteger))
def is_unsigned_integer_dtype(arr_or_dtype):
"""
Check whether the provided array or dtype is of an unsigned integer dtype.
+ .. versionchanged:: 0.24.0
+
+ The nullable Integer dtypes (e.g. pandas.UInt64Dtype) are also
+    considered integer dtypes by this function.
+
Parameters
----------
arr_or_dtype : array-like
@@ -991,6 +1006,12 @@ def is_unsigned_integer_dtype(arr_or_dtype):
False
>>> is_unsigned_integer_dtype(np.uint64)
True
+ >>> is_unsigned_integer_dtype('uint8')
+ True
+ >>> is_unsigned_integer_dtype('UInt8')
+ True
+ >>> is_unsigned_integer_dtype(pd.UInt8Dtype)
+ True
>>> is_unsigned_integer_dtype(np.array(['a', 'b']))
False
>>> is_unsigned_integer_dtype(pd.Series([1, 2])) # signed
@@ -1000,12 +1021,8 @@ def is_unsigned_integer_dtype(arr_or_dtype):
>>> is_unsigned_integer_dtype(np.array([1, 2], dtype=np.uint32))
True
"""
-
- if arr_or_dtype is None:
- return False
- tipo = _get_dtype_type(arr_or_dtype)
- return (issubclass(tipo, np.unsignedinteger) and
- not issubclass(tipo, (np.datetime64, np.timedelta64)))
+ return _is_dtype_type(
+ arr_or_dtype, classes_and_not_datetimelike(np.unsignedinteger))
def is_int64_dtype(arr_or_dtype):
@@ -1035,6 +1052,12 @@ def is_int64_dtype(arr_or_dtype):
False
>>> is_int64_dtype(np.int64)
True
+ >>> is_int64_dtype('int8')
+ False
+ >>> is_int64_dtype('Int8')
+ False
+ >>> is_int64_dtype(pd.Int64Dtype)
+ True
>>> is_int64_dtype(float)
False
>>> is_int64_dtype(np.uint64) # unsigned
@@ -1049,10 +1072,7 @@ def is_int64_dtype(arr_or_dtype):
False
"""
- if arr_or_dtype is None:
- return False
- tipo = _get_dtype_type(arr_or_dtype)
- return issubclass(tipo, np.int64)
+ return _is_dtype_type(arr_or_dtype, classes(np.int64))
def is_datetime64_any_dtype(arr_or_dtype):
@@ -1172,14 +1192,7 @@ def is_timedelta64_ns_dtype(arr_or_dtype):
>>> is_timedelta64_ns_dtype(np.array([1, 2], dtype=np.timedelta64))
False
"""
-
- if arr_or_dtype is None:
- return False
- try:
- tipo = _get_dtype(arr_or_dtype)
- return tipo == _TD_DTYPE
- except TypeError:
- return False
+ return _is_dtype(arr_or_dtype, lambda dtype: dtype == _TD_DTYPE)
def is_datetime_or_timedelta_dtype(arr_or_dtype):
@@ -1217,10 +1230,8 @@ def is_datetime_or_timedelta_dtype(arr_or_dtype):
True
"""
- if arr_or_dtype is None:
- return False
- tipo = _get_dtype_type(arr_or_dtype)
- return issubclass(tipo, (np.datetime64, np.timedelta64))
+ return _is_dtype_type(
+ arr_or_dtype, classes(np.datetime64, np.timedelta64))
def _is_unorderable_exception(e):
@@ -1495,11 +1506,8 @@ def is_numeric_dtype(arr_or_dtype):
False
"""
- if arr_or_dtype is None:
- return False
- tipo = _get_dtype_type(arr_or_dtype)
- return (issubclass(tipo, (np.number, np.bool_)) and
- not issubclass(tipo, (np.datetime64, np.timedelta64)))
+ return _is_dtype_type(
+ arr_or_dtype, classes_and_not_datetimelike(np.number, np.bool_))
def is_string_like_dtype(arr_or_dtype):
@@ -1530,13 +1538,8 @@ def is_string_like_dtype(arr_or_dtype):
False
"""
- if arr_or_dtype is None:
- return False
- try:
- dtype = _get_dtype(arr_or_dtype)
- return dtype.kind in ('S', 'U')
- except TypeError:
- return False
+ return _is_dtype(
+ arr_or_dtype, lambda dtype: dtype.kind in ('S', 'U'))
def is_float_dtype(arr_or_dtype):
@@ -1569,11 +1572,7 @@ def is_float_dtype(arr_or_dtype):
>>> is_float_dtype(pd.Index([1, 2.]))
True
"""
-
- if arr_or_dtype is None:
- return False
- tipo = _get_dtype_type(arr_or_dtype)
- return issubclass(tipo, np.floating)
+ return _is_dtype_type(arr_or_dtype, classes(np.floating))
def is_bool_dtype(arr_or_dtype):
@@ -1618,14 +1617,10 @@ def is_bool_dtype(arr_or_dtype):
if arr_or_dtype is None:
return False
try:
- tipo = _get_dtype_type(arr_or_dtype)
- except ValueError:
- # this isn't even a dtype
+ dtype = _get_dtype(arr_or_dtype)
+ except TypeError:
return False
- if isinstance(arr_or_dtype, (ABCCategorical, ABCCategoricalIndex)):
- arr_or_dtype = arr_or_dtype.dtype
-
if isinstance(arr_or_dtype, CategoricalDtype):
arr_or_dtype = arr_or_dtype.categories
# now we use the special definition for Index
@@ -1642,7 +1637,7 @@ def is_bool_dtype(arr_or_dtype):
dtype = getattr(arr_or_dtype, 'dtype', arr_or_dtype)
return dtype._is_boolean
- return issubclass(tipo, np.bool_)
+ return issubclass(dtype.type, np.bool_)
def is_extension_type(arr):
@@ -1761,10 +1756,32 @@ def is_complex_dtype(arr_or_dtype):
True
"""
+ return _is_dtype_type(arr_or_dtype, classes(np.complexfloating))
+
+
+def _is_dtype(arr_or_dtype, condition):
+ """
+ Return a boolean if the condition is satisfied for the arr_or_dtype.
+
+ Parameters
+ ----------
+ arr_or_dtype : array-like, str, np.dtype, or ExtensionArrayType
+ The array-like or dtype object whose dtype we want to extract.
+ condition : callable[Union[np.dtype, ExtensionDtype]]
+
+ Returns
+ -------
+ bool
+
+ """
+
if arr_or_dtype is None:
return False
- tipo = _get_dtype_type(arr_or_dtype)
- return issubclass(tipo, np.complexfloating)
+ try:
+ dtype = _get_dtype(arr_or_dtype)
+ except (TypeError, ValueError, UnicodeEncodeError):
+ return False
+ return condition(dtype)
def _get_dtype(arr_or_dtype):
@@ -1787,95 +1804,70 @@ def _get_dtype(arr_or_dtype):
TypeError : The passed in object is None.
"""
- # TODO(extension)
- # replace with pandas_dtype
-
if arr_or_dtype is None:
raise TypeError("Cannot deduce dtype from null object")
- if isinstance(arr_or_dtype, np.dtype):
+
+ # fastpath
+ elif isinstance(arr_or_dtype, np.dtype):
return arr_or_dtype
elif isinstance(arr_or_dtype, type):
return np.dtype(arr_or_dtype)
- elif isinstance(arr_or_dtype, ExtensionDtype):
- return arr_or_dtype
- elif isinstance(arr_or_dtype, DatetimeTZDtype):
- return arr_or_dtype
- elif isinstance(arr_or_dtype, PeriodDtype):
- return arr_or_dtype
- elif isinstance(arr_or_dtype, IntervalDtype):
- return arr_or_dtype
- elif isinstance(arr_or_dtype, string_types):
- if is_categorical_dtype(arr_or_dtype):
- return CategoricalDtype.construct_from_string(arr_or_dtype)
- elif is_datetime64tz_dtype(arr_or_dtype):
- return DatetimeTZDtype.construct_from_string(arr_or_dtype)
- elif is_period_dtype(arr_or_dtype):
- return PeriodDtype.construct_from_string(arr_or_dtype)
- elif is_interval_dtype(arr_or_dtype):
- return IntervalDtype.construct_from_string(arr_or_dtype)
- elif isinstance(arr_or_dtype, (ABCCategorical, ABCCategoricalIndex,
- ABCSparseArray, ABCSparseSeries)):
- return arr_or_dtype.dtype
- if hasattr(arr_or_dtype, 'dtype'):
+ # if we have an array-like
+ elif hasattr(arr_or_dtype, 'dtype'):
arr_or_dtype = arr_or_dtype.dtype
- return np.dtype(arr_or_dtype)
+ return pandas_dtype(arr_or_dtype)
-def _get_dtype_type(arr_or_dtype):
+
+def _is_dtype_type(arr_or_dtype, condition):
"""
- Get the type (NOT dtype) instance associated with
- an array or dtype object.
+ Return a boolean if the condition is satisfied for the arr_or_dtype.
Parameters
----------
arr_or_dtype : array-like
- The array-like or dtype object whose type we want to extract.
+ The array-like or dtype object whose dtype we want to extract.
+ condition : callable[Union[np.dtype, ExtensionDtypeType]]
Returns
-------
- obj_type : The extract type instance from the
- passed in array or dtype object.
+    bool : if the condition is satisfied for the arr_or_dtype
"""
- # TODO(extension)
- # replace with pandas_dtype
+ if arr_or_dtype is None:
+ return condition(type(None))
+
+ # fastpath
if isinstance(arr_or_dtype, np.dtype):
- return arr_or_dtype.type
+ return condition(arr_or_dtype.type)
elif isinstance(arr_or_dtype, type):
- return np.dtype(arr_or_dtype).type
- elif isinstance(arr_or_dtype, CategoricalDtype):
- return CategoricalDtypeType
- elif isinstance(arr_or_dtype, DatetimeTZDtype):
- return Timestamp
- elif isinstance(arr_or_dtype, IntervalDtype):
- return Interval
- elif isinstance(arr_or_dtype, PeriodDtype):
- return Period
- elif isinstance(arr_or_dtype, string_types):
- if is_categorical_dtype(arr_or_dtype):
- return CategoricalDtypeType
- elif is_datetime64tz_dtype(arr_or_dtype):
- return Timestamp
- elif is_period_dtype(arr_or_dtype):
- return Period
- elif is_interval_dtype(arr_or_dtype):
- return Interval
- return _get_dtype_type(np.dtype(arr_or_dtype))
- else:
- from pandas.core.arrays.sparse import SparseDtype
- if isinstance(arr_or_dtype, (ABCSparseSeries,
- ABCSparseArray,
- SparseDtype)):
- dtype = getattr(arr_or_dtype, 'dtype', arr_or_dtype)
- return dtype.type
+ if issubclass(arr_or_dtype, (PandasExtensionDtype, ExtensionDtype)):
+ arr_or_dtype = arr_or_dtype.type
+ return condition(np.dtype(arr_or_dtype).type)
+ elif arr_or_dtype is None:
+ return condition(type(None))
+
+ # if we have an array-like
+ if hasattr(arr_or_dtype, 'dtype'):
+ arr_or_dtype = arr_or_dtype.dtype
+
+    # a list-like cannot possibly be a dtype
+ elif is_list_like(arr_or_dtype):
+ return condition(type(None))
+
try:
- return arr_or_dtype.dtype.type
- except AttributeError:
- return type(None)
+ tipo = pandas_dtype(arr_or_dtype).type
+ except (TypeError, ValueError, UnicodeEncodeError):
+ if is_scalar(arr_or_dtype):
+ return condition(type(None))
+
+ return False
+
+ return condition(tipo)
-def _get_dtype_from_object(dtype):
+def infer_dtype_from_object(dtype):
"""
Get a numpy dtype.type-style object for a dtype object.
@@ -1898,18 +1890,26 @@ def _get_dtype_from_object(dtype):
if isinstance(dtype, type) and issubclass(dtype, np.generic):
# Type object from a dtype
return dtype
- elif is_categorical(dtype):
- return CategoricalDtype().type
- elif is_datetime64tz_dtype(dtype):
- return DatetimeTZDtype(dtype).type
- elif isinstance(dtype, np.dtype): # dtype object
+ elif isinstance(dtype, (np.dtype, PandasExtensionDtype, ExtensionDtype)):
+ # dtype object
try:
_validate_date_like_dtype(dtype)
except TypeError:
# Should still pass if we don't have a date-like
pass
return dtype.type
+
+ try:
+ dtype = pandas_dtype(dtype)
+ except TypeError:
+ pass
+
+ if is_extension_array_dtype(dtype):
+ return dtype.type
elif isinstance(dtype, string_types):
+
+ # TODO(jreback)
+ # should deprecate these
if dtype in ['datetimetz', 'datetime64tz']:
return DatetimeTZDtype.type
elif dtype in ['period']:
@@ -1917,9 +1917,8 @@ def _get_dtype_from_object(dtype):
if dtype == 'datetime' or dtype == 'timedelta':
dtype += '64'
-
try:
- return _get_dtype_from_object(getattr(np, dtype))
+ return infer_dtype_from_object(getattr(np, dtype))
except (AttributeError, TypeError):
# Handles cases like _get_dtype(int) i.e.,
# Python objects that are valid dtypes
@@ -1929,7 +1928,7 @@ def _get_dtype_from_object(dtype):
# further handle internal types
pass
- return _get_dtype_from_object(np.dtype(dtype))
+ return infer_dtype_from_object(np.dtype(dtype))
def _validate_date_like_dtype(dtype):
@@ -1957,10 +1956,6 @@ def _validate_date_like_dtype(dtype):
raise ValueError(msg.format(name=dtype.name, type=dtype.type.__name__))
-_string_dtypes = frozenset(map(_get_dtype_from_object, (binary_type,
- text_type)))
-
-
def pandas_dtype(dtype):
"""
Converts input into a pandas only dtype object or a numpy dtype object.
@@ -1980,7 +1975,7 @@ def pandas_dtype(dtype):
# short-circuit
if isinstance(dtype, np.ndarray):
return dtype.dtype
- elif isinstance(dtype, np.dtype):
+ elif isinstance(dtype, (np.dtype, PandasExtensionDtype, ExtensionDtype)):
return dtype
# registered extension types
@@ -1988,10 +1983,6 @@ def pandas_dtype(dtype):
if result is not None:
return result
- # un-registered extension types
- elif isinstance(dtype, (PandasExtensionDtype, ExtensionDtype)):
- return dtype
-
# try a numpy dtype
# raise a consistent TypeError if failed
try:
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index e6967ed2a4d3d..aada777decaa7 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -9,8 +9,7 @@
from pandas.core.dtypes.common import (
_NS_DTYPE, _TD_DTYPE, is_bool_dtype, is_categorical_dtype,
is_datetime64_dtype, is_datetime64tz_dtype, is_dtype_equal,
- is_extension_array_dtype, is_interval_dtype, is_object_dtype,
- is_period_dtype, is_sparse, is_timedelta64_dtype)
+ is_extension_array_dtype, is_object_dtype, is_sparse, is_timedelta64_dtype)
from pandas.core.dtypes.generic import (
ABCDatetimeArray, ABCDatetimeIndex, ABCIndexClass, ABCPeriodIndex,
ABCRangeIndex, ABCSparseDataFrame, ABCTimedeltaIndex)
@@ -51,9 +50,7 @@ def get_dtype_kinds(l):
typ = 'object'
elif is_bool_dtype(dtype):
typ = 'bool'
- elif is_period_dtype(dtype):
- typ = str(arr.dtype)
- elif is_interval_dtype(dtype):
+ elif is_extension_array_dtype(dtype):
typ = str(arr.dtype)
else:
typ = dtype.kind
@@ -136,7 +133,6 @@ def is_nonempty(x):
# np.concatenate which has them both implemented is compiled.
typs = get_dtype_kinds(to_concat)
-
_contains_datetime = any(typ.startswith('datetime') for typ in typs)
_contains_period = any(typ.startswith('period') for typ in typs)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 76d3d704497b4..a50def7357826 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -60,7 +60,7 @@
is_scalar,
is_dtype_equal,
needs_i8_conversion,
- _get_dtype_from_object,
+ infer_dtype_from_object,
ensure_float64,
ensure_int64,
ensure_platform_int,
@@ -3292,7 +3292,7 @@ def _get_info_slice(obj, indexer):
# convert the myriad valid dtypes object to a single representation
include, exclude = map(
- lambda x: frozenset(map(_get_dtype_from_object, x)), selection)
+ lambda x: frozenset(map(infer_dtype_from_object, x)), selection)
for dtypes in (include, exclude):
invalidate_string_dtypes(dtypes)
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index 9d6a56200df6e..379464f4fced6 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -7,8 +7,8 @@
from pandas.util._decorators import Appender, cache_readonly
from pandas.core.dtypes.common import (
- is_bool, is_bool_dtype, is_dtype_equal, is_float, is_integer_dtype,
- is_scalar, needs_i8_conversion, pandas_dtype)
+ is_bool, is_bool_dtype, is_dtype_equal, is_extension_array_dtype, is_float,
+ is_integer_dtype, is_scalar, needs_i8_conversion, pandas_dtype)
import pandas.core.dtypes.concat as _concat
from pandas.core.dtypes.missing import isna
@@ -328,7 +328,9 @@ def astype(self, dtype, copy=True):
msg = ('Cannot convert Float64Index to dtype {dtype}; integer '
'values are required for conversion').format(dtype=dtype)
raise TypeError(msg)
- elif is_integer_dtype(dtype) and self.hasnans:
+ elif (is_integer_dtype(dtype) and
+ not is_extension_array_dtype(dtype)) and self.hasnans:
+ # TODO(jreback); this can change once we have an EA Index type
# GH 13149
raise ValueError('Cannot convert NA to integer')
return super(Float64Index, self).astype(dtype, copy=copy)
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index 067b95f9d8847..4a16707a376e9 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -11,8 +11,8 @@
from pandas.core.dtypes.cast import maybe_promote
from pandas.core.dtypes.common import (
_get_dtype, is_categorical_dtype, is_datetime64_dtype,
- is_datetime64tz_dtype, is_float_dtype, is_numeric_dtype, is_sparse,
- is_timedelta64_dtype)
+ is_datetime64tz_dtype, is_extension_array_dtype, is_float_dtype,
+ is_numeric_dtype, is_sparse, is_timedelta64_dtype)
import pandas.core.dtypes.concat as _concat
from pandas.core.dtypes.missing import isna
@@ -306,6 +306,8 @@ def get_empty_dtype_and_na(join_units):
upcast_cls = 'timedelta'
elif is_sparse(dtype):
upcast_cls = dtype.subtype.name
+ elif is_extension_array_dtype(dtype):
+ upcast_cls = 'object'
elif is_float_dtype(dtype) or is_numeric_dtype(dtype):
upcast_cls = dtype.name
else:
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index f62a4f8b5fba2..878a417b46674 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -560,11 +560,12 @@ def sanitize_array(data, index, dtype=None, copy=False,
# possibility of nan -> garbage
if is_float_dtype(data.dtype) and is_integer_dtype(dtype):
- if not isna(data).any():
+ try:
subarr = _try_cast(data, True, dtype, copy,
- raise_cast_failure)
- elif copy:
- subarr = data.copy()
+ True)
+ except ValueError:
+ if copy:
+ subarr = data.copy()
else:
subarr = _try_cast(data, True, dtype, copy, raise_cast_failure)
elif isinstance(data, Index):
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 5fcf19b0b12e7..f0f77b4977610 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -7,13 +7,28 @@
import pandas.core.dtypes.common as com
from pandas.core.dtypes.dtypes import (
- CategoricalDtype, DatetimeTZDtype, IntervalDtype, PeriodDtype)
+ CategoricalDtype, CategoricalDtypeType, DatetimeTZDtype, IntervalDtype,
+ PeriodDtype)
import pandas as pd
+from pandas.conftest import (
+ ALL_EA_INT_DTYPES, ALL_INT_DTYPES, SIGNED_EA_INT_DTYPES, SIGNED_INT_DTYPES,
+ UNSIGNED_EA_INT_DTYPES, UNSIGNED_INT_DTYPES)
from pandas.core.sparse.api import SparseDtype
import pandas.util.testing as tm
+# EA & Actual Dtypes
+def to_ea_dtypes(dtypes):
+ """ convert list of string dtypes to EA dtype """
+ return [getattr(pd, dt + 'Dtype') for dt in dtypes]
+
+
+def to_numpy_dtypes(dtypes):
+ """ convert list of string dtypes to numpy dtype """
+ return [getattr(np, dt) for dt in dtypes if isinstance(dt, str)]
+
+
class TestPandasDtype(object):
# Passing invalid dtype, both as a string or object, must raise TypeError
@@ -278,58 +293,80 @@ def test_is_datetimelike():
assert com.is_datetimelike(s)
-def test_is_integer_dtype():
- assert not com.is_integer_dtype(str)
- assert not com.is_integer_dtype(float)
- assert not com.is_integer_dtype(np.datetime64)
- assert not com.is_integer_dtype(np.timedelta64)
- assert not com.is_integer_dtype(pd.Index([1, 2.]))
- assert not com.is_integer_dtype(np.array(['a', 'b']))
- assert not com.is_integer_dtype(np.array([], dtype=np.timedelta64))
-
- assert com.is_integer_dtype(int)
- assert com.is_integer_dtype(np.uint64)
- assert com.is_integer_dtype(pd.Series([1, 2]))
-
-
-def test_is_signed_integer_dtype():
- assert not com.is_signed_integer_dtype(str)
- assert not com.is_signed_integer_dtype(float)
- assert not com.is_signed_integer_dtype(np.uint64)
- assert not com.is_signed_integer_dtype(np.datetime64)
- assert not com.is_signed_integer_dtype(np.timedelta64)
- assert not com.is_signed_integer_dtype(pd.Index([1, 2.]))
- assert not com.is_signed_integer_dtype(np.array(['a', 'b']))
- assert not com.is_signed_integer_dtype(np.array([1, 2], dtype=np.uint32))
- assert not com.is_signed_integer_dtype(np.array([], dtype=np.timedelta64))
-
- assert com.is_signed_integer_dtype(int)
- assert com.is_signed_integer_dtype(pd.Series([1, 2]))
-
-
-def test_is_unsigned_integer_dtype():
- assert not com.is_unsigned_integer_dtype(str)
- assert not com.is_unsigned_integer_dtype(int)
- assert not com.is_unsigned_integer_dtype(float)
- assert not com.is_unsigned_integer_dtype(pd.Series([1, 2]))
- assert not com.is_unsigned_integer_dtype(pd.Index([1, 2.]))
- assert not com.is_unsigned_integer_dtype(np.array(['a', 'b']))
-
- assert com.is_unsigned_integer_dtype(np.uint64)
- assert com.is_unsigned_integer_dtype(np.array([1, 2], dtype=np.uint32))
-
-
-def test_is_int64_dtype():
- assert not com.is_int64_dtype(str)
- assert not com.is_int64_dtype(float)
- assert not com.is_int64_dtype(np.int32)
- assert not com.is_int64_dtype(np.uint64)
- assert not com.is_int64_dtype(pd.Index([1, 2.]))
- assert not com.is_int64_dtype(np.array(['a', 'b']))
- assert not com.is_int64_dtype(np.array([1, 2], dtype=np.uint32))
-
- assert com.is_int64_dtype(np.int64)
- assert com.is_int64_dtype(np.array([1, 2], dtype=np.int64))
+@pytest.mark.parametrize(
+ 'dtype', [
+ pd.Series([1, 2])] +
+ ALL_INT_DTYPES + to_numpy_dtypes(ALL_INT_DTYPES) +
+ ALL_EA_INT_DTYPES + to_ea_dtypes(ALL_EA_INT_DTYPES))
+def test_is_integer_dtype(dtype):
+ assert com.is_integer_dtype(dtype)
+
+
+@pytest.mark.parametrize(
+ 'dtype', [str, float, np.datetime64, np.timedelta64,
+ pd.Index([1, 2.]), np.array(['a', 'b']),
+ np.array([], dtype=np.timedelta64)])
+def test_is_not_integer_dtype(dtype):
+ assert not com.is_integer_dtype(dtype)
+
+
+@pytest.mark.parametrize(
+ 'dtype', [
+ pd.Series([1, 2])] +
+ SIGNED_INT_DTYPES + to_numpy_dtypes(SIGNED_INT_DTYPES) +
+ SIGNED_EA_INT_DTYPES + to_ea_dtypes(SIGNED_EA_INT_DTYPES))
+def test_is_signed_integer_dtype(dtype):
+ assert com.is_integer_dtype(dtype)
+
+
+@pytest.mark.parametrize(
+ 'dtype',
+ [
+ str, float, np.datetime64, np.timedelta64,
+ pd.Index([1, 2.]), np.array(['a', 'b']),
+ np.array([], dtype=np.timedelta64)] +
+ UNSIGNED_INT_DTYPES + to_numpy_dtypes(UNSIGNED_INT_DTYPES) +
+ UNSIGNED_EA_INT_DTYPES + to_ea_dtypes(UNSIGNED_EA_INT_DTYPES))
+def test_is_not_signed_integer_dtype(dtype):
+ assert not com.is_signed_integer_dtype(dtype)
+
+
+@pytest.mark.parametrize(
+ 'dtype',
+ [pd.Series([1, 2], dtype=np.uint32)] +
+ UNSIGNED_INT_DTYPES + to_numpy_dtypes(UNSIGNED_INT_DTYPES) +
+ UNSIGNED_EA_INT_DTYPES + to_ea_dtypes(UNSIGNED_EA_INT_DTYPES))
+def test_is_unsigned_integer_dtype(dtype):
+ assert com.is_unsigned_integer_dtype(dtype)
+
+
+@pytest.mark.parametrize(
+ 'dtype',
+ [
+ str, float, np.datetime64, np.timedelta64,
+ pd.Index([1, 2.]), np.array(['a', 'b']),
+ np.array([], dtype=np.timedelta64)] +
+ SIGNED_INT_DTYPES + to_numpy_dtypes(SIGNED_INT_DTYPES) +
+ SIGNED_EA_INT_DTYPES + to_ea_dtypes(SIGNED_EA_INT_DTYPES))
+def test_is_not_unsigned_integer_dtype(dtype):
+ assert not com.is_unsigned_integer_dtype(dtype)
+
+
+@pytest.mark.parametrize(
+ 'dtype',
+ [np.int64, np.array([1, 2], dtype=np.int64), 'Int64', pd.Int64Dtype])
+def test_is_int64_dtype(dtype):
+ assert com.is_int64_dtype(dtype)
+
+
+@pytest.mark.parametrize(
+ 'dtype',
+ [
+ str, float, np.int32, np.uint64, pd.Index([1, 2.]),
+ np.array(['a', 'b']), np.array([1, 2], dtype=np.uint32),
+ 'int8', 'Int8', pd.Int8Dtype])
+def test_is_not_int64_dtype(dtype):
+ assert not com.is_int64_dtype(dtype)
def test_is_datetime64_any_dtype():
@@ -375,6 +412,8 @@ def test_is_datetime_or_timedelta_dtype():
assert not com.is_datetime_or_timedelta_dtype(str)
assert not com.is_datetime_or_timedelta_dtype(pd.Series([1, 2]))
assert not com.is_datetime_or_timedelta_dtype(np.array(['a', 'b']))
+
+    # TODO(jreback), this is slightly suspect
assert not com.is_datetime_or_timedelta_dtype(
DatetimeTZDtype("ns", "US/Eastern"))
@@ -588,11 +627,11 @@ def test__get_dtype_fails(input_param):
(pd.Series(['a', 'b']), np.object_),
(pd.Index([1, 2], dtype='int64'), np.int64),
(pd.Index(['a', 'b']), np.object_),
- ('category', com.CategoricalDtypeType),
- (pd.Categorical(['a', 'b']).dtype, com.CategoricalDtypeType),
- (pd.Categorical(['a', 'b']), com.CategoricalDtypeType),
- (pd.CategoricalIndex(['a', 'b']).dtype, com.CategoricalDtypeType),
- (pd.CategoricalIndex(['a', 'b']), com.CategoricalDtypeType),
+ ('category', CategoricalDtypeType),
+ (pd.Categorical(['a', 'b']).dtype, CategoricalDtypeType),
+ (pd.Categorical(['a', 'b']), CategoricalDtypeType),
+ (pd.CategoricalIndex(['a', 'b']).dtype, CategoricalDtypeType),
+ (pd.CategoricalIndex(['a', 'b']), CategoricalDtypeType),
(pd.DatetimeIndex([1, 2]), np.datetime64),
(pd.DatetimeIndex([1, 2]).dtype, np.datetime64),
('<M8[ns]', np.datetime64),
@@ -610,5 +649,5 @@ def test__get_dtype_fails(input_param):
(1.2, type(None)),
(pd.DataFrame([1, 2]), type(None)), # composite dtype
])
-def test__get_dtype_type(input_param, result):
- assert com._get_dtype_type(input_param) == result
+def test__is_dtype_type(input_param, result):
+ assert com._is_dtype_type(input_param, lambda tipo: tipo == result)
| closes #24593
Some benchmarks of ``pandas_dtype`` construction from a dtype object & strings; the only thing slightly surprising is ``Period[D]``
```
[ 87.50%] ··· ============================================ =============
dtype
-------------------------------------------- -------------
dtype('int64') 463±20ns
dtype('int32') 450±20ns
dtype('uint32') 444±4ns
dtype('uint64') 484±20ns
dtype('float32') 494±30ns
dtype('float64') 507±30ns
dtype('int16') 471±30ns
dtype('int8') 506±30ns
dtype('uint16') 505±40ns
dtype('uint8') 485±20ns
dtype('<M8') 451±20ns
dtype('<m8') 480±20ns
dtype('O') 627±200ns
pandas.core.arrays.integer.Int8Dtype 995±60ns
pandas.core.arrays.integer.Int16Dtype 944±60ns
pandas.core.arrays.integer.Int32Dtype 939±70ns
pandas.core.arrays.integer.Int64Dtype 988±30ns
pandas.core.arrays.integer.UInt8Dtype 924±50ns
pandas.core.arrays.integer.UInt16Dtype 954±60ns
pandas.core.arrays.integer.UInt32Dtype 966±70ns
pandas.core.arrays.integer.UInt64Dtype 1.00±0.06μs
pandas.core.dtypes.dtypes.CategoricalDtype 978±30ns
pandas.core.dtypes.dtypes.IntervalDtype 929±300ns
datetime64[ns, UTC] 1.52±0.03μs
period[D] 958±9ns
int64 16.1±0.1μs
int32 16.1±0.6μs
uint32 16.0±0.1μs
uint64 16.2±0.3μs
float32 15.8±0.4μs
float64 15.7±0.6μs
int16 16.0±0.2μs
int8 15.7±0.06μs
uint16 15.9±0.08μs
uint8 17.3±0.6μs
datetime64 16.2±0.3μs
timedelta64 16.0±0.1μs
object 16.0±0.2μs
Int8 6.24±0.1μs
Int16 7.41±0.2μs
Int32 8.15±0.04μs
Int64 9.39±0.1μs
UInt8 10.1±0.1μs
UInt16 11.0±0.04μs
UInt32 11.8±0.5μs
UInt64 12.7±0.08μs
category 2.62±0.03μs
interval 6.72±0.2μs
datetime64[ns, UTC] 2.93±0.1μs
period[D] 52.6±0.6μs
============================================ =============
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/24541 | 2019-01-02T01:02:23Z | 2019-01-04T13:55:44Z | 2019-01-04T13:55:44Z | 2019-01-04T13:55:44Z |
Cython language level 3 | diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
index 71e25c3955a6d..c1fc0062dff09 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -57,10 +57,10 @@ cdef inline float64_t median_linear(float64_t* a, int n) nogil:
n -= na_count
if n % 2:
- result = kth_smallest_c( a, n / 2, n)
+ result = kth_smallest_c( a, n // 2, n)
else:
- result = (kth_smallest_c(a, n / 2, n) +
- kth_smallest_c(a, n / 2 - 1, n)) / 2
+ result = (kth_smallest_c(a, n // 2, n) +
+ kth_smallest_c(a, n // 2 - 1, n)) / 2
if na_count:
free(a)
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index f679746643643..36c4c752206a8 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -948,7 +948,7 @@ cdef class TextReader:
status = tokenize_nrows(self.parser, nrows)
if self.parser.warn_msg != NULL:
- print >> sys.stderr, self.parser.warn_msg
+ print(self.parser.warn_msg, file=sys.stderr)
free(self.parser.warn_msg)
self.parser.warn_msg = NULL
@@ -976,7 +976,7 @@ cdef class TextReader:
status = tokenize_all_rows(self.parser)
if self.parser.warn_msg != NULL:
- print >> sys.stderr, self.parser.warn_msg
+ print(self.parser.warn_msg, file=sys.stderr)
free(self.parser.warn_msg)
self.parser.warn_msg = NULL
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index 624872c1c56c6..44ea875f0b49d 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -275,7 +275,7 @@ def format_array_from_datetime(ndarray[int64_t] values, object tz=None,
dts.sec)
if show_ns:
- ns = dts.ps / 1000
+ ns = dts.ps // 1000
res += '.%.9d' % (ns + 1000 * dts.us)
elif show_us:
res += '.%.6d' % dts.us
diff --git a/pandas/_libs/tslibs/ccalendar.pyx b/pandas/_libs/tslibs/ccalendar.pyx
index c48812acd3de1..9c88ca05ebcf0 100644
--- a/pandas/_libs/tslibs/ccalendar.pyx
+++ b/pandas/_libs/tslibs/ccalendar.pyx
@@ -159,7 +159,7 @@ cpdef int32_t get_week_of_year(int year, int month, int day) nogil:
# estimate
woy = (doy - 1) - dow + 3
if woy >= 0:
- woy = woy / 7 + 1
+ woy = woy // 7 + 1
# verify
if woy < 0:
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index 1c0adaaa288a9..d8c3b91d1e460 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -462,8 +462,8 @@ cdef _TSObject convert_str_to_tsobject(object ts, object tz, object unit,
dt = datetime(obj.dts.year, obj.dts.month, obj.dts.day,
obj.dts.hour, obj.dts.min, obj.dts.sec,
obj.dts.us, obj.tzinfo)
- obj = convert_datetime_to_tsobject(dt, tz,
- nanos=obj.dts.ps / 1000)
+ obj = convert_datetime_to_tsobject(
+ dt, tz, nanos=obj.dts.ps // 1000)
return obj
else:
diff --git a/pandas/_libs/tslibs/fields.pyx b/pandas/_libs/tslibs/fields.pyx
index 240f008394099..dfd8c86c92c86 100644
--- a/pandas/_libs/tslibs/fields.pyx
+++ b/pandas/_libs/tslibs/fields.pyx
@@ -478,7 +478,7 @@ def get_date_field(int64_t[:] dtindex, object field):
continue
dt64_to_dtstruct(dtindex[i], &dts)
- out[i] = dts.ps / 1000
+ out[i] = dts.ps // 1000
return out
elif field == 'doy':
with nogil:
@@ -522,7 +522,7 @@ def get_date_field(int64_t[:] dtindex, object field):
dt64_to_dtstruct(dtindex[i], &dts)
out[i] = dts.month
- out[i] = ((out[i] - 1) / 3) + 1
+ out[i] = ((out[i] - 1) // 3) + 1
return out
elif field == 'dim':
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index e28462f7103b9..7e98fba48b51a 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -587,7 +587,7 @@ def shift_day(other: datetime, days: int) -> datetime:
cdef inline int year_add_months(npy_datetimestruct dts, int months) nogil:
"""new year number after shifting npy_datetimestruct number of months"""
- return dts.year + (dts.month + months - 1) / 12
+ return dts.year + (dts.month + months - 1) // 12
cdef inline int month_add_months(npy_datetimestruct dts, int months) nogil:
diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx
index 87658ae92175e..f6866f797d576 100644
--- a/pandas/_libs/tslibs/strptime.pyx
+++ b/pandas/_libs/tslibs/strptime.pyx
@@ -240,7 +240,7 @@ def array_strptime(object[:] values, object fmt,
s += "0" * (9 - len(s))
us = long(s)
ns = us % 1000
- us = us / 1000
+ us = us // 1000
elif parse_code == 11:
weekday = locale_time.f_weekday.index(found_dict['A'].lower())
elif parse_code == 12:
@@ -662,7 +662,7 @@ cdef parse_timezone_directive(object z):
gmtoff_remainder_padding = "0" * pad_number
microseconds = int(gmtoff_remainder + gmtoff_remainder_padding)
- total_minutes = ((hours * 60) + minutes + (seconds / 60) +
- (microseconds / 60000000))
+ total_minutes = ((hours * 60) + minutes + (seconds // 60) +
+ (microseconds // 60000000))
total_minutes = -total_minutes if z.startswith("-") else total_minutes
return pytz.FixedOffset(total_minutes)
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index 5918c7963acf7..e1788db1cf8f8 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -587,7 +587,7 @@ def _binary_op_method_timedeltalike(op, name):
# the PyDateTime_CheckExact case is for a datetime object that
# is specifically *not* a Timestamp, as the Timestamp case will be
# handled after `_validate_ops_compat` returns False below
- from timestamps import Timestamp
+ from pandas._libs.tslibs.timestamps import Timestamp
return op(self, Timestamp(other))
# We are implicitly requiring the canonical behavior to be
# defined by Timestamp methods.
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 8d825e0a6179e..c4d47a3c2384a 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -70,7 +70,7 @@ cdef inline object create_timestamp_from_ts(int64_t value,
dts.sec, dts.us, tz)
ts_base.value = value
ts_base.freq = freq
- ts_base.nanosecond = dts.ps / 1000
+ ts_base.nanosecond = dts.ps // 1000
return ts_base
diff --git a/pandas/_libs/writers.pyx b/pandas/_libs/writers.pyx
index 6449a331689ad..8f035d0c205e3 100644
--- a/pandas/_libs/writers.pyx
+++ b/pandas/_libs/writers.pyx
@@ -16,7 +16,6 @@ from numpy cimport ndarray, uint8_t
ctypedef fused pandas_string:
str
- unicode
bytes
diff --git a/pandas/io/sas/sas.pyx b/pandas/io/sas/sas.pyx
index 9b8fba16741f6..ed6a3efae137c 100644
--- a/pandas/io/sas/sas.pyx
+++ b/pandas/io/sas/sas.pyx
@@ -2,7 +2,7 @@
# cython: boundscheck=False, initializedcheck=False
import numpy as np
-import sas_constants as const
+import pandas.io.sas.sas_constants as const
ctypedef signed long long int64_t
ctypedef unsigned char uint8_t
diff --git a/setup.py b/setup.py
index d58d444f9a481..09e1e226881fd 100755
--- a/setup.py
+++ b/setup.py
@@ -451,7 +451,7 @@ def run(self):
# pinning `ext.cython_directives = directives` to each ext in extensions.
# github.com/cython/cython/wiki/enhancements-compilerdirectives#in-setuppy
directives = {'linetrace': False,
- 'language_level': 2}
+ 'language_level': 3}
macros = []
if linetrace:
# https://pypkg.com/pypi/pytest-cython/f/tests/example-project/setup.py
| - [X] closes #23927
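The widespread `/` → `//` edits in this diff follow from `language_level=3` switching Cython to Python 3 division semantics, where `/` between integers is true division; `//` is needed to keep the old truncating behavior. A minimal illustration in plain Python (not the Cython code itself):

```python
# Python 3 semantics, which language_level=3 also applies to Cython code:
# `/` is true division and yields a float even for int operands;
# `//` is floor division and preserves the old integer result.
n = 9
true_div = n / 2    # 4.5
floor_div = n // 2  # 4
print(true_div, floor_div)
```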
| https://api.github.com/repos/pandas-dev/pandas/pulls/24538 | 2019-01-01T20:25:59Z | 2019-03-19T23:48:32Z | 2019-03-19T23:48:32Z | 2020-01-16T00:34:36Z |
implement fillna from 24024, with fixes and tests | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 98a1f1b925447..ab5621d857e89 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -16,6 +16,7 @@
from pandas.errors import (
AbstractMethodError, NullFrequencyError, PerformanceWarning)
from pandas.util._decorators import Appender, Substitution
+from pandas.util._validators import validate_fillna_kwargs
from pandas.core.dtypes.common import (
is_bool_dtype, is_categorical_dtype, is_datetime64_any_dtype,
@@ -25,9 +26,10 @@
is_string_dtype, is_timedelta64_dtype, is_unsigned_integer_dtype,
needs_i8_conversion, pandas_dtype)
from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
+from pandas.core.dtypes.inference import is_array_like
from pandas.core.dtypes.missing import isna
-from pandas.core import nanops
+from pandas.core import missing, nanops
from pandas.core.algorithms import (
checked_add_with_arr, take, unique1d, value_counts)
import pandas.core.common as com
@@ -787,6 +789,52 @@ def _maybe_mask_results(self, result, fill_value=iNaT, convert=None):
result[self._isnan] = fill_value
return result
+ def fillna(self, value=None, method=None, limit=None):
+ # TODO(GH-20300): remove this
+ # Just overriding to ensure that we avoid an astype(object).
+ # Either 20300 or a `_values_for_fillna` would avoid this duplication.
+ if isinstance(value, ABCSeries):
+ value = value.array
+
+ value, method = validate_fillna_kwargs(value, method)
+
+ mask = self.isna()
+
+ if is_array_like(value):
+ if len(value) != len(self):
+ raise ValueError("Length of 'value' does not match. Got ({}) "
+ " expected {}".format(len(value), len(self)))
+ value = value[mask]
+
+ if mask.any():
+ if method is not None:
+ if method == 'pad':
+ func = missing.pad_1d
+ else:
+ func = missing.backfill_1d
+
+ values = self._data
+ if not is_period_dtype(self):
+ # For PeriodArray self._data is i8, which gets copied
+ # by `func`. Otherwise we need to make a copy manually
+ # to avoid modifying `self` in-place.
+ values = values.copy()
+
+ new_values = func(values, limit=limit,
+ mask=mask)
+ if is_datetime64tz_dtype(self):
+ # we need to pass int64 values to the constructor to avoid
+ # re-localizing incorrectly
+ new_values = new_values.view("i8")
+ new_values = type(self)(new_values, dtype=self.dtype)
+ else:
+ # fill with value
+ new_values = self.copy()
+ new_values[mask] = value
+ else:
+ new_values = self.copy()
+ return new_values
+
# ------------------------------------------------------------------
# Frequency Properties/Methods
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 5a74f04c237d0..7199d88d4bde5 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -12,11 +12,10 @@
from pandas._libs.tslibs.timedeltas import Timedelta, delta_to_nanoseconds
import pandas.compat as compat
from pandas.util._decorators import Appender, cache_readonly
-from pandas.util._validators import validate_fillna_kwargs
from pandas.core.dtypes.common import (
- _TD_DTYPE, ensure_object, is_array_like, is_datetime64_dtype,
- is_float_dtype, is_list_like, is_period_dtype, pandas_dtype)
+ _TD_DTYPE, ensure_object, is_datetime64_dtype, is_float_dtype,
+ is_list_like, is_period_dtype, pandas_dtype)
from pandas.core.dtypes.dtypes import PeriodDtype
from pandas.core.dtypes.generic import ABCIndexClass, ABCPeriodIndex, ABCSeries
from pandas.core.dtypes.missing import isna, notna
@@ -24,7 +23,6 @@
import pandas.core.algorithms as algos
from pandas.core.arrays import datetimelike as dtl
import pandas.core.common as com
-from pandas.core.missing import backfill_1d, pad_1d
from pandas.tseries import frequencies
from pandas.tseries.offsets import DateOffset, Tick, _delta_to_tick
@@ -381,41 +379,6 @@ def _validate_fill_value(self, fill_value):
"Got '{got}'.".format(got=fill_value))
return fill_value
- def fillna(self, value=None, method=None, limit=None):
- # TODO(#20300)
- # To avoid converting to object, we re-implement here with the changes
- # 1. Passing `_data` to func instead of self.astype(object)
- # 2. Re-boxing output of 1.
- # #20300 should let us do this kind of logic on ExtensionArray.fillna
- # and we can use it.
-
- if isinstance(value, ABCSeries):
- value = value._values
-
- value, method = validate_fillna_kwargs(value, method)
-
- mask = self.isna()
-
- if is_array_like(value):
- if len(value) != len(self):
- raise ValueError("Length of 'value' does not match. Got ({}) "
- " expected {}".format(len(value), len(self)))
- value = value[mask]
-
- if mask.any():
- if method is not None:
- func = pad_1d if method == 'pad' else backfill_1d
- new_values = func(self._data, limit=limit,
- mask=mask)
- new_values = type(self)(new_values, freq=self.freq)
- else:
- # fill with value
- new_values = self.copy()
- new_values[mask] = value
- else:
- new_values = self.copy()
- return new_values
-
# --------------------------------------------------------------------
def _time_shift(self, periods, freq=None):
diff --git a/pandas/core/missing.py b/pandas/core/missing.py
index 1012639fe0f9d..ee9aa9e229126 100644
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -13,7 +13,7 @@
from pandas.core.dtypes.common import (
ensure_float64, is_datetime64_dtype, is_datetime64tz_dtype, is_float_dtype,
is_integer, is_integer_dtype, is_numeric_v_string_like, is_scalar,
- needs_i8_conversion)
+ is_timedelta64_dtype, needs_i8_conversion)
from pandas.core.dtypes.missing import isna
@@ -481,6 +481,10 @@ def pad_1d(values, limit=None, mask=None, dtype=None):
_method = algos.pad_inplace_float64
elif values.dtype == np.object_:
_method = algos.pad_inplace_object
+ elif is_timedelta64_dtype(values):
+ # NaTs are treated identically to datetime64, so we can dispatch
+ # to that implementation
+ _method = _pad_1d_datetime
if _method is None:
raise ValueError('Invalid dtype for pad_1d [{name}]'
@@ -507,6 +511,10 @@ def backfill_1d(values, limit=None, mask=None, dtype=None):
_method = algos.backfill_inplace_float64
elif values.dtype == np.object_:
_method = algos.backfill_inplace_object
+ elif is_timedelta64_dtype(values):
+ # NaTs are treated identically to datetime64, so we can dispatch
+ # to that implementation
+ _method = _backfill_1d_datetime
if _method is None:
raise ValueError('Invalid dtype for backfill_1d [{name}]'
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 9ef331be32417..348ac4579ffb5 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -164,6 +164,20 @@ def test_reduce_invalid(self):
with pytest.raises(TypeError, match='cannot perform'):
arr._reduce("not a method")
+ @pytest.mark.parametrize('method', ['pad', 'backfill'])
+ def test_fillna_method_doesnt_change_orig(self, method):
+ data = np.arange(10, dtype='i8') * 24 * 3600 * 10**9
+ arr = self.array_cls(data, freq='D')
+ arr[4] = pd.NaT
+
+ fill_value = arr[3] if method == 'pad' else arr[5]
+
+ result = arr.fillna(method=method)
+ assert result[4] == fill_value
+
+ # check that the original was not changed
+ assert arr[4] is pd.NaT
+
def test_searchsorted(self):
data = np.arange(10, dtype='i8') * 24 * 3600 * 10**9
arr = self.array_cls(data, freq='D')
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index 9f0954d328f89..8a833d8197381 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -138,6 +138,23 @@ def test_value_counts_preserves_tz(self):
index=[pd.NaT, dti[0], dti[1]])
tm.assert_series_equal(result, expected)
+ @pytest.mark.parametrize('method', ['pad', 'backfill'])
+ def test_fillna_preserves_tz(self, method):
+ dti = pd.date_range('2000-01-01', periods=5, freq='D', tz='US/Central')
+ arr = DatetimeArray(dti, copy=True)
+ arr[2] = pd.NaT
+
+ fill_val = dti[1] if method == 'pad' else dti[3]
+ expected = DatetimeArray([dti[0], dti[1], fill_val, dti[3], dti[4]],
+ freq=None, tz='US/Central')
+
+ result = arr.fillna(method=method)
+ tm.assert_extension_array_equal(result, expected)
+
+ # assert that arr and dti were not modified in-place
+ assert arr[2] is pd.NaT
+ assert dti[2] == pd.Timestamp('2000-01-03', tz='US/Central')
+
class TestSequenceToDT64NS(object):
| cc @jreback @TomAugspurger
A couple of issues with `fillna` needed sorting out:
- The DTA version was operating in-place (fixed+tested)
- The TDA version would raise because it wasn't supported in core.missing (fixed+tested)
- The DTA[tz] version would incorrectly re-localize using the existing constructors, i.e. was dependent on the constructor changes in 24024. With the edits here it is correct regardless of whether the constructor is changed.
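The in-place issue in the first bullet comes down to copying before filling. A minimal sketch of the copy-before-fill pattern, using plain NumPy rather than the actual `DatetimeArray` internals (`pad_fill` is a hypothetical stand-in, not a pandas function):

```python
import numpy as np

def pad_fill(values, mask):
    """Forward-fill masked slots without mutating the caller's array."""
    out = values.copy()            # copy first so `values` is left untouched
    mask = np.asarray(mask).copy() # track which slots are still missing
    for i in range(1, len(out)):
        if mask[i] and not mask[i - 1]:
            out[i] = out[i - 1]
            mask[i] = False
    return out

arr = np.array([1.0, np.nan, np.nan, 4.0])
filled = pad_fill(arr, np.isnan(arr))
print(filled)          # [1. 1. 1. 4.]
print(np.isnan(arr))   # the original array still has its NaNs
```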
| https://api.github.com/repos/pandas-dev/pandas/pulls/24536 | 2019-01-01T19:36:10Z | 2019-01-01T22:52:51Z | 2019-01-01T22:52:51Z | 2019-01-01T23:14:41Z |
Make DTI[tz]._values and Series[tz]._values return DTA | diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index a90cfa4e4c906..0501889d743d4 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -426,8 +426,7 @@ def _concat_datetime(to_concat, axis=0, typs=None):
if any(typ.startswith('datetime') for typ in typs):
if 'datetime' in typs:
- to_concat = [np.array(x, copy=False).view(np.int64)
- for x in to_concat]
+ to_concat = [x.astype(np.int64, copy=False) for x in to_concat]
return _concatenate_2d(to_concat, axis=axis).view(_NS_DTYPE)
else:
# when to_concat has different tz, len(typs) > 1.
@@ -451,7 +450,7 @@ def _convert_datetimelike_to_object(x):
# if dtype is of datetimetz or timezone
if x.dtype.kind == _NS_DTYPE.kind:
if getattr(x, 'tz', None) is not None:
- x = x.astype(object).values
+ x = np.asarray(x.astype(object))
else:
shape = x.shape
x = tslib.ints_to_pydatetime(x.view(np.int64).ravel(),
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 5ed8bd45a6aff..5695d3d9e67f3 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -316,6 +316,12 @@ def _simple_new(cls, values, name=None, freq=None, tz=None, dtype=None):
we require the we have a dtype compat for the values
if we are passed a non-dtype compat, then coerce using the constructor
"""
+ if isinstance(values, DatetimeArray):
+ values = DatetimeArray(values, freq=freq, tz=tz, dtype=dtype)
+ tz = values.tz
+ freq = values.freq
+ values = values._data
+
# DatetimeArray._simple_new will accept either i8 or M8[ns] dtypes
assert isinstance(values, np.ndarray), type(values)
@@ -340,7 +346,7 @@ def _values(self):
# tz-naive -> ndarray
# tz-aware -> DatetimeIndex
if self.tz is not None:
- return self
+ return self._eadata
else:
return self.values
@@ -629,6 +635,9 @@ def intersection(self, other):
not other.freq.isAnchored() or
(not self.is_monotonic or not other.is_monotonic)):
result = Index.intersection(self, other)
+ # Invalidate the freq of `result`, which may not be correct at
+ # this point, depending on the values.
+ result.freq = None
result = self._shallow_copy(result._values, name=result.name,
tz=result.tz, freq=None)
if result.freq is None:
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 375b4ccbc122f..c9ed2521676ad 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -34,7 +34,8 @@
_isna_compat, array_equivalent, is_null_datelike_scalar, isna, notna)
import pandas.core.algorithms as algos
-from pandas.core.arrays import Categorical, ExtensionArray
+from pandas.core.arrays import (
+ Categorical, DatetimeArrayMixin as DatetimeArray, ExtensionArray)
from pandas.core.base import PandasObject
import pandas.core.common as com
from pandas.core.indexes.datetimes import DatetimeIndex
@@ -2437,8 +2438,14 @@ def _try_coerce_args(self, values, other):
""" provide coercion to our input arguments """
if isinstance(other, ABCDatetimeIndex):
- # to store DatetimeTZBlock as object
- other = other.astype(object).values
+ # May get a DatetimeIndex here. Unbox it.
+ other = other.array
+
+ if isinstance(other, DatetimeArray):
+ # hit in pandas/tests/indexing/test_coercion.py
+ # ::TestWhereCoercion::test_where_series_datetime64[datetime64tz]
+ # when falling back to ObjectBlock.where
+ other = other.astype(object)
return values, other
@@ -2985,7 +2992,8 @@ def _try_coerce_args(self, values, other):
elif (is_null_datelike_scalar(other) or
(lib.is_scalar(other) and isna(other))):
other = tslibs.iNaT
- elif isinstance(other, self._holder):
+ elif isinstance(other, (self._holder, DatetimeArray)):
+ # TODO: DatetimeArray check will be redundant after GH#24024
if other.tz != self.values.tz:
raise ValueError("incompatible or non tz-aware value")
other = _block_shape(other.asi8, ndim=self.ndim)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 672fa2edb00ba..3637081e09f8c 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -477,7 +477,10 @@ def _values(self):
"""
Return the internal repr of this data.
"""
- return self._data.internal_values()
+ result = self._data.internal_values()
+ if isinstance(result, DatetimeIndex):
+ result = result._eadata
+ return result
def _formatting_values(self):
"""
@@ -1602,10 +1605,6 @@ def unique(self):
Categories (3, object): [a < b < c]
"""
result = super(Series, self).unique()
- if isinstance(result, DatetimeIndex):
- # TODO: This should be unnecessary after Series._values returns
- # DatetimeArray
- result = result._eadata
return result
def drop_duplicates(self, keep='first', inplace=False):
diff --git a/pandas/tests/indexing/test_coercion.py b/pandas/tests/indexing/test_coercion.py
index 29b60d80750b2..280db3b2b3004 100644
--- a/pandas/tests/indexing/test_coercion.py
+++ b/pandas/tests/indexing/test_coercion.py
@@ -31,7 +31,7 @@ def has_test(combo):
for combo in combos:
if not has_test(combo):
msg = 'test method is not defined: {0}, {1}'
- raise AssertionError(msg.format(type(cls), combo))
+ raise AssertionError(msg.format(cls.__name__, combo))
yield
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 50db4f67cc3cf..f941f2ff32fa1 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -17,12 +17,12 @@
PeriodIndex, Timedelta, IntervalIndex, Interval,
CategoricalIndex, Timestamp, DataFrame, Panel)
from pandas.core.arrays import (
+ PandasArray,
DatetimeArrayMixin as DatetimeArray,
TimedeltaArrayMixin as TimedeltaArray,
)
from pandas.compat import StringIO, PYPY, long
from pandas.compat.numpy import np_array_datetime64_compat
-from pandas.core.arrays import PandasArray
from pandas.core.accessor import PandasDelegate
from pandas.core.base import PandasObject, NoNewAttributesMixin
from pandas.core.indexes.datetimelike import DatetimeIndexOpsMixin
@@ -388,11 +388,9 @@ def test_value_counts_unique_nunique(self):
for r in result:
assert isinstance(r, Timestamp)
- # TODO(#24024) once orig._values returns DTA, remove
- # the `._eadata` below
tm.assert_numpy_array_equal(
result.astype(object),
- orig._values._eadata.astype(object))
+ orig._values.astype(object))
else:
tm.assert_numpy_array_equal(result, orig.values)
@@ -418,9 +416,7 @@ def test_value_counts_unique_nunique_null(self):
else:
o = o.copy()
o[0:2] = iNaT
- # TODO(#24024) once Series._values returns DTA, remove
- # the `._eadata` here
- values = o._values._eadata
+ values = o._values
elif needs_i8_conversion(o):
values[0:2] = iNaT
@@ -1158,7 +1154,7 @@ def test_iter_box(self):
(np.array(['a', 'b']), np.ndarray, 'object'),
(pd.Categorical(['a', 'b']), pd.Categorical, 'category'),
(pd.DatetimeIndex(['2017', '2018']), np.ndarray, 'datetime64[ns]'),
- (pd.DatetimeIndex(['2017', '2018'], tz="US/Central"), pd.DatetimeIndex,
+ (pd.DatetimeIndex(['2017', '2018'], tz="US/Central"), DatetimeArray,
'datetime64[ns, US/Central]'),
(pd.TimedeltaIndex([10**10]), np.ndarray, 'm8[ns]'),
(pd.PeriodIndex([2018, 2019], freq='A'), pd.core.arrays.PeriodArray,
| broken off of #24024, cc @jreback @TomAugspurger
I think the edits in core.dtypes.concat are unrelated, but they are correct regardless, and they are easy to trim if we want a few more lines off the diff.
The edit in tests.indexing.test_coercion was needed during troubleshooting, decided to keep it. | https://api.github.com/repos/pandas-dev/pandas/pulls/24534 | 2019-01-01T18:25:09Z | 2019-01-01T20:07:38Z | 2019-01-01T20:07:38Z | 2019-01-01T21:35:03Z |
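The `_try_coerce_args` change in the diff above follows an "unbox, then coerce" pattern: if an index-like wrapper comes in, pull out its backing array first, then fall back to object dtype. A minimal sketch with hypothetical stand-in classes (`FakeIndex` and `FakeArray` are assumptions for illustration, not pandas types):

```python
class FakeArray:
    """Stand-in for an extension array with an astype fallback."""
    def __init__(self, values):
        self.values = values

    def astype(self, dtype):
        # object-dtype fallback: return plain Python objects
        assert dtype is object
        return list(self.values)


class FakeIndex:
    """Stand-in for an index wrapping a backing array via .array."""
    def __init__(self, array):
        self.array = array


def coerce_other(other):
    if isinstance(other, FakeIndex):
        other = other.array           # unbox the index to its array
    if isinstance(other, FakeArray):
        other = other.astype(object)  # coerce to object for storage
    return other


print(coerce_other(FakeIndex(FakeArray([1, 2]))))  # [1, 2]
```

The two-step shape mirrors the diff: the index check and the array check are separate, so a bare array passed in directly still gets coerced.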
CLN: Refactor some sorting code in Index set operations | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 07aec6a0d833f..1380c5caed1c9 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2302,27 +2302,15 @@ def union(self, other):
allow_fill=False)
result = _concat._concat_compat((lvals, other_diff))
- try:
- lvals[0] < other_diff[0]
- except TypeError as e:
- warnings.warn("%s, sort order is undefined for "
- "incomparable objects" % e, RuntimeWarning,
- stacklevel=3)
- else:
- types = frozenset((self.inferred_type,
- other.inferred_type))
- if not types & _unsortable_types:
- result.sort()
-
else:
result = lvals
- try:
- result = np.sort(result)
- except TypeError as e:
- warnings.warn("%s, sort order is undefined for "
- "incomparable objects" % e, RuntimeWarning,
- stacklevel=3)
+ try:
+ result = sorting.safe_sort(result)
+ except TypeError as e:
+ warnings.warn("%s, sort order is undefined for "
+ "incomparable objects" % e, RuntimeWarning,
+ stacklevel=3)
# for subclasses
return self._wrap_setop_result(other, result)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index b13110a04e1c1..2108206a8cbe5 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -805,8 +805,7 @@ def test_union_name_preservation(self, first_list, second_list, first_name,
def test_union_dt_as_obj(self):
# TODO: Replace with fixturesult
- with tm.assert_produces_warning(RuntimeWarning):
- firstCat = self.strIndex.union(self.dateIndex)
+ firstCat = self.strIndex.union(self.dateIndex)
secondCat = self.strIndex.union(self.strIndex)
if self.dateIndex.dtype == np.object_:
@@ -1615,7 +1614,7 @@ def test_drop_tuple(self, values, to_drop):
@pytest.mark.parametrize("method,expected", [
('intersection', np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B')],
dtype=[('num', int), ('let', 'a1')])),
- ('union', np.array([(1, 'A'), (2, 'A'), (1, 'B'), (2, 'B'), (1, 'C'),
+ ('union', np.array([(1, 'A'), (1, 'B'), (1, 'C'), (2, 'A'), (2, 'B'),
(2, 'C')], dtype=[('num', int), ('let', 'a1')]))
])
def test_tuple_union_bug(self, method, expected):
@@ -2242,10 +2241,7 @@ def test_copy_name(self):
s1 = Series(2, index=first)
s2 = Series(3, index=second[:-1])
- warning_type = RuntimeWarning if PY3 else None
- with tm.assert_produces_warning(warning_type):
- # Python 3: Unorderable types
- s3 = s1 * s2
+ s3 = s1 * s2
assert s3.index.name == 'mario'
@@ -2274,16 +2270,9 @@ def test_union_base(self):
first = index[3:]
second = index[:5]
- if PY3:
- # unorderable types
- warn_type = RuntimeWarning
- else:
- warn_type = None
-
- with tm.assert_produces_warning(warn_type):
- result = first.union(second)
+ result = first.union(second)
- expected = Index(['b', 2, 'c', 0, 'a', 1])
+ expected = Index([0, 1, 2, 'a', 'b', 'c'])
tm.assert_index_equal(result, expected)
@pytest.mark.parametrize("klass", [
@@ -2294,14 +2283,7 @@ def test_union_different_type_base(self, klass):
first = index[3:]
second = index[:5]
- if PY3:
- # unorderable types
- warn_type = RuntimeWarning
- else:
- warn_type = None
-
- with tm.assert_produces_warning(warn_type):
- result = first.union(klass(second.values))
+ result = first.union(klass(second.values))
assert tm.equalContents(result, index)
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index f6fb5f0c46cc8..4d3c9926fc5ae 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -120,24 +120,12 @@ def test_operators_bitwise(self):
s_0123 & [0.1, 4, 3.14, 2]
# s_0123 will be all false now because of reindexing like s_tft
- if compat.PY3:
- # unable to sort incompatible object via .union.
- exp = Series([False] * 7, index=['b', 'c', 'a', 0, 1, 2, 3])
- with tm.assert_produces_warning(RuntimeWarning):
- assert_series_equal(s_tft & s_0123, exp)
- else:
- exp = Series([False] * 7, index=[0, 1, 2, 3, 'a', 'b', 'c'])
- assert_series_equal(s_tft & s_0123, exp)
+ exp = Series([False] * 7, index=[0, 1, 2, 3, 'a', 'b', 'c'])
+ assert_series_equal(s_tft & s_0123, exp)
# s_tft will be all false now because of reindexing like s_0123
- if compat.PY3:
- # unable to sort incompatible object via .union.
- exp = Series([False] * 7, index=[0, 1, 2, 3, 'b', 'c', 'a'])
- with tm.assert_produces_warning(RuntimeWarning):
- assert_series_equal(s_0123 & s_tft, exp)
- else:
- exp = Series([False] * 7, index=[0, 1, 2, 3, 'a', 'b', 'c'])
- assert_series_equal(s_0123 & s_tft, exp)
+ exp = Series([False] * 7, index=[0, 1, 2, 3, 'a', 'b', 'c'])
+ assert_series_equal(s_0123 & s_tft, exp)
assert_series_equal(s_0123 & False, Series([False] * 4))
assert_series_equal(s_0123 ^ False, Series([False, True, True, True]))
@@ -280,11 +268,7 @@ def test_logical_ops_label_based(self):
assert_series_equal(result, a[a])
for e in [Series(['z'])]:
- if compat.PY3:
- with tm.assert_produces_warning(RuntimeWarning):
- result = a[a | e]
- else:
- result = a[a | e]
+ result = a[a | e]
assert_series_equal(result, a[a])
# vs scalars
| - [ ] Related to #24521
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
This is a precursor to #24521 and cleans up some of the sorting code in set operations on ``Index``. | https://api.github.com/repos/pandas-dev/pandas/pulls/24533 | 2019-01-01T18:20:57Z | 2019-01-01T20:07:23Z | 2019-01-01T20:07:23Z | 2019-01-01T20:07:26Z
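The behavior change in this PR can be illustrated without pandas: under Python 3, a plain sort of mixed int/str values raises ``TypeError``, while ``safe_sort`` falls back to ordering numbers before strings (matching the updated ``Index([0, 1, 2, 'a', 'b', 'c'])`` expectations in the tests). A minimal sketch — the function name and the simple string/non-string partition are assumptions for illustration; the real ``safe_sort`` handles more cases:

```python
def mixed_sort(values):
    # Python 3 raises TypeError when comparing str with int, so a plain
    # sorted() call fails on mixed-type unions.  Partition by type and
    # sort each group, numbers first, then strings.
    nums = sorted(v for v in values if not isinstance(v, str))
    strs = sorted(v for v in values if isinstance(v, str))
    return nums + strs


print(mixed_sort(['b', 2, 'c', 0, 'a', 1]))  # [0, 1, 2, 'a', 'b', 'c']
```

Because the ordering is now deterministic for this mix of types, the tests above no longer need to expect a ``RuntimeWarning`` or branch on ``PY3``.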
Run isort at pandas/tests/io | diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index e83220a476f9b..598453eb92d25 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -2,7 +2,8 @@
Module for applying conditional formatting to
DataFrames and Series.
"""
-from collections import MutableMapping, defaultdict
+
+from collections import defaultdict
from contextlib import contextmanager
import copy
from functools import partial
@@ -18,7 +19,7 @@
from pandas.core.dtypes.generic import ABCSeries
import pandas as pd
-from pandas.api.types import is_list_like
+from pandas.api.types import is_dict_like, is_list_like
import pandas.core.common as com
from pandas.core.config import get_option
from pandas.core.generic import _shared_docs
@@ -401,7 +402,7 @@ def format(self, formatter, subset=None):
row_locs = self.data.index.get_indexer_for(sub_df.index)
col_locs = self.data.columns.get_indexer_for(sub_df.columns)
- if isinstance(formatter, MutableMapping):
+ if is_dict_like(formatter):
for col, col_formatter in formatter.items():
# formatter must be callable, so '{}' are converted to lambdas
col_formatter = _maybe_wrap_formatter(col_formatter)
diff --git a/pandas/tests/io/formats/test_css.py b/pandas/tests/io/formats/test_css.py
index e7adfe4883d98..f251bd983509e 100644
--- a/pandas/tests/io/formats/test_css.py
+++ b/pandas/tests/io/formats/test_css.py
@@ -1,6 +1,7 @@
import pytest
from pandas.util import testing as tm
+
from pandas.io.formats.css import CSSResolver, CSSWarning
diff --git a/pandas/tests/io/formats/test_eng_formatting.py b/pandas/tests/io/formats/test_eng_formatting.py
index 9d5773283176c..455b6454d73ff 100644
--- a/pandas/tests/io/formats/test_eng_formatting.py
+++ b/pandas/tests/io/formats/test_eng_formatting.py
@@ -1,10 +1,13 @@
import numpy as np
+
+from pandas.compat import u
+
import pandas as pd
from pandas import DataFrame
-from pandas.compat import u
-import pandas.io.formats.format as fmt
from pandas.util import testing as tm
+import pandas.io.formats.format as fmt
+
class TestEngFormatter(object):
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index b974415ffb029..c979894048127 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -5,35 +5,35 @@
"""
from __future__ import print_function
-import re
-import pytz
-import dateutil
+from datetime import datetime
import itertools
from operator import methodcaller
import os
+import re
import sys
import warnings
-from datetime import datetime
+import dateutil
+import numpy as np
import pytest
+import pytz
-import numpy as np
-import pandas as pd
-from pandas import (DataFrame, Series, Index, Timestamp, MultiIndex,
- date_range, NaT, read_csv)
-from pandas.compat import (range, zip, lrange, StringIO, PY3,
- u, lzip, is_platform_windows,
- is_platform_32bit)
import pandas.compat as compat
+from pandas.compat import (
+ PY3, StringIO, is_platform_32bit, is_platform_windows, lrange, lzip, range,
+ u, zip)
+
+import pandas as pd
+from pandas import (
+ DataFrame, Index, MultiIndex, NaT, Series, Timestamp, date_range, read_csv)
+from pandas.core.config import (
+ get_option, option_context, reset_option, set_option)
+import pandas.util.testing as tm
import pandas.io.formats.format as fmt
import pandas.io.formats.printing as printing
-
-import pandas.util.testing as tm
from pandas.io.formats.terminal import get_terminal_size
-from pandas.core.config import (set_option, get_option, option_context,
- reset_option)
use_32bit_repr = is_platform_windows() or is_platform_32bit()
diff --git a/pandas/tests/io/formats/test_printing.py b/pandas/tests/io/formats/test_printing.py
index c9c46d4a991ec..67ff68ac4db8c 100644
--- a/pandas/tests/io/formats/test_printing.py
+++ b/pandas/tests/io/formats/test_printing.py
@@ -1,14 +1,14 @@
# -*- coding: utf-8 -*-
+import numpy as np
import pytest
-import numpy as np
import pandas as pd
-
from pandas import compat
-import pandas.io.formats.printing as printing
-import pandas.io.formats.format as fmt
import pandas.core.config as cf
+import pandas.io.formats.format as fmt
+import pandas.io.formats.printing as printing
+
def test_adjoin():
data = [['a', 'b', 'c'], ['dd', 'ee', 'ff'], ['ggg', 'hhh', 'iii']]
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 3432d686a9fd6..407c786725f13 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -1,16 +1,18 @@
import copy
-import textwrap
import re
+import textwrap
-import pytest
import numpy as np
+import pytest
+
+import pandas.util._test_decorators as td
+
import pandas as pd
from pandas import DataFrame
import pandas.util.testing as tm
-import pandas.util._test_decorators as td
jinja2 = pytest.importorskip('jinja2')
-from pandas.io.formats.style import Styler, _get_level_lengths # noqa
+from pandas.io.formats.style import Styler, _get_level_lengths # noqa # isort:skip
class TestStyler(object):
diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py
index 786c8fab08a01..1929817a49b3c 100644
--- a/pandas/tests/io/formats/test_to_csv.py
+++ b/pandas/tests/io/formats/test_to_csv.py
@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
+import os
import sys
+import numpy as np
import pytest
-import os
-import numpy as np
import pandas as pd
-
from pandas import DataFrame, compat
from pandas.util import testing as tm
diff --git a/pandas/tests/io/formats/test_to_excel.py b/pandas/tests/io/formats/test_to_excel.py
index 7d54f93c9831e..13eb517fcab6a 100644
--- a/pandas/tests/io/formats/test_to_excel.py
+++ b/pandas/tests/io/formats/test_to_excel.py
@@ -4,10 +4,11 @@
"""
import pytest
+
import pandas.util.testing as tm
-from pandas.io.formats.excel import CSSToExcelConverter
from pandas.io.formats.css import CSSWarning
+from pandas.io.formats.excel import CSSToExcelConverter
@pytest.mark.parametrize('css,expected', [
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index eb11433f46c0e..213eb0d5b5cb8 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -1,15 +1,18 @@
# -*- coding: utf-8 -*-
-import re
from datetime import datetime
from io import open
+import re
-import pytest
import numpy as np
+import pytest
+
+from pandas.compat import StringIO, lrange, u
+
import pandas as pd
-from pandas import compat, DataFrame, MultiIndex, option_context, Index
-from pandas.compat import u, lrange, StringIO
+from pandas import DataFrame, Index, MultiIndex, compat, option_context
from pandas.util import testing as tm
+
import pandas.io.formats.format as fmt
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index f55fa289ea085..1653e474aa7b0 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -1,12 +1,13 @@
+import codecs
from datetime import datetime
import pytest
+from pandas.compat import u
+
import pandas as pd
-from pandas import DataFrame, compat, Series
+from pandas import DataFrame, Series, compat
from pandas.util import testing as tm
-from pandas.compat import u
-import codecs
@pytest.fixture
diff --git a/pandas/tests/io/generate_legacy_storage_files.py b/pandas/tests/io/generate_legacy_storage_files.py
index 4ebf435f7d75f..6774eac6d6c1a 100755
--- a/pandas/tests/io/generate_legacy_storage_files.py
+++ b/pandas/tests/io/generate_legacy_storage_files.py
@@ -35,28 +35,29 @@
"""
from __future__ import print_function
-from warnings import catch_warnings, filterwarnings
+
+from datetime import timedelta
from distutils.version import LooseVersion
-from pandas import (Series, DataFrame, Panel,
- SparseSeries, SparseDataFrame,
- Index, MultiIndex, bdate_range, to_msgpack,
- date_range, period_range, timedelta_range,
- Timestamp, NaT, Categorical, Period)
-from pandas.tseries.offsets import (
- DateOffset, Hour, Minute, Day,
- MonthBegin, MonthEnd, YearBegin,
- YearEnd, Week, WeekOfMonth, LastWeekOfMonth,
- BusinessDay, BusinessHour, CustomBusinessDay, FY5253,
- Easter,
- SemiMonthEnd, SemiMonthBegin,
- QuarterBegin, QuarterEnd)
-from pandas.compat import u
import os
+import platform as pl
import sys
+from warnings import catch_warnings, filterwarnings
+
import numpy as np
+
+from pandas.compat import u
+
import pandas
-import platform as pl
-from datetime import timedelta
+from pandas import (
+ Categorical, DataFrame, Index, MultiIndex, NaT, Panel, Period, Series,
+ SparseDataFrame, SparseSeries, Timestamp, bdate_range, date_range,
+ period_range, timedelta_range, to_msgpack)
+
+from pandas.tseries.offsets import (
+ FY5253, BusinessDay, BusinessHour, CustomBusinessDay, DateOffset, Day,
+ Easter, Hour, LastWeekOfMonth, Minute, MonthBegin, MonthEnd, QuarterBegin,
+ QuarterEnd, SemiMonthBegin, SemiMonthEnd, Week, WeekOfMonth, YearBegin,
+ YearEnd)
_loose_version = LooseVersion(pandas.__version__)
diff --git a/pandas/tests/io/json/test_compression.py b/pandas/tests/io/json/test_compression.py
index 46a5e511fe748..430acbdac804a 100644
--- a/pandas/tests/io/json/test_compression.py
+++ b/pandas/tests/io/json/test_compression.py
@@ -1,8 +1,9 @@
import pytest
+import pandas.util._test_decorators as td
+
import pandas as pd
import pandas.util.testing as tm
-import pandas.util._test_decorators as td
from pandas.util.testing import assert_frame_equal
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index 4fda977706d8b..6fa3b5b3b2ed4 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -1,22 +1,21 @@
"""Tests for Table Schema integration."""
-import json
from collections import OrderedDict
+import json
import numpy as np
-import pandas as pd
import pytest
-from pandas import DataFrame
from pandas.core.dtypes.dtypes import (
- PeriodDtype, CategoricalDtype, DatetimeTZDtype)
-from pandas.io.json.table_schema import (
- as_json_table_type,
- build_table_schema,
- convert_pandas_type_to_json_field,
- convert_json_field_to_pandas_type,
- set_default_names)
+ CategoricalDtype, DatetimeTZDtype, PeriodDtype)
+
+import pandas as pd
+from pandas import DataFrame
import pandas.util.testing as tm
+from pandas.io.json.table_schema import (
+ as_json_table_type, build_table_schema, convert_json_field_to_pandas_type,
+ convert_pandas_type_to_json_field, set_default_names)
+
class TestBuildSchema(object):
diff --git a/pandas/tests/io/json/test_normalize.py b/pandas/tests/io/json/test_normalize.py
index 3881b315bbed9..fd0953a4834ca 100644
--- a/pandas/tests/io/json/test_normalize.py
+++ b/pandas/tests/io/json/test_normalize.py
@@ -1,9 +1,10 @@
-import pytest
-import numpy as np
import json
+import numpy as np
+import pytest
+
+from pandas import DataFrame, Index, compat
import pandas.util.testing as tm
-from pandas import compat, Index, DataFrame
from pandas.io.json import json_normalize
from pandas.io.json.normalize import nested_to_record
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 3fdf303ea2e8e..5468413033002 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -1,22 +1,24 @@
# -*- coding: utf-8 -*-
# pylint: disable-msg=W0612,E1101
-import pytest
-from pandas.compat import (range, lrange, StringIO,
- OrderedDict, is_platform_32bit)
-import os
-import numpy as np
-from pandas import (Series, DataFrame, DatetimeIndex, Timestamp,
- read_json, compat)
from datetime import timedelta
-import pandas as pd
import json
+import os
-from pandas.util.testing import (assert_almost_equal, assert_frame_equal,
- assert_series_equal, network,
- ensure_clean, assert_index_equal)
-import pandas.util.testing as tm
+import numpy as np
+import pytest
+
+from pandas.compat import (
+ OrderedDict, StringIO, is_platform_32bit, lrange, range)
import pandas.util._test_decorators as td
+import pandas as pd
+from pandas import (
+ DataFrame, DatetimeIndex, Series, Timestamp, compat, read_json)
+import pandas.util.testing as tm
+from pandas.util.testing import (
+ assert_almost_equal, assert_frame_equal, assert_index_equal,
+ assert_series_equal, ensure_clean, network)
+
_seriesd = tm.getSeriesData()
_tsd = tm.getTimeSeriesData()
diff --git a/pandas/tests/io/json/test_readlines.py b/pandas/tests/io/json/test_readlines.py
index 25750f4fd23b5..25e78526b2e5a 100644
--- a/pandas/tests/io/json/test_readlines.py
+++ b/pandas/tests/io/json/test_readlines.py
@@ -1,12 +1,15 @@
# -*- coding: utf-8 -*-
import pytest
+
+from pandas.compat import StringIO
+
import pandas as pd
from pandas import DataFrame, read_json
-from pandas.compat import StringIO
-from pandas.io.json.json import JsonReader
import pandas.util.testing as tm
-from pandas.util.testing import (assert_frame_equal, assert_series_equal,
- ensure_clean)
+from pandas.util.testing import (
+ assert_frame_equal, assert_series_equal, ensure_clean)
+
+from pandas.io.json.json import JsonReader
@pytest.fixture
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index 4ad4f71791079..7f5241def597f 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -4,27 +4,28 @@
import json
except ImportError:
import simplejson as json
-import math
-import pytz
-import locale
-import pytest
-import time
-import datetime
import calendar
-import re
+import datetime
import decimal
-import dateutil
from functools import partial
-from pandas.compat import range, StringIO, u
-from pandas._libs.tslib import Timestamp
+import locale
+import math
+import re
+import time
+
+import dateutil
+import numpy as np
+import pytest
+import pytz
+
import pandas._libs.json as ujson
+from pandas._libs.tslib import Timestamp
import pandas.compat as compat
+from pandas.compat import StringIO, range, u
-import numpy as np
-from pandas import DataFrame, Series, Index, NaT, DatetimeIndex, date_range
+from pandas import DataFrame, DatetimeIndex, Index, NaT, Series, date_range
import pandas.util.testing as tm
-
json_unicode = (json.dumps if compat.PY3
else partial(json.dumps, encoding="utf-8"))
diff --git a/pandas/tests/io/msgpack/common.py b/pandas/tests/io/msgpack/common.py
index b770d12cffbfa..434d347c5742a 100644
--- a/pandas/tests/io/msgpack/common.py
+++ b/pandas/tests/io/msgpack/common.py
@@ -1,6 +1,5 @@
from pandas.compat import PY3
-
# array compat
if PY3:
frombytes = lambda obj, data: obj.frombytes(data)
diff --git a/pandas/tests/io/msgpack/test_buffer.py b/pandas/tests/io/msgpack/test_buffer.py
index 8ebec734f1d3d..e36dc5bbdb4ba 100644
--- a/pandas/tests/io/msgpack/test_buffer.py
+++ b/pandas/tests/io/msgpack/test_buffer.py
@@ -1,6 +1,7 @@
# coding: utf-8
from pandas.io.msgpack import packb, unpackb
+
from .common import frombytes
diff --git a/pandas/tests/io/msgpack/test_except.py b/pandas/tests/io/msgpack/test_except.py
index 8e8d43a16eee9..d670e846c382a 100644
--- a/pandas/tests/io/msgpack/test_except.py
+++ b/pandas/tests/io/msgpack/test_except.py
@@ -1,10 +1,11 @@
# coding: utf-8
from datetime import datetime
-from pandas.io.msgpack import packb, unpackb
import pytest
+from pandas.io.msgpack import packb, unpackb
+
class DummyException(Exception):
pass
diff --git a/pandas/tests/io/msgpack/test_extension.py b/pandas/tests/io/msgpack/test_extension.py
index 2ee72c8a55cb4..06a0691bf4f7e 100644
--- a/pandas/tests/io/msgpack/test_extension.py
+++ b/pandas/tests/io/msgpack/test_extension.py
@@ -1,8 +1,10 @@
from __future__ import print_function
+
import array
import pandas.io.msgpack as msgpack
from pandas.io.msgpack import ExtType
+
from .common import frombytes, tobytes
diff --git a/pandas/tests/io/msgpack/test_limits.py b/pandas/tests/io/msgpack/test_limits.py
index 2d759d6117f2a..cad51da483c71 100644
--- a/pandas/tests/io/msgpack/test_limits.py
+++ b/pandas/tests/io/msgpack/test_limits.py
@@ -1,10 +1,11 @@
# coding: utf-8
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-from pandas.io.msgpack import packb, unpackb, Packer, Unpacker, ExtType
+from __future__ import (
+ absolute_import, division, print_function, unicode_literals)
import pytest
+from pandas.io.msgpack import ExtType, Packer, Unpacker, packb, unpackb
+
class TestLimits(object):
diff --git a/pandas/tests/io/msgpack/test_newspec.py b/pandas/tests/io/msgpack/test_newspec.py
index 783bfc1b364f8..d92c649c5e1ca 100644
--- a/pandas/tests/io/msgpack/test_newspec.py
+++ b/pandas/tests/io/msgpack/test_newspec.py
@@ -1,6 +1,6 @@
# coding: utf-8
-from pandas.io.msgpack import packb, unpackb, ExtType
+from pandas.io.msgpack import ExtType, packb, unpackb
def test_str8():
diff --git a/pandas/tests/io/msgpack/test_pack.py b/pandas/tests/io/msgpack/test_pack.py
index 3afd1fc086b33..f69ac0a0bc4ce 100644
--- a/pandas/tests/io/msgpack/test_pack.py
+++ b/pandas/tests/io/msgpack/test_pack.py
@@ -1,12 +1,14 @@
# coding: utf-8
+import struct
+
import pytest
-import struct
+from pandas.compat import OrderedDict, u
from pandas import compat
-from pandas.compat import u, OrderedDict
-from pandas.io.msgpack import packb, unpackb, Unpacker, Packer
+
+from pandas.io.msgpack import Packer, Unpacker, packb, unpackb
class TestPack(object):
diff --git a/pandas/tests/io/msgpack/test_read_size.py b/pandas/tests/io/msgpack/test_read_size.py
index ef521fa345637..42791b571e8e7 100644
--- a/pandas/tests/io/msgpack/test_read_size.py
+++ b/pandas/tests/io/msgpack/test_read_size.py
@@ -1,5 +1,6 @@
"""Test Unpacker's read_array_header and read_map_header methods"""
-from pandas.io.msgpack import packb, Unpacker, OutOfData
+from pandas.io.msgpack import OutOfData, Unpacker, packb
+
UnexpectedTypeException = ValueError
diff --git a/pandas/tests/io/msgpack/test_seq.py b/pandas/tests/io/msgpack/test_seq.py
index 06e9872a22777..68be8c2d975aa 100644
--- a/pandas/tests/io/msgpack/test_seq.py
+++ b/pandas/tests/io/msgpack/test_seq.py
@@ -1,6 +1,7 @@
# coding: utf-8
import io
+
import pandas.io.msgpack as msgpack
binarydata = bytes(bytearray(range(256)))
diff --git a/pandas/tests/io/msgpack/test_sequnpack.py b/pandas/tests/io/msgpack/test_sequnpack.py
index be0a23f60f18a..48f9817142762 100644
--- a/pandas/tests/io/msgpack/test_sequnpack.py
+++ b/pandas/tests/io/msgpack/test_sequnpack.py
@@ -1,10 +1,10 @@
# coding: utf-8
+import pytest
+
from pandas import compat
-from pandas.io.msgpack import Unpacker, BufferFull
-from pandas.io.msgpack import OutOfData
-import pytest
+from pandas.io.msgpack import BufferFull, OutOfData, Unpacker
class TestPack(object):
diff --git a/pandas/tests/io/msgpack/test_subtype.py b/pandas/tests/io/msgpack/test_subtype.py
index e27ec66c63e1f..8af7e0b91d9b7 100644
--- a/pandas/tests/io/msgpack/test_subtype.py
+++ b/pandas/tests/io/msgpack/test_subtype.py
@@ -1,8 +1,9 @@
# coding: utf-8
-from pandas.io.msgpack import packb
from collections import namedtuple
+from pandas.io.msgpack import packb
+
class MyList(list):
pass
diff --git a/pandas/tests/io/msgpack/test_unpack.py b/pandas/tests/io/msgpack/test_unpack.py
index c056f8d800e11..e63631a97bbb4 100644
--- a/pandas/tests/io/msgpack/test_unpack.py
+++ b/pandas/tests/io/msgpack/test_unpack.py
@@ -1,8 +1,10 @@
from io import BytesIO
import sys
-from pandas.io.msgpack import Unpacker, packb, OutOfData, ExtType
+
import pytest
+from pandas.io.msgpack import ExtType, OutOfData, Unpacker, packb
+
class TestUnpack(object):
diff --git a/pandas/tests/io/msgpack/test_unpack_raw.py b/pandas/tests/io/msgpack/test_unpack_raw.py
index a261bf4cbbcd7..09ebb681d8709 100644
--- a/pandas/tests/io/msgpack/test_unpack_raw.py
+++ b/pandas/tests/io/msgpack/test_unpack_raw.py
@@ -1,6 +1,7 @@
"""Tests for cases where the user seeks to obtain packed msgpack objects"""
import io
+
from pandas.io.msgpack import Unpacker, packb
diff --git a/pandas/tests/io/sas/test_sas.py b/pandas/tests/io/sas/test_sas.py
index 016dc56b4d800..0f6342aa62ac0 100644
--- a/pandas/tests/io/sas/test_sas.py
+++ b/pandas/tests/io/sas/test_sas.py
@@ -1,6 +1,7 @@
import pytest
from pandas.compat import StringIO
+
from pandas import read_sas
diff --git a/pandas/tests/io/sas/test_sas7bdat.py b/pandas/tests/io/sas/test_sas7bdat.py
index 705387188438f..3dd8d0449ef5f 100644
--- a/pandas/tests/io/sas/test_sas7bdat.py
+++ b/pandas/tests/io/sas/test_sas7bdat.py
@@ -1,13 +1,16 @@
-import pandas as pd
-from pandas.compat import PY2
-import pandas.util.testing as tm
-import pandas.util._test_decorators as td
-from pandas.errors import EmptyDataError
-import os
import io
+import os
+
import numpy as np
import pytest
+from pandas.compat import PY2
+from pandas.errors import EmptyDataError
+import pandas.util._test_decorators as td
+
+import pandas as pd
+import pandas.util.testing as tm
+
# https://github.com/cython/cython/issues/1720
@pytest.mark.filterwarnings("ignore:can't resolve package:ImportWarning")
diff --git a/pandas/tests/io/sas/test_xport.py b/pandas/tests/io/sas/test_xport.py
index 6e5b2ab067aa5..1b086daf51c41 100644
--- a/pandas/tests/io/sas/test_xport.py
+++ b/pandas/tests/io/sas/test_xport.py
@@ -1,9 +1,12 @@
+import os
+
+import numpy as np
import pytest
+
import pandas as pd
import pandas.util.testing as tm
+
from pandas.io.sas.sasreader import read_sas
-import numpy as np
-import os
# CSV versions of test xpt files were obtained using the R foreign library
diff --git a/pandas/tests/io/test_clipboard.py b/pandas/tests/io/test_clipboard.py
index 99bece0efc8c8..8eb26d9f3dec5 100644
--- a/pandas/tests/io/test_clipboard.py
+++ b/pandas/tests/io/test_clipboard.py
@@ -1,19 +1,18 @@
# -*- coding: utf-8 -*-
-import numpy as np
-from numpy.random import randint
from textwrap import dedent
+import numpy as np
+from numpy.random import randint
import pytest
-import pandas as pd
-from pandas import DataFrame
-from pandas import read_clipboard
-from pandas import get_option
from pandas.compat import PY2
+
+import pandas as pd
+from pandas import DataFrame, get_option, read_clipboard
from pandas.util import testing as tm
from pandas.util.testing import makeCustomDataframe as mkdf
-from pandas.io.clipboard.exceptions import PyperclipException
+from pandas.io.clipboard.exceptions import PyperclipException
try:
DataFrame({'A': [1, 2]}).to_clipboard()
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 2f2b792588a92..a4c76285c95aa 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -6,15 +6,13 @@
import pytest
-import pandas as pd
-import pandas.io.common as icom
+from pandas.compat import FileNotFoundError, StringIO, is_platform_windows
import pandas.util._test_decorators as td
+
+import pandas as pd
import pandas.util.testing as tm
-from pandas.compat import (
- is_platform_windows,
- StringIO,
- FileNotFoundError,
-)
+
+import pandas.io.common as icom
class CustomFSPath(object):
diff --git a/pandas/tests/io/test_compression.py b/pandas/tests/io/test_compression.py
index b62a1e6c4933e..a3fb35f9f01f2 100644
--- a/pandas/tests/io/test_compression.py
+++ b/pandas/tests/io/test_compression.py
@@ -1,13 +1,14 @@
+import contextlib
import os
import warnings
-import contextlib
import pytest
import pandas as pd
-import pandas.io.common as icom
import pandas.util.testing as tm
+import pandas.io.common as icom
+
@contextlib.contextmanager
def catch_to_csv_depr():
diff --git a/pandas/tests/io/test_feather.py b/pandas/tests/io/test_feather.py
index 44d642399ced9..d170e4c43feb3 100644
--- a/pandas/tests/io/test_feather.py
+++ b/pandas/tests/io/test_feather.py
@@ -2,15 +2,16 @@
from distutils.version import LooseVersion
import numpy as np
+import pytest
import pandas as pd
import pandas.util.testing as tm
from pandas.util.testing import assert_frame_equal, ensure_clean
-import pytest
+from pandas.io.feather_format import read_feather, to_feather # noqa:E402
+
pyarrow = pytest.importorskip('pyarrow')
-from pandas.io.feather_format import to_feather, read_feather # noqa:E402
pyarrow_version = LooseVersion(pyarrow.__version__)
diff --git a/pandas/tests/io/test_gbq.py b/pandas/tests/io/test_gbq.py
index 6dd16107bc7d7..15f366e5e2e9e 100644
--- a/pandas/tests/io/test_gbq.py
+++ b/pandas/tests/io/test_gbq.py
@@ -1,20 +1,22 @@
-import pytest
from datetime import datetime
-import pytz
-import platform
import os
+import platform
+
+import numpy as np
+import pytest
+import pytz
+
+from pandas.compat import range
+
+import pandas as pd
+from pandas import DataFrame, compat
+import pandas.util.testing as tm
try:
from unittest import mock
except ImportError:
mock = pytest.importorskip("mock")
-import numpy as np
-import pandas as pd
-from pandas import compat, DataFrame
-from pandas.compat import range
-import pandas.util.testing as tm
-
api_exceptions = pytest.importorskip("google.api_core.exceptions")
bigquery = pytest.importorskip("google.cloud.bigquery")
diff --git a/pandas/tests/io/test_gcs.py b/pandas/tests/io/test_gcs.py
index efbd57dec9f1b..12b082c3d4099 100644
--- a/pandas/tests/io/test_gcs.py
+++ b/pandas/tests/io/test_gcs.py
@@ -1,12 +1,14 @@
import numpy as np
import pytest
-from pandas import DataFrame, date_range, read_csv
from pandas.compat import StringIO
-from pandas.io.common import is_gcs_url
+
+from pandas import DataFrame, date_range, read_csv
from pandas.util import _test_decorators as td
from pandas.util.testing import assert_frame_equal
+from pandas.io.common import is_gcs_url
+
def test_is_gcs_url():
assert is_gcs_url("gcs://pandas/somethingelse.com")
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index 492089644fb15..b2b0c21c81263 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -1,29 +1,28 @@
from __future__ import print_function
+from functools import partial
import os
import re
import threading
-from functools import partial
-
-import pytest
-
import numpy as np
from numpy.random import rand
+import pytest
-from pandas import (DataFrame, MultiIndex, read_csv, Timestamp, Index,
- date_range, Series)
-from pandas.compat import (map, zip, StringIO, BytesIO,
- is_platform_windows, PY3, reload)
+from pandas.compat import (
+ PY3, BytesIO, StringIO, is_platform_windows, map, reload, zip)
from pandas.errors import ParserError
-from pandas.io.common import URLError, file_path_to_url
-import pandas.io.html
-from pandas.io.html import read_html
+import pandas.util._test_decorators as td
+from pandas import (
+ DataFrame, Index, MultiIndex, Series, Timestamp, date_range, read_csv)
import pandas.util.testing as tm
-import pandas.util._test_decorators as td
from pandas.util.testing import makeCustomDataframe as mkdf, network
+from pandas.io.common import URLError, file_path_to_url
+import pandas.io.html
+from pandas.io.html import read_html
+
HERE = os.path.dirname(__file__)
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index e4f10de7f5b2b..9034b964033ed 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -1,18 +1,20 @@
""" test parquet compat """
-import os
-
-import pytest
import datetime
from distutils.version import LooseVersion
+import os
from warnings import catch_warnings
import numpy as np
-import pandas as pd
+import pytest
+
from pandas.compat import PY3
-from pandas.io.parquet import (to_parquet, read_parquet, get_engine,
- PyArrowImpl, FastParquetImpl)
+
+import pandas as pd
from pandas.util import testing as tm
+from pandas.io.parquet import (
+ FastParquetImpl, PyArrowImpl, get_engine, read_parquet, to_parquet)
+
try:
import pyarrow # noqa
_HAVE_PYARROW = True
diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index 85d467650d5c4..7f3fe1aa401ea 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -12,20 +12,22 @@
3. Move the created pickle to "data/legacy_pickle/<version>" directory.
"""
+from distutils.version import LooseVersion
import glob
-import pytest
+import os
+import shutil
from warnings import catch_warnings, simplefilter
-import os
-from distutils.version import LooseVersion
+import pytest
+
+from pandas.compat import PY3, is_platform_little_endian
+import pandas.util._test_decorators as td
+
import pandas as pd
from pandas import Index
-from pandas.compat import is_platform_little_endian, PY3
-import pandas
import pandas.util.testing as tm
-import pandas.util._test_decorators as td
+
from pandas.tseries.offsets import Day, MonthEnd
-import shutil
@pytest.fixture(scope='module')
@@ -63,7 +65,7 @@ def compare(data, vf, version):
# py3 compat when reading py2 pickle
try:
- data = pandas.read_pickle(vf)
+ data = pd.read_pickle(vf)
except (ValueError) as e:
if 'unsupported pickle protocol:' in str(e):
# trying to read a py3 pickle in py2
@@ -111,13 +113,13 @@ def compare_series_ts(result, expected, typ, version):
freq = result.index.freq
assert freq + Day(1) == Day(2)
- res = freq + pandas.Timedelta(hours=1)
- assert isinstance(res, pandas.Timedelta)
- assert res == pandas.Timedelta(days=1, hours=1)
+ res = freq + pd.Timedelta(hours=1)
+ assert isinstance(res, pd.Timedelta)
+ assert res == pd.Timedelta(days=1, hours=1)
- res = freq + pandas.Timedelta(nanoseconds=1)
- assert isinstance(res, pandas.Timedelta)
- assert res == pandas.Timedelta(days=1, nanoseconds=1)
+ res = freq + pd.Timedelta(nanoseconds=1)
+ assert isinstance(res, pd.Timedelta)
+ assert res == pd.Timedelta(days=1, nanoseconds=1)
def compare_series_dt_tz(result, expected, typ, version):
@@ -337,7 +339,7 @@ def compress_file(self, src_path, dest_path, compression):
compression=zipfile.ZIP_DEFLATED) as f:
f.write(src_path, os.path.basename(src_path))
elif compression == 'xz':
- lzma = pandas.compat.import_lzma()
+ lzma = pd.compat.import_lzma()
f = lzma.LZMAFile(dest_path, "w")
else:
msg = 'Unrecognized compression type: {}'.format(compression)
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index 4179e81d02042..55b738a56f809 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -1,39 +1,37 @@
-import pytest
-import os
-import tempfile
from contextlib import contextmanager
-from warnings import catch_warnings, simplefilter
-from distutils.version import LooseVersion
-
import datetime
from datetime import timedelta
+from distutils.version import LooseVersion
+import os
+import tempfile
+from warnings import catch_warnings, simplefilter
import numpy as np
+import pytest
-import pandas as pd
-from pandas import (Series, DataFrame, Panel, MultiIndex, Int64Index,
- RangeIndex, Categorical, bdate_range,
- date_range, timedelta_range, Index, DatetimeIndex,
- isna, compat, concat, Timestamp)
+from pandas.compat import (
+ PY35, PY36, BytesIO, is_platform_little_endian, is_platform_windows,
+ lrange, range, text_type, u)
+import pandas.util._test_decorators as td
+from pandas.core.dtypes.common import is_categorical_dtype
+
+import pandas as pd
+from pandas import (
+ Categorical, DataFrame, DatetimeIndex, Index, Int64Index, MultiIndex,
+ Panel, RangeIndex, Series, Timestamp, bdate_range, compat, concat,
+ date_range, isna, timedelta_range)
import pandas.util.testing as tm
-import pandas.util._test_decorators as td
-from pandas.util.testing import (assert_panel_equal,
- assert_frame_equal,
- assert_series_equal,
- set_timezone)
-
-from pandas.compat import (is_platform_windows, is_platform_little_endian,
- PY35, PY36, BytesIO, text_type,
- range, lrange, u)
+from pandas.util.testing import (
+ assert_frame_equal, assert_panel_equal, assert_series_equal, set_timezone)
+
+from pandas.io import pytables as pytables # noqa:E402
from pandas.io.formats.printing import pprint_thing
-from pandas.core.dtypes.common import is_categorical_dtype
+from pandas.io.pytables import (
+ ClosedFileError, HDFStore, PossibleDataLossError, Term, read_hdf)
+from pandas.io.pytables import TableIterator # noqa:E402
tables = pytest.importorskip('tables')
-from pandas.io import pytables as pytables # noqa:E402
-from pandas.io.pytables import (TableIterator, # noqa:E402
- HDFStore, Term, read_hdf,
- PossibleDataLossError, ClosedFileError)
_default_compressor = ('blosc' if LooseVersion(tables.__version__) >=
diff --git a/pandas/tests/io/test_s3.py b/pandas/tests/io/test_s3.py
index a2c3d17f8754a..32eae8ed328f4 100644
--- a/pandas/tests/io/test_s3.py
+++ b/pandas/tests/io/test_s3.py
@@ -1,7 +1,9 @@
import pytest
-from pandas import read_csv
from pandas.compat import BytesIO
+
+from pandas import read_csv
+
from pandas.io.common import is_s3_url
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 1ad5d636ccf23..75a6d8d009083 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -18,27 +18,29 @@
"""
from __future__ import print_function
-import pytest
-import sqlite3
-import csv
+import csv
+from datetime import date, datetime, time
+import sqlite3
import warnings
-import numpy as np
-import pandas as pd
-from datetime import datetime, date, time
+import numpy as np
+import pytest
-from pandas.core.dtypes.common import (is_datetime64_dtype,
- is_datetime64tz_dtype)
-from pandas import DataFrame, Series, Index, MultiIndex, isna, concat
-from pandas import date_range, to_datetime, to_timedelta, Timestamp
import pandas.compat as compat
-from pandas.compat import range, lrange, string_types, PY36
+from pandas.compat import PY36, lrange, range, string_types
-import pandas.io.sql as sql
-from pandas.io.sql import read_sql_table, read_sql_query
+from pandas.core.dtypes.common import (
+ is_datetime64_dtype, is_datetime64tz_dtype)
+
+import pandas as pd
+from pandas import (
+ DataFrame, Index, MultiIndex, Series, Timestamp, concat, date_range, isna,
+ to_datetime, to_timedelta)
import pandas.util.testing as tm
+import pandas.io.sql as sql
+from pandas.io.sql import read_sql_query, read_sql_table
try:
import sqlalchemy
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 3413b8fdf18d1..ce9be6a7857bf 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -1,27 +1,31 @@
# -*- coding: utf-8 -*-
# pylint: disable=E1101
+from collections import OrderedDict
import datetime as dt
-import io
+from datetime import datetime
import gzip
+import io
import os
import struct
import warnings
-from collections import OrderedDict
-from datetime import datetime
import numpy as np
import pytest
-import pandas as pd
-import pandas.util.testing as tm
import pandas.compat as compat
-from pandas.compat import iterkeys, PY3, ResourceWarning
+from pandas.compat import PY3, ResourceWarning, iterkeys
+
from pandas.core.dtypes.common import is_categorical_dtype
+
+import pandas as pd
from pandas.core.frame import DataFrame, Series
+import pandas.util.testing as tm
+
from pandas.io.parsers import read_csv
-from pandas.io.stata import (InvalidColumnName, PossiblePrecisionLoss,
- StataMissingValue, StataReader, read_stata)
+from pandas.io.stata import (
+ InvalidColumnName, PossiblePrecisionLoss, StataMissingValue, StataReader,
+ read_stata)
@pytest.fixture
diff --git a/setup.cfg b/setup.cfg
index 1d8b1d2a37249..89c47ed9074bb 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -137,52 +137,6 @@ skip=
pandas/tests/test_take.py,
pandas/tests/test_nanops.py,
pandas/tests/test_config.py,
- pandas/tests/io/test_clipboard.py,
- pandas/tests/io/test_compression.py,
- pandas/tests/io/test_pytables.py,
- pandas/tests/io/test_parquet.py,
- pandas/tests/io/generate_legacy_storage_files.py,
- pandas/tests/io/test_common.py,
- pandas/tests/io/test_feather.py,
- pandas/tests/io/test_s3.py,
- pandas/tests/io/test_html.py,
- pandas/tests/io/test_sql.py,
- pandas/tests/io/test_stata.py,
- pandas/tests/io/conftest.py,
- pandas/tests/io/test_pickle.py,
- pandas/tests/io/test_gbq.py,
- pandas/tests/io/test_gcs.py,
- pandas/tests/io/sas/test_sas.py,
- pandas/tests/io/sas/test_sas7bdat.py,
- pandas/tests/io/sas/test_xport.py,
- pandas/tests/io/formats/test_eng_formatting.py,
- pandas/tests/io/formats/test_to_excel.py,
- pandas/tests/io/formats/test_to_html.py,
- pandas/tests/io/formats/test_style.py,
- pandas/tests/io/formats/test_format.py,
- pandas/tests/io/formats/test_to_csv.py,
- pandas/tests/io/formats/test_css.py,
- pandas/tests/io/formats/test_to_latex.py,
- pandas/tests/io/formats/test_printing.py,
- pandas/tests/io/msgpack/test_buffer.py,
- pandas/tests/io/msgpack/test_read_size.py,
- pandas/tests/io/msgpack/test_pack.py,
- pandas/tests/io/msgpack/test_except.py,
- pandas/tests/io/msgpack/test_unpack_raw.py,
- pandas/tests/io/msgpack/test_unpack.py,
- pandas/tests/io/msgpack/test_newspec.py,
- pandas/tests/io/msgpack/common.py,
- pandas/tests/io/msgpack/test_limits.py,
- pandas/tests/io/msgpack/test_extension.py,
- pandas/tests/io/msgpack/test_sequnpack.py,
- pandas/tests/io/msgpack/test_subtype.py,
- pandas/tests/io/msgpack/test_seq.py,
- pandas/tests/io/json/test_compression.py,
- pandas/tests/io/json/test_ujson.py,
- pandas/tests/io/json/test_normalize.py,
- pandas/tests/io/json/test_readlines.py,
- pandas/tests/io/json/test_pandas.py,
- pandas/tests/io/json/test_json_table_schema.py,
pandas/tests/api/test_types.py,
pandas/tests/api/test_api.py,
pandas/tests/tools/test_numeric.py,
| - [ ] xref #23334
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/24532 | 2019-01-01T18:11:22Z | 2019-01-02T02:54:07Z | 2019-01-02T02:54:07Z | 2019-01-02T20:25:41Z |
CLN: Unused doc files | diff --git a/doc/source/_static/legacy_0.10.h5 b/doc/source/_static/legacy_0.10.h5
deleted file mode 100644
index b1439ef16361a..0000000000000
Binary files a/doc/source/_static/legacy_0.10.h5 and /dev/null differ
diff --git a/doc/source/_static/stub b/doc/source/_static/stub
deleted file mode 100644
index e69de29bb2d1d..0000000000000
diff --git a/doc/source/styled.xlsx b/doc/source/styled.xlsx
deleted file mode 100644
index 1233ff2b8692b..0000000000000
Binary files a/doc/source/styled.xlsx and /dev/null differ
| Didn't find any usages of these files in the docs | https://api.github.com/repos/pandas-dev/pandas/pulls/51850 | 2023-03-08T22:15:20Z | 2023-03-08T23:46:53Z | 2023-03-08T23:46:53Z | 2023-03-09T00:09:03Z |
Backport PR #51464 on branch 2.0.x (CoW: Ignore copy=True when copy_on_write is enabled) | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 4d171a6490ccc..36fd0dda5d2bc 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -11028,7 +11028,7 @@ def to_timestamp(
>>> df2.index
DatetimeIndex(['2023-01-31', '2024-01-31'], dtype='datetime64[ns]', freq=None)
"""
- new_obj = self.copy(deep=copy)
+ new_obj = self.copy(deep=copy and not using_copy_on_write())
axis_name = self._get_axis_name(axis)
old_ax = getattr(self, axis_name)
@@ -11085,7 +11085,7 @@ def to_period(
>>> idx.to_period("Y")
PeriodIndex(['2001', '2002', '2003'], dtype='period[A-DEC]')
"""
- new_obj = self.copy(deep=copy)
+ new_obj = self.copy(deep=copy and not using_copy_on_write())
axis_name = self._get_axis_name(axis)
old_ax = getattr(self, axis_name)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 65f76f7e295a6..8a34df3385036 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -442,7 +442,7 @@ def set_flags(
>>> df2.flags.allows_duplicate_labels
False
"""
- df = self.copy(deep=copy)
+ df = self.copy(deep=copy and not using_copy_on_write())
if allows_duplicate_labels is not None:
df.flags["allows_duplicate_labels"] = allows_duplicate_labels
return df
@@ -713,7 +713,7 @@ def _set_axis_nocheck(
else:
# With copy=False, we create a new object but don't copy the
# underlying data.
- obj = self.copy(deep=copy)
+ obj = self.copy(deep=copy and not using_copy_on_write())
setattr(obj, obj._get_axis_name(axis), labels)
return obj
@@ -742,7 +742,7 @@ def swapaxes(
j = self._get_axis_number(axis2)
if i == j:
- return self.copy(deep=copy)
+ return self.copy(deep=copy and not using_copy_on_write())
mapping = {i: j, j: i}
@@ -999,7 +999,7 @@ def _rename(
index = mapper
self._check_inplace_and_allows_duplicate_labels(inplace)
- result = self if inplace else self.copy(deep=copy)
+ result = self if inplace else self.copy(deep=copy and not using_copy_on_write())
for axis_no, replacements in enumerate((index, columns)):
if replacements is None:
@@ -1215,6 +1215,9 @@ class name
inplace = validate_bool_kwarg(inplace, "inplace")
+ if copy and using_copy_on_write():
+ copy = False
+
if mapper is not lib.no_default:
# Use v0.23 behavior if a scalar or list
non_mapper = is_scalar(mapper) or (
@@ -5322,6 +5325,8 @@ def reindex(
# if all axes that are requested to reindex are equal, then only copy
# if indicated must have index names equal here as well as values
+ if copy and using_copy_on_write():
+ copy = False
if all(
self._get_axis(axis_name).identical(ax)
for axis_name, ax in axes.items()
@@ -5416,10 +5421,14 @@ def _reindex_with_indexers(
# If we've made a copy once, no need to make another one
copy = False
- if (copy or copy is None) and new_data is self._mgr:
+ if (
+ (copy or copy is None)
+ and new_data is self._mgr
+ and not using_copy_on_write()
+ ):
new_data = new_data.copy(deep=copy)
elif using_copy_on_write() and new_data is self._mgr:
- new_data = new_data.copy(deep=copy)
+ new_data = new_data.copy(deep=False)
return self._constructor(new_data).__finalize__(self)
@@ -6239,6 +6248,9 @@ def astype(
2 2020-01-03
dtype: datetime64[ns]
"""
+ if copy and using_copy_on_write():
+ copy = False
+
if is_dict_like(dtype):
if self.ndim == 1: # i.e. Series
if len(dtype) > 1 or self.name not in dtype:
@@ -9499,6 +9511,8 @@ def _align_series(
fill_axis: Axis = 0,
):
is_series = isinstance(self, ABCSeries)
+ if copy and using_copy_on_write():
+ copy = False
if (not is_series and axis is None) or axis not in [None, 0, 1]:
raise ValueError("Must specify axis=0 or 1")
@@ -10261,8 +10275,7 @@ def truncate(
if isinstance(ax, MultiIndex):
setattr(result, self._get_axis_name(axis), ax.truncate(before, after))
- if copy or (copy is None and not using_copy_on_write()):
- result = result.copy(deep=copy)
+ result = result.copy(deep=copy and not using_copy_on_write())
return result
@@ -10343,7 +10356,7 @@ def _tz_convert(ax, tz):
raise ValueError(f"The level {level} is not valid")
ax = _tz_convert(ax, tz)
- result = self.copy(deep=copy)
+ result = self.copy(deep=copy and not using_copy_on_write())
result = result.set_axis(ax, axis=axis, copy=False)
return result.__finalize__(self, method="tz_convert")
@@ -10525,7 +10538,7 @@ def _tz_localize(ax, tz, ambiguous, nonexistent):
raise ValueError(f"The level {level} is not valid")
ax = _tz_localize(ax, tz, ambiguous, nonexistent)
- result = self.copy(deep=copy)
+ result = self.copy(deep=copy and not using_copy_on_write())
result = result.set_axis(ax, axis=axis, copy=False)
return result.__finalize__(self, method="tz_localize")
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index dfe6a5a94ad58..05ba4170cb329 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -442,6 +442,8 @@ def astype(self: T, dtype, copy: bool | None = False, errors: str = "raise") ->
copy = False
else:
copy = True
+ elif using_copy_on_write():
+ copy = False
return self.apply(
"astype",
@@ -457,6 +459,8 @@ def convert(self: T, copy: bool | None) -> T:
copy = False
else:
copy = True
+ elif using_copy_on_write():
+ copy = False
return self.apply("convert", copy=copy, using_cow=using_copy_on_write())
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 6758ab9cb6814..bc8f4b97d539a 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -366,6 +366,8 @@ def concat(
copy = False
else:
copy = True
+ elif copy and using_copy_on_write():
+ copy = False
op = _Concatenator(
objs,
diff --git a/pandas/core/series.py b/pandas/core/series.py
index c7a86e2b0bf09..02ec7208be0cd 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4129,7 +4129,7 @@ def swaplevel(
{examples}
"""
assert isinstance(self.index, MultiIndex)
- result = self.copy(deep=copy)
+ result = self.copy(deep=copy and not using_copy_on_write())
result.index = self.index.swaplevel(i, j)
return result
@@ -5743,7 +5743,7 @@ def to_timestamp(
if not isinstance(self.index, PeriodIndex):
raise TypeError(f"unsupported Type {type(self.index).__name__}")
- new_obj = self.copy(deep=copy)
+ new_obj = self.copy(deep=copy and not using_copy_on_write())
new_index = self.index.to_timestamp(freq=freq, how=how)
setattr(new_obj, "index", new_index)
return new_obj
@@ -5783,7 +5783,7 @@ def to_period(self, freq: str | None = None, copy: bool | None = None) -> Series
if not isinstance(self.index, DatetimeIndex):
raise TypeError(f"unsupported Type {type(self.index).__name__}")
- new_obj = self.copy(deep=copy)
+ new_obj = self.copy(deep=copy and not using_copy_on_write())
new_index = self.index.to_period(freq=freq)
setattr(new_obj, "index", new_index)
return new_obj
diff --git a/pandas/tests/copy_view/test_functions.py b/pandas/tests/copy_view/test_functions.py
index b6f2f0543cb2b..53d72baf7da4e 100644
--- a/pandas/tests/copy_view/test_functions.py
+++ b/pandas/tests/copy_view/test_functions.py
@@ -181,6 +181,21 @@ def test_concat_mixed_series_frame(using_copy_on_write):
tm.assert_frame_equal(result, expected)
+@pytest.mark.parametrize("copy", [True, None, False])
+def test_concat_copy_keyword(using_copy_on_write, copy):
+ df = DataFrame({"a": [1, 2]})
+ df2 = DataFrame({"b": [1.5, 2.5]})
+
+ result = concat([df, df2], axis=1, copy=copy)
+
+ if using_copy_on_write or copy is False:
+ assert np.shares_memory(get_array(df, "a"), get_array(result, "a"))
+ assert np.shares_memory(get_array(df2, "b"), get_array(result, "b"))
+ else:
+ assert not np.shares_memory(get_array(df, "a"), get_array(result, "a"))
+ assert not np.shares_memory(get_array(df2, "b"), get_array(result, "b"))
+
+
@pytest.mark.parametrize(
"func",
[
@@ -280,3 +295,18 @@ def test_merge_on_key_enlarging_one(using_copy_on_write, func, how):
assert not np.shares_memory(get_array(result, "a"), get_array(df1, "a"))
tm.assert_frame_equal(df1, df1_orig)
tm.assert_frame_equal(df2, df2_orig)
+
+
+@pytest.mark.parametrize("copy", [True, None, False])
+def test_merge_copy_keyword(using_copy_on_write, copy):
+ df = DataFrame({"a": [1, 2]})
+ df2 = DataFrame({"b": [3, 4.5]})
+
+ result = df.merge(df2, copy=copy, left_index=True, right_index=True)
+
+ if using_copy_on_write or copy is False:
+ assert np.shares_memory(get_array(df, "a"), get_array(result, "a"))
+ assert np.shares_memory(get_array(df2, "b"), get_array(result, "b"))
+ else:
+ assert not np.shares_memory(get_array(df, "a"), get_array(result, "a"))
+ assert not np.shares_memory(get_array(df2, "b"), get_array(result, "b"))
diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index b30f8ab4c7b9c..7429a73717470 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -66,6 +66,7 @@ def test_copy_shallow(using_copy_on_write):
lambda df, copy: df.rename(columns=str.lower, copy=copy),
lambda df, copy: df.reindex(columns=["a", "c"], copy=copy),
lambda df, copy: df.reindex_like(df, copy=copy),
+ lambda df, copy: df.align(df, copy=copy)[0],
lambda df, copy: df.set_axis(["a", "b", "c"], axis="index", copy=copy),
lambda df, copy: df.rename_axis(index="test", copy=copy),
lambda df, copy: df.rename_axis(columns="test", copy=copy),
@@ -84,6 +85,7 @@ def test_copy_shallow(using_copy_on_write):
"rename",
"reindex",
"reindex_like",
+ "align",
"set_axis",
"rename_axis0",
"rename_axis1",
@@ -115,15 +117,12 @@ def test_methods_copy_keyword(
df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [0.1, 0.2, 0.3]}, index=index)
df2 = method(df, copy=copy)
- share_memory = (using_copy_on_write and copy is not True) or copy is False
+ share_memory = using_copy_on_write or copy is False
if request.node.callspec.id.startswith("reindex-"):
# TODO copy=False without CoW still returns a copy in this case
if not using_copy_on_write and not using_array_manager and copy is False:
share_memory = False
- # TODO copy=True with CoW still returns a view
- if using_copy_on_write:
- share_memory = True
if share_memory:
assert np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
@@ -131,6 +130,83 @@ def test_methods_copy_keyword(
assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+@pytest.mark.parametrize("copy", [True, None, False])
+@pytest.mark.parametrize(
+ "method",
+ [
+ lambda ser, copy: ser.rename(index={0: 100}, copy=copy),
+ lambda ser, copy: ser.reindex(index=ser.index, copy=copy),
+ lambda ser, copy: ser.reindex_like(ser, copy=copy),
+ lambda ser, copy: ser.align(ser, copy=copy)[0],
+ lambda ser, copy: ser.set_axis(["a", "b", "c"], axis="index", copy=copy),
+ lambda ser, copy: ser.rename_axis(index="test", copy=copy),
+ lambda ser, copy: ser.astype("int64", copy=copy),
+ lambda ser, copy: ser.swaplevel(0, 1, copy=copy),
+ lambda ser, copy: ser.swapaxes(0, 0, copy=copy),
+ lambda ser, copy: ser.truncate(0, 5, copy=copy),
+ lambda ser, copy: ser.infer_objects(copy=copy),
+ lambda ser, copy: ser.to_timestamp(copy=copy),
+ lambda ser, copy: ser.to_period(freq="D", copy=copy),
+ lambda ser, copy: ser.tz_localize("US/Central", copy=copy),
+ lambda ser, copy: ser.tz_convert("US/Central", copy=copy),
+ lambda ser, copy: ser.set_flags(allows_duplicate_labels=False, copy=copy),
+ ],
+ ids=[
+ "rename",
+ "reindex",
+ "reindex_like",
+ "align",
+ "set_axis",
+ "rename_axis0",
+ "astype",
+ "swaplevel",
+ "swapaxes",
+ "truncate",
+ "infer_objects",
+ "to_timestamp",
+ "to_period",
+ "tz_localize",
+ "tz_convert",
+ "set_flags",
+ ],
+)
+def test_methods_series_copy_keyword(request, method, copy, using_copy_on_write):
+ index = None
+ if "to_timestamp" in request.node.callspec.id:
+ index = period_range("2012-01-01", freq="D", periods=3)
+ elif "to_period" in request.node.callspec.id:
+ index = date_range("2012-01-01", freq="D", periods=3)
+ elif "tz_localize" in request.node.callspec.id:
+ index = date_range("2012-01-01", freq="D", periods=3)
+ elif "tz_convert" in request.node.callspec.id:
+ index = date_range("2012-01-01", freq="D", periods=3, tz="Europe/Brussels")
+ elif "swaplevel" in request.node.callspec.id:
+ index = MultiIndex.from_arrays([[1, 2, 3], [4, 5, 6]])
+
+ ser = Series([1, 2, 3], index=index)
+ ser2 = method(ser, copy=copy)
+
+ share_memory = using_copy_on_write or copy is False
+
+ if share_memory:
+ assert np.shares_memory(get_array(ser2), get_array(ser))
+ else:
+ assert not np.shares_memory(get_array(ser2), get_array(ser))
+
+
+@pytest.mark.parametrize("copy", [True, None, False])
+def test_transpose_copy_keyword(using_copy_on_write, copy, using_array_manager):
+ df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
+ result = df.transpose(copy=copy)
+ share_memory = using_copy_on_write or copy is False or copy is None
+ share_memory = share_memory and not using_array_manager
+
+ if share_memory:
+ assert np.shares_memory(get_array(df, "a"), get_array(result, 0))
+ else:
+ assert not np.shares_memory(get_array(df, "a"), get_array(result, 0))
+
+
# -----------------------------------------------------------------------------
# DataFrame methods returning new DataFrame using shallow copy
@@ -1119,14 +1195,13 @@ def test_set_flags(using_copy_on_write):
tm.assert_series_equal(ser, expected)
-@pytest.mark.parametrize("copy_kwargs", [{"copy": True}, {}])
@pytest.mark.parametrize("kwargs", [{"mapper": "test"}, {"index": "test"}])
-def test_rename_axis(using_copy_on_write, kwargs, copy_kwargs):
+def test_rename_axis(using_copy_on_write, kwargs):
df = DataFrame({"a": [1, 2, 3, 4]}, index=Index([1, 2, 3, 4], name="a"))
df_orig = df.copy()
- df2 = df.rename_axis(**kwargs, **copy_kwargs)
+ df2 = df.rename_axis(**kwargs)
- if using_copy_on_write and not copy_kwargs:
+ if using_copy_on_write:
assert np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
else:
assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
diff --git a/pandas/tests/frame/methods/test_reindex.py b/pandas/tests/frame/methods/test_reindex.py
index ceea53e3dd8bf..52e841a8c569a 100644
--- a/pandas/tests/frame/methods/test_reindex.py
+++ b/pandas/tests/frame/methods/test_reindex.py
@@ -149,7 +149,10 @@ def test_reindex_copies_ea(self, using_copy_on_write):
# pass both columns and index
result2 = df.reindex(columns=cols, index=df.index, copy=True)
- assert not np.shares_memory(result2[0].array._data, df[0].array._data)
+ if using_copy_on_write:
+ assert np.shares_memory(result2[0].array._data, df[0].array._data)
+ else:
+ assert not np.shares_memory(result2[0].array._data, df[0].array._data)
@td.skip_array_manager_not_yet_implemented
def test_reindex_date_fill_value(self):
diff --git a/pandas/tests/frame/methods/test_set_axis.py b/pandas/tests/frame/methods/test_set_axis.py
index fd140e0098f2a..2fc629b14a50e 100644
--- a/pandas/tests/frame/methods/test_set_axis.py
+++ b/pandas/tests/frame/methods/test_set_axis.py
@@ -33,13 +33,14 @@ def test_set_axis_copy(self, obj, using_copy_on_write):
tm.assert_equal(expected, result)
assert result is not obj
# check we DID make a copy
- if obj.ndim == 1:
- assert not tm.shares_memory(result, obj)
- else:
- assert not any(
- tm.shares_memory(result.iloc[:, i], obj.iloc[:, i])
- for i in range(obj.shape[1])
- )
+ if not using_copy_on_write:
+ if obj.ndim == 1:
+ assert not tm.shares_memory(result, obj)
+ else:
+ assert not any(
+ tm.shares_memory(result.iloc[:, i], obj.iloc[:, i])
+ for i in range(obj.shape[1])
+ )
result = obj.set_axis(new_index, axis=0, copy=False)
tm.assert_equal(expected, result)
diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py
index b08d0a33d08c6..44b02310eb8a7 100644
--- a/pandas/tests/reshape/concat/test_concat.py
+++ b/pandas/tests/reshape/concat/test_concat.py
@@ -59,8 +59,12 @@ def test_concat_copy(self, using_array_manager, using_copy_on_write):
# These are actual copies.
result = concat([df, df2, df3], axis=1, copy=True)
- for arr in result._mgr.arrays:
- assert arr.base is None
+ if not using_copy_on_write:
+ for arr in result._mgr.arrays:
+ assert arr.base is None
+ else:
+ for arr in result._mgr.arrays:
+ assert arr.base is not None
# These are the same.
result = concat([df, df2, df3], axis=1, copy=False)
diff --git a/pandas/tests/reshape/concat/test_dataframe.py b/pandas/tests/reshape/concat/test_dataframe.py
index 23a49c33099cb..105ffe84a0703 100644
--- a/pandas/tests/reshape/concat/test_dataframe.py
+++ b/pandas/tests/reshape/concat/test_dataframe.py
@@ -195,15 +195,16 @@ def test_concat_duplicates_in_index_with_keys(self):
@pytest.mark.parametrize("ignore_index", [True, False])
@pytest.mark.parametrize("order", ["C", "F"])
@pytest.mark.parametrize("axis", [0, 1])
- def test_concat_copies(self, axis, order, ignore_index):
+ def test_concat_copies(self, axis, order, ignore_index, using_copy_on_write):
# based on asv ConcatDataFrames
df = DataFrame(np.zeros((10000, 200), dtype=np.float32, order=order))
res = concat([df] * 5, axis=axis, ignore_index=ignore_index, copy=True)
- for arr in res._iter_column_arrays():
- for arr2 in df._iter_column_arrays():
- assert not np.shares_memory(arr, arr2)
+ if not using_copy_on_write:
+ for arr in res._iter_column_arrays():
+ for arr2 in df._iter_column_arrays():
+ assert not np.shares_memory(arr, arr2)
def test_outer_sort_columns(self):
# GH#47127
diff --git a/pandas/tests/reshape/concat/test_index.py b/pandas/tests/reshape/concat/test_index.py
index e0ea09138ef3c..ce06e74de91b9 100644
--- a/pandas/tests/reshape/concat/test_index.py
+++ b/pandas/tests/reshape/concat/test_index.py
@@ -100,18 +100,28 @@ def test_concat_rename_index(self):
tm.assert_frame_equal(result, exp)
assert result.index.names == exp.index.names
- def test_concat_copy_index_series(self, axis):
+ def test_concat_copy_index_series(self, axis, using_copy_on_write):
# GH 29879
ser = Series([1, 2])
comb = concat([ser, ser], axis=axis, copy=True)
- assert comb.index is not ser.index
+ if not using_copy_on_write or axis in [0, "index"]:
+ assert comb.index is not ser.index
+ else:
+ assert comb.index is ser.index
- def test_concat_copy_index_frame(self, axis):
+ def test_concat_copy_index_frame(self, axis, using_copy_on_write):
# GH 29879
df = DataFrame([[1, 2], [3, 4]], columns=["a", "b"])
comb = concat([df, df], axis=axis, copy=True)
- assert comb.index is not df.index
- assert comb.columns is not df.columns
+ if not using_copy_on_write:
+ assert comb.index is not df.index
+ assert comb.columns is not df.columns
+ elif axis in [0, "index"]:
+ assert comb.index is not df.index
+ assert comb.columns is df.columns
+ elif axis in [1, "columns"]:
+ assert comb.index is df.index
+ assert comb.columns is not df.columns
def test_default_index(self):
# is_series and ignore_index
diff --git a/pandas/tests/series/methods/test_align.py b/pandas/tests/series/methods/test_align.py
index b2e03684bc902..7f34f4046d33c 100644
--- a/pandas/tests/series/methods/test_align.py
+++ b/pandas/tests/series/methods/test_align.py
@@ -118,14 +118,18 @@ def test_align_nocopy(datetime_series, using_copy_on_write):
assert (b[:2] == 5).all()
-def test_align_same_index(datetime_series):
+def test_align_same_index(datetime_series, using_copy_on_write):
a, b = datetime_series.align(datetime_series, copy=False)
assert a.index is datetime_series.index
assert b.index is datetime_series.index
a, b = datetime_series.align(datetime_series, copy=True)
- assert a.index is not datetime_series.index
- assert b.index is not datetime_series.index
+ if not using_copy_on_write:
+ assert a.index is not datetime_series.index
+ assert b.index is not datetime_series.index
+ else:
+ assert a.index is datetime_series.index
+ assert b.index is datetime_series.index
def test_align_multiindex():
| Backport PR #51464: CoW: Ignore copy=True when copy_on_write is enabled | https://api.github.com/repos/pandas-dev/pandas/pulls/51849 | 2023-03-08T22:04:58Z | 2023-03-08T23:45:44Z | 2023-03-08T23:45:44Z | 2023-03-08T23:45:44Z |
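The tests above all hinge on whether a "copy" shares memory with its source, checked via `np.shares_memory`. A minimal, version-agnostic sketch of that check (a shallow copy always shares its buffer, which is also how a lazy Copy-on-Write copy behaves until the first write triggers the real copy):

```python
import numpy as np
import pandas as pd

ser = pd.Series([1.0, 2.0, 3.0])

# A deep copy owns its own buffer...
deep = ser.copy(deep=True)
assert not np.shares_memory(deep.to_numpy(), ser.to_numpy())

# ...while a shallow copy shares it -- the same memory relationship
# a lazy CoW copy has before any mutation.
shallow = ser.copy(deep=False)
assert np.shares_memory(shallow.to_numpy(), ser.to_numpy())
```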
Backport PR #51689 on branch 2.0.x (BUG: mask/where raising for mixed dtype frame and no other) | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 65f76f7e295a6..2e335e3f79f3d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6069,8 +6069,8 @@ def _is_mixed_type(self) -> bool_t:
def _check_inplace_setting(self, value) -> bool_t:
"""check whether we allow in-place setting with this type of value"""
if self._is_mixed_type and not self._mgr.is_numeric_mixed_type:
- # allow an actual np.nan thru
- if is_float(value) and np.isnan(value):
+ # allow an actual np.nan through
+ if is_float(value) and np.isnan(value) or value is lib.no_default:
return True
raise TypeError(
diff --git a/pandas/tests/frame/indexing/test_mask.py b/pandas/tests/frame/indexing/test_mask.py
index 23458b096a140..233e2dcce81a7 100644
--- a/pandas/tests/frame/indexing/test_mask.py
+++ b/pandas/tests/frame/indexing/test_mask.py
@@ -141,3 +141,12 @@ def test_mask_return_dtype():
excepted = Series([1.0, 0.0, 1.0, 0.0], dtype=ser.dtype)
result = ser.mask(cond, other)
tm.assert_series_equal(result, excepted)
+
+
+def test_mask_inplace_no_other():
+ # GH#51685
+ df = DataFrame({"a": [1, 2], "b": ["x", "y"]})
+ cond = DataFrame({"a": [True, False], "b": [False, True]})
+ df.mask(cond, inplace=True)
+ expected = DataFrame({"a": [np.nan, 2], "b": ["x", np.nan]})
+ tm.assert_frame_equal(df, expected)
diff --git a/pandas/tests/frame/indexing/test_where.py b/pandas/tests/frame/indexing/test_where.py
index f0fb0a0595cbd..e4b9f9baf5b45 100644
--- a/pandas/tests/frame/indexing/test_where.py
+++ b/pandas/tests/frame/indexing/test_where.py
@@ -1025,3 +1025,12 @@ def test_where_int_overflow(replacement):
expected = DataFrame([[1.0, 2e25, "nine"], [replacement, 0.1, replacement]])
tm.assert_frame_equal(result, expected)
+
+
+def test_where_inplace_no_other():
+ # GH#51685
+ df = DataFrame({"a": [1, 2], "b": ["x", "y"]})
+ cond = DataFrame({"a": [True, False], "b": [False, True]})
+ df.where(cond, inplace=True)
+ expected = DataFrame({"a": [1, np.nan], "b": [np.nan, "y"]})
+ tm.assert_frame_equal(df, expected)
| Backport PR #51689: BUG: mask/where raising for mixed dtype frame and no other | https://api.github.com/repos/pandas-dev/pandas/pulls/51848 | 2023-03-08T21:45:24Z | 2023-03-08T23:46:25Z | 2023-03-08T23:46:25Z | 2023-03-08T23:46:25Z |
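The regression fixed above concerned `mask`/`where` on a mixed-dtype frame when no `other` is given, so the replacement defaults to NaN. A small sketch of the non-inplace form, which behaves the same way the fixed inplace path now does:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
cond = pd.DataFrame({"a": [True, False], "b": [False, True]})

# mask replaces entries where cond is True; with no `other`,
# NaN is used as the replacement value.
result = df.mask(cond)
assert pd.isna(result.loc[0, "a"]) and result.loc[1, "a"] == 2
assert result.loc[0, "b"] == "x" and pd.isna(result.loc[1, "b"])
```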
Remove flake8-rst usage | diff --git a/setup.cfg b/setup.cfg
index 88b61086e1e0f..77717f4347473 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -33,35 +33,3 @@ exclude =
versioneer.py,
# exclude asv benchmark environments from linting
env
-
-[flake8-rst]
-max-line-length = 84
-bootstrap =
- import numpy as np
- import pandas as pd
- # avoiding error when importing again numpy or pandas
- np
- # (in some cases we want to do it to show users)
- pd
-ignore =
- # space before : (needed for how black formats slicing)
- E203,
- # module level import not at top of file
- E402,
- # line break before binary operator
- W503,
- # Classes/functions in different blocks can generate those errors
- # expected 2 blank lines, found 0
- E302,
- # expected 2 blank lines after class or function definition, found 0
- E305,
- # We use semicolon at the end to avoid displaying plot objects
- # statement ends with a semicolon
- E703,
- # comparison to none should be 'if cond is none:'
- E711,
-exclude =
- doc/source/development/contributing_docstring.rst,
- # work around issue of undefined variable warnings
- # https://github.com/pandas-dev/pandas/pull/38837#issuecomment-752884156
- doc/source/getting_started/comparison/includes/*.rst
| This is required to make all the tests pass and merge #51827
As background, flake8-rst breaks on Python 3.10, and the repository has not been maintained since December 2020:
<https://pypi.org/project/flake8-rst/#history> | https://api.github.com/repos/pandas-dev/pandas/pulls/51847 | 2023-03-08T20:59:51Z | 2023-03-08T21:24:41Z | 2023-03-08T21:24:41Z | 2023-03-08T21:24:42Z
STYLE Remove flake8-rst | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index d39de3ee301f3..f10cfac5f1276 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -160,14 +160,6 @@ repos:
types: [pyi]
args: [scripts/run_stubtest.py]
stages: [manual]
- - id: flake8-rst
- name: flake8-rst
- description: Run flake8 on code snippets in docstrings or RST files
- language: python
- entry: flake8-rst
- types: [rst]
- args: [--filename=*.rst]
- additional_dependencies: [flake8-rst==0.7.0, flake8==3.7.9]
- id: inconsistent-namespace-usage
name: 'Check for inconsistent use of pandas namespace'
entry: python scripts/check_for_inconsistent_pandas_namespace.py
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
flake8-rst is unmaintained and causes errors on Python 3.10 (explanation by @MarcoGorelli).
| https://api.github.com/repos/pandas-dev/pandas/pulls/51843 | 2023-03-08T16:17:51Z | 2023-03-08T19:22:48Z | 2023-03-08T19:22:48Z | 2023-03-09T15:00:50Z |
DOC add example of Series.index | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 45df480779ee7..c046d55d80b49 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -85,7 +85,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
MSG='Partially validate docstrings (EX01)' ; echo $MSG
$BASE_DIR/scripts/validate_docstrings.py --format=actions --errors=EX01 --ignore_functions \
- pandas.Series.index \
pandas.Series.__iter__ \
pandas.Series.keys \
pandas.Series.item \
diff --git a/pandas/core/series.py b/pandas/core/series.py
index c1997331ed06d..6551ba5b6761d 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -5745,7 +5745,46 @@ def to_period(self, freq: str | None = None, copy: bool | None = None) -> Series
_info_axis_name: Literal["index"] = "index"
index = properties.AxisProperty(
- axis=0, doc="The index (axis labels) of the Series."
+ axis=0,
+ doc="""
+ The index (axis labels) of the Series.
+
+ The index of a Series is used to label and identify each element of the
+ underlying data. The index can be thought of as an immutable ordered set
+ (technically a multi-set, as it may contain duplicate labels), and is
+ used to index and align data in pandas.
+
+ Returns
+ -------
+ Index
+ The index labels of the Series.
+
+ See Also
+ --------
+ Series.reindex : Conform Series to new index.
+ Series.set_index : Set Series as DataFrame index.
+ Index : The base pandas index type.
+
+ Notes
+ -----
+ For more information on pandas indexing, see the `indexing user guide
+ <https://pandas.pydata.org/docs/user_guide/indexing.html>`__.
+
+ Examples
+ --------
+ To create a Series with a custom index and view the index labels:
+
+ >>> cities = ['Kolkata', 'Chicago', 'Toronto', 'Lisbon']
+ >>> populations = [14.85, 2.71, 2.93, 0.51]
+ >>> city_series = pd.Series(populations, index=cities)
+ >>> city_series.index
+ Index(['Kolkata', 'Chicago', 'Toronto', 'Lisbon'], dtype='object')
+
+ To change the index labels of an existing Series:
+ >>> city_series.index = ['KOL', 'CHI', 'TOR', 'LIS']
+ >>> city_series.index
+ Index(['KOL', 'CHI', 'TOR', 'LIS'], dtype='object')
+ """,
)
# ----------------------------------------------------------------------
| Added the example section.
| https://api.github.com/repos/pandas-dev/pandas/pulls/51842 | 2023-03-08T15:22:04Z | 2023-04-16T20:44:11Z | 2023-04-16T20:44:11Z | 2023-04-17T06:05:24Z |
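A quick sanity check of the docstring example added in the diff above (same data, exercising both reading and reassigning `Series.index`):

```python
import pandas as pd

cities = ["Kolkata", "Chicago", "Toronto", "Lisbon"]
populations = [14.85, 2.71, 2.93, 0.51]
city_series = pd.Series(populations, index=cities)

# The index holds the row labels.
assert list(city_series.index) == cities

# Reassigning .index relabels the rows in place.
city_series.index = ["KOL", "CHI", "TOR", "LIS"]
assert city_series["KOL"] == 14.85
```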
Backport PR #51741 on branch 2.0.x (BUG: indexing empty pyarrow backed object returning corrupt object) | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 12eb2375b69e1..317eca7dc8723 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -1247,6 +1247,7 @@ Indexing
- Bug in :meth:`DataFrame.compare` does not recognize differences when comparing ``NA`` with value in nullable dtypes (:issue:`48939`)
- Bug in :meth:`Series.rename` with :class:`MultiIndex` losing extension array dtypes (:issue:`21055`)
- Bug in :meth:`DataFrame.isetitem` coercing extension array dtypes in :class:`DataFrame` to object (:issue:`49922`)
+- Bug in :meth:`Series.__getitem__` returning corrupt object when selecting from an empty pyarrow backed object (:issue:`51734`)
- Bug in :class:`BusinessHour` would cause creation of :class:`DatetimeIndex` to fail when no opening hour was included in the index (:issue:`49835`)
Missing
diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index 69ca809e4f498..f9add5c2c5d88 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -252,6 +252,7 @@
else:
FLOAT_PYARROW_DTYPES_STR_REPR = []
ALL_INT_PYARROW_DTYPES_STR_REPR = []
+ ALL_PYARROW_DTYPES = []
EMPTY_STRING_PATTERN = re.compile("^$")
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 4e0dd6b75d46a..fbd7626c8637d 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -1010,7 +1010,12 @@ def _concat_same_type(
ArrowExtensionArray
"""
chunks = [array for ea in to_concat for array in ea._data.iterchunks()]
- arr = pa.chunked_array(chunks)
+ if to_concat[0].dtype == "string":
+            # StringDtype has no attribute pyarrow_dtype
+ pa_dtype = pa.string()
+ else:
+ pa_dtype = to_concat[0].dtype.pyarrow_dtype
+ arr = pa.chunked_array(chunks, type=pa_dtype)
return cls(arr)
def _accumulate(
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index cedaaa500736b..91a96c8154779 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -2298,3 +2298,11 @@ def test_dt_tz_localize(unit):
dtype=ArrowDtype(pa.timestamp(unit, "US/Pacific")),
)
tm.assert_series_equal(result, expected)
+
+
+def test_concat_empty_arrow_backed_series(dtype):
+ # GH#51734
+ ser = pd.Series([], dtype=dtype)
+ expected = ser.copy()
+ result = pd.concat([ser[np.array([], dtype=np.bool_)]])
+ tm.assert_series_equal(result, expected)
| #51741
| https://api.github.com/repos/pandas-dev/pandas/pulls/51841 | 2023-03-08T13:23:12Z | 2023-03-08T17:25:58Z | 2023-03-08T17:25:58Z | 2023-03-08T17:26:25Z |
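The fix above keeps the original dtype when concatenating empty extension-backed objects. The same invariant can be exercised without pyarrow installed by using a nullable dtype — a sketch analogous to the new test, with `"Int64"` standing in for a pyarrow-backed dtype:

```python
import numpy as np
import pandas as pd

ser = pd.Series([], dtype="Int64")

# Selecting with an empty boolean mask and concatenating must not
# corrupt or coerce the extension dtype.
result = pd.concat([ser[np.array([], dtype=np.bool_)]])
assert str(result.dtype) == "Int64"
assert len(result) == 0
```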
Backport PR #51766 on branch 2.0.x (CLN: Use type_mapper instead of manual conversion) | diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
index 8a21d99124ec6..1a09157fabd09 100644
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -18,10 +18,8 @@
from pandas.compat._optional import import_optional_dependency
from pandas.util._decorators import doc
-from pandas import (
- arrays,
- get_option,
-)
+import pandas as pd
+from pandas import get_option
from pandas.core.api import (
DataFrame,
RangeIndex,
@@ -173,11 +171,4 @@ def read_feather(
return pa_table.to_pandas(types_mapper=_arrow_dtype_mapping().get)
elif dtype_backend == "pyarrow":
- return DataFrame(
- {
- col_name: arrays.ArrowExtensionArray(pa_col)
- for col_name, pa_col in zip(
- pa_table.column_names, pa_table.itercolumns()
- )
- }
- )
+ return pa_table.to_pandas(types_mapper=pd.ArrowDtype)
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index bdc070d04bd69..b8f2645b788ea 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -54,6 +54,7 @@
from pandas.core.dtypes.generic import ABCIndex
from pandas import (
+ ArrowDtype,
DataFrame,
MultiIndex,
Series,
@@ -960,16 +961,8 @@ def read(self) -> DataFrame | Series:
pa_table = pyarrow_json.read_json(self.data)
if self.use_nullable_dtypes:
if get_option("mode.dtype_backend") == "pyarrow":
- from pandas.arrays import ArrowExtensionArray
-
- return DataFrame(
- {
- col_name: ArrowExtensionArray(pa_col)
- for col_name, pa_col in zip(
- pa_table.column_names, pa_table.itercolumns()
- )
- }
- )
+ return pa_table.to_pandas(types_mapper=ArrowDtype)
+
elif get_option("mode.dtype_backend") == "pandas":
from pandas.io._util import _arrow_dtype_mapping
diff --git a/pandas/io/orc.py b/pandas/io/orc.py
index 5336e2a14f66d..28526ec249d9d 100644
--- a/pandas/io/orc.py
+++ b/pandas/io/orc.py
@@ -28,7 +28,7 @@
is_unsigned_integer_dtype,
)
-from pandas.core.arrays import ArrowExtensionArray
+import pandas as pd
from pandas.core.frame import DataFrame
from pandas.io.common import get_handle
@@ -99,14 +99,7 @@ def read_orc(
if use_nullable_dtypes:
dtype_backend = get_option("mode.dtype_backend")
if dtype_backend == "pyarrow":
- df = DataFrame(
- {
- col_name: ArrowExtensionArray(pa_col)
- for col_name, pa_col in zip(
- pa_table.column_names, pa_table.itercolumns()
- )
- }
- )
+ df = pa_table.to_pandas(types_mapper=pd.ArrowDtype)
else:
from pandas.io._util import _arrow_dtype_mapping
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index aec31f40f8570..7dc839f47b186 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -22,10 +22,10 @@
from pandas.errors import AbstractMethodError
from pandas.util._decorators import doc
+import pandas as pd
from pandas import (
DataFrame,
MultiIndex,
- arrays,
get_option,
)
from pandas.core.shared_docs import _shared_docs
@@ -250,14 +250,11 @@ def read(
if dtype_backend == "pandas":
result = pa_table.to_pandas(**to_pandas_kwargs)
elif dtype_backend == "pyarrow":
- result = DataFrame(
- {
- col_name: arrays.ArrowExtensionArray(pa_col)
- for col_name, pa_col in zip(
- pa_table.column_names, pa_table.itercolumns()
- )
- }
- )
+ # Incompatible types in assignment (expression has type
+ # "Type[ArrowDtype]", target has type overloaded function
+ to_pandas_kwargs["types_mapper"] = pd.ArrowDtype # type: ignore[assignment] # noqa
+ result = pa_table.to_pandas(**to_pandas_kwargs)
+
if manager == "array":
result = result._as_manager("array", copy=False)
return result
diff --git a/pandas/io/parsers/arrow_parser_wrapper.py b/pandas/io/parsers/arrow_parser_wrapper.py
index 420b6212f857a..58dfc95c1e5b6 100644
--- a/pandas/io/parsers/arrow_parser_wrapper.py
+++ b/pandas/io/parsers/arrow_parser_wrapper.py
@@ -5,9 +5,9 @@
from pandas.core.dtypes.inference import is_integer
+import pandas as pd
from pandas import (
DataFrame,
- arrays,
get_option,
)
@@ -153,12 +153,7 @@ def read(self) -> DataFrame:
self.kwds["use_nullable_dtypes"]
and get_option("mode.dtype_backend") == "pyarrow"
):
- frame = DataFrame(
- {
- col_name: arrays.ArrowExtensionArray(pa_col)
- for col_name, pa_col in zip(table.column_names, table.itercolumns())
- }
- )
+ frame = table.to_pandas(types_mapper=pd.ArrowDtype)
else:
frame = table.to_pandas()
return self._finalize_pandas_output(frame)
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 353dc4f1cbd8a..2124787e8a80e 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -1034,14 +1034,7 @@ def test_read_use_nullable_types_pyarrow_config(self, pa, df_full):
df["bool_with_none"] = [True, None, True]
pa_table = pyarrow.Table.from_pandas(df)
- expected = pd.DataFrame(
- {
- col_name: pd.arrays.ArrowExtensionArray(pa_column)
- for col_name, pa_column in zip(
- pa_table.column_names, pa_table.itercolumns()
- )
- }
- )
+ expected = pa_table.to_pandas(types_mapper=pd.ArrowDtype)
# pyarrow infers datetimes as us instead of ns
expected["datetime"] = expected["datetime"].astype("timestamp[us][pyarrow]")
expected["datetime_with_nat"] = expected["datetime_with_nat"].astype(
@@ -1059,6 +1052,20 @@ def test_read_use_nullable_types_pyarrow_config(self, pa, df_full):
expected=expected,
)
+ def test_read_use_nullable_types_pyarrow_config_index(self, pa):
+ df = pd.DataFrame(
+ {"a": [1, 2]}, index=pd.Index([3, 4], name="test"), dtype="int64[pyarrow]"
+ )
+ expected = df.copy()
+
+ with pd.option_context("mode.dtype_backend", "pyarrow"):
+ check_round_trip(
+ df,
+ engine=pa,
+ read_kwargs={"use_nullable_dtypes": True},
+ expected=expected,
+ )
+
class TestParquetFastParquet(Base):
def test_basic(self, fp, df_full):
| #51766 | https://api.github.com/repos/pandas-dev/pandas/pulls/51840 | 2023-03-08T13:19:51Z | 2023-03-08T17:26:11Z | 2023-03-08T17:26:11Z | 2023-03-08T17:26:17Z |
CLN: Remove pep8speaks | diff --git a/.pep8speaks.yml b/.pep8speaks.yml
deleted file mode 100644
index 5a83727ddf5f8..0000000000000
--- a/.pep8speaks.yml
+++ /dev/null
@@ -1,4 +0,0 @@
-# File : .pep8speaks.yml
-
-scanner:
- diff_only: True # If True, errors caused by only the patch are shown
| Looks like this integration hasn't been working since ~October 2022 https://github.com/OrkoHunter/pep8speaks/issues/189. Also don't think it's entirely necessary as it only covers a small fraction of code checks that are done | https://api.github.com/repos/pandas-dev/pandas/pulls/51838 | 2023-03-08T02:04:12Z | 2023-03-08T04:33:04Z | 2023-03-08T04:33:04Z | 2023-03-08T04:33:09Z |
PERF: ArrowExtensionArray._from_sequence | diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index ce2ac4213d806..f0d6774c833db 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -1,6 +1,5 @@
from __future__ import annotations
-from copy import deepcopy
import operator
import re
from typing import (
@@ -243,7 +242,7 @@ def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = Fal
elif not isinstance(scalars, (pa.Array, pa.ChunkedArray)):
if copy and is_array_like(scalars):
# pa array should not get updated when numpy array is updated
- scalars = deepcopy(scalars)
+ scalars = scalars.copy()
try:
scalars = pa.array(scalars, type=pa_dtype, from_pandas=True)
except pa.ArrowInvalid:
| cc @phofl
per https://github.com/pandas-dev/pandas/pull/51643#issuecomment-1459089269
no whatsnew as this was a recent change | https://api.github.com/repos/pandas-dev/pandas/pulls/51836 | 2023-03-08T01:18:40Z | 2023-03-08T04:34:45Z | 2023-03-08T04:34:45Z | 2023-03-17T22:04:23Z |
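The change swaps `copy.deepcopy` for the array's own `.copy()`: for a flat numpy buffer of scalars, a plain buffer copy is enough to decouple the resulting pyarrow array from later mutation, and it is much cheaper. A sketch of the invariant being preserved (assuming a plain numeric ndarray input):

```python
import numpy as np

scalars = np.array([1, 2, 3])

# ndarray.copy() allocates a fresh buffer, so mutating the original
# no longer affects the copy -- the only guarantee _from_sequence
# needs before handing the data to pyarrow.
copied = scalars.copy()
scalars[0] = 99
assert copied[0] == 1
assert not np.shares_memory(scalars, copied)
```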
STYLE: enable ruff TCH on some file | diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index 3d763ebb7a11f..0fbd587c1971e 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -1,7 +1,6 @@
from __future__ import annotations
import importlib
-import types
from typing import (
TYPE_CHECKING,
Sequence,
@@ -9,7 +8,6 @@
from pandas._config import get_option
-from pandas._typing import IndexLabel
from pandas.util._decorators import (
Appender,
Substitution,
@@ -27,8 +25,12 @@
from pandas.core.base import PandasObject
if TYPE_CHECKING:
+ import types
+
from matplotlib.axes import Axes
+ from pandas._typing import IndexLabel
+
from pandas import DataFrame
diff --git a/pandas/plotting/_matplotlib/boxplot.py b/pandas/plotting/_matplotlib/boxplot.py
index ad054b6065756..ca08f39b852ee 100644
--- a/pandas/plotting/_matplotlib/boxplot.py
+++ b/pandas/plotting/_matplotlib/boxplot.py
@@ -11,7 +11,6 @@
from matplotlib.artist import setp
import numpy as np
-from pandas._typing import MatplotlibColor
from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import is_dict_like
@@ -37,6 +36,8 @@
from matplotlib.axes import Axes
from matplotlib.lines import Line2D
+ from pandas._typing import MatplotlibColor
+
class BoxPlot(LinePlot):
@property
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index 94e416e2a2e8a..013e36a456e30 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -14,14 +14,8 @@
import warnings
import matplotlib as mpl
-from matplotlib.artist import Artist
import numpy as np
-from pandas._typing import (
- IndexLabel,
- PlottingOrientation,
- npt,
-)
from pandas.errors import AbstractMethodError
from pandas.util._decorators import cache_readonly
from pandas.util._exceptions import find_stack_level
@@ -79,9 +73,16 @@
)
if TYPE_CHECKING:
+ from matplotlib.artist import Artist
from matplotlib.axes import Axes
from matplotlib.axis import Axis
+ from pandas._typing import (
+ IndexLabel,
+ PlottingOrientation,
+ npt,
+ )
+
def _color_in_style(style: str) -> bool:
"""
diff --git a/pandas/plotting/_matplotlib/groupby.py b/pandas/plotting/_matplotlib/groupby.py
index 17a214292608b..94533d55d31df 100644
--- a/pandas/plotting/_matplotlib/groupby.py
+++ b/pandas/plotting/_matplotlib/groupby.py
@@ -1,11 +1,8 @@
from __future__ import annotations
-import numpy as np
+from typing import TYPE_CHECKING
-from pandas._typing import (
- Dict,
- IndexLabel,
-)
+import numpy as np
from pandas.core.dtypes.missing import remove_na_arraylike
@@ -18,6 +15,12 @@
from pandas.plotting._matplotlib.misc import unpack_single_str_list
+if TYPE_CHECKING:
+ from pandas._typing import (
+ Dict,
+ IndexLabel,
+ )
+
def create_iter_data_given_by(
data: DataFrame, kind: str = "hist"
diff --git a/pandas/plotting/_matplotlib/hist.py b/pandas/plotting/_matplotlib/hist.py
index bc8e6ed753d99..710c20db0526e 100644
--- a/pandas/plotting/_matplotlib/hist.py
+++ b/pandas/plotting/_matplotlib/hist.py
@@ -7,8 +7,6 @@
import numpy as np
-from pandas._typing import PlottingOrientation
-
from pandas.core.dtypes.common import (
is_integer,
is_list_like,
@@ -42,6 +40,8 @@
if TYPE_CHECKING:
from matplotlib.axes import Axes
+ from pandas._typing import PlottingOrientation
+
from pandas import DataFrame
diff --git a/pandas/plotting/_matplotlib/timeseries.py b/pandas/plotting/_matplotlib/timeseries.py
index ab051ee58da70..15af2dc6aa7bd 100644
--- a/pandas/plotting/_matplotlib/timeseries.py
+++ b/pandas/plotting/_matplotlib/timeseries.py
@@ -2,7 +2,6 @@
from __future__ import annotations
-from datetime import timedelta
import functools
from typing import (
TYPE_CHECKING,
@@ -37,6 +36,8 @@
)
if TYPE_CHECKING:
+ from datetime import timedelta
+
from matplotlib.axes import Axes
from pandas import (
diff --git a/pandas/tests/extension/array_with_attr/array.py b/pandas/tests/extension/array_with_attr/array.py
index d9327ca9f2f3f..4e40b6d0a714f 100644
--- a/pandas/tests/extension/array_with_attr/array.py
+++ b/pandas/tests/extension/array_with_attr/array.py
@@ -5,16 +5,18 @@
from __future__ import annotations
import numbers
+from typing import TYPE_CHECKING
import numpy as np
-from pandas._typing import type_t
-
from pandas.core.dtypes.base import ExtensionDtype
import pandas as pd
from pandas.core.arrays import ExtensionArray
+if TYPE_CHECKING:
+ from pandas._typing import type_t
+
class FloatAttrDtype(ExtensionDtype):
type = float
diff --git a/pandas/tests/extension/date/array.py b/pandas/tests/extension/date/array.py
index eca935cdc9128..08d7e0de82ba8 100644
--- a/pandas/tests/extension/date/array.py
+++ b/pandas/tests/extension/date/array.py
@@ -1,20 +1,15 @@
+from __future__ import annotations
+
import datetime as dt
from typing import (
+ TYPE_CHECKING,
Any,
- Optional,
Sequence,
- Tuple,
- Union,
cast,
)
import numpy as np
-from pandas._typing import (
- Dtype,
- PositionalIndexer,
-)
-
from pandas.core.dtypes.dtypes import register_extension_dtype
from pandas.api.extensions import (
@@ -23,6 +18,12 @@
)
from pandas.api.types import pandas_dtype
+if TYPE_CHECKING:
+ from pandas._typing import (
+ Dtype,
+ PositionalIndexer,
+ )
+
@register_extension_dtype
class DateDtype(ExtensionDtype):
@@ -61,12 +62,12 @@ def __repr__(self) -> str:
class DateArray(ExtensionArray):
def __init__(
self,
- dates: Union[
- dt.date,
- Sequence[dt.date],
- Tuple[np.ndarray, np.ndarray, np.ndarray],
- np.ndarray,
- ],
+ dates: (
+ dt.date
+ | Sequence[dt.date]
+ | tuple[np.ndarray, np.ndarray, np.ndarray]
+ | np.ndarray
+ ),
) -> None:
if isinstance(dates, dt.date):
self._year = np.array([dates.year])
@@ -146,7 +147,7 @@ def __getitem__(self, item: PositionalIndexer):
else:
raise NotImplementedError("only ints are supported as indexes")
- def __setitem__(self, key: Union[int, slice, np.ndarray], value: Any):
+ def __setitem__(self, key: int | slice | np.ndarray, value: Any):
if not isinstance(key, int):
raise NotImplementedError("only ints are supported as indexes")
@@ -160,7 +161,7 @@ def __setitem__(self, key: Union[int, slice, np.ndarray], value: Any):
def __repr__(self) -> str:
return f"DateArray{list(zip(self._year, self._month, self._day))}"
- def copy(self) -> "DateArray":
+ def copy(self) -> DateArray:
return DateArray((self._year.copy(), self._month.copy(), self._day.copy()))
def isna(self) -> np.ndarray:
@@ -172,7 +173,7 @@ def isna(self) -> np.ndarray:
)
@classmethod
- def _from_sequence(cls, scalars, *, dtype: Optional[Dtype] = None, copy=False):
+ def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy=False):
if isinstance(scalars, dt.date):
pass
elif isinstance(scalars, DateArray):
diff --git a/pandas/tests/extension/decimal/array.py b/pandas/tests/extension/decimal/array.py
index afeca326a9fd4..3e495e9ac6814 100644
--- a/pandas/tests/extension/decimal/array.py
+++ b/pandas/tests/extension/decimal/array.py
@@ -4,11 +4,10 @@
import numbers
import random
import sys
+from typing import TYPE_CHECKING
import numpy as np
-from pandas._typing import type_t
-
from pandas.core.dtypes.base import ExtensionDtype
from pandas.core.dtypes.common import (
is_dtype_equal,
@@ -33,6 +32,9 @@
)
from pandas.core.indexers import check_array_indexer
+if TYPE_CHECKING:
+ from pandas._typing import type_t
+
@register_extension_dtype
class DecimalDtype(ExtensionDtype):
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index f7de31e58b104..9ce60ae5d435c 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -23,14 +23,13 @@
import string
import sys
from typing import (
+ TYPE_CHECKING,
Any,
Mapping,
)
import numpy as np
-from pandas._typing import type_t
-
from pandas.core.dtypes.cast import construct_1d_object_array_from_listlike
from pandas.core.dtypes.common import (
is_bool_dtype,
@@ -45,6 +44,9 @@
)
from pandas.core.indexers import unpack_tuple_and_ellipses
+if TYPE_CHECKING:
+ from pandas._typing import type_t
+
class JSONDtype(ExtensionDtype):
type = abc.Mapping
diff --git a/pandas/tests/extension/list/array.py b/pandas/tests/extension/list/array.py
index f281a0f82e0e7..68ffaed2b98f2 100644
--- a/pandas/tests/extension/list/array.py
+++ b/pandas/tests/extension/list/array.py
@@ -8,11 +8,10 @@
import numbers
import random
import string
+from typing import TYPE_CHECKING
import numpy as np
-from pandas._typing import type_t
-
from pandas.core.dtypes.base import ExtensionDtype
import pandas as pd
@@ -22,6 +21,9 @@
)
from pandas.core.arrays import ExtensionArray
+if TYPE_CHECKING:
+ from pandas._typing import type_t
+
class ListDtype(ExtensionDtype):
type = list
diff --git a/pandas/tests/frame/common.py b/pandas/tests/frame/common.py
index 70115f8679337..fc41d7907a240 100644
--- a/pandas/tests/frame/common.py
+++ b/pandas/tests/frame/common.py
@@ -1,12 +1,15 @@
from __future__ import annotations
-from pandas._typing import AxisInt
+from typing import TYPE_CHECKING
from pandas import (
DataFrame,
concat,
)
+if TYPE_CHECKING:
+ from pandas._typing import AxisInt
+
def _check_mixed_float(df, dtype=None):
# float16 are most likely to be upcasted to float32
diff --git a/pandas/tests/tseries/offsets/test_custom_business_month.py b/pandas/tests/tseries/offsets/test_custom_business_month.py
index 3e2500b8009a2..faf0f9810200b 100644
--- a/pandas/tests/tseries/offsets/test_custom_business_month.py
+++ b/pandas/tests/tseries/offsets/test_custom_business_month.py
@@ -11,6 +11,7 @@
datetime,
timedelta,
)
+from typing import TYPE_CHECKING
import numpy as np
import pytest
@@ -29,11 +30,13 @@
assert_is_on_offset,
assert_offset_equal,
)
-from pandas.tests.tseries.offsets.test_offsets import _ApplyCases
from pandas.tseries import offsets
from pandas.tseries.holiday import USFederalHolidayCalendar
+if TYPE_CHECKING:
+ from pandas.tests.tseries.offsets.test_offsets import _ApplyCases
+
@pytest.fixture
def dt():
diff --git a/pandas/tseries/__init__.py b/pandas/tseries/__init__.py
index dd4ce02b19453..e361726dc6f80 100644
--- a/pandas/tseries/__init__.py
+++ b/pandas/tseries/__init__.py
@@ -1,3 +1,4 @@
+# ruff: noqa: TCH004
from typing import TYPE_CHECKING
if TYPE_CHECKING:
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index e1af8c0b48c2f..9bd88c4c905c3 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -1,5 +1,7 @@
from __future__ import annotations
+from typing import TYPE_CHECKING
+
import numpy as np
from pandas._libs.algos import unique_deltas
@@ -26,7 +28,6 @@
to_offset,
)
from pandas._libs.tslibs.parsing import get_rule_month
-from pandas._typing import npt
from pandas.util._decorators import cache_readonly
from pandas.core.dtypes.common import (
@@ -42,6 +43,8 @@
from pandas.core.algorithms import unique
+if TYPE_CHECKING:
+ from pandas._typing import npt
# ---------------------------------------------------------------------
# Offset names ("time rules") and related functions
diff --git a/pyproject.toml b/pyproject.toml
index ddce400a90ebb..00fcf1eb06d4e 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -300,11 +300,6 @@ exclude = [
"pandas/core/algorithms.py" = ["TCH"]
"pandas/core/ops/*" = ["TCH"]
"pandas/core/sorting.py" = ["TCH"]
-"pandas/core/construction.py" = ["TCH"]
-"pandas/core/missing.py" = ["TCH"]
-"pandas/tseries/*" = ["TCH"]
-"pandas/tests/*" = ["TCH"]
-"pandas/plotting/*" = ["TCH"]
"pandas/util/*" = ["TCH"]
"pandas/_libs/*" = ["TCH"]
# Keep this one enabled
| xref #51740, remaining exclusions still to fix:
```
"pandas/core/construction.py" = ["TCH"]
"pandas/core/missing.py" = ["TCH"]
"pandas/tseries/*" = ["TCH"]
"pandas/tests/*" = ["TCH"]
"pandas/plotting/*" = ["TCH"]
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/51835 | 2023-03-08T01:00:33Z | 2023-03-09T10:15:24Z | 2023-03-09T10:15:24Z | 2023-03-11T00:10:47Z |
CoW: Set copy=False in internal usages of Series/DataFrame constructors | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 19e3eed2d3444..6d0a522becddc 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1603,16 +1603,21 @@ def dot(self, other: AnyArrayLike | DataFrame) -> DataFrame | Series:
if isinstance(other, DataFrame):
return self._constructor(
- np.dot(lvals, rvals), index=left.index, columns=other.columns
+ np.dot(lvals, rvals),
+ index=left.index,
+ columns=other.columns,
+ copy=False,
)
elif isinstance(other, Series):
- return self._constructor_sliced(np.dot(lvals, rvals), index=left.index)
+ return self._constructor_sliced(
+ np.dot(lvals, rvals), index=left.index, copy=False
+ )
elif isinstance(rvals, (np.ndarray, Index)):
result = np.dot(lvals, rvals)
if result.ndim == 2:
- return self._constructor(result, index=left.index)
+ return self._constructor(result, index=left.index, copy=False)
else:
- return self._constructor_sliced(result, index=left.index)
+ return self._constructor_sliced(result, index=left.index, copy=False)
else: # pragma: no cover
raise TypeError(f"unsupported type: {type(other)}")
@@ -3610,10 +3615,15 @@ def transpose(self, *args, copy: bool = False) -> DataFrame:
else:
new_arr = self.values.T
- if copy:
+ if copy and not using_copy_on_write():
new_arr = new_arr.copy()
result = self._constructor(
- new_arr, index=self.columns, columns=self.index, dtype=new_arr.dtype
+ new_arr,
+ index=self.columns,
+ columns=self.index,
+ dtype=new_arr.dtype,
+ # We already made a copy (more than one block)
+ copy=False,
)
return result.__finalize__(self, method="transpose")
@@ -3839,7 +3849,7 @@ def _getitem_multilevel(self, key):
else:
new_values = self._values[:, loc]
result = self._constructor(
- new_values, index=self.index, columns=result_columns
+ new_values, index=self.index, columns=result_columns, copy=False
)
if using_copy_on_write() and isinstance(loc, slice):
result._mgr.add_references(self._mgr) # type: ignore[arg-type]
@@ -4079,7 +4089,7 @@ def _setitem_frame(self, key, value):
if isinstance(key, np.ndarray):
if key.shape != self.shape:
raise ValueError("Array conditional must be same shape as self")
- key = self._constructor(key, **self._construct_axes_dict())
+ key = self._constructor(key, **self._construct_axes_dict(), copy=False)
if key.size and not all(is_bool_dtype(dtype) for dtype in key.dtypes):
raise TypeError(
@@ -4997,7 +5007,9 @@ def _reindex_multi(
# condition more specific.
indexer = row_indexer, col_indexer
new_values = take_2d_multi(self.values, indexer, fill_value=fill_value)
- return self._constructor(new_values, index=new_index, columns=new_columns)
+ return self._constructor(
+ new_values, index=new_index, columns=new_columns, copy=False
+ )
else:
return self._reindex_with_indexers(
{0: [new_index, row_indexer], 1: [new_columns, col_indexer]},
@@ -10527,7 +10539,7 @@ def corr(
f"'{method}' was supplied"
)
- result = self._constructor(correl, index=idx, columns=cols)
+ result = self._constructor(correl, index=idx, columns=cols, copy=False)
return result.__finalize__(self, method="corr")
def cov(
@@ -10658,7 +10670,7 @@ def cov(
else:
base_cov = libalgos.nancorr(mat, cov=True, minp=min_periods)
- result = self._constructor(base_cov, index=idx, columns=cols)
+ result = self._constructor(base_cov, index=idx, columns=cols, copy=False)
return result.__finalize__(self, method="cov")
def corrwith(
@@ -10771,7 +10783,9 @@ def c(x):
return nanops.nancorr(x[0], x[1], method=method)
correl = self._constructor_sliced(
- map(c, zip(left.values.T, right.values.T)), index=left.columns
+ map(c, zip(left.values.T, right.values.T)),
+ index=left.columns,
+ copy=False,
)
else:
@@ -10882,7 +10896,7 @@ def count(self, axis: Axis = 0, numeric_only: bool = False):
series_counts = notna(frame).sum(axis=axis)
counts = series_counts._values
result = self._constructor_sliced(
- counts, index=frame._get_agg_axis(axis)
+ counts, index=frame._get_agg_axis(axis), copy=False
)
return result.astype("int64").__finalize__(self, method="count")
@@ -10991,7 +11005,7 @@ def _reduce_axis1(self, name: str, func, skipna: bool) -> Series:
middle = func(arr, axis=0, skipna=skipna)
result = ufunc(result, middle)
- res_ser = self._constructor_sliced(result, index=self.index)
+ res_ser = self._constructor_sliced(result, index=self.index, copy=False)
return res_ser
def nunique(self, axis: Axis = 0, dropna: bool = True) -> Series:
@@ -11673,6 +11687,7 @@ def isin(self, values: Series | DataFrame | Sequence | Mapping) -> DataFrame:
).reshape(self.shape),
self.index,
self.columns,
+ copy=False,
)
return result.__finalize__(self, method="isin")
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c3818a0a425b9..b070f1a9b7d90 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -779,6 +779,8 @@ def swapaxes(self, axis1: Axis, axis2: Axis, copy: bool_t | None = None) -> Self
return self._constructor(
new_values,
*new_axes,
+ # The no-copy case for CoW is handled above
+ copy=False,
).__finalize__(self, method="swapaxes")
@final
@@ -9686,7 +9688,7 @@ def _where(
cond = np.asanyarray(cond)
if cond.shape != self.shape:
raise ValueError("Array conditional must be same shape as self")
- cond = self._constructor(cond, **self._construct_axes_dict())
+ cond = self._constructor(cond, **self._construct_axes_dict(), copy=False)
# make sure we are boolean
fill_value = bool(inplace)
@@ -9767,7 +9769,9 @@ def _where(
# we are the same shape, so create an actual object for alignment
else:
- other = self._constructor(other, **self._construct_axes_dict())
+ other = self._constructor(
+ other, **self._construct_axes_dict(), copy=False
+ )
if axis is None:
axis = 0
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 23d553c6fecc4..75a009447e037 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -843,7 +843,7 @@ def view(self, dtype: Dtype | None = None) -> Series:
# self.array instead of self._values so we piggyback on PandasArray
# implementation
res_values = self.array.view(dtype)
- res_ser = self._constructor(res_values, index=self.index)
+ res_ser = self._constructor(res_values, index=self.index, copy=False)
if isinstance(res_ser._mgr, SingleBlockManager) and using_copy_on_write():
blk = res_ser._mgr._block
blk.refs = cast("BlockValuesRefs", self._references)
@@ -1044,7 +1044,7 @@ def _get_values_tuple(self, key: tuple):
# If key is contained, would have returned by now
indexer, new_index = self.index.get_loc_level(key)
- new_ser = self._constructor(self._values[indexer], index=new_index)
+ new_ser = self._constructor(self._values[indexer], index=new_index, copy=False)
if using_copy_on_write() and isinstance(indexer, slice):
new_ser._mgr.add_references(self._mgr) # type: ignore[arg-type]
return new_ser.__finalize__(self)
@@ -1084,7 +1084,9 @@ def _get_value(self, label, takeable: bool = False):
new_index = mi[loc]
new_index = maybe_droplevels(new_index, label)
- new_ser = self._constructor(new_values, index=new_index, name=self.name)
+ new_ser = self._constructor(
+ new_values, index=new_index, name=self.name, copy=False
+ )
if using_copy_on_write() and isinstance(loc, slice):
new_ser._mgr.add_references(self._mgr) # type: ignore[arg-type]
return new_ser.__finalize__(self)
@@ -1384,7 +1386,7 @@ def repeat(self, repeats: int | Sequence[int], axis: None = None) -> Series:
nv.validate_repeat((), {"axis": axis})
new_index = self.index.repeat(repeats)
new_values = self._values.repeat(repeats)
- return self._constructor(new_values, index=new_index).__finalize__(
+ return self._constructor(new_values, index=new_index, copy=False).__finalize__(
self, method="repeat"
)
@@ -1550,7 +1552,7 @@ def reset_index(
self.index = new_index
else:
return self._constructor(
- self._values.copy(), index=new_index
+ self._values.copy(), index=new_index, copy=False
).__finalize__(self, method="reset_index")
elif inplace:
raise TypeError(
@@ -2072,7 +2074,7 @@ def mode(self, dropna: bool = True) -> Series:
# Ensure index is type stable (should always use int index)
return self._constructor(
- res_values, index=range(len(res_values)), name=self.name
+ res_values, index=range(len(res_values)), name=self.name, copy=False
)
def unique(self) -> ArrayLike: # pylint: disable=useless-parent-delegation
@@ -2336,7 +2338,7 @@ def duplicated(self, keep: DropKeep = "first") -> Series:
dtype: bool
"""
res = self._duplicated(keep=keep)
- result = self._constructor(res, index=self.index)
+ result = self._constructor(res, index=self.index, copy=False)
return result.__finalize__(self, method="duplicated")
def idxmin(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Hashable:
@@ -2514,7 +2516,7 @@ def round(self, decimals: int = 0, *args, **kwargs) -> Series:
"""
nv.validate_round(args, kwargs)
result = self._values.round(decimals)
- result = self._constructor(result, index=self.index).__finalize__(
+ result = self._constructor(result, index=self.index, copy=False).__finalize__(
self, method="round"
)
@@ -2820,7 +2822,7 @@ def diff(self, periods: int = 1) -> Series:
{examples}
"""
result = algorithms.diff(self._values, periods)
- return self._constructor(result, index=self.index).__finalize__(
+ return self._constructor(result, index=self.index, copy=False).__finalize__(
self, method="diff"
)
@@ -2938,7 +2940,7 @@ def dot(self, other: AnyArrayLike) -> Series | np.ndarray:
if isinstance(other, ABCDataFrame):
return self._constructor(
- np.dot(lvals, rvals), index=other.columns
+ np.dot(lvals, rvals), index=other.columns, copy=False
).__finalize__(self, method="dot")
elif isinstance(other, Series):
return np.dot(lvals, rvals)
@@ -3167,7 +3169,7 @@ def combine(
# try_float=False is to match agg_series
npvalues = lib.maybe_convert_objects(new_values, try_float=False)
res_values = maybe_cast_pointwise_result(npvalues, self.dtype, same_dtype=False)
- return self._constructor(res_values, index=new_index, name=new_name)
+ return self._constructor(res_values, index=new_index, name=new_name, copy=False)
def combine_first(self, other) -> Series:
"""
@@ -3528,7 +3530,7 @@ def sort_values(
return self.copy(deep=None)
result = self._constructor(
- self._values[sorted_index], index=self.index[sorted_index]
+ self._values[sorted_index], index=self.index[sorted_index], copy=False
)
if ignore_index:
@@ -3776,7 +3778,9 @@ def argsort(
else:
result = np.argsort(values, kind=kind)
- res = self._constructor(result, index=self.index, name=self.name, dtype=np.intp)
+ res = self._constructor(
+ result, index=self.index, name=self.name, dtype=np.intp, copy=False
+ )
return res.__finalize__(self, method="argsort")
def nlargest(
@@ -4151,7 +4155,7 @@ def explode(self, ignore_index: bool = False) -> Series:
else:
index = self.index.repeat(counts)
- return self._constructor(values, index=index, name=self.name)
+ return self._constructor(values, index=index, name=self.name, copy=False)
def unstack(self, level: IndexLabel = -1, fill_value: Hashable = None) -> DataFrame:
"""
@@ -4282,7 +4286,7 @@ def map(
dtype: object
"""
new_values = self._map_values(arg, na_action=na_action)
- return self._constructor(new_values, index=self.index).__finalize__(
+ return self._constructor(new_values, index=self.index, copy=False).__finalize__(
self, method="map"
)
@@ -4576,7 +4580,7 @@ def _reindex_indexer(
new_values = algorithms.take_nd(
self._values, indexer, allow_fill=True, fill_value=None
)
- return self._constructor(new_values, index=new_index)
+ return self._constructor(new_values, index=new_index, copy=False)
def _needs_reindex_multi(self, axes, method, level) -> bool:
"""
@@ -5291,7 +5295,7 @@ def isin(self, values) -> Series:
dtype: bool
"""
result = algorithms.isin(self._values, values)
- return self._constructor(result, index=self.index).__finalize__(
+ return self._constructor(result, index=self.index, copy=False).__finalize__(
self, method="isin"
)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
cc @jbrockmendel This keeps copy=False internally where necessary, to avoid unnecessary copies as a side-effect of https://github.com/pandas-dev/pandas/pull/51731 (by default copying numpy arrays in the DataFrame constructor) | https://api.github.com/repos/pandas-dev/pandas/pulls/51834 | 2023-03-08T00:31:21Z | 2023-03-16T07:05:19Z | 2023-03-16T07:05:19Z | 2023-03-17T09:36:54Z |
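A minimal sketch (not part of the diff) of the constructor behavior this change works around: with an explicit `copy=False` the DataFrame wraps the existing NumPy buffer, while a copy gives an independent array. `np.shares_memory` makes the difference observable:

```python
import numpy as np
import pandas as pd

arr = np.array([[1.0, 2.0], [3.0, 4.0]])

# copy=False: the frame wraps the existing buffer (the internal usage this
# PR makes explicit, now that the constructor copies ndarrays by default).
df = pd.DataFrame(arr, copy=False)
assert np.shares_memory(arr, df.to_numpy())

# copy=True: an independent array; mutating it cannot leak back into arr.
df_copied = pd.DataFrame(arr, copy=True)
assert not np.shares_memory(arr, df_copied.to_numpy())
```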
PERF: avoid non-cython in testing.pyx | diff --git a/pandas/_libs/testing.pyx b/pandas/_libs/testing.pyx
index 5e1f9a2f723fb..ca19670f37710 100644
--- a/pandas/_libs/testing.pyx
+++ b/pandas/_libs/testing.pyx
@@ -7,17 +7,14 @@ from numpy cimport import_array
import_array()
+from pandas._libs.missing cimport checknull
from pandas._libs.util cimport (
is_array,
is_complex_object,
is_real_number_object,
)
-from pandas.core.dtypes.common import is_dtype_equal
-from pandas.core.dtypes.missing import (
- array_equivalent,
- isna,
-)
+from pandas.core.dtypes.missing import array_equivalent
cdef bint isiterable(obj):
@@ -133,7 +130,7 @@ cpdef assert_almost_equal(a, b,
raise_assert_detail(
obj, f"{obj} shapes are different", a.shape, b.shape)
- if check_dtype and not is_dtype_equal(a.dtype, b.dtype):
+ if check_dtype and a.dtype != b.dtype:
from pandas._testing import assert_attr_equal
assert_attr_equal("dtype", a, b, obj=obj)
@@ -181,12 +178,12 @@ cpdef assert_almost_equal(a, b,
# classes can't be the same, to raise error
assert_class_equal(a, b, obj=obj)
- if isna(a) and isna(b):
+ if checknull(a) and checknull(b):
# TODO: Should require same-dtype NA?
# nan / None comparison
return True
- if isna(a) and not isna(b) or not isna(a) and isna(b):
+ if (checknull(a) and not checknull(b)) or (not checknull(a) and checknull(b)):
# boolean value of pd.NA is ambiguous
raise AssertionError(f"{a} != {b}")
@@ -195,10 +192,6 @@ cpdef assert_almost_equal(a, b,
return True
if is_real_number_object(a) and is_real_number_object(b):
- if array_equivalent(a, b, strict_nan=True):
- # inf comparison
- return True
-
fa, fb = a, b
if not math.isclose(fa, fb, rel_tol=rtol, abs_tol=atol):
@@ -207,10 +200,6 @@ cpdef assert_almost_equal(a, b,
return True
if is_complex_object(a) and is_complex_object(b):
- if array_equivalent(a, b, strict_nan=True):
- # inf comparison
- return True
-
if not cmath.isclose(a, b, rel_tol=rtol, abs_tol=atol):
assert False, (f"expected {b:.5f} but got {a:.5f}, "
f"with rtol={rtol}, atol={atol}")
diff --git a/pandas/tests/util/test_assert_almost_equal.py b/pandas/tests/util/test_assert_almost_equal.py
index 987af3ee1e78e..0167e105efa23 100644
--- a/pandas/tests/util/test_assert_almost_equal.py
+++ b/pandas/tests/util/test_assert_almost_equal.py
@@ -45,7 +45,7 @@ def _assert_not_almost_equal(a, b, **kwargs):
try:
tm.assert_almost_equal(a, b, **kwargs)
msg = f"{a} and {b} were approximately equal when they shouldn't have been"
- pytest.fail(msg=msg)
+ pytest.fail(reason=msg)
except AssertionError:
pass
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/51833 | 2023-03-08T00:22:47Z | 2023-03-08T20:07:01Z | 2023-03-08T20:07:01Z | 2023-03-08T20:19:06Z |
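One note on the diff above: dropping the `array_equivalent` pre-check before the `isclose` calls is safe because both `math.isclose` and `cmath.isclose` already treat equal infinities as close. A quick stdlib sketch:

```python
import cmath
import math

# Equal infinities short-circuit to "close"; no separate inf check is needed.
assert math.isclose(float("inf"), float("inf"))
assert cmath.isclose(complex(float("inf"), 0.0), complex(float("inf"), 0.0))

# Infinities of opposite sign, or inf vs. a finite value, are not close.
assert not math.isclose(float("inf"), float("-inf"))
assert not math.isclose(float("inf"), 1e308)
```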
BUG / CoW: Series.view not respecting CoW | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 15e3d66ecc551..05957f4da3f0d 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -221,6 +221,8 @@ Copy-on-Write improvements
- Arithmetic operations that can be inplace, e.g. ``ser *= 2`` will now respect the
Copy-on-Write mechanism.
+- :meth:`Series.view` will now respect the Copy-on-Write mechanism.
+
Copy-on-Write can be enabled through one of
.. code-block:: python
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 95ee3f1af58f1..43674acde7dc2 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -837,6 +837,10 @@ def view(self, dtype: Dtype | None = None) -> Series:
# implementation
res_values = self.array.view(dtype)
res_ser = self._constructor(res_values, index=self.index)
+ if isinstance(res_ser._mgr, SingleBlockManager) and using_copy_on_write():
+ blk = res_ser._mgr._block
+ blk.refs = cast("BlockValuesRefs", self._references)
+ blk.refs.add_reference(blk) # type: ignore[arg-type]
return res_ser.__finalize__(self, method="view")
# ----------------------------------------------------------------------
diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index b30f8ab4c7b9c..482a13a224340 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -1597,3 +1597,21 @@ def test_transpose_ea_single_column(using_copy_on_write):
result = df.T
assert not np.shares_memory(get_array(df, "a"), get_array(result, 0))
+
+
+def test_series_view(using_copy_on_write):
+ ser = Series([1, 2, 3])
+ ser_orig = ser.copy()
+
+ ser2 = ser.view()
+ assert np.shares_memory(get_array(ser), get_array(ser2))
+ if using_copy_on_write:
+ assert not ser2._mgr._has_no_reference(0)
+
+ ser2.iloc[0] = 100
+
+ if using_copy_on_write:
+ tm.assert_series_equal(ser_orig, ser)
+ else:
+ expected = Series([100, 2, 3])
+ tm.assert_series_equal(ser, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This isn't a particularly good solution, but ties into the problem we will have to solve for setitem with a Series anyway so should be good for now. | https://api.github.com/repos/pandas-dev/pandas/pulls/51832 | 2023-03-08T00:10:57Z | 2023-03-13T19:17:33Z | 2023-03-13T19:17:33Z | 2023-03-13T22:37:56Z |
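For context, a view shares its parent's buffer, which is exactly what the added reference tracking has to capture. A plain NumPy sketch of the underlying mechanics (nothing pandas-specific is assumed):

```python
import numpy as np

a = np.array([1, 2, 3])
b = a.view()  # no data is copied; b is a window onto a's buffer

assert np.shares_memory(a, b)

# Writing through the view mutates the parent, which is the situation
# Copy-on-Write reference tracking exists to detect.
b[0] = 100
assert a[0] == 100
```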
ERR: Raise ValueError when non-default index is given for orc format | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 7e8403c94ceef..933709ce2cde8 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -183,7 +183,7 @@ MultiIndex
I/O
^^^
--
+- :meth:`DataFrame.to_orc` now raising ``ValueError`` when non-default :class:`Index` is given (:issue:`51828`)
-
Period
diff --git a/pandas/io/orc.py b/pandas/io/orc.py
index b1bb4cef73c33..72423473e019f 100644
--- a/pandas/io/orc.py
+++ b/pandas/io/orc.py
@@ -21,6 +21,7 @@
)
import pandas as pd
+from pandas.core.indexes.api import default_index
from pandas.io.common import (
get_handle,
@@ -190,6 +191,21 @@ def to_orc(
if engine_kwargs is None:
engine_kwargs = {}
+ # validate index
+ # --------------
+
+ # validate that we have only a default index
+ # raise on anything else as we don't serialize the index
+
+ if not df.index.equals(default_index(len(df))):
+ raise ValueError(
+ "orc does not support serializing a non-default index for the index; "
+ "you can .reset_index() to make the index into column(s)"
+ )
+
+ if df.index.name is not None:
+ raise ValueError("orc does not serialize index meta-data on a default index")
+
# If unsupported dtypes are found raise NotImplementedError
# In Pyarrow 8.0.0 this check will no longer be needed
if pa_version_under8p0:
diff --git a/pandas/tests/io/test_orc.py b/pandas/tests/io/test_orc.py
index 35df047915255..dccdfdc897dc1 100644
--- a/pandas/tests/io/test_orc.py
+++ b/pandas/tests/io/test_orc.py
@@ -391,3 +391,21 @@ def test_orc_uri_path():
uri = pathlib.Path(path).as_uri()
result = read_orc(uri)
tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "index",
+ [
+ pd.RangeIndex(start=2, stop=5, step=1),
+ pd.RangeIndex(start=0, stop=3, step=1, name="non-default"),
+ pd.Index([1, 2, 3]),
+ ],
+)
+def test_to_orc_non_default_index(index):
+ df = pd.DataFrame({"a": [1, 2, 3]}, index=index)
+ msg = (
+ "orc does not support serializing a non-default index|"
+ "orc does not serialize index meta-data"
+ )
+ with pytest.raises(ValueError, match=msg):
+ df.to_orc()
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
orc can't handle indexes right now (they are dropped), so we should raise like we did in feather | https://api.github.com/repos/pandas-dev/pandas/pulls/51828 | 2023-03-07T22:19:15Z | 2023-03-14T02:31:43Z | 2023-03-14T02:31:43Z | 2023-03-14T10:51:11Z |
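A sketch of the check and the workaround the new error message suggests; calling `to_orc` itself would additionally require pyarrow, so only the index validation logic is shown (assuming `default_index(n)` is equivalent to `RangeIndex(n)`):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]}, index=pd.Index([10, 20, 30]))

# The condition the writer now validates: only a default RangeIndex passes.
assert not df.index.equals(pd.RangeIndex(len(df)))

# .reset_index() moves the index into a regular column, after which the
# frame has a default index again and can be serialized.
df2 = df.reset_index()
assert df2.index.equals(pd.RangeIndex(len(df2)))
assert list(df2.columns) == ["index", "a"]
```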
Enabled ruff TCH on some files | diff --git a/pandas/core/ops/array_ops.py b/pandas/core/ops/array_ops.py
index c0ab72e9d796b..87b3091fca75a 100644
--- a/pandas/core/ops/array_ops.py
+++ b/pandas/core/ops/array_ops.py
@@ -7,7 +7,10 @@
import datetime
from functools import partial
import operator
-from typing import Any
+from typing import (
+ TYPE_CHECKING,
+ Any,
+)
import numpy as np
@@ -19,10 +22,6 @@
ops as libops,
)
from pandas._libs.tslibs import BaseOffset
-from pandas._typing import (
- ArrayLike,
- Shape,
-)
from pandas.core.dtypes.cast import (
construct_1d_object_array_from_listlike,
@@ -54,6 +53,12 @@
from pandas.core.ops.dispatch import should_extension_dispatch
from pandas.core.ops.invalid import invalid_comparison
+if TYPE_CHECKING:
+ from pandas._typing import (
+ ArrayLike,
+ Shape,
+ )
+
# -----------------------------------------------------------------------------
# Masking NA values and fallbacks for operations numpy does not support
diff --git a/pandas/core/ops/common.py b/pandas/core/ops/common.py
index d4ae143372271..01fb9aa17fc48 100644
--- a/pandas/core/ops/common.py
+++ b/pandas/core/ops/common.py
@@ -5,11 +5,13 @@
from functools import wraps
import sys
-from typing import Callable
+from typing import (
+ TYPE_CHECKING,
+ Callable,
+)
from pandas._libs.lib import item_from_zerodim
from pandas._libs.missing import is_matching_na
-from pandas._typing import F
from pandas.core.dtypes.generic import (
ABCDataFrame,
@@ -17,6 +19,9 @@
ABCSeries,
)
+if TYPE_CHECKING:
+ from pandas._typing import F
+
def unpack_zerodim_and_defer(name: str) -> Callable[[F], F]:
"""
diff --git a/pandas/core/ops/dispatch.py b/pandas/core/ops/dispatch.py
index 2f500703ccfb3..a939fdd3d041e 100644
--- a/pandas/core/ops/dispatch.py
+++ b/pandas/core/ops/dispatch.py
@@ -3,12 +3,16 @@
"""
from __future__ import annotations
-from typing import Any
-
-from pandas._typing import ArrayLike
+from typing import (
+ TYPE_CHECKING,
+ Any,
+)
from pandas.core.dtypes.generic import ABCExtensionArray
+if TYPE_CHECKING:
+ from pandas._typing import ArrayLike
+
def should_extension_dispatch(left: ArrayLike, right: Any) -> bool:
"""
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 1430dd22c32db..d6adf01f4e12b 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -20,15 +20,6 @@
lib,
)
from pandas._libs.hashtable import unique_label_indices
-from pandas._typing import (
- AxisInt,
- IndexKeyFunc,
- Level,
- NaPosition,
- Shape,
- SortKind,
- npt,
-)
from pandas.core.dtypes.common import (
ensure_int64,
@@ -44,6 +35,16 @@
from pandas.core.construction import extract_array
if TYPE_CHECKING:
+ from pandas._typing import (
+ AxisInt,
+ IndexKeyFunc,
+ Level,
+ NaPosition,
+ Shape,
+ SortKind,
+ npt,
+ )
+
from pandas import MultiIndex
from pandas.core.arrays import ExtensionArray
from pandas.core.indexes.base import Index
diff --git a/pyproject.toml b/pyproject.toml
index 00fcf1eb06d4e..c5c9cd702a380 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -295,11 +295,6 @@ exclude = [
# TCH to be enabled gradually
"pandas/core/arrays/*" = ["TCH"]
"pandas/core/nanops.py" = ["TCH"]
-"pandas/core/apply.py" = ["TCH"]
-"pandas/core/base.py" = ["TCH"]
-"pandas/core/algorithms.py" = ["TCH"]
-"pandas/core/ops/*" = ["TCH"]
-"pandas/core/sorting.py" = ["TCH"]
"pandas/util/*" = ["TCH"]
"pandas/_libs/*" = ["TCH"]
# Keep this one enabled
| xref #51740: enables the ruff TCH rules for the following files:
- [ ] "pandas/core/apply.py": no change was required
- [ ] "pandas/core/base.py": no change was required
- [ ] "pandas/core/algorithms.py": no change was required
- [ ] "pandas/core/ops/*"
- [ ] "pandas/core/sorting.py"
"No change was required" means that the command
`pre-commit run ruff --all-files`
reported zero errors after simply removing the corresponding exclusion line from `pyproject.toml`, with no code changes needed (this can be verified by doing the same on the main branch). | https://api.github.com/repos/pandas-dev/pandas/pulls/51827 | 2023-03-07T21:38:17Z | 2023-03-09T19:07:31Z | 2023-03-09T19:07:31Z | 2023-03-09T19:07:32Z
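For reference, the pattern the TCH rules enforce: imports used only in annotations move under a `TYPE_CHECKING` guard, which is `True` for static type checkers but `False` at runtime, so the guarded import never executes when the program runs. A generic sketch (using a stdlib module rather than `pandas._typing`):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by type checkers (mypy, pyright); never at runtime.
    from decimal import Decimal


def double(x: "Decimal") -> "Decimal":
    # Quoted annotations mean Decimal need not exist at runtime; pandas
    # gets the same effect via `from __future__ import annotations`.
    return x * 2


assert TYPE_CHECKING is False  # the guarded import never ran
assert double(3) == 6
```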
DOC: sort has no effect on filters | diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index 184b77c880238..f3917f539ae3f 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -112,13 +112,23 @@
If the axis is a MultiIndex (hierarchical), group by a particular
level or levels. Do not specify both ``by`` and ``level``.
as_index : bool, default True
- For aggregated output, return object with group labels as the
+ Return object with group labels as the
index. Only relevant for DataFrame input. as_index=False is
- effectively "SQL-style" grouped output.
+ effectively "SQL-style" grouped output. This argument has no effect
+ on filtrations (see the `filtrations in the user guide
+ <https://pandas.pydata.org/docs/dev/user_guide/groupby.html#filtration>`_),
+ such as ``head()``, ``tail()``, ``nth()`` and in transformations
+ (see the `transformations in the user guide
+ <https://pandas.pydata.org/docs/dev/user_guide/groupby.html#transformation>`_).
sort : bool, default True
Sort group keys. Get better performance by turning this off.
Note this does not influence the order of observations within each
group. Groupby preserves the order of rows within each group.
+ This argument has no effect on filtrations (see the `filtrations in the user guide
+ <https://pandas.pydata.org/docs/dev/user_guide/groupby.html#filtration>`_),
+ such as ``head()``, ``tail()``, ``nth()`` and in transformations
+ (see the `transformations in the user guide
+ <https://pandas.pydata.org/docs/dev/user_guide/groupby.html#transformation>`_).
.. versionchanged:: 2.0.0
| - [x] closes #51692
Updated the docs for `DataFrame.groupby` and `Series.groupby` to point out that the `sort` parameter has no effect on filtrations.
| https://api.github.com/repos/pandas-dev/pandas/pulls/51825 | 2023-03-07T21:20:50Z | 2023-03-10T12:17:10Z | 2023-03-10T12:17:09Z | 2023-03-10T13:11:30Z |
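A sketch of the behavior the updated docstrings describe: filtrations such as `head()` return rows in the frame's original order, so `sort` makes no difference there:

```python
import pandas as pd

df = pd.DataFrame({"g": ["b", "a", "b"], "v": [1, 2, 3]})

# head(1) is a filtration: first row of each group, original row order kept.
with_sort = df.groupby("g", sort=True).head(1)
without_sort = df.groupby("g", sort=False).head(1)

assert with_sort.equals(without_sort)
assert list(with_sort["v"]) == [1, 2]
```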
Backport PR #51762 on branch 2.0.x (DOC: Add explanation how to convert arrow table to pandas df) | diff --git a/doc/source/user_guide/pyarrow.rst b/doc/source/user_guide/pyarrow.rst
index 94b2c724cd229..876ca9c164823 100644
--- a/doc/source/user_guide/pyarrow.rst
+++ b/doc/source/user_guide/pyarrow.rst
@@ -76,6 +76,18 @@ the pyarrow array constructor on the :class:`Series` or :class:`Index`.
idx = pd.Index(ser)
pa.array(idx)
+To convert a :external+pyarrow:py:class:`pyarrow.Table` to a :class:`DataFrame`, you can call the
+:external+pyarrow:py:meth:`pyarrow.Table.to_pandas` method with ``types_mapper=pd.ArrowDtype``.
+
+.. ipython:: python
+
+ table = pa.table([pa.array([1, 2, 3], type=pa.int64())], names=["a"])
+
+ df = table.to_pandas(types_mapper=pd.ArrowDtype)
+ df
+ df.dtypes
+
+
Operations
----------
| Backport PR #51762: DOC: Add explanation how to convert arrow table to pandas df | https://api.github.com/repos/pandas-dev/pandas/pulls/51822 | 2023-03-07T18:26:23Z | 2023-03-07T20:18:37Z | 2023-03-07T20:18:37Z | 2023-03-07T20:18:37Z |
DEPR: scalar index for length-1-list level groupby | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 74319b0444659..ee6f8b81e3b5b 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -813,6 +813,7 @@ Deprecations
- Deprecated :meth:`DataFrame.pad` in favor of :meth:`DataFrame.ffill` (:issue:`33396`)
- Deprecated :meth:`DataFrame.backfill` in favor of :meth:`DataFrame.bfill` (:issue:`33396`)
- Deprecated :meth:`~pandas.io.stata.StataReader.close`. Use :class:`~pandas.io.stata.StataReader` as a context manager instead (:issue:`49228`)
+- Deprecated producing a scalar when iterating over a :class:`.DataFrameGroupBy` or a :class:`.SeriesGroupBy` that has been grouped by a ``level`` parameter that is a list of length 1; a tuple of length one will be returned instead (:issue:`51583`)
.. ---------------------------------------------------------------------------
.. _whatsnew_200.prior_deprecations:
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 96f39bb99e544..f0cf9abc5bbaf 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -68,6 +68,7 @@ class providing the base-class of operations.
cache_readonly,
doc,
)
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.cast import ensure_dtype_can_hold_na
from pandas.core.dtypes.common import (
@@ -76,6 +77,7 @@ class providing the base-class of operations.
is_hashable,
is_integer,
is_integer_dtype,
+ is_list_like,
is_numeric_dtype,
is_object_dtype,
is_scalar,
@@ -626,6 +628,7 @@ class BaseGroupBy(PandasObject, SelectionMixin[NDFrameT], GroupByIndexingMixin):
axis: AxisInt
grouper: ops.BaseGrouper
keys: _KeysArgType | None = None
+ level: IndexLabel | None = None
group_keys: bool | lib.NoDefault
@final
@@ -810,7 +813,19 @@ def __iter__(self) -> Iterator[tuple[Hashable, NDFrameT]]:
for each group
"""
keys = self.keys
+ level = self.level
result = self.grouper.get_iterator(self._selected_obj, axis=self.axis)
+ # error: Argument 1 to "len" has incompatible type "Hashable"; expected "Sized"
+ if is_list_like(level) and len(level) == 1: # type: ignore[arg-type]
+ # GH 51583
+ warnings.warn(
+ "Creating a Groupby object with a length-1 list-like "
+ "level parameter will yield indexes as tuples in a future version. "
+ "To keep indexes as scalars, create Groupby objects with "
+ "a scalar level parameter instead.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
if isinstance(keys, list) and len(keys) == 1:
# GH#42795 - when keys is a list, return tuples even when length is 1
result = (((key,), group) for key, group in result)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index ea4bb42fb7ee1..2441be4528c99 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -2709,6 +2709,24 @@ def test_single_element_list_grouping():
assert result == expected
+@pytest.mark.parametrize(
+ "level_arg, multiindex", [([0], False), ((0,), False), ([0], True), ((0,), True)]
+)
+def test_single_element_listlike_level_grouping_deprecation(level_arg, multiindex):
+ # GH 51583
+ df = DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]}, index=["x", "y"])
+ if multiindex:
+ df = df.set_index(["a", "b"])
+ depr_msg = (
+ "Creating a Groupby object with a length-1 list-like "
+ "level parameter will yield indexes as tuples in a future version. "
+ "To keep indexes as scalars, create Groupby objects with "
+ "a scalar level parameter instead."
+ )
+ with tm.assert_produces_warning(FutureWarning, match=depr_msg):
+ [key for key, _ in df.groupby(level=level_arg)]
+
+
@pytest.mark.parametrize("func", ["sum", "cumsum", "cumprod", "prod"])
def test_groupby_avoid_casting_to_float(func):
# GH#37493
| - [x] closes #51583
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions. (nothing new here)
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Previously, iterating over a GroupBy object that had been initialized with a length-1 list `level` parameter (e.g. `level=[0]`) would yield scalar keys rather than length-1 tuples (e.g. `0` instead of `(0,)`). This differs from the behavior of the `by` parameter, which motivates the change.
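The `by` behavior this deprecation aligns `level` with can be sketched directly (assuming pandas >= 2.0, where GH#42795 is in effect):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})

# A scalar grouping key yields scalar group labels on iteration...
scalar_keys = [key for key, _ in df.groupby("a")]

# ...while a length-1 list of keys yields length-1 tuples (GH#42795).
tuple_keys = [key for key, _ in df.groupby(["a"])]

print(scalar_keys)  # [1, 2]
print(tuple_keys)   # [(1,), (2,)]
```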
add concat function dtype test | diff --git a/pandas/tests/reshape/concat/test_datetimes.py b/pandas/tests/reshape/concat/test_datetimes.py
index f16358813488e..43c6bb03b6a9a 100644
--- a/pandas/tests/reshape/concat/test_datetimes.py
+++ b/pandas/tests/reshape/concat/test_datetimes.py
@@ -538,3 +538,40 @@ def test_concat_multiindex_datetime_nat():
{"a": [1.0, np.nan], "b": 2}, MultiIndex.from_tuples([(1, pd.NaT), (2, pd.NaT)])
)
tm.assert_frame_equal(result, expected)
+
+
+def test_concat_float_datetime64(using_array_manager):
+ # GH#32934
+ df_time = DataFrame({"A": pd.array(["2000"], dtype="datetime64[ns]")})
+ df_float = DataFrame({"A": pd.array([1.0], dtype="float64")})
+
+ expected = DataFrame(
+ {
+ "A": [
+ pd.array(["2000"], dtype="datetime64[ns]")[0],
+ pd.array([1.0], dtype="float64")[0],
+ ]
+ },
+ index=[0, 0],
+ )
+ result = concat([df_time, df_float])
+ tm.assert_frame_equal(result, expected)
+
+ expected = DataFrame({"A": pd.array([], dtype="object")})
+ result = concat([df_time.iloc[:0], df_float.iloc[:0]])
+ tm.assert_frame_equal(result, expected)
+
+ expected = DataFrame({"A": pd.array([1.0], dtype="object")})
+ result = concat([df_time.iloc[:0], df_float])
+ tm.assert_frame_equal(result, expected)
+
+ if not using_array_manager:
+ expected = DataFrame({"A": pd.array(["2000"], dtype="datetime64[ns]")})
+ result = concat([df_time, df_float.iloc[:0]])
+ tm.assert_frame_equal(result, expected)
+ else:
+ expected = DataFrame({"A": pd.array(["2000"], dtype="datetime64[ns]")}).astype(
+ {"A": "object"}
+ )
+ result = concat([df_time, df_float.iloc[:0]])
+ tm.assert_frame_equal(result, expected)
| - [ ] closes #32934
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
The old PR was closed, so I created a new one to close #32934.
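The behavior the test pins down can be sketched outside the test suite: concatenating a datetime64 column with a float64 column falls back to object dtype.

```python
import pandas as pd

df_time = pd.DataFrame({"A": pd.array(["2000"], dtype="datetime64[ns]")})
df_float = pd.DataFrame({"A": pd.array([1.0], dtype="float64")})

# No common dtype exists for datetime64[ns] and float64, so concat
# upcasts the result column to object, keeping both values intact.
result = pd.concat([df_time, df_float])
print(result["A"].dtype)  # object
print(len(result))        # 2
```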
REF: define arithmetic methods non-dynamically | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index dd48da9ab6c16..be6c8493963ea 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1349,7 +1349,7 @@ def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
# for binary ops, use our custom dunder methods
- result = ops.maybe_dispatch_ufunc_to_dunder_op(
+ result = arraylike.maybe_dispatch_ufunc_to_dunder_op(
self, ufunc, method, *inputs, **kwargs
)
if result is not NotImplemented:
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 9b9cb3e29810d..0461b0f528878 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -525,7 +525,7 @@ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
return NotImplemented
# for binary ops, use our custom dunder methods
- result = ops.maybe_dispatch_ufunc_to_dunder_op(
+ result = arraylike.maybe_dispatch_ufunc_to_dunder_op(
self, ufunc, method, *inputs, **kwargs
)
if result is not NotImplemented:
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 216dbede39a6a..4effe97f2f04f 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -142,7 +142,7 @@ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
# in PandasArray, since pandas' ExtensionArrays are 1-d.
out = kwargs.get("out", ())
- result = ops.maybe_dispatch_ufunc_to_dunder_op(
+ result = arraylike.maybe_dispatch_ufunc_to_dunder_op(
self, ufunc, method, *inputs, **kwargs
)
if result is not NotImplemented:
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index fcebd17ace2d3..78153890745d7 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -78,10 +78,7 @@
notna,
)
-from pandas.core import (
- arraylike,
- ops,
-)
+from pandas.core import arraylike
import pandas.core.algorithms as algos
from pandas.core.arraylike import OpsMixin
from pandas.core.arrays import ExtensionArray
@@ -1643,7 +1640,7 @@ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
return NotImplemented
# for binary ops, use our custom dunder methods
- result = ops.maybe_dispatch_ufunc_to_dunder_op(
+ result = arraylike.maybe_dispatch_ufunc_to_dunder_op(
self, ufunc, method, *inputs, **kwargs
)
if result is not NotImplemented:
diff --git a/pandas/core/computation/expressions.py b/pandas/core/computation/expressions.py
index 10b0670a78d6f..6219cac4aeb16 100644
--- a/pandas/core/computation/expressions.py
+++ b/pandas/core/computation/expressions.py
@@ -17,8 +17,8 @@
from pandas.util._exceptions import find_stack_level
+from pandas.core import roperator
from pandas.core.computation.check import NUMEXPR_INSTALLED
-from pandas.core.ops import roperator
if NUMEXPR_INSTALLED:
import numexpr as ne
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 965b93f24121a..7db8c48b467a6 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -15,6 +15,7 @@
import functools
from io import StringIO
import itertools
+import operator
import sys
from textwrap import dedent
from typing import (
@@ -88,6 +89,7 @@
from pandas.core.dtypes.common import (
infer_dtype_from_object,
is_1d_only_ea_dtype,
+ is_array_like,
is_bool_dtype,
is_dataclass,
is_dict_like,
@@ -116,6 +118,7 @@
common as com,
nanops,
ops,
+ roperator,
)
from pandas.core.accessor import CachedAccessor
from pandas.core.apply import (
@@ -7459,20 +7462,20 @@ class diet
def _cmp_method(self, other, op):
axis: Literal[1] = 1 # only relevant for Series other case
- self, other = ops.align_method_FRAME(self, other, axis, flex=False, level=None)
+ self, other = self._align_for_op(other, axis, flex=False, level=None)
# See GH#4537 for discussion of scalar op behavior
new_data = self._dispatch_frame_op(other, op, axis=axis)
return self._construct_result(new_data)
def _arith_method(self, other, op):
- if ops.should_reindex_frame_op(self, other, op, 1, None, None):
- return ops.frame_arith_method_with_reindex(self, other, op)
+ if self._should_reindex_frame_op(other, op, 1, None, None):
+ return self._arith_method_with_reindex(other, op)
axis: Literal[1] = 1 # only relevant for Series other case
other = ops.maybe_prepare_scalar_for_op(other, (self.shape[axis],))
- self, other = ops.align_method_FRAME(self, other, axis, flex=True, level=None)
+ self, other = self._align_for_op(other, axis, flex=True, level=None)
new_data = self._dispatch_frame_op(other, op, axis=axis)
return self._construct_result(new_data)
@@ -7540,14 +7543,13 @@ def _dispatch_frame_op(self, right, func: Callable, axis: AxisInt | None = None)
]
elif isinstance(right, Series):
- assert right.index.equals(self.index) # Handle other cases later
+ assert right.index.equals(self.index)
right = right._values
with np.errstate(all="ignore"):
arrays = [array_op(left, right) for left in self._iter_column_arrays()]
else:
- # Remaining cases have less-obvious dispatch rules
raise NotImplementedError(right)
return type(self)._from_arrays(
@@ -7574,6 +7576,275 @@ def _arith_op(left, right):
new_data = self._dispatch_frame_op(other, _arith_op)
return new_data
+ def _arith_method_with_reindex(self, right: DataFrame, op) -> DataFrame:
+ """
+ For DataFrame-with-DataFrame operations that require reindexing,
+ operate only on shared columns, then reindex.
+
+ Parameters
+ ----------
+ right : DataFrame
+ op : binary operator
+
+ Returns
+ -------
+ DataFrame
+ """
+ left = self
+
+ # GH#31623, only operate on shared columns
+ cols, lcols, rcols = left.columns.join(
+ right.columns, how="inner", level=None, return_indexers=True
+ )
+
+ new_left = left.iloc[:, lcols]
+ new_right = right.iloc[:, rcols]
+ result = op(new_left, new_right)
+
+ # Do the join on the columns instead of using left._align_for_op
+ # to avoid constructing two potentially large/sparse DataFrames
+ join_columns, _, _ = left.columns.join(
+ right.columns, how="outer", level=None, return_indexers=True
+ )
+
+ if result.columns.has_duplicates:
+ # Avoid reindexing with a duplicate axis.
+ # https://github.com/pandas-dev/pandas/issues/35194
+ indexer, _ = result.columns.get_indexer_non_unique(join_columns)
+ indexer = algorithms.unique1d(indexer)
+ result = result._reindex_with_indexers(
+ {1: [join_columns, indexer]}, allow_dups=True
+ )
+ else:
+ result = result.reindex(join_columns, axis=1)
+
+ return result
+
+ def _should_reindex_frame_op(self, right, op, axis: int, fill_value, level) -> bool:
+ """
+ Check if this is an operation between DataFrames that will need to reindex.
+ """
+ if op is operator.pow or op is roperator.rpow:
+ # GH#32685 pow has special semantics for operating with null values
+ return False
+
+ if not isinstance(right, DataFrame):
+ return False
+
+ if fill_value is None and level is None and axis == 1:
+ # TODO: any other cases we should handle here?
+
+ # Intersection is always unique so we have to check the unique columns
+ left_uniques = self.columns.unique()
+ right_uniques = right.columns.unique()
+ cols = left_uniques.intersection(right_uniques)
+ if len(cols) and not (
+ len(cols) == len(left_uniques) and len(cols) == len(right_uniques)
+ ):
+ # TODO: is there a shortcut available when len(cols) == 0?
+ return True
+
+ return False
+
+ def _align_for_op(
+ self, other, axis, flex: bool | None = False, level: Level = None
+ ):
+ """
+ Convert rhs to meet lhs dims if input is list, tuple or np.ndarray.
+
+ Parameters
+ ----------
+ left : DataFrame
+ right : Any
+ axis : int, str, or None
+ flex : bool or None, default False
+ Whether this is a flex op, in which case we reindex.
+ None indicates not to check for alignment.
+ level : int or level name, default None
+
+ Returns
+ -------
+ left : DataFrame
+ right : Any
+ """
+ left, right = self, other
+
+ def to_series(right):
+ msg = (
+ "Unable to coerce to Series, "
+ "length must be {req_len}: given {given_len}"
+ )
+
+ # pass dtype to avoid doing inference, which would break consistency
+ # with Index/Series ops
+ dtype = None
+ if getattr(right, "dtype", None) == object:
+ # can't pass right.dtype unconditionally as that would break on e.g.
+ # datetime64[h] ndarray
+ dtype = object
+
+ if axis is not None and left._get_axis_number(axis) == 0:
+ if len(left.index) != len(right):
+ raise ValueError(
+ msg.format(req_len=len(left.index), given_len=len(right))
+ )
+ right = left._constructor_sliced(right, index=left.index, dtype=dtype)
+ else:
+ if len(left.columns) != len(right):
+ raise ValueError(
+ msg.format(req_len=len(left.columns), given_len=len(right))
+ )
+ right = left._constructor_sliced(right, index=left.columns, dtype=dtype)
+ return right
+
+ if isinstance(right, np.ndarray):
+ if right.ndim == 1:
+ right = to_series(right)
+
+ elif right.ndim == 2:
+ # We need to pass dtype=right.dtype to retain object dtype
+ # otherwise we lose consistency with Index and array ops
+ dtype = None
+ if right.dtype == object:
+ # can't pass right.dtype unconditionally as that would break on e.g.
+ # datetime64[h] ndarray
+ dtype = object
+
+ if right.shape == left.shape:
+ right = left._constructor(
+ right, index=left.index, columns=left.columns, dtype=dtype
+ )
+
+ elif right.shape[0] == left.shape[0] and right.shape[1] == 1:
+ # Broadcast across columns
+ right = np.broadcast_to(right, left.shape)
+ right = left._constructor(
+ right, index=left.index, columns=left.columns, dtype=dtype
+ )
+
+ elif right.shape[1] == left.shape[1] and right.shape[0] == 1:
+ # Broadcast along rows
+ right = to_series(right[0, :])
+
+ else:
+ raise ValueError(
+ "Unable to coerce to DataFrame, shape "
+ f"must be {left.shape}: given {right.shape}"
+ )
+
+ elif right.ndim > 2:
+ raise ValueError(
+ "Unable to coerce to Series/DataFrame, "
+ f"dimension must be <= 2: {right.shape}"
+ )
+
+ elif is_list_like(right) and not isinstance(right, (Series, DataFrame)):
+ # GH#36702. Raise when attempting arithmetic with list of array-like.
+ if any(is_array_like(el) for el in right):
+ raise ValueError(
+ f"Unable to coerce list of {type(right[0])} to Series/DataFrame"
+ )
+ # GH#17901
+ right = to_series(right)
+
+ if flex is not None and isinstance(right, DataFrame):
+ if not left._indexed_same(right):
+ if flex:
+ left, right = left.align(
+ right, join="outer", level=level, copy=False
+ )
+ else:
+ raise ValueError(
+ "Can only compare identically-labeled (both index and columns) "
+ "DataFrame objects"
+ )
+ elif isinstance(right, Series):
+ # axis=1 is default for DataFrame-with-Series op
+ axis = left._get_axis_number(axis) if axis is not None else 1
+
+ if not flex:
+ if not left.axes[axis].equals(right.index):
+ raise ValueError(
+ "Operands are not aligned. Do "
+ "`left, right = left.align(right, axis=1, copy=False)` "
+ "before operating."
+ )
+
+ left, right = left.align(
+ # error: Argument 1 to "align" of "DataFrame" has incompatible
+ # type "Series"; expected "DataFrame"
+ right, # type: ignore[arg-type]
+ join="outer",
+ axis=axis,
+ level=level,
+ copy=False,
+ )
+ right = left._maybe_align_series_as_frame(right, axis)
+
+ return left, right
+
+ def _maybe_align_series_as_frame(self, series: Series, axis: AxisInt):
+ """
+ If the Series operand is not EA-dtype, we can broadcast to 2D and operate
+ blockwise.
+ """
+ rvalues = series._values
+ if not isinstance(rvalues, np.ndarray):
+ # TODO(EA2D): no need to special-case with 2D EAs
+ if rvalues.dtype in ("datetime64[ns]", "timedelta64[ns]"):
+ # We can losslessly+cheaply cast to ndarray
+ rvalues = np.asarray(rvalues)
+ else:
+ return series
+
+ if axis == 0:
+ rvalues = rvalues.reshape(-1, 1)
+ else:
+ rvalues = rvalues.reshape(1, -1)
+
+ rvalues = np.broadcast_to(rvalues, self.shape)
+ # pass dtype to avoid doing inference
+ return self._constructor(
+ rvalues,
+ index=self.index,
+ columns=self.columns,
+ dtype=rvalues.dtype,
+ )
+
+ def _flex_arith_method(
+ self, other, op, *, axis: Axis = "columns", level=None, fill_value=None
+ ):
+ axis = self._get_axis_number(axis) if axis is not None else 1
+
+ if self._should_reindex_frame_op(other, op, axis, fill_value, level):
+ return self._arith_method_with_reindex(other, op)
+
+ if isinstance(other, Series) and fill_value is not None:
+ # TODO: We could allow this in cases where we end up going
+ # through the DataFrame path
+ raise NotImplementedError(f"fill_value {fill_value} not supported.")
+
+ other = ops.maybe_prepare_scalar_for_op(
+ other,
+ self.shape,
+ )
+ self, other = self._align_for_op(other, axis, flex=True, level=level)
+
+ if isinstance(other, DataFrame):
+ # Another DataFrame
+ new_data = self._combine_frame(other, op, fill_value)
+
+ elif isinstance(other, Series):
+ new_data = self._dispatch_frame_op(other, op, axis=axis)
+ else:
+ # in this case we always have `np.ndim(other) == 0`
+ if fill_value is not None:
+ self = self.fillna(fill_value)
+
+ new_data = self._dispatch_frame_op(other, op)
+
+ return self._construct_result(new_data)
+
def _construct_result(self, result) -> DataFrame:
"""
Wrap the result of an arithmetic, comparison, or logical operation.
@@ -7605,6 +7876,131 @@ def __rdivmod__(self, other) -> tuple[DataFrame, DataFrame]:
mod = other - div * self
return div, mod
+ def _flex_cmp_method(self, other, op, *, axis: Axis = "columns", level=None):
+ axis = self._get_axis_number(axis) if axis is not None else 1
+
+ self, other = self._align_for_op(other, axis, flex=True, level=level)
+
+ new_data = self._dispatch_frame_op(other, op, axis=axis)
+ return self._construct_result(new_data)
+
+ @Appender(ops.make_flex_doc("eq", "dataframe"))
+ def eq(self, other, axis: Axis = "columns", level=None):
+ return self._flex_cmp_method(other, operator.eq, axis=axis, level=level)
+
+ @Appender(ops.make_flex_doc("ne", "dataframe"))
+ def ne(self, other, axis: Axis = "columns", level=None):
+ return self._flex_cmp_method(other, operator.ne, axis=axis, level=level)
+
+ @Appender(ops.make_flex_doc("le", "dataframe"))
+ def le(self, other, axis: Axis = "columns", level=None):
+ return self._flex_cmp_method(other, operator.le, axis=axis, level=level)
+
+ @Appender(ops.make_flex_doc("lt", "dataframe"))
+ def lt(self, other, axis: Axis = "columns", level=None):
+ return self._flex_cmp_method(other, operator.lt, axis=axis, level=level)
+
+ @Appender(ops.make_flex_doc("ge", "dataframe"))
+ def ge(self, other, axis: Axis = "columns", level=None):
+ return self._flex_cmp_method(other, operator.ge, axis=axis, level=level)
+
+ @Appender(ops.make_flex_doc("gt", "dataframe"))
+ def gt(self, other, axis: Axis = "columns", level=None):
+ return self._flex_cmp_method(other, operator.gt, axis=axis, level=level)
+
+ @Appender(ops.make_flex_doc("add", "dataframe"))
+ def add(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, operator.add, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("radd", "dataframe"))
+ def radd(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, roperator.radd, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("sub", "dataframe"))
+ def sub(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, operator.sub, level=level, fill_value=fill_value, axis=axis
+ )
+
+ subtract = sub
+
+ @Appender(ops.make_flex_doc("rsub", "dataframe"))
+ def rsub(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, roperator.rsub, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("mul", "dataframe"))
+ def mul(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, operator.mul, level=level, fill_value=fill_value, axis=axis
+ )
+
+ multiply = mul
+
+ @Appender(ops.make_flex_doc("rmul", "dataframe"))
+ def rmul(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, roperator.rmul, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("truediv", "dataframe"))
+ def truediv(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, operator.truediv, level=level, fill_value=fill_value, axis=axis
+ )
+
+ div = truediv
+ divide = truediv
+
+ @Appender(ops.make_flex_doc("rtruediv", "dataframe"))
+ def rtruediv(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, roperator.rtruediv, level=level, fill_value=fill_value, axis=axis
+ )
+
+ rdiv = rtruediv
+
+ @Appender(ops.make_flex_doc("floordiv", "dataframe"))
+ def floordiv(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, operator.floordiv, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("rfloordiv", "dataframe"))
+ def rfloordiv(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, roperator.rfloordiv, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("mod", "dataframe"))
+ def mod(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, operator.mod, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("rmod", "dataframe"))
+ def rmod(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, roperator.rmod, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("pow", "dataframe"))
+ def pow(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, operator.pow, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("rpow", "dataframe"))
+ def rpow(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ return self._flex_arith_method(
+ other, roperator.rpow, level=level, fill_value=fill_value, axis=axis
+ )
+
# ----------------------------------------------------------------------
# Combination-Related
@@ -11619,8 +12015,6 @@ def mask(
DataFrame._add_numeric_operations()
-ops.add_flex_arithmetic_methods(DataFrame)
-
def _from_nested_dict(data) -> collections.defaultdict:
new_data: collections.defaultdict = collections.defaultdict(dict)
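The reindexing fast path now living on `DataFrame._arith_method_with_reindex` is observable from user code: with partially overlapping columns, only the shared columns are computed and the rest are reindexed in as NaN.

```python
import pandas as pd

df1 = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
df2 = pd.DataFrame({"b": [10, 20], "c": [30, 40]})

# Only the shared column "b" is actually computed (GH#31623); the
# non-shared columns "a" and "c" are reindexed in and filled with NaN.
result = df1 + df2
print(result.columns.tolist())                # ['a', 'b', 'c']
print(result["b"].tolist())                   # [13, 24]
print(result[["a", "c"]].isna().all().all())  # True
```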
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 003e4cc5b8b23..f2e39b5c1d0fc 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -169,7 +169,6 @@
clean_reindex_fill_method,
find_valid_index,
)
-from pandas.core.ops import align_method_FRAME
from pandas.core.reshape.concat import concat
from pandas.core.shared_docs import _shared_docs
from pandas.core.sorting import get_indexer_indexer
@@ -8069,7 +8068,7 @@ def _clip_with_one_bound(self, threshold, method, axis, inplace):
if isinstance(self, ABCSeries):
threshold = self._constructor(threshold, index=self.index)
else:
- threshold = align_method_FRAME(self, threshold, axis, flex=None)[1]
+ threshold = self._align_for_op(threshold, axis, flex=None)[1]
# GH 40420
# Treat missing thresholds as no bounds, not clipping the values
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index f4cb3992d20fc..b2ea8102e2747 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -5,40 +5,11 @@
"""
from __future__ import annotations
-import operator
-from typing import (
- TYPE_CHECKING,
- cast,
-)
-
-import numpy as np
-
-from pandas._libs.ops_dispatch import maybe_dispatch_ufunc_to_dunder_op
-from pandas._typing import (
- Axis,
- AxisInt,
- Level,
-)
-from pandas.util._decorators import Appender
-
-from pandas.core.dtypes.common import (
- is_array_like,
- is_list_like,
-)
-from pandas.core.dtypes.generic import (
- ABCDataFrame,
- ABCSeries,
-)
-from pandas.core.dtypes.missing import isna
-
-from pandas.core import (
- algorithms,
- roperator,
-)
from pandas.core.ops.array_ops import (
arithmetic_op,
comp_method_OBJECT_ARRAY,
comparison_op,
+ fill_binop,
get_array_op,
logical_op,
maybe_prepare_scalar_for_op,
@@ -47,18 +18,13 @@
get_op_result_name,
unpack_zerodim_and_defer,
)
-from pandas.core.ops.docstrings import (
- _flex_comp_doc_FRAME,
- _op_descriptions,
- make_flex_doc,
-)
+from pandas.core.ops.docstrings import make_flex_doc
from pandas.core.ops.invalid import invalid_comparison
from pandas.core.ops.mask_ops import (
kleene_and,
kleene_or,
kleene_xor,
)
-from pandas.core.ops.methods import add_flex_arithmetic_methods
from pandas.core.roperator import (
radd,
rand_,
@@ -74,12 +40,6 @@
rxor,
)
-if TYPE_CHECKING:
- from pandas import (
- DataFrame,
- Series,
- )
-
# -----------------------------------------------------------------------------
# constants
ARITHMETIC_BINOPS: set[str] = {
@@ -105,419 +65,19 @@
COMPARISON_BINOPS: set[str] = {"eq", "ne", "lt", "gt", "le", "ge"}
-# -----------------------------------------------------------------------------
-# Masking NA values and fallbacks for operations numpy does not support
-
-
-def fill_binop(left, right, fill_value):
- """
- If a non-None fill_value is given, replace null entries in left and right
- with this value, but only in positions where _one_ of left/right is null,
- not both.
-
- Parameters
- ----------
- left : array-like
- right : array-like
- fill_value : object
-
- Returns
- -------
- left : array-like
- right : array-like
-
- Notes
- -----
- Makes copies if fill_value is not None and NAs are present.
- """
- if fill_value is not None:
- left_mask = isna(left)
- right_mask = isna(right)
-
- # one but not both
- mask = left_mask ^ right_mask
-
- if left_mask.any():
- # Avoid making a copy if we can
- left = left.copy()
- left[left_mask & mask] = fill_value
-
- if right_mask.any():
- # Avoid making a copy if we can
- right = right.copy()
- right[right_mask & mask] = fill_value
-
- return left, right
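`fill_binop` (now imported from `pandas.core.ops.array_ops`) is what backs the `fill_value` argument of the flex methods; a sketch of its contract — the fill is applied only where exactly one operand is null:

```python
import numpy as np
import pandas as pd

s1 = pd.Series([1.0, np.nan, np.nan])
s2 = pd.Series([np.nan, 2.0, np.nan])

# fill_value replaces the null side only where exactly one of the two
# operands is null; positions where both are null stay NaN.
result = s1.add(s2, fill_value=0)
print(result.iloc[0], result.iloc[1])  # 1.0 2.0
print(pd.isna(result.iloc[2]))         # True
```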
-
-
-# -----------------------------------------------------------------------------
-# Series
-
-
-def align_method_SERIES(left: Series, right, align_asobject: bool = False):
- """align lhs and rhs Series"""
- # ToDo: Different from align_method_FRAME, list, tuple and ndarray
- # are not coerced here
- # because Series has inconsistencies described in #13637
-
- if isinstance(right, ABCSeries):
- # avoid repeated alignment
- if not left.index.equals(right.index):
- if align_asobject:
- # to keep original value's dtype for bool ops
- left = left.astype(object)
- right = right.astype(object)
-
- left, right = left.align(right, copy=False)
-
- return left, right
-
-
-def flex_method_SERIES(op):
- name = op.__name__.strip("_")
- doc = make_flex_doc(name, "series")
-
- @Appender(doc)
- def flex_wrapper(self, other, level=None, fill_value=None, axis: Axis = 0):
- # validate axis
- if axis is not None:
- self._get_axis_number(axis)
-
- res_name = get_op_result_name(self, other)
-
- if isinstance(other, ABCSeries):
- return self._binop(other, op, level=level, fill_value=fill_value)
- elif isinstance(other, (np.ndarray, list, tuple)):
- if len(other) != len(self):
- raise ValueError("Lengths must be equal")
- other = self._constructor(other, self.index)
- result = self._binop(other, op, level=level, fill_value=fill_value)
- result.name = res_name
- return result
- else:
- if fill_value is not None:
- self = self.fillna(fill_value)
-
- return op(self, other)
-
- flex_wrapper.__name__ = name
- return flex_wrapper
-
-
-# -----------------------------------------------------------------------------
-# DataFrame
-
-
-def align_method_FRAME(
- left, right, axis, flex: bool | None = False, level: Level = None
-):
- """
- Convert rhs to meet lhs dims if input is list, tuple or np.ndarray.
-
- Parameters
- ----------
- left : DataFrame
- right : Any
- axis : int, str, or None
- flex : bool or None, default False
- Whether this is a flex op, in which case we reindex.
- None indicates not to check for alignment.
- level : int or level name, default None
-
- Returns
- -------
- left : DataFrame
- right : Any
- """
-
- def to_series(right):
- msg = "Unable to coerce to Series, length must be {req_len}: given {given_len}"
-
- # pass dtype to avoid doing inference, which would break consistency
- # with Index/Series ops
- dtype = None
- if getattr(right, "dtype", None) == object:
- # can't pass right.dtype unconditionally as that would break on e.g.
- # datetime64[h] ndarray
- dtype = object
-
- if axis is not None and left._get_axis_name(axis) == "index":
- if len(left.index) != len(right):
- raise ValueError(
- msg.format(req_len=len(left.index), given_len=len(right))
- )
- right = left._constructor_sliced(right, index=left.index, dtype=dtype)
- else:
- if len(left.columns) != len(right):
- raise ValueError(
- msg.format(req_len=len(left.columns), given_len=len(right))
- )
- right = left._constructor_sliced(right, index=left.columns, dtype=dtype)
- return right
-
- if isinstance(right, np.ndarray):
- if right.ndim == 1:
- right = to_series(right)
-
- elif right.ndim == 2:
- # We need to pass dtype=right.dtype to retain object dtype
- # otherwise we lose consistency with Index and array ops
- dtype = None
- if right.dtype == object:
- # can't pass right.dtype unconditionally as that would break on e.g.
- # datetime64[h] ndarray
- dtype = object
-
- if right.shape == left.shape:
- right = left._constructor(
- right, index=left.index, columns=left.columns, dtype=dtype
- )
-
- elif right.shape[0] == left.shape[0] and right.shape[1] == 1:
- # Broadcast across columns
- right = np.broadcast_to(right, left.shape)
- right = left._constructor(
- right, index=left.index, columns=left.columns, dtype=dtype
- )
-
- elif right.shape[1] == left.shape[1] and right.shape[0] == 1:
- # Broadcast along rows
- right = to_series(right[0, :])
-
- else:
- raise ValueError(
- "Unable to coerce to DataFrame, shape "
- f"must be {left.shape}: given {right.shape}"
- )
-
- elif right.ndim > 2:
- raise ValueError(
- "Unable to coerce to Series/DataFrame, "
- f"dimension must be <= 2: {right.shape}"
- )
-
- elif is_list_like(right) and not isinstance(right, (ABCSeries, ABCDataFrame)):
- # GH 36702. Raise when attempting arithmetic with list of array-like.
- if any(is_array_like(el) for el in right):
- raise ValueError(
- f"Unable to coerce list of {type(right[0])} to Series/DataFrame"
- )
- # GH17901
- right = to_series(right)
-
- if flex is not None and isinstance(right, ABCDataFrame):
- if not left._indexed_same(right):
- if flex:
- left, right = left.align(right, join="outer", level=level, copy=False)
- else:
- raise ValueError(
- "Can only compare identically-labeled (both index and columns) "
- "DataFrame objects"
- )
- elif isinstance(right, ABCSeries):
- # axis=1 is default for DataFrame-with-Series op
- axis = left._get_axis_number(axis) if axis is not None else 1
-
- if not flex:
- if not left.axes[axis].equals(right.index):
- raise ValueError(
- "Operands are not aligned. Do "
- "`left, right = left.align(right, axis=1, copy=False)` "
- "before operating."
- )
-
- left, right = left.align(
- right, join="outer", axis=axis, level=level, copy=False
- )
- right = _maybe_align_series_as_frame(left, right, axis)
-
- return left, right
-
-
-def should_reindex_frame_op(
- left: DataFrame, right, op, axis: int, fill_value, level
-) -> bool:
- """
- Check if this is an operation between DataFrames that will need to reindex.
- """
- assert isinstance(left, ABCDataFrame)
-
- if op is operator.pow or op is roperator.rpow:
- # GH#32685 pow has special semantics for operating with null values
- return False
-
- if not isinstance(right, ABCDataFrame):
- return False
-
- if fill_value is None and level is None and axis == 1:
- # TODO: any other cases we should handle here?
-
- # Intersection is always unique so we have to check the unique columns
- left_uniques = left.columns.unique()
- right_uniques = right.columns.unique()
- cols = left_uniques.intersection(right_uniques)
- if len(cols) and not (
- len(cols) == len(left_uniques) and len(cols) == len(right_uniques)
- ):
- # TODO: is there a shortcut available when len(cols) == 0?
- return True
-
- return False
-
-
-def frame_arith_method_with_reindex(left: DataFrame, right: DataFrame, op) -> DataFrame:
- """
- For DataFrame-with-DataFrame operations that require reindexing,
- operate only on shared columns, then reindex.
-
- Parameters
- ----------
- left : DataFrame
- right : DataFrame
- op : binary operator
-
- Returns
- -------
- DataFrame
- """
- # GH#31623, only operate on shared columns
- cols, lcols, rcols = left.columns.join(
- right.columns, how="inner", level=None, return_indexers=True
- )
-
- new_left = left.iloc[:, lcols]
- new_right = right.iloc[:, rcols]
- result = op(new_left, new_right)
-
- # Do the join on the columns instead of using align_method_FRAME
- # to avoid constructing two potentially large/sparse DataFrames
- join_columns, _, _ = left.columns.join(
- right.columns, how="outer", level=None, return_indexers=True
- )
-
- if result.columns.has_duplicates:
- # Avoid reindexing with a duplicate axis.
- # https://github.com/pandas-dev/pandas/issues/35194
- indexer, _ = result.columns.get_indexer_non_unique(join_columns)
- indexer = algorithms.unique1d(indexer)
- result = result._reindex_with_indexers(
- {1: [join_columns, indexer]}, allow_dups=True
- )
- else:
- result = result.reindex(join_columns, axis=1)
-
- return result
-
-
-def _maybe_align_series_as_frame(frame: DataFrame, series: Series, axis: AxisInt):
- """
- If the Series operand is not EA-dtype, we can broadcast to 2D and operate
- blockwise.
- """
- rvalues = series._values
- if not isinstance(rvalues, np.ndarray):
- # TODO(EA2D): no need to special-case with 2D EAs
- if rvalues.dtype in ("datetime64[ns]", "timedelta64[ns]"):
- # We can losslessly+cheaply cast to ndarray
- rvalues = np.asarray(rvalues)
- else:
- return series
-
- if axis == 0:
- rvalues = rvalues.reshape(-1, 1)
- else:
- rvalues = rvalues.reshape(1, -1)
-
- rvalues = np.broadcast_to(rvalues, frame.shape)
- # pass dtype to avoid doing inference
- return type(frame)(
- rvalues, index=frame.index, columns=frame.columns, dtype=rvalues.dtype
- )
-
-
-def flex_arith_method_FRAME(op):
- op_name = op.__name__.strip("_")
-
- na_op = get_array_op(op)
- doc = make_flex_doc(op_name, "dataframe")
-
- @Appender(doc)
- def f(self, other, axis: Axis = "columns", level=None, fill_value=None):
- axis = self._get_axis_number(axis) if axis is not None else 1
- axis = cast(int, axis)
-
- if should_reindex_frame_op(self, other, op, axis, fill_value, level):
- return frame_arith_method_with_reindex(self, other, op)
-
- if isinstance(other, ABCSeries) and fill_value is not None:
- # TODO: We could allow this in cases where we end up going
- # through the DataFrame path
- raise NotImplementedError(f"fill_value {fill_value} not supported.")
-
- other = maybe_prepare_scalar_for_op(other, self.shape)
- self, other = align_method_FRAME(self, other, axis, flex=True, level=level)
-
- if isinstance(other, ABCDataFrame):
- # Another DataFrame
- new_data = self._combine_frame(other, na_op, fill_value)
-
- elif isinstance(other, ABCSeries):
- new_data = self._dispatch_frame_op(other, op, axis=axis)
- else:
- # in this case we always have `np.ndim(other) == 0`
- if fill_value is not None:
- self = self.fillna(fill_value)
-
- new_data = self._dispatch_frame_op(other, op)
-
- return self._construct_result(new_data)
-
- f.__name__ = op_name
-
- return f
-
-
-def flex_comp_method_FRAME(op):
- op_name = op.__name__.strip("_")
-
- doc = _flex_comp_doc_FRAME.format(
- op_name=op_name, desc=_op_descriptions[op_name]["desc"]
- )
-
- @Appender(doc)
- def f(self, other, axis: Axis = "columns", level=None):
- axis = self._get_axis_number(axis) if axis is not None else 1
-
- self, other = align_method_FRAME(self, other, axis, flex=True, level=level)
-
- new_data = self._dispatch_frame_op(other, op, axis=axis)
- return self._construct_result(new_data)
-
- f.__name__ = op_name
-
- return f
-
-
__all__ = [
- "add_flex_arithmetic_methods",
- "align_method_FRAME",
- "align_method_SERIES",
"ARITHMETIC_BINOPS",
"arithmetic_op",
"COMPARISON_BINOPS",
"comparison_op",
"comp_method_OBJECT_ARRAY",
- "fill_binop",
- "flex_arith_method_FRAME",
- "flex_comp_method_FRAME",
- "flex_method_SERIES",
- "frame_arith_method_with_reindex",
"invalid_comparison",
+ "fill_binop",
"kleene_and",
"kleene_or",
"kleene_xor",
"logical_op",
- "maybe_dispatch_ufunc_to_dunder_op",
+ "make_flex_doc",
"radd",
"rand_",
"rdiv",
@@ -530,6 +90,8 @@ def f(self, other, axis: Axis = "columns", level=None):
"rsub",
"rtruediv",
"rxor",
- "should_reindex_frame_op",
"unpack_zerodim_and_defer",
+ "get_op_result_name",
+ "maybe_prepare_scalar_for_op",
+ "get_array_op",
]
diff --git a/pandas/core/ops/array_ops.py b/pandas/core/ops/array_ops.py
index dfffe77fe1b76..c0ab72e9d796b 100644
--- a/pandas/core/ops/array_ops.py
+++ b/pandas/core/ops/array_ops.py
@@ -54,6 +54,50 @@
from pandas.core.ops.dispatch import should_extension_dispatch
from pandas.core.ops.invalid import invalid_comparison
+# -----------------------------------------------------------------------------
+# Masking NA values and fallbacks for operations numpy does not support
+
+
+def fill_binop(left, right, fill_value):
+ """
+ If a non-None fill_value is given, replace null entries in left and right
+ with this value, but only in positions where _one_ of left/right is null,
+ not both.
+
+ Parameters
+ ----------
+ left : array-like
+ right : array-like
+ fill_value : object
+
+ Returns
+ -------
+ left : array-like
+ right : array-like
+
+ Notes
+ -----
+ Makes copies if fill_value is not None and NAs are present.
+ """
+ if fill_value is not None:
+ left_mask = isna(left)
+ right_mask = isna(right)
+
+ # one but not both
+ mask = left_mask ^ right_mask
+
+ if left_mask.any():
+ # Avoid making a copy if we can
+ left = left.copy()
+ left[left_mask & mask] = fill_value
+
+ if right_mask.any():
+ # Avoid making a copy if we can
+ right = right.copy()
+ right[right_mask & mask] = fill_value
+
+ return left, right
+
def comp_method_OBJECT_ARRAY(op, x, y):
if isinstance(y, list):
diff --git a/pandas/core/ops/docstrings.py b/pandas/core/ops/docstrings.py
index cdf1c120719e9..9a469169151c3 100644
--- a/pandas/core/ops/docstrings.py
+++ b/pandas/core/ops/docstrings.py
@@ -49,13 +49,20 @@ def make_flex_doc(op_name: str, typ: str) -> str:
else:
doc = doc_no_examples
elif typ == "dataframe":
- base_doc = _flex_doc_FRAME
- doc = base_doc.format(
- desc=op_desc["desc"],
- op_name=op_name,
- equiv=equiv,
- reverse=op_desc["reverse"],
- )
+ if op_name in ["eq", "ne", "le", "lt", "ge", "gt"]:
+ base_doc = _flex_comp_doc_FRAME
+ doc = _flex_comp_doc_FRAME.format(
+ op_name=op_name,
+ desc=op_desc["desc"],
+ )
+ else:
+ base_doc = _flex_doc_FRAME
+ doc = base_doc.format(
+ desc=op_desc["desc"],
+ op_name=op_name,
+ equiv=equiv,
+ reverse=op_desc["reverse"],
+ )
else:
raise AssertionError("Invalid typ argument.")
return doc
@@ -169,8 +176,8 @@ def make_flex_doc(op_name: str, typ: str) -> str:
+ """
>>> a.divmod(b, fill_value=0)
(a 1.0
- b NaN
- c NaN
+ b inf
+ c inf
d 0.0
e NaN
dtype: float64,
diff --git a/pandas/core/ops/methods.py b/pandas/core/ops/methods.py
deleted file mode 100644
index dda20d2fe5adb..0000000000000
--- a/pandas/core/ops/methods.py
+++ /dev/null
@@ -1,124 +0,0 @@
-"""
-Functions to generate methods and pin them to the appropriate classes.
-"""
-from __future__ import annotations
-
-import operator
-
-from pandas.core.dtypes.generic import (
- ABCDataFrame,
- ABCSeries,
-)
-
-from pandas.core import roperator
-
-
-def _get_method_wrappers(cls):
- """
- Find the appropriate operation-wrappers to use when defining flex/special
- arithmetic, boolean, and comparison operations with the given class.
-
- Parameters
- ----------
- cls : class
-
- Returns
- -------
- arith_flex : function or None
- comp_flex : function or None
- """
- # TODO: make these non-runtime imports once the relevant functions
- # are no longer in __init__
- from pandas.core.ops import (
- flex_arith_method_FRAME,
- flex_comp_method_FRAME,
- flex_method_SERIES,
- )
-
- if issubclass(cls, ABCSeries):
- # Just Series
- arith_flex = flex_method_SERIES
- comp_flex = flex_method_SERIES
- elif issubclass(cls, ABCDataFrame):
- arith_flex = flex_arith_method_FRAME
- comp_flex = flex_comp_method_FRAME
- return arith_flex, comp_flex
-
-
-def add_flex_arithmetic_methods(cls) -> None:
- """
- Adds the full suite of flex arithmetic methods (``pow``, ``mul``, ``add``)
- to the class.
-
- Parameters
- ----------
- cls : class
- flex methods will be defined and pinned to this class
- """
- flex_arith_method, flex_comp_method = _get_method_wrappers(cls)
- new_methods = _create_methods(cls, flex_arith_method, flex_comp_method)
- new_methods.update(
- {
- "multiply": new_methods["mul"],
- "subtract": new_methods["sub"],
- "divide": new_methods["div"],
- }
- )
- # opt out of bool flex methods for now
- assert not any(kname in new_methods for kname in ("ror_", "rxor", "rand_"))
-
- _add_methods(cls, new_methods=new_methods)
-
-
-def _create_methods(cls, arith_method, comp_method):
- # creates actual flex methods based upon arithmetic, and comp method
- # constructors.
-
- have_divmod = issubclass(cls, ABCSeries)
- # divmod is available for Series
-
- new_methods = {}
-
- new_methods.update(
- {
- "add": arith_method(operator.add),
- "radd": arith_method(roperator.radd),
- "sub": arith_method(operator.sub),
- "mul": arith_method(operator.mul),
- "truediv": arith_method(operator.truediv),
- "floordiv": arith_method(operator.floordiv),
- "mod": arith_method(operator.mod),
- "pow": arith_method(operator.pow),
- "rmul": arith_method(roperator.rmul),
- "rsub": arith_method(roperator.rsub),
- "rtruediv": arith_method(roperator.rtruediv),
- "rfloordiv": arith_method(roperator.rfloordiv),
- "rpow": arith_method(roperator.rpow),
- "rmod": arith_method(roperator.rmod),
- }
- )
- new_methods["div"] = new_methods["truediv"]
- new_methods["rdiv"] = new_methods["rtruediv"]
- if have_divmod:
- # divmod doesn't have an op that is supported by numexpr
- new_methods["divmod"] = arith_method(divmod)
- new_methods["rdivmod"] = arith_method(roperator.rdivmod)
-
- new_methods.update(
- {
- "eq": comp_method(operator.eq),
- "ne": comp_method(operator.ne),
- "lt": comp_method(operator.lt),
- "gt": comp_method(operator.gt),
- "le": comp_method(operator.le),
- "ge": comp_method(operator.ge),
- }
- )
-
- new_methods = {k.strip("_"): v for k, v in new_methods.items()}
- return new_methods
-
-
-def _add_methods(cls, new_methods) -> None:
- for name, method in new_methods.items():
- setattr(cls, name, method)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 95ee3f1af58f1..6f22355d59676 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3,6 +3,7 @@
"""
from __future__ import annotations
+import operator
import sys
from textwrap import dedent
from typing import (
@@ -91,6 +92,7 @@
missing,
nanops,
ops,
+ roperator,
)
from pandas.core.accessor import CachedAccessor
from pandas.core.apply import SeriesApply
@@ -357,8 +359,6 @@ class Series(base.IndexOpsMixin, NDFrame): # type: ignore[misc]
doc=base.IndexOpsMixin.hasnans.__doc__,
)
_mgr: SingleManager
- div: Callable[[Series, Any], Series]
- rdiv: Callable[[Series, Any], Series]
# ----------------------------------------------------------------------
# Constructors
@@ -2967,79 +2967,6 @@ def _append(
to_concat, ignore_index=ignore_index, verify_integrity=verify_integrity
)
- def _binop(self, other: Series, func, level=None, fill_value=None):
- """
- Perform generic binary operation with optional fill value.
-
- Parameters
- ----------
- other : Series
- func : binary operator
- fill_value : float or object
- Value to substitute for NA/null values. If both Series are NA in a
- location, the result will be NA regardless of the passed fill value.
- level : int or level name, default None
- Broadcast across a level, matching Index values on the
- passed MultiIndex level.
-
- Returns
- -------
- Series
- """
- if not isinstance(other, Series):
- raise AssertionError("Other operand must be Series")
-
- this = self
-
- if not self.index.equals(other.index):
- this, other = self.align(other, level=level, join="outer", copy=False)
-
- this_vals, other_vals = ops.fill_binop(this._values, other._values, fill_value)
-
- with np.errstate(all="ignore"):
- result = func(this_vals, other_vals)
-
- name = ops.get_op_result_name(self, other)
- return this._construct_result(result, name)
-
- def _construct_result(
- self, result: ArrayLike | tuple[ArrayLike, ArrayLike], name: Hashable
- ) -> Series | tuple[Series, Series]:
- """
- Construct an appropriately-labelled Series from the result of an op.
-
- Parameters
- ----------
- result : ndarray or ExtensionArray
- name : Label
-
- Returns
- -------
- Series
- In the case of __divmod__ or __rdivmod__, a 2-tuple of Series.
- """
- if isinstance(result, tuple):
- # produced by divmod or rdivmod
-
- res1 = self._construct_result(result[0], name=name)
- res2 = self._construct_result(result[1], name=name)
-
- # GH#33427 assertions to keep mypy happy
- assert isinstance(res1, Series)
- assert isinstance(res2, Series)
- return (res1, res2)
-
- # TODO: result should always be ArrayLike, but this fails for some
- # JSONArray tests
- dtype = getattr(result, "dtype", None)
- out = self._constructor(result, index=self.index, dtype=dtype)
- out = out.__finalize__(self)
-
- # Set the result's name after __finalize__ is called because __finalize__
- # would set it back to self.name
- out.name = name
- return out
-
@doc(
_shared_docs["compare"],
"""
@@ -6027,7 +5954,7 @@ def _cmp_method(self, other, op):
def _logical_method(self, other, op):
res_name = ops.get_op_result_name(self, other)
- self, other = ops.align_method_SERIES(self, other, align_asobject=True)
+ self, other = self._align_for_op(other, align_asobject=True)
lvalues = self._values
rvalues = extract_array(other, extract_numpy=True, extract_range=True)
@@ -6036,11 +5963,263 @@ def _logical_method(self, other, op):
return self._construct_result(res_values, name=res_name)
def _arith_method(self, other, op):
- self, other = ops.align_method_SERIES(self, other)
+ self, other = self._align_for_op(other)
return base.IndexOpsMixin._arith_method(self, other, op)
+ def _align_for_op(self, right, align_asobject: bool = False):
+ """align lhs and rhs Series"""
+ # TODO: Different from DataFrame._align_for_op, list, tuple and ndarray
+ # are not coerced here
+ # because Series has inconsistencies described in GH#13637
+ left = self
-Series._add_numeric_operations()
+ if isinstance(right, Series):
+ # avoid repeated alignment
+ if not left.index.equals(right.index):
+ if align_asobject:
+ # to keep original value's dtype for bool ops
+ left = left.astype(object)
+ right = right.astype(object)
-# Add arithmetic!
-ops.add_flex_arithmetic_methods(Series)
+ left, right = left.align(right, copy=False)
+
+ return left, right
+
+ def _binop(self, other: Series, func, level=None, fill_value=None) -> Series:
+ """
+ Perform generic binary operation with optional fill value.
+
+ Parameters
+ ----------
+ other : Series
+ func : binary operator
+ fill_value : float or object
+ Value to substitute for NA/null values. If both Series are NA in a
+ location, the result will be NA regardless of the passed fill value.
+ level : int or level name, default None
+ Broadcast across a level, matching Index values on the
+ passed MultiIndex level.
+
+ Returns
+ -------
+ Series
+ """
+ if not isinstance(other, Series):
+ raise AssertionError("Other operand must be Series")
+
+ this = self
+
+ if not self.index.equals(other.index):
+ this, other = self.align(other, level=level, join="outer", copy=False)
+
+ this_vals, other_vals = ops.fill_binop(this._values, other._values, fill_value)
+
+ with np.errstate(all="ignore"):
+ result = func(this_vals, other_vals)
+
+ name = ops.get_op_result_name(self, other)
+ out = this._construct_result(result, name)
+ return cast(Series, out)
+
+ def _construct_result(
+ self, result: ArrayLike | tuple[ArrayLike, ArrayLike], name: Hashable
+ ) -> Series | tuple[Series, Series]:
+ """
+ Construct an appropriately-labelled Series from the result of an op.
+
+ Parameters
+ ----------
+ result : ndarray or ExtensionArray
+ name : Label
+
+ Returns
+ -------
+ Series
+ In the case of __divmod__ or __rdivmod__, a 2-tuple of Series.
+ """
+ if isinstance(result, tuple):
+ # produced by divmod or rdivmod
+
+ res1 = self._construct_result(result[0], name=name)
+ res2 = self._construct_result(result[1], name=name)
+
+ # GH#33427 assertions to keep mypy happy
+ assert isinstance(res1, Series)
+ assert isinstance(res2, Series)
+ return (res1, res2)
+
+ # TODO: result should always be ArrayLike, but this fails for some
+ # JSONArray tests
+ dtype = getattr(result, "dtype", None)
+ out = self._constructor(result, index=self.index, dtype=dtype)
+ out = out.__finalize__(self)
+
+ # Set the result's name after __finalize__ is called because __finalize__
+ # would set it back to self.name
+ out.name = name
+ return out
+
+ def _flex_method(self, other, op, *, level=None, fill_value=None, axis: Axis = 0):
+ if axis is not None:
+ self._get_axis_number(axis)
+
+ res_name = ops.get_op_result_name(self, other)
+
+ if isinstance(other, Series):
+ return self._binop(other, op, level=level, fill_value=fill_value)
+ elif isinstance(other, (np.ndarray, list, tuple)):
+ if len(other) != len(self):
+ raise ValueError("Lengths must be equal")
+ other = self._constructor(other, self.index)
+ result = self._binop(other, op, level=level, fill_value=fill_value)
+ result.name = res_name
+ return result
+ else:
+ if fill_value is not None:
+ self = self.fillna(fill_value)
+
+ return op(self, other)
+
+ @Appender(ops.make_flex_doc("eq", "series"))
+ def eq(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.eq, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("ne", "series"))
+ def ne(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.ne, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("le", "series"))
+ def le(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.le, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("lt", "series"))
+ def lt(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.lt, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("ge", "series"))
+ def ge(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.ge, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("gt", "series"))
+ def gt(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.gt, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("add", "series"))
+ def add(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.add, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("radd", "series"))
+ def radd(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, roperator.radd, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("sub", "series"))
+ def sub(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.sub, level=level, fill_value=fill_value, axis=axis
+ )
+
+ subtract = sub
+
+ @Appender(ops.make_flex_doc("rsub", "series"))
+ def rsub(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, roperator.rsub, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("mul", "series"))
+ def mul(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.mul, level=level, fill_value=fill_value, axis=axis
+ )
+
+ multiply = mul
+
+ @Appender(ops.make_flex_doc("rmul", "series"))
+ def rmul(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, roperator.rmul, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("truediv", "series"))
+ def truediv(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.truediv, level=level, fill_value=fill_value, axis=axis
+ )
+
+ div = truediv
+ divide = truediv
+
+ @Appender(ops.make_flex_doc("rtruediv", "series"))
+ def rtruediv(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, roperator.rtruediv, level=level, fill_value=fill_value, axis=axis
+ )
+
+ rdiv = rtruediv
+
+ @Appender(ops.make_flex_doc("floordiv", "series"))
+ def floordiv(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.floordiv, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("rfloordiv", "series"))
+ def rfloordiv(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, roperator.rfloordiv, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("mod", "series"))
+ def mod(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.mod, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("rmod", "series"))
+ def rmod(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, roperator.rmod, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("pow", "series"))
+ def pow(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, operator.pow, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("rpow", "series"))
+ def rpow(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, roperator.rpow, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("divmod", "series"))
+ def divmod(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, divmod, level=level, fill_value=fill_value, axis=axis
+ )
+
+ @Appender(ops.make_flex_doc("rdivmod", "series"))
+ def rdivmod(self, other, level=None, fill_value=None, axis: Axis = 0):
+ return self._flex_method(
+ other, roperator.rdivmod, level=level, fill_value=fill_value, axis=axis
+ )
+
+
+Series._add_numeric_operations()
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index bcc1ae0183b97..b581dfd8c44b0 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -1800,7 +1800,7 @@ def test_alignment_non_pandas(self, val):
columns = ["X", "Y", "Z"]
df = DataFrame(np.random.randn(3, 3), index=index, columns=columns)
- align = pd.core.ops.align_method_FRAME
+ align = DataFrame._align_for_op
expected = DataFrame({"X": val, "Y": val, "Z": val}, index=df.index)
tm.assert_frame_equal(align(df, val, "index")[1], expected)
@@ -1816,7 +1816,7 @@ def test_alignment_non_pandas_length_mismatch(self, val):
columns = ["X", "Y", "Z"]
df = DataFrame(np.random.randn(3, 3), index=index, columns=columns)
- align = pd.core.ops.align_method_FRAME
+ align = DataFrame._align_for_op
# length mismatch
msg = "Unable to coerce to Series, length must be 3: given 2"
with pytest.raises(ValueError, match=msg):
@@ -1830,7 +1830,7 @@ def test_alignment_non_pandas_index_columns(self):
columns = ["X", "Y", "Z"]
df = DataFrame(np.random.randn(3, 3), index=index, columns=columns)
- align = pd.core.ops.align_method_FRAME
+ align = DataFrame._align_for_op
val = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
tm.assert_frame_equal(
align(df, val, "index")[1],
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Avoid runtime imports and get stuff out of `ops.__init__`, which was the goal 4-5 years ago when ops was first made a directory.
Move relevant helper methods from frame/series files. Decided on a new mixin rather than dumping everything directly in DataFrame/Series bc those files are too big as it is. If others feel strongly i'd be OK putting it all in DataFrame/Series. | https://api.github.com/repos/pandas-dev/pandas/pulls/51813 | 2023-03-07T01:32:27Z | 2023-03-08T21:31:49Z | 2023-03-08T21:31:49Z | 2023-03-08T21:33:48Z |
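The `fill_binop` helper relocated in the diff above fills NA entries only where exactly one of the two operands is null, so positions where both sides are NA stay NA regardless of `fill_value`. A minimal standalone sketch of that one-sided-fill semantics (a hypothetical NumPy-only reimplementation for illustration, not pandas' actual internal code, which uses `isna` and handles non-float dtypes):

```python
import numpy as np


def fill_binop(left, right, fill_value):
    """Fill NaN entries with fill_value, but only at positions
    where exactly one of left/right is NaN (not both)."""
    if fill_value is not None:
        left_mask = np.isnan(left)
        right_mask = np.isnan(right)

        # one but not both
        mask = left_mask ^ right_mask

        if left_mask.any():
            left = left.copy()
            left[left_mask & mask] = fill_value

        if right_mask.any():
            right = right.copy()
            right[right_mask & mask] = fill_value

    return left, right


a = np.array([1.0, np.nan, np.nan])
b = np.array([10.0, 20.0, np.nan])
left, right = fill_binop(a, b, 0.0)
# index 1: only `a` is NaN, so it is filled with 0.0;
# index 2: both are NaN, so both stay NaN
```

This is why `Series.add(other, fill_value=0)` still yields NaN where both inputs are missing, matching the `divmod` docstring example touched in the same diff.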
STYLE: Enable ruff TCH on some files | diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 6758ab9cb6814..9a4aa69345248 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -19,11 +19,6 @@
from pandas._config import using_copy_on_write
-from pandas._typing import (
- Axis,
- AxisInt,
- HashableT,
-)
from pandas.util._decorators import cache_readonly
from pandas.core.dtypes.concat import concat_compat
@@ -51,6 +46,12 @@
from pandas.core.internals import concatenate_managers
if TYPE_CHECKING:
+ from pandas._typing import (
+ Axis,
+ AxisInt,
+ HashableT,
+ )
+
from pandas import (
DataFrame,
Series,
diff --git a/pandas/core/reshape/encoding.py b/pandas/core/reshape/encoding.py
index d00fade8a1c6f..92d556a582262 100644
--- a/pandas/core/reshape/encoding.py
+++ b/pandas/core/reshape/encoding.py
@@ -3,6 +3,7 @@
from collections import defaultdict
import itertools
from typing import (
+ TYPE_CHECKING,
Hashable,
Iterable,
)
@@ -10,7 +11,6 @@
import numpy as np
from pandas._libs.sparse import IntIndex
-from pandas._typing import NpDtype
from pandas.core.dtypes.common import (
is_integer_dtype,
@@ -28,6 +28,9 @@
)
from pandas.core.series import Series
+if TYPE_CHECKING:
+ from pandas._typing import NpDtype
+
def get_dummies(
data,
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 5dd95d78c7b79..ee5851fcc2dd6 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -11,12 +11,6 @@
import numpy as np
from pandas._libs import lib
-from pandas._typing import (
- AggFuncType,
- AggFuncTypeBase,
- AggFuncTypeDict,
- IndexLabel,
-)
from pandas.util._decorators import (
Appender,
Substitution,
@@ -48,6 +42,13 @@
from pandas.core.series import Series
if TYPE_CHECKING:
+ from pandas._typing import (
+ AggFuncType,
+ AggFuncTypeBase,
+ AggFuncTypeDict,
+ IndexLabel,
+ )
+
from pandas import DataFrame
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index e83317ebc74ce..21aa9f0f07b26 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -10,7 +10,6 @@
import numpy as np
import pandas._libs.reshape as libreshape
-from pandas._typing import npt
from pandas.errors import PerformanceWarning
from pandas.util._decorators import cache_readonly
from pandas.util._exceptions import find_stack_level
@@ -44,6 +43,8 @@
)
if TYPE_CHECKING:
+ from pandas._typing import npt
+
from pandas.core.arrays import ExtensionArray
from pandas.core.indexes.frozen import FrozenList
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py
index 267abdb8d0104..54ae217990d96 100644
--- a/pandas/core/reshape/tile.py
+++ b/pandas/core/reshape/tile.py
@@ -4,6 +4,7 @@
from __future__ import annotations
from typing import (
+ TYPE_CHECKING,
Any,
Callable,
Literal,
@@ -16,7 +17,6 @@
Timestamp,
)
from pandas._libs.lib import infer_dtype
-from pandas._typing import IntervalLeftRight
from pandas.core.dtypes.common import (
DT64NS_DTYPE,
@@ -46,6 +46,9 @@
from pandas.core import nanops
import pandas.core.algorithms as algos
+if TYPE_CHECKING:
+ from pandas._typing import IntervalLeftRight
+
def cut(
x,
diff --git a/pandas/core/reshape/util.py b/pandas/core/reshape/util.py
index a92b439927fff..bcd51e095a1a1 100644
--- a/pandas/core/reshape/util.py
+++ b/pandas/core/reshape/util.py
@@ -1,11 +1,14 @@
from __future__ import annotations
-import numpy as np
+from typing import TYPE_CHECKING
-from pandas._typing import NumpyIndexT
+import numpy as np
from pandas.core.dtypes.common import is_list_like
+if TYPE_CHECKING:
+ from pandas._typing import NumpyIndexT
+
def cartesian_product(X) -> list[np.ndarray]:
"""
diff --git a/pandas/core/strings/base.py b/pandas/core/strings/base.py
index f1e716b64644a..10d8e94972725 100644
--- a/pandas/core/strings/base.py
+++ b/pandas/core/strings/base.py
@@ -1,7 +1,6 @@
from __future__ import annotations
import abc
-import re
from typing import (
TYPE_CHECKING,
Callable,
@@ -10,9 +9,11 @@
import numpy as np
-from pandas._typing import Scalar
-
if TYPE_CHECKING:
+ import re
+
+ from pandas._typing import Scalar
+
from pandas import Series
diff --git a/pandas/core/strings/object_array.py b/pandas/core/strings/object_array.py
index 508ac122d67af..f8e3f0756dfbd 100644
--- a/pandas/core/strings/object_array.py
+++ b/pandas/core/strings/object_array.py
@@ -16,10 +16,6 @@
from pandas._libs import lib
import pandas._libs.missing as libmissing
import pandas._libs.ops as libops
-from pandas._typing import (
- NpDtype,
- Scalar,
-)
from pandas.core.dtypes.common import is_scalar
from pandas.core.dtypes.missing import isna
@@ -27,6 +23,11 @@
from pandas.core.strings.base import BaseStringArrayMethods
if TYPE_CHECKING:
+ from pandas._typing import (
+ NpDtype,
+ Scalar,
+ )
+
from pandas import Series
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index ea150f714845c..7517d5278e52a 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -1,6 +1,9 @@
from __future__ import annotations
-from typing import Literal
+from typing import (
+ TYPE_CHECKING,
+ Literal,
+)
import numpy as np
@@ -10,10 +13,6 @@
)
from pandas._libs import lib
-from pandas._typing import (
- DateTimeErrorChoices,
- npt,
-)
from pandas.core.dtypes.cast import maybe_downcast_numeric
from pandas.core.dtypes.common import (
@@ -36,6 +35,12 @@
import pandas as pd
from pandas.core.arrays import BaseMaskedArray
+if TYPE_CHECKING:
+ from pandas._typing import (
+ DateTimeErrorChoices,
+ npt,
+ )
+
def to_numeric(
arg,
diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py
index 42cf92c6b2a35..2e91927a002bb 100644
--- a/pandas/core/tools/timedeltas.py
+++ b/pandas/core/tools/timedeltas.py
@@ -3,7 +3,6 @@
"""
from __future__ import annotations
-from datetime import timedelta
from typing import (
TYPE_CHECKING,
overload,
@@ -30,6 +29,8 @@
from pandas.core.arrays.timedeltas import sequence_to_td64ns
if TYPE_CHECKING:
+ from datetime import timedelta
+
from pandas._libs.tslibs.timedeltas import UnitChoices
from pandas._typing import (
ArrayLike,
diff --git a/pandas/core/tools/times.py b/pandas/core/tools/times.py
index cb178926123d3..e8fc07c1876f8 100644
--- a/pandas/core/tools/times.py
+++ b/pandas/core/tools/times.py
@@ -4,11 +4,11 @@
datetime,
time,
)
+from typing import TYPE_CHECKING
import numpy as np
from pandas._libs.lib import is_list_like
-from pandas._typing import DateTimeErrorChoices
from pandas.core.dtypes.generic import (
ABCIndex,
@@ -16,6 +16,9 @@
)
from pandas.core.dtypes.missing import notna
+if TYPE_CHECKING:
+ from pandas._typing import DateTimeErrorChoices
+
def to_time(
arg,
diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py
index b61fa9a92539e..34dc49ff4a82a 100644
--- a/pandas/core/window/ewm.py
+++ b/pandas/core/window/ewm.py
@@ -9,15 +9,6 @@
from pandas._libs.tslibs import Timedelta
import pandas._libs.window.aggregations as window_aggregations
-from pandas._typing import (
- Axis,
- TimedeltaConvertibleTypes,
-)
-
-if TYPE_CHECKING:
- from pandas import DataFrame, Series
- from pandas.core.generic import NDFrame
-
from pandas.util._decorators import doc
from pandas.core.dtypes.common import (
@@ -60,6 +51,18 @@
BaseWindowGroupby,
)
+if TYPE_CHECKING:
+ from pandas._typing import (
+ Axis,
+ TimedeltaConvertibleTypes,
+ )
+
+ from pandas import (
+ DataFrame,
+ Series,
+ )
+ from pandas.core.generic import NDFrame
+
def get_center_of_mass(
comass: float | None,
diff --git a/pandas/core/window/expanding.py b/pandas/core/window/expanding.py
index 6147f0f43c558..b3caa189bd579 100644
--- a/pandas/core/window/expanding.py
+++ b/pandas/core/window/expanding.py
@@ -7,16 +7,6 @@
Callable,
)
-from pandas._typing import (
- Axis,
- QuantileInterpolation,
- WindowingRankType,
-)
-
-if TYPE_CHECKING:
- from pandas import DataFrame, Series
- from pandas.core.generic import NDFrame
-
from pandas.util._decorators import doc
from pandas.core.indexers.objects import (
@@ -40,6 +30,19 @@
RollingAndExpandingMixin,
)
+if TYPE_CHECKING:
+ from pandas._typing import (
+ Axis,
+ QuantileInterpolation,
+ WindowingRankType,
+ )
+
+ from pandas import (
+ DataFrame,
+ Series,
+ )
+ from pandas.core.generic import NDFrame
+
class Expanding(RollingAndExpandingMixin):
"""
diff --git a/pandas/core/window/numba_.py b/pandas/core/window/numba_.py
index 756f8e3a1cffc..14775cc7f457e 100644
--- a/pandas/core/window/numba_.py
+++ b/pandas/core/window/numba_.py
@@ -9,11 +9,13 @@
import numpy as np
-from pandas._typing import Scalar
from pandas.compat._optional import import_optional_dependency
from pandas.core.util.numba_ import jit_user_function
+if TYPE_CHECKING:
+ from pandas._typing import Scalar
+
@functools.lru_cache(maxsize=None)
def generate_numba_apply_func(
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 46ce5950a1465..b11ff11421ed4 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -26,13 +26,6 @@
to_offset,
)
import pandas._libs.window.aggregations as window_aggregations
-from pandas._typing import (
- ArrayLike,
- Axis,
- NDFrameT,
- QuantileInterpolation,
- WindowingRankType,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.errors import DataError
from pandas.util._decorators import doc
@@ -99,6 +92,14 @@
)
if TYPE_CHECKING:
+ from pandas._typing import (
+ ArrayLike,
+ Axis,
+ NDFrameT,
+ QuantileInterpolation,
+ WindowingRankType,
+ )
+
from pandas import (
DataFrame,
Series,
diff --git a/pandas/io/__init__.py b/pandas/io/__init__.py
index bd3ddc09393d8..c804b81c49e7c 100644
--- a/pandas/io/__init__.py
+++ b/pandas/io/__init__.py
@@ -1,3 +1,4 @@
+# ruff: noqa: TCH004
from typing import TYPE_CHECKING
if TYPE_CHECKING:
@@ -8,5 +9,5 @@
stata,
)
- # and mark only those modules as public
+ # mark only those modules as public
__all__ = ["formats", "json", "stata"]
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 7cad7ecbf777a..065d8992ab0ac 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -6,9 +6,9 @@
from io import BytesIO
import os
from textwrap import fill
-from types import TracebackType
from typing import (
IO,
+ TYPE_CHECKING,
Any,
Callable,
Hashable,
@@ -30,14 +30,6 @@
from pandas._libs import lib
from pandas._libs.parsers import STR_NA_VALUES
-from pandas._typing import (
- DtypeArg,
- FilePath,
- IntStrT,
- ReadBuffer,
- StorageOptions,
- WriteExcelBuffer,
-)
from pandas.compat._optional import (
get_version,
import_optional_dependency,
@@ -75,6 +67,17 @@
from pandas.io.parsers import TextParser
from pandas.io.parsers.readers import validate_integer
+if TYPE_CHECKING:
+ from types import TracebackType
+
+ from pandas._typing import (
+ DtypeArg,
+ FilePath,
+ IntStrT,
+ ReadBuffer,
+ StorageOptions,
+ WriteExcelBuffer,
+ )
_read_excel_doc = (
"""
Read an Excel file into a pandas DataFrame.
diff --git a/pandas/io/excel/_odswriter.py b/pandas/io/excel/_odswriter.py
index 6f1d62111e5b4..f5aaf08530591 100644
--- a/pandas/io/excel/_odswriter.py
+++ b/pandas/io/excel/_odswriter.py
@@ -11,11 +11,6 @@
)
from pandas._libs import json
-from pandas._typing import (
- FilePath,
- StorageOptions,
- WriteExcelBuffer,
-)
from pandas.io.excel._base import ExcelWriter
from pandas.io.excel._util import (
@@ -24,6 +19,12 @@
)
if TYPE_CHECKING:
+ from pandas._typing import (
+ FilePath,
+ StorageOptions,
+ WriteExcelBuffer,
+ )
+
from pandas.io.formats.excel import ExcelCell
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
index 594813fe0c1ac..e751c919ee8dc 100644
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -10,13 +10,6 @@
import numpy as np
-from pandas._typing import (
- FilePath,
- ReadBuffer,
- Scalar,
- StorageOptions,
- WriteExcelBuffer,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.util._decorators import doc
@@ -35,6 +28,14 @@
from openpyxl.descriptors.serialisable import Serialisable
from openpyxl.workbook import Workbook
+ from pandas._typing import (
+ FilePath,
+ ReadBuffer,
+ Scalar,
+ StorageOptions,
+ WriteExcelBuffer,
+ )
+
class OpenpyxlWriter(ExcelWriter):
_engine = "openpyxl"
diff --git a/pandas/io/excel/_pyxlsb.py b/pandas/io/excel/_pyxlsb.py
index 634baee63137e..64076e4952cde 100644
--- a/pandas/io/excel/_pyxlsb.py
+++ b/pandas/io/excel/_pyxlsb.py
@@ -1,12 +1,8 @@
# pyright: reportMissingImports=false
from __future__ import annotations
-from pandas._typing import (
- FilePath,
- ReadBuffer,
- Scalar,
- StorageOptions,
-)
+from typing import TYPE_CHECKING
+
from pandas.compat._optional import import_optional_dependency
from pandas.util._decorators import doc
@@ -14,6 +10,14 @@
from pandas.io.excel._base import BaseExcelReader
+if TYPE_CHECKING:
+ from pandas._typing import (
+ FilePath,
+ ReadBuffer,
+ Scalar,
+ StorageOptions,
+ )
+
class PyxlsbReader(BaseExcelReader):
@doc(storage_options=_shared_docs["storage_options"])
diff --git a/pandas/io/excel/_xlrd.py b/pandas/io/excel/_xlrd.py
index 37bd4c1ba5ba5..702d00e7fdea7 100644
--- a/pandas/io/excel/_xlrd.py
+++ b/pandas/io/excel/_xlrd.py
@@ -1,13 +1,10 @@
from __future__ import annotations
from datetime import time
+from typing import TYPE_CHECKING
import numpy as np
-from pandas._typing import (
- Scalar,
- StorageOptions,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.util._decorators import doc
@@ -15,6 +12,12 @@
from pandas.io.excel._base import BaseExcelReader
+if TYPE_CHECKING:
+ from pandas._typing import (
+ Scalar,
+ StorageOptions,
+ )
+
class XlrdReader(BaseExcelReader):
@doc(storage_options=_shared_docs["storage_options"])
diff --git a/pandas/io/excel/_xlsxwriter.py b/pandas/io/excel/_xlsxwriter.py
index 1800d3d87f7c0..8cb88403b2f87 100644
--- a/pandas/io/excel/_xlsxwriter.py
+++ b/pandas/io/excel/_xlsxwriter.py
@@ -1,13 +1,11 @@
from __future__ import annotations
-from typing import Any
+from typing import (
+ TYPE_CHECKING,
+ Any,
+)
from pandas._libs import json
-from pandas._typing import (
- FilePath,
- StorageOptions,
- WriteExcelBuffer,
-)
from pandas.io.excel._base import ExcelWriter
from pandas.io.excel._util import (
@@ -15,6 +13,13 @@
validate_freeze_panes,
)
+if TYPE_CHECKING:
+ from pandas._typing import (
+ FilePath,
+ StorageOptions,
+ WriteExcelBuffer,
+ )
+
class _XlsxStyler:
# Map from openpyxl-oriented styles to flatter xlsxwriter representation
diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
index bf0e174674cb1..4593c48c926f4 100644
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -2,6 +2,7 @@
from __future__ import annotations
from typing import (
+ TYPE_CHECKING,
Hashable,
Sequence,
)
@@ -9,12 +10,6 @@
from pandas._config import using_nullable_dtypes
from pandas._libs import lib
-from pandas._typing import (
- FilePath,
- ReadBuffer,
- StorageOptions,
- WriteBuffer,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.util._decorators import doc
@@ -27,6 +22,14 @@
from pandas.io.common import get_handle
+if TYPE_CHECKING:
+ from pandas._typing import (
+ FilePath,
+ ReadBuffer,
+ StorageOptions,
+ WriteBuffer,
+ )
+
@doc(storage_options=_shared_docs["storage_options"])
def to_feather(
diff --git a/pandas/io/formats/__init__.py b/pandas/io/formats/__init__.py
index 8a3486a4d71fe..5e56b1bc7ba43 100644
--- a/pandas/io/formats/__init__.py
+++ b/pandas/io/formats/__init__.py
@@ -1,3 +1,4 @@
+# ruff: noqa: TCH004
from typing import TYPE_CHECKING
if TYPE_CHECKING:
diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py
index baf264353fca7..672f7c1f71b15 100644
--- a/pandas/io/formats/csvs.py
+++ b/pandas/io/formats/csvs.py
@@ -18,14 +18,6 @@
import numpy as np
from pandas._libs import writers as libwriters
-from pandas._typing import (
- CompressionOptions,
- FilePath,
- FloatFormatType,
- IndexLabel,
- StorageOptions,
- WriteBuffer,
-)
from pandas.util._decorators import cache_readonly
from pandas.core.dtypes.generic import (
@@ -41,6 +33,15 @@
from pandas.io.common import get_handle
if TYPE_CHECKING:
+ from pandas._typing import (
+ CompressionOptions,
+ FilePath,
+ FloatFormatType,
+ IndexLabel,
+ StorageOptions,
+ WriteBuffer,
+ )
+
from pandas.io.formats.format import DataFrameFormatter
diff --git a/pandas/io/formats/excel.py b/pandas/io/formats/excel.py
index 34c4d330761f5..806496d5a7108 100644
--- a/pandas/io/formats/excel.py
+++ b/pandas/io/formats/excel.py
@@ -10,6 +10,7 @@
import itertools
import re
from typing import (
+ TYPE_CHECKING,
Any,
Callable,
Hashable,
@@ -23,10 +24,6 @@
import numpy as np
from pandas._libs.lib import is_list_like
-from pandas._typing import (
- IndexLabel,
- StorageOptions,
-)
from pandas.util._decorators import doc
from pandas.util._exceptions import find_stack_level
@@ -53,6 +50,12 @@
from pandas.io.formats.format import get_level_lengths
from pandas.io.formats.printing import pprint_thing
+if TYPE_CHECKING:
+ from pandas._typing import (
+ IndexLabel,
+ StorageOptions,
+ )
+
class ExcelCell:
__fields__ = ("row", "col", "val", "style", "mergestart", "mergeend")
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index f9b081afcd045..feda414a0113d 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -49,19 +49,6 @@
periods_per_day,
)
from pandas._libs.tslibs.nattype import NaTType
-from pandas._typing import (
- ArrayLike,
- Axes,
- ColspaceArgType,
- ColspaceType,
- CompressionOptions,
- FilePath,
- FloatFormatType,
- FormattersType,
- IndexLabel,
- StorageOptions,
- WriteBuffer,
-)
from pandas.core.dtypes.common import (
is_categorical_dtype,
@@ -109,6 +96,20 @@
from pandas.io.formats import printing
if TYPE_CHECKING:
+ from pandas._typing import (
+ ArrayLike,
+ Axes,
+ ColspaceArgType,
+ ColspaceType,
+ CompressionOptions,
+ FilePath,
+ FloatFormatType,
+ FormattersType,
+ IndexLabel,
+ StorageOptions,
+ WriteBuffer,
+ )
+
from pandas import (
DataFrame,
Series,
diff --git a/pandas/io/formats/info.py b/pandas/io/formats/info.py
index d826c0a148ebe..7f31c354b5c85 100644
--- a/pandas/io/formats/info.py
+++ b/pandas/io/formats/info.py
@@ -16,15 +16,15 @@
from pandas._config import get_option
-from pandas._typing import (
- Dtype,
- WriteBuffer,
-)
-
from pandas.io.formats import format as fmt
from pandas.io.formats.printing import pprint_thing
if TYPE_CHECKING:
+ from pandas._typing import (
+ Dtype,
+ WriteBuffer,
+ )
+
from pandas import (
DataFrame,
Index,
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 8ee73e77f5b11..d989a9d199078 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -21,17 +21,6 @@
from pandas._config import get_option
-from pandas._typing import (
- Axis,
- AxisInt,
- FilePath,
- IndexLabel,
- Level,
- QuantileInterpolation,
- Scalar,
- StorageOptions,
- WriteBuffer,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.util._decorators import (
Substitution,
@@ -71,6 +60,18 @@
if TYPE_CHECKING:
from matplotlib.colors import Colormap
+ from pandas._typing import (
+ Axis,
+ AxisInt,
+ FilePath,
+ IndexLabel,
+ Level,
+ QuantileInterpolation,
+ Scalar,
+ StorageOptions,
+ WriteBuffer,
+ )
+
try:
import matplotlib as mpl
import matplotlib.pyplot as plt
diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py
index 69cc1e9f1f7ae..317ce93cf3da6 100644
--- a/pandas/io/formats/style_render.py
+++ b/pandas/io/formats/style_render.py
@@ -4,6 +4,7 @@
from functools import partial
import re
from typing import (
+ TYPE_CHECKING,
Any,
Callable,
DefaultDict,
@@ -22,10 +23,6 @@
from pandas._config import get_option
from pandas._libs import lib
-from pandas._typing import (
- Axis,
- Level,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.core.dtypes.common import (
@@ -46,6 +43,11 @@
from pandas.api.types import is_list_like
import pandas.core.common as com
+if TYPE_CHECKING:
+ from pandas._typing import (
+ Axis,
+ Level,
+ )
jinja2 = import_optional_dependency("jinja2", extra="DataFrame.style requires jinja2.")
from markupsafe import escape as escape_html # markupsafe is jinja2 dependency
diff --git a/pandas/io/formats/xml.py b/pandas/io/formats/xml.py
index cc258e0271031..7927e27cc9284 100644
--- a/pandas/io/formats/xml.py
+++ b/pandas/io/formats/xml.py
@@ -10,13 +10,6 @@
Any,
)
-from pandas._typing import (
- CompressionOptions,
- FilePath,
- ReadBuffer,
- StorageOptions,
- WriteBuffer,
-)
from pandas.errors import AbstractMethodError
from pandas.util._decorators import doc
@@ -32,6 +25,14 @@
)
if TYPE_CHECKING:
+ from pandas._typing import (
+ CompressionOptions,
+ FilePath,
+ ReadBuffer,
+ StorageOptions,
+ WriteBuffer,
+ )
+
from pandas import DataFrame
diff --git a/pandas/io/html.py b/pandas/io/html.py
index d6d1c5651dd37..25eb6bd0bbc90 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -21,11 +21,6 @@
from pandas._config import using_nullable_dtypes
from pandas._libs import lib
-from pandas._typing import (
- BaseBuffer,
- FilePath,
- ReadBuffer,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.errors import (
AbstractMethodError,
@@ -51,6 +46,12 @@
from pandas.io.parsers import TextParser
if TYPE_CHECKING:
+ from pandas._typing import (
+ BaseBuffer,
+ FilePath,
+ ReadBuffer,
+ )
+
from pandas import DataFrame
#############
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 80d2f9eda7ce5..d052ec60a7a80 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -7,7 +7,6 @@
from collections import abc
from io import StringIO
from itertools import islice
-from types import TracebackType
from typing import (
TYPE_CHECKING,
Any,
@@ -32,17 +31,6 @@
loads,
)
from pandas._libs.tslibs import iNaT
-from pandas._typing import (
- CompressionOptions,
- DtypeArg,
- FilePath,
- IndexLabel,
- JSONEngine,
- JSONSerializable,
- ReadBuffer,
- StorageOptions,
- WriteBuffer,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.errors import AbstractMethodError
from pandas.util._decorators import doc
@@ -83,6 +71,20 @@
from pandas.io.parsers.readers import validate_integer
if TYPE_CHECKING:
+ from types import TracebackType
+
+ from pandas._typing import (
+ CompressionOptions,
+ DtypeArg,
+ FilePath,
+ IndexLabel,
+ JSONEngine,
+ JSONSerializable,
+ ReadBuffer,
+ StorageOptions,
+ WriteBuffer,
+ )
+
from pandas.core.generic import NDFrame
FrameSeriesStrT = TypeVar("FrameSeriesStrT", bound=Literal["frame", "series"])
diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
index 577d677e7b3a0..0937828b00e38 100644
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -9,6 +9,7 @@
import copy
import sys
from typing import (
+ TYPE_CHECKING,
Any,
DefaultDict,
Iterable,
@@ -17,14 +18,16 @@
import numpy as np
from pandas._libs.writers import convert_json_to_lines
-from pandas._typing import (
- IgnoreRaise,
- Scalar,
-)
import pandas as pd
from pandas import DataFrame
+if TYPE_CHECKING:
+ from pandas._typing import (
+ IgnoreRaise,
+ Scalar,
+ )
+
def convert_to_line_delimits(s: str) -> str:
"""
diff --git a/pandas/io/json/_table_schema.py b/pandas/io/json/_table_schema.py
index 372aaf98c3b2c..e23b5fe01d134 100644
--- a/pandas/io/json/_table_schema.py
+++ b/pandas/io/json/_table_schema.py
@@ -14,10 +14,6 @@
from pandas._libs.json import loads
from pandas._libs.tslibs import timezones
-from pandas._typing import (
- DtypeObj,
- JSONSerializable,
-)
from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.base import _registry as registry
@@ -39,6 +35,11 @@
import pandas.core.common as com
if TYPE_CHECKING:
+ from pandas._typing import (
+ DtypeObj,
+ JSONSerializable,
+ )
+
from pandas import Series
from pandas.core.indexes.multi import MultiIndex
diff --git a/pandas/io/orc.py b/pandas/io/orc.py
index 1b9be9adc1196..49edf91a51e01 100644
--- a/pandas/io/orc.py
+++ b/pandas/io/orc.py
@@ -4,6 +4,7 @@
import io
from types import ModuleType
from typing import (
+ TYPE_CHECKING,
Any,
Literal,
)
@@ -14,11 +15,6 @@
)
from pandas._libs import lib
-from pandas._typing import (
- FilePath,
- ReadBuffer,
- WriteBuffer,
-)
from pandas.compat import pa_version_under8p0
from pandas.compat._optional import import_optional_dependency
@@ -37,6 +33,13 @@
is_fsspec_url,
)
+if TYPE_CHECKING:
+ from pandas._typing import (
+ FilePath,
+ ReadBuffer,
+ WriteBuffer,
+ )
+
def read_orc(
path: FilePath | ReadBuffer[bytes],
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index aec31f40f8570..0936544929071 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -4,6 +4,7 @@
import io
import os
from typing import (
+ TYPE_CHECKING,
Any,
Literal,
)
@@ -12,12 +13,6 @@
from pandas._config import using_nullable_dtypes
from pandas._libs import lib
-from pandas._typing import (
- FilePath,
- ReadBuffer,
- StorageOptions,
- WriteBuffer,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.errors import AbstractMethodError
from pandas.util._decorators import doc
@@ -39,6 +34,14 @@
stringify_path,
)
+if TYPE_CHECKING:
+ from pandas._typing import (
+ FilePath,
+ ReadBuffer,
+ StorageOptions,
+ WriteBuffer,
+ )
+
def get_engine(engine: str) -> BaseImpl:
"""return our implementation"""
diff --git a/pandas/io/parsers/arrow_parser_wrapper.py b/pandas/io/parsers/arrow_parser_wrapper.py
index 420b6212f857a..ddbd902acdb08 100644
--- a/pandas/io/parsers/arrow_parser_wrapper.py
+++ b/pandas/io/parsers/arrow_parser_wrapper.py
@@ -1,6 +1,7 @@
from __future__ import annotations
-from pandas._typing import ReadBuffer
+from typing import TYPE_CHECKING
+
from pandas.compat._optional import import_optional_dependency
from pandas.core.dtypes.inference import is_integer
@@ -13,6 +14,9 @@
from pandas.io.parsers.base_parser import ParserBase
+if TYPE_CHECKING:
+ from pandas._typing import ReadBuffer
+
class ArrowParserWrapper(ParserBase):
"""
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 8d71ce6dff7f8..90ad0d04e0ea7 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -34,12 +34,6 @@
import pandas._libs.ops as libops
from pandas._libs.parsers import STR_NA_VALUES
from pandas._libs.tslibs import parsing
-from pandas._typing import (
- ArrayLike,
- DtypeArg,
- DtypeObj,
- Scalar,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.errors import (
ParserError,
@@ -91,6 +85,13 @@
from pandas.io.common import is_potential_multi_index
if TYPE_CHECKING:
+ from pandas._typing import (
+ ArrayLike,
+ DtypeArg,
+ DtypeObj,
+ Scalar,
+ )
+
from pandas import DataFrame
diff --git a/pandas/io/parsers/c_parser_wrapper.py b/pandas/io/parsers/c_parser_wrapper.py
index c10f1811751cd..f463647d5a464 100644
--- a/pandas/io/parsers/c_parser_wrapper.py
+++ b/pandas/io/parsers/c_parser_wrapper.py
@@ -14,12 +14,6 @@
from pandas._config.config import get_option
from pandas._libs import parsers
-from pandas._typing import (
- ArrayLike,
- DtypeArg,
- DtypeObj,
- ReadCsvBuffer,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.errors import DtypeWarning
from pandas.util._exceptions import find_stack_level
@@ -44,6 +38,13 @@
)
if TYPE_CHECKING:
+ from pandas._typing import (
+ ArrayLike,
+ DtypeArg,
+ DtypeObj,
+ ReadCsvBuffer,
+ )
+
from pandas import (
Index,
MultiIndex,
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index 263966269c0fa..5b30596eebf6f 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -24,11 +24,6 @@
import numpy as np
from pandas._libs import lib
-from pandas._typing import (
- ArrayLike,
- ReadCsvBuffer,
- Scalar,
-)
from pandas.errors import (
EmptyDataError,
ParserError,
@@ -47,6 +42,12 @@
)
if TYPE_CHECKING:
+ from pandas._typing import (
+ ArrayLike,
+ ReadCsvBuffer,
+ Scalar,
+ )
+
from pandas import (
Index,
MultiIndex,
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index 635c98e38da16..af0d34745a317 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -9,9 +9,9 @@
import csv
import sys
from textwrap import fill
-from types import TracebackType
from typing import (
IO,
+ TYPE_CHECKING,
Any,
Callable,
Hashable,
@@ -28,15 +28,6 @@
from pandas._libs import lib
from pandas._libs.parsers import STR_NA_VALUES
-from pandas._typing import (
- CompressionOptions,
- CSVEngine,
- DtypeArg,
- FilePath,
- IndexLabel,
- ReadCsvBuffer,
- StorageOptions,
-)
from pandas.errors import (
AbstractMethodError,
ParserWarning,
@@ -73,6 +64,18 @@
PythonParser,
)
+if TYPE_CHECKING:
+ from types import TracebackType
+
+ from pandas._typing import (
+ CompressionOptions,
+ CSVEngine,
+ DtypeArg,
+ FilePath,
+ IndexLabel,
+ ReadCsvBuffer,
+ StorageOptions,
+ )
_doc_read_csv_and_table = (
r"""
{summary}
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index a9ab925536d11..ba837e2f57243 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -2,16 +2,12 @@
from __future__ import annotations
import pickle
-from typing import Any
+from typing import (
+ TYPE_CHECKING,
+ Any,
+)
import warnings
-from pandas._typing import (
- CompressionOptions,
- FilePath,
- ReadPickleBuffer,
- StorageOptions,
- WriteBuffer,
-)
from pandas.compat import pickle_compat as pc
from pandas.util._decorators import doc
@@ -19,6 +15,15 @@
from pandas.io.common import get_handle
+if TYPE_CHECKING:
+ from pandas._typing import (
+ CompressionOptions,
+ FilePath,
+ ReadPickleBuffer,
+ StorageOptions,
+ WriteBuffer,
+ )
+
@doc(
storage_options=_shared_docs["storage_options"],
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index a813a0e285027..bd427ec98e8dc 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -14,7 +14,6 @@
import os
import re
from textwrap import dedent
-from types import TracebackType
from typing import (
TYPE_CHECKING,
Any,
@@ -41,15 +40,6 @@
writers as libwriters,
)
from pandas._libs.tslibs import timezones
-from pandas._typing import (
- AnyArrayLike,
- ArrayLike,
- AxisInt,
- DtypeArg,
- FilePath,
- Shape,
- npt,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.compat.pickle_compat import patch_pickle
from pandas.errors import (
@@ -115,14 +105,25 @@
)
if TYPE_CHECKING:
+ from types import TracebackType
+
from tables import (
Col,
File,
Node,
)
- from pandas.core.internals import Block
+ from pandas._typing import (
+ AnyArrayLike,
+ ArrayLike,
+ AxisInt,
+ DtypeArg,
+ FilePath,
+ Shape,
+ npt,
+ )
+ from pandas.core.internals import Block
# versioning attribute
_version = "0.15.2"
diff --git a/pandas/io/sas/_sas.pyi b/pandas/io/sas/_sas.pyi
index 5d65e2b56b591..73068693aa2c6 100644
--- a/pandas/io/sas/_sas.pyi
+++ b/pandas/io/sas/_sas.pyi
@@ -1,4 +1,7 @@
-from pandas.io.sas.sas7bdat import SAS7BDATReader
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from pandas.io.sas.sas7bdat import SAS7BDATReader
class Parser:
def __init__(self, parser: SAS7BDATReader) -> None: ...
diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py
index 7fd93e7ed0e23..303d3be17f0a7 100644
--- a/pandas/io/sas/sas7bdat.py
+++ b/pandas/io/sas/sas7bdat.py
@@ -21,15 +21,13 @@
timedelta,
)
import sys
-from typing import cast
+from typing import (
+ TYPE_CHECKING,
+ cast,
+)
import numpy as np
-from pandas._typing import (
- CompressionOptions,
- FilePath,
- ReadBuffer,
-)
from pandas.errors import (
EmptyDataError,
OutOfBoundsDatetime,
@@ -56,6 +54,13 @@
import pandas.io.sas.sas_constants as const
from pandas.io.sas.sasreader import ReaderBase
+if TYPE_CHECKING:
+ from pandas._typing import (
+ CompressionOptions,
+ FilePath,
+ ReadBuffer,
+ )
+
def _parse_datetime(sas_datetime: float, unit: str):
if isna(sas_datetime):
diff --git a/pandas/io/sas/sas_xport.py b/pandas/io/sas/sas_xport.py
index 6767dec6e4528..f23b18fdcb584 100644
--- a/pandas/io/sas/sas_xport.py
+++ b/pandas/io/sas/sas_xport.py
@@ -12,16 +12,11 @@
from collections import abc
from datetime import datetime
import struct
+from typing import TYPE_CHECKING
import warnings
import numpy as np
-from pandas._typing import (
- CompressionOptions,
- DatetimeNaTType,
- FilePath,
- ReadBuffer,
-)
from pandas.util._decorators import Appender
from pandas.util._exceptions import find_stack_level
@@ -30,6 +25,13 @@
from pandas.io.common import get_handle
from pandas.io.sas.sasreader import ReaderBase
+if TYPE_CHECKING:
+ from pandas._typing import (
+ CompressionOptions,
+ DatetimeNaTType,
+ FilePath,
+ ReadBuffer,
+ )
_correct_line1 = (
"HEADER RECORD*******LIBRARY HEADER RECORD!!!!!!!"
"000000000000000000000000000000 "
diff --git a/pandas/io/sas/sasreader.py b/pandas/io/sas/sasreader.py
index 22910484876a6..d56f4c7ebc695 100644
--- a/pandas/io/sas/sasreader.py
+++ b/pandas/io/sas/sasreader.py
@@ -7,18 +7,12 @@
ABCMeta,
abstractmethod,
)
-from types import TracebackType
from typing import (
TYPE_CHECKING,
Hashable,
overload,
)
-from pandas._typing import (
- CompressionOptions,
- FilePath,
- ReadBuffer,
-)
from pandas.util._decorators import doc
from pandas.core.shared_docs import _shared_docs
@@ -26,6 +20,14 @@
from pandas.io.common import stringify_path
if TYPE_CHECKING:
+ from types import TracebackType
+
+ from pandas._typing import (
+ CompressionOptions,
+ FilePath,
+ ReadBuffer,
+ )
+
from pandas import DataFrame
diff --git a/pandas/io/spss.py b/pandas/io/spss.py
index bb9ace600e6f2..4f898b3e2402d 100644
--- a/pandas/io/spss.py
+++ b/pandas/io/spss.py
@@ -1,6 +1,5 @@
from __future__ import annotations
-from pathlib import Path
from typing import (
TYPE_CHECKING,
Sequence,
@@ -16,6 +15,8 @@
from pandas.io.common import stringify_path
if TYPE_CHECKING:
+ from pathlib import Path
+
from pandas import DataFrame
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 7dedd705f8c70..340e181121bdb 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -35,11 +35,6 @@
from pandas._config import using_nullable_dtypes
from pandas._libs import lib
-from pandas._typing import (
- DateTimeErrorChoices,
- DtypeArg,
- IndexLabel,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.errors import (
AbstractMethodError,
@@ -74,6 +69,12 @@
TextClause,
)
+ from pandas._typing import (
+ DateTimeErrorChoices,
+ DtypeArg,
+ IndexLabel,
+ )
+
# -----------------------------------------------------------------------------
# -- Helper functions
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 5cc13892224c5..5b6326685d63e 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -17,7 +17,6 @@
import os
import struct
import sys
-from types import TracebackType
from typing import (
IO,
TYPE_CHECKING,
@@ -36,13 +35,6 @@
from pandas._libs.lib import infer_dtype
from pandas._libs.writers import max_len_string_array
-from pandas._typing import (
- CompressionOptions,
- FilePath,
- ReadBuffer,
- StorageOptions,
- WriteBuffer,
-)
from pandas.errors import (
CategoricalConversionWarning,
InvalidColumnName,
@@ -81,8 +73,17 @@
from pandas.io.common import get_handle
if TYPE_CHECKING:
+ from types import TracebackType
from typing import Literal
+ from pandas._typing import (
+ CompressionOptions,
+ FilePath,
+ ReadBuffer,
+ StorageOptions,
+ WriteBuffer,
+ )
+
_version_error = (
"Version of given Stata file is {version}. pandas supports importing "
"versions 105, 108, 111 (Stata 7SE), 113 (Stata 8/9), "
diff --git a/pandas/io/xml.py b/pandas/io/xml.py
index 90d67ac45d4fd..17c6f92847d5c 100644
--- a/pandas/io/xml.py
+++ b/pandas/io/xml.py
@@ -6,6 +6,7 @@
import io
from typing import (
+ TYPE_CHECKING,
Any,
Callable,
Sequence,
@@ -14,17 +15,6 @@
from pandas._config import using_nullable_dtypes
from pandas._libs import lib
-from pandas._typing import (
- TYPE_CHECKING,
- CompressionOptions,
- ConvertersArg,
- DtypeArg,
- FilePath,
- ParseDatesArg,
- ReadBuffer,
- StorageOptions,
- XMLParsers,
-)
from pandas.compat._optional import import_optional_dependency
from pandas.errors import (
AbstractMethodError,
@@ -51,6 +41,17 @@
from lxml import etree
+ from pandas._typing import (
+ CompressionOptions,
+ ConvertersArg,
+ DtypeArg,
+ FilePath,
+ ParseDatesArg,
+ ReadBuffer,
+ StorageOptions,
+ XMLParsers,
+ )
+
from pandas import DataFrame
diff --git a/pyproject.toml b/pyproject.toml
index 7af017b2de298..07913b7895500 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -300,11 +300,6 @@ exclude = [
"pandas/core/sorting.py" = ["TCH"]
"pandas/core/construction.py" = ["TCH"]
"pandas/core/missing.py" = ["TCH"]
-"pandas/core/reshape/*" = ["TCH"]
-"pandas/core/strings/*" = ["TCH"]
-"pandas/core/tools/*" = ["TCH"]
-"pandas/core/window/*" = ["TCH"]
-"pandas/io/*" = ["TCH"]
"pandas/tseries/*" = ["TCH"]
"pandas/tests/*" = ["TCH"]
"pandas/plotting/*" = ["TCH"]
| xref #51740, and fix
```
"pandas/core/reshape/*"
"pandas/core/strings/*"
"pandas/core/tools/*"
"pandas/core/window/*"
"pandas/io/*"
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/51812 | 2023-03-07T01:20:44Z | 2023-03-07T10:23:06Z | 2023-03-07T10:23:06Z | 2023-03-08T00:17:44Z |
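The TCH-fix diffs above all apply the same idiom: imports that are needed only for type annotations are moved under a `typing.TYPE_CHECKING` guard, which is `False` at runtime, so the modules (`pathlib`, `pandas._typing`, etc.) are never actually imported when pandas runs. A minimal, self-contained sketch of the pattern (the function name `read_spss_stub` is hypothetical, used only for illustration):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by static type checkers (mypy, pyright); TYPE_CHECKING is
    # False at runtime, so this import is skipped and costs nothing.
    from pathlib import Path


def read_spss_stub(path: "Path | str") -> str:
    # The annotation is a string here so it is never evaluated at runtime;
    # the pandas modules in this PR achieve the same effect with
    # `from __future__ import annotations`, which makes all annotations lazy.
    return str(path)
```

This is why the PR can also tighten the Ruff `pyproject.toml` exclusions: once the runtime imports are gone, the `TCH` (flake8-type-checking) rules pass for `pandas/io/*` and the reshape/strings/tools/window subpackages.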
DEPR: observed=False default in groupby | diff --git a/doc/source/user_guide/10min.rst b/doc/source/user_guide/10min.rst
index 6fc53fe09d791..7c98c99fecd5b 100644
--- a/doc/source/user_guide/10min.rst
+++ b/doc/source/user_guide/10min.rst
@@ -702,11 +702,11 @@ Sorting is per order in the categories, not lexical order:
df.sort_values(by="grade")
-Grouping by a categorical column also shows empty categories:
+Grouping by a categorical column with ``observed=False`` also shows empty categories:
.. ipython:: python
- df.groupby("grade").size()
+ df.groupby("grade", observed=False).size()
Plotting
diff --git a/doc/source/user_guide/advanced.rst b/doc/source/user_guide/advanced.rst
index ef08d709822d2..3ce54cfebf65a 100644
--- a/doc/source/user_guide/advanced.rst
+++ b/doc/source/user_guide/advanced.rst
@@ -800,8 +800,8 @@ Groupby operations on the index will preserve the index nature as well.
.. ipython:: python
- df2.groupby(level=0).sum()
- df2.groupby(level=0).sum().index
+ df2.groupby(level=0, observed=True).sum()
+ df2.groupby(level=0, observed=True).sum().index
Reindexing operations will return a resulting index based on the type of the passed
indexer. Passing a list will return a plain-old ``Index``; indexing with
diff --git a/doc/source/user_guide/categorical.rst b/doc/source/user_guide/categorical.rst
index 0b2224fe9bb32..e486235f044f5 100644
--- a/doc/source/user_guide/categorical.rst
+++ b/doc/source/user_guide/categorical.rst
@@ -607,7 +607,7 @@ even if some categories are not present in the data:
s = pd.Series(pd.Categorical(["a", "b", "c", "c"], categories=["c", "a", "b", "d"]))
s.value_counts()
-``DataFrame`` methods like :meth:`DataFrame.sum` also show "unused" categories.
+``DataFrame`` methods like :meth:`DataFrame.sum` also show "unused" categories when ``observed=False``.
.. ipython:: python
@@ -618,9 +618,9 @@ even if some categories are not present in the data:
data=[[1, 2, 3], [4, 5, 6]],
columns=pd.MultiIndex.from_arrays([["A", "B", "B"], columns]),
).T
- df.groupby(level=1).sum()
+ df.groupby(level=1, observed=False).sum()
-Groupby will also show "unused" categories:
+Groupby will also show "unused" categories when ``observed=False``:
.. ipython:: python
@@ -628,7 +628,7 @@ Groupby will also show "unused" categories:
["a", "b", "b", "b", "c", "c", "c"], categories=["a", "b", "c", "d"]
)
df = pd.DataFrame({"cats": cats, "values": [1, 2, 2, 2, 3, 4, 5]})
- df.groupby("cats").mean()
+ df.groupby("cats", observed=False).mean()
cats2 = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])
df2 = pd.DataFrame(
@@ -638,7 +638,7 @@ Groupby will also show "unused" categories:
"values": [1, 2, 3, 4],
}
)
- df2.groupby(["cats", "B"]).mean()
+ df2.groupby(["cats", "B"], observed=False).mean()
Pivot tables:
diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index 31c4bd1d7c87c..56e62ba20e030 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -1401,7 +1401,7 @@ can be used as group keys. If so, the order of the levels will be preserved:
factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
- data.groupby(factor).mean()
+ data.groupby(factor, observed=False).mean()
.. _groupby.specify:
diff --git a/doc/source/whatsnew/v0.15.0.rst b/doc/source/whatsnew/v0.15.0.rst
index f52253687ecfd..67e91751e9527 100644
--- a/doc/source/whatsnew/v0.15.0.rst
+++ b/doc/source/whatsnew/v0.15.0.rst
@@ -85,7 +85,7 @@ For full docs, see the :ref:`categorical introduction <categorical>` and the
"medium", "good", "very good"])
df["grade"]
df.sort_values("grade")
- df.groupby("grade").size()
+ df.groupby("grade", observed=False).size()
- ``pandas.core.group_agg`` and ``pandas.core.factor_agg`` were removed. As an alternative, construct
a dataframe and use ``df.groupby(<group>).agg(<func>)``.
diff --git a/doc/source/whatsnew/v0.19.0.rst b/doc/source/whatsnew/v0.19.0.rst
index feeb7b5ee30ce..ab17cacd830e5 100644
--- a/doc/source/whatsnew/v0.19.0.rst
+++ b/doc/source/whatsnew/v0.19.0.rst
@@ -1134,7 +1134,7 @@ As a consequence, ``groupby`` and ``set_index`` also preserve categorical dtypes
.. ipython:: python
df = pd.DataFrame({"A": [0, 1], "B": [10, 11], "C": cat})
- df_grouped = df.groupby(by=["A", "C"]).first()
+ df_grouped = df.groupby(by=["A", "C"], observed=False).first()
df_set_idx = df.set_index(["A", "C"])
**Previous behavior**:
diff --git a/doc/source/whatsnew/v0.20.0.rst b/doc/source/whatsnew/v0.20.0.rst
index b41a469fe0c1f..34a875f59e808 100644
--- a/doc/source/whatsnew/v0.20.0.rst
+++ b/doc/source/whatsnew/v0.20.0.rst
@@ -289,7 +289,7 @@ In previous versions, ``.groupby(..., sort=False)`` would fail with a ``ValueErr
.. code-block:: ipython
- In [3]: df[df.chromosomes != '1'].groupby('chromosomes', sort=False).sum()
+ In [3]: df[df.chromosomes != '1'].groupby('chromosomes', observed=False, sort=False).sum()
---------------------------------------------------------------------------
ValueError: items in new_categories are not the same as in old categories
@@ -297,7 +297,7 @@ In previous versions, ``.groupby(..., sort=False)`` would fail with a ``ValueErr
.. ipython:: python
- df[df.chromosomes != '1'].groupby('chromosomes', sort=False).sum()
+ df[df.chromosomes != '1'].groupby('chromosomes', observed=False, sort=False).sum()
.. _whatsnew_0200.enhancements.table_schema:
diff --git a/doc/source/whatsnew/v0.22.0.rst b/doc/source/whatsnew/v0.22.0.rst
index ec9769c22e76b..c494b4f286662 100644
--- a/doc/source/whatsnew/v0.22.0.rst
+++ b/doc/source/whatsnew/v0.22.0.rst
@@ -109,7 +109,7 @@ instead of ``NaN``.
In [8]: grouper = pd.Categorical(['a', 'a'], categories=['a', 'b'])
- In [9]: pd.Series([1, 2]).groupby(grouper).sum()
+ In [9]: pd.Series([1, 2]).groupby(grouper, observed=False).sum()
Out[9]:
a 3.0
b NaN
@@ -120,14 +120,14 @@ instead of ``NaN``.
.. ipython:: python
grouper = pd.Categorical(["a", "a"], categories=["a", "b"])
- pd.Series([1, 2]).groupby(grouper).sum()
+ pd.Series([1, 2]).groupby(grouper, observed=False).sum()
To restore the 0.21 behavior of returning ``NaN`` for unobserved groups,
use ``min_count>=1``.
.. ipython:: python
- pd.Series([1, 2]).groupby(grouper).sum(min_count=1)
+ pd.Series([1, 2]).groupby(grouper, observed=False).sum(min_count=1)
Resample
^^^^^^^^
diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 865fa3c6ac949..6cf0d3848b912 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -99,6 +99,7 @@ Deprecations
- Deprecated silently dropping unrecognized timezones when parsing strings to datetimes (:issue:`18702`)
- Deprecated :meth:`DataFrame._data` and :meth:`Series._data`, use public APIs instead (:issue:`33333`)
- Deprecating pinning ``group.name`` to each group in :meth:`SeriesGroupBy.aggregate` aggregations; if your operation requires utilizing the groupby keys, iterate over the groupby object instead (:issue:`41090`)
+- Deprecated the default of ``observed=False`` in :meth:`DataFrame.groupby` and :meth:`Series.groupby`; this will default to ``True`` in a future version (:issue:`43999`)
- Deprecated ``axis=1`` in :meth:`DataFrame.groupby` and in :class:`Grouper` constructor, do ``frame.T.groupby(...)`` instead (:issue:`51203`)
- Deprecated passing a :class:`DataFrame` to :meth:`DataFrame.from_records`, use :meth:`DataFrame.set_index` or :meth:`DataFrame.drop` instead (:issue:`51353`)
- Deprecated accepting slices in :meth:`DataFrame.take`, call ``obj[slicer]`` or pass a sequence of integers instead (:issue:`51539`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 9dd5ee426e37c..f5ed3e8adf976 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -8677,7 +8677,7 @@ def groupby(
as_index: bool = True,
sort: bool = True,
group_keys: bool = True,
- observed: bool = False,
+ observed: bool | lib.NoDefault = lib.no_default,
dropna: bool = True,
) -> DataFrameGroupBy:
if axis is not lib.no_default:
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 96f39bb99e544..4f40728449d8a 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -68,6 +68,7 @@ class providing the base-class of operations.
cache_readonly,
doc,
)
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.cast import ensure_dtype_can_hold_na
from pandas.core.dtypes.common import (
@@ -905,7 +906,7 @@ def __init__(
as_index: bool = True,
sort: bool = True,
group_keys: bool | lib.NoDefault = True,
- observed: bool = False,
+ observed: bool | lib.NoDefault = lib.no_default,
dropna: bool = True,
) -> None:
self._selection = selection
@@ -922,7 +923,6 @@ def __init__(
self.keys = keys
self.sort = sort
self.group_keys = group_keys
- self.observed = observed
self.dropna = dropna
if grouper is None:
@@ -932,10 +932,23 @@ def __init__(
axis=axis,
level=level,
sort=sort,
- observed=observed,
+ observed=False if observed is lib.no_default else observed,
dropna=self.dropna,
)
+ if observed is lib.no_default:
+ if any(ping._passed_categorical for ping in grouper.groupings):
+ warnings.warn(
+ "The default of observed=False is deprecated and will be changed "
+ "to True in a future version of pandas. Pass observed=False to "
+ "retain current behavior or observed=True to adopt the future "
+ "default and silence this warning.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ observed = False
+ self.observed = observed
+
self.obj = obj
self.axis = obj._get_axis_number(axis)
self.grouper = grouper
@@ -2125,6 +2138,8 @@ def _value_counts(
result_series.index.droplevel(levels),
sort=self.sort,
dropna=self.dropna,
+ # GH#43999 - deprecation of observed=False
+ observed=False,
).transform("sum")
result_series /= indexed_group_size
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 87fed03a73daf..8fdc3da908c42 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -723,7 +723,11 @@ def _format_duplicate_message(self) -> DataFrame:
duplicates = self[self.duplicated(keep="first")].unique()
assert len(duplicates)
- out = Series(np.arange(len(self))).groupby(self).agg(list)[duplicates]
+ out = (
+ Series(np.arange(len(self)))
+ .groupby(self, observed=False)
+ .agg(list)[duplicates]
+ )
if self._is_multi:
# test_format_duplicate_labels_message_multi
# error: "Type[Index]" has no attribute "from_tuples" [attr-defined]
diff --git a/pandas/core/series.py b/pandas/core/series.py
index b0958869c67f3..f8723b5ebf9c7 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1999,7 +1999,7 @@ def groupby(
as_index: bool = True,
sort: bool = True,
group_keys: bool = True,
- observed: bool = False,
+ observed: bool | lib.NoDefault = lib.no_default,
dropna: bool = True,
) -> SeriesGroupBy:
from pandas.core.groupby.generic import SeriesGroupBy
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index f3917f539ae3f..09bebf6a92dca 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -154,6 +154,11 @@
This only applies if any of the groupers are Categoricals.
If True: only show observed values for categorical groupers.
If False: show all values for categorical groupers.
+
+ .. deprecated:: 2.1.0
+
+ The default value will change to True in a future version of pandas.
+
dropna : bool, default True
If True, and if group keys contain NA values, NA values together
with row/column will be dropped.
diff --git a/pandas/plotting/_matplotlib/boxplot.py b/pandas/plotting/_matplotlib/boxplot.py
index ca08f39b852ee..b39fc93f4f024 100644
--- a/pandas/plotting/_matplotlib/boxplot.py
+++ b/pandas/plotting/_matplotlib/boxplot.py
@@ -254,7 +254,7 @@ def _grouped_plot_by_column(
return_type=None,
**kwargs,
):
- grouped = data.groupby(by)
+ grouped = data.groupby(by, observed=False)
if columns is None:
if not isinstance(by, (list, tuple)):
by = [by]
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index ad53cf6629adb..200b04e0524f2 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -1250,7 +1250,7 @@ def test_groupby_single_agg_cat_cols(grp_col_dict, exp_data):
input_df = input_df.astype({"cat": "category", "cat_ord": "category"})
input_df["cat_ord"] = input_df["cat_ord"].cat.as_ordered()
- result_df = input_df.groupby("cat").agg(grp_col_dict)
+ result_df = input_df.groupby("cat", observed=False).agg(grp_col_dict)
# create expected dataframe
cat_index = pd.CategoricalIndex(
@@ -1289,7 +1289,7 @@ def test_groupby_combined_aggs_cat_cols(grp_col_dict, exp_data):
input_df = input_df.astype({"cat": "category", "cat_ord": "category"})
input_df["cat_ord"] = input_df["cat_ord"].cat.as_ordered()
- result_df = input_df.groupby("cat").agg(grp_col_dict)
+ result_df = input_df.groupby("cat", observed=False).agg(grp_col_dict)
# create expected dataframe
cat_index = pd.CategoricalIndex(
diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py
index a7ba1e8e81848..0699b7c1369f2 100644
--- a/pandas/tests/groupby/test_apply.py
+++ b/pandas/tests/groupby/test_apply.py
@@ -883,7 +883,7 @@ def test_apply_multi_level_name(category):
df = DataFrame(
{"A": np.arange(10), "B": b, "C": list(range(10)), "D": list(range(10))}
).set_index(["A", "B"])
- result = df.groupby("B").apply(lambda x: x.sum())
+ result = df.groupby("B", observed=False).apply(lambda x: x.sum())
tm.assert_frame_equal(result, expected)
assert df.index.names == ["A", "B"]
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index dbbfab14d5c76..e4dd07f790f47 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -739,7 +739,7 @@ def test_categorical_series(series, data):
# Group the given series by a series with categorical data type such that group A
# takes indices 0 and 3 and group B indices 1 and 2, obtaining the values mapped in
# the given data.
- groupby = series.groupby(Series(list("ABBA"), dtype="category"))
+ groupby = series.groupby(Series(list("ABBA"), dtype="category"), observed=False)
result = groupby.aggregate(list)
expected = Series(data, index=CategoricalIndex(data.keys()))
tm.assert_series_equal(result, expected)
@@ -1115,7 +1115,7 @@ def test_groupby_multiindex_categorical_datetime():
"values": np.arange(9),
}
)
- result = df.groupby(["key1", "key2"]).mean()
+ result = df.groupby(["key1", "key2"], observed=False).mean()
idx = MultiIndex.from_product(
[
@@ -1291,8 +1291,8 @@ def test_seriesgroupby_observed_apply_dict(df_cat, observed, index, data):
def test_groupby_categorical_series_dataframe_consistent(df_cat):
# GH 20416
- expected = df_cat.groupby(["A", "B"])["C"].mean()
- result = df_cat.groupby(["A", "B"]).mean()["C"]
+ expected = df_cat.groupby(["A", "B"], observed=False)["C"].mean()
+ result = df_cat.groupby(["A", "B"], observed=False).mean()["C"]
tm.assert_series_equal(result, expected)
@@ -1303,11 +1303,11 @@ def test_groupby_categorical_axis_1(code):
cat = Categorical.from_codes(code, categories=list("abc"))
msg = "DataFrame.groupby with axis=1 is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
- gb = df.groupby(cat, axis=1)
+ gb = df.groupby(cat, axis=1, observed=False)
result = gb.mean()
msg = "The 'axis' keyword in DataFrame.groupby is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
- gb2 = df.T.groupby(cat, axis=0)
+ gb2 = df.T.groupby(cat, axis=0, observed=False)
expected = gb2.mean().T
tm.assert_frame_equal(result, expected)
@@ -1478,7 +1478,7 @@ def test_series_groupby_categorical_aggregation_getitem():
df = DataFrame(d)
cat = pd.cut(df["foo"], np.linspace(0, 20, 5))
df["range"] = cat
- groups = df.groupby(["range", "baz"], as_index=True, sort=True)
+ groups = df.groupby(["range", "baz"], as_index=True, sort=True, observed=False)
result = groups["foo"].agg("mean")
expected = groups.agg("mean")["foo"]
tm.assert_series_equal(result, expected)
@@ -1539,7 +1539,7 @@ def test_read_only_category_no_sort():
{"a": [1, 3, 5, 7], "b": Categorical([1, 1, 2, 2], categories=Index(cats))}
)
expected = DataFrame(data={"a": [2.0, 6.0]}, index=CategoricalIndex(cats, name="b"))
- result = df.groupby("b", sort=False).mean()
+ result = df.groupby("b", sort=False, observed=False).mean()
tm.assert_frame_equal(result, expected)
@@ -1583,7 +1583,7 @@ def test_sorted_missing_category_values():
dtype="category",
)
- result = df.groupby(["bar", "foo"]).size().unstack()
+ result = df.groupby(["bar", "foo"], observed=False).size().unstack()
tm.assert_frame_equal(result, expected)
@@ -1748,7 +1748,7 @@ def test_groupby_categorical_indices_unused_categories():
"col": range(3),
}
)
- grouped = df.groupby("key", sort=False)
+ grouped = df.groupby("key", sort=False, observed=False)
result = grouped.indices
expected = {
"b": np.array([0, 1], dtype="intp"),
@@ -2013,3 +2013,15 @@ def test_many_categories(as_index, sort, index_kind, ordered):
expected = DataFrame({"a": Series(index), "b": data})
tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("cat_columns", ["a", "b", ["a", "b"]])
+@pytest.mark.parametrize("keys", ["a", "b", ["a", "b"]])
+def test_groupby_default_depr(cat_columns, keys):
+ # GH#43999
+ df = DataFrame({"a": [1, 1, 2, 3], "b": [4, 5, 6, 7]})
+ df[cat_columns] = df[cat_columns].astype("category")
+ msg = "The default of observed=False is deprecated"
+ klass = FutureWarning if set(cat_columns) & set(keys) else None
+ with tm.assert_produces_warning(klass, match=msg):
+ df.groupby(keys)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index ea4bb42fb7ee1..f1dad7a22c789 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -1926,7 +1926,7 @@ def test_empty_groupby(
df = df.iloc[:0]
- gb = df.groupby(keys, group_keys=False, dropna=dropna)[columns]
+ gb = df.groupby(keys, group_keys=False, dropna=dropna, observed=False)[columns]
def get_result(**kwargs):
if method == "attr":
@@ -2638,7 +2638,7 @@ def test_datetime_categorical_multikey_groupby_indices():
"c": Categorical.from_codes([-1, 0, 1], categories=[0, 1]),
}
)
- result = df.groupby(["a", "b"]).indices
+ result = df.groupby(["a", "b"], observed=False).indices
expected = {
("a", Timestamp("2018-01-01 00:00:00")): np.array([0]),
("b", Timestamp("2018-02-01 00:00:00")): np.array([1]),
diff --git a/pandas/tests/groupby/test_groupby_dropna.py b/pandas/tests/groupby/test_groupby_dropna.py
index 95184bfd770d1..a051b30307a28 100644
--- a/pandas/tests/groupby/test_groupby_dropna.py
+++ b/pandas/tests/groupby/test_groupby_dropna.py
@@ -448,7 +448,7 @@ def test_no_sort_keep_na(sequence_index, dtype, test_series, as_index):
"a": [0, 1, 2, 3],
}
)
- gb = df.groupby("key", dropna=False, sort=False, as_index=as_index)
+ gb = df.groupby("key", dropna=False, sort=False, as_index=as_index, observed=False)
if test_series:
gb = gb["a"]
result = gb.sum()
@@ -665,7 +665,7 @@ def test_categorical_agg():
df = pd.DataFrame(
{"x": pd.Categorical(values, categories=[1, 2, 3]), "y": range(len(values))}
)
- gb = df.groupby("x", dropna=False)
+ gb = df.groupby("x", dropna=False, observed=False)
result = gb.agg(lambda x: x.sum())
expected = gb.sum()
tm.assert_frame_equal(result, expected)
@@ -677,7 +677,7 @@ def test_categorical_transform():
df = pd.DataFrame(
{"x": pd.Categorical(values, categories=[1, 2, 3]), "y": range(len(values))}
)
- gb = df.groupby("x", dropna=False)
+ gb = df.groupby("x", dropna=False, observed=False)
result = gb.transform(lambda x: x.sum())
expected = gb.transform("sum")
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/groupby/test_min_max.py b/pandas/tests/groupby/test_min_max.py
index 11f62c5d03c49..8602f8bdb1aa1 100644
--- a/pandas/tests/groupby/test_min_max.py
+++ b/pandas/tests/groupby/test_min_max.py
@@ -236,7 +236,7 @@ def test_min_max_nullable_uint64_empty_group():
# don't raise NotImplementedError from libgroupby
cat = pd.Categorical([0] * 10, categories=[0, 1])
df = DataFrame({"A": cat, "B": pd.array(np.arange(10, dtype=np.uint64))})
- gb = df.groupby("A")
+ gb = df.groupby("A", observed=False)
res = gb.min()
diff --git a/pandas/tests/groupby/test_rank.py b/pandas/tests/groupby/test_rank.py
index 9f42f6ad72591..8c863dc2982ae 100644
--- a/pandas/tests/groupby/test_rank.py
+++ b/pandas/tests/groupby/test_rank.py
@@ -21,11 +21,11 @@ def test_rank_unordered_categorical_typeerror():
msg = "Cannot perform rank with non-ordered Categorical"
- gb = ser.groupby(cat)
+ gb = ser.groupby(cat, observed=False)
with pytest.raises(TypeError, match=msg):
gb.rank()
- gb2 = df.groupby(cat)
+ gb2 = df.groupby(cat, observed=False)
with pytest.raises(TypeError, match=msg):
gb2.rank()
diff --git a/pandas/tests/groupby/test_size.py b/pandas/tests/groupby/test_size.py
index e29f87992f8a1..7da6bc8a32013 100644
--- a/pandas/tests/groupby/test_size.py
+++ b/pandas/tests/groupby/test_size.py
@@ -83,7 +83,7 @@ def test_size_period_index():
def test_size_on_categorical(as_index):
df = DataFrame([[1, 1], [2, 2]], columns=["A", "B"])
df["A"] = df["A"].astype("category")
- result = df.groupby(["A", "B"], as_index=as_index).size()
+ result = df.groupby(["A", "B"], as_index=as_index, observed=False).size()
expected = DataFrame(
[[1, 1, 1], [1, 2, 0], [2, 1, 0], [2, 2, 1]], columns=["A", "B", "size"]
diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py
index 27ffeb9247556..d6d0b03a65ebb 100644
--- a/pandas/tests/groupby/transform/test_transform.py
+++ b/pandas/tests/groupby/transform/test_transform.py
@@ -1078,7 +1078,7 @@ def test_transform_absent_categories(func):
x_cats = range(2)
y = [1]
df = DataFrame({"x": Categorical(x_vals, x_cats), "y": y})
- result = getattr(df.y.groupby(df.x), func)()
+ result = getattr(df.y.groupby(df.x, observed=False), func)()
expected = df.y
tm.assert_series_equal(result, expected)
- [x] closes #43999
- [x] closes #30552
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/51811 | 2023-03-07T00:04:45Z | 2023-03-17T17:03:05Z | 2023-03-17T17:03:05Z | 2023-03-18T11:58:02Z |
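The core mechanism of this deprecation is the `lib.no_default` sentinel: it lets `groupby` distinguish "the caller never passed `observed`" from "the caller explicitly passed `observed=False`", and warn only in the former case when a categorical grouper is present. A simplified stdlib-only sketch of that logic (names `groupby_stub` and `grouper_is_categorical` are hypothetical stand-ins for the real grouper inspection):

```python
import warnings


class _NoDefault:
    """Sentinel mimicking pandas' lib.no_default."""

    def __repr__(self) -> str:
        return "<no_default>"


no_default = _NoDefault()


def groupby_stub(keys, observed=no_default, grouper_is_categorical=False):
    # Simplified version of the check this PR adds to GroupBy.__init__:
    # warn only when the argument was omitted AND a grouper is categorical.
    if observed is no_default:
        if grouper_is_categorical:
            warnings.warn(
                "The default of observed=False is deprecated and will be "
                "changed to True in a future version of pandas.",
                FutureWarning,
                stacklevel=2,
            )
        observed = False  # current behavior is preserved for now
    return observed
```

Passing `observed=` explicitly (either value) silences the warning, which is exactly what the many test-file edits in this diff do.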
Backport PR #51688 on branch 2.0.x (BUG: ArrowExtensionArray logical ops raising KeyError) | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index c6dd6d00f7c5a..12eb2375b69e1 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -1368,6 +1368,7 @@ ExtensionArray
- Bug in :class:`Series` constructor unnecessarily overflowing for nullable unsigned integer dtypes (:issue:`38798`, :issue:`25880`)
- Bug in setting non-string value into ``StringArray`` raising ``ValueError`` instead of ``TypeError`` (:issue:`49632`)
- Bug in :meth:`DataFrame.reindex` not honoring the default ``copy=True`` keyword in case of columns with ExtensionDtype (and as a result also selecting multiple columns with getitem (``[]``) didn't correctly result in a copy) (:issue:`51197`)
+- Bug in :class:`~arrays.ArrowExtensionArray` logical operations ``&`` and ``|`` raising ``KeyError`` (:issue:`51688`)
Styler
^^^^^^
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 76723a8973e27..4e0dd6b75d46a 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -81,10 +81,10 @@
}
ARROW_LOGICAL_FUNCS = {
- "and": pc.and_kleene,
- "rand": lambda x, y: pc.and_kleene(y, x),
- "or": pc.or_kleene,
- "ror": lambda x, y: pc.or_kleene(y, x),
+ "and_": pc.and_kleene,
+ "rand_": lambda x, y: pc.and_kleene(y, x),
+ "or_": pc.or_kleene,
+ "ror_": lambda x, y: pc.or_kleene(y, x),
"xor": pc.xor,
"rxor": lambda x, y: pc.xor(y, x),
}
@@ -491,7 +491,12 @@ def _evaluate_op_method(self, other, op, arrow_funcs):
elif isinstance(other, (np.ndarray, list)):
result = pc_func(self._data, pa.array(other, from_pandas=True))
elif is_scalar(other):
- result = pc_func(self._data, pa.scalar(other))
+ if isna(other) and op.__name__ in ARROW_LOGICAL_FUNCS:
+ # pyarrow kleene ops require null to be typed
+ pa_scalar = pa.scalar(None, type=self._data.type)
+ else:
+ pa_scalar = pa.scalar(other)
+ result = pc_func(self._data, pa_scalar)
else:
raise NotImplementedError(
f"{op.__name__} not implemented for {type(other)}"
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index a15aa66d0f966..cedaaa500736b 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -1270,6 +1270,150 @@ def test_invalid_other_comp(self, data, comparison_op):
comparison_op(data, object())
+class TestLogicalOps:
+ """Various Series and DataFrame logical ops methods."""
+
+ def test_kleene_or(self):
+ a = pd.Series([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean[pyarrow]")
+ b = pd.Series([True, False, None] * 3, dtype="boolean[pyarrow]")
+ result = a | b
+ expected = pd.Series(
+ [True, True, True, True, False, None, True, None, None],
+ dtype="boolean[pyarrow]",
+ )
+ tm.assert_series_equal(result, expected)
+
+ result = b | a
+ tm.assert_series_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_series_equal(
+ a,
+ pd.Series([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean[pyarrow]"),
+ )
+ tm.assert_series_equal(
+ b, pd.Series([True, False, None] * 3, dtype="boolean[pyarrow]")
+ )
+
+ @pytest.mark.parametrize(
+ "other, expected",
+ [
+ (None, [True, None, None]),
+ (pd.NA, [True, None, None]),
+ (True, [True, True, True]),
+ (np.bool_(True), [True, True, True]),
+ (False, [True, False, None]),
+ (np.bool_(False), [True, False, None]),
+ ],
+ )
+ def test_kleene_or_scalar(self, other, expected):
+ a = pd.Series([True, False, None], dtype="boolean[pyarrow]")
+ result = a | other
+ expected = pd.Series(expected, dtype="boolean[pyarrow]")
+ tm.assert_series_equal(result, expected)
+
+ result = other | a
+ tm.assert_series_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_series_equal(
+ a, pd.Series([True, False, None], dtype="boolean[pyarrow]")
+ )
+
+ def test_kleene_and(self):
+ a = pd.Series([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean[pyarrow]")
+ b = pd.Series([True, False, None] * 3, dtype="boolean[pyarrow]")
+ result = a & b
+ expected = pd.Series(
+ [True, False, None, False, False, False, None, False, None],
+ dtype="boolean[pyarrow]",
+ )
+ tm.assert_series_equal(result, expected)
+
+ result = b & a
+ tm.assert_series_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_series_equal(
+ a,
+ pd.Series([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean[pyarrow]"),
+ )
+ tm.assert_series_equal(
+ b, pd.Series([True, False, None] * 3, dtype="boolean[pyarrow]")
+ )
+
+ @pytest.mark.parametrize(
+ "other, expected",
+ [
+ (None, [None, False, None]),
+ (pd.NA, [None, False, None]),
+ (True, [True, False, None]),
+ (False, [False, False, False]),
+ (np.bool_(True), [True, False, None]),
+ (np.bool_(False), [False, False, False]),
+ ],
+ )
+ def test_kleene_and_scalar(self, other, expected):
+ a = pd.Series([True, False, None], dtype="boolean[pyarrow]")
+ result = a & other
+ expected = pd.Series(expected, dtype="boolean[pyarrow]")
+ tm.assert_series_equal(result, expected)
+
+ result = other & a
+ tm.assert_series_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_series_equal(
+ a, pd.Series([True, False, None], dtype="boolean[pyarrow]")
+ )
+
+ def test_kleene_xor(self):
+ a = pd.Series([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean[pyarrow]")
+ b = pd.Series([True, False, None] * 3, dtype="boolean[pyarrow]")
+ result = a ^ b
+ expected = pd.Series(
+ [False, True, None, True, False, None, None, None, None],
+ dtype="boolean[pyarrow]",
+ )
+ tm.assert_series_equal(result, expected)
+
+ result = b ^ a
+ tm.assert_series_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_series_equal(
+ a,
+ pd.Series([True] * 3 + [False] * 3 + [None] * 3, dtype="boolean[pyarrow]"),
+ )
+ tm.assert_series_equal(
+ b, pd.Series([True, False, None] * 3, dtype="boolean[pyarrow]")
+ )
+
+ @pytest.mark.parametrize(
+ "other, expected",
+ [
+ (None, [None, None, None]),
+ (pd.NA, [None, None, None]),
+ (True, [False, True, None]),
+ (np.bool_(True), [False, True, None]),
+ (np.bool_(False), [True, False, None]),
+ ],
+ )
+ def test_kleene_xor_scalar(self, other, expected):
+ a = pd.Series([True, False, None], dtype="boolean[pyarrow]")
+ result = a ^ other
+ expected = pd.Series(expected, dtype="boolean[pyarrow]")
+ tm.assert_series_equal(result, expected)
+
+ result = other ^ a
+ tm.assert_series_equal(result, expected)
+
+ # ensure we haven't mutated anything inplace
+ tm.assert_series_equal(
+ a, pd.Series([True, False, None], dtype="boolean[pyarrow]")
+ )
+
+
def test_arrowdtype_construct_from_string_type_with_unsupported_parameters():
with pytest.raises(NotImplementedError, match="Passing pyarrow type"):
ArrowDtype.construct_from_string("not_a_real_dype[s, tz=UTC][pyarrow]")
| Backport PR #51688: BUG: ArrowExtensionArray logical ops raising KeyError | https://api.github.com/repos/pandas-dev/pandas/pulls/51810 | 2023-03-06T23:43:58Z | 2023-03-07T22:20:07Z | 2023-03-07T22:20:07Z | 2023-03-07T22:20:08Z |
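The `pc.and_kleene` / `pc.or_kleene` semantics that these tests exercise are Kleene three-valued logic, where a null (`None`) means "unknown": `True | unknown` is `True` and `False & unknown` is `False`, because the unknown operand cannot change the result. A pure-Python sketch (hypothetical helper names, for illustration only — the real work is done by PyArrow compute kernels):

```python
def kleene_or(a, b):
    # True wins regardless of the other operand; otherwise any unknown
    # operand makes the result unknown.
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False


def kleene_and(a, b):
    # False wins regardless of the other operand.
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True


def kleene_xor(a, b):
    # XOR has no dominating value, so any unknown operand is fatal.
    if a is None or b is None:
        return None
    return a is not b
```

This is also why the fix has to type the null scalar (`pa.scalar(None, type=self._data.type)`): the Kleene kernels need a typed null to participate in the operation.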
REF: Add ExtensionArray.map | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index c12807304f74d..2dea25d6f10f4 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -32,6 +32,7 @@ Other enhancements
- :meth:`MultiIndex.sortlevel` and :meth:`Index.sortlevel` gained a new keyword ``na_position`` (:issue:`51612`)
- Improve error message when setting :class:`DataFrame` with wrong number of columns through :meth:`DataFrame.isetitem` (:issue:`51701`)
- Let :meth:`DataFrame.to_feather` accept a non-default :class:`Index` and non-string column names (:issue:`51787`)
+- :class:`api.extensions.ExtensionArray` now has a :meth:`~api.extensions.ExtensionArray.map` method (:issue:`51809`).
.. ---------------------------------------------------------------------------
.. _whatsnew_210.notable_bug_fixes:
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index c82b47867fbb3..e07d6c8298bca 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -46,6 +46,7 @@
is_categorical_dtype,
is_complex_dtype,
is_datetime64_dtype,
+ is_dict_like,
is_extension_array_dtype,
is_float_dtype,
is_integer,
@@ -1678,3 +1679,75 @@ def union_with_duplicates(
unique_vals = ensure_wrapped_if_datetimelike(unique_vals)
repeats = final_count.reindex(unique_vals).values
return np.repeat(unique_vals, repeats)
+
+
+def map_array(
+ arr: ArrayLike, mapper, na_action: Literal["ignore"] | None = None
+) -> np.ndarray | ExtensionArray | Index:
+ """
+ Map values using an input mapping or function.
+
+ Parameters
+ ----------
+ mapper : function, dict, or Series
+ Mapping correspondence.
+ na_action : {None, 'ignore'}, default None
+ If 'ignore', propagate NA values, without passing them to the
+ mapping correspondence.
+
+ Returns
+ -------
+ Union[ndarray, Index, ExtensionArray]
+ The output of the mapping function applied to the array.
+ If the function returns a tuple with more than one element
+ a MultiIndex will be returned.
+ """
+ if na_action not in (None, "ignore"):
+ msg = f"na_action must either be 'ignore' or None, {na_action} was passed"
+ raise ValueError(msg)
+
+ # we can fastpath dict/Series to an efficient map
+ # as we know that we are not going to have to yield
+ # python types
+ if is_dict_like(mapper):
+ if isinstance(mapper, dict) and hasattr(mapper, "__missing__"):
+ # If a dictionary subclass defines a default value method,
+ # convert mapper to a lookup function (GH #15999).
+ dict_with_default = mapper
+ mapper = lambda x: dict_with_default[
+ np.nan if isinstance(x, float) and np.isnan(x) else x
+ ]
+ else:
+ # Dictionary does not have a default. Thus it's safe to
+ # convert to an Series for efficiency.
+ # we specify the keys here to handle the
+ # possibility that they are tuples
+
+ # The return value of mapping with an empty mapper is
+ # expected to be pd.Series(np.nan, ...). As np.nan is
+ # of dtype float64 the return value of this method should
+ # be float64 as well
+ from pandas import Series
+
+ if len(mapper) == 0:
+ mapper = Series(mapper, dtype=np.float64)
+ else:
+ mapper = Series(mapper)
+
+ if isinstance(mapper, ABCSeries):
+ if na_action == "ignore":
+ mapper = mapper[mapper.index.notna()]
+
+ # Since values were input this means we came from either
+ # a dict or a series and mapper should be an index
+ indexer = mapper.index.get_indexer(arr)
+ new_values = take_nd(mapper._values, indexer)
+
+ return new_values
+
+ # we must convert to python types
+ values = arr.astype(object, copy=False)
+ if na_action is None:
+ return lib.map_infer(values, mapper)
+ else:
+ return lib.map_infer_mask(values, mapper, isna(values).view(np.uint8))
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 4790d00a071cb..f23b05a5c41b8 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -1068,8 +1068,7 @@ def apply_standard(self) -> DataFrame | Series:
return f(obj)
# row-wise access
- if is_extension_array_dtype(obj.dtype) and hasattr(obj._values, "map"):
- # GH#23179 some EAs do not have `map`
+ if is_extension_array_dtype(obj.dtype):
mapped = obj._values.map(f)
else:
values = obj.astype(object)._values
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 1a082a7579dc3..9313d49ec87de 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -78,6 +78,7 @@
from pandas.core.algorithms import (
factorize_array,
isin,
+ map_array,
mode,
rank,
unique,
@@ -180,6 +181,7 @@ class ExtensionArray:
* factorize / _values_for_factorize
* argsort, argmax, argmin / _values_for_argsort
* searchsorted
+ * map
The remaining methods implemented on this class should be performant,
as they only compose abstract methods. Still, a more efficient
@@ -1706,6 +1708,28 @@ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
return arraylike.default_array_ufunc(self, ufunc, method, *inputs, **kwargs)
+ def map(self, mapper, na_action=None):
+ """
+ Map values using an input mapping or function.
+
+ Parameters
+ ----------
+ mapper : function, dict, or Series
+ Mapping correspondence.
+ na_action : {None, 'ignore'}, default None
+ If 'ignore', propagate NA values, without passing them to the
+ mapping correspondence. If 'ignore' is not supported, a
+ ``NotImplementedError`` should be raised.
+
+ Returns
+ -------
+ Union[ndarray, Index, ExtensionArray]
+ The output of the mapping function applied to the array.
+ If the function returns a tuple with more than one element
+ a MultiIndex will be returned.
+ """
+ return map_array(self, mapper, na_action=na_action)
+
class ExtensionArraySupportsAnyAll(ExtensionArray):
def any(self, *, skipna: bool = True) -> bool:
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index be6c8493963ea..bdd9e1727ca10 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1198,7 +1198,7 @@ def remove_unused_categories(self) -> Categorical:
# ------------------------------------------------------------------
- def map(self, mapper):
+ def map(self, mapper, na_action=None):
"""
Map categories using an input mapping or function.
@@ -1267,6 +1267,9 @@ def map(self, mapper):
>>> cat.map({'a': 'first', 'b': 'second'})
Index(['first', 'second', nan], dtype='object')
"""
+ if na_action is not None:
+ raise NotImplementedError
+
new_categories = self.categories.map(mapper)
try:
return self.from_codes(
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 1e9b5641aa5e0..5295f89626e04 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -750,7 +750,10 @@ def _unbox(self, other) -> np.int64 | np.datetime64 | np.timedelta64 | np.ndarra
# pandas assumes they're there.
@ravel_compat
- def map(self, mapper):
+ def map(self, mapper, na_action=None):
+ if na_action is not None:
+ raise NotImplementedError
+
# TODO(GH-23179): Add ExtensionArray.map
# Need to figure out if we want ExtensionArray.map first.
# If so, then we can refactor IndexOpsMixin._map_values to
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 78153890745d7..600fb92a6b279 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -1270,7 +1270,7 @@ def astype(self, dtype: AstypeArg | None = None, copy: bool = True):
return self._simple_new(sp_values, self.sp_index, dtype)
- def map(self: SparseArrayT, mapper) -> SparseArrayT:
+ def map(self: SparseArrayT, mapper, na_action=None) -> SparseArrayT:
"""
Map categories using an input mapping or function.
@@ -1278,6 +1278,9 @@ def map(self: SparseArrayT, mapper) -> SparseArrayT:
----------
mapper : dict, Series, callable
The correspondence from old values to new.
+ na_action : {None, 'ignore'}, default None
+ If 'ignore', propagate NA values, without passing them to the
+ mapping correspondence.
Returns
-------
@@ -1307,6 +1310,9 @@ def map(self: SparseArrayT, mapper) -> SparseArrayT:
IntIndex
Indices: array([1, 2], dtype=int32)
"""
+ if na_action is not None:
+ raise NotImplementedError
+
# this is used in apply.
# We get hit since we're an "is_extension_array_dtype" but regular extension
# types are not hit. This may be worth adding to the interface.
diff --git a/pandas/core/base.py b/pandas/core/base.py
index d9b2647d19f93..2d3338d7c9f10 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -40,8 +40,6 @@
from pandas.core.dtypes.cast import can_hold_element
from pandas.core.dtypes.common import (
- is_categorical_dtype,
- is_dict_like,
is_extension_array_dtype,
is_object_dtype,
is_scalar,
@@ -78,7 +76,6 @@
)
from pandas import (
- Categorical,
Index,
Series,
)
@@ -882,87 +879,19 @@ def _map_values(self, mapper, na_action=None):
If the function returns a tuple with more than one element
a MultiIndex will be returned.
"""
- # we can fastpath dict/Series to an efficient map
- # as we know that we are not going to have to yield
- # python types
- if is_dict_like(mapper):
- if isinstance(mapper, dict) and hasattr(mapper, "__missing__"):
- # If a dictionary subclass defines a default value method,
- # convert mapper to a lookup function (GH #15999).
- dict_with_default = mapper
- mapper = lambda x: dict_with_default[
- np.nan if isinstance(x, float) and np.isnan(x) else x
- ]
- else:
- # Dictionary does not have a default. Thus it's safe to
- # convert to an Series for efficiency.
- # we specify the keys here to handle the
- # possibility that they are tuples
-
- # The return value of mapping with an empty mapper is
- # expected to be pd.Series(np.nan, ...). As np.nan is
- # of dtype float64 the return value of this method should
- # be float64 as well
- from pandas import Series
-
- if len(mapper) == 0:
- mapper = Series(mapper, dtype=np.float64)
- else:
- mapper = Series(mapper)
-
- if isinstance(mapper, ABCSeries):
- if na_action not in (None, "ignore"):
- msg = (
- "na_action must either be 'ignore' or None, "
- f"{na_action} was passed"
- )
- raise ValueError(msg)
-
- if na_action == "ignore":
- mapper = mapper[mapper.index.notna()]
-
- # Since values were input this means we came from either
- # a dict or a series and mapper should be an index
- if is_categorical_dtype(self.dtype):
- # use the built in categorical series mapper which saves
- # time by mapping the categories instead of all values
-
- cat = cast("Categorical", self._values)
- return cat.map(mapper)
-
- values = self._values
-
- indexer = mapper.index.get_indexer(values)
- new_values = algorithms.take_nd(mapper._values, indexer)
-
- return new_values
-
- # we must convert to python types
- if is_extension_array_dtype(self.dtype) and hasattr(self._values, "map"):
- # GH#23179 some EAs do not have `map`
- values = self._values
- if na_action is not None:
- raise NotImplementedError
- map_f = lambda values, f: values.map(f)
- else:
- values = self._values.astype(object)
- if na_action == "ignore":
- map_f = lambda values, f: lib.map_infer_mask(
- values, f, isna(values).view(np.uint8)
- )
- elif na_action is None:
- map_f = lib.map_infer
- else:
- msg = (
- "na_action must either be 'ignore' or None, "
- f"{na_action} was passed"
- )
- raise ValueError(msg)
-
- # mapper is a function
- new_values = map_f(values, mapper)
-
- return new_values
+ arr = extract_array(self, extract_numpy=True, extract_range=True)
+
+ if is_extension_array_dtype(arr.dtype):
+ # Item "IndexOpsMixin" of "Union[IndexOpsMixin, ExtensionArray,
+ # ndarray[Any, Any]]" has no attribute "map"
+ return arr.map(mapper, na_action=na_action) # type: ignore[union-attr]
+
+ # Argument 1 to "map_array" has incompatible type
+ # "Union[IndexOpsMixin, ExtensionArray, ndarray[Any, Any]]";
+ # expected "Union[ExtensionArray, ndarray[Any, Any]]"
+ return algorithms.map_array(
+ arr, mapper, na_action=na_action # type: ignore[arg-type]
+ )
@final
def value_counts(
diff --git a/pandas/tests/extension/base/methods.py b/pandas/tests/extension/base/methods.py
index 173edfbdcd7a7..9f556b47937f7 100644
--- a/pandas/tests/extension/base/methods.py
+++ b/pandas/tests/extension/base/methods.py
@@ -88,6 +88,12 @@ def test_apply_simple_series(self, data):
result = pd.Series(data).apply(id)
assert isinstance(result, pd.Series)
+ @pytest.mark.parametrize("na_action", [None, "ignore"])
+ def test_map(self, data, na_action):
+ result = data.map(lambda x: x, na_action=na_action)
+ expected = data.to_numpy()
+ tm.assert_numpy_array_equal(result, expected)
+
def test_argsort(self, data_for_sorting):
result = pd.Series(data_for_sorting).argsort()
# argsort result gets passed to take, so should be np.intp
diff --git a/pandas/tests/extension/test_categorical.py b/pandas/tests/extension/test_categorical.py
index 9a363c6a0f022..34a23315fd9fa 100644
--- a/pandas/tests/extension/test_categorical.py
+++ b/pandas/tests/extension/test_categorical.py
@@ -184,6 +184,15 @@ def test_combine_add(self, data_repeated):
expected = pd.Series([a + val for a in list(orig_data1)])
self.assert_series_equal(result, expected)
+ @pytest.mark.parametrize("na_action", [None, "ignore"])
+ def test_map(self, data, na_action):
+ if na_action is not None:
+ with pytest.raises(NotImplementedError, match=""):
+ data.map(lambda x: x, na_action=na_action)
+ else:
+ result = data.map(lambda x: x, na_action=na_action)
+ self.assert_extension_array_equal(result, data)
+
class TestCasting(base.BaseCastingTests):
@pytest.mark.parametrize("cls", [Categorical, CategoricalIndex])
diff --git a/pandas/tests/extension/test_datetime.py b/pandas/tests/extension/test_datetime.py
index 92796c604333d..21f4f949cdfa9 100644
--- a/pandas/tests/extension/test_datetime.py
+++ b/pandas/tests/extension/test_datetime.py
@@ -116,6 +116,15 @@ def test_combine_add(self, data_repeated):
# Timestamp.__add__(Timestamp) not defined
pass
+ @pytest.mark.parametrize("na_action", [None, "ignore"])
+ def test_map(self, data, na_action):
+ if na_action is not None:
+ with pytest.raises(NotImplementedError, match=""):
+ data.map(lambda x: x, na_action=na_action)
+ else:
+ result = data.map(lambda x: x, na_action=na_action)
+ self.assert_extension_array_equal(result, data)
+
class TestInterface(BaseDatetimeTests, base.BaseInterfaceTests):
pass
diff --git a/pandas/tests/extension/test_period.py b/pandas/tests/extension/test_period.py
index cb1ebd87875e1..4be761358ba15 100644
--- a/pandas/tests/extension/test_period.py
+++ b/pandas/tests/extension/test_period.py
@@ -105,6 +105,15 @@ def test_diff(self, data, periods):
else:
super().test_diff(data, periods)
+ @pytest.mark.parametrize("na_action", [None, "ignore"])
+ def test_map(self, data, na_action):
+ if na_action is not None:
+ with pytest.raises(NotImplementedError, match=""):
+ data.map(lambda x: x, na_action=na_action)
+ else:
+ result = data.map(lambda x: x, na_action=na_action)
+ self.assert_extension_array_equal(result, data)
+
class TestInterface(BasePeriodTests, base.BaseInterfaceTests):
pass
diff --git a/pandas/tests/extension/test_sparse.py b/pandas/tests/extension/test_sparse.py
index 836e644affbda..e14de81d6fbd6 100644
--- a/pandas/tests/extension/test_sparse.py
+++ b/pandas/tests/extension/test_sparse.py
@@ -351,6 +351,15 @@ def test_equals(self, data, na_value, as_series, box):
self._check_unsupported(data)
super().test_equals(data, na_value, as_series, box)
+ @pytest.mark.parametrize("na_action", [None, "ignore"])
+ def test_map(self, data, na_action):
+ if na_action is not None:
+ with pytest.raises(NotImplementedError, match=""):
+ data.map(lambda x: x, na_action=na_action)
+ else:
+ result = data.map(lambda x: x, na_action=na_action)
+ self.assert_extension_array_equal(result, data)
+
class TestCasting(BaseSparseTests, base.BaseCastingTests):
def test_astype_str(self, data):
| - [x] closes https://github.com/pandas-dev/pandas/issues/23179
Working with the `.map` method is very uncomfortable in main, because of issues with `ExtensionArray`:
* `ExtensionArray` is not required to have a `.map` method, but may have it.
* Some `ExtensionArray` subclasses that do have a `map` method take one parameter (`mapper`), while others take two (`mapper` and `na_action`).
* There is fallback functionality for calculating `ExtensionArray.map` in `IndexOpsMixin._map_values`. This means we can calculate `.map` for an `ExtensionArray` indirectly, if we wrap it in a `Series` or `Index`, but we cannot rely on the ability to do it directly on the `ExtensionArray`.
* `ExtensionArray` subclasses often want to create their own `.map` methods.
* Because of all the above issues, code that uses `.map` is often maze-like to follow.
This PR solves this by:
1. moving the functionality in `IndexOpsMixin._map_values` to a function `map_array` in `algorithms`.
2. adding a `.map` method to `ExtensionArray` that calls the new `map_array` function. Subclasses will override this method where needed.
3. Call the new `map_array` function inside `IndexOpsMixin._map_values`.
4. Ensuring a common signature for the `.map` method for `ExtensionArray`, i.e. always two method parameters (`mapper` and `na_action`), so we can reliably call the `.map` method on any `ExtensionArray` subclass.
Note that this PR doesn't really add new functionality, because we're mostly just moving code around, though it is now possible to call the `map` method on any `ExtensionArray` directly, instead of going through e.g. `Series.map`.
In the main branch, there are currently several `ExtensionArray` subclasses where `na_action='ignore'` doesn't work. I'm working on making it work with all of the `ExtensionArray` subclasses; #51645, which handles `Categorical`/`CategoricalIndex`, is already open, and more will follow.
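The dispatch logic that `map_array` centralizes can be sketched in plain Python (a simplified, pandas-free illustration under assumed semantics; the real function converts dicts to a `Series` and uses an index lookup plus `lib.map_infer` rather than list comprehensions):

```python
def map_values(values, mapper, na_action=None):
    """Simplified sketch of the map_array dispatch: dict fastpath vs. callable."""
    if na_action not in (None, "ignore"):
        raise ValueError(
            f"na_action must either be 'ignore' or None, {na_action} was passed"
        )
    if isinstance(mapper, dict):
        if hasattr(type(mapper), "__missing__"):
            # dict subclass with a default value method (GH #15999):
            # use __getitem__ so __missing__ supplies the default
            lookup = mapper.__getitem__
        else:
            # plain dict: absent keys map to a missing value (None here)
            lookup = mapper.get
        return [lookup(v) for v in values]
    # mapper is a function; optionally propagate missing values untouched
    if na_action == "ignore":
        return [v if v is None else mapper(v) for v in values]
    return [mapper(v) for v in values]
```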
| https://api.github.com/repos/pandas-dev/pandas/pulls/51809 | 2023-03-06T23:04:26Z | 2023-03-13T16:49:55Z | 2023-03-13T16:49:55Z | 2023-09-25T08:33:34Z |
DOC Fix EX01 issues in docstrings | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index bb1b2ab8accbb..8fefa47c16bab 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -93,8 +93,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
pandas.Series.item \
pandas.Series.pipe \
pandas.Series.mode \
- pandas.Series.sem \
- pandas.Series.skew \
pandas.Series.is_unique \
pandas.Series.is_monotonic_increasing \
pandas.Series.is_monotonic_decreasing \
@@ -542,8 +540,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
pandas.DataFrame.keys \
pandas.DataFrame.iterrows \
pandas.DataFrame.pipe \
- pandas.DataFrame.sem \
- pandas.DataFrame.skew \
pandas.DataFrame.backfill \
pandas.DataFrame.pad \
pandas.DataFrame.swapaxes \
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 821e41db6b065..003e4cc5b8b23 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -11382,7 +11382,41 @@ def all(
name2=name2,
axis_descr=axis_descr,
notes="",
- examples="",
+ examples="""
+
+ Examples
+ --------
+ >>> s = pd.Series([1, 2, 3])
+ >>> s.sem().round(6)
+ 0.57735
+
+ With a DataFrame
+
+ >>> df = pd.DataFrame({'a': [1, 2], 'b': [2, 3]}, index=['tiger', 'zebra'])
+ >>> df
+ a b
+ tiger 1 2
+ zebra 2 3
+ >>> df.sem()
+ a 0.5
+ b 0.5
+ dtype: float64
+
+ Using axis=1
+
+ >>> df.sem(axis=1)
+ tiger 0.5
+ zebra 0.5
+ dtype: float64
+
+ In this case, `numeric_only` should be set to `True`
+ to avoid getting an error.
+
+ >>> df = pd.DataFrame({'a': [1, 2], 'b': ['T', 'Z']},
+ ... index=['tiger', 'zebra'])
+ >>> df.sem(numeric_only=True)
+ a 0.5
+ dtype: float64""",
)
def sem(
self,
@@ -11615,7 +11649,45 @@ def mean(
axis_descr=axis_descr,
min_count="",
see_also="",
- examples="",
+ examples="""
+
+ Examples
+ --------
+ >>> s = pd.Series([1, 2, 3])
+ >>> s.skew()
+ 0.0
+
+ With a DataFrame
+
+ >>> df = pd.DataFrame({'a': [1, 2, 3], 'b': [2, 3, 4], 'c': [1, 3, 5]},
+ ... index=['tiger', 'zebra', 'cow'])
+ >>> df
+ a b c
+ tiger 1 2 1
+ zebra 2 3 3
+ cow 3 4 5
+ >>> df.skew()
+ a 0.0
+ b 0.0
+ c 0.0
+ dtype: float64
+
+ Using axis=1
+
+ >>> df.skew(axis=1)
+ tiger 1.732051
+ zebra -1.732051
+ cow 0.000000
+ dtype: float64
+
+ In this case, `numeric_only` should be set to `True` to avoid
+ getting an error.
+
+ >>> df = pd.DataFrame({'a': [1, 2, 3], 'b': ['T', 'Z', 'X']},
+ ... index=['tiger', 'zebra', 'cow'])
+ >>> df.skew(numeric_only=True)
+ a 0.0
+ dtype: float64""",
)
def skew(
self,
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Towards https://github.com/pandas-dev/pandas/issues/37875
Attaching resulting pages:
[pandas.DataFrame.sem - documentation.pdf](https://github.com/pandas-dev/pandas/files/10897602/pandas.DataFrame.sem.-.documentation.pdf)
[pandas.DataFrame.skew - documentation.pdf](https://github.com/pandas-dev/pandas/files/10897604/pandas.DataFrame.skew.-.documentation.pdf)
| https://api.github.com/repos/pandas-dev/pandas/pulls/51806 | 2023-03-06T11:32:55Z | 2023-03-06T14:57:29Z | 2023-03-06T14:57:29Z | 2023-03-06T15:32:44Z |
STYLE Enable ruff TCH for pandas/core/util/* | diff --git a/pandas/core/common.py b/pandas/core/common.py
index 073af11b719cc..8f3b3bff5641a 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -30,13 +30,6 @@
import numpy as np
from pandas._libs import lib
-from pandas._typing import (
- AnyArrayLike,
- ArrayLike,
- NpDtype,
- RandomState,
- T,
-)
from pandas.core.dtypes.cast import construct_1d_object_array_from_listlike
from pandas.core.dtypes.common import (
@@ -54,6 +47,14 @@
from pandas.core.dtypes.missing import isna
if TYPE_CHECKING:
+ from pandas._typing import (
+ AnyArrayLike,
+ ArrayLike,
+ NpDtype,
+ RandomState,
+ T,
+ )
+
from pandas import Index
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index 280e9068b9f44..15490ad853d84 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -14,10 +14,6 @@
import numpy as np
from pandas._libs.hashing import hash_object_array
-from pandas._typing import (
- ArrayLike,
- npt,
-)
from pandas.core.dtypes.common import is_list_like
from pandas.core.dtypes.generic import (
@@ -29,6 +25,11 @@
)
if TYPE_CHECKING:
+ from pandas._typing import (
+ ArrayLike,
+ npt,
+ )
+
from pandas import (
DataFrame,
Index,
diff --git a/pyproject.toml b/pyproject.toml
index edcf6384c432b..9016627441484 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -306,9 +306,7 @@ exclude = [
"pandas/core/ops/*" = ["TCH"]
"pandas/core/sorting.py" = ["TCH"]
"pandas/core/construction.py" = ["TCH"]
-"pandas/core/common.py" = ["TCH"]
"pandas/core/missing.py" = ["TCH"]
-"pandas/core/util/*" = ["TCH"]
"pandas/core/reshape/*" = ["TCH"]
"pandas/core/strings/*" = ["TCH"]
"pandas/core/tools/*" = ["TCH"]
| ref #51740
This PR will enable ruff TCH for `pandas/core/util/*` and fix associated issues | https://api.github.com/repos/pandas-dev/pandas/pulls/51805 | 2023-03-06T07:01:43Z | 2023-03-06T08:39:02Z | 2023-03-06T08:39:02Z | 2023-03-06T08:39:02Z |
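For context, ruff's TCH rules flag imports that are only needed for type annotations. The fix pattern used in the diff above, shown here with a generic stdlib example rather than pandas code, is to move those imports under a `TYPE_CHECKING` guard so they are never executed at runtime:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only imported by type checkers; skipped at runtime, so there is
    # no import cost and no risk of a circular import.
    from collections.abc import Sequence


def head(xs: Sequence[int]) -> int:
    """Return the first element; the annotation still resolves for mypy."""
    return xs[0]
```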