| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
CI: Drop Python 3.5 support | diff --git a/ci/azure/posix.yml b/ci/azure/posix.yml
index 62b15bae6d2ce..d6afb263b447f 100644
--- a/ci/azure/posix.yml
+++ b/ci/azure/posix.yml
@@ -9,17 +9,16 @@ jobs:
strategy:
matrix:
${{ if eq(parameters.name, 'macOS') }}:
- py35_macos:
- ENV_FILE: ci/deps/azure-macos-35.yaml
- CONDA_PY: "35"
+ py36_macos:
+ ENV_FILE: ci/deps/azure-macos-36.yaml
+ CONDA_PY: "36"
PATTERN: "not slow and not network"
${{ if eq(parameters.name, 'Linux') }}:
- py35_compat:
- ENV_FILE: ci/deps/azure-35-compat.yaml
- CONDA_PY: "35"
+ py36_minimum_versions:
+ ENV_FILE: ci/deps/azure-36-minimum_versions.yaml
+ CONDA_PY: "36"
PATTERN: "not slow and not network"
-
py36_locale_slow_old_np:
ENV_FILE: ci/deps/azure-36-locale.yaml
CONDA_PY: "36"
diff --git a/ci/deps/azure-35-compat.yaml b/ci/deps/azure-36-minimum_versions.yaml
similarity index 69%
rename from ci/deps/azure-35-compat.yaml
rename to ci/deps/azure-36-minimum_versions.yaml
index dd54001984ec7..e2c78165fe4b9 100644
--- a/ci/deps/azure-35-compat.yaml
+++ b/ci/deps/azure-36-minimum_versions.yaml
@@ -5,26 +5,23 @@ channels:
dependencies:
- beautifulsoup4=4.6.0
- bottleneck=1.2.1
+ - cython>=0.29.13
- jinja2=2.8
- numexpr=2.6.2
- numpy=1.13.3
- openpyxl=2.4.8
- pytables=3.4.2
- python-dateutil=2.6.1
- - python=3.5.3
+ - python=3.6.1
- pytz=2017.2
- scipy=0.19.0
- xlrd=1.1.0
- xlsxwriter=0.9.8
- xlwt=1.2.0
# universal
+ - html5lib=1.0.1
- hypothesis>=3.58.0
+ - pytest=4.5.0
- pytest-xdist
- pytest-mock
- pytest-azurepipelines
- - pip
- - pip:
- # for python 3.5, pytest>=4.0.2, cython>=0.29.13 is not available in conda
- - cython>=0.29.13
- - pytest==4.5.0
- - html5lib==1.0b2
diff --git a/ci/deps/azure-macos-35.yaml b/ci/deps/azure-macos-36.yaml
similarity index 97%
rename from ci/deps/azure-macos-35.yaml
rename to ci/deps/azure-macos-36.yaml
index 4e0f09904b695..85c090bf6f938 100644
--- a/ci/deps/azure-macos-35.yaml
+++ b/ci/deps/azure-macos-36.yaml
@@ -14,7 +14,7 @@ dependencies:
- openpyxl
- pyarrow
- pytables
- - python=3.5.*
+ - python=3.6.*
- python-dateutil==2.6.1
- pytz
- xarray
diff --git a/doc/source/development/contributing.rst b/doc/source/development/contributing.rst
index eed4a7862cc5f..8fe5b174c77d3 100644
--- a/doc/source/development/contributing.rst
+++ b/doc/source/development/contributing.rst
@@ -236,7 +236,7 @@ Creating a Python environment (pip)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you aren't using conda for your development environment, follow these instructions.
-You'll need to have at least python3.5 installed on your system.
+You'll need to have at least Python 3.6.1 installed on your system.
**Unix**/**Mac OS**
@@ -847,29 +847,6 @@ The limitation here is that while a human can reasonably understand that ``is_nu
With custom types and inference this is not always possible so exceptions are made, but every effort should be exhausted to avoid ``cast`` before going down such paths.
-Syntax Requirements
-~~~~~~~~~~~~~~~~~~~
-
-Because *pandas* still supports Python 3.5, :pep:`526` does not apply and variables **must** be annotated with type comments. Specifically, this is a valid annotation within pandas:
-
-.. code-block:: python
-
- primes = [] # type: List[int]
-
-Whereas this is **NOT** allowed:
-
-.. code-block:: python
-
- primes: List[int] = [] # not supported in Python 3.5!
-
-Note that function signatures can always be annotated per :pep:`3107`:
-
-.. code-block:: python
-
- def sum_of_primes(primes: List[int] = []) -> int:
- ...
-
-
Pandas-specific Types
~~~~~~~~~~~~~~~~~~~~~
@@ -1296,7 +1273,7 @@ environment by::
or, to use a specific Python interpreter,::
- asv run -e -E existing:python3.5
+ asv run -e -E existing:python3.6
This will display stderr from the benchmarks, and use your local
``python`` that comes from your ``$PATH``.
diff --git a/doc/source/development/policies.rst b/doc/source/development/policies.rst
index 2083a30af09c3..224948738341e 100644
--- a/doc/source/development/policies.rst
+++ b/doc/source/development/policies.rst
@@ -51,7 +51,7 @@ Pandas may change the behavior of experimental features at any time.
Python Support
~~~~~~~~~~~~~~
-Pandas will only drop support for specific Python versions (e.g. 3.5.x, 3.6.x) in
+Pandas will only drop support for specific Python versions (e.g. 3.6.x, 3.7.x) in
pandas **major** releases.
.. _SemVer: https://semver.org
diff --git a/doc/source/getting_started/dsintro.rst b/doc/source/getting_started/dsintro.rst
index 9e18951fe3f4c..a07fcbd8b67c4 100644
--- a/doc/source/getting_started/dsintro.rst
+++ b/doc/source/getting_started/dsintro.rst
@@ -564,53 +564,6 @@ to a column created earlier in the same :meth:`~DataFrame.assign`.
In the second expression, ``x['C']`` will refer to the newly created column,
that's equal to ``dfa['A'] + dfa['B']``.
-To write code compatible with all versions of Python, split the assignment in two.
-
-.. ipython:: python
-
- dependent = pd.DataFrame({"A": [1, 1, 1]})
- (dependent.assign(A=lambda x: x['A'] + 1)
- .assign(B=lambda x: x['A'] + 2))
-
-.. warning::
-
- Dependent assignment may subtly change the behavior of your code between
- Python 3.6 and older versions of Python.
-
- If you wish to write code that supports versions of python before and after 3.6,
- you'll need to take care when passing ``assign`` expressions that
-
- * Update an existing column
- * Refer to the newly updated column in the same ``assign``
-
- For example, we'll update column "A" and then refer to it when creating "B".
-
- .. code-block:: python
-
- >>> dependent = pd.DataFrame({"A": [1, 1, 1]})
- >>> dependent.assign(A=lambda x: x["A"] + 1, B=lambda x: x["A"] + 2)
-
- For Python 3.5 and earlier the expression creating ``B`` refers to the
- "old" value of ``A``, ``[1, 1, 1]``. The output is then
-
- .. code-block:: console
-
- A B
- 0 2 3
- 1 2 3
- 2 2 3
-
- For Python 3.6 and later, the expression creating ``A`` refers to the
- "new" value of ``A``, ``[2, 2, 2]``, which results in
-
- .. code-block:: console
-
- A B
- 0 2 4
- 1 2 4
- 2 2 4
-
-
Indexing / selection
~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index 7d1150c2f65fa..f5bb152d6c1b9 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -18,7 +18,7 @@ Instructions for installing from source,
Python version support
----------------------
-Officially Python 3.5.3 and above, 3.6, 3.7, and 3.8.
+Officially Python 3.6.1 and above, 3.7, and 3.8.
Installing pandas
-----------------
@@ -140,7 +140,7 @@ Installing with ActivePython
Installation instructions for
`ActivePython <https://www.activestate.com/activepython>`__ can be found
`here <https://www.activestate.com/activepython/downloads>`__. Versions
-2.7 and 3.5 include pandas.
+2.7, 3.5 and 3.6 include pandas.
Installing using your Linux distribution's package manager.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index cb1d80a34514c..4e6d98ec06795 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -3,6 +3,10 @@
What's new in 1.0.0 (??)
------------------------
+.. warning::
+
+ Starting with the 1.x series of releases, pandas only supports Python 3.6.1 and higher.
+
New Deprecation Policy
~~~~~~~~~~~~~~~~~~~~~~
@@ -37,10 +41,6 @@ See :ref:`policies.version` for more.
.. _2019 Pandas User Survey: http://dev.pandas.io/pandas-blog/2019-pandas-user-survey.html
.. _SemVer: https://semver.org
-.. warning::
-
- The minimum supported Python version will be bumped to 3.6 in a future release.
-
{{ header }}
These are the changes in pandas 1.0.0. See :ref:`release` for a full changelog
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
index 81431db5b867c..0419c8ac6057e 100644
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -12,7 +12,6 @@
import sys
import warnings
-PY35 = sys.version_info[:2] == (3, 5)
PY36 = sys.version_info >= (3, 6)
PY37 = sys.version_info >= (3, 7)
PY38 = sys.version_info >= (3, 8)
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 6e9e0a0b01200..8615355996031 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -12,7 +12,7 @@
from pandas.core.dtypes.common import ensure_str, is_period_dtype
-from pandas import DataFrame, MultiIndex, Series, compat, isna, to_datetime
+from pandas import DataFrame, MultiIndex, Series, isna, to_datetime
from pandas._typing import JSONSerializable
from pandas.core.reshape.concat import concat
@@ -1115,8 +1115,6 @@ def _parse_no_numpy(self):
dtype=None,
orient="index",
)
- if compat.PY35:
- self.obj = self.obj.sort_index(axis="columns").sort_index(axis="index")
elif orient == "table":
self.obj = parse_table_schema(json, precise_float=self.precise_float)
else:
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index b56da5fba6f80..c03ffe317083c 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -598,10 +598,7 @@ def test_agg_with_one_lambda(self):
}
)
- # sort for 35 and earlier
columns = ["height_sqr_min", "height_max", "weight_max"]
- if compat.PY35:
- columns = ["height_max", "height_sqr_min", "weight_max"]
expected = pd.DataFrame(
{
"height_sqr_min": [82.81, 36.00],
@@ -640,7 +637,6 @@ def test_agg_multiple_lambda(self):
"weight": [7.9, 7.5, 9.9, 198.0],
}
)
- # sort for 35 and earlier
columns = [
"height_sqr_min",
"height_max",
@@ -648,14 +644,6 @@ def test_agg_multiple_lambda(self):
"height_max_2",
"weight_min",
]
- if compat.PY35:
- columns = [
- "height_max",
- "height_max_2",
- "height_sqr_min",
- "weight_max",
- "weight_min",
- ]
expected = pd.DataFrame(
{
"height_sqr_min": [82.81, 36.00],
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index 569e299860614..ef9b0bdf053e9 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -5,8 +5,6 @@
import numpy as np
import pytest
-from pandas.compat import PY35
-
from pandas.core.dtypes.dtypes import CategoricalDtype, DatetimeTZDtype, PeriodDtype
import pandas as pd
@@ -22,14 +20,6 @@
)
-def assert_results_equal(result, expected):
- """Helper function for comparing deserialized JSON with Py35 compat."""
- if PY35:
- assert sorted(result.items()) == sorted(expected.items())
- else:
- assert result == expected
-
-
class TestBuildSchema:
def setup_method(self, method):
self.df = DataFrame(
@@ -245,7 +235,7 @@ def test_build_series(self):
]
)
- assert_results_equal(result, expected)
+ assert result == expected
def test_to_json(self):
df = self.df.copy()
@@ -335,7 +325,7 @@ def test_to_json(self):
]
expected = OrderedDict([("schema", schema), ("data", data)])
- assert_results_equal(result, expected)
+ assert result == expected
def test_to_json_float_index(self):
data = pd.Series(1, index=[1.0, 2.0])
@@ -365,7 +355,7 @@ def test_to_json_float_index(self):
]
)
- assert_results_equal(result, expected)
+ assert result == expected
def test_to_json_period_index(self):
idx = pd.period_range("2016", freq="Q-JAN", periods=2)
@@ -386,7 +376,7 @@ def test_to_json_period_index(self):
]
expected = OrderedDict([("schema", schema), ("data", data)])
- assert_results_equal(result, expected)
+ assert result == expected
def test_to_json_categorical_index(self):
data = pd.Series(1, pd.CategoricalIndex(["a", "b"]))
@@ -421,7 +411,7 @@ def test_to_json_categorical_index(self):
]
)
- assert_results_equal(result, expected)
+ assert result == expected
def test_date_format_raises(self):
with pytest.raises(ValueError):
@@ -558,7 +548,7 @@ def test_categorical(self):
]
)
- assert_results_equal(result, expected)
+ assert result == expected
@pytest.mark.parametrize(
"idx,nm,prop",
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index aa065b6e13079..d31aa04b223e8 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -7,7 +7,7 @@
import numpy as np
import pytest
-from pandas.compat import PY35, is_platform_32bit, is_platform_windows
+from pandas.compat import is_platform_32bit, is_platform_windows
import pandas.util._test_decorators as td
import pandas as pd
@@ -160,9 +160,6 @@ def test_roundtrip_simple(self, orient, convert_axes, numpy, dtype):
expected = self.frame.copy()
- if not numpy and PY35 and orient in ("index", "columns"):
- expected = expected.sort_index()
-
assert_json_roundtrip_equal(result, expected, orient)
@pytest.mark.parametrize("dtype", [False, np.int64])
@@ -174,9 +171,6 @@ def test_roundtrip_intframe(self, orient, convert_axes, numpy, dtype):
data, orient=orient, convert_axes=convert_axes, numpy=numpy, dtype=dtype
)
expected = self.intframe.copy()
- if not numpy and PY35 and orient in ("index", "columns"):
- expected = expected.sort_index()
-
if (
numpy
and (is_platform_32bit() or is_platform_windows())
@@ -209,9 +203,6 @@ def test_roundtrip_str_axes(self, orient, convert_axes, numpy, dtype):
)
expected = df.copy()
- if not numpy and PY35 and orient in ("index", "columns"):
- expected = expected.sort_index()
-
if not dtype:
expected = expected.astype(np.int64)
@@ -250,7 +241,7 @@ def test_roundtrip_categorical(self, orient, convert_axes, numpy):
expected.index = expected.index.astype(str) # Categorical not preserved
expected.index.name = None # index names aren't preserved in JSON
- if not numpy and (orient == "index" or (PY35 and orient == "columns")):
+ if not numpy and orient == "index":
expected = expected.sort_index()
assert_json_roundtrip_equal(result, expected, orient)
@@ -317,7 +308,7 @@ def test_roundtrip_mixed(self, orient, convert_axes, numpy):
expected = df.copy()
expected = expected.assign(**expected.select_dtypes("number").astype(np.int64))
- if not numpy and (orient == "index" or (PY35 and orient == "columns")):
+ if not numpy and orient == "index":
expected = expected.sort_index()
assert_json_roundtrip_equal(result, expected, orient)
@@ -652,8 +643,6 @@ def test_series_roundtrip_simple(self, orient, numpy):
result = pd.read_json(data, typ="series", orient=orient, numpy=numpy)
expected = self.series.copy()
- if not numpy and PY35 and orient in ("index", "columns"):
- expected = expected.sort_index()
if orient in ("values", "records"):
expected = expected.reset_index(drop=True)
if orient != "split":
@@ -670,8 +659,6 @@ def test_series_roundtrip_object(self, orient, numpy, dtype):
)
expected = self.objSeries.copy()
- if not numpy and PY35 and orient in ("index", "columns"):
- expected = expected.sort_index()
if orient in ("values", "records"):
expected = expected.reset_index(drop=True)
if orient != "split":
@@ -686,8 +673,6 @@ def test_series_roundtrip_empty(self, orient, numpy):
expected = self.empty_series.copy()
# TODO: see what causes inconsistency
- if not numpy and PY35 and orient == "index":
- expected = expected.sort_index()
if orient in ("values", "records"):
expected = expected.reset_index(drop=True)
else:
@@ -1611,11 +1596,7 @@ def test_json_indent_all_orients(self, orient, expected):
# GH 12004
df = pd.DataFrame([["foo", "bar"], ["baz", "qux"]], columns=["a", "b"])
result = df.to_json(orient=orient, indent=4)
-
- if PY35:
- assert json.loads(result) == json.loads(expected)
- else:
- assert result == expected
+ assert result == expected
def test_json_negative_indent_raises(self):
with pytest.raises(ValueError, match="must be a nonnegative integer"):
diff --git a/pandas/tests/scalar/period/test_period.py b/pandas/tests/scalar/period/test_period.py
index bbd97291fab3f..3bdf91cbf838b 100644
--- a/pandas/tests/scalar/period/test_period.py
+++ b/pandas/tests/scalar/period/test_period.py
@@ -1,5 +1,7 @@
from datetime import date, datetime, timedelta
+from distutils.version import StrictVersion
+import dateutil
import numpy as np
import pytest
import pytz
@@ -10,7 +12,6 @@
from pandas._libs.tslibs.parsing import DateParseError
from pandas._libs.tslibs.period import IncompatibleFrequency
from pandas._libs.tslibs.timezones import dateutil_gettz, maybe_get_tz
-from pandas.compat import PY35
from pandas.compat.numpy import np_datetime64_compat
import pandas as pd
@@ -1546,10 +1547,8 @@ def test_period_immutable():
@pytest.mark.xfail(
- # xpassing on MacPython with strict=False
- # https://travis-ci.org/MacPython/pandas-wheels/jobs/574706922
- PY35,
- reason="Parsing as Period('0007-01-01', 'D') for reasons unknown",
+ StrictVersion(dateutil.__version__.split(".dev")[0]) < StrictVersion("2.7.0"),
+ reason="Bug in dateutil < 2.7.0 when parsing old dates: Period('0001-01-07', 'D')",
strict=False,
)
def test_small_year_parsing():
diff --git a/pyproject.toml b/pyproject.toml
index 2ec4739c2f7f8..b105f8aeb3291 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -5,10 +5,8 @@ requires = [
"setuptools",
"wheel",
"Cython>=0.29.13", # Note: sync with setup.py
- "numpy==1.13.3; python_version=='3.5' and platform_system!='AIX'",
"numpy==1.13.3; python_version=='3.6' and platform_system!='AIX'",
"numpy==1.14.5; python_version>='3.7' and platform_system!='AIX'",
- "numpy==1.16.0; python_version=='3.5' and platform_system=='AIX'",
"numpy==1.16.0; python_version=='3.6' and platform_system=='AIX'",
"numpy==1.16.0; python_version>='3.7' and platform_system=='AIX'",
]
diff --git a/setup.py b/setup.py
index 3dd38bdb6adbb..a7bc7a333cdd6 100755
--- a/setup.py
+++ b/setup.py
@@ -223,7 +223,6 @@ def build_extensions(self):
"Intended Audience :: Science/Research",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
@@ -812,7 +811,7 @@ def srcpath(name=None, suffix=".pyx", subdir="src"):
long_description=LONG_DESCRIPTION,
classifiers=CLASSIFIERS,
platforms="any",
- python_requires=">=3.5.3",
+ python_requires=">=3.6.1",
extras_require={
"test": [
# sync with setup.cfg minversion & install.rst
| - [X] closes #29034
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/29212 | 2019-10-24T19:05:47Z | 2019-11-08T14:48:11Z | 2019-11-08T14:48:11Z | 2019-11-08T14:48:41Z |
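The diff in the row above deletes the "Syntax Requirements" docs section because dropping Python 3.5 makes :pep:`526` variable annotations usable everywhere. A minimal sketch of what becomes valid once 3.6.1 is the floor (pure illustration, not part of the PR):

```python
from typing import List

# With Python 3.5 support dropped, PEP 526 variable annotations replace
# the old "# type: List[int]" comments shown in the removed docs section.
primes: List[int] = []  # SyntaxError on 3.5, valid on 3.6+
primes.append(2)
```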
Adding test for assert_equal_index with empty iterables | diff --git a/pandas/tests/indexes/multi/test_constructor.py b/pandas/tests/indexes/multi/test_constructor.py
index ff98da85cfb2d..c32adf275ac98 100644
--- a/pandas/tests/indexes/multi/test_constructor.py
+++ b/pandas/tests/indexes/multi/test_constructor.py
@@ -722,3 +722,10 @@ def test_from_frame_invalid_names(names, expected_error_msg):
)
with pytest.raises(ValueError, match=expected_error_msg):
pd.MultiIndex.from_frame(df, names=names)
+
+
+def test_index_equal_empty_iterable():
+ # #16844
+ a = MultiIndex(levels=[[], []], codes=[[], []], names=["a", "b"])
+ b = MultiIndex.from_arrays(arrays=[[], []], names=["a", "b"])
+ tm.assert_index_equal(a, b)
| - [x] closes #16844
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/29211 | 2019-10-24T18:52:58Z | 2019-10-29T15:22:48Z | 2019-10-29T15:22:47Z | 2019-10-30T07:13:21Z |
CLN: Remove unnecessary sys.version_info checks | diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py
index a85f3677bc3ab..095dfb7876154 100644
--- a/pandas/tests/io/formats/test_to_csv.py
+++ b/pandas/tests/io/formats/test_to_csv.py
@@ -11,7 +11,7 @@
class TestToCSV:
@pytest.mark.xfail(
- (3, 6, 5) > sys.version_info >= (3, 5),
+ (3, 6, 5) > sys.version_info,
reason=("Python csv library bug (see https://bugs.python.org/issue32255)"),
)
def test_to_csv_with_single_column(self):
diff --git a/versioneer.py b/versioneer.py
index 24d8105c307c0..8a4710da5958a 100644
--- a/versioneer.py
+++ b/versioneer.py
@@ -8,7 +8,6 @@
* https://github.com/warner/python-versioneer
* Brian Warner
* License: Public Domain
-* Compatible With: python2.6, 2.7, 3.2, 3.3, 3.4, and pypy
* [![Latest Version]
(https://pypip.in/version/versioneer/badge.svg?style=flat)
](https://pypi.org/project/versioneer/)
@@ -464,9 +463,9 @@ def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False):
if verbose:
print("unable to find command, tried %s" % (commands,))
return None
- stdout = p.communicate()[0].strip()
- if sys.version_info[0] >= 3:
- stdout = stdout.decode()
+
+ stdout = p.communicate()[0].strip().decode()
+
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % dispcmd)
@@ -561,9 +560,9 @@ def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False):
if verbose:
print("unable to find command, tried %%s" %% (commands,))
return None
- stdout = p.communicate()[0].strip()
- if sys.version_info[0] >= 3:
- stdout = stdout.decode()
+
+ stdout = p.communicate()[0].strip().decode()
+
if p.returncode != 0:
if verbose:
print("unable to run %%s (error)" %% dispcmd)
| Title is self-explanatory. | https://api.github.com/repos/pandas-dev/pandas/pulls/29210 | 2019-10-24T18:28:23Z | 2019-10-25T12:44:19Z | 2019-10-25T12:44:19Z | 2019-10-25T15:03:55Z |
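The versioneer change in the row above drops a `sys.version_info[0] >= 3` guard before decoding subprocess output: on Python 3, `Popen.communicate()` always returns `bytes`, so the `.decode()` can run unconditionally. A small sketch of the simplified pattern:

```python
import subprocess
import sys

# communicate() returns bytes on Python 3, so no version check is needed
# before decoding -- the guard removed in the diff above was dead code.
p = subprocess.Popen(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE,
)
stdout = p.communicate()[0].strip().decode()
```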
Clean up noqa E241 #29207 | diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 60afd768195d9..aeec12b9ad14e 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -62,46 +62,46 @@ def coerce(request):
# collect all objects to be tested for list-like-ness; use tuples of objects,
# whether they are list-like or not (special casing for sets), and their ID
ll_params = [
- ([1], True, "list"), # noqa: E241
- ([], True, "list-empty"), # noqa: E241
- ((1,), True, "tuple"), # noqa: E241
- (tuple(), True, "tuple-empty"), # noqa: E241
- ({"a": 1}, True, "dict"), # noqa: E241
- (dict(), True, "dict-empty"), # noqa: E241
- ({"a", 1}, "set", "set"), # noqa: E241
- (set(), "set", "set-empty"), # noqa: E241
- (frozenset({"a", 1}), "set", "frozenset"), # noqa: E241
- (frozenset(), "set", "frozenset-empty"), # noqa: E241
- (iter([1, 2]), True, "iterator"), # noqa: E241
- (iter([]), True, "iterator-empty"), # noqa: E241
- ((x for x in [1, 2]), True, "generator"), # noqa: E241
- ((_ for _ in []), True, "generator-empty"), # noqa: E241
- (Series([1]), True, "Series"), # noqa: E241
- (Series([]), True, "Series-empty"), # noqa: E241
- (Series(["a"]).str, True, "StringMethods"), # noqa: E241
- (Series([], dtype="O").str, True, "StringMethods-empty"), # noqa: E241
- (Index([1]), True, "Index"), # noqa: E241
- (Index([]), True, "Index-empty"), # noqa: E241
- (DataFrame([[1]]), True, "DataFrame"), # noqa: E241
- (DataFrame(), True, "DataFrame-empty"), # noqa: E241
- (np.ndarray((2,) * 1), True, "ndarray-1d"), # noqa: E241
- (np.array([]), True, "ndarray-1d-empty"), # noqa: E241
- (np.ndarray((2,) * 2), True, "ndarray-2d"), # noqa: E241
- (np.array([[]]), True, "ndarray-2d-empty"), # noqa: E241
- (np.ndarray((2,) * 3), True, "ndarray-3d"), # noqa: E241
- (np.array([[[]]]), True, "ndarray-3d-empty"), # noqa: E241
- (np.ndarray((2,) * 4), True, "ndarray-4d"), # noqa: E241
- (np.array([[[[]]]]), True, "ndarray-4d-empty"), # noqa: E241
- (np.array(2), False, "ndarray-0d"), # noqa: E241
- (1, False, "int"), # noqa: E241
- (b"123", False, "bytes"), # noqa: E241
- (b"", False, "bytes-empty"), # noqa: E241
- ("123", False, "string"), # noqa: E241
- ("", False, "string-empty"), # noqa: E241
- (str, False, "string-type"), # noqa: E241
- (object(), False, "object"), # noqa: E241
- (np.nan, False, "NaN"), # noqa: E241
- (None, False, "None"), # noqa: E241
+ ([1], True, "list"),
+ ([], True, "list-empty"),
+ ((1,), True, "tuple"),
+ (tuple(), True, "tuple-empty"),
+ ({"a": 1}, True, "dict"),
+ (dict(), True, "dict-empty"),
+ ({"a", 1}, "set", "set"),
+ (set(), "set", "set-empty"),
+ (frozenset({"a", 1}), "set", "frozenset"),
+ (frozenset(), "set", "frozenset-empty"),
+ (iter([1, 2]), True, "iterator"),
+ (iter([]), True, "iterator-empty"),
+ ((x for x in [1, 2]), True, "generator"),
+ ((_ for _ in []), True, "generator-empty"),
+ (Series([1]), True, "Series"),
+ (Series([]), True, "Series-empty"),
+ (Series(["a"]).str, True, "StringMethods"),
+ (Series([], dtype="O").str, True, "StringMethods-empty"),
+ (Index([1]), True, "Index"),
+ (Index([]), True, "Index-empty"),
+ (DataFrame([[1]]), True, "DataFrame"),
+ (DataFrame(), True, "DataFrame-empty"),
+ (np.ndarray((2,) * 1), True, "ndarray-1d"),
+ (np.array([]), True, "ndarray-1d-empty"),
+ (np.ndarray((2,) * 2), True, "ndarray-2d"),
+ (np.array([[]]), True, "ndarray-2d-empty"),
+ (np.ndarray((2,) * 3), True, "ndarray-3d"),
+ (np.array([[[]]]), True, "ndarray-3d-empty"),
+ (np.ndarray((2,) * 4), True, "ndarray-4d"),
+ (np.array([[[[]]]]), True, "ndarray-4d-empty"),
+ (np.array(2), False, "ndarray-0d"),
+ (1, False, "int"),
+ (b"123", False, "bytes"),
+ (b"", False, "bytes-empty"),
+ ("123", False, "string"),
+ ("", False, "string-empty"),
+ (str, False, "string-type"),
+ (object(), False, "object"),
+ (np.nan, False, "NaN"),
+ (None, False, "None"),
]
objs, expected, ids = zip(*ll_params)
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 53d74f74dc439..cfaf123045b1f 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -21,46 +21,46 @@ def assert_series_or_index_equal(left, right):
_any_string_method = [
- ("cat", (), {"sep": ","}), # noqa: E241
- ("cat", (Series(list("zyx")),), {"sep": ",", "join": "left"}), # noqa: E241
- ("center", (10,), {}), # noqa: E241
- ("contains", ("a",), {}), # noqa: E241
- ("count", ("a",), {}), # noqa: E241
- ("decode", ("UTF-8",), {}), # noqa: E241
- ("encode", ("UTF-8",), {}), # noqa: E241
- ("endswith", ("a",), {}), # noqa: E241
- ("extract", ("([a-z]*)",), {"expand": False}), # noqa: E241
- ("extract", ("([a-z]*)",), {"expand": True}), # noqa: E241
- ("extractall", ("([a-z]*)",), {}), # noqa: E241
- ("find", ("a",), {}), # noqa: E241
- ("findall", ("a",), {}), # noqa: E241
- ("get", (0,), {}), # noqa: E241
+ ("cat", (), {"sep": ","}),
+ ("cat", (Series(list("zyx")),), {"sep": ",", "join": "left"}),
+ ("center", (10,), {}),
+ ("contains", ("a",), {}),
+ ("count", ("a",), {}),
+ ("decode", ("UTF-8",), {}),
+ ("encode", ("UTF-8",), {}),
+ ("endswith", ("a",), {}),
+ ("extract", ("([a-z]*)",), {"expand": False}),
+ ("extract", ("([a-z]*)",), {"expand": True}),
+ ("extractall", ("([a-z]*)",), {}),
+ ("find", ("a",), {}),
+ ("findall", ("a",), {}),
+ ("get", (0,), {}),
# because "index" (and "rindex") fail intentionally
# if the string is not found, search only for empty string
- ("index", ("",), {}), # noqa: E241
- ("join", (",",), {}), # noqa: E241
- ("ljust", (10,), {}), # noqa: E241
- ("match", ("a",), {}), # noqa: E241
- ("normalize", ("NFC",), {}), # noqa: E241
- ("pad", (10,), {}), # noqa: E241
- ("partition", (" ",), {"expand": False}), # noqa: E241
- ("partition", (" ",), {"expand": True}), # noqa: E241
- ("repeat", (3,), {}), # noqa: E241
- ("replace", ("a", "z"), {}), # noqa: E241
- ("rfind", ("a",), {}), # noqa: E241
- ("rindex", ("",), {}), # noqa: E241
- ("rjust", (10,), {}), # noqa: E241
- ("rpartition", (" ",), {"expand": False}), # noqa: E241
- ("rpartition", (" ",), {"expand": True}), # noqa: E241
- ("slice", (0, 1), {}), # noqa: E241
- ("slice_replace", (0, 1, "z"), {}), # noqa: E241
- ("split", (" ",), {"expand": False}), # noqa: E241
- ("split", (" ",), {"expand": True}), # noqa: E241
- ("startswith", ("a",), {}), # noqa: E241
+ ("index", ("",), {}),
+ ("join", (",",), {}),
+ ("ljust", (10,), {}),
+ ("match", ("a",), {}),
+ ("normalize", ("NFC",), {}),
+ ("pad", (10,), {}),
+ ("partition", (" ",), {"expand": False}),
+ ("partition", (" ",), {"expand": True}),
+ ("repeat", (3,), {}),
+ ("replace", ("a", "z"), {}),
+ ("rfind", ("a",), {}),
+ ("rindex", ("",), {}),
+ ("rjust", (10,), {}),
+ ("rpartition", (" ",), {"expand": False}),
+ ("rpartition", (" ",), {"expand": True}),
+ ("slice", (0, 1), {}),
+ ("slice_replace", (0, 1, "z"), {}),
+ ("split", (" ",), {"expand": False}),
+ ("split", (" ",), {"expand": True}),
+ ("startswith", ("a",), {}),
# translating unicode points of "a" to "d"
- ("translate", ({97: 100},), {}), # noqa: E241
- ("wrap", (2,), {}), # noqa: E241
- ("zfill", (10,), {}), # noqa: E241
+ ("translate", ({97: 100},), {}),
+ ("wrap", (2,), {}),
+ ("zfill", (10,), {}),
] + list(
zip(
[
| - [X] closes #29207
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/29209 | 2019-10-24T17:40:27Z | 2019-10-24T18:38:04Z | 2019-10-24T18:38:04Z | 2019-10-25T04:54:30Z |
CLN: simplify core.algorithms | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 2c9f632e8bc24..e64290a196523 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -47,7 +47,7 @@
from pandas.core.dtypes.missing import isna, na_value_for_dtype
from pandas.core import common as com
-from pandas.core.construction import array
+from pandas.core.construction import array, extract_array
from pandas.core.indexers import validate_indices
_shared_docs = {} # type: Dict[str, str]
@@ -82,9 +82,12 @@ def _ensure_data(values, dtype=None):
"""
# we check some simple dtypes first
+ if is_object_dtype(dtype):
+ return ensure_object(np.asarray(values)), "object", "object"
+ elif is_object_dtype(values) and dtype is None:
+ return ensure_object(np.asarray(values)), "object", "object"
+
try:
- if is_object_dtype(dtype):
- return ensure_object(np.asarray(values)), "object", "object"
if is_bool_dtype(values) or is_bool_dtype(dtype):
# we are actually coercing to uint64
# until our algos support uint8 directly (see TODO)
@@ -95,8 +98,6 @@ def _ensure_data(values, dtype=None):
return ensure_uint64(values), "uint64", "uint64"
elif is_float_dtype(values) or is_float_dtype(dtype):
return ensure_float64(values), "float64", "float64"
- elif is_object_dtype(values) and dtype is None:
- return ensure_object(np.asarray(values)), "object", "object"
elif is_complex_dtype(values) or is_complex_dtype(dtype):
# ignore the fact that we are casting to float
@@ -207,11 +208,11 @@ def _ensure_arraylike(values):
_hashtables = {
- "float64": (htable.Float64HashTable, htable.Float64Vector),
- "uint64": (htable.UInt64HashTable, htable.UInt64Vector),
- "int64": (htable.Int64HashTable, htable.Int64Vector),
- "string": (htable.StringHashTable, htable.ObjectVector),
- "object": (htable.PyObjectHashTable, htable.ObjectVector),
+ "float64": htable.Float64HashTable,
+ "uint64": htable.UInt64HashTable,
+ "int64": htable.Int64HashTable,
+ "string": htable.StringHashTable,
+ "object": htable.PyObjectHashTable,
}
@@ -223,11 +224,9 @@ def _get_hashtable_algo(values):
Returns
-------
- tuples(hashtable class,
- vector class,
- values,
- dtype,
- ndtype)
+ htable : HashTable subclass
+ values : ndarray
+ dtype : str or dtype
"""
values, dtype, ndtype = _ensure_data(values)
@@ -238,23 +237,21 @@ def _get_hashtable_algo(values):
# StringHashTable and ObjectHashtable
if lib.infer_dtype(values, skipna=False) in ["string"]:
ndtype = "string"
- else:
- ndtype = "object"
- htable, table = _hashtables[ndtype]
- return (htable, table, values, dtype, ndtype)
+ htable = _hashtables[ndtype]
+ return htable, values, dtype
def _get_values_for_rank(values):
if is_categorical_dtype(values):
values = values._values_for_rank()
- values, dtype, ndtype = _ensure_data(values)
- return values, dtype, ndtype
+ values, _, ndtype = _ensure_data(values)
+ return values, ndtype
-def _get_data_algo(values, func_map):
- values, dtype, ndtype = _get_values_for_rank(values)
+def _get_data_algo(values):
+ values, ndtype = _get_values_for_rank(values)
if ndtype == "object":
@@ -264,7 +261,7 @@ def _get_data_algo(values, func_map):
if lib.infer_dtype(values, skipna=False) in ["string"]:
ndtype = "string"
- f = func_map.get(ndtype, func_map["object"])
+ f = _hashtables.get(ndtype, _hashtables["object"])
return f, values
@@ -295,7 +292,7 @@ def match(to_match, values, na_sentinel=-1):
match : ndarray of integers
"""
values = com.asarray_tuplesafe(values)
- htable, _, values, dtype, ndtype = _get_hashtable_algo(values)
+ htable, values, dtype = _get_hashtable_algo(values)
to_match, _, _ = _ensure_data(to_match, dtype)
table = htable(min(len(to_match), 1000000))
table.map_locations(values)
@@ -398,7 +395,7 @@ def unique(values):
return values.unique()
original = values
- htable, _, values, dtype, ndtype = _get_hashtable_algo(values)
+ htable, values, _ = _get_hashtable_algo(values)
table = htable(len(values))
uniques = table.unique(values)
@@ -480,7 +477,8 @@ def isin(comps, values):
def _factorize_array(values, na_sentinel=-1, size_hint=None, na_value=None):
- """Factorize an array-like to labels and uniques.
+ """
+ Factorize an array-like to labels and uniques.
This doesn't do any coercion of types or unboxing before factorization.
@@ -498,9 +496,10 @@ def _factorize_array(values, na_sentinel=-1, size_hint=None, na_value=None):
Returns
-------
- labels, uniques : ndarray
+ labels : ndarray
+ uniques : ndarray
"""
- (hash_klass, _), values = _get_data_algo(values, _hashtables)
+ hash_klass, values = _get_data_algo(values)
table = hash_klass(size_hint or len(values))
uniques, labels = table.factorize(
@@ -652,17 +651,13 @@ def factorize(values, sort=False, order=None, na_sentinel=-1, size_hint=None):
original = values
if is_extension_array_dtype(values):
- values = getattr(values, "_values", values)
+ values = extract_array(values)
labels, uniques = values.factorize(na_sentinel=na_sentinel)
dtype = original.dtype
else:
values, dtype, _ = _ensure_data(values)
- if (
- is_datetime64_any_dtype(original)
- or is_timedelta64_dtype(original)
- or is_period_dtype(original)
- ):
+ if original.dtype.kind in ["m", "M"]:
na_value = na_value_for_dtype(original.dtype)
else:
na_value = None
@@ -831,7 +826,7 @@ def duplicated(values, keep="first"):
duplicated : ndarray
"""
- values, dtype, ndtype = _ensure_data(values)
+ values, _, ndtype = _ensure_data(values)
f = getattr(htable, "duplicated_{dtype}".format(dtype=ndtype))
return f(values, keep=keep)
@@ -868,7 +863,7 @@ def mode(values, dropna: bool = True):
mask = values.isnull()
values = values[~mask]
- values, dtype, ndtype = _ensure_data(values)
+ values, _, ndtype = _ensure_data(values)
f = getattr(htable, "mode_{dtype}".format(dtype=ndtype))
result = f(values, dropna=dropna)
@@ -906,7 +901,7 @@ def rank(values, axis=0, method="average", na_option="keep", ascending=True, pct
(e.g. 1, 2, 3) or in percentile form (e.g. 0.333..., 0.666..., 1).
"""
if values.ndim == 1:
- values, _, _ = _get_values_for_rank(values)
+ values, _ = _get_values_for_rank(values)
ranks = algos.rank_1d(
values,
ties_method=method,
@@ -915,7 +910,7 @@ def rank(values, axis=0, method="average", na_option="keep", ascending=True, pct
pct=pct,
)
elif values.ndim == 2:
- values, _, _ = _get_values_for_rank(values)
+ values, _ = _get_values_for_rank(values)
ranks = algos.rank_2d(
values,
axis=axis,
@@ -1630,9 +1625,7 @@ def take_nd(
if is_extension_array_dtype(arr):
return arr.take(indexer, fill_value=fill_value, allow_fill=allow_fill)
- if isinstance(arr, (ABCIndexClass, ABCSeries)):
- arr = arr._values
-
+ arr = extract_array(arr)
arr = np.asarray(arr)
if indexer is None:
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 70ed411f6a3e4..4d065bd234e0b 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -47,14 +47,7 @@
from pandas.core import ops
from pandas.core.accessor import PandasDelegate, delegate_names
import pandas.core.algorithms as algorithms
-from pandas.core.algorithms import (
- _get_data_algo,
- _hashtables,
- factorize,
- take,
- take_1d,
- unique1d,
-)
+from pandas.core.algorithms import _get_data_algo, factorize, take, take_1d, unique1d
from pandas.core.base import NoNewAttributesMixin, PandasObject, _shared_docs
import pandas.core.common as com
from pandas.core.construction import array, extract_array, sanitize_array
@@ -2097,7 +2090,6 @@ def __setitem__(self, key, value):
"""
Item assignment.
-
Raises
------
ValueError
@@ -2631,8 +2623,8 @@ def _get_codes_for_values(values, categories):
values = ensure_object(values)
categories = ensure_object(categories)
- (hash_klass, vec_klass), vals = _get_data_algo(values, _hashtables)
- (_, _), cats = _get_data_algo(categories, _hashtables)
+ hash_klass, vals = _get_data_algo(values)
+ _, cats = _get_data_algo(categories)
t = hash_klass(len(cats))
t.map_locations(cats)
return coerce_indexer_dtype(t.lookup(vals), cats)
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 7fcaf60088ad2..3e92906be706c 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -21,7 +21,6 @@
ensure_str,
is_bool,
is_bool_dtype,
- is_categorical_dtype,
is_complex,
is_complex_dtype,
is_datetime64_dtype,
@@ -1325,14 +1324,10 @@ def construct_1d_arraylike_from_scalar(value, length, dtype):
np.ndarray / pandas type of length, filled with value
"""
- if is_datetime64tz_dtype(dtype):
- from pandas import DatetimeIndex
-
- subarr = DatetimeIndex([value] * length, dtype=dtype)
- elif is_categorical_dtype(dtype):
- from pandas import Categorical
+ if is_extension_array_dtype(dtype):
+ cls = dtype.construct_array_type()
+ subarr = cls._from_sequence([value] * length, dtype=dtype)
- subarr = Categorical([value] * length, dtype=dtype)
else:
if not isinstance(dtype, (np.dtype, type(np.dtype))):
dtype = dtype.dtype
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 94810369785d3..706f6159bcafe 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -484,9 +484,7 @@ def sort_mixed(values):
if sorter is None:
# mixed types
- (hash_klass, _), values = algorithms._get_data_algo(
- values, algorithms._hashtables
- )
+ hash_klass, values = algorithms._get_data_algo(values)
t = hash_klass(len(values))
t.map_locations(values)
sorter = ensure_platform_int(t.lookup(ordered))
| https://api.github.com/repos/pandas-dev/pandas/pulls/29199 | 2019-10-24T04:01:02Z | 2019-10-25T17:49:34Z | 2019-10-25T17:49:34Z | 2019-10-31T14:52:05Z | |
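The refactor above trims `_get_data_algo` and `_get_hashtable_algo` down to returning a hashtable class plus the coerced values, while `_factorize_array` keeps its labels/uniques contract. That contract can be illustrated with a pure-Python sketch; `factorize_simple` is a hypothetical name for illustration only (a plain dict stands in for the specialized hashtable classes), not pandas API:

```python
def factorize_simple(values, na_sentinel=-1):
    """Map values to integer labels plus first-seen uniques,
    mirroring the labels/uniques contract of _factorize_array."""
    table = {}    # value -> label; the role played by the htable classes
    uniques = []
    labels = []
    for v in values:
        if v != v:  # NaN compares unequal to itself: assign the sentinel
            labels.append(na_sentinel)
            continue
        if v not in table:
            table[v] = len(uniques)
            uniques.append(v)
        labels.append(table[v])
    return labels, uniques


labels, uniques = factorize_simple(["b", "a", "b", float("nan")])
print(labels, uniques)  # [0, 1, 0, -1] ['b', 'a']
```

The real implementation dispatches to a dtype-specialized hashtable (e.g. `Int64HashTable`, `StringHashTable`) for speed, but the observable result is the same shape: one label per input value and the uniques in order of first appearance.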
CLN: don't catch AttributeError in _wrap_applied_output | diff --git a/pandas/core/base.py b/pandas/core/base.py
index 5ae3926952a67..9586d49c555ff 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -571,8 +571,6 @@ def _aggregate_multiple_funcs(self, arg, _level, _axis):
except (TypeError, DataError):
pass
- except SpecificationError:
- raise
else:
results.append(new_res)
@@ -591,8 +589,6 @@ def _aggregate_multiple_funcs(self, arg, _level, _axis):
except ValueError:
# cannot aggregate
continue
- except SpecificationError:
- raise
else:
results.append(new_res)
keys.append(col)
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index c766fcaa4f849..58ede6307525c 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -475,7 +475,7 @@ def _transform_fast(self, func, func_nm):
out = self._try_cast(out, self.obj)
return Series(out, index=self.obj.index, name=self.obj.name)
- def filter(self, func, dropna=True, *args, **kwargs): # noqa
+ def filter(self, func, dropna=True, *args, **kwargs):
"""
Return a copy of a Series excluding elements from groups that
do not satisfy the boolean criterion specified by func.
@@ -1230,7 +1230,7 @@ def first_not_none(values):
return self._concat_objects(keys, values, not_indexed_same=True)
try:
- if self.axis == 0:
+ if self.axis == 0 and isinstance(v, ABCSeries):
# GH6124 if the list of Series have a consistent name,
# then propagate that name to the result.
index = v.index.copy()
@@ -1266,15 +1266,24 @@ def first_not_none(values):
axis=self.axis,
).unstack()
result.columns = index
- else:
+ elif isinstance(v, ABCSeries):
stacked_values = np.vstack([np.asarray(v) for v in values])
result = DataFrame(
stacked_values.T, index=v.index, columns=key_index
)
+ else:
+ # GH#1738: values is list of arrays of unequal lengths
+ # fall through to the outer else clause
+ # TODO: sure this is right? we used to do this
+ # after raising AttributeError above
+ return Series(
+ values, index=key_index, name=self._selection_name
+ )
- except (ValueError, AttributeError):
+ except ValueError:
+ # TODO: not reached in tests; is this still needed?
# GH1738: values is list of arrays of unequal lengths fall
- # through to the outer else caluse
+ # through to the outer else clause
return Series(values, index=key_index, name=self._selection_name)
# if we have date/time like in the original, then coerce dates
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index e6f4f2f056058..8bebe5e8161a5 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -198,6 +198,9 @@ def apply(self, f, data, axis=0):
f_name not in base.plotting_methods
and hasattr(splitter, "fast_apply")
and axis == 0
+ # with MultiIndex, apply_frame_axis0 would raise InvalidApply
+ # TODO: can we make this check prettier?
+ and not splitter._get_sorted_data().index._has_complex_internals
):
try:
result_values, mutated = splitter.fast_apply(f, group_keys)
@@ -207,11 +210,14 @@ def apply(self, f, data, axis=0):
if len(result_values) == len(group_keys):
return group_keys, result_values, mutated
- except libreduction.InvalidApply:
+ except libreduction.InvalidApply as err:
# Cannot fast apply on MultiIndex (_has_complex_internals).
# This Exception is also raised if `f` triggers an exception
# but it is preferable to raise the exception in Python.
- pass
+ if "Let this error raise above us" not in str(err):
+ # TODO: can we infer anything about whether this is
+ # worth-retrying in pure-python?
+ raise
except TypeError as err:
if "Cannot convert" in str(err):
# via apply_frame_axis0 if we pass a non-ndarray
| cc @jreback @WillAyd | https://api.github.com/repos/pandas-dev/pandas/pulls/29195 | 2019-10-23T21:26:49Z | 2019-10-25T17:10:33Z | 2019-10-25T17:10:33Z | 2019-10-25T17:36:14Z |
CLN: dont catch Exception in _aggregate_frame | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index c766fcaa4f849..0ae3d71d077df 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1074,15 +1074,8 @@ def _aggregate_frame(self, func, *args, **kwargs):
else:
for name in self.indices:
data = self.get_group(name, obj=obj)
- try:
- fres = func(data, *args, **kwargs)
- except AssertionError:
- raise
- except Exception:
- wrapper = lambda x: func(x, *args, **kwargs)
- result[name] = data.apply(wrapper, axis=axis)
- else:
- result[name] = self._try_cast(fres, data)
+ fres = func(data, *args, **kwargs)
+ result[name] = self._try_cast(fres, data)
return self._wrap_frame_output(result, obj)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index dff5baa9b5984..43e2a6f040414 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -775,7 +775,7 @@ def test_omit_nuisance(df):
# won't work with axis = 1
grouped = df.groupby({"A": 0, "C": 0, "D": 1, "E": 1}, axis=1)
- msg = r'\("unsupported operand type\(s\) for \+: ' "'Timestamp' and 'float'\", 0"
+ msg = r"unsupported operand type\(s\) for \+: 'Timestamp'"
with pytest.raises(TypeError, match=msg):
grouped.agg(lambda x: x.sum(0, numeric_only=False))
| cc @jreback @WillAyd | https://api.github.com/repos/pandas-dev/pandas/pulls/29194 | 2019-10-23T20:46:56Z | 2019-10-24T12:06:27Z | 2019-10-24T12:06:27Z | 2019-10-25T13:52:32Z |
Removed generate_bins_generic | diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 79b51ef57cd37..17bd60fb9d152 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -53,59 +53,6 @@
)
-def generate_bins_generic(values, binner, closed):
- """
- Generate bin edge offsets and bin labels for one array using another array
- which has bin edge values. Both arrays must be sorted.
-
- Parameters
- ----------
- values : array of values
- binner : a comparable array of values representing bins into which to bin
- the first array. Note, 'values' end-points must fall within 'binner'
- end-points.
- closed : which end of bin is closed; left (default), right
-
- Returns
- -------
- bins : array of offsets (into 'values' argument) of bins.
- Zero and last edge are excluded in result, so for instance the first
- bin is values[0:bin[0]] and the last is values[bin[-1]:]
- """
- lenidx = len(values)
- lenbin = len(binner)
-
- if lenidx <= 0 or lenbin <= 0:
- raise ValueError("Invalid length for values or for binner")
-
- # check binner fits data
- if values[0] < binner[0]:
- raise ValueError("Values falls before first bin")
-
- if values[lenidx - 1] > binner[lenbin - 1]:
- raise ValueError("Values falls after last bin")
-
- bins = np.empty(lenbin - 1, dtype=np.int64)
-
- j = 0 # index into values
- bc = 0 # bin count
-
- # linear scan, presume nothing about values/binner except that it fits ok
- for i in range(0, lenbin - 1):
- r_bin = binner[i + 1]
-
- # count values in current bin, advance to next bin
- while j < lenidx and (
- values[j] < r_bin or (closed == "right" and values[j] == r_bin)
- ):
- j += 1
-
- bins[bc] = j
- bc += 1
-
- return bins
-
-
class BaseGrouper:
"""
This is an internal Grouper class, which actually holds
diff --git a/pandas/tests/groupby/test_bin_groupby.py b/pandas/tests/groupby/test_bin_groupby.py
index 8da03a7f61029..0e7a66769d2d4 100644
--- a/pandas/tests/groupby/test_bin_groupby.py
+++ b/pandas/tests/groupby/test_bin_groupby.py
@@ -6,9 +6,7 @@
from pandas.core.dtypes.common import ensure_int64
from pandas import Index, Series, isna
-from pandas.core.groupby.ops import generate_bins_generic
import pandas.util.testing as tm
-from pandas.util.testing import assert_almost_equal
def test_series_grouper():
@@ -21,10 +19,10 @@ def test_series_grouper():
result, counts = grouper.get_result()
expected = np.array([obj[3:6].mean(), obj[6:].mean()])
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
exp_counts = np.array([3, 4], dtype=np.int64)
- assert_almost_equal(counts, exp_counts)
+ tm.assert_almost_equal(counts, exp_counts)
def test_series_bin_grouper():
@@ -37,48 +35,37 @@ def test_series_bin_grouper():
result, counts = grouper.get_result()
expected = np.array([obj[:3].mean(), obj[3:6].mean(), obj[6:].mean()])
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
exp_counts = np.array([3, 3, 4], dtype=np.int64)
- assert_almost_equal(counts, exp_counts)
-
-
-class TestBinGroupers:
- def setup_method(self, method):
- self.obj = np.random.randn(10, 1)
- self.labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2], dtype=np.int64)
- self.bins = np.array([3, 6], dtype=np.int64)
-
- def test_generate_bins(self):
- values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
- binner = np.array([0, 3, 6, 9], dtype=np.int64)
-
- for func in [lib.generate_bins_dt64, generate_bins_generic]:
- bins = func(values, binner, closed="left")
- assert (bins == np.array([2, 5, 6])).all()
-
- bins = func(values, binner, closed="right")
- assert (bins == np.array([3, 6, 6])).all()
-
- for func in [lib.generate_bins_dt64, generate_bins_generic]:
- values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
- binner = np.array([0, 3, 6], dtype=np.int64)
-
- bins = func(values, binner, closed="right")
- assert (bins == np.array([3, 6])).all()
-
- msg = "Invalid length for values or for binner"
- with pytest.raises(ValueError, match=msg):
- generate_bins_generic(values, [], "right")
- with pytest.raises(ValueError, match=msg):
- generate_bins_generic(values[:0], binner, "right")
-
- msg = "Values falls before first bin"
- with pytest.raises(ValueError, match=msg):
- generate_bins_generic(values, [4], "right")
- msg = "Values falls after last bin"
- with pytest.raises(ValueError, match=msg):
- generate_bins_generic(values, [-3, -1], "right")
+ tm.assert_almost_equal(counts, exp_counts)
+
+
+@pytest.mark.parametrize(
+ "binner,closed,expected",
+ [
+ (
+ np.array([0, 3, 6, 9], dtype=np.int64),
+ "left",
+ np.array([2, 5, 6], dtype=np.int64),
+ ),
+ (
+ np.array([0, 3, 6, 9], dtype=np.int64),
+ "right",
+ np.array([3, 6, 6], dtype=np.int64),
+ ),
+ (np.array([0, 3, 6], dtype=np.int64), "left", np.array([2, 5], dtype=np.int64)),
+ (
+ np.array([0, 3, 6], dtype=np.int64),
+ "right",
+ np.array([3, 6], dtype=np.int64),
+ ),
+ ],
+)
+def test_generate_bins(binner, closed, expected):
+ values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
+ result = lib.generate_bins_dt64(values, binner, closed=closed)
+ tm.assert_numpy_array_equal(result, expected)
def test_group_ohlc():
@@ -100,13 +87,13 @@ def _ohlc(group):
expected = np.array([_ohlc(obj[:6]), _ohlc(obj[6:12]), _ohlc(obj[12:])])
- assert_almost_equal(out, expected)
+ tm.assert_almost_equal(out, expected)
tm.assert_numpy_array_equal(counts, np.array([6, 6, 8], dtype=np.int64))
obj[:6] = np.nan
func(out, counts, obj[:, None], labels)
expected[0] = np.nan
- assert_almost_equal(out, expected)
+ tm.assert_almost_equal(out, expected)
_check("float32")
_check("float64")
@@ -121,29 +108,29 @@ def test_int_index(self):
arr = np.random.randn(100, 4)
result = libreduction.compute_reduction(arr, np.sum, labels=Index(np.arange(4)))
expected = arr.sum(0)
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
result = libreduction.compute_reduction(
arr, np.sum, axis=1, labels=Index(np.arange(100))
)
expected = arr.sum(1)
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
dummy = Series(0.0, index=np.arange(100))
result = libreduction.compute_reduction(
arr, np.sum, dummy=dummy, labels=Index(np.arange(4))
)
expected = arr.sum(0)
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
dummy = Series(0.0, index=np.arange(4))
result = libreduction.compute_reduction(
arr, np.sum, axis=1, dummy=dummy, labels=Index(np.arange(100))
)
expected = arr.sum(1)
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
result = libreduction.compute_reduction(
arr, np.sum, axis=1, dummy=dummy, labels=Index(np.arange(100))
)
- assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result, expected)
| appears to be dead groupby code | https://api.github.com/repos/pandas-dev/pandas/pulls/29192 | 2019-10-23T19:24:21Z | 2019-10-29T17:10:50Z | 2019-10-29T17:10:50Z | 2019-10-29T17:11:05Z |
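Since `generate_bins_generic` is deleted in favor of the Cython `lib.generate_bins_dt64`, its linear-scan algorithm only survives in the diff above. A runnable port of the removed helper, taken directly from the deleted code:

```python
import numpy as np


def generate_bins_generic(values, binner, closed="left"):
    """Port of the removed pandas helper: offsets into sorted `values`
    for each bin defined by the sorted edge array `binner`."""
    lenidx = len(values)
    lenbin = len(binner)

    if lenidx <= 0 or lenbin <= 0:
        raise ValueError("Invalid length for values or for binner")
    if values[0] < binner[0]:
        raise ValueError("Values falls before first bin")
    if values[lenidx - 1] > binner[lenbin - 1]:
        raise ValueError("Values falls after last bin")

    bins = np.empty(lenbin - 1, dtype=np.int64)
    j = 0  # index into values
    # linear scan: count values in each bin, then advance to the next bin
    for i in range(lenbin - 1):
        r_bin = binner[i + 1]
        while j < lenidx and (
            values[j] < r_bin or (closed == "right" and values[j] == r_bin)
        ):
            j += 1
        bins[i] = j
    return bins


values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
binner = np.array([0, 3, 6, 9], dtype=np.int64)
print(generate_bins_generic(values, binner, "left"))   # [2 5 6]
print(generate_bins_generic(values, binner, "right"))  # [3 6 6]
```

The two printed cases are exactly the ones the rewritten parametrized test asserts against `lib.generate_bins_dt64`.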
Fix mypy error in pandas/tests/indexes/test_base.py
index 5bfa13c0865f1..9d6dc3e906dff 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -507,7 +507,7 @@ def test_constructor_dtypes_timedelta(self, attr, klass):
result = klass(list(values), dtype=dtype)
tm.assert_index_equal(result, index)
- @pytest.mark.parametrize("value", [[], iter([]), (x for x in [])])
+ @pytest.mark.parametrize("value", [[], iter([]), (_ for _ in [])])
@pytest.mark.parametrize(
"klass",
[
@@ -530,7 +530,7 @@ def test_constructor_empty(self, value, klass):
[
(PeriodIndex([], freq="B"), PeriodIndex),
(PeriodIndex(iter([]), freq="B"), PeriodIndex),
- (PeriodIndex((x for x in []), freq="B"), PeriodIndex),
+ (PeriodIndex((_ for _ in []), freq="B"), PeriodIndex),
(RangeIndex(step=1), pd.RangeIndex),
(MultiIndex(levels=[[1, 2], ["blue", "red"]], codes=[[], []]), MultiIndex),
],
@@ -1733,22 +1733,22 @@ def test_slice_locs_na_raises(self):
"in_slice,expected",
[
(pd.IndexSlice[::-1], "yxdcb"),
- (pd.IndexSlice["b":"y":-1], ""),
- (pd.IndexSlice["b"::-1], "b"),
- (pd.IndexSlice[:"b":-1], "yxdcb"),
- (pd.IndexSlice[:"y":-1], "y"),
- (pd.IndexSlice["y"::-1], "yxdcb"),
- (pd.IndexSlice["y"::-4], "yb"),
+ (pd.IndexSlice["b":"y":-1], ""), # type: ignore
+ (pd.IndexSlice["b"::-1], "b"), # type: ignore
+ (pd.IndexSlice[:"b":-1], "yxdcb"), # type: ignore
+ (pd.IndexSlice[:"y":-1], "y"), # type: ignore
+ (pd.IndexSlice["y"::-1], "yxdcb"), # type: ignore
+ (pd.IndexSlice["y"::-4], "yb"), # type: ignore
# absent labels
- (pd.IndexSlice[:"a":-1], "yxdcb"),
- (pd.IndexSlice[:"a":-2], "ydb"),
- (pd.IndexSlice["z"::-1], "yxdcb"),
- (pd.IndexSlice["z"::-3], "yc"),
- (pd.IndexSlice["m"::-1], "dcb"),
- (pd.IndexSlice[:"m":-1], "yx"),
- (pd.IndexSlice["a":"a":-1], ""),
- (pd.IndexSlice["z":"z":-1], ""),
- (pd.IndexSlice["m":"m":-1], ""),
+ (pd.IndexSlice[:"a":-1], "yxdcb"), # type: ignore
+ (pd.IndexSlice[:"a":-2], "ydb"), # type: ignore
+ (pd.IndexSlice["z"::-1], "yxdcb"), # type: ignore
+ (pd.IndexSlice["z"::-3], "yc"), # type: ignore
+ (pd.IndexSlice["m"::-1], "dcb"), # type: ignore
+ (pd.IndexSlice[:"m":-1], "yx"), # type: ignore
+ (pd.IndexSlice["a":"a":-1], ""), # type: ignore
+ (pd.IndexSlice["z":"z":-1], ""), # type: ignore
+ (pd.IndexSlice["m":"m":-1], ""), # type: ignore
],
)
def test_slice_locs_negative_step(self, in_slice, expected):
diff --git a/setup.cfg b/setup.cfg
index 46e6b88f8018a..fa8b61a214cab 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -148,9 +148,6 @@ ignore_errors=True
[mypy-pandas.tests.indexes.datetimes.test_tools]
ignore_errors=True
-[mypy-pandas.tests.indexes.test_base]
-ignore_errors=True
-
[mypy-pandas.tests.scalar.period.test_period]
ignore_errors=True
| - [x] relates to #28926
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/29188 | 2019-10-23T17:23:02Z | 2019-11-27T16:09:11Z | 2019-11-27T16:09:11Z | 2019-11-27T16:09:19Z |
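The `# type: ignore` comments above are needed because mypy rejects string bounds in slice literals; at runtime `pd.IndexSlice` simply builds ordinary `slice` objects, and label-based slicing with a negative step behaves as the parametrized cases assert. A small sketch reproducing one of those cases (assuming pandas is installed):

```python
import pandas as pd

s = pd.Series(range(5), index=list("bcdxy"))
idx = pd.IndexSlice  # idx[...] merely constructs plain slice objects

print(idx["y"::-1])          # slice('y', None, -1)
rev = s.loc[idx["y"::-1]]    # label slice, negative step, endpoints inclusive
print("".join(rev.index))    # yxdcb, matching the test expectation
```

Label slices are inclusive of both endpoints, which is why `idx[:"b":-1]` on this index also yields all of "yxdcb".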
DOC: infer_objects doc fixup | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 4211b15203721..aae1fffb7a3b6 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -2025,11 +2025,12 @@ object conversion
pandas offers various functions to try to force conversion of types from the ``object`` dtype to other types.
In cases where the data is already of the correct type, but stored in an ``object`` array, the
-:meth:`~DataFrame.infer_objects` and :meth:`~Series.infer_objects` can be used to soft convert
+:meth:`DataFrame.infer_objects` and :meth:`Series.infer_objects` methods can be used to soft convert
to the correct type.
.. ipython:: python
+ import datetime
df = pd.DataFrame([[1, 2],
['a', 'b'],
[datetime.datetime(2016, 3, 2), datetime.datetime(2016, 3, 2)]])
@@ -2037,7 +2038,7 @@ to the correct type.
df
df.dtypes
-Because the data transposed the original inference stored all columns as object, which
+Because the data was transposed the original inference stored all columns as object, which
``infer_objects`` will correct.
.. ipython:: python
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 4801e5c5300e7..4a281ca6bd1c6 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -31,13 +31,13 @@ New features
``infer_objects`` type conversion
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The `:meth:`~DataFrame.infer_objects` and :meth:`~Series.infer_objects`
+The :meth:`DataFrame.infer_objects` and :meth:`Series.infer_objects`
methods have been added to perform dtype inference on object columns, replacing
some of the functionality of the deprecated ``convert_objects``
method. See the documentation :ref:`here <basics.object_conversion>`
for more details. (:issue:`11221`)
-This function only performs soft conversions on object columns, converting Python objects
+This method only performs soft conversions on object columns, converting Python objects
to native types, but not any coercive conversions. For example:
.. ipython:: python
@@ -46,11 +46,12 @@ to native types, but not any coercive conversions. For example:
'B': np.array([1, 2, 3], dtype='object'),
'C': ['1', '2', '3']})
df.dtypes
- df.infer_objects().dtype
+ df.infer_objects().dtypes
Note that column ``'C'`` was not converted - only scalar numeric types
will be inferred to a new type. Other types of conversion should be accomplished
-using :func:`to_numeric` function (or :func:`to_datetime`, :func:`to_timedelta`).
+using the :func:`to_numeric` function (or :func:`to_datetime`, :func:`to_timedelta`).
+
.. ipython:: python
df = df.infer_objects()
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c95129bdaa005..48006b11993c7 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3693,7 +3693,7 @@ def infer_objects(self):
columns unchanged. The inference rules are the
same as during normal Series/DataFrame construction.
- .. versionadded:: 0.20.0
+ .. versionadded:: 0.21.0
See Also
--------
| follow up to #16915 | https://api.github.com/repos/pandas-dev/pandas/pulls/17018 | 2017-07-19T01:51:51Z | 2017-07-19T09:51:47Z | 2017-07-19T09:51:47Z | 2017-07-19T09:51:50Z |
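The behavior the corrected whatsnew entry describes, soft conversion of object columns that hold native scalars while string columns are left alone, can be checked directly (assuming pandas and numpy are installed):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3],
                   "B": np.array([1, 2, 3], dtype="object"),
                   "C": ["1", "2", "3"]})
print([str(t) for t in df.dtypes])         # ['int64', 'object', 'object']

converted = df.infer_objects()
print([str(t) for t in converted.dtypes])  # ['int64', 'int64', 'object']
```

Column `'C'` stays object because only scalar numeric types are inferred; coercing strings requires `pd.to_numeric` (or `to_datetime` / `to_timedelta`), as the whatsnew text notes.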
DOC: add warning to append about inefficiency | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 6559fc4c24ce2..9fda0f44320f8 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4618,6 +4618,11 @@ def append(self, other, ignore_index=False, verify_integrity=False):
the DataFrame's index, the order of the columns in the resulting
DataFrame will be unchanged.
+ Iteratively appending rows to a DataFrame can be more computationally
+ intensive than a single concatenate. A better solution is to append
+ those rows to a list and then concatenate the list with the original
+ DataFrame all at once.
+
See also
--------
pandas.concat : General function to concatenate DataFrame, Series
@@ -4648,6 +4653,33 @@ def append(self, other, ignore_index=False, verify_integrity=False):
2 5 6
3 7 8
+ The following, while not recommended methods for generating DataFrames,
+ show two ways to generate a DataFrame from multiple data sources.
+
+ Less efficient:
+
+ >>> df = pd.DataFrame(columns=['A'])
+ >>> for i in range(5):
+ ...     df = df.append({'A': i}, ignore_index=True)
+ >>> df
+ A
+ 0 0
+ 1 1
+ 2 2
+ 3 3
+ 4 4
+
+ More efficient:
+
+ >>> pd.concat([pd.DataFrame([i], columns=['A']) for i in range(5)],
+ ... ignore_index=True)
+ A
+ 0 0
+ 1 1
+ 2 2
+ 3 3
+ 4 4
+
"""
if isinstance(other, (Series, dict)):
if isinstance(other, dict):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index e1f668dd3afda..597d934ea4066 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1522,6 +1522,18 @@ def append(self, to_append, ignore_index=False, verify_integrity=False):
verify_integrity : boolean, default False
If True, raise Exception on creating index with duplicates
+ Notes
+ -----
+ Iteratively appending to a Series can be more computationally intensive
+ than a single concatenate. A better solution is to append values to a
+ list and then concatenate the list with the original Series all at
+ once.
+
+ See also
+ --------
+ pandas.concat : General function to concatenate DataFrame, Series
+ or Panel objects
+
Returns
-------
appended : Series
| - [X] closes #16418
- [X] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [X] Improves the ``append`` documentation with a note on performance
I think I misused Git and deleted my old pull request and re-pushed the branch instead of rebasing correctly... sorry about that. I believe I have addressed everything in that review: wording, removing the ``try``/``except``, and spacing issues.
| https://api.github.com/repos/pandas-dev/pandas/pulls/17017 | 2017-07-19T01:51:44Z | 2017-07-19T10:22:53Z | 2017-07-19T10:22:53Z | 2017-07-19T15:03:24Z |
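The pattern the added docstring recommends (accumulate the pieces, then concatenate once) avoids the repeated-copy cost of appending in a loop, and it also survives later pandas versions, where `DataFrame.append` was eventually removed entirely. A sketch of the efficient form from the doc example:

```python
import pandas as pd

# build all the pieces first, then concatenate once
pieces = [pd.DataFrame([i], columns=["A"]) for i in range(5)]
df = pd.concat(pieces, ignore_index=True)
print(df["A"].tolist())  # [0, 1, 2, 3, 4]
```

Each `append` call copies the whole accumulated frame, so the loop form is quadratic in the number of rows; `pd.concat` performs a single allocation over all pieces.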
BUG: do not cast ints to floats if inputs of crosstab are not aligned | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 9a6016c82e794..8c81868af82b9 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -224,10 +224,10 @@ Sparse
Reshaping
^^^^^^^^^
- Joining/Merging with a non unique ``PeriodIndex`` raised a TypeError (:issue:`16871`)
+- Bug in :func:`crosstab` where non-aligned series of integers were cast to float (:issue:`17005`)
- Bug when using :func:`isin` on a large object series and large comparison array (:issue:`16012`)
- Fixes regression from 0.20, :func:`Series.aggregate` and :func:`DataFrame.aggregate` allow dictionaries as return values again (:issue:`16741`)
-
Numeric
^^^^^^^
- Bug in ``.clip()`` with ``axis=1`` and a list-like for ``threshold`` is passed; previously this raised ``ValueError`` (:issue:`15390`)
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 0581ec7484c49..fbb7e6f970309 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -6,6 +6,7 @@
from pandas import Series, DataFrame, MultiIndex, Index
from pandas.core.groupby import Grouper
from pandas.core.reshape.util import cartesian_product
+from pandas.core.index import _get_combined_index
from pandas.compat import range, lrange, zip
from pandas import compat
import pandas.core.common as com
@@ -493,6 +494,13 @@ def crosstab(index, columns, values=None, rownames=None, colnames=None,
rownames = _get_names(index, rownames, prefix='row')
colnames = _get_names(columns, colnames, prefix='col')
+ obs_idxes = [obj.index for objs in (index, columns) for obj in objs
+ if hasattr(obj, 'index')]
+ if obs_idxes:
+ common_idx = _get_combined_index(obs_idxes, intersect=True)
+ else:
+ common_idx = None
+
data = {}
data.update(zip(rownames, index))
data.update(zip(colnames, columns))
@@ -503,20 +511,21 @@ def crosstab(index, columns, values=None, rownames=None, colnames=None,
if values is not None and aggfunc is None:
raise ValueError("values cannot be used without an aggfunc.")
+ df = DataFrame(data, index=common_idx)
if values is None:
- df = DataFrame(data)
df['__dummy__'] = 0
- table = df.pivot_table('__dummy__', index=rownames, columns=colnames,
- aggfunc=len, margins=margins,
- margins_name=margins_name, dropna=dropna)
- table = table.fillna(0).astype(np.int64)
-
+ kwargs = {'aggfunc': len, 'fill_value': 0}
else:
- data['__dummy__'] = values
- df = DataFrame(data)
- table = df.pivot_table('__dummy__', index=rownames, columns=colnames,
- aggfunc=aggfunc, margins=margins,
- margins_name=margins_name, dropna=dropna)
+ df['__dummy__'] = values
+ kwargs = {'aggfunc': aggfunc}
+
+ table = df.pivot_table('__dummy__', index=rownames, columns=colnames,
+ margins=margins, margins_name=margins_name,
+ dropna=dropna, **kwargs)
+
+ # GH 17013:
+ if values is None and margins:
+ table = table.fillna(0).astype(np.int64)
# Post-process
if normalize is not False:
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 9881ab72f3ef5..ff9f35b0253b0 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -1058,6 +1058,22 @@ def test_crosstab_ndarray(self):
assert result.index.name == 'row_0'
assert result.columns.name == 'col_0'
+ def test_crosstab_non_aligned(self):
+ # GH 17005
+ a = pd.Series([0, 1, 1], index=['a', 'b', 'c'])
+ b = pd.Series([3, 4, 3, 4, 3], index=['a', 'b', 'c', 'd', 'f'])
+ c = np.array([3, 4, 3])
+
+ expected = pd.DataFrame([[1, 0], [1, 1]],
+ index=Index([0, 1], name='row_0'),
+ columns=Index([3, 4], name='col_0'))
+
+ result = crosstab(a, b)
+ tm.assert_frame_equal(result, expected)
+
+ result = crosstab(a, c)
+ tm.assert_frame_equal(result, expected)
+
def test_crosstab_margins(self):
a = np.random.randint(0, 7, size=100)
b = np.random.randint(0, 3, size=100)
| - [x] closes #17005
- [x] tests added / passed
- [x] passes ``git diff master -u -- "*.py" | flake8 --diff``
- [x] whatsnew entry
... plus some trivial refactoring. | https://api.github.com/repos/pandas-dev/pandas/pulls/17011 | 2017-07-18T11:03:19Z | 2017-07-21T10:38:12Z | 2017-07-21T10:38:11Z | 2017-07-23T20:51:06Z |
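With the alignment fix above, `crosstab` first intersects the inputs' indexes via `_get_combined_index(..., intersect=True)`, so the counts stay integers instead of picking up NaN-induced float upcasts. The new test's case can be reproduced directly (assuming pandas is installed):

```python
import pandas as pd

a = pd.Series([0, 1, 1], index=["a", "b", "c"])
b = pd.Series([3, 4, 3, 4, 3], index=["a", "b", "c", "d", "f"])

result = pd.crosstab(a, b)  # aligned on the common labels a, b, c
print(result.values.tolist())            # [[1, 0], [1, 1]]
print([str(t) for t in result.dtypes])   # ['int64', 'int64']  (no float upcast)
```

Labels `d` and `f` exist only in `b`, so they are dropped before counting rather than contributing missing values that would force a float result.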
BUG: Don't error with empty Series for .isin | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index c63d4575bac43..90f3e6a82952d 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -204,3 +204,4 @@ Categorical
Other
^^^^^
- Bug in :func:`eval` where the ``inplace`` parameter was being incorrectly handled (:issue:`16732`)
+- Bug in ``.isin()`` in which checking membership in empty ``Series`` objects raised an error (:issue:`16991`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index b490bf787a037..3dbea97d644d2 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -65,6 +65,8 @@ def _ensure_data(values, dtype=None):
# we check some simple dtypes first
try:
+ if is_object_dtype(dtype):
+ return _ensure_object(np.asarray(values)), 'object', 'object'
if is_bool_dtype(values) or is_bool_dtype(dtype):
# we are actually coercing to uint64
# until our algos support uint8 directly (see TODO)
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index b09325bfa2ddc..da1c68005b9b2 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -1151,10 +1151,13 @@ def test_isin(self):
expected = DataFrame([df.loc[s].isin(other) for s in df.index])
tm.assert_frame_equal(result, expected)
- def test_isin_empty(self):
+ @pytest.mark.parametrize("empty", [[], Series(), np.array([])])
+ def test_isin_empty(self, empty):
+ # see gh-16991
df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']})
- result = df.isin([])
- expected = pd.DataFrame(False, df.index, df.columns)
+ expected = DataFrame(False, df.index, df.columns)
+
+ result = df.isin(empty)
tm.assert_frame_equal(result, expected)
def test_isin_dict(self):
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 18dbe6624008a..692cdd4957947 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -1407,6 +1407,15 @@ def check_idx(idx):
# Float64Index overrides isin, so must be checked separately
check_idx(Float64Index([1.0, 2.0, 3.0, 4.0]))
+ @pytest.mark.parametrize("empty", [[], Series(), np.array([])])
+ def test_isin_empty(self, empty):
+ # see gh-16991
+ idx = Index(["a", "b"])
+ expected = np.array([False, False])
+
+ result = idx.isin(empty)
+ tm.assert_numpy_array_equal(expected, result)
+
def test_boolean_cmp(self):
values = [1, 2, 3, 4]
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 749af1c56a7f0..e3a4b5630ab43 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -1135,6 +1135,15 @@ def test_isin_with_i8(self):
result = s.isin(s[0:2])
assert_series_equal(result, expected)
+ @pytest.mark.parametrize("empty", [[], Series(), np.array([])])
+ def test_isin_empty(self, empty):
+ # see gh-16991
+ s = Series(["a", "b"])
+ expected = Series([False, False])
+
+ result = s.isin(empty)
+ tm.assert_series_equal(expected, result)
+
def test_timedelta64_analytics(self):
from pandas import date_range
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 993dcc4f527b2..4588bf17fdbeb 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -597,6 +597,15 @@ def test_categorical_from_codes(self):
result = algos.isin(Sd, St)
tm.assert_numpy_array_equal(expected, result)
+ @pytest.mark.parametrize("empty", [[], pd.Series(), np.array([])])
+ def test_empty(self, empty):
+ # see gh-16991
+ vals = pd.Index(["a", "b"])
+ expected = np.array([False, False])
+
+ result = algos.isin(vals, empty)
+ tm.assert_numpy_array_equal(expected, result)
+
class TestValueCounts(object):
| An empty `Series` initializes to `float64` dtype, even when the values being checked by `.isin` are `object`, so membership checks against an empty `Series` raised an error.
Closes #16991. | https://api.github.com/repos/pandas-dev/pandas/pulls/17006 | 2017-07-18T08:18:18Z | 2017-07-19T00:45:02Z | 2017-07-19T00:45:02Z | 2017-07-19T02:55:26Z |
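The root cause described above is the dtype default for empty containers; a minimal NumPy illustration of the mismatch the patch guards against (this sketches the idea, not pandas' `_ensure_data` itself):

```python
import numpy as np

# An empty sequence carries no values to infer a dtype from, so numpy
# (and likewise an empty pandas Series) defaults to float64:
empty = np.array([])

# Checking object (string) values for membership against that float64
# array is the mismatch that tripped isin; coercing the comps to object
# first, as the patch does early in _ensure_data, keeps the comparison
# well-defined and simply yields all-False.
comps = np.asarray(['a', 'b'], dtype=object)
result = np.isin(comps, empty)
```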
DEPS: set min versions | diff --git a/.travis.yml b/.travis.yml
index 897d31cf23a3b..034e2a32bb75c 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -37,7 +37,7 @@ matrix:
- JOB="3.5_OSX" TEST_ARGS="--skip-slow --skip-network"
- dist: trusty
env:
- - JOB="2.7_LOCALE" TEST_ARGS="--only-slow --skip-network" LOCALE_OVERRIDE="zh_CN.UTF-8"
+ - JOB="2.7_LOCALE" LOCALE_OVERRIDE="zh_CN.UTF-8" SLOW=true
addons:
apt:
packages:
@@ -62,7 +62,7 @@ matrix:
# In allow_failures
- dist: trusty
env:
- - JOB="2.7_SLOW" TEST_ARGS="--only-slow --skip-network"
+ - JOB="2.7_SLOW" SLOW=true
# In allow_failures
- dist: trusty
env:
@@ -82,7 +82,7 @@ matrix:
allow_failures:
- dist: trusty
env:
- - JOB="2.7_SLOW" TEST_ARGS="--only-slow --skip-network"
+ - JOB="2.7_SLOW" SLOW=true
- dist: trusty
env:
- JOB="2.7_BUILD_TEST" TEST_ARGS="--skip-slow" BUILD_TEST=true
diff --git a/ci/install_travis.sh b/ci/install_travis.sh
index ad8f0bdd8a597..d26689f2e6b4b 100755
--- a/ci/install_travis.sh
+++ b/ci/install_travis.sh
@@ -47,7 +47,7 @@ which conda
echo
echo "[update conda]"
conda config --set ssl_verify false || exit 1
-conda config --set always_yes true --set changeps1 false || exit 1
+conda config --set quiet true --set always_yes true --set changeps1 false || exit 1
conda update -q conda
echo
diff --git a/ci/requirements-2.7_COMPAT.build b/ci/requirements-2.7_COMPAT.build
index 0e1ccf9eac9bf..d9c932daa110b 100644
--- a/ci/requirements-2.7_COMPAT.build
+++ b/ci/requirements-2.7_COMPAT.build
@@ -1,5 +1,5 @@
python=2.7*
-numpy=1.7.1
+numpy=1.9.2
cython=0.23
dateutil=1.5
pytz=2013b
diff --git a/ci/requirements-2.7_COMPAT.run b/ci/requirements-2.7_COMPAT.run
index b94f4ab7b27d1..39bf720140733 100644
--- a/ci/requirements-2.7_COMPAT.run
+++ b/ci/requirements-2.7_COMPAT.run
@@ -1,11 +1,12 @@
-numpy=1.7.1
+numpy=1.9.2
dateutil=1.5
pytz=2013b
-scipy=0.11.0
+scipy=0.14.0
xlwt=0.7.5
xlrd=0.9.2
-numexpr=2.2.2
-pytables=3.0.0
+bottleneck=1.0.0
+numexpr=2.4.4 # we test that we correctly don't use an unsupported numexpr
+pytables=3.2.2
psycopg2
pymysql=0.6.0
sqlalchemy=0.7.8
diff --git a/ci/requirements-2.7_LOCALE.build b/ci/requirements-2.7_LOCALE.build
index 4a37ce8fbe161..96cb184ec2665 100644
--- a/ci/requirements-2.7_LOCALE.build
+++ b/ci/requirements-2.7_LOCALE.build
@@ -1,5 +1,5 @@
python=2.7*
python-dateutil
pytz=2013b
-numpy=1.8.2
+numpy=1.9.2
cython=0.23
diff --git a/ci/requirements-2.7_LOCALE.run b/ci/requirements-2.7_LOCALE.run
index 8e360cf74b081..00006106f7009 100644
--- a/ci/requirements-2.7_LOCALE.run
+++ b/ci/requirements-2.7_LOCALE.run
@@ -1,11 +1,12 @@
python-dateutil
pytz=2013b
-numpy=1.8.2
+numpy=1.9.2
xlwt=0.7.5
openpyxl=1.6.2
xlsxwriter=0.5.2
xlrd=0.9.2
-matplotlib=1.3.1
+bottleneck=1.0.0
+matplotlib=1.4.3
sqlalchemy=0.8.1
lxml=3.2.1
scipy
diff --git a/ci/requirements-2.7_SLOW.build b/ci/requirements-2.7_SLOW.build
index 0f4a2c6792e6b..a665ab9edd585 100644
--- a/ci/requirements-2.7_SLOW.build
+++ b/ci/requirements-2.7_SLOW.build
@@ -1,5 +1,5 @@
python=2.7*
python-dateutil
pytz
-numpy=1.8.2
+numpy=1.10*
cython
diff --git a/ci/requirements-2.7_SLOW.run b/ci/requirements-2.7_SLOW.run
index 0a549554f5219..f7708283ad04a 100644
--- a/ci/requirements-2.7_SLOW.run
+++ b/ci/requirements-2.7_SLOW.run
@@ -1,7 +1,7 @@
python-dateutil
pytz
-numpy=1.8.2
-matplotlib=1.3.1
+numpy=1.10*
+matplotlib=1.4.3
scipy
patsy
xlwt
diff --git a/ci/script_multi.sh b/ci/script_multi.sh
index d79fc43fbe175..ee9fbcaad5ef5 100755
--- a/ci/script_multi.sh
+++ b/ci/script_multi.sh
@@ -36,9 +36,15 @@ elif [ "$COVERAGE" ]; then
echo pytest -s -n 2 -m "not single" --cov=pandas --cov-report xml:/tmp/cov-multiple.xml --junitxml=/tmp/multiple.xml $TEST_ARGS pandas
pytest -s -n 2 -m "not single" --cov=pandas --cov-report xml:/tmp/cov-multiple.xml --junitxml=/tmp/multiple.xml $TEST_ARGS pandas
+elif [ "$SLOW" ]; then
+ TEST_ARGS="--only-slow --skip-network"
+ echo pytest -r xX -m "not single and slow" -v --junitxml=/tmp/multiple.xml $TEST_ARGS pandas
+ pytest -r xX -m "not single and slow" -v --junitxml=/tmp/multiple.xml $TEST_ARGS pandas
+
else
echo pytest -n 2 -r xX -m "not single" --junitxml=/tmp/multiple.xml $TEST_ARGS pandas
pytest -n 2 -r xX -m "not single" --junitxml=/tmp/multiple.xml $TEST_ARGS pandas # TODO: doctest
+
fi
RET="$?"
diff --git a/ci/script_single.sh b/ci/script_single.sh
index 245b4e6152c4d..375e9879e950f 100755
--- a/ci/script_single.sh
+++ b/ci/script_single.sh
@@ -12,16 +12,24 @@ if [ -n "$LOCALE_OVERRIDE" ]; then
python -c "$pycmd"
fi
+if [ "$SLOW" ]; then
+ TEST_ARGS="--only-slow --skip-network"
+fi
+
if [ "$BUILD_TEST" ]; then
echo "We are not running pytest as this is a build test."
+
elif [ "$DOC" ]; then
echo "We are not running pytest as this is a doc-build"
+
elif [ "$COVERAGE" ]; then
echo pytest -s -m "single" --cov=pandas --cov-report xml:/tmp/cov-single.xml --junitxml=/tmp/single.xml $TEST_ARGS pandas
pytest -s -m "single" --cov=pandas --cov-report xml:/tmp/cov-single.xml --junitxml=/tmp/single.xml $TEST_ARGS pandas
+
else
echo pytest -m "single" -r xX --junitxml=/tmp/single.xml $TEST_ARGS pandas
pytest -m "single" -r xX --junitxml=/tmp/single.xml $TEST_ARGS pandas # TODO: doctest
+
fi
RET="$?"
diff --git a/doc/source/install.rst b/doc/source/install.rst
index 99d299b75b59b..f92c43839ee31 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -203,7 +203,7 @@ Dependencies
------------
* `setuptools <https://setuptools.readthedocs.io/en/latest/>`__
-* `NumPy <http://www.numpy.org>`__: 1.7.1 or higher
+* `NumPy <http://www.numpy.org>`__: 1.9.0 or higher
* `python-dateutil <http://labix.org/python-dateutil>`__: 1.5 or higher
* `pytz <http://pytz.sourceforge.net/>`__: Needed for time zone support
@@ -233,7 +233,7 @@ Optional Dependencies
* `Cython <http://www.cython.org>`__: Only necessary to build development
version. Version 0.23 or higher.
-* `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions
+* `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions, Version 0.14.0 or higher
* `xarray <http://xarray.pydata.org>`__: pandas like handling for > 2 dims, needed for converting Panels to xarray objects. Version 0.7.0 or higher is recommended.
* `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage. Version 3.0.0 or higher required, Version 3.2.1 or higher highly recommended.
* `Feather Format <https://github.com/wesm/feather>`__: necessary for feather-based storage, version 0.3.1 or higher.
@@ -244,7 +244,7 @@ Optional Dependencies
* `pymysql <https://github.com/PyMySQL/PyMySQL>`__: for MySQL.
* `SQLite <https://docs.python.org/3.5/library/sqlite3.html>`__: for SQLite, this is included in Python's standard library by default.
-* `matplotlib <http://matplotlib.org/>`__: for plotting
+* `matplotlib <http://matplotlib.org/>`__: for plotting, Version 1.4.3 or higher.
* For Excel I/O:
* `xlrd/xlwt <http://www.python-excel.org/>`__: Excel reading (xlrd) and writing (xlwt)
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index c5fe89282bf52..72eb4d4ec7240 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -138,6 +138,27 @@ Other Enhancements
Backwards incompatible API changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. _whatsnew_0210.api_breaking.deps:
+
+Dependencies have increased minimum versions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+We have updated our minimum supported versions of dependencies (:issue:`15206`,
+:issue:`15543`, :issue:`15214`). If installed, we now require:
+
+ +--------------+-----------------+----------+
+ | Package      | Minimum Version | Required |
+ +==============+=================+==========+
+ | Numpy        | 1.9.0           |    X     |
+ +--------------+-----------------+----------+
+ | Matplotlib   | 1.4.3           |          |
+ +--------------+-----------------+----------+
+ | Scipy        | 0.14.0          |          |
+ +--------------+-----------------+----------+
+ | Bottleneck   | 1.0.0           |          |
+ +--------------+-----------------+----------+
+
.. _whatsnew_0210.api_breaking.pandas_eval:
Improved error handling during item assignment in pd.eval
@@ -259,7 +280,6 @@ Other API Changes
^^^^^^^^^^^^^^^^^
- Support has been dropped for Python 3.4 (:issue:`15251`)
-- Support has been dropped for bottleneck < 1.0.0 (:issue:`15214`)
- The Categorical constructor no longer accepts a scalar for the ``categories`` keyword. (:issue:`16022`)
- Accessing a non-existent attribute on a closed :class:`~pandas.HDFStore` will now
raise an ``AttributeError`` rather than a ``ClosedFileError`` (:issue:`16301`)
diff --git a/pandas/_libs/sparse.pyx b/pandas/_libs/sparse.pyx
index 0c2e056ead7fa..1cc7f5ace95ea 100644
--- a/pandas/_libs/sparse.pyx
+++ b/pandas/_libs/sparse.pyx
@@ -12,8 +12,6 @@ from distutils.version import LooseVersion
# numpy versioning
_np_version = np.version.short_version
-_np_version_under1p8 = LooseVersion(_np_version) < '1.8'
-_np_version_under1p9 = LooseVersion(_np_version) < '1.9'
_np_version_under1p10 = LooseVersion(_np_version) < '1.10'
_np_version_under1p11 = LooseVersion(_np_version) < '1.11'
diff --git a/pandas/compat/numpy/__init__.py b/pandas/compat/numpy/__init__.py
index 2c5a18973afa8..5112957b49875 100644
--- a/pandas/compat/numpy/__init__.py
+++ b/pandas/compat/numpy/__init__.py
@@ -9,19 +9,18 @@
# numpy versioning
_np_version = np.__version__
_nlv = LooseVersion(_np_version)
-_np_version_under1p8 = _nlv < '1.8'
-_np_version_under1p9 = _nlv < '1.9'
_np_version_under1p10 = _nlv < '1.10'
_np_version_under1p11 = _nlv < '1.11'
_np_version_under1p12 = _nlv < '1.12'
_np_version_under1p13 = _nlv < '1.13'
_np_version_under1p14 = _nlv < '1.14'
+_np_version_under1p15 = _nlv < '1.15'
-if _nlv < '1.7.0':
+if _nlv < '1.9':
raise ImportError('this version of pandas is incompatible with '
- 'numpy < 1.7.0\n'
+ 'numpy < 1.9.0\n'
'your numpy version is {0}.\n'
- 'Please upgrade numpy to >= 1.7.0 to use '
+ 'Please upgrade numpy to >= 1.9.0 to use '
'this pandas version'.format(_np_version))
@@ -70,11 +69,10 @@ def np_array_datetime64_compat(arr, *args, **kwargs):
__all__ = ['np',
- '_np_version_under1p8',
- '_np_version_under1p9',
'_np_version_under1p10',
'_np_version_under1p11',
'_np_version_under1p12',
'_np_version_under1p13',
- '_np_version_under1p14'
+ '_np_version_under1p14',
+ '_np_version_under1p15'
]
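The `_np_version_under1p10`-style flags above all follow one pattern: parse the version string and compare numerically, since plain string comparison would sort '1.10' before '1.9'. A minimal sketch of that idea (pandas itself uses `distutils.version.LooseVersion`, which also handles non-numeric components):

```python
def parse_version(v):
    # Minimal numeric-only parser, enough for numpy-style versions.
    return tuple(int(p) for p in v.split('.'))

# Numeric comparison: 1.9.2 < 1.10.0, unlike lexicographic '1.9.2' > '1.10.0'.
under1p10 = parse_version('1.9.2') < parse_version('1.10.0')

# 1.9.2 meets the new 1.9 minimum, so no ImportError would be raised.
meets_minimum = not (parse_version('1.9.2') < parse_version('1.9.0'))
```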
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index f2359f3ff1a9d..ffd03096e2a27 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -6,7 +6,6 @@
from warnings import warn, catch_warnings
import numpy as np
-from pandas import compat, _np_version_under1p8
from pandas.core.dtypes.cast import maybe_promote
from pandas.core.dtypes.generic import (
ABCSeries, ABCIndex,
@@ -407,14 +406,12 @@ def isin(comps, values):
comps, dtype, _ = _ensure_data(comps)
values, _, _ = _ensure_data(values, dtype=dtype)
- # GH11232
- # work-around for numpy < 1.8 and comparisions on py3
# faster for larger cases to use np.in1d
f = lambda x, y: htable.ismember_object(x, values)
+
# GH16012
# Ensure np.in1d doesn't get object types or it *may* throw an exception
- if ((_np_version_under1p8 and compat.PY3) or len(comps) > 1000000 and
- not is_object_dtype(comps)):
+ if len(comps) > 1000000 and not is_object_dtype(comps):
f = lambda x, y: np.in1d(x, y)
elif is_integer_dtype(comps):
try:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c83b1073afc8e..5f0aac53d71f6 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1827,11 +1827,8 @@ def _box_item_values(self, key, values):
def _maybe_cache_changed(self, item, value):
"""The object has called back to us saying maybe it has changed.
-
- numpy < 1.8 has an issue with object arrays and aliasing
- GH6026
"""
- self._data.set(item, value, check=pd._np_version_under1p8)
+ self._data.set(item, value, check=False)
@property
def _is_cached(self):
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index a388892e925b6..aa7c4517c0a01 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -13,7 +13,7 @@
)
from pandas import compat
-from pandas.compat.numpy import function as nv, _np_version_under1p8
+from pandas.compat.numpy import function as nv
from pandas.compat import set_function_name
from pandas.core.dtypes.common import (
@@ -3257,11 +3257,7 @@ def value_counts(self, normalize=False, sort=True, ascending=False,
d = np.diff(np.r_[idx, len(ids)])
if dropna:
m = ids[lab == -1]
- if _np_version_under1p8:
- mi, ml = algorithms.factorize(m)
- d[ml] = d[ml] - np.bincount(mi)
- else:
- np.add.at(d, m, -1)
+ np.add.at(d, m, -1)
acc = rep(d)[mask]
else:
acc = rep(d)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index b616270e47aa6..83b382ec0ed72 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -69,8 +69,7 @@
import pandas.core.computation.expressions as expressions
from pandas.util._decorators import cache_readonly
from pandas.util._validators import validate_bool_kwarg
-
-from pandas import compat, _np_version_under1p9
+from pandas import compat
from pandas.compat import range, map, zip, u
@@ -857,9 +856,6 @@ def _is_empty_indexer(indexer):
# set
else:
- if _np_version_under1p9:
- # Work around GH 6168 to support old numpy
- indexer = getattr(indexer, 'values', indexer)
values[indexer] = value
# coerce and try to infer the dtypes of the result
@@ -1482,15 +1478,7 @@ def quantile(self, qs, interpolation='linear', axis=0, mgr=None):
tuple of (axis, block)
"""
- if _np_version_under1p9:
- if interpolation != 'linear':
- raise ValueError("Interpolation methods other than linear "
- "are not supported in numpy < 1.9.")
-
- kw = {}
- if not _np_version_under1p9:
- kw.update({'interpolation': interpolation})
-
+ kw = {'interpolation': interpolation}
values = self.get_values()
values, _, _, _ = self._try_coerce_args(values, values)
diff --git a/pandas/tests/frame/test_quantile.py b/pandas/tests/frame/test_quantile.py
index 2482e493dbefd..2f264874378bc 100644
--- a/pandas/tests/frame/test_quantile.py
+++ b/pandas/tests/frame/test_quantile.py
@@ -12,7 +12,6 @@
from pandas.util.testing import assert_series_equal, assert_frame_equal
import pandas.util.testing as tm
-from pandas import _np_version_under1p9
from pandas.tests.frame.common import TestData
@@ -103,9 +102,6 @@ def test_quantile_axis_parameter(self):
def test_quantile_interpolation(self):
# see gh-10174
- if _np_version_under1p9:
- pytest.skip("Numpy version under 1.9")
-
from numpy import percentile
# interpolation = linear (default case)
@@ -166,44 +162,6 @@ def test_quantile_interpolation(self):
index=[.25, .5], columns=['a', 'b', 'c'])
assert_frame_equal(result, expected)
- def test_quantile_interpolation_np_lt_1p9(self):
- # see gh-10174
- if not _np_version_under1p9:
- pytest.skip("Numpy version is greater than 1.9")
-
- from numpy import percentile
-
- # interpolation = linear (default case)
- q = self.tsframe.quantile(0.1, axis=0, interpolation='linear')
- assert q['A'] == percentile(self.tsframe['A'], 10)
- q = self.intframe.quantile(0.1)
- assert q['A'] == percentile(self.intframe['A'], 10)
-
- # test with and without interpolation keyword
- q1 = self.intframe.quantile(0.1)
- assert q1['A'] == np.percentile(self.intframe['A'], 10)
- assert_series_equal(q, q1)
-
- # interpolation method other than default linear
- msg = "Interpolation methods other than linear"
- df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3])
- with tm.assert_raises_regex(ValueError, msg):
- df.quantile(.5, axis=1, interpolation='nearest')
-
- with tm.assert_raises_regex(ValueError, msg):
- df.quantile([.5, .75], axis=1, interpolation='lower')
-
- # test degenerate case
- df = DataFrame({'x': [], 'y': []})
- with tm.assert_raises_regex(ValueError, msg):
- q = df.quantile(0.1, axis=0, interpolation='higher')
-
- # multi
- df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]],
- columns=['a', 'b', 'c'])
- with tm.assert_raises_regex(ValueError, msg):
- df.quantile([.25, .5], interpolation='midpoint')
-
def test_quantile_multi(self):
df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]],
columns=['a', 'b', 'c'])
diff --git a/pandas/tests/frame/test_rank.py b/pandas/tests/frame/test_rank.py
index acf887d047c9e..58f4d9b770173 100644
--- a/pandas/tests/frame/test_rank.py
+++ b/pandas/tests/frame/test_rank.py
@@ -1,4 +1,5 @@
# -*- coding: utf-8 -*-
+import pytest
from datetime import timedelta, datetime
from distutils.version import LooseVersion
from numpy import nan
@@ -26,8 +27,7 @@ class TestRank(TestData):
}
def test_rank(self):
- tm._skip_if_no_scipy()
- from scipy.stats import rankdata
+        rankdata = pytest.importorskip('scipy.stats').rankdata
self.frame['A'][::2] = np.nan
self.frame['B'][::3] = np.nan
@@ -120,8 +120,7 @@ def test_rank2(self):
tm.assert_frame_equal(df.rank(), exp)
def test_rank_na_option(self):
- tm._skip_if_no_scipy()
- from scipy.stats import rankdata
+        rankdata = pytest.importorskip('scipy.stats').rankdata
self.frame['A'][::2] = np.nan
self.frame['B'][::3] = np.nan
@@ -193,10 +192,9 @@ def test_rank_axis(self):
tm.assert_frame_equal(df.rank(axis=1), df.rank(axis='columns'))
def test_rank_methods_frame(self):
- tm.skip_if_no_package('scipy', min_version='0.13',
- app='scipy.stats.rankdata')
+        pytest.importorskip('scipy.special')
+        rankdata = pytest.importorskip('scipy.stats').rankdata
import scipy
- from scipy.stats import rankdata
xs = np.random.randint(0, 21, (100, 26))
xs = (xs - 10.0) / 10.0
diff --git a/pandas/tests/indexes/datetimes/test_datetime.py b/pandas/tests/indexes/datetimes/test_datetime.py
index f99dcee9e5c8a..47f53f53cfd02 100644
--- a/pandas/tests/indexes/datetimes/test_datetime.py
+++ b/pandas/tests/indexes/datetimes/test_datetime.py
@@ -9,7 +9,7 @@
from pandas.compat import lrange
from pandas.compat.numpy import np_datetime64_compat
from pandas import (DatetimeIndex, Index, date_range, Series, DataFrame,
- Timestamp, datetime, offsets, _np_version_under1p8)
+ Timestamp, datetime, offsets)
from pandas.util.testing import assert_series_equal, assert_almost_equal
@@ -276,11 +276,7 @@ def test_comparisons_nat(self):
np_datetime64_compat('2014-06-01 00:00Z'),
np_datetime64_compat('2014-07-01 00:00Z')])
- if _np_version_under1p8:
- # cannot test array because np.datetime('nat') returns today's date
- cases = [(fidx1, fidx2), (didx1, didx2)]
- else:
- cases = [(fidx1, fidx2), (didx1, didx2), (didx1, darr)]
+ cases = [(fidx1, fidx2), (didx1, didx2), (didx1, darr)]
# Check pd.NaT is handles as the same as np.nan
with tm.assert_produces_warning(None):
diff --git a/pandas/tests/indexes/period/test_indexing.py b/pandas/tests/indexes/period/test_indexing.py
index d4dac1cf88fff..efc13a56cd77e 100644
--- a/pandas/tests/indexes/period/test_indexing.py
+++ b/pandas/tests/indexes/period/test_indexing.py
@@ -8,7 +8,7 @@
from pandas.compat import lrange
from pandas._libs import tslib
from pandas import (PeriodIndex, Series, DatetimeIndex,
- period_range, Period, _np_version_under1p9)
+ period_range, Period)
class TestGetItem(object):
@@ -149,16 +149,12 @@ def test_getitem_seconds(self):
values = ['2014', '2013/02', '2013/01/02', '2013/02/01 9H',
'2013/02/01 09:00']
for v in values:
- if _np_version_under1p9:
- with pytest.raises(ValueError):
- idx[v]
- else:
- # GH7116
- # these show deprecations as we are trying
- # to slice with non-integer indexers
- # with pytest.raises(IndexError):
- # idx[v]
- continue
+ # GH7116
+ # these show deprecations as we are trying
+ # to slice with non-integer indexers
+ # with pytest.raises(IndexError):
+ # idx[v]
+ continue
s = Series(np.random.rand(len(idx)), index=idx)
tm.assert_series_equal(s['2013/01/01 10:00'], s[3600:3660])
@@ -178,16 +174,12 @@ def test_getitem_day(self):
'2013/02/01 09:00']
for v in values:
- if _np_version_under1p9:
- with pytest.raises(ValueError):
- idx[v]
- else:
- # GH7116
- # these show deprecations as we are trying
- # to slice with non-integer indexers
- # with pytest.raises(IndexError):
- # idx[v]
- continue
+ # GH7116
+ # these show deprecations as we are trying
+ # to slice with non-integer indexers
+ # with pytest.raises(IndexError):
+ # idx[v]
+ continue
s = Series(np.random.rand(len(idx)), index=idx)
tm.assert_series_equal(s['2013/01'], s[0:31])
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py
index 59e4b1432b8bc..0b3bd0b03bccf 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta.py
@@ -7,7 +7,7 @@
import pandas.util.testing as tm
from pandas import (timedelta_range, date_range, Series, Timedelta,
DatetimeIndex, TimedeltaIndex, Index, DataFrame,
- Int64Index, _np_version_under1p8)
+ Int64Index)
from pandas.util.testing import (assert_almost_equal, assert_series_equal,
assert_index_equal)
@@ -379,11 +379,7 @@ def test_comparisons_nat(self):
np.timedelta64(1, 'D') + np.timedelta64(2, 's'),
np.timedelta64(5, 'D') + np.timedelta64(3, 's')])
- if _np_version_under1p8:
- # cannot test array because np.datetime('nat') returns today's date
- cases = [(tdidx1, tdidx2)]
- else:
- cases = [(tdidx1, tdidx2), (tdidx1, tdarr)]
+ cases = [(tdidx1, tdidx2), (tdidx1, tdarr)]
# Check pd.NaT is handled the same as np.nan
for idx1, idx2 in cases:
diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index 3ab443b223f20..dfab539e9474c 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -39,7 +39,8 @@ def _ok_for_gaussian_kde(kind):
from scipy.stats import gaussian_kde # noqa
except ImportError:
return False
- return True
+
+ return plotting._compat._mpl_ge_1_5_0()
class TestPlotBase(object):
diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index e9c7d806fd65d..cff0c1c0b424e 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -610,6 +610,8 @@ def test_secondary_y_ts(self):
@pytest.mark.slow
def test_secondary_kde(self):
+ if not self.mpl_ge_1_5_0:
+ pytest.skip("mpl is not supported")
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 6d813ac76cc4e..67098529a0111 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -473,7 +473,6 @@ def test_subplots_multiple_axes(self):
# TestDataFrameGroupByPlots.test_grouped_box_multiple_axes
fig, axes = self.plt.subplots(2, 2)
with warnings.catch_warnings():
- warnings.simplefilter('ignore')
df = DataFrame(np.random.rand(10, 4),
index=list(string.ascii_letters[:10]))
@@ -1290,6 +1289,9 @@ def test_boxplot_subplots_return_type(self):
def test_kde_df(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
+ if not self.mpl_ge_1_5_0:
+ pytest.skip("mpl is not supported")
+
df = DataFrame(randn(100, 4))
ax = _check_plot_works(df.plot, kind='kde')
expected = [pprint_thing(c) for c in df.columns]
@@ -1311,6 +1313,9 @@ def test_kde_df(self):
def test_kde_missing_vals(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
+ if not self.mpl_ge_1_5_0:
+ pytest.skip("mpl is not supported")
+
df = DataFrame(np.random.uniform(size=(100, 4)))
df.loc[0, 0] = np.nan
_check_plot_works(df.plot, kind='kde')
@@ -1835,6 +1840,8 @@ def test_hist_colors(self):
def test_kde_colors(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
+ if not self.mpl_ge_1_5_0:
+ pytest.skip("mpl is not supported")
from matplotlib import cm
@@ -1858,6 +1865,8 @@ def test_kde_colors(self):
def test_kde_colors_and_styles_subplots(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
+ if not self.mpl_ge_1_5_0:
+ pytest.skip("mpl is not supported")
from matplotlib import cm
default_colors = self._maybe_unpack_cycler(self.plt.rcParams)
@@ -2160,71 +2169,74 @@ def test_pie_df_nan(self):
@pytest.mark.slow
def test_errorbar_plot(self):
- d = {'x': np.arange(12), 'y': np.arange(12, 0, -1)}
- df = DataFrame(d)
- d_err = {'x': np.ones(12) * 0.2, 'y': np.ones(12) * 0.4}
- df_err = DataFrame(d_err)
-
- # check line plots
- ax = _check_plot_works(df.plot, yerr=df_err, logy=True)
- self._check_has_errorbars(ax, xerr=0, yerr=2)
- ax = _check_plot_works(df.plot, yerr=df_err, logx=True, logy=True)
- self._check_has_errorbars(ax, xerr=0, yerr=2)
- ax = _check_plot_works(df.plot, yerr=df_err, loglog=True)
- self._check_has_errorbars(ax, xerr=0, yerr=2)
+ with warnings.catch_warnings():
+ d = {'x': np.arange(12), 'y': np.arange(12, 0, -1)}
+ df = DataFrame(d)
+ d_err = {'x': np.ones(12) * 0.2, 'y': np.ones(12) * 0.4}
+ df_err = DataFrame(d_err)
- kinds = ['line', 'bar', 'barh']
- for kind in kinds:
- ax = _check_plot_works(df.plot, yerr=df_err['x'], kind=kind)
+ # check line plots
+ ax = _check_plot_works(df.plot, yerr=df_err, logy=True)
self._check_has_errorbars(ax, xerr=0, yerr=2)
- ax = _check_plot_works(df.plot, yerr=d_err, kind=kind)
+ ax = _check_plot_works(df.plot, yerr=df_err, logx=True, logy=True)
self._check_has_errorbars(ax, xerr=0, yerr=2)
- ax = _check_plot_works(df.plot, yerr=df_err, xerr=df_err,
- kind=kind)
- self._check_has_errorbars(ax, xerr=2, yerr=2)
- ax = _check_plot_works(df.plot, yerr=df_err['x'], xerr=df_err['x'],
- kind=kind)
- self._check_has_errorbars(ax, xerr=2, yerr=2)
- ax = _check_plot_works(df.plot, xerr=0.2, yerr=0.2, kind=kind)
- self._check_has_errorbars(ax, xerr=2, yerr=2)
- # _check_plot_works adds an ax so catch warning. see GH #13188
- with tm.assert_produces_warning(UserWarning):
+ ax = _check_plot_works(df.plot, yerr=df_err, loglog=True)
+ self._check_has_errorbars(ax, xerr=0, yerr=2)
+
+ kinds = ['line', 'bar', 'barh']
+ for kind in kinds:
+ ax = _check_plot_works(df.plot, yerr=df_err['x'], kind=kind)
+ self._check_has_errorbars(ax, xerr=0, yerr=2)
+ ax = _check_plot_works(df.plot, yerr=d_err, kind=kind)
+ self._check_has_errorbars(ax, xerr=0, yerr=2)
+ ax = _check_plot_works(df.plot, yerr=df_err, xerr=df_err,
+ kind=kind)
+ self._check_has_errorbars(ax, xerr=2, yerr=2)
+ ax = _check_plot_works(df.plot, yerr=df_err['x'],
+ xerr=df_err['x'],
+ kind=kind)
+ self._check_has_errorbars(ax, xerr=2, yerr=2)
+ ax = _check_plot_works(df.plot, xerr=0.2, yerr=0.2, kind=kind)
+ self._check_has_errorbars(ax, xerr=2, yerr=2)
+
+ # _check_plot_works adds an ax so catch warning. see GH #13188
axes = _check_plot_works(df.plot,
yerr=df_err, xerr=df_err,
subplots=True,
kind=kind)
- self._check_has_errorbars(axes, xerr=1, yerr=1)
-
- ax = _check_plot_works((df + 1).plot, yerr=df_err,
- xerr=df_err, kind='bar', log=True)
- self._check_has_errorbars(ax, xerr=2, yerr=2)
+ self._check_has_errorbars(axes, xerr=1, yerr=1)
- # yerr is raw error values
- ax = _check_plot_works(df['y'].plot, yerr=np.ones(12) * 0.4)
- self._check_has_errorbars(ax, xerr=0, yerr=1)
- ax = _check_plot_works(df.plot, yerr=np.ones((2, 12)) * 0.4)
- self._check_has_errorbars(ax, xerr=0, yerr=2)
+ ax = _check_plot_works((df + 1).plot, yerr=df_err,
+ xerr=df_err, kind='bar', log=True)
+ self._check_has_errorbars(ax, xerr=2, yerr=2)
- # yerr is iterator
- import itertools
- ax = _check_plot_works(df.plot, yerr=itertools.repeat(0.1, len(df)))
- self._check_has_errorbars(ax, xerr=0, yerr=2)
+ # yerr is raw error values
+ ax = _check_plot_works(df['y'].plot, yerr=np.ones(12) * 0.4)
+ self._check_has_errorbars(ax, xerr=0, yerr=1)
+ ax = _check_plot_works(df.plot, yerr=np.ones((2, 12)) * 0.4)
+ self._check_has_errorbars(ax, xerr=0, yerr=2)
- # yerr is column name
- for yerr in ['yerr', u('誤差')]:
- s_df = df.copy()
- s_df[yerr] = np.ones(12) * 0.2
- ax = _check_plot_works(s_df.plot, yerr=yerr)
+ # yerr is iterator
+ import itertools
+ ax = _check_plot_works(df.plot,
+ yerr=itertools.repeat(0.1, len(df)))
self._check_has_errorbars(ax, xerr=0, yerr=2)
- ax = _check_plot_works(s_df.plot, y='y', x='x', yerr=yerr)
- self._check_has_errorbars(ax, xerr=0, yerr=1)
- with pytest.raises(ValueError):
- df.plot(yerr=np.random.randn(11))
+ # yerr is column name
+ for yerr in ['yerr', u('誤差')]:
+ s_df = df.copy()
+ s_df[yerr] = np.ones(12) * 0.2
+ ax = _check_plot_works(s_df.plot, yerr=yerr)
+ self._check_has_errorbars(ax, xerr=0, yerr=2)
+ ax = _check_plot_works(s_df.plot, y='y', x='x', yerr=yerr)
+ self._check_has_errorbars(ax, xerr=0, yerr=1)
- df_err = DataFrame({'x': ['zzz'] * 12, 'y': ['zzz'] * 12})
- with pytest.raises((ValueError, TypeError)):
- df.plot(yerr=df_err)
+ with pytest.raises(ValueError):
+ df.plot(yerr=np.random.randn(11))
+
+ df_err = DataFrame({'x': ['zzz'] * 12, 'y': ['zzz'] * 12})
+ with pytest.raises((ValueError, TypeError)):
+ df.plot(yerr=df_err)
@pytest.mark.slow
def test_errorbar_with_integer_column_names(self):
@@ -2262,33 +2274,34 @@ def test_errorbar_with_partial_columns(self):
@pytest.mark.slow
def test_errorbar_timeseries(self):
- d = {'x': np.arange(12), 'y': np.arange(12, 0, -1)}
- d_err = {'x': np.ones(12) * 0.2, 'y': np.ones(12) * 0.4}
+ with warnings.catch_warnings():
+ d = {'x': np.arange(12), 'y': np.arange(12, 0, -1)}
+ d_err = {'x': np.ones(12) * 0.2, 'y': np.ones(12) * 0.4}
- # check time-series plots
- ix = date_range('1/1/2000', '1/1/2001', freq='M')
- tdf = DataFrame(d, index=ix)
- tdf_err = DataFrame(d_err, index=ix)
+ # check time-series plots
+ ix = date_range('1/1/2000', '1/1/2001', freq='M')
+ tdf = DataFrame(d, index=ix)
+ tdf_err = DataFrame(d_err, index=ix)
- kinds = ['line', 'bar', 'barh']
- for kind in kinds:
- ax = _check_plot_works(tdf.plot, yerr=tdf_err, kind=kind)
- self._check_has_errorbars(ax, xerr=0, yerr=2)
- ax = _check_plot_works(tdf.plot, yerr=d_err, kind=kind)
- self._check_has_errorbars(ax, xerr=0, yerr=2)
- ax = _check_plot_works(tdf.plot, y='y', yerr=tdf_err['x'],
- kind=kind)
- self._check_has_errorbars(ax, xerr=0, yerr=1)
- ax = _check_plot_works(tdf.plot, y='y', yerr='x', kind=kind)
- self._check_has_errorbars(ax, xerr=0, yerr=1)
- ax = _check_plot_works(tdf.plot, yerr=tdf_err, kind=kind)
- self._check_has_errorbars(ax, xerr=0, yerr=2)
- # _check_plot_works adds an ax so catch warning. see GH #13188
- with tm.assert_produces_warning(UserWarning):
+ kinds = ['line', 'bar', 'barh']
+ for kind in kinds:
+ ax = _check_plot_works(tdf.plot, yerr=tdf_err, kind=kind)
+ self._check_has_errorbars(ax, xerr=0, yerr=2)
+ ax = _check_plot_works(tdf.plot, yerr=d_err, kind=kind)
+ self._check_has_errorbars(ax, xerr=0, yerr=2)
+ ax = _check_plot_works(tdf.plot, y='y', yerr=tdf_err['x'],
+ kind=kind)
+ self._check_has_errorbars(ax, xerr=0, yerr=1)
+ ax = _check_plot_works(tdf.plot, y='y', yerr='x', kind=kind)
+ self._check_has_errorbars(ax, xerr=0, yerr=1)
+ ax = _check_plot_works(tdf.plot, yerr=tdf_err, kind=kind)
+ self._check_has_errorbars(ax, xerr=0, yerr=2)
+
+ # _check_plot_works adds an ax so catch warning. see GH #13188
axes = _check_plot_works(tdf.plot,
kind=kind, yerr=tdf_err,
subplots=True)
- self._check_has_errorbars(axes, xerr=0, yerr=1)
+ self._check_has_errorbars(axes, xerr=0, yerr=1)
def test_errorbar_asymmetrical(self):
diff --git a/pandas/tests/plotting/test_misc.py b/pandas/tests/plotting/test_misc.py
index 684a943fb5a69..c4795ea1e1eca 100644
--- a/pandas/tests/plotting/test_misc.py
+++ b/pandas/tests/plotting/test_misc.py
@@ -4,7 +4,7 @@
import pytest
-from pandas import Series, DataFrame
+from pandas import DataFrame
from pandas.compat import lmap
import pandas.util.testing as tm
@@ -13,8 +13,7 @@
from numpy.random import randn
import pandas.plotting as plotting
-from pandas.tests.plotting.common import (TestPlotBase, _check_plot_works,
- _ok_for_gaussian_kde)
+from pandas.tests.plotting.common import TestPlotBase, _check_plot_works
tm._skip_if_no_mpl()
@@ -52,46 +51,6 @@ def test_bootstrap_plot(self):
class TestDataFramePlots(TestPlotBase):
- @pytest.mark.slow
- def test_scatter_plot_legacy(self):
- tm._skip_if_no_scipy()
-
- df = DataFrame(randn(100, 2))
-
- def scat(**kwds):
- return plotting.scatter_matrix(df, **kwds)
-
- with tm.assert_produces_warning(UserWarning):
- _check_plot_works(scat)
- with tm.assert_produces_warning(UserWarning):
- _check_plot_works(scat, marker='+')
- with tm.assert_produces_warning(UserWarning):
- _check_plot_works(scat, vmin=0)
- if _ok_for_gaussian_kde('kde'):
- with tm.assert_produces_warning(UserWarning):
- _check_plot_works(scat, diagonal='kde')
- if _ok_for_gaussian_kde('density'):
- with tm.assert_produces_warning(UserWarning):
- _check_plot_works(scat, diagonal='density')
- with tm.assert_produces_warning(UserWarning):
- _check_plot_works(scat, diagonal='hist')
- with tm.assert_produces_warning(UserWarning):
- _check_plot_works(scat, range_padding=.1)
- with tm.assert_produces_warning(UserWarning):
- _check_plot_works(scat, color='rgb')
- with tm.assert_produces_warning(UserWarning):
- _check_plot_works(scat, c='rgb')
- with tm.assert_produces_warning(UserWarning):
- _check_plot_works(scat, facecolor='rgb')
-
- def scat2(x, y, by=None, ax=None, figsize=None):
- return plotting._core.scatter_plot(df, x, y, by, ax, figsize=None)
-
- _check_plot_works(scat2, x=0, y=1)
- grouper = Series(np.repeat([1, 2, 3, 4, 5], 20), df.index)
- with tm.assert_produces_warning(UserWarning):
- _check_plot_works(scat2, x=0, y=1, by=grouper)
-
def test_scatter_matrix_axis(self):
tm._skip_if_no_scipy()
scatter_matrix = plotting.scatter_matrix
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 9c9011ba1ca7b..8164ad74a190a 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -571,6 +571,9 @@ def test_plot_fails_with_dupe_color_and_style(self):
@pytest.mark.slow
def test_hist_kde(self):
+ if not self.mpl_ge_1_5_0:
+ pytest.skip("mpl is not supported")
+
_, ax = self.plt.subplots()
ax = self.ts.plot.hist(logy=True, ax=ax)
self._check_ax_scales(ax, yaxis='log')
@@ -596,6 +599,9 @@ def test_hist_kde(self):
def test_kde_kwargs(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
+ if not self.mpl_ge_1_5_0:
+ pytest.skip("mpl is not supported")
+
from numpy import linspace
_check_plot_works(self.ts.plot.kde, bw_method=.5,
ind=linspace(-100, 100, 20))
@@ -611,6 +617,9 @@ def test_kde_kwargs(self):
def test_kde_missing_vals(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
+ if not self.mpl_ge_1_5_0:
+ pytest.skip("mpl is not supported")
+
s = Series(np.random.uniform(size=50))
s[0] = np.nan
axes = _check_plot_works(s.plot.kde)
@@ -638,6 +647,9 @@ def test_hist_kwargs(self):
@pytest.mark.slow
def test_hist_kde_color(self):
+ if not self.mpl_ge_1_5_0:
+ pytest.skip("mpl is not supported")
+
_, ax = self.plt.subplots()
ax = self.ts.plot.hist(logy=True, bins=10, color='b', ax=ax)
self._check_ax_scales(ax, yaxis='log')
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index 4888f8fe996b6..114a055de8195 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -14,8 +14,7 @@
import pandas as pd
from pandas import (Index, Series, DataFrame, isna, bdate_range,
- NaT, date_range, timedelta_range,
- _np_version_under1p8)
+ NaT, date_range, timedelta_range)
from pandas.core.indexes.datetimes import Timestamp
from pandas.core.indexes.timedeltas import Timedelta
import pandas.core.nanops as nanops
@@ -687,14 +686,13 @@ def run_ops(ops, get_ser, test_ser):
assert_series_equal(result, exp)
# odd numpy behavior with scalar timedeltas
- if not _np_version_under1p8:
- result = td1[0] + dt1
- exp = (dt1.dt.tz_localize(None) + td1[0]).dt.tz_localize(tz)
- assert_series_equal(result, exp)
+ result = td1[0] + dt1
+ exp = (dt1.dt.tz_localize(None) + td1[0]).dt.tz_localize(tz)
+ assert_series_equal(result, exp)
- result = td2[0] + dt2
- exp = (dt2.dt.tz_localize(None) + td2[0]).dt.tz_localize(tz)
- assert_series_equal(result, exp)
+ result = td2[0] + dt2
+ exp = (dt2.dt.tz_localize(None) + td2[0]).dt.tz_localize(tz)
+ assert_series_equal(result, exp)
result = dt1 - td1[0]
exp = (dt1.dt.tz_localize(None) - td1[0]).dt.tz_localize(tz)
diff --git a/pandas/tests/series/test_quantile.py b/pandas/tests/series/test_quantile.py
index 21379641a78d8..cf5e3fe4f29b0 100644
--- a/pandas/tests/series/test_quantile.py
+++ b/pandas/tests/series/test_quantile.py
@@ -1,11 +1,10 @@
# coding=utf-8
# pylint: disable-msg=E1101,W0612
-import pytest
import numpy as np
import pandas as pd
-from pandas import (Index, Series, _np_version_under1p9)
+from pandas import Index, Series
from pandas.core.indexes.datetimes import Timestamp
from pandas.core.dtypes.common import is_integer
import pandas.util.testing as tm
@@ -68,8 +67,6 @@ def test_quantile_multi(self):
[], dtype=float))
tm.assert_series_equal(result, expected)
- @pytest.mark.skipif(_np_version_under1p9,
- reason="Numpy version is under 1.9")
def test_quantile_interpolation(self):
# see gh-10174
@@ -82,8 +79,6 @@ def test_quantile_interpolation(self):
# test with and without interpolation keyword
assert q == q1
- @pytest.mark.skipif(_np_version_under1p9,
- reason="Numpy version is under 1.9")
def test_quantile_interpolation_dtype(self):
# GH #10174
@@ -96,26 +91,6 @@ def test_quantile_interpolation_dtype(self):
assert q == np.percentile(np.array([1, 3, 4]), 50)
assert is_integer(q)
- @pytest.mark.skipif(not _np_version_under1p9,
- reason="Numpy version is greater 1.9")
- def test_quantile_interpolation_np_lt_1p9(self):
- # GH #10174
-
- # interpolation = linear (default case)
- q = self.ts.quantile(0.1, interpolation='linear')
- assert q == np.percentile(self.ts.valid(), 10)
- q1 = self.ts.quantile(0.1)
- assert q1 == np.percentile(self.ts.valid(), 10)
-
- # interpolation other than linear
- msg = "Interpolation methods other than "
- with tm.assert_raises_regex(ValueError, msg):
- self.ts.quantile(0.9, interpolation='nearest')
-
- # object dtype
- with tm.assert_raises_regex(ValueError, msg):
- Series(self.ts, dtype=object).quantile(0.7, interpolation='higher')
-
def test_quantile_nan(self):
# GH 13098
diff --git a/pandas/tests/series/test_rank.py b/pandas/tests/series/test_rank.py
index ff489eb7f15b1..128a4cdd845e6 100644
--- a/pandas/tests/series/test_rank.py
+++ b/pandas/tests/series/test_rank.py
@@ -28,8 +28,8 @@ class TestSeriesRank(TestData):
}
def test_rank(self):
- tm._skip_if_no_scipy()
- from scipy.stats import rankdata
+ pytest.importorskip('scipy.stats.special')
+ rankdata = pytest.importorskip('scipy.stats.rankdata')
self.ts[::2] = np.nan
self.ts[:10][::3] = 4.
@@ -246,10 +246,9 @@ def _check(s, expected, method='average'):
_check(series, results[method], method=method)
def test_rank_methods_series(self):
- tm.skip_if_no_package('scipy', min_version='0.13',
- app='scipy.stats.rankdata')
+ pytest.importorskip('scipy.stats.special')
+ rankdata = pytest.importorskip('scipy.stats.rankdata')
import scipy
- from scipy.stats import rankdata
xs = np.random.randn(9)
xs = np.concatenate([xs[i:] for i in range(0, 9, 2)]) # add duplicates
diff --git a/pandas/tests/sparse/test_array.py b/pandas/tests/sparse/test_array.py
index 4ce03f72dbba6..b0a9182a265fe 100644
--- a/pandas/tests/sparse/test_array.py
+++ b/pandas/tests/sparse/test_array.py
@@ -8,7 +8,6 @@
from numpy import nan
import numpy as np
-from pandas import _np_version_under1p8
from pandas.core.sparse.api import SparseArray, SparseSeries
from pandas._libs.sparse import IntIndex
from pandas.util.testing import assert_almost_equal
@@ -150,10 +149,8 @@ def test_take(self):
assert np.isnan(self.arr.take(0))
assert np.isscalar(self.arr.take(2))
- # np.take in < 1.8 doesn't support scalar indexing
- if not _np_version_under1p8:
- assert self.arr.take(2) == np.take(self.arr_data, 2)
- assert self.arr.take(6) == np.take(self.arr_data, 6)
+ assert self.arr.take(2) == np.take(self.arr_data, 2)
+ assert self.arr.take(6) == np.take(self.arr_data, 6)
exp = SparseArray(np.take(self.arr_data, [2, 3]))
tm.assert_sp_array_equal(self.arr.take([2, 3]), exp)
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 2a22fc9d32919..9305504f8d5e3 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -8,7 +8,7 @@
import numpy as np
import pandas as pd
-from pandas import Series, isna, _np_version_under1p9
+from pandas import Series, isna
from pandas.core.dtypes.common import is_integer_dtype
import pandas.core.nanops as nanops
import pandas.util.testing as tm
@@ -340,15 +340,13 @@ def test_nanmean_overflow(self):
# In the previous implementation mean can overflow for int dtypes, it
# is now consistent with numpy
- # numpy < 1.9.0 is not computing this correctly
- if not _np_version_under1p9:
- for a in [2 ** 55, -2 ** 55, 20150515061816532]:
- s = Series(a, index=range(500), dtype=np.int64)
- result = s.mean()
- np_result = s.values.mean()
- assert result == a
- assert result == np_result
- assert result.dtype == np.float64
+ for a in [2 ** 55, -2 ** 55, 20150515061816532]:
+ s = Series(a, index=range(500), dtype=np.int64)
+ result = s.mean()
+ np_result = s.values.mean()
+ assert result == a
+ assert result == np_result
+ assert result.dtype == np.float64
def test_returned_dtype(self):
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index d938d5bf9f3ab..d42e37048d87f 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -1688,7 +1688,7 @@ def test_resample_dtype_preservation(self):
def test_resample_dtype_coerceion(self):
- pytest.importorskip('scipy')
+ pytest.importorskip('scipy.interpolate')
# GH 16361
df = {"a": [1, 3, 1, 4]}
diff --git a/pandas/tests/tools/test_numeric.py b/pandas/tests/tools/test_numeric.py
index 664a97640387e..1d13ba93ba759 100644
--- a/pandas/tests/tools/test_numeric.py
+++ b/pandas/tests/tools/test_numeric.py
@@ -3,7 +3,7 @@
import numpy as np
import pandas as pd
-from pandas import to_numeric, _np_version_under1p9
+from pandas import to_numeric
from pandas.util import testing as tm
from numpy import iinfo
@@ -355,9 +355,6 @@ def test_downcast(self):
def test_downcast_limits(self):
# Test the limits of each downcast. Bug: #14401.
- # Check to make sure numpy is new enough to run this test.
- if _np_version_under1p9:
- pytest.skip("Numpy version is under 1.9")
i = 'integer'
u = 'unsigned'
diff --git a/setup.py b/setup.py
index a912b25328954..04a5684c20fcd 100755
--- a/setup.py
+++ b/setup.py
@@ -45,7 +45,7 @@ def is_platform_mac():
_have_setuptools = False
setuptools_kwargs = {}
-min_numpy_ver = '1.7.0'
+min_numpy_ver = '1.9.0'
if sys.version_info[0] >= 3:
setuptools_kwargs = {
| closes #15206, numpy >= 1.9
closes #15543, matplotlib >= 1.4.3
closes #15214, bottleneck >= 1.0.0
scipy >= 0.14.0
| https://api.github.com/repos/pandas-dev/pandas/pulls/17002 | 2017-07-18T01:11:01Z | 2017-08-22T09:50:58Z | 2017-08-22T09:50:58Z | 2017-08-22T10:50:57Z |
COMPAT: np.full not available in all versions, xref #16773 | diff --git a/pandas/core/sparse/frame.py b/pandas/core/sparse/frame.py
index e157ae16e71f9..5fe96d70fc16f 100644
--- a/pandas/core/sparse/frame.py
+++ b/pandas/core/sparse/frame.py
@@ -163,7 +163,9 @@ def _init_dict(self, data, index, columns, dtype=None):
# TODO: figure out how to handle this case, all nan's?
# add in any other columns we want to have (completeness)
- nan_arr = sp_maker(np.full(len(index), np.nan))
+ nan_arr = np.empty(len(index), dtype='float64')
+ nan_arr.fill(np.nan)
+ nan_arr = sp_maker(nan_arr)
sdict.update((c, nan_arr) for c in columns if c not in sdict)
return to_manager(sdict, columns, index)
| https://api.github.com/repos/pandas-dev/pandas/pulls/17000 | 2017-07-17T23:34:16Z | 2017-07-18T01:31:43Z | 2017-07-18T01:31:43Z | 2017-07-18T01:31:43Z | |
DOC: Make highlight functions match documentation | diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index b08d3877f3b03..d88a230b42403 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -1054,9 +1054,9 @@ def highlight_max(self, subset=None, color='yellow', axis=0):
subset: IndexSlice, default None
a valid slice for ``data`` to limit the style application to
color: str, default 'yellow'
- axis: int, str, or None; default None
- 0 or 'index' for columnwise, 1 or 'columns' for rowwise
- or ``None`` for tablewise (the default)
+ axis: int, str, or None; default 0
+ 0 or 'index' for columnwise (default), 1 or 'columns' for rowwise,
+ or ``None`` for tablewise
Returns
-------
@@ -1076,9 +1076,9 @@ def highlight_min(self, subset=None, color='yellow', axis=0):
subset: IndexSlice, default None
a valid slice for ``data`` to limit the style application to
color: str, default 'yellow'
- axis: int, str, or None; default None
- 0 or 'index' for columnwise, 1 or 'columns' for rowwise
- or ``None`` for tablewise (the default)
+ axis: int, str, or None; default 0
+ 0 or 'index' for columnwise (default), 1 or 'columns' for rowwise,
+ or ``None`` for tablewise
Returns
-------
| Fixes mismatch between function and docs defined in #16998
| https://api.github.com/repos/pandas-dev/pandas/pulls/16999 | 2017-07-17T21:37:45Z | 2017-07-18T16:08:04Z | 2017-07-18T16:08:04Z | 2017-07-18T16:08:13Z |
BUG: np.inf now causes Index to upcast from int to float | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 935e9d740b91c..b8c4cf61edc5b 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -145,6 +145,7 @@ Bug Fixes
~~~~~~~~~
- Fixes regression in 0.20, :func:`Series.aggregate` and :func:`DataFrame.aggregate` allow dictionaries as return values again (:issue:`16741`)
+- Fixes bug where indexing with ``np.inf`` caused an ``OverflowError`` to be raised (:issue:`16957`)
Conversion
^^^^^^^^^^
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index bbbc19b36964d..5d50f961927c7 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -666,7 +666,7 @@ def _try_convert_to_int_index(cls, data, copy, name):
res = data.astype('u8', copy=False)
if (res == data).all():
return UInt64Index(res, copy=copy, name=name)
- except (TypeError, ValueError):
+ except (OverflowError, TypeError, ValueError):
pass
raise ValueError
@@ -1640,7 +1640,7 @@ def __contains__(self, key):
hash(key)
try:
return key in self._engine
- except (TypeError, ValueError):
+ except (OverflowError, TypeError, ValueError):
return False
_index_shared_docs['contains'] = """
@@ -3365,7 +3365,7 @@ def _maybe_cast_indexer(self, key):
ckey = int(key)
if ckey == key:
key = ckey
- except (ValueError, TypeError):
+ except (OverflowError, ValueError, TypeError):
pass
return key
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 9fa677eb624ae..98f5d5eb140df 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -63,6 +63,34 @@ def f():
pytest.raises(ValueError, f)
+ def test_inf_upcast(self):
+ # GH 16957
+ # We should be able to use np.inf as a key
+ # np.inf should cause an index to convert to float
+
+ # Test with np.inf in rows
+ df = pd.DataFrame(columns=[0])
+ df.loc[1] = 1
+ df.loc[2] = 2
+ df.loc[np.inf] = 3
+
+ # make sure we can look up the value
+ assert df.loc[np.inf, 0] == 3
+
+ result = df.index
+ expected = pd.Float64Index([1, 2, np.inf])
+ tm.assert_index_equal(result, expected)
+
+ # Test with np.inf in columns
+ df = pd.DataFrame()
+ df.loc[0, 0] = 1
+ df.loc[1, 1] = 2
+ df.loc[0, np.inf] = 3
+
+ result = df.columns
+ expected = pd.Float64Index([0, 1, np.inf])
+ tm.assert_index_equal(result, expected)
+
def test_setitem_dtype_upcast(self):
# GH3216
@@ -542,6 +570,34 @@ def test_astype_assignment_with_dups(self):
# result = df.get_dtype_counts().sort_index()
# expected = Series({'float64': 2, 'object': 1}).sort_index()
+ @pytest.mark.parametrize("index,val", [
+ (pd.Index([0, 1, 2]), 2),
+ (pd.Index([0, 1, '2']), '2'),
+ (pd.Index([0, 1, 2, np.inf, 4]), 4),
+ (pd.Index([0, 1, 2, np.nan, 4]), 4),
+ (pd.Index([0, 1, 2, np.inf]), np.inf),
+ (pd.Index([0, 1, 2, np.nan]), np.nan),
+ ])
+ def test_index_contains(self, index, val):
+ assert val in index
+
+ @pytest.mark.parametrize("index,val", [
+ (pd.Index([0, 1, 2]), '2'),
+ (pd.Index([0, 1, '2']), 2),
+ (pd.Index([0, 1, 2, np.inf]), 4),
+ (pd.Index([0, 1, 2, np.nan]), 4),
+ (pd.Index([0, 1, 2, np.inf]), np.nan),
+ (pd.Index([0, 1, 2, np.nan]), np.inf),
+ # Checking if np.inf in Int64Index should not cause an OverflowError
+ # Related to GH 16957
+ (pd.Int64Index([0, 1, 2]), np.inf),
+ (pd.Int64Index([0, 1, 2]), np.nan),
+ (pd.UInt64Index([0, 1, 2]), np.inf),
+ (pd.UInt64Index([0, 1, 2]), np.nan),
+ ])
+ def test_index_not_contains(self, index, val):
+ assert val not in index
+
def test_index_type_coercion(self):
with catch_warnings(record=True):
| - [x] closes #16957
- [x] tests added / passed
- [x] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [x] whatsnew entry
Previously the following code would cause an OverflowError
```python
df = pd.DataFrame(columns=[0])
df.loc[0] = 0
df.loc[np.inf] = 1
```
This patch catches the exception and allows the `pd.Int64Index` / `pd.UInt64Index` to upcast itself into a `pd.Float64Index`.
Additionally, when I was writing tests for this patch I noticed
```python
np.inf in pd.Int64Index([1, 2, 3])
```
would also cause an OverflowError. I also patched this as well, and now it correctly returns False.
-------
I had trouble running tests on my local machine, so I'll wait for the travis checks.
I'm also unsure the whatsnew entry is in the correct format. Let me know if that needs to be changed. | https://api.github.com/repos/pandas-dev/pandas/pulls/16996 | 2017-07-17T17:04:18Z | 2017-07-18T15:58:56Z | 2017-07-18T15:58:55Z | 2017-07-18T15:59:06Z |
MAINT: Drop line_width and height from options | diff --git a/doc/source/options.rst b/doc/source/options.rst
index f373705a96f48..c585da64efece 100644
--- a/doc/source/options.rst
+++ b/doc/source/options.rst
@@ -304,7 +304,6 @@ display.float_format None The callable should accept a fl
This is used in some places like
SeriesFormatter.
See core.format.EngFormatter for an example.
-display.height 60 Deprecated. Use `display.max_rows` instead.
display.large_repr truncate For DataFrames exceeding max_rows/max_cols,
the repr (and HTML repr) can show
a truncated table (the default from 0.13),
@@ -323,7 +322,6 @@ display.latex.multicolumn_format 'l' Alignment of multicolumn labels
display.latex.multirow False Combines rows when using a MultiIndex.
Centered instead of top-aligned,
separated by clines.
-display.line_width 80 Deprecated. Use `display.width` instead.
display.max_columns 20 max_rows and max_columns are used
in __repr__() methods to decide if
to_string() or info() is used to
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 7c52cf6f450b2..58fafdeacbf77 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -124,6 +124,8 @@ Removal of prior version deprecations/changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :func:`read_excel()` has dropped the ``has_index_names`` parameter (:issue:`10967`)
+- The ``pd.options.display.height`` configuration has been dropped (:issue:`3663`)
+- The ``pd.options.display.line_width`` configuration has been dropped (:issue:`2881`)
- The ``pd.options.display.mpl_style`` configuration has been dropped (:issue:`12190`)
- ``Index`` has dropped the ``.sym_diff()`` method in favor of ``.symmetric_difference()`` (:issue:`12591`)
- ``Categorical`` has dropped the ``.order()`` and ``.sort()`` methods in favor of ``.sort_values()`` (:issue:`12882`)
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index ae3001564a62f..06ce811703a8c 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -213,14 +213,6 @@ def use_numexpr_cb(key):
(currently both are identical)
"""
-pc_line_width_deprecation_warning = """\
-line_width has been deprecated, use display.width instead (currently both are
-identical)
-"""
-
-pc_height_deprecation_warning = """\
-height has been deprecated.
-"""
pc_width_doc = """
: int
@@ -383,14 +375,6 @@ def table_schema_cb(key):
cf.register_option('html.border', 1, pc_html_border_doc,
validator=is_int)
-
-cf.deprecate_option('display.line_width',
- msg=pc_line_width_deprecation_warning,
- rkey='display.width')
-
-cf.deprecate_option('display.height', msg=pc_height_deprecation_warning,
- rkey='display.max_rows')
-
with cf.config_prefix('html'):
cf.register_option('border', 1, pc_html_border_doc,
validator=is_int)
diff --git a/pandas/io/formats/console.py b/pandas/io/formats/console.py
index ab75e3fa253ce..bdff59939a4de 100644
--- a/pandas/io/formats/console.py
+++ b/pandas/io/formats/console.py
@@ -53,7 +53,7 @@ def get_console_size():
display_width = get_option('display.width')
# deprecated.
- display_height = get_option('display.height', silent=True)
+ display_height = get_option('display.max_rows')
# Consider
# interactive shell terminal, can detect term size
@@ -71,7 +71,7 @@ def get_console_size():
# match default for width,height in config_init
from pandas.core.config import get_default_val
terminal_width = get_default_val('display.width')
- terminal_height = get_default_val('display.height')
+ terminal_height = get_default_val('display.max_rows')
else:
# pure terminal
terminal_width, terminal_height = get_terminal_size()
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 679d43ac492ca..e1499565ce4a6 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -302,7 +302,7 @@ def test_repr_non_interactive(self):
df = DataFrame('hello', lrange(1000), lrange(5))
with option_context('mode.sim_interactive', False, 'display.width', 0,
- 'display.height', 0, 'display.max_rows', 5000):
+ 'display.max_rows', 5000):
assert not has_truncated_repr(df)
assert not has_expanded_repr(df)
| Deprecated since 0.11 and 0.12 respectively.
xref #2881
xref #3663
| https://api.github.com/repos/pandas-dev/pandas/pulls/16993 | 2017-07-17T08:27:58Z | 2017-07-17T23:18:56Z | 2017-07-17T23:18:56Z | 2017-07-17T23:43:19Z |
COMPAT: Add back remove_na for seaborn | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 4d5b718ce0ae9..219eca4277f32 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -37,7 +37,6 @@
maybe_convert_platform,
maybe_cast_to_datetime, maybe_castable)
from pandas.core.dtypes.missing import isnull, notnull, remove_na_arraylike
-
from pandas.core.common import (is_bool_indexer,
_default_index,
_asarray_tuplesafe,
@@ -88,6 +87,17 @@
versionadded_to_excel='\n .. versionadded:: 0.20.0\n')
+# see gh-16971
+def remove_na(arr):
+ """
+ DEPRECATED : this function will be removed in a future version.
+ """
+
+ warnings.warn("remove_na is deprecated and is a private "
+ "function. Do not use.", FutureWarning, stacklevel=2)
+ return remove_na_arraylike(arr)
+
+
def _coerce_method(converter):
""" install the scalar coercion methods """
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index 8e73c17684a16..b5948e75aa73e 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -15,6 +15,7 @@
MultiIndex, Index, Timestamp, NaT, IntervalIndex)
from pandas.compat import range
from pandas._libs.tslib import iNaT
+from pandas.core.series import remove_na
from pandas.util.testing import assert_series_equal, assert_frame_equal
import pandas.util.testing as tm
@@ -50,6 +51,11 @@ def _simple_ts(start, end, freq='D'):
class TestSeriesMissingData(TestData):
+ def test_remove_na_deprecation(self):
+ # see gh-16971
+ with tm.assert_produces_warning(FutureWarning):
+ remove_na(Series([]))
+
def test_timedelta_fillna(self):
# GH 3371
s = Series([Timestamp('20130101'), Timestamp('20130101'), Timestamp(
| Title is self-explanatory.
Closes #16971. | https://api.github.com/repos/pandas-dev/pandas/pulls/16992 | 2017-07-17T07:29:33Z | 2017-07-17T23:29:58Z | 2017-07-17T23:29:58Z | 2017-07-17T23:43:45Z |
DOC: Clarify 'it' in aggregate doc | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index e4e2e0093b1a6..f12592feaa4c3 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3152,7 +3152,7 @@ def pipe(self, func, *args, **kwargs):
(e.g., np.mean(arr_2d, axis=0)) as opposed to
mimicking the default Numpy behavior (e.g., np.mean(arr_2d)).
- agg is an alias for aggregate. Use it.
+ `agg` is an alias for `aggregate`. Use the alias.
Returns
-------
| The grammarian part of me agrees that clarification is needed.
Closes #16988. | https://api.github.com/repos/pandas-dev/pandas/pulls/16989 | 2017-07-17T06:19:12Z | 2017-07-17T06:19:28Z | 2017-07-17T06:19:27Z | 2017-07-17T06:21:26Z |
CLN/COMPAT: for various py2/py3 in doc/bench scripts | diff --git a/asv_bench/vbench_to_asv.py b/asv_bench/vbench_to_asv.py
index c3041ec2b1ba1..2a4ce5d183ea2 100644
--- a/asv_bench/vbench_to_asv.py
+++ b/asv_bench/vbench_to_asv.py
@@ -114,7 +114,7 @@ def translate_module(target_module):
l_vars = {}
exec('import ' + target_module) in g_vars
- print target_module
+ print(target_module)
module = eval(target_module, g_vars)
benchmarks = []
@@ -157,7 +157,7 @@ def translate_module(target_module):
mod = os.path.basename(module)
if mod in ['make.py', 'measure_memory_consumption.py', 'perf_HEAD.py', 'run_suite.py', 'test_perf.py', 'generate_rst_files.py', 'test.py', 'suite.py']:
continue
- print
- print mod
+ print('')
+ print(mod)
translate_module(mod.replace('.py', ''))
diff --git a/bench/alignment.py b/bench/alignment.py
deleted file mode 100644
index bc3134f597ee0..0000000000000
--- a/bench/alignment.py
+++ /dev/null
@@ -1,22 +0,0 @@
-# Setup
-from pandas.compat import range, lrange
-import numpy as np
-import pandas
-import la
-N = 1000
-K = 50
-arr1 = np.random.randn(N, K)
-arr2 = np.random.randn(N, K)
-idx1 = lrange(N)
-idx2 = lrange(K)
-
-# pandas
-dma1 = pandas.DataFrame(arr1, idx1, idx2)
-dma2 = pandas.DataFrame(arr2, idx1[::-1], idx2[::-1])
-
-# larry
-lar1 = la.larry(arr1, [idx1, idx2])
-lar2 = la.larry(arr2, [idx1[::-1], idx2[::-1]])
-
-for i in range(100):
- result = lar1 + lar2
diff --git a/bench/bench_dense_to_sparse.py b/bench/bench_dense_to_sparse.py
deleted file mode 100644
index e1dcd3456e88d..0000000000000
--- a/bench/bench_dense_to_sparse.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from pandas import *
-
-K = 100
-N = 100000
-rng = DatetimeIndex('1/1/2000', periods=N, offset=datetools.Minute())
-
-rng2 = np.asarray(rng).astype('M8[us]').astype('i8')
-
-series = {}
-for i in range(1, K + 1):
- data = np.random.randn(N)[:-i]
- this_rng = rng2[:-i]
- data[100:] = np.nan
- series[i] = SparseSeries(data, index=this_rng)
diff --git a/bench/bench_get_put_value.py b/bench/bench_get_put_value.py
deleted file mode 100644
index 427e0b1b10a22..0000000000000
--- a/bench/bench_get_put_value.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from pandas import *
-from pandas.util.testing import rands
-from pandas.compat import range
-
-N = 1000
-K = 50
-
-
-def _random_index(howmany):
- return Index([rands(10) for _ in range(howmany)])
-
-df = DataFrame(np.random.randn(N, K), index=_random_index(N),
- columns=_random_index(K))
-
-
-def get1():
- for col in df.columns:
- for row in df.index:
- _ = df[col][row]
-
-
-def get2():
- for col in df.columns:
- for row in df.index:
- _ = df.get_value(row, col)
-
-
-def put1():
- for col in df.columns:
- for row in df.index:
- df[col][row] = 0
-
-
-def put2():
- for col in df.columns:
- for row in df.index:
- df.set_value(row, col, 0)
-
-
-def resize1():
- buf = DataFrame()
- for col in df.columns:
- for row in df.index:
- buf = buf.set_value(row, col, 5.)
- return buf
-
-
-def resize2():
- from collections import defaultdict
-
- buf = defaultdict(dict)
- for col in df.columns:
- for row in df.index:
- buf[col][row] = 5.
-
- return DataFrame(buf)
diff --git a/bench/bench_groupby.py b/bench/bench_groupby.py
deleted file mode 100644
index d7a2853e1e7b2..0000000000000
--- a/bench/bench_groupby.py
+++ /dev/null
@@ -1,66 +0,0 @@
-from pandas import *
-from pandas.util.testing import rands
-from pandas.compat import range
-
-import string
-import random
-
-k = 20000
-n = 10
-
-foo = np.tile(np.array([rands(10) for _ in range(k)], dtype='O'), n)
-foo2 = list(foo)
-random.shuffle(foo)
-random.shuffle(foo2)
-
-df = DataFrame({'A': foo,
- 'B': foo2,
- 'C': np.random.randn(n * k)})
-
-import pandas._sandbox as sbx
-
-
-def f():
- table = sbx.StringHashTable(len(df))
- ret = table.factorize(df['A'])
- return ret
-
-
-def g():
- table = sbx.PyObjectHashTable(len(df))
- ret = table.factorize(df['A'])
- return ret
-
-ret = f()
-
-"""
-import pandas._tseries as lib
-
-f = np.std
-
-
-grouped = df.groupby(['A', 'B'])
-
-label_list = [ping.labels for ping in grouped.groupings]
-shape = [len(ping.ids) for ping in grouped.groupings]
-
-from pandas.core.groupby import get_group_index
-
-
-group_index = get_group_index(label_list, shape,
- sort=True, xnull=True).astype('i4')
-
-ngroups = np.prod(shape)
-
-indexer = lib.groupsort_indexer(group_index, ngroups)
-
-values = df['C'].values.take(indexer)
-group_index = group_index.take(indexer)
-
-f = lambda x: x.std(ddof=1)
-
-grouper = lib.Grouper(df['C'], np.ndarray.std, group_index, ngroups)
-result = grouper.get_result()
-
-expected = grouped.std()
-"""
diff --git a/bench/bench_join_panel.py b/bench/bench_join_panel.py
deleted file mode 100644
index f3c3f8ba15f70..0000000000000
--- a/bench/bench_join_panel.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# reasonably efficient
-
-
-def create_panels_append(cls, panels):
- """ return an append list of panels """
- panels = [a for a in panels if a is not None]
- # corner cases
- if len(panels) == 0:
- return None
- elif len(panels) == 1:
- return panels[0]
- elif len(panels) == 2 and panels[0] == panels[1]:
- return panels[0]
- # import pdb; pdb.set_trace()
- # create a joint index for the axis
-
- def joint_index_for_axis(panels, axis):
- s = set()
- for p in panels:
- s.update(list(getattr(p, axis)))
- return sorted(list(s))
-
- def reindex_on_axis(panels, axis, axis_reindex):
- new_axis = joint_index_for_axis(panels, axis)
- new_panels = [p.reindex(**{axis_reindex: new_axis,
- 'copy': False}) for p in panels]
- return new_panels, new_axis
- # create the joint major index, dont' reindex the sub-panels - we are
- # appending
- major = joint_index_for_axis(panels, 'major_axis')
- # reindex on minor axis
- panels, minor = reindex_on_axis(panels, 'minor_axis', 'minor')
- # reindex on items
- panels, items = reindex_on_axis(panels, 'items', 'items')
- # concatenate values
- try:
- values = np.concatenate([p.values for p in panels], axis=1)
- except Exception as detail:
- raise Exception("cannot append values that dont' match dimensions! -> [%s] %s"
- % (','.join(["%s" % p for p in panels]), str(detail)))
- # pm('append - create_panel')
- p = Panel(values, items=items, major_axis=major,
- minor_axis=minor)
- # pm('append - done')
- return p
-
-
-# does the job but inefficient (better to handle like you read a table in
-# pytables...e.g create a LongPanel then convert to Wide)
-def create_panels_join(cls, panels):
- """ given an array of panels's, create a single panel """
- panels = [a for a in panels if a is not None]
- # corner cases
- if len(panels) == 0:
- return None
- elif len(panels) == 1:
- return panels[0]
- elif len(panels) == 2 and panels[0] == panels[1]:
- return panels[0]
- d = dict()
- minor, major, items = set(), set(), set()
- for panel in panels:
- items.update(panel.items)
- major.update(panel.major_axis)
- minor.update(panel.minor_axis)
- values = panel.values
- for item, item_index in panel.items.indexMap.items():
- for minor_i, minor_index in panel.minor_axis.indexMap.items():
- for major_i, major_index in panel.major_axis.indexMap.items():
- try:
- d[(minor_i, major_i, item)] = values[item_index, major_index, minor_index]
- except:
- pass
- # stack the values
- minor = sorted(list(minor))
- major = sorted(list(major))
- items = sorted(list(items))
- # create the 3d stack (items x columns x indicies)
- data = np.dstack([np.asarray([np.asarray([d.get((minor_i, major_i, item), np.nan)
- for item in items])
- for major_i in major]).transpose()
- for minor_i in minor])
- # construct the panel
- return Panel(data, items, major, minor)
-add_class_method(Panel, create_panels_join, 'join_many')
diff --git a/bench/bench_khash_dict.py b/bench/bench_khash_dict.py
deleted file mode 100644
index 054fc36131b65..0000000000000
--- a/bench/bench_khash_dict.py
+++ /dev/null
@@ -1,89 +0,0 @@
-"""
-Some comparisons of khash.h to Python dict
-"""
-from __future__ import print_function
-
-import numpy as np
-import os
-
-from vbench.api import Benchmark
-from pandas.util.testing import rands
-from pandas.compat import range
-import pandas._tseries as lib
-import pandas._sandbox as sbx
-import time
-
-import psutil
-
-pid = os.getpid()
-proc = psutil.Process(pid)
-
-
-def object_test_data(n):
- pass
-
-
-def string_test_data(n):
- return np.array([rands(10) for _ in range(n)], dtype='O')
-
-
-def int_test_data(n):
- return np.arange(n, dtype='i8')
-
-N = 1000000
-
-#----------------------------------------------------------------------
-# Benchmark 1: map_locations
-
-
-def map_locations_python_object():
- arr = string_test_data(N)
- return _timeit(lambda: lib.map_indices_object(arr))
-
-
-def map_locations_khash_object():
- arr = string_test_data(N)
-
- def f():
- table = sbx.PyObjectHashTable(len(arr))
- table.map_locations(arr)
- return _timeit(f)
-
-
-def _timeit(f, iterations=10):
- start = time.time()
- for _ in range(iterations):
- foo = f()
- elapsed = time.time() - start
- return elapsed
-
-#----------------------------------------------------------------------
-# Benchmark 2: lookup_locations
-
-
-def lookup_python(values):
- table = lib.map_indices_object(values)
- return _timeit(lambda: lib.merge_indexer_object(values, table))
-
-
-def lookup_khash(values):
- table = sbx.PyObjectHashTable(len(values))
- table.map_locations(values)
- locs = table.lookup_locations(values)
- # elapsed = _timeit(lambda: table.lookup_locations2(values))
- return table
-
-
-def leak(values):
- for _ in range(100):
- print(proc.get_memory_info())
- table = lookup_khash(values)
- # table.destroy()
-
-arr = string_test_data(N)
-
-#----------------------------------------------------------------------
-# Benchmark 3: unique
-
-#----------------------------------------------------------------------
-# Benchmark 4: factorize
diff --git a/bench/bench_merge.R b/bench/bench_merge.R
deleted file mode 100644
index 3ed4618494857..0000000000000
--- a/bench/bench_merge.R
+++ /dev/null
@@ -1,161 +0,0 @@
-library(plyr)
-library(data.table)
-N <- 10000
-indices = rep(NA, N)
-indices2 = rep(NA, N)
-for (i in 1:N) {
- indices[i] <- paste(sample(letters, 10), collapse="")
- indices2[i] <- paste(sample(letters, 10), collapse="")
-}
-left <- data.frame(key=rep(indices[1:8000], 10),
- key2=rep(indices2[1:8000], 10),
- value=rnorm(80000))
-right <- data.frame(key=indices[2001:10000],
- key2=indices2[2001:10000],
- value2=rnorm(8000))
-
-right2 <- data.frame(key=rep(right$key, 2),
- key2=rep(right$key2, 2),
- value2=rnorm(16000))
-
-left.dt <- data.table(left, key=c("key", "key2"))
-right.dt <- data.table(right, key=c("key", "key2"))
-right2.dt <- data.table(right2, key=c("key", "key2"))
-
-# left.dt2 <- data.table(left)
-# right.dt2 <- data.table(right)
-
-## left <- data.frame(key=rep(indices[1:1000], 10),
-## key2=rep(indices2[1:1000], 10),
-## value=rnorm(100000))
-## right <- data.frame(key=indices[1:1000],
-## key2=indices2[1:1000],
-## value2=rnorm(10000))
-
-timeit <- function(func, niter=10) {
- timing = rep(NA, niter)
- for (i in 1:niter) {
- gc()
- timing[i] <- system.time(func())[3]
- }
- mean(timing)
-}
-
-left.join <- function(sort=FALSE) {
- result <- base::merge(left, right, all.x=TRUE, sort=sort)
-}
-
-right.join <- function(sort=FALSE) {
- result <- base::merge(left, right, all.y=TRUE, sort=sort)
-}
-
-outer.join <- function(sort=FALSE) {
- result <- base::merge(left, right, all=TRUE, sort=sort)
-}
-
-inner.join <- function(sort=FALSE) {
- result <- base::merge(left, right, all=FALSE, sort=sort)
-}
-
-left.join.dt <- function(sort=FALSE) {
- result <- right.dt[left.dt]
-}
-
-right.join.dt <- function(sort=FALSE) {
- result <- left.dt[right.dt]
-}
-
-outer.join.dt <- function(sort=FALSE) {
- result <- merge(left.dt, right.dt, all=TRUE, sort=sort)
-}
-
-inner.join.dt <- function(sort=FALSE) {
- result <- merge(left.dt, right.dt, all=FALSE, sort=sort)
-}
-
-plyr.join <- function(type) {
- result <- plyr::join(left, right, by=c("key", "key2"),
- type=type, match="first")
-}
-
-sort.options <- c(FALSE, TRUE)
-
-# many-to-one
-
-results <- matrix(nrow=4, ncol=3)
-colnames(results) <- c("base::merge", "plyr", "data.table")
-rownames(results) <- c("inner", "outer", "left", "right")
-
-base.functions <- c(inner.join, outer.join, left.join, right.join)
-plyr.functions <- c(function() plyr.join("inner"),
- function() plyr.join("full"),
- function() plyr.join("left"),
- function() plyr.join("right"))
-dt.functions <- c(inner.join.dt, outer.join.dt, left.join.dt, right.join.dt)
-for (i in 1:4) {
- base.func <- base.functions[[i]]
- plyr.func <- plyr.functions[[i]]
- dt.func <- dt.functions[[i]]
- results[i, 1] <- timeit(base.func)
- results[i, 2] <- timeit(plyr.func)
- results[i, 3] <- timeit(dt.func)
-}
-
-
-# many-to-many
-
-left.join <- function(sort=FALSE) {
- result <- base::merge(left, right2, all.x=TRUE, sort=sort)
-}
-
-right.join <- function(sort=FALSE) {
- result <- base::merge(left, right2, all.y=TRUE, sort=sort)
-}
-
-outer.join <- function(sort=FALSE) {
- result <- base::merge(left, right2, all=TRUE, sort=sort)
-}
-
-inner.join <- function(sort=FALSE) {
- result <- base::merge(left, right2, all=FALSE, sort=sort)
-}
-
-left.join.dt <- function(sort=FALSE) {
- result <- right2.dt[left.dt]
-}
-
-right.join.dt <- function(sort=FALSE) {
- result <- left.dt[right2.dt]
-}
-
-outer.join.dt <- function(sort=FALSE) {
- result <- merge(left.dt, right2.dt, all=TRUE, sort=sort)
-}
-
-inner.join.dt <- function(sort=FALSE) {
- result <- merge(left.dt, right2.dt, all=FALSE, sort=sort)
-}
-
-sort.options <- c(FALSE, TRUE)
-
-# many-to-one
-
-results <- matrix(nrow=4, ncol=3)
-colnames(results) <- c("base::merge", "plyr", "data.table")
-rownames(results) <- c("inner", "outer", "left", "right")
-
-base.functions <- c(inner.join, outer.join, left.join, right.join)
-plyr.functions <- c(function() plyr.join("inner"),
- function() plyr.join("full"),
- function() plyr.join("left"),
- function() plyr.join("right"))
-dt.functions <- c(inner.join.dt, outer.join.dt, left.join.dt, right.join.dt)
-for (i in 1:4) {
- base.func <- base.functions[[i]]
- plyr.func <- plyr.functions[[i]]
- dt.func <- dt.functions[[i]]
- results[i, 1] <- timeit(base.func)
- results[i, 2] <- timeit(plyr.func)
- results[i, 3] <- timeit(dt.func)
-}
-
diff --git a/bench/bench_merge.py b/bench/bench_merge.py
deleted file mode 100644
index 330dba7b9af69..0000000000000
--- a/bench/bench_merge.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import random
-import gc
-import time
-from pandas import *
-from pandas.compat import range, lrange, StringIO
-from pandas.util.testing import rands
-
-N = 10000
-ngroups = 10
-
-
-def get_test_data(ngroups=100, n=N):
- unique_groups = lrange(ngroups)
- arr = np.asarray(np.tile(unique_groups, n / ngroups), dtype=object)
-
- if len(arr) < n:
- arr = np.asarray(list(arr) + unique_groups[:n - len(arr)],
- dtype=object)
-
- random.shuffle(arr)
- return arr
-
-# aggregate multiple columns
-# df = DataFrame({'key1' : get_test_data(ngroups=ngroups),
-# 'key2' : get_test_data(ngroups=ngroups),
-# 'data1' : np.random.randn(N),
-# 'data2' : np.random.randn(N)})
-
-# df2 = DataFrame({'key1' : get_test_data(ngroups=ngroups, n=N//10),
-# 'key2' : get_test_data(ngroups=ngroups//2, n=N//10),
-# 'value' : np.random.randn(N // 10)})
-# result = merge.merge(df, df2, on='key2')
-
-N = 10000
-
-indices = np.array([rands(10) for _ in range(N)], dtype='O')
-indices2 = np.array([rands(10) for _ in range(N)], dtype='O')
-key = np.tile(indices[:8000], 10)
-key2 = np.tile(indices2[:8000], 10)
-
-left = DataFrame({'key': key, 'key2': key2,
- 'value': np.random.randn(80000)})
-right = DataFrame({'key': indices[2000:], 'key2': indices2[2000:],
- 'value2': np.random.randn(8000)})
-
-right2 = right.append(right, ignore_index=True)
-
-
-join_methods = ['inner', 'outer', 'left', 'right']
-results = DataFrame(index=join_methods, columns=[False, True])
-niter = 10
-for sort in [False, True]:
- for join_method in join_methods:
- f = lambda: merge(left, right, how=join_method, sort=sort)
- gc.disable()
- start = time.time()
- for _ in range(niter):
- f()
- elapsed = (time.time() - start) / niter
- gc.enable()
- results[sort][join_method] = elapsed
-# results.columns = ['pandas']
-results.columns = ['dont_sort', 'sort']
-
-
-# R results
-# many to one
-r_results = read_table(StringIO(""" base::merge plyr data.table
-inner 0.2475 0.1183 0.1100
-outer 0.4213 0.1916 0.2090
-left 0.2998 0.1188 0.0572
-right 0.3102 0.0536 0.0376
-"""), sep='\s+')
-
-presults = results[['dont_sort']].rename(columns={'dont_sort': 'pandas'})
-all_results = presults.join(r_results)
-
-all_results = all_results.div(all_results['pandas'], axis=0)
-
-all_results = all_results.ix[:, ['pandas', 'data.table', 'plyr',
- 'base::merge']]
-
-sort_results = DataFrame.from_items([('pandas', results['sort']),
- ('R', r_results['base::merge'])])
-sort_results['Ratio'] = sort_results['R'] / sort_results['pandas']
-
-
-nosort_results = DataFrame.from_items([('pandas', results['dont_sort']),
- ('R', r_results['base::merge'])])
-nosort_results['Ratio'] = nosort_results['R'] / nosort_results['pandas']
-
-# many to many
-
-# many to one
-r_results = read_table(StringIO("""base::merge plyr data.table
-inner 0.4610 0.1276 0.1269
-outer 0.9195 0.1881 0.2725
-left 0.6559 0.1257 0.0678
-right 0.6425 0.0522 0.0428
-"""), sep='\s+')
-
-all_results = presults.join(r_results)
-all_results = all_results.div(all_results['pandas'], axis=0)
-all_results = all_results.ix[:, ['pandas', 'data.table', 'plyr',
- 'base::merge']]
diff --git a/bench/bench_merge_sqlite.py b/bench/bench_merge_sqlite.py
deleted file mode 100644
index 3ad4b810119c3..0000000000000
--- a/bench/bench_merge_sqlite.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import numpy as np
-from collections import defaultdict
-import gc
-import time
-from pandas import DataFrame
-from pandas.util.testing import rands
-from pandas.compat import range, zip
-import random
-
-N = 10000
-
-indices = np.array([rands(10) for _ in range(N)], dtype='O')
-indices2 = np.array([rands(10) for _ in range(N)], dtype='O')
-key = np.tile(indices[:8000], 10)
-key2 = np.tile(indices2[:8000], 10)
-
-left = DataFrame({'key': key, 'key2': key2,
- 'value': np.random.randn(80000)})
-right = DataFrame({'key': indices[2000:], 'key2': indices2[2000:],
- 'value2': np.random.randn(8000)})
-
-# right2 = right.append(right, ignore_index=True)
-# right = right2
-
-# random.shuffle(key2)
-# indices2 = indices.copy()
-# random.shuffle(indices2)
-
-# Prepare Database
-import sqlite3
-create_sql_indexes = True
-
-conn = sqlite3.connect(':memory:')
-conn.execute(
- 'create table left( key varchar(10), key2 varchar(10), value int);')
-conn.execute(
- 'create table right( key varchar(10), key2 varchar(10), value2 int);')
-conn.executemany('insert into left values (?, ?, ?)',
- zip(key, key2, left['value']))
-conn.executemany('insert into right values (?, ?, ?)',
- zip(right['key'], right['key2'], right['value2']))
-
-# Create Indices
-if create_sql_indexes:
- conn.execute('create index left_ix on left(key, key2)')
- conn.execute('create index right_ix on right(key, key2)')
-
-
-join_methods = ['inner', 'left outer', 'left'] # others not supported
-sql_results = DataFrame(index=join_methods, columns=[False])
-niter = 5
-for sort in [False]:
- for join_method in join_methods:
- sql = """CREATE TABLE test as select *
- from left
- %s join right
- on left.key=right.key
- and left.key2 = right.key2;""" % join_method
- sql = """select *
- from left
- %s join right
- on left.key=right.key
- and left.key2 = right.key2;""" % join_method
-
- if sort:
- sql = '%s order by key, key2' % sql
- f = lambda: list(conn.execute(sql)) # list fetches results
- g = lambda: conn.execute(sql) # list fetches results
- gc.disable()
- start = time.time()
- # for _ in range(niter):
- g()
- elapsed = (time.time() - start) / niter
- gc.enable()
-
- cur = conn.execute("DROP TABLE test")
- conn.commit()
-
- sql_results[sort][join_method] = elapsed
- sql_results.columns = ['sqlite3'] # ['dont_sort', 'sort']
- sql_results.index = ['inner', 'outer', 'left']
-
- sql = """select *
- from left
- inner join right
- on left.key=right.key
- and left.key2 = right.key2;"""
diff --git a/bench/bench_pivot.R b/bench/bench_pivot.R
deleted file mode 100644
index 06dc6a105bc43..0000000000000
--- a/bench/bench_pivot.R
+++ /dev/null
@@ -1,27 +0,0 @@
-library(reshape2)
-
-
-n <- 100000
-a.size <- 5
-b.size <- 5
-
-data <- data.frame(a=sample(letters[1:a.size], n, replace=T),
- b=sample(letters[1:b.size], n, replace=T),
- c=rnorm(n),
- d=rnorm(n))
-
-timings <- numeric()
-
-# acast(melt(data, id=c("a", "b")), a ~ b, mean)
-# acast(melt(data, id=c("a", "b")), a + b ~ variable, mean)
-
-for (i in 1:10) {
- gc()
- tim <- system.time(acast(melt(data, id=c("a", "b")), a ~ b, mean,
- subset=.(variable=="c")))
- timings[i] = tim[3]
-}
-
-mean(timings)
-
-acast(melt(data, id=c("a", "b")), a ~ b, mean, subset=.(variable="c"))
diff --git a/bench/bench_pivot.py b/bench/bench_pivot.py
deleted file mode 100644
index 007bd0aaebc2f..0000000000000
--- a/bench/bench_pivot.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from pandas import *
-import string
-
-
-n = 100000
-asize = 5
-bsize = 5
-
-letters = np.asarray(list(string.letters), dtype=object)
-
-data = DataFrame(dict(foo=letters[:asize][np.random.randint(0, asize, n)],
- bar=letters[:bsize][np.random.randint(0, bsize, n)],
- baz=np.random.randn(n),
- qux=np.random.randn(n)))
-
-table = pivot_table(data, xby=['foo', 'bar'])
diff --git a/bench/bench_take_indexing.py b/bench/bench_take_indexing.py
deleted file mode 100644
index 5fb584bcfe45f..0000000000000
--- a/bench/bench_take_indexing.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from __future__ import print_function
-import numpy as np
-
-from pandas import *
-import pandas._tseries as lib
-
-from pandas import DataFrame
-import timeit
-from pandas.compat import zip
-
-setup = """
-from pandas import Series
-import pandas._tseries as lib
-import random
-import numpy as np
-
-import random
-n = %d
-k = %d
-arr = np.random.randn(n, k)
-indexer = np.arange(n, dtype=np.int32)
-indexer = indexer[::-1]
-"""
-
-sizes = [100, 1000, 10000, 100000]
-iters = [1000, 1000, 100, 1]
-
-fancy_2d = []
-take_2d = []
-cython_2d = []
-
-n = 1000
-
-
-def _timeit(stmt, size, k=5, iters=1000):
- timer = timeit.Timer(stmt=stmt, setup=setup % (sz, k))
- return timer.timeit(n) / n
-
-for sz, its in zip(sizes, iters):
- print(sz)
- fancy_2d.append(_timeit('arr[indexer]', sz, iters=its))
- take_2d.append(_timeit('arr.take(indexer, axis=0)', sz, iters=its))
- cython_2d.append(_timeit('lib.take_axis0(arr, indexer)', sz, iters=its))
-
-df = DataFrame({'fancy': fancy_2d,
- 'take': take_2d,
- 'cython': cython_2d})
-
-print(df)
-
-from pandas.rpy.common import r
-r('mat <- matrix(rnorm(50000), nrow=10000, ncol=5)')
-r('set.seed(12345')
-r('indexer <- sample(1:10000)')
-r('mat[indexer,]')
diff --git a/bench/bench_unique.py b/bench/bench_unique.py
deleted file mode 100644
index 87bd2f2df586c..0000000000000
--- a/bench/bench_unique.py
+++ /dev/null
@@ -1,278 +0,0 @@
-from __future__ import print_function
-from pandas import *
-from pandas.util.testing import rands
-from pandas.compat import range, zip
-import pandas._tseries as lib
-import numpy as np
-import matplotlib.pyplot as plt
-
-N = 50000
-K = 10000
-
-groups = np.array([rands(10) for _ in range(K)], dtype='O')
-groups2 = np.array([rands(10) for _ in range(K)], dtype='O')
-
-labels = np.tile(groups, N // K)
-labels2 = np.tile(groups2, N // K)
-data = np.random.randn(N)
-
-
-def timeit(f, niter):
- import gc
- import time
- gc.disable()
- start = time.time()
- for _ in range(niter):
- f()
- elapsed = (time.time() - start) / niter
- gc.enable()
- return elapsed
-
-
-def algo1():
- unique_labels = np.unique(labels)
- result = np.empty(len(unique_labels))
- for i, label in enumerate(unique_labels):
- result[i] = data[labels == label].sum()
-
-
-def algo2():
- unique_labels = np.unique(labels)
- indices = lib.groupby_indices(labels)
- result = np.empty(len(unique_labels))
-
- for i, label in enumerate(unique_labels):
- result[i] = data.take(indices[label]).sum()
-
-
-def algo3_nosort():
- rizer = lib.DictFactorizer()
- labs, counts = rizer.factorize(labels, sort=False)
- k = len(rizer.uniques)
- out = np.empty(k)
- lib.group_add(out, counts, data, labs)
-
-
-def algo3_sort():
- rizer = lib.DictFactorizer()
- labs, counts = rizer.factorize(labels, sort=True)
- k = len(rizer.uniques)
- out = np.empty(k)
- lib.group_add(out, counts, data, labs)
-
-import numpy as np
-import random
-
-
-# dict to hold results
-counts = {}
-
-# a hack to generate random key, value pairs.
-# 5k keys, 100k values
-x = np.tile(np.arange(5000, dtype='O'), 20)
-random.shuffle(x)
-xarr = x
-x = [int(y) for y in x]
-data = np.random.uniform(0, 1, 100000)
-
-
-def f():
- # groupby sum
- for k, v in zip(x, data):
- try:
- counts[k] += v
- except KeyError:
- counts[k] = v
-
-
-def f2():
- rizer = lib.DictFactorizer()
- labs, counts = rizer.factorize(xarr, sort=False)
- k = len(rizer.uniques)
- out = np.empty(k)
- lib.group_add(out, counts, data, labs)
-
-
-def algo4():
- rizer = lib.DictFactorizer()
- labs1, _ = rizer.factorize(labels, sort=False)
- k1 = len(rizer.uniques)
-
- rizer = lib.DictFactorizer()
- labs2, _ = rizer.factorize(labels2, sort=False)
- k2 = len(rizer.uniques)
-
- group_id = labs1 * k2 + labs2
- max_group = k1 * k2
-
- if max_group > 1e6:
- rizer = lib.Int64Factorizer(len(group_id))
- group_id, _ = rizer.factorize(group_id.astype('i8'), sort=True)
- max_group = len(rizer.uniques)
-
- out = np.empty(max_group)
- counts = np.zeros(max_group, dtype='i4')
- lib.group_add(out, counts, data, group_id)
-
-# cumtime percall filename:lineno(function)
-# 0.592 0.592 <string>:1(<module>)
- # 0.584 0.006 groupby_ex.py:37(algo3_nosort)
- # 0.535 0.005 {method 'factorize' of DictFactorizer' objects}
- # 0.047 0.000 {pandas._tseries.group_add}
- # 0.002 0.000 numeric.py:65(zeros_like)
- # 0.001 0.000 {method 'fill' of 'numpy.ndarray' objects}
- # 0.000 0.000 {numpy.core.multiarray.empty_like}
- # 0.000 0.000 {numpy.core.multiarray.empty}
-
-# UNIQUE timings
-
-# N = 10000000
-# K = 500000
-
-# groups = np.array([rands(10) for _ in range(K)], dtype='O')
-
-# labels = np.tile(groups, N // K)
-data = np.random.randn(N)
-
-data = np.random.randn(N)
-
-Ks = [100, 1000, 5000, 10000, 25000, 50000, 100000]
-
-# Ks = [500000, 1000000, 2500000, 5000000, 10000000]
-
-import psutil
-import os
-import gc
-
-pid = os.getpid()
-proc = psutil.Process(pid)
-
-
-def dict_unique(values, expected_K, sort=False, memory=False):
- if memory:
- gc.collect()
- before_mem = proc.get_memory_info().rss
-
- rizer = lib.DictFactorizer()
- result = rizer.unique_int64(values)
-
- if memory:
- result = proc.get_memory_info().rss - before_mem
- return result
-
- if sort:
- result.sort()
- assert(len(result) == expected_K)
- return result
-
-
-def khash_unique(values, expected_K, size_hint=False, sort=False,
- memory=False):
- if memory:
- gc.collect()
- before_mem = proc.get_memory_info().rss
-
- if size_hint:
- rizer = lib.Factorizer(len(values))
- else:
- rizer = lib.Factorizer(100)
-
- result = []
- result = rizer.unique(values)
-
- if memory:
- result = proc.get_memory_info().rss - before_mem
- return result
-
- if sort:
- result.sort()
- assert(len(result) == expected_K)
-
-
-def khash_unique_str(values, expected_K, size_hint=False, sort=False,
- memory=False):
- if memory:
- gc.collect()
- before_mem = proc.get_memory_info().rss
-
- if size_hint:
- rizer = lib.StringHashTable(len(values))
- else:
- rizer = lib.StringHashTable(100)
-
- result = []
- result = rizer.unique(values)
-
- if memory:
- result = proc.get_memory_info().rss - before_mem
- return result
-
- if sort:
- result.sort()
- assert(len(result) == expected_K)
-
-
-def khash_unique_int64(values, expected_K, size_hint=False, sort=False):
- if size_hint:
- rizer = lib.Int64HashTable(len(values))
- else:
- rizer = lib.Int64HashTable(100)
-
- result = []
- result = rizer.unique(values)
-
- if sort:
- result.sort()
- assert(len(result) == expected_K)
-
-
-def hash_bench():
- numpy = []
- dict_based = []
- dict_based_sort = []
- khash_hint = []
- khash_nohint = []
- for K in Ks:
- print(K)
- # groups = np.array([rands(10) for _ in range(K)])
- # labels = np.tile(groups, N // K).astype('O')
-
- groups = np.random.randint(0, long(100000000000), size=K)
- labels = np.tile(groups, N // K)
- dict_based.append(timeit(lambda: dict_unique(labels, K), 20))
- khash_nohint.append(timeit(lambda: khash_unique_int64(labels, K), 20))
- khash_hint.append(timeit(lambda: khash_unique_int64(labels, K,
- size_hint=True), 20))
-
- # memory, hard to get
- # dict_based.append(np.mean([dict_unique(labels, K, memory=True)
- # for _ in range(10)]))
- # khash_nohint.append(np.mean([khash_unique(labels, K, memory=True)
- # for _ in range(10)]))
- # khash_hint.append(np.mean([khash_unique(labels, K, size_hint=True, memory=True)
- # for _ in range(10)]))
-
- # dict_based_sort.append(timeit(lambda: dict_unique(labels, K,
- # sort=True), 10))
- # numpy.append(timeit(lambda: np.unique(labels), 10))
-
- # unique_timings = DataFrame({'numpy.unique' : numpy,
- # 'dict, no sort' : dict_based,
- # 'dict, sort' : dict_based_sort},
- # columns=['dict, no sort',
- # 'dict, sort', 'numpy.unique'],
- # index=Ks)
-
- unique_timings = DataFrame({'dict': dict_based,
- 'khash, preallocate': khash_hint,
- 'khash': khash_nohint},
- columns=['khash, preallocate', 'khash', 'dict'],
- index=Ks)
-
- unique_timings.plot(kind='bar', legend=False)
- plt.legend(loc='best')
- plt.title('Unique on 100,000 values, int64')
- plt.xlabel('Number of unique labels')
- plt.ylabel('Mean execution time')
-
- plt.show()
diff --git a/bench/bench_with_subset.R b/bench/bench_with_subset.R
deleted file mode 100644
index 69d0f7a9eec63..0000000000000
--- a/bench/bench_with_subset.R
+++ /dev/null
@@ -1,53 +0,0 @@
-library(microbenchmark)
-library(data.table)
-
-
-data.frame.subset.bench <- function (n=1e7, times=30) {
- df <- data.frame(a=rnorm(n), b=rnorm(n), c=rnorm(n))
- print(microbenchmark(subset(df, a <= b & b <= (c ^ 2 + b ^ 2 - a) & b > c),
- times=times))
-}
-
-
-# data.table allows something very similar to query with an expression
-# but we have chained comparisons AND we're faster BOO YAH!
-data.table.subset.expression.bench <- function (n=1e7, times=30) {
- dt <- data.table(a=rnorm(n), b=rnorm(n), c=rnorm(n))
- print(microbenchmark(dt[, a <= b & b <= (c ^ 2 + b ^ 2 - a) & b > c],
- times=times))
-}
-
-
-# compare against subset with data.table for good measure
-data.table.subset.bench <- function (n=1e7, times=30) {
- dt <- data.table(a=rnorm(n), b=rnorm(n), c=rnorm(n))
- print(microbenchmark(subset(dt, a <= b & b <= (c ^ 2 + b ^ 2 - a) & b > c),
- times=times))
-}
-
-
-data.frame.with.bench <- function (n=1e7, times=30) {
- df <- data.frame(a=rnorm(n), b=rnorm(n), c=rnorm(n))
-
- print(microbenchmark(with(df, a + b * (c ^ 2 + b ^ 2 - a) / (a * c) ^ 3),
- times=times))
-}
-
-
-data.table.with.bench <- function (n=1e7, times=30) {
- dt <- data.table(a=rnorm(n), b=rnorm(n), c=rnorm(n))
- print(microbenchmark(with(dt, a + b * (c ^ 2 + b ^ 2 - a) / (a * c) ^ 3),
- times=times))
-}
-
-
-bench <- function () {
- data.frame.subset.bench()
- data.table.subset.expression.bench()
- data.table.subset.bench()
- data.frame.with.bench()
- data.table.with.bench()
-}
-
-
-bench()
diff --git a/bench/bench_with_subset.py b/bench/bench_with_subset.py
deleted file mode 100644
index 017401df3f7f3..0000000000000
--- a/bench/bench_with_subset.py
+++ /dev/null
@@ -1,116 +0,0 @@
-#!/usr/bin/env python
-
-"""
-Microbenchmarks for comparison with R's "with" and "subset" functions
-"""
-
-from __future__ import print_function
-import numpy as np
-from numpy import array
-from timeit import repeat as timeit
-from pandas.compat import range, zip
-from pandas import DataFrame
-
-
-setup_common = """from pandas import DataFrame
-from numpy.random import randn
-df = DataFrame(randn(%d, 3), columns=list('abc'))
-%s"""
-
-
-setup_with = "s = 'a + b * (c ** 2 + b ** 2 - a) / (a * c) ** 3'"
-
-
-def bench_with(n, times=10, repeat=3, engine='numexpr'):
- return np.array(timeit('df.eval(s, engine=%r)' % engine,
- setup=setup_common % (n, setup_with),
- repeat=repeat, number=times)) / times
-
-
-setup_subset = "s = 'a <= b <= c ** 2 + b ** 2 - a and b > c'"
-
-
-def bench_subset(n, times=10, repeat=3, engine='numexpr'):
- return np.array(timeit('df.query(s, engine=%r)' % engine,
- setup=setup_common % (n, setup_subset),
- repeat=repeat, number=times)) / times
-
-
-def bench(mn=1, mx=7, num=100, engines=('python', 'numexpr'), verbose=False):
- r = np.logspace(mn, mx, num=num).round().astype(int)
-
- ev = DataFrame(np.empty((num, len(engines))), columns=engines)
- qu = ev.copy(deep=True)
-
- ev['size'] = qu['size'] = r
-
- for engine in engines:
- for i, n in enumerate(r):
- if verbose:
- print('engine: %r, i == %d' % (engine, i))
- ev.loc[i, engine] = bench_with(n, times=1, repeat=1, engine=engine)
- qu.loc[i, engine] = bench_subset(n, times=1, repeat=1,
- engine=engine)
-
- return ev, qu
-
-
-def plot_perf(df, engines, title, filename=None):
- from matplotlib.pyplot import figure, rc
-
- try:
- from mpltools import style
- except ImportError:
- pass
- else:
- style.use('ggplot')
-
- rc('text', usetex=True)
-
- fig = figure(figsize=(4, 3), dpi=100)
- ax = fig.add_subplot(111)
-
- for engine in engines:
- ax.plot(df.size, df[engine], label=engine, lw=2)
-
- ax.set_xlabel('Number of Rows')
- ax.set_ylabel('Time (s)')
- ax.set_title(title)
- ax.legend(loc='best')
- ax.tick_params(top=False, right=False)
-
- fig.tight_layout()
-
- if filename is not None:
- fig.savefig(filename)
-
-
-if __name__ == '__main__':
- import os
- import pandas as pd
-
- pandas_dir = os.path.dirname(os.path.abspath(os.path.dirname(__file__)))
- static_path = os.path.join(pandas_dir, 'doc', 'source', '_static')
-
- join = lambda p: os.path.join(static_path, p)
-
- fn = join('eval-query-perf-data.h5')
-
- engines = 'python', 'numexpr'
-
- if not os.path.exists(fn):
- ev, qu = bench(verbose=True)
- ev.to_hdf(fn, 'eval')
- qu.to_hdf(fn, 'query')
- else:
- ev = pd.read_hdf(fn, 'eval')
- qu = pd.read_hdf(fn, 'query')
-
- plot_perf(ev, engines, 'DataFrame.eval()', filename=join('eval-perf.png'))
- plot_perf(qu, engines, 'DataFrame.query()',
- filename=join('query-perf.png'))
-
- plot_perf(ev[ev.size <= 50000], engines, 'DataFrame.eval()',
- filename=join('eval-perf-small.png'))
- plot_perf(qu[qu.size <= 500000], engines, 'DataFrame.query()',
- filename=join('query-perf-small.png'))
diff --git a/bench/better_unique.py b/bench/better_unique.py
deleted file mode 100644
index e03a4f433ce66..0000000000000
--- a/bench/better_unique.py
+++ /dev/null
@@ -1,80 +0,0 @@
-from __future__ import print_function
-from pandas import DataFrame
-from pandas.compat import range, zip
-import timeit
-
-setup = """
-from pandas import Series
-import pandas._tseries as _tseries
-from pandas.compat import range
-import random
-import numpy as np
-
-def better_unique(values):
- uniques = _tseries.fast_unique(values)
- id_map = _tseries.map_indices_buf(uniques)
- labels = _tseries.get_unique_labels(values, id_map)
- return uniques, labels
-
-tot = 100000
-
-def get_test_data(ngroups=100, n=tot):
- unique_groups = range(ngroups)
- random.shuffle(unique_groups)
- arr = np.asarray(np.tile(unique_groups, n / ngroups), dtype=object)
-
- if len(arr) < n:
- arr = np.asarray(list(arr) + unique_groups[:n - len(arr)],
- dtype=object)
-
- return arr
-
-arr = get_test_data(ngroups=%d)
-"""
-
-group_sizes = [10, 100, 1000, 10000,
- 20000, 30000, 40000,
- 50000, 60000, 70000,
- 80000, 90000, 100000]
-
-numbers = [100, 100, 50] + [10] * 10
-
-numpy = []
-wes = []
-
-for sz, n in zip(group_sizes, numbers):
- # wes_timer = timeit.Timer(stmt='better_unique(arr)',
- # setup=setup % sz)
- wes_timer = timeit.Timer(stmt='_tseries.fast_unique(arr)',
- setup=setup % sz)
-
- numpy_timer = timeit.Timer(stmt='np.unique(arr)',
- setup=setup % sz)
-
- print(n)
- numpy_result = numpy_timer.timeit(number=n) / n
- wes_result = wes_timer.timeit(number=n) / n
-
- print('Groups: %d, NumPy: %s, Wes: %s' % (sz, numpy_result, wes_result))
-
- wes.append(wes_result)
- numpy.append(numpy_result)
-
-result = DataFrame({'wes': wes, 'numpy': numpy}, index=group_sizes)
-
-
-def make_plot(numpy, wes):
- pass
-
-# def get_test_data(ngroups=100, n=100000):
-# unique_groups = range(ngroups)
-# random.shuffle(unique_groups)
-# arr = np.asarray(np.tile(unique_groups, n / ngroups), dtype=object)
-
-# if len(arr) < n:
-# arr = np.asarray(list(arr) + unique_groups[:n - len(arr)],
-# dtype=object)
-
-# return arr
-
-# arr = get_test_data(ngroups=1000)
diff --git a/bench/duplicated.R b/bench/duplicated.R
deleted file mode 100644
index eb2376df2932a..0000000000000
--- a/bench/duplicated.R
+++ /dev/null
@@ -1,22 +0,0 @@
-N <- 100000
-
-k1 = rep(NA, N)
-k2 = rep(NA, N)
-for (i in 1:N){
- k1[i] <- paste(sample(letters, 1), collapse="")
- k2[i] <- paste(sample(letters, 1), collapse="")
-}
-df <- data.frame(a=k1, b=k2, c=rep(1:100, N / 100))
-df2 <- data.frame(a=k1, b=k2)
-
-timings <- numeric()
-timings2 <- numeric()
-for (i in 1:50) {
- gc()
- timings[i] = system.time(deduped <- df[!duplicated(df),])[3]
- gc()
- timings2[i] = system.time(deduped <- df[!duplicated(df[,c("a", "b")]),])[3]
-}
-
-mean(timings)
-mean(timings2)
diff --git a/bench/io_roundtrip.py b/bench/io_roundtrip.py
deleted file mode 100644
index d87da0ec6321a..0000000000000
--- a/bench/io_roundtrip.py
+++ /dev/null
@@ -1,116 +0,0 @@
-from __future__ import print_function
-import time
-import os
-import numpy as np
-
-import la
-import pandas
-from pandas.compat import range
-from pandas import datetools, DatetimeIndex
-
-
-def timeit(f, iterations):
- start = time.clock()
-
- for i in range(iterations):
- f()
-
- return time.clock() - start
-
-
-def rountrip_archive(N, K=50, iterations=10):
- # Create data
- arr = np.random.randn(N, K)
- # lar = la.larry(arr)
- dma = pandas.DataFrame(arr,
- DatetimeIndex('1/1/2000', periods=N,
- offset=datetools.Minute()))
- dma[201] = 'bar'
-
- # filenames
- filename_numpy = '/Users/wesm/tmp/numpy.npz'
- filename_larry = '/Users/wesm/tmp/archive.hdf5'
- filename_pandas = '/Users/wesm/tmp/pandas_tmp'
-
- # Delete old files
- try:
- os.unlink(filename_numpy)
- except:
- pass
- try:
- os.unlink(filename_larry)
- except:
- pass
-
- try:
- os.unlink(filename_pandas)
- except:
- pass
-
- # Time a round trip save and load
- # numpy_f = lambda: numpy_roundtrip(filename_numpy, arr, arr)
- # numpy_time = timeit(numpy_f, iterations) / iterations
-
- # larry_f = lambda: larry_roundtrip(filename_larry, lar, lar)
- # larry_time = timeit(larry_f, iterations) / iterations
-
- pandas_f = lambda: pandas_roundtrip(filename_pandas, dma, dma)
- pandas_time = timeit(pandas_f, iterations) / iterations
- print('pandas (HDF5) %7.4f seconds' % pandas_time)
-
- pickle_f = lambda: pandas_roundtrip(filename_pandas, dma, dma)
- pickle_time = timeit(pickle_f, iterations) / iterations
- print('pandas (pickle) %7.4f seconds' % pickle_time)
-
- # print('Numpy (npz) %7.4f seconds' % numpy_time)
- # print('larry (HDF5) %7.4f seconds' % larry_time)
-
- # Delete old files
- try:
- os.unlink(filename_numpy)
- except:
- pass
- try:
- os.unlink(filename_larry)
- except:
- pass
-
- try:
- os.unlink(filename_pandas)
- except:
- pass
-
-
-def numpy_roundtrip(filename, arr1, arr2):
- np.savez(filename, arr1=arr1, arr2=arr2)
- npz = np.load(filename)
- arr1 = npz['arr1']
- arr2 = npz['arr2']
-
-
-def larry_roundtrip(filename, lar1, lar2):
- io = la.IO(filename)
- io['lar1'] = lar1
- io['lar2'] = lar2
- lar1 = io['lar1']
- lar2 = io['lar2']
-
-
-def pandas_roundtrip(filename, dma1, dma2):
- # What's the best way to code this?
- from pandas.io.pytables import HDFStore
- store = HDFStore(filename)
- store['dma1'] = dma1
- store['dma2'] = dma2
- dma1 = store['dma1']
- dma2 = store['dma2']
-
-
-def pandas_roundtrip_pickle(filename, dma1, dma2):
- dma1.save(filename)
- dma1 = pandas.DataFrame.load(filename)
- dma2.save(filename)
- dma2 = pandas.DataFrame.load(filename)
-
-if __name__ == '__main__':
- rountrip_archive(10000, K=200)
diff --git a/bench/larry.py b/bench/larry.py
deleted file mode 100644
index e69de29bb2d1d..0000000000000
diff --git a/bench/serialize.py b/bench/serialize.py
deleted file mode 100644
index b0edd6a5752d2..0000000000000
--- a/bench/serialize.py
+++ /dev/null
@@ -1,89 +0,0 @@
-from __future__ import print_function
-from pandas.compat import range, lrange
-import time
-import os
-import numpy as np
-
-import la
-import pandas
-
-
-def timeit(f, iterations):
- start = time.clock()
-
- for i in range(iterations):
- f()
-
- return time.clock() - start
-
-
-def roundtrip_archive(N, iterations=10):
-
- # Create data
- arr = np.random.randn(N, N)
- lar = la.larry(arr)
- dma = pandas.DataFrame(arr, lrange(N), lrange(N))
-
- # filenames
- filename_numpy = '/Users/wesm/tmp/numpy.npz'
- filename_larry = '/Users/wesm/tmp/archive.hdf5'
- filename_pandas = '/Users/wesm/tmp/pandas_tmp'
-
- # Delete old files
- try:
- os.unlink(filename_numpy)
- except:
- pass
- try:
- os.unlink(filename_larry)
- except:
- pass
- try:
- os.unlink(filename_pandas)
- except:
- pass
-
- # Time a round trip save and load
- numpy_f = lambda: numpy_roundtrip(filename_numpy, arr, arr)
- numpy_time = timeit(numpy_f, iterations) / iterations
-
- larry_f = lambda: larry_roundtrip(filename_larry, lar, lar)
- larry_time = timeit(larry_f, iterations) / iterations
-
- pandas_f = lambda: pandas_roundtrip(filename_pandas, dma, dma)
- pandas_time = timeit(pandas_f, iterations) / iterations
-
- print('Numpy (npz) %7.4f seconds' % numpy_time)
- print('larry (HDF5) %7.4f seconds' % larry_time)
- print('pandas (HDF5) %7.4f seconds' % pandas_time)
-
-
-def numpy_roundtrip(filename, arr1, arr2):
- np.savez(filename, arr1=arr1, arr2=arr2)
- npz = np.load(filename)
- arr1 = npz['arr1']
- arr2 = npz['arr2']
-
-
-def larry_roundtrip(filename, lar1, lar2):
- io = la.IO(filename)
- io['lar1'] = lar1
- io['lar2'] = lar2
- lar1 = io['lar1']
- lar2 = io['lar2']
-
-
-def pandas_roundtrip(filename, dma1, dma2):
- from pandas.io.pytables import HDFStore
- store = HDFStore(filename)
- store['dma1'] = dma1
- store['dma2'] = dma2
- dma1 = store['dma1']
- dma2 = store['dma2']
-
-
-def pandas_roundtrip_pickle(filename, dma1, dma2):
- dma1.save(filename)
- dma1 = pandas.DataFrame.load(filename)
- dma2.save(filename)
- dma2 = pandas.DataFrame.load(filename)
diff --git a/bench/test.py b/bench/test.py
deleted file mode 100644
index 2339deab313a1..0000000000000
--- a/bench/test.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import numpy as np
-import itertools
-import collections
-import scipy.ndimage as ndi
-from pandas.compat import zip, range
-
-N = 10000
-
-lat = np.random.randint(0, 360, N)
-lon = np.random.randint(0, 360, N)
-data = np.random.randn(N)
-
-
-def groupby1(lat, lon, data):
- indexer = np.lexsort((lon, lat))
- lat = lat.take(indexer)
- lon = lon.take(indexer)
- sorted_data = data.take(indexer)
-
- keys = 1000. * lat + lon
- unique_keys = np.unique(keys)
- bounds = keys.searchsorted(unique_keys)
-
- result = group_agg(sorted_data, bounds, lambda x: x.mean())
-
- decoder = keys.searchsorted(unique_keys)
-
- return dict(zip(zip(lat.take(decoder), lon.take(decoder)), result))
-
-
-def group_mean(lat, lon, data):
- indexer = np.lexsort((lon, lat))
- lat = lat.take(indexer)
- lon = lon.take(indexer)
- sorted_data = data.take(indexer)
-
- keys = 1000 * lat + lon
- unique_keys = np.unique(keys)
-
- result = ndi.mean(sorted_data, labels=keys, index=unique_keys)
- decoder = keys.searchsorted(unique_keys)
-
- return dict(zip(zip(lat.take(decoder), lon.take(decoder)), result))
-
-
-def group_mean_naive(lat, lon, data):
- grouped = collections.defaultdict(list)
- for lt, ln, da in zip(lat, lon, data):
- grouped[(lt, ln)].append(da)
-
- averaged = dict((ltln, np.mean(da)) for ltln, da in grouped.items())
-
- return averaged
-
-
-def group_agg(values, bounds, f):
- N = len(values)
- result = np.empty(len(bounds), dtype=float)
- for i, left_bound in enumerate(bounds):
- if i == len(bounds) - 1:
- right_bound = N
- else:
- right_bound = bounds[i + 1]
-
- result[i] = f(values[left_bound: right_bound])
-
- return result
-
-# for i in range(10):
-# groupby1(lat, lon, data)
diff --git a/bench/zoo_bench.R b/bench/zoo_bench.R
deleted file mode 100644
index 294d55f51a9ab..0000000000000
--- a/bench/zoo_bench.R
+++ /dev/null
@@ -1,71 +0,0 @@
-library(zoo)
-library(xts)
-library(fts)
-library(tseries)
-library(its)
-library(xtable)
-
-## indices = rep(NA, 100000)
-## for (i in 1:100000)
-## indices[i] <- paste(sample(letters, 10), collapse="")
-
-
-
-## x <- zoo(rnorm(100000), indices)
-## y <- zoo(rnorm(90000), indices[sample(1:100000, 90000)])
-
-## indices <- as.POSIXct(1:100000)
-
-indices <- as.POSIXct(Sys.Date()) + seq(1, 100000000, 100)
-
-sz <- 500000
-
-## x <- xts(rnorm(sz), sample(indices, sz))
-## y <- xts(rnorm(sz), sample(indices, sz))
-
-zoo.bench <- function(){
- x <- zoo(rnorm(sz), sample(indices, sz))
- y <- zoo(rnorm(sz), sample(indices, sz))
- timeit(function() {x + y})
-}
-
-xts.bench <- function(){
- x <- xts(rnorm(sz), sample(indices, sz))
- y <- xts(rnorm(sz), sample(indices, sz))
- timeit(function() {x + y})
-}
-
-fts.bench <- function(){
- x <- fts(rnorm(sz), sort(sample(indices, sz)))
- y <- fts(rnorm(sz), sort(sample(indices, sz))
- timeit(function() {x + y})
-}
-
-its.bench <- function(){
- x <- its(rnorm(sz), sort(sample(indices, sz)))
- y <- its(rnorm(sz), sort(sample(indices, sz)))
- timeit(function() {x + y})
-}
-
-irts.bench <- function(){
- x <- irts(sort(sample(indices, sz)), rnorm(sz))
- y <- irts(sort(sample(indices, sz)), rnorm(sz))
- timeit(function() {x + y})
-}
-
-timeit <- function(f){
- timings <- numeric()
- for (i in 1:10) {
- gc()
- timings[i] = system.time(f())[3]
- }
- mean(timings)
-}
-
-bench <- function(){
- results <- c(xts.bench(), fts.bench(), its.bench(), zoo.bench())
- names <- c("xts", "fts", "its", "zoo")
- data.frame(results, names)
-}
-
-result <- bench()
diff --git a/bench/zoo_bench.py b/bench/zoo_bench.py
deleted file mode 100644
index 74cb1952a5a2a..0000000000000
--- a/bench/zoo_bench.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from pandas import *
-from pandas.util.testing import rands
-
-n = 1000000
-# indices = Index([rands(10) for _ in xrange(n)])
-
-
-def sample(values, k):
- sampler = np.random.permutation(len(values))
- return values.take(sampler[:k])
-sz = 500000
-rng = np.arange(0, 10000000000000, 10000000)
-stamps = np.datetime64(datetime.now()).view('i8') + rng
-idx1 = np.sort(sample(stamps, sz))
-idx2 = np.sort(sample(stamps, sz))
-ts1 = Series(np.random.randn(sz), idx1)
-ts2 = Series(np.random.randn(sz), idx2)
-
-
-# subsample_size = 90000
-
-# x = Series(np.random.randn(100000), indices)
-# y = Series(np.random.randn(subsample_size),
-# index=sample(indices, subsample_size))
-
-
-# lx = larry(np.random.randn(100000), [list(indices)])
-# ly = larry(np.random.randn(subsample_size), [list(y.index)])
-
-# Benchmark 1: Two 1-million length time series (int64-based index) with
-# randomly chosen timestamps
-
-# Benchmark 2: Join two 5-variate time series DataFrames (outer and inner join)
-
-# df1 = DataFrame(np.random.randn(1000000, 5), idx1, columns=range(5))
-# df2 = DataFrame(np.random.randn(1000000, 5), idx2, columns=range(5, 10))
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 394fa44c30573..cb3063d59beae 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -17,6 +17,11 @@
import importlib
from pandas.compat import u, PY3
+try:
+ raw_input # Python 2
+except NameError:
+ raw_input = input # Python 3
+
# https://github.com/sphinx-doc/sphinx/pull/2325/files
# Workaround for sphinx-build recursion limit overflow:
# pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
diff --git a/doc/sphinxext/ipython_sphinxext/ipython_directive.py b/doc/sphinxext/ipython_sphinxext/ipython_directive.py
index 49fbacba99592..922767a8e2d46 100644
--- a/doc/sphinxext/ipython_sphinxext/ipython_directive.py
+++ b/doc/sphinxext/ipython_sphinxext/ipython_directive.py
@@ -111,7 +111,7 @@
import sys
import tempfile
import ast
-from pandas.compat import zip, range, map, lmap, u, cStringIO as StringIO
+from pandas.compat import zip, range, map, lmap, u, text_type, cStringIO as StringIO
import warnings
# To keep compatibility with various python versions
@@ -138,10 +138,8 @@
if PY3:
from io import StringIO
- text_type = str
else:
from StringIO import StringIO
- text_type = unicode
#-----------------------------------------------------------------------------
# Globals
diff --git a/scripts/find_commits_touching_func.py b/scripts/find_commits_touching_func.py
index 099761f38bb44..74ea120bf0b64 100755
--- a/scripts/find_commits_touching_func.py
+++ b/scripts/find_commits_touching_func.py
@@ -4,7 +4,7 @@
# copryright 2013, y-p @ github
from __future__ import print_function
-from pandas.compat import range, lrange, map
+from pandas.compat import range, lrange, map, string_types, text_type
"""Search the git history for all commits touching a named method
@@ -94,7 +94,7 @@ def get_hits(defname,files=()):
def get_commit_info(c,fmt,sep='\t'):
r=sh.git('log', "--format={}".format(fmt), '{}^..{}'.format(c,c),"-n","1",_tty_out=False)
- return compat.text_type(r).split(sep)
+ return text_type(r).split(sep)
def get_commit_vitals(c,hlen=HASH_LEN):
h,s,d= get_commit_info(c,'%H\t%s\t%ci',"\t")
@@ -183,11 +183,11 @@ def main():
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
""")
return
- if isinstance(args.file_masks,compat.string_types):
+ if isinstance(args.file_masks, string_types):
args.file_masks = args.file_masks.split(',')
- if isinstance(args.path_masks,compat.string_types):
+ if isinstance(args.path_masks, string_types):
args.path_masks = args.path_masks.split(',')
- if isinstance(args.dir_masks,compat.string_types):
+ if isinstance(args.dir_masks, string_types):
args.dir_masks = args.dir_masks.split(',')
logger.setLevel(getattr(logging,args.debug_level))
diff --git a/scripts/windows_builder/build_27-32.bat b/scripts/windows_builder/build_27-32.bat
deleted file mode 100644
index 37eb4d436d567..0000000000000
--- a/scripts/windows_builder/build_27-32.bat
+++ /dev/null
@@ -1,25 +0,0 @@
-@echo off
-echo "starting 27-32"
-
-setlocal EnableDelayedExpansion
-set MSSdk=1
-CALL "C:\Program Files\Microsoft SDKs\Windows\v7.0\Bin\SetEnv.cmd" /x86 /release
-set DISTUTILS_USE_SDK=1
-
-title 27-32 build
-echo "building"
-cd "c:\users\Jeff Reback\documents\github\pandas"
-C:\python27-32\python.exe setup.py build > build.27-32.log 2>&1
-
-title "installing"
-C:\python27-32\python.exe setup.py bdist --formats=wininst > install.27-32.log 2>&1
-
-echo "testing"
-C:\python27-32\scripts\nosetests -A "not slow" build\lib.win32-2.7\pandas > test.27-32.log 2>&1
-
-echo "versions"
-cd build\lib.win32-2.7
-C:\python27-32\python.exe ../../ci/print_versions.py > ../../versions.27-32.log 2>&1
-
-exit
-
diff --git a/scripts/windows_builder/build_27-64.bat b/scripts/windows_builder/build_27-64.bat
deleted file mode 100644
index e76e25d0ef39c..0000000000000
--- a/scripts/windows_builder/build_27-64.bat
+++ /dev/null
@@ -1,25 +0,0 @@
-@echo off
-echo "starting 27-64"
-
-setlocal EnableDelayedExpansion
-set MSSdk=1
-CALL "C:\Program Files\Microsoft SDKs\Windows\v7.0\Bin\SetEnv.cmd" /x64 /release
-set DISTUTILS_USE_SDK=1
-
-title 27-64 build
-echo "building"
-cd "c:\users\Jeff Reback\documents\github\pandas"
-C:\python27-64\python.exe setup.py build > build.27-64.log 2>&1
-
-echo "installing"
-C:\python27-64\python.exe setup.py bdist --formats=wininst > install.27-64.log 2>&1
-
-echo "testing"
-C:\python27-64\scripts\nosetests -A "not slow" build\lib.win-amd64-2.7\pandas > test.27-64.log 2>&1
-
-echo "versions"
-cd build\lib.win-amd64-2.7
-C:\python27-64\python.exe ../../ci/print_versions.py > ../../versions.27-64.log 2>&1
-
-exit
-
diff --git a/scripts/windows_builder/build_34-32.bat b/scripts/windows_builder/build_34-32.bat
deleted file mode 100644
index 8e060e000bc8f..0000000000000
--- a/scripts/windows_builder/build_34-32.bat
+++ /dev/null
@@ -1,27 +0,0 @@
-@echo off
-echo "starting 34-32"
-
-setlocal EnableDelayedExpansion
-set MSSdk=1
-CALL "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /x86 /release
-set DISTUTILS_USE_SDK=1
-
-title 34-32 build
-echo "building"
-cd "c:\users\Jeff Reback\documents\github\pandas"
-C:\python34-32\python.exe setup.py build > build.34-32.log 2>&1
-
-echo "installing"
-C:\python34-32\python.exe setup.py bdist --formats=wininst > install.34-32.log 2>&1
-
-echo "testing"
-C:\python34-32\scripts\nosetests -A "not slow" build\lib.win32-3.4\pandas > test.34-32.log 2>&1
-
-echo "versions"
-cd build\lib.win32-3.4
-C:\python34-32\python.exe ../../ci/print_versions.py > ../../versions.34-32.log 2>&1
-
-exit
-
-
-
diff --git a/scripts/windows_builder/build_34-64.bat b/scripts/windows_builder/build_34-64.bat
deleted file mode 100644
index 3a8512b730346..0000000000000
--- a/scripts/windows_builder/build_34-64.bat
+++ /dev/null
@@ -1,27 +0,0 @@
-@echo off
-echo "starting 34-64"
-
-setlocal EnableDelayedExpansion
-set MSSdk=1
-CALL "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /x64 /release
-set DISTUTILS_USE_SDK=1
-
-title 34-64 build
-echo "building"
-cd "c:\users\Jeff Reback\documents\github\pandas"
-C:\python34-64\python.exe setup.py build > build.34-64.log 2>&1
-
-echo "installing"
-C:\python34-64\python.exe setup.py bdist --formats=wininst > install.34-64.log 2>&1
-
-echo "testing"
-C:\python34-64\scripts\nosetests -A "not slow" build\lib.win-amd64-3.4\pandas > test.34-64.log 2>&1
-
-echo "versions"
-cd build\lib.win-amd64-3.4
-C:\python34-64\python.exe ../../ci/print_versions.py > ../../versions.34-64.log 2>&1
-
-exit
-
-
-
diff --git a/scripts/windows_builder/check_and_build.bat b/scripts/windows_builder/check_and_build.bat
deleted file mode 100644
index 32be1bde1f7f3..0000000000000
--- a/scripts/windows_builder/check_and_build.bat
+++ /dev/null
@@ -1,2 +0,0 @@
-set PYTHONPATH=c:/python27-64/lib
-c:/python27-64/python.exe c:/Builds/check_and_build.py %1 %2 %3 %4 %4 %6 %7 %8 %9
diff --git a/scripts/windows_builder/check_and_build.py b/scripts/windows_builder/check_and_build.py
deleted file mode 100644
index 2eb32fb4265d9..0000000000000
--- a/scripts/windows_builder/check_and_build.py
+++ /dev/null
@@ -1,194 +0,0 @@
-import datetime
-import git
-import logging
-import os, re, time
-import subprocess
-import argparse
-import pysftp
-
-# parse the args
-parser = argparse.ArgumentParser(description='build, test, and install updated versions of master pandas')
-parser.add_argument('-b', '--build',
- help='run just this build',
- dest='build')
-parser.add_argument('-u', '--update',
- help='get a git update',
- dest='update',
- action='store_true',
- default=False)
-parser.add_argument('-t', '--test',
- help='run the tests',
- dest='test',
- action='store_true',
- default=False)
-parser.add_argument('-c', '--compare',
- help='show the last tests compare',
- dest='compare',
- action='store_true',
- default=False)
-parser.add_argument('-v', '--version',
- help='show the last versions',
- dest='version',
- action='store_true',
- default=False)
-parser.add_argument('-i', '--install',
- help='run the install',
- dest='install',
- action='store_true',
- default=False)
-parser.add_argument('--dry',
- help='dry run',
- dest='dry',
- action='store_true',
- default=False)
-
-args = parser.parse_args()
-dry_run = args.dry
-
-builds = ['27-32','27-64','34-32','34-64']
-base_dir = "C:\Users\Jeff Reback\Documents\GitHub\pandas"
-remote_host='pandas.pydata.org'
-username='pandas'
-password=############
-
-# drop python from our environment to avoid
-# passing this onto sub-processes
-env = os.environ
-del env['PYTHONPATH']
-
-# the stdout logger
-fmt = '%(asctime)s: %(message)s'
-logger = logging.getLogger('check_and_build')
-logger.setLevel(logging.DEBUG)
-stream_handler = logging.StreamHandler()
-stream_handler.setFormatter(logging.Formatter(fmt))
-logger.addHandler(stream_handler)
-
-def run_all(test=False,compare=False,install=False,version=False,build=None):
- # run everything
-
- for b in builds:
- if build is not None and build != b:
- continue
- if test:
- do_rebuild(b)
- if compare or test:
- try:
- do_compare(b)
- except (Exception) as e:
- logger.info("ERROR COMPARE {0} : {1}".format(b,e))
- if version:
- try:
- do_version(b)
- except (Exception) as e:
- logger.info("ERROR VERSION {0} : {1}".format(b,e))
-
- if install:
- run_install()
-
-def do_rebuild(build):
- # trigger the rebuild
-
- cmd = "c:/Builds/build_{0}.bat".format(build)
- logger.info("rebuild : {0}".format(cmd))
- p = subprocess.Popen("start /wait /min {0}".format(cmd),env=env,shell=True,close_fds=True)
- ret = p.wait()
-
-def do_compare(build):
- # print the test outputs
-
- f = os.path.join(base_dir,"test.{0}.log".format(build))
- with open(f,'r') as fh:
- for l in fh:
- l = l.rstrip()
- if l.startswith('ERROR:'):
- logger.info("{0} : {1}".format(build,l))
- if l.startswith('Ran') or l.startswith('OK') or l.startswith('FAIL'):
- logger.info("{0} : {1}".format(build,l))
-
-def do_version(build):
- # print the version strings
-
- f = os.path.join(base_dir,"versions.{0}.log".format(build))
- with open(f,'r') as fh:
- for l in fh:
- l = l.rstrip()
- logger.info("{0} : {1}".format(build,l))
-
-def do_update(is_verbose=True):
- # update git; return True if the commit has changed
-
- repo = git.Repo(base_dir)
- master = repo.heads.master
- origin = repo.remotes.origin
- start_commit = master.commit
-
- if is_verbose:
- logger.info("current commit : {0}".format(start_commit))
-
- try:
- origin.update()
- except (Exception) as e:
- logger.info("update exception : {0}".format(e))
- try:
- origin.pull()
- except (Exception) as e:
- logger.info("pull exception : {0}".format(e))
-
- result = start_commit != master.commit
- if result:
- if is_verbose:
- logger.info("commits changed : {0} -> {1}".format(start_commit,master.commit))
- return result
-
-def run_install():
- # send the installation binaries
-
- repo = git.Repo(base_dir)
- master = repo.heads.master
- commit = master.commit
- short_hash = str(commit)[:7]
-
- logger.info("sending files : {0}".format(commit))
- d = os.path.join(base_dir,"dist")
- files = [ f for f in os.listdir(d) if re.search(short_hash,f) ]
- srv = pysftp.Connection(host=remote_host,username=username,password=password)
- srv.chdir("www/pandas-build/dev")
-
- # get current files
- remote_files = set(srv.listdir(path='.'))
-
- for f in files:
- if f not in remote_files:
- logger.info("sending: {0}".format(f))
- local = os.path.join(d,f)
- srv.put(localpath=local)
-
- srv.close()
- logger.info("sending files: done")
-
-# just perform the action
-if args.update or args.test or args.compare or args.install or args.version:
- if args.update:
- do_update()
- run_all(test=args.test,compare=args.compare,install=args.install,version=args.version,build=args.build)
- exit(0)
-
-# file logging
-file_handler = logging.FileHandler("C:\Builds\logs\check_and_build.log")
-file_handler.setFormatter(logging.Formatter(fmt))
-logger.addHandler(file_handler)
-
-logger.info("start")
-
-# main loop
-while(True):
-
- if do_update():
- run_all(test=True,install=False)
-
- time.sleep(60*60)
-
-logger.info("exit")
-file_handler.close()
-
diff --git a/scripts/windows_builder/readme.txt b/scripts/windows_builder/readme.txt
deleted file mode 100644
index 789e2a9ee0c63..0000000000000
--- a/scripts/windows_builder/readme.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-This is a collection of windows batch scripts (and a python script)
-to rebuild the binaries, test, and upload the binaries for public distribution
-upon a commit on github.
-
-Obviously requires that these be setup on windows
-Requires an install of Windows SDK 3.5 and 4.0
-Full python installs for each version with the deps
-
-Currently supporting
-
-27-32,27-64,34-32,34-64
-
-Note that 34 use the 4.0 SDK, while the other suse 3.5 SDK
-
-I installed these scripts in C:\Builds
-
-Installed libaries in C:\Installs
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16984 | 2017-07-16T19:24:00Z | 2017-07-17T12:59:15Z | 2017-07-17T12:59:14Z | 2017-07-17T15:59:28Z |
ENH: Use 'Y' as an alias for end of year | diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 1dd80aec4fd6c..8f02a86adbd48 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1092,9 +1092,9 @@ frequencies. We will refer to these aliases as *offset aliases*
"BQ", "business quarter endfrequency"
"QS", "quarter start frequency"
"BQS", "business quarter start frequency"
- "A", "year end frequency"
+ "A, Y", "year end frequency"
"BA", "business year end frequency"
- "AS", "year start frequency"
+ "AS, YS", "year start frequency"
"BAS", "business year start frequency"
"BH", "business hour frequency"
"H", "hourly frequency"
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index c63d4575bac43..b918cf42b3a40 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -40,6 +40,8 @@ Other Enhancements
- :func:`DataFrame.clip()` and :func:`Series.clip()` have gained an ``inplace`` argument. (:issue:`15388`)
- :func:`crosstab` has gained a ``margins_name`` parameter to define the name of the row / column that will contain the totals when ``margins=True``. (:issue:`15972`)
- :func:`DataFrame.select_dtypes` now accepts scalar values for include/exclude as well as list-like. (:issue:`16855`)
+- :func:`date_range` now accepts 'YS' in addition to 'AS' as an alias for start of year (:issue:`9313`)
+- :func:`date_range` now accepts 'Y' in addition to 'A' as an alias for end of year (:issue:`9313`)
.. _whatsnew_0210.api_breaking:
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index 62686b356dc30..da4ca83c10dda 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -33,6 +33,31 @@ def test_date_range_gen_error(self):
rng = date_range('1/1/2000 00:00', '1/1/2000 00:18', freq='5min')
assert len(rng) == 4
+ @pytest.mark.parametrize("freq", ["AS", "YS"])
+ def test_begin_year_alias(self, freq):
+ # see gh-9313
+ rng = date_range("1/1/2013", "7/1/2017", freq=freq)
+ exp = pd.DatetimeIndex(["2013-01-01", "2014-01-01",
+ "2015-01-01", "2016-01-01",
+ "2017-01-01"], freq=freq)
+ tm.assert_index_equal(rng, exp)
+
+ @pytest.mark.parametrize("freq", ["A", "Y"])
+ def test_end_year_alias(self, freq):
+ # see gh-9313
+ rng = date_range("1/1/2013", "7/1/2017", freq=freq)
+ exp = pd.DatetimeIndex(["2013-12-31", "2014-12-31",
+ "2015-12-31", "2016-12-31"], freq=freq)
+ tm.assert_index_equal(rng, exp)
+
+ @pytest.mark.parametrize("freq", ["BA", "BY"])
+ def test_business_end_year_alias(self, freq):
+ # see gh-9313
+ rng = date_range("1/1/2013", "7/1/2017", freq=freq)
+ exp = pd.DatetimeIndex(["2013-12-31", "2014-12-31",
+ "2015-12-31", "2016-12-30"], freq=freq)
+ tm.assert_index_equal(rng, exp)
+
def test_date_range_negative_freq(self):
# GH 11018
rng = date_range('2011-12-31', freq='-2A', periods=3)
diff --git a/pandas/tests/tseries/test_frequencies.py b/pandas/tests/tseries/test_frequencies.py
index 54d12317b0bf8..4bcd0b49db7e0 100644
--- a/pandas/tests/tseries/test_frequencies.py
+++ b/pandas/tests/tseries/test_frequencies.py
@@ -248,9 +248,10 @@ def test_anchored_shortcuts(self):
# ensure invalid cases fail as expected
invalid_anchors = ['SM-0', 'SM-28', 'SM-29',
- 'SM-FOO', 'BSM', 'SM--1'
+ 'SM-FOO', 'BSM', 'SM--1',
'SMS-1', 'SMS-28', 'SMS-30',
- 'SMS-BAR', 'BSMS', 'SMS--2']
+ 'SMS-BAR', 'SMS-BYR' 'BSMS',
+ 'SMS--2']
for invalid_anchor in invalid_anchors:
with tm.assert_raises_regex(ValueError,
'Invalid frequency: '):
@@ -292,11 +293,15 @@ def test_get_rule_month():
result = frequencies._get_rule_month('A-DEC')
assert (result == 'DEC')
+ result = frequencies._get_rule_month('Y-DEC')
+ assert (result == 'DEC')
result = frequencies._get_rule_month(offsets.YearEnd())
assert (result == 'DEC')
result = frequencies._get_rule_month('A-MAY')
assert (result == 'MAY')
+ result = frequencies._get_rule_month('Y-MAY')
+ assert (result == 'MAY')
result = frequencies._get_rule_month(offsets.YearEnd(month=5))
assert (result == 'MAY')
@@ -305,6 +310,10 @@ def test_period_str_to_code():
assert (frequencies._period_str_to_code('A') == 1000)
assert (frequencies._period_str_to_code('A-DEC') == 1000)
assert (frequencies._period_str_to_code('A-JAN') == 1001)
+ assert (frequencies._period_str_to_code('Y') == 1000)
+ assert (frequencies._period_str_to_code('Y-DEC') == 1000)
+ assert (frequencies._period_str_to_code('Y-JAN') == 1001)
+
assert (frequencies._period_str_to_code('Q') == 2000)
assert (frequencies._period_str_to_code('Q-DEC') == 2000)
assert (frequencies._period_str_to_code('Q-FEB') == 2002)
@@ -349,6 +358,10 @@ def test_freq_code(self):
assert frequencies.get_freq('3A') == 1000
assert frequencies.get_freq('-1A') == 1000
+ assert frequencies.get_freq('Y') == 1000
+ assert frequencies.get_freq('3Y') == 1000
+ assert frequencies.get_freq('-1Y') == 1000
+
assert frequencies.get_freq('W') == 4000
assert frequencies.get_freq('W-MON') == 4001
assert frequencies.get_freq('W-FRI') == 4005
@@ -369,6 +382,13 @@ def test_freq_group(self):
assert frequencies.get_freq_group('-1A') == 1000
assert frequencies.get_freq_group('A-JAN') == 1000
assert frequencies.get_freq_group('A-MAY') == 1000
+
+ assert frequencies.get_freq_group('Y') == 1000
+ assert frequencies.get_freq_group('3Y') == 1000
+ assert frequencies.get_freq_group('-1Y') == 1000
+ assert frequencies.get_freq_group('Y-JAN') == 1000
+ assert frequencies.get_freq_group('Y-MAY') == 1000
+
assert frequencies.get_freq_group(offsets.YearEnd()) == 1000
assert frequencies.get_freq_group(offsets.YearEnd(month=1)) == 1000
assert frequencies.get_freq_group(offsets.YearEnd(month=5)) == 1000
@@ -790,12 +810,6 @@ def test_series(self):
for freq in [None, 'L']:
s = Series(period_range('2013', periods=10, freq=freq))
pytest.raises(TypeError, lambda: frequencies.infer_freq(s))
- for freq in ['Y']:
-
- msg = frequencies._INVALID_FREQ_ERROR
- with tm.assert_raises_regex(ValueError, msg):
- s = Series(period_range('2013', periods=10, freq=freq))
- pytest.raises(TypeError, lambda: frequencies.infer_freq(s))
# DateTimeIndex
for freq in ['M', 'L', 'S']:
@@ -812,11 +826,12 @@ def test_legacy_offset_warnings(self):
'W@FRI', 'W@SAT', 'W@SUN', 'Q@JAN', 'Q@FEB', 'Q@MAR',
'A@JAN', 'A@FEB', 'A@MAR', 'A@APR', 'A@MAY', 'A@JUN',
'A@JUL', 'A@AUG', 'A@SEP', 'A@OCT', 'A@NOV', 'A@DEC',
- 'WOM@1MON', 'WOM@2MON', 'WOM@3MON', 'WOM@4MON',
- 'WOM@1TUE', 'WOM@2TUE', 'WOM@3TUE', 'WOM@4TUE',
- 'WOM@1WED', 'WOM@2WED', 'WOM@3WED', 'WOM@4WED',
- 'WOM@1THU', 'WOM@2THU', 'WOM@3THU', 'WOM@4THU'
- 'WOM@1FRI', 'WOM@2FRI', 'WOM@3FRI', 'WOM@4FRI']
+ 'Y@JAN', 'WOM@1MON', 'WOM@2MON', 'WOM@3MON',
+ 'WOM@4MON', 'WOM@1TUE', 'WOM@2TUE', 'WOM@3TUE',
+ 'WOM@4TUE', 'WOM@1WED', 'WOM@2WED', 'WOM@3WED',
+ 'WOM@4WED', 'WOM@1THU', 'WOM@2THU', 'WOM@3THU',
+ 'WOM@4THU', 'WOM@1FRI', 'WOM@2FRI', 'WOM@3FRI',
+ 'WOM@4FRI']
msg = frequencies._INVALID_FREQ_ERROR
for freq in freqs:
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index c5f6c00a4005a..aa33a3849acb3 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -399,10 +399,14 @@ def _get_freq_str(base, mult=1):
'Q': 'Q',
'A': 'A',
'W': 'W',
- 'M': 'M'
+ 'M': 'M',
+ 'Y': 'A',
+ 'BY': 'A',
+ 'YS': 'A',
+ 'BYS': 'A',
}
-need_suffix = ['QS', 'BQ', 'BQS', 'AS', 'BA', 'BAS']
+need_suffix = ['QS', 'BQ', 'BQS', 'YS', 'AS', 'BY', 'BA', 'BYS', 'BAS']
for __prefix in need_suffix:
for _m in tslib._MONTHS:
_offset_to_period_map['%s-%s' % (__prefix, _m)] = \
@@ -427,9 +431,13 @@ def get_period_alias(offset_str):
'Q': 'Q-DEC',
'A': 'A-DEC', # YearEnd(month=12),
+ 'Y': 'A-DEC',
'AS': 'AS-JAN', # YearBegin(month=1),
+ 'YS': 'AS-JAN',
'BA': 'BA-DEC', # BYearEnd(month=12),
+ 'BY': 'BA-DEC',
'BAS': 'BAS-JAN', # BYearBegin(month=1),
+ 'BYS': 'BAS-JAN',
'Min': 'T',
'min': 'T',
@@ -708,7 +716,17 @@ def get_standard_freq(freq):
for _k, _v in compat.iteritems(_period_code_map):
_reverse_period_code_map[_v] = _k
-# Additional aliases
+# Yearly aliases
+year_aliases = {}
+
+for k, v in compat.iteritems(_period_code_map):
+ if k.startswith("A-"):
+ alias = "Y" + k[1:]
+ year_aliases[alias] = v
+
+_period_code_map.update(**year_aliases)
+del year_aliases
+
_period_code_map.update({
"Q": 2000, # Quarterly - December year end (default quarterly)
"A": 1000, # Annual
| Redo of #16958.
Closes #9313.
cc @imih
Additional components:
- [x] `whatsnew` mention
- [x] doc in `timeseries.rst`
- [x] tests for 'YS' and `date_range` | https://api.github.com/repos/pandas-dev/pandas/pulls/16978 | 2017-07-16T10:35:56Z | 2017-07-19T02:51:58Z | 2017-07-19T02:51:58Z | 2017-07-19T15:34:36Z |
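Once the alias from this diff lands, `'Y'` becomes a drop-in synonym for `'A'` (annual, year-end anchored) in frequency strings. A minimal sketch of the behavior the PR enables, assuming a pandas build that includes the alias (newer pandas versions deprecate both spellings in favor of `'YE'`, hence the warnings guard):

```python
import warnings

import pandas as pd

with warnings.catch_warnings():
    # Newer pandas emits FutureWarning for 'A'/'Y'; silence it so the
    # sketch runs across versions.
    warnings.simplefilter('ignore')
    # 'Y' aliases 'A', so both produce identical December year-end ranges.
    rng_a = pd.date_range('2013-01-01', periods=3, freq='A')
    rng_y = pd.date_range('2013-01-01', periods=3, freq='Y')

# Both ranges are anchored to year-end: 2013-12-31, 2014-12-31, 2015-12-31.
assert rng_a.equals(rng_y)
```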
TST: Add test for sub-char in read_csv | diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
index 584a6561b505b..4d1f9936af983 100644
--- a/pandas/tests/io/parser/common.py
+++ b/pandas/tests/io/parser/common.py
@@ -1677,6 +1677,16 @@ def test_internal_eof_byte_to_file(self):
result = self.read_csv(path)
tm.assert_frame_equal(result, expected)
+ def test_sub_character(self):
+ # see gh-16893
+ dirpath = tm.get_data_path()
+ filename = os.path.join(dirpath, "sub_char.csv")
+
+ expected = DataFrame([[1, 2, 3]], columns=["a", "\x1ab", "c"])
+ result = self.read_csv(filename)
+
+ tm.assert_frame_equal(result, expected)
+
def test_file_handles(self):
# GH 14418 - don't close user provided file handles
diff --git a/pandas/tests/io/parser/data/sub_char.csv b/pandas/tests/io/parser/data/sub_char.csv
new file mode 100644
index 0000000000000..ff1fa777832c7
--- /dev/null
+++ b/pandas/tests/io/parser/data/sub_char.csv
@@ -0,0 +1,2 @@
+a,"b",c
+1,2,3
\ No newline at end of file
| Title is self-explanatory.
Closes #16893.
cc @Khris777 | https://api.github.com/repos/pandas-dev/pandas/pulls/16977 | 2017-07-16T10:29:49Z | 2017-07-16T15:31:13Z | 2017-07-16T15:31:13Z | 2017-07-16T17:17:00Z |
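Note that the middle header cell of the fixture above contains a literal SUB (`0x1a`) control character before the `b`, which is invisible in the rendered diff. A rough in-memory reproduction of what the new test asserts (a sketch, assuming `read_csv` tolerates the embedded control character as the fix intends):

```python
import io

import pandas as pd

# The SUB byte (0x1a) historically signalled EOF to some parsers; here it
# is embedded in a quoted header cell, mirroring the sub_char.csv fixture.
data = 'a,"\x1ab",c\n1,2,3\n'
df = pd.read_csv(io.StringIO(data))

# The SUB byte survives as part of the column name instead of truncating
# the parse.
assert list(df.columns) == ['a', '\x1ab', 'c']
assert df.iloc[0].tolist() == [1, 2, 3]
```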
Revert "Create a 'Y' alias for date_range yearly frequency " | diff --git a/pandas/tests/tseries/test_frequencies.py b/pandas/tests/tseries/test_frequencies.py
index 4bcd0b49db7e0..54d12317b0bf8 100644
--- a/pandas/tests/tseries/test_frequencies.py
+++ b/pandas/tests/tseries/test_frequencies.py
@@ -248,10 +248,9 @@ def test_anchored_shortcuts(self):
# ensure invalid cases fail as expected
invalid_anchors = ['SM-0', 'SM-28', 'SM-29',
- 'SM-FOO', 'BSM', 'SM--1',
+ 'SM-FOO', 'BSM', 'SM--1'
'SMS-1', 'SMS-28', 'SMS-30',
- 'SMS-BAR', 'SMS-BYR' 'BSMS',
- 'SMS--2']
+ 'SMS-BAR', 'BSMS', 'SMS--2']
for invalid_anchor in invalid_anchors:
with tm.assert_raises_regex(ValueError,
'Invalid frequency: '):
@@ -293,15 +292,11 @@ def test_get_rule_month():
result = frequencies._get_rule_month('A-DEC')
assert (result == 'DEC')
- result = frequencies._get_rule_month('Y-DEC')
- assert (result == 'DEC')
result = frequencies._get_rule_month(offsets.YearEnd())
assert (result == 'DEC')
result = frequencies._get_rule_month('A-MAY')
assert (result == 'MAY')
- result = frequencies._get_rule_month('Y-MAY')
- assert (result == 'MAY')
result = frequencies._get_rule_month(offsets.YearEnd(month=5))
assert (result == 'MAY')
@@ -310,10 +305,6 @@ def test_period_str_to_code():
assert (frequencies._period_str_to_code('A') == 1000)
assert (frequencies._period_str_to_code('A-DEC') == 1000)
assert (frequencies._period_str_to_code('A-JAN') == 1001)
- assert (frequencies._period_str_to_code('Y') == 1000)
- assert (frequencies._period_str_to_code('Y-DEC') == 1000)
- assert (frequencies._period_str_to_code('Y-JAN') == 1001)
-
assert (frequencies._period_str_to_code('Q') == 2000)
assert (frequencies._period_str_to_code('Q-DEC') == 2000)
assert (frequencies._period_str_to_code('Q-FEB') == 2002)
@@ -358,10 +349,6 @@ def test_freq_code(self):
assert frequencies.get_freq('3A') == 1000
assert frequencies.get_freq('-1A') == 1000
- assert frequencies.get_freq('Y') == 1000
- assert frequencies.get_freq('3Y') == 1000
- assert frequencies.get_freq('-1Y') == 1000
-
assert frequencies.get_freq('W') == 4000
assert frequencies.get_freq('W-MON') == 4001
assert frequencies.get_freq('W-FRI') == 4005
@@ -382,13 +369,6 @@ def test_freq_group(self):
assert frequencies.get_freq_group('-1A') == 1000
assert frequencies.get_freq_group('A-JAN') == 1000
assert frequencies.get_freq_group('A-MAY') == 1000
-
- assert frequencies.get_freq_group('Y') == 1000
- assert frequencies.get_freq_group('3Y') == 1000
- assert frequencies.get_freq_group('-1Y') == 1000
- assert frequencies.get_freq_group('Y-JAN') == 1000
- assert frequencies.get_freq_group('Y-MAY') == 1000
-
assert frequencies.get_freq_group(offsets.YearEnd()) == 1000
assert frequencies.get_freq_group(offsets.YearEnd(month=1)) == 1000
assert frequencies.get_freq_group(offsets.YearEnd(month=5)) == 1000
@@ -810,6 +790,12 @@ def test_series(self):
for freq in [None, 'L']:
s = Series(period_range('2013', periods=10, freq=freq))
pytest.raises(TypeError, lambda: frequencies.infer_freq(s))
+ for freq in ['Y']:
+
+ msg = frequencies._INVALID_FREQ_ERROR
+ with tm.assert_raises_regex(ValueError, msg):
+ s = Series(period_range('2013', periods=10, freq=freq))
+ pytest.raises(TypeError, lambda: frequencies.infer_freq(s))
# DateTimeIndex
for freq in ['M', 'L', 'S']:
@@ -826,12 +812,11 @@ def test_legacy_offset_warnings(self):
'W@FRI', 'W@SAT', 'W@SUN', 'Q@JAN', 'Q@FEB', 'Q@MAR',
'A@JAN', 'A@FEB', 'A@MAR', 'A@APR', 'A@MAY', 'A@JUN',
'A@JUL', 'A@AUG', 'A@SEP', 'A@OCT', 'A@NOV', 'A@DEC',
- 'Y@JAN', 'WOM@1MON', 'WOM@2MON', 'WOM@3MON',
- 'WOM@4MON', 'WOM@1TUE', 'WOM@2TUE', 'WOM@3TUE',
- 'WOM@4TUE', 'WOM@1WED', 'WOM@2WED', 'WOM@3WED',
- 'WOM@4WED', 'WOM@1THU', 'WOM@2THU', 'WOM@3THU',
- 'WOM@4THU', 'WOM@1FRI', 'WOM@2FRI', 'WOM@3FRI',
- 'WOM@4FRI']
+ 'WOM@1MON', 'WOM@2MON', 'WOM@3MON', 'WOM@4MON',
+ 'WOM@1TUE', 'WOM@2TUE', 'WOM@3TUE', 'WOM@4TUE',
+ 'WOM@1WED', 'WOM@2WED', 'WOM@3WED', 'WOM@4WED',
+ 'WOM@1THU', 'WOM@2THU', 'WOM@3THU', 'WOM@4THU'
+ 'WOM@1FRI', 'WOM@2FRI', 'WOM@3FRI', 'WOM@4FRI']
msg = frequencies._INVALID_FREQ_ERROR
for freq in freqs:
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 5c3c90520d1c3..c5f6c00a4005a 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -422,27 +422,6 @@ def get_period_alias(offset_str):
return _offset_to_period_map.get(offset_str, None)
-_pure_alias = {
- # 'A' is equivalent to 'Y'.
- 'Y': 'A',
- 'YS': 'AS',
- 'BY': 'BA',
- 'BYS': 'BAS',
- 'Y-DEC': 'A-DEC',
- 'Y-JAN': 'A-JAN',
- 'Y-FEB': 'A-FEB',
- 'Y-MAR': 'A-MAR',
- 'Y-APR': 'A-APR',
- 'Y-MAY': 'A-MAY',
- 'Y-JUN': 'A-JUN',
- 'Y-JUL': 'A-JUL',
- 'Y-AUG': 'A-AUG',
- 'Y-SEP': 'A-SEP',
- 'Y-OCT': 'A-OCT',
- 'Y-NOV': 'A-NOV',
-}
-
-
_lite_rule_alias = {
'W': 'W-SUN',
'Q': 'Q-DEC',
@@ -739,7 +718,6 @@ def get_standard_freq(freq):
def _period_str_to_code(freqstr):
- freqstr = _pure_alias.get(freqstr, freqstr)
freqstr = _lite_rule_alias.get(freqstr, freqstr)
if freqstr not in _dont_uppercase:
| Reverts #16958. Was prematurely merged. | https://api.github.com/repos/pandas-dev/pandas/pulls/16976 | 2017-07-16T09:56:58Z | 2017-07-16T09:57:15Z | 2017-07-16T09:57:15Z | 2017-07-16T09:58:04Z |
DOC: Improving docstring of reset_index method (#16416) | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 9a79ca1d4eab1..c18aaf25bfde5 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3023,35 +3023,98 @@ def reset_index(self, level=None, drop=False, inplace=False, col_level=0,
Examples
--------
- >>> df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8]},
- ... index=pd.Index(['a', 'b', 'c', 'd'],
- ... name='idx'))
+ >>> df = pd.DataFrame([('bird', 389.0),
+ ... ('bird', 24.0),
+ ... ('mammal', 80.5),
+ ... ('mammal', np.nan)],
+ ... index=['falcon', 'parrot', 'lion', 'monkey'],
+ ... columns=('class', 'max_speed'))
+ >>> df
+ class max_speed
+ falcon bird 389.0
+ parrot bird 24.0
+ lion mammal 80.5
+ monkey mammal NaN
+
+ When we reset the index, the old index is added as a column, and a
+ new sequential index is used:
+
>>> df.reset_index()
- idx a b
- 0 a 1 5
- 1 b 2 6
- 2 c 3 7
- 3 d 4 8
-
- >>> arrays = [np.array(['bar', 'bar', 'baz', 'baz', 'foo',
- ... 'foo', 'qux', 'qux']),
- ... np.array(['one', 'two', 'one', 'two', 'one', 'two',
- ... 'one', 'two'])]
- >>> df2 = pd.DataFrame(
- ... np.random.randn(8, 4),
- ... index=pd.MultiIndex.from_arrays(arrays,
- ... names=['a', 'b']))
- >>> df2.reset_index(level='a')
- a 0 1 2 3
- b
- one bar -1.099413 0.291838 0.598198 0.162181
- two bar -0.312184 -0.119904 0.250360 0.364378
- one baz 0.713596 -0.490636 0.074967 -0.297857
- two baz 0.998397 0.524499 -2.228976 0.901155
- one foo 0.923204 0.920695 1.264488 1.476921
- two foo -1.566922 0.783278 -0.073656 0.266027
- one qux -0.230470 0.109800 -1.383409 0.048421
- two qux -0.865993 -0.865984 0.705367 -0.170446
+ index class max_speed
+ 0 falcon bird 389.0
+ 1 parrot bird 24.0
+ 2 lion mammal 80.5
+ 3 monkey mammal NaN
+
+ We can use the `drop` parameter to avoid the old index being added as
+ a column:
+
+ >>> df.reset_index(drop=True)
+ class max_speed
+ 0 bird 389.0
+ 1 bird 24.0
+ 2 mammal 80.5
+ 3 mammal NaN
+
+ You can also use `reset_index` with `MultiIndex`.
+
+ >>> index = pd.MultiIndex.from_tuples([('bird', 'falcon'),
+ ... ('bird', 'parrot'),
+ ... ('mammal', 'lion'),
+ ... ('mammal', 'monkey')],
+ ... names=['class', 'name'])
+ >>> columns = pd.MultiIndex.from_tuples([('speed', 'max'),
+ ... ('speed', 'type')])
+ >>> df = pd.DataFrame([(389.0, 'fly'),
+ ... ( 24.0, 'fly'),
+ ... ( 80.5, 'run'),
+ ... (np.nan, 'jump')],
+ ... index=index,
+ ... columns=columns)
+ >>> df
+ speed
+ max type
+ class name
+ bird falcon 389.0 fly
+ parrot 24.0 fly
+ mammal lion 80.5 run
+ monkey NaN jump
+
+ If the index has multiple levels, we can reset a subset of them:
+
+ >>> df.reset_index(level='class')
+ class speed
+ max type
+ name
+ falcon bird 389.0 fly
+ parrot bird 24.0 fly
+ lion mammal 80.5 run
+ monkey mammal NaN jump
+
+ If we are not dropping the index, by default, it is placed in the top
+ level. We can place it in another level:
+
+ >>> df.reset_index(level='class', col_level=1)
+ speed
+ class max type
+ name
+ falcon bird 389.0 fly
+ parrot bird 24.0 fly
+ lion mammal 80.5 run
+ monkey mammal NaN jump
+
+ When the index is inserted under another level, we can specify under
+ which one with the parameter `col_fill`. If we specify a nonexistent
+ level, it is created:
+
+ >>> df.reset_index(level='class', col_level=1, col_fill='species')
+ species speed
+ class max type
+ name
+ falcon bird 389.0 fly
+ parrot bird 24.0 fly
+ lion mammal 80.5 run
+ monkey mammal NaN jump
"""
inplace = validate_bool_kwarg(inplace, 'inplace')
if inplace:
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [ ] whatsnew entry
I failed to see that someone else was taking care of the reset_index documentation. But I think the examples I implemented are more clear, so submitting the PR anyway. | https://api.github.com/repos/pandas-dev/pandas/pulls/16975 | 2017-07-16T07:58:16Z | 2017-07-19T10:19:30Z | 2017-07-19T10:19:30Z | 2017-07-19T10:33:14Z |
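A quick check of the two headline behaviors the rewritten docstring documents — the default re-insertion of the old index as a column, and `drop=True` discarding it (a minimal sketch, not part of the PR itself):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([('bird', 389.0),
                   ('mammal', np.nan)],
                  index=['falcon', 'monkey'],
                  columns=('class', 'max_speed'))

# Default: the old index comes back as a column named 'index',
# and a new sequential RangeIndex is used.
out = df.reset_index()
assert list(out.columns) == ['index', 'class', 'max_speed']

# drop=True discards the old index instead of inserting it as a column.
dropped = df.reset_index(drop=True)
assert list(dropped.columns) == ['class', 'max_speed']
assert list(dropped.index) == [0, 1]
```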
CLN: some residual code removed, xref to #16761 | diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index e70db1d13e376..04563907582ee 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -255,18 +255,6 @@ def use_numexpr_cb(key):
df.info() (the behaviour in earlier versions of pandas).
"""
-pc_mpl_style_doc = """
-: bool
- Setting this to 'default' will modify the rcParams used by matplotlib
- to give plots a more pleasing visual style by default.
- Setting this to None/False restores the values to their initial value.
-"""
-
-pc_mpl_style_deprecation_warning = """
-mpl_style had been deprecated and will be removed in a future version.
-Use `matplotlib.pyplot.style.use` instead.
-"""
-
pc_memory_usage_doc = """
: bool, string or None
This specifies if the memory usage of a DataFrame should be displayed when
| xref #16761 | https://api.github.com/repos/pandas-dev/pandas/pulls/16974 | 2017-07-16T01:02:16Z | 2017-07-16T01:42:56Z | 2017-07-16T01:42:56Z | 2017-07-16T01:42:57Z |
COMPAT: rename isnull -> isna, notnull -> notna | diff --git a/doc/source/10min.rst b/doc/source/10min.rst
index 8482eef552c17..def49a641a0ff 100644
--- a/doc/source/10min.rst
+++ b/doc/source/10min.rst
@@ -373,7 +373,7 @@ To get the boolean mask where values are ``nan``
.. ipython:: python
- pd.isnull(df1)
+ pd.isna(df1)
Operations
diff --git a/doc/source/api.rst b/doc/source/api.rst
index f22591dba3a38..1a4ee68ef52c4 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -187,7 +187,9 @@ Top-level missing data
.. autosummary::
:toctree: generated/
+ isna
isnull
+ notna
notnull
Top-level conversions
@@ -272,8 +274,8 @@ Conversion
Series.astype
Series.infer_objects
Series.copy
- Series.isnull
- Series.notnull
+ Series.isna
+ Series.notna
Indexing, iteration
~~~~~~~~~~~~~~~~~~~
@@ -781,8 +783,8 @@ Conversion
DataFrame.convert_objects
DataFrame.infer_objects
DataFrame.copy
- DataFrame.isnull
- DataFrame.notnull
+ DataFrame.isna
+ DataFrame.notna
Indexing, iteration
~~~~~~~~~~~~~~~~~~~
@@ -1099,8 +1101,8 @@ Conversion
Panel.astype
Panel.copy
- Panel.isnull
- Panel.notnull
+ Panel.isna
+ Panel.notna
Getting and setting
~~~~~~~~~~~~~~~~~~~
@@ -1343,8 +1345,8 @@ Missing Values
Index.fillna
Index.dropna
- Index.isnull
- Index.notnull
+ Index.isna
+ Index.notna
Conversion
~~~~~~~~~~
diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index aae1fffb7a3b6..c8138d795b836 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -444,7 +444,7 @@ So, for instance, to reproduce :meth:`~DataFrame.combine_first` as above:
.. ipython:: python
- combiner = lambda x, y: np.where(pd.isnull(x), y, x)
+ combiner = lambda x, y: np.where(pd.isna(x), y, x)
df1.combine(df2, combiner)
.. _basics.stats:
@@ -511,7 +511,7 @@ optional ``level`` parameter which applies only if the object has a
:header: "Function", "Description"
:widths: 20, 80
- ``count``, Number of non-null observations
+ ``count``, Number of non-na observations
``sum``, Sum of values
``mean``, Mean of values
``mad``, Mean absolute deviation
@@ -541,7 +541,7 @@ will exclude NAs on Series input by default:
np.mean(df['one'].values)
``Series`` also has a method :meth:`~Series.nunique` which will return the
-number of unique non-null values:
+number of unique non-na values:
.. ipython:: python
diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst
index ef558381c5e6f..02d7920bc4a84 100644
--- a/doc/source/categorical.rst
+++ b/doc/source/categorical.rst
@@ -863,14 +863,14 @@ a code of ``-1``.
s.cat.codes
-Methods for working with missing data, e.g. :meth:`~Series.isnull`, :meth:`~Series.fillna`,
+Methods for working with missing data, e.g. :meth:`~Series.isna`, :meth:`~Series.fillna`,
:meth:`~Series.dropna`, all work normally:
.. ipython:: python
s = pd.Series(["a", "b", np.nan], dtype="category")
s
- pd.isnull(s)
+ pd.isna(s)
s.fillna("a")
Differences to R's `factor`
diff --git a/doc/source/comparison_with_sas.rst b/doc/source/comparison_with_sas.rst
index 875358521173a..33a347de0bf5b 100644
--- a/doc/source/comparison_with_sas.rst
+++ b/doc/source/comparison_with_sas.rst
@@ -444,13 +444,13 @@ For example, in SAS you could do this to filter missing values.
if value_x ^= .;
run;
-Which doesn't work in in pandas. Instead, the ``pd.isnull`` or ``pd.notnull`` functions
+Which doesn't work in pandas. Instead, the ``pd.isna`` or ``pd.notna`` functions
should be used for comparisons.
.. ipython:: python
- outer_join[pd.isnull(outer_join['value_x'])]
- outer_join[pd.notnull(outer_join['value_x'])]
+ outer_join[pd.isna(outer_join['value_x'])]
+ outer_join[pd.notna(outer_join['value_x'])]
pandas also provides a variety of methods to work with missing data - some of
which would be challenging to express in SAS. For example, there are methods to
@@ -570,7 +570,7 @@ machine's memory, but also that the operations on that data may be faster.
If out of core processing is needed, one possibility is the
`dask.dataframe <http://dask.pydata.org/en/latest/dataframe.html>`_
-library (currently in development) which
+library (currently in development) which
provides a subset of pandas functionality for an on-disk ``DataFrame``
Data Interop
@@ -578,7 +578,7 @@ Data Interop
pandas provides a :func:`read_sas` method that can read SAS data saved in
the XPORT or SAS7BDAT binary format.
-
+
.. code-block:: none
libname xportout xport 'transport-file.xpt';
@@ -613,4 +613,3 @@ to interop data between SAS and pandas is to serialize to csv.
In [9]: %time df = pd.read_csv('big.csv')
Wall time: 4.86 s
-
diff --git a/doc/source/comparison_with_sql.rst b/doc/source/comparison_with_sql.rst
index 7962e0e69faa1..2112c7de8c897 100644
--- a/doc/source/comparison_with_sql.rst
+++ b/doc/source/comparison_with_sql.rst
@@ -101,7 +101,7 @@ Just like SQL's OR and AND, multiple conditions can be passed to a DataFrame usi
# tips by parties of at least 5 diners OR bill total was more than $45
tips[(tips['size'] >= 5) | (tips['total_bill'] > 45)]
-NULL checking is done using the :meth:`~pandas.Series.notnull` and :meth:`~pandas.Series.isnull`
+NULL checking is done using the :meth:`~pandas.Series.notna` and :meth:`~pandas.Series.isna`
methods.
.. ipython:: python
@@ -121,9 +121,9 @@ where ``col2`` IS NULL with the following query:
.. ipython:: python
- frame[frame['col2'].isnull()]
+ frame[frame['col2'].isna()]
-Getting items where ``col1`` IS NOT NULL can be done with :meth:`~pandas.Series.notnull`.
+Getting items where ``col1`` IS NOT NULL can be done with :meth:`~pandas.Series.notna`.
.. code-block:: sql
@@ -133,7 +133,7 @@ Getting items where ``col1`` IS NOT NULL can be done with :meth:`~pandas.Series.
.. ipython:: python
- frame[frame['col1'].notnull()]
+ frame[frame['col1'].notna()]
GROUP BY
diff --git a/doc/source/conf.py b/doc/source/conf.py
index cb3063d59beae..6eb12324ee461 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -238,8 +238,8 @@
# https://github.com/pandas-dev/pandas/issues/16186
moved_api_pages = [
- ('pandas.core.common.isnull', 'pandas.isnull'),
- ('pandas.core.common.notnull', 'pandas.notnull'),
+ ('pandas.core.common.isnull', 'pandas.isna'),
+ ('pandas.core.common.notnull', 'pandas.notna'),
('pandas.core.reshape.get_dummies', 'pandas.get_dummies'),
('pandas.tools.merge.concat', 'pandas.concat'),
('pandas.tools.merge.merge', 'pandas.merge'),
diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst
index 11827fe2776cf..a3a90f514f142 100644
--- a/doc/source/gotchas.rst
+++ b/doc/source/gotchas.rst
@@ -202,7 +202,7 @@ For many reasons we chose the latter. After years of production use it has
proven, at least in my opinion, to be the best decision given the state of
affairs in NumPy and Python in general. The special value ``NaN``
(Not-A-Number) is used everywhere as the ``NA`` value, and there are API
-functions ``isnull`` and ``notnull`` which can be used across the dtypes to
+functions ``isna`` and ``notna`` which can be used across the dtypes to
detect NA values.
However, it comes with it a couple of trade-offs which I most certainly have
diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index 37930775885e3..e40b7d460fef8 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -36,7 +36,7 @@ When / why does data become missing?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some might quibble over our usage of *missing*. By "missing" we simply mean
-**null** or "not present for whatever reason". Many data sets simply arrive with
+**na** or "not present for whatever reason". Many data sets simply arrive with
missing data, either because it exists and was not collected or it never
existed. For example, in a collection of financial time series, some of the time
series might start on different dates. Thus, values prior to the start date
@@ -63,27 +63,27 @@ to handling missing data. While ``NaN`` is the default missing value marker for
reasons of computational speed and convenience, we need to be able to easily
detect this value with data of different types: floating point, integer,
boolean, and general object. In many cases, however, the Python ``None`` will
-arise and we wish to also consider that "missing" or "null".
+arise and we wish to also consider that "missing" or "na".
.. note::
Prior to version v0.10.0 ``inf`` and ``-inf`` were also
- considered to be "null" in computations. This is no longer the case by
- default; use the ``mode.use_inf_as_null`` option to recover it.
+ considered to be "na" in computations. This is no longer the case by
+ default; use the ``mode.use_inf_as_na`` option to recover it.
-.. _missing.isnull:
+.. _missing.isna:
To make detecting missing values easier (and across different array dtypes),
-pandas provides the :func:`~pandas.core.common.isnull` and
-:func:`~pandas.core.common.notnull` functions, which are also methods on
+pandas provides the :func:`isna` and
+:func:`notna` functions, which are also methods on
``Series`` and ``DataFrame`` objects:
.. ipython:: python
df2['one']
- pd.isnull(df2['one'])
- df2['four'].notnull()
- df2.isnull()
+ pd.isna(df2['one'])
+ df2['four'].notna()
+ df2.isna()
.. warning::
@@ -206,7 +206,7 @@ with missing data.
Filling missing values: fillna
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The **fillna** function can "fill in" NA values with non-null data in a couple
+The **fillna** function can "fill in" NA values with non-na data in a couple
of ways, which we illustrate:
**Replace NA with a scalar value**
@@ -220,7 +220,7 @@ of ways, which we illustrate:
**Fill gaps forward or backward**
Using the same filling arguments as :ref:`reindexing <basics.reindexing>`, we
-can propagate non-null values forward or backward:
+can propagate non-na values forward or backward:
.. ipython:: python
@@ -288,7 +288,7 @@ a Series in this case.
.. ipython:: python
- dff.where(pd.notnull(dff), dff.mean(), axis='columns')
+ dff.where(pd.notna(dff), dff.mean(), axis='columns')
.. _missing_data.dropna:
diff --git a/doc/source/options.rst b/doc/source/options.rst
index c585da64efece..83b08acac5720 100644
--- a/doc/source/options.rst
+++ b/doc/source/options.rst
@@ -419,10 +419,10 @@ mode.chained_assignment warn Raise an exception, warn, or no
assignment, The default is warn
mode.sim_interactive False Whether to simulate interactive mode
for purposes of testing.
-mode.use_inf_as_null False True means treat None, NaN, -INF,
- INF as null (old way), False means
+mode.use_inf_as_na False True means treat None, NaN, -INF,
+ INF as NA (old way), False means
None and NaN are null, but INF, -INF
- are not null (new way).
+ are not NA (new way).
compute.use_bottleneck True Use the bottleneck library to accelerate
computation if it is installed.
compute.use_numexpr True Use the numexpr library to accelerate
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index e6764178d1f25..5a5ea827e74ad 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -128,8 +128,6 @@ the target. Now, a ``ValueError`` will be raised when such an input is passed in
...
ValueError: Cannot operate inplace if there is no assignment
-.. _whatsnew_0210.dtype_conversions:
-
Dtype Conversions
^^^^^^^^^^^^^^^^^
@@ -187,6 +185,20 @@ Dtype Conversions
- Inconsistent behavior in ``.where()`` with datetimelikes which would raise rather than coerce to ``object`` (:issue:`16402`)
- Bug in assignment against ``int64`` data with ``np.ndarray`` with ``float64`` dtype may keep ``int64`` dtype (:issue:`14001`)
+.. _whatsnew_0210.api.na_changes:
+
+NA naming Changes
+^^^^^^^^^^^^^^^^^
+
+In order to promote more consistency among the pandas API, we have added additional top-level
+functions :func:`isna` and :func:`notna` that are aliases for :func:`isnull` and :func:`notnull`.
+The naming scheme is now more consistent with methods like ``.dropna()`` and ``.fillna()``. Furthermore,
+in all cases where ``.isnull()`` and ``.notnull()`` methods are defined, these have additional methods
+named ``.isna()`` and ``.notna()``; these are included for the classes ``Categorical``,
+``Index``, ``Series``, and ``DataFrame``. (:issue:`15001`).
+
+The configuration option ``mode.use_inf_as_null`` is deprecated, and ``mode.use_inf_as_na`` is added as a replacement.
+
.. _whatsnew_0210.api:
Other API Changes
diff --git a/pandas/_libs/algos_rank_helper.pxi.in b/pandas/_libs/algos_rank_helper.pxi.in
index aafffbf60f638..0945aec638b1d 100644
--- a/pandas/_libs/algos_rank_helper.pxi.in
+++ b/pandas/_libs/algos_rank_helper.pxi.in
@@ -83,7 +83,7 @@ def rank_1d_{{dtype}}(object in_arr, ties_method='average', ascending=True,
nan_value = {{neg_nan_value}}
{{if dtype == 'object'}}
- mask = lib.isnullobj(values)
+ mask = lib.isnaobj(values)
{{elif dtype == 'float64'}}
mask = np.isnan(values)
{{elif dtype == 'int64'}}
@@ -259,7 +259,7 @@ def rank_2d_{{dtype}}(object in_arr, axis=0, ties_method='average',
nan_value = {{neg_nan_value}}
{{if dtype == 'object'}}
- mask = lib.isnullobj2d(values)
+ mask = lib.isnaobj2d(values)
{{elif dtype == 'float64'}}
mask = np.isnan(values)
{{elif dtype == 'int64'}}
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index f6e574b66a828..0458d4ae9f3de 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -286,7 +286,7 @@ def item_from_zerodim(object val):
@cython.wraparound(False)
@cython.boundscheck(False)
-def isnullobj(ndarray arr):
+def isnaobj(ndarray arr):
cdef Py_ssize_t i, n
cdef object val
cdef ndarray[uint8_t] result
@@ -303,7 +303,7 @@ def isnullobj(ndarray arr):
@cython.wraparound(False)
@cython.boundscheck(False)
-def isnullobj_old(ndarray arr):
+def isnaobj_old(ndarray arr):
cdef Py_ssize_t i, n
cdef object val
cdef ndarray[uint8_t] result
@@ -320,7 +320,7 @@ def isnullobj_old(ndarray arr):
@cython.wraparound(False)
@cython.boundscheck(False)
-def isnullobj2d(ndarray arr):
+def isnaobj2d(ndarray arr):
cdef Py_ssize_t i, j, n, m
cdef object val
cdef ndarray[uint8_t, ndim=2] result
@@ -339,7 +339,7 @@ def isnullobj2d(ndarray arr):
@cython.wraparound(False)
@cython.boundscheck(False)
-def isnullobj2d_old(ndarray arr):
+def isnaobj2d_old(ndarray arr):
cdef Py_ssize_t i, j, n, m
cdef object val
cdef ndarray[uint8_t, ndim=2] result
diff --git a/pandas/_libs/testing.pyx b/pandas/_libs/testing.pyx
index 9495af87f5c31..ab7f3c3de2131 100644
--- a/pandas/_libs/testing.pyx
+++ b/pandas/_libs/testing.pyx
@@ -1,7 +1,7 @@
import numpy as np
from pandas import compat
-from pandas.core.dtypes.missing import isnull, array_equivalent
+from pandas.core.dtypes.missing import isna, array_equivalent
from pandas.core.dtypes.common import is_dtype_equal
cdef NUMERIC_TYPES = (
@@ -182,7 +182,7 @@ cpdef assert_almost_equal(a, b,
if a == b:
# object comparison
return True
- if isnull(a) and isnull(b):
+ if isna(a) and isna(b):
# nan / None comparison
return True
if is_comparable_as_number(a) and is_comparable_as_number(b):
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 133e9d7dca18f..4ca658b35a276 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -27,7 +27,7 @@
_ensure_float64, _ensure_uint64,
_ensure_int64)
from pandas.compat.numpy import _np_version_under1p10
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna
from pandas.core import common as com
from pandas._libs import algos, lib, hashtable as htable
@@ -427,7 +427,7 @@ def isin(comps, values):
try:
values = values.astype('float64', copy=False)
comps = comps.astype('float64', copy=False)
- checknull = isnull(values).any()
+ checknull = isna(values).any()
f = lambda x, y: htable.ismember_float64(x, y, checknull)
except (TypeError, ValueError):
values = values.astype(object)
@@ -529,7 +529,7 @@ def value_counts(values, sort=True, ascending=False, normalize=False,
# count, remove nulls (from the index), and but the bins
result = ii.value_counts(dropna=dropna)
- result = result[result.index.notnull()]
+ result = result[result.index.notna()]
result.index = result.index.astype('interval')
result = result.sort_index()
@@ -597,9 +597,9 @@ def _value_counts_arraylike(values, dropna):
f = getattr(htable, "value_count_{dtype}".format(dtype=ndtype))
keys, counts = f(values, dropna)
- mask = isnull(values)
+ mask = isna(values)
if not dropna and mask.any():
- if not isnull(keys).any():
+ if not isna(keys).any():
keys = np.insert(keys, 0, np.NaN)
counts = np.insert(counts, 0, mask.sum())
@@ -860,7 +860,7 @@ def quantile(x, q, interpolation_method='fraction'):
"""
x = np.asarray(x)
- mask = isnull(x)
+ mask = isna(x)
x = x[~mask]
diff --git a/pandas/core/api.py b/pandas/core/api.py
index 265fb4004d997..086fedd7d7cf8 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -5,7 +5,7 @@
import numpy as np
from pandas.core.algorithms import factorize, unique, value_counts
-from pandas.core.dtypes.missing import isnull, notnull
+from pandas.core.dtypes.missing import isna, isnull, notna, notnull
from pandas.core.categorical import Categorical
from pandas.core.groupby import Grouper
from pandas.io.formats.format import set_eng_float_format
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 97c4c8626dcbb..eb785b18bd02b 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -6,7 +6,7 @@
from pandas.compat import builtins
import numpy as np
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna
from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries, ABCIndexClass
from pandas.core.dtypes.common import is_object_dtype, is_list_like, is_scalar
from pandas.util._validators import validate_bool_kwarg
@@ -894,7 +894,7 @@ def argmin(self, axis=None):
@cache_readonly
def hasnans(self):
""" return if I have any nans; enables various perf speedups """
- return isnull(self).any()
+ return isna(self).any()
def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None,
filter_type=None, **kwds):
@@ -990,7 +990,7 @@ def nunique(self, dropna=True):
"""
uniqs = self.unique()
n = len(uniqs)
- if dropna and isnull(uniqs).any():
+ if dropna and isna(uniqs).any():
n -= 1
return n
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index afae11163b0dc..1392ad2f011db 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -10,7 +10,7 @@
from pandas.core.dtypes.generic import (
ABCSeries, ABCIndexClass, ABCCategoricalIndex)
-from pandas.core.dtypes.missing import isnull, notnull
+from pandas.core.dtypes.missing import isna, notna
from pandas.core.dtypes.cast import (
maybe_infer_to_datetimelike,
coerce_indexer_dtype)
@@ -34,8 +34,8 @@
import pandas.core.common as com
from pandas.core.missing import interpolate_2d
from pandas.compat.numpy import function as nv
-from pandas.util._decorators import (Appender, cache_readonly,
- deprecate_kwarg, Substitution)
+from pandas.util._decorators import (
+ Appender, cache_readonly, deprecate_kwarg, Substitution)
from pandas.io.formats.terminal import get_terminal_size
from pandas.util._validators import validate_bool_kwarg
@@ -290,7 +290,7 @@ def __init__(self, values, categories=None, ordered=False, fastpath=False):
# On list with NaNs, int values will be converted to float. Use
# "object" dtype to prevent this. In the end objects will be
# casted to int/... in the category assignment step.
- dtype = 'object' if isnull(values).any() else None
+ dtype = 'object' if isna(values).any() else None
values = _sanitize_array(values, None, dtype=dtype)
if categories is None:
@@ -561,9 +561,9 @@ def _validate_categories(cls, categories, fastpath=False):
categories = _convert_to_list_like(categories)
# On categories with NaNs, int values would be converted to
# float. Use "object" dtype to prevent this.
- if isnull(categories).any():
+ if isna(categories).any():
without_na = np.array([x for x in categories
- if notnull(x)])
+ if notna(x)])
with_na = np.array(categories)
if with_na.dtype != without_na.dtype:
dtype = "object"
@@ -941,9 +941,9 @@ def remove_categories(self, removals, inplace=False):
new_categories = [c for c in self._categories if c not in removal_set]
# GH 10156
- if any(isnull(removals)):
- not_included = [x for x in not_included if notnull(x)]
- new_categories = [x for x in new_categories if notnull(x)]
+ if any(isna(removals)):
+ not_included = [x for x in not_included if notna(x)]
+ new_categories = [x for x in new_categories if notna(x)]
if len(not_included) != 0:
raise ValueError("removals must all be in old categories: %s" %
@@ -1153,7 +1153,7 @@ def searchsorted(self, value, side='left', sorter=None):
return self.codes.searchsorted(values_as_codes, side=side,
sorter=sorter)
- def isnull(self):
+ def isna(self):
"""
Detect missing values
@@ -1165,8 +1165,9 @@ def isnull(self):
See also
--------
- isnull : pandas version
- Categorical.notnull : boolean inverse of Categorical.isnull
+ isna : top-level isna
+ isnull : alias of isna
+ Categorical.notna : boolean inverse of Categorical.isna
"""
@@ -1175,14 +1176,15 @@ def isnull(self):
# String/object and float categories can hold np.nan
if self.categories.dtype.kind in ['S', 'O', 'f']:
if np.nan in self.categories:
- nan_pos = np.where(isnull(self.categories))[0]
+ nan_pos = np.where(isna(self.categories))[0]
# we only have one NA in categories
ret = np.logical_or(ret, self._codes == nan_pos)
return ret
+ isnull = isna
- def notnull(self):
+ def notna(self):
"""
- Reverse of isnull
+ Inverse of isna
Both missing values (-1 in .codes) and NA as a category are detected as
null.
@@ -1193,11 +1195,13 @@ def notnull(self):
See also
--------
- notnull : pandas version
- Categorical.isnull : boolean inverse of Categorical.notnull
+ notna : top-level notna
+ notnull : alias of notna
+ Categorical.isna : boolean inverse of Categorical.notna
"""
- return ~self.isnull()
+ return ~self.isna()
+ notnull = notna
def put(self, *args, **kwargs):
"""
@@ -1217,8 +1221,8 @@ def dropna(self):
-------
valid : Categorical
"""
- result = self[self.notnull()]
- if isnull(result.categories).any():
+ result = self[self.notna()]
+ if isna(result.categories).any():
result = result.remove_categories([np.nan])
return result
@@ -1243,12 +1247,10 @@ def value_counts(self, dropna=True):
"""
from numpy import bincount
- from pandas.core.dtypes.missing import isnull
- from pandas.core.series import Series
- from pandas.core.index import CategoricalIndex
+ from pandas import isna, Series, CategoricalIndex
obj = (self.remove_categories([np.nan]) if dropna and
- isnull(self.categories).any() else self)
+ isna(self.categories).any() else self)
code, cat = obj._codes, obj.categories
ncat, mask = len(cat), 0 <= code
ix, clean = np.arange(ncat), mask.all()
@@ -1520,7 +1522,7 @@ def fillna(self, value=None, method=None, limit=None):
if self.categories.dtype.kind in ['S', 'O', 'f']:
if np.nan in self.categories:
values = values.copy()
- nan_pos = np.where(isnull(self.categories))[0]
+ nan_pos = np.where(isna(self.categories))[0]
# we only have one NA in categories
values[values == nan_pos] = -1
@@ -1534,13 +1536,13 @@ def fillna(self, value=None, method=None, limit=None):
else:
- if not isnull(value) and value not in self.categories:
+ if not isna(value) and value not in self.categories:
raise ValueError("fill value must be in categories")
mask = values == -1
if mask.any():
values = values.copy()
- if isnull(value):
+ if isna(value):
values[mask] = -1
else:
values[mask] = self.categories.get_loc(value)
@@ -1556,7 +1558,7 @@ def take_nd(self, indexer, allow_fill=True, fill_value=None):
# filling must always be None/nan here
# but is passed thru internally
- assert isnull(fill_value)
+ assert isna(fill_value)
codes = take_1d(self._codes, indexer, allow_fill=True, fill_value=-1)
result = self._constructor(codes, categories=self.categories,
@@ -1720,7 +1722,7 @@ def __setitem__(self, key, value):
# no assignments of values not in categories, but it's always ok to set
# something to np.nan
- if len(to_add) and not isnull(to_add).all():
+ if len(to_add) and not isna(to_add).all():
raise ValueError("Cannot setitem on a Categorical with a new "
"category, set the categories first")
@@ -1763,8 +1765,8 @@ def __setitem__(self, key, value):
# https://github.com/pandas-dev/pandas/issues/7820
# float categories do currently return -1 for np.nan, even if np.nan is
# included in the index -> "repair" this here
- if isnull(rvalue).any() and isnull(self.categories).any():
- nan_pos = np.where(isnull(self.categories))[0]
+ if isna(rvalue).any() and isna(self.categories).any():
+ nan_pos = np.where(isna(self.categories))[0]
lindexer[lindexer == -1] = nan_pos
lindexer = self._maybe_coerce_indexer(lindexer)
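The `Categorical` changes above rename `isnull`/`notnull` to `isna`/`notna` while keeping the old names working via class-level aliases (`isnull = isna`, `notnull = notna`). A minimal sketch of that aliasing pattern, using a hypothetical stand-in class rather than pandas itself:

```python
# Hypothetical sketch of the class-level aliasing pattern in the diff:
# define the new method once, then bind the old name to the same
# function object so both names stay in sync.
class Categorical:
    def __init__(self, codes):
        self._codes = codes  # -1 marks a missing value, as in pandas

    def isna(self):
        """Detect missing values (codes of -1)."""
        return [code == -1 for code in self._codes]

    isnull = isna  # backwards-compatible alias

    def notna(self):
        """Inverse of isna."""
        return [not missing for missing in self.isna()]

    notnull = notna  # backwards-compatible alias


cat = Categorical([0, -1, 2])
assert cat.isna() == cat.isnull() == [False, True, False]
assert cat.notna() == cat.notnull() == [True, False, True]
```

Because the alias is the same function object, any later fix to `isna` is automatically picked up by `isnull`.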
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 3b09e68c6433a..44cb36b8a3207 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -18,7 +18,7 @@
from pandas.core.dtypes.generic import ABCSeries
from pandas.core.dtypes.common import _NS_DTYPE
from pandas.core.dtypes.inference import _iterable_not_string
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna, isnull, notnull # noqa
from pandas.api import types
from pandas.core.dtypes import common
@@ -187,7 +187,7 @@ def is_bool_indexer(key):
key = np.asarray(_values_from_object(key))
if not lib.is_bool_array(key):
- if isnull(key).any():
+ if isna(key).any():
raise ValueError('cannot index with vector containing '
'NA / NaN values')
return False
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index 06ce811703a8c..76e30a6fb9d52 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -392,9 +392,14 @@ def table_schema_cb(key):
cf.register_option('sim_interactive', False, tc_sim_interactive_doc)
use_inf_as_null_doc = """
+use_inf_as_null has been deprecated and will be removed in a future version.
+Use `use_inf_as_na` instead.
+"""
+
+use_inf_as_na_doc = """
: boolean
- True means treat None, NaN, INF, -INF as null (old way),
- False means None and NaN are null, but INF, -INF are not null
+ True means treat None, NaN, INF, -INF as na (old way),
+    False means None and NaN are na, but INF, -INF are not na
(new way).
"""
@@ -402,14 +407,17 @@ def table_schema_cb(key):
# or we'll hit circular deps.
-def use_inf_as_null_cb(key):
- from pandas.core.dtypes.missing import _use_inf_as_null
- _use_inf_as_null(key)
+def use_inf_as_na_cb(key):
+ from pandas.core.dtypes.missing import _use_inf_as_na
+ _use_inf_as_na(key)
-with cf.config_prefix('mode'):
- cf.register_option('use_inf_as_null', False, use_inf_as_null_doc,
- cb=use_inf_as_null_cb)
+cf.register_option('mode.use_inf_as_na', False, use_inf_as_na_doc,
+ cb=use_inf_as_na_cb)
+
+cf.deprecate_option('mode.use_inf_as_null', msg=use_inf_as_null_doc,
+ rkey='mode.use_inf_as_na')
+
# user warnings
chained_assignment = """
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 22d98a89d68d6..723e4f70da4e9 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -27,7 +27,7 @@
from .dtypes import ExtensionDtype, DatetimeTZDtype, PeriodDtype
from .generic import (ABCDatetimeIndex, ABCPeriodIndex,
ABCSeries)
-from .missing import isnull, notnull
+from .missing import isna, notna
from .inference import is_list_like
_int8_max = np.iinfo(np.int8).max
@@ -121,7 +121,7 @@ def trans(x): # noqa
arr = np.array([r[0]])
# if we have any nulls, then we are done
- if (isnull(arr).any() or
+ if (isna(arr).any() or
not np.allclose(arr, trans(arr).astype(dtype), rtol=0)):
return result
@@ -131,7 +131,7 @@ def trans(x): # noqa
return result
if (issubclass(result.dtype.type, (np.object_, np.number)) and
- notnull(result).all()):
+ notna(result).all()):
new_result = trans(result).astype(dtype)
try:
if np.allclose(new_result, result, rtol=0):
@@ -191,7 +191,7 @@ def maybe_upcast_putmask(result, mask, other):
# integer or integer array -> date-like array
if is_datetimelike(result.dtype):
if is_scalar(other):
- if isnull(other):
+ if isna(other):
other = result.dtype.type('nat')
elif is_integer(other):
other = np.array(other, dtype=result.dtype)
@@ -232,13 +232,13 @@ def changeit():
# and its nan and we are changing some values
if (is_scalar(other) or
(isinstance(other, np.ndarray) and other.ndim < 1)):
- if isnull(other):
+ if isna(other):
return changeit()
# we have an ndarray and the masking has nans in it
else:
- if isnull(other[mask]).any():
+ if isna(other[mask]).any():
return changeit()
try:
@@ -268,7 +268,7 @@ def maybe_promote(dtype, fill_value=np.nan):
# for now: refuse to upcast datetime64
# (this is because datetime64 will not implicitly upconvert
# to object correctly as of numpy 1.6.1)
- if isnull(fill_value):
+ if isna(fill_value):
fill_value = iNaT
else:
if issubclass(dtype.type, np.datetime64):
@@ -287,7 +287,7 @@ def maybe_promote(dtype, fill_value=np.nan):
else:
fill_value = iNaT
elif is_datetimetz(dtype):
- if isnull(fill_value):
+ if isna(fill_value):
fill_value = iNaT
elif is_float(fill_value):
if issubclass(dtype.type, np.bool_):
@@ -580,7 +580,7 @@ def coerce_to_dtypes(result, dtypes):
def conv(r, dtype):
try:
- if isnull(r):
+ if isna(r):
pass
elif dtype == _NS_DTYPE:
r = tslib.Timestamp(r)
@@ -635,7 +635,7 @@ def astype_nansafe(arr, dtype, copy=True):
# allow frequency conversions
if dtype.kind == 'm':
- mask = isnull(arr)
+ mask = isna(arr)
result = arr.astype(dtype).astype(np.float64)
result[mask] = np.nan
return result
@@ -687,7 +687,7 @@ def maybe_convert_objects(values, convert_dates=True, convert_numeric=True,
values, 'M8[ns]', errors='coerce')
# if we are all nans then leave me alone
- if not isnull(new_values).all():
+ if not isna(new_values).all():
values = new_values
else:
@@ -702,7 +702,7 @@ def maybe_convert_objects(values, convert_dates=True, convert_numeric=True,
new_values = to_timedelta(values, errors='coerce')
# if we are all nans then leave me alone
- if not isnull(new_values).all():
+ if not isna(new_values).all():
values = new_values
else:
@@ -717,7 +717,7 @@ def maybe_convert_objects(values, convert_dates=True, convert_numeric=True,
coerce_numeric=True)
# if we are all nans then leave me alone
- if not isnull(new_values).all():
+ if not isna(new_values).all():
values = new_values
except:
@@ -779,7 +779,7 @@ def soft_convert_objects(values, datetime=True, numeric=True, timedelta=True,
converted = lib.maybe_convert_numeric(values, set(),
coerce_numeric=True)
# If all NaNs, then do not-alter
- values = converted if not isnull(converted).all() else values
+ values = converted if not isna(converted).all() else values
values = values.copy() if copy else values
except:
pass
@@ -881,7 +881,7 @@ def try_timedelta(v):
elif inferred_type == 'nat':
# if all NaT, return as datetime
- if isnull(v).all():
+ if isna(v).all():
value = try_datetime(v)
else:
@@ -932,7 +932,7 @@ def maybe_cast_to_datetime(value, dtype, errors='raise'):
# our NaT doesn't support tz's
# this will coerce to DatetimeIndex with
# a matching dtype below
- if is_scalar(value) and isnull(value):
+ if is_scalar(value) and isna(value):
value = [value]
elif is_timedelta64 and not is_dtype_equal(dtype, _TD_DTYPE):
@@ -946,7 +946,7 @@ def maybe_cast_to_datetime(value, dtype, errors='raise'):
"dtype [%s]" % dtype)
if is_scalar(value):
- if value == iNaT or isnull(value):
+ if value == iNaT or isna(value):
value = iNaT
else:
value = np.array(value, copy=False)
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index 9913923cb7807..101612893cb02 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -23,7 +23,7 @@
from .inference import is_list_like
-def isnull(obj):
+def isna(obj):
"""Detect missing values (NaN in numeric arrays, None/NaN in object arrays)
Parameters
@@ -33,34 +33,38 @@ def isnull(obj):
Returns
-------
- isnulled : array-like of bool or bool
+ isna : array-like of bool or bool
Array or bool indicating whether an object is null or if an array is
given which of the element is null.
See also
--------
- pandas.notnull: boolean inverse of pandas.isnull
+ pandas.notna: boolean inverse of pandas.isna
+ pandas.isnull: alias of isna
"""
- return _isnull(obj)
+ return _isna(obj)
-def _isnull_new(obj):
+isnull = isna
+
+
+def _isna_new(obj):
if is_scalar(obj):
return lib.checknull(obj)
# hack (for now) because MI registers as ndarray
elif isinstance(obj, ABCMultiIndex):
- raise NotImplementedError("isnull is not defined for MultiIndex")
+ raise NotImplementedError("isna is not defined for MultiIndex")
elif isinstance(obj, (ABCSeries, np.ndarray, ABCIndexClass)):
- return _isnull_ndarraylike(obj)
+ return _isna_ndarraylike(obj)
elif isinstance(obj, ABCGeneric):
- return obj._constructor(obj._data.isnull(func=isnull))
+ return obj._constructor(obj._data.isna(func=isna))
elif isinstance(obj, list) or hasattr(obj, '__array__'):
- return _isnull_ndarraylike(np.asarray(obj))
+ return _isna_ndarraylike(np.asarray(obj))
else:
return obj is None
-def _isnull_old(obj):
+def _isna_old(obj):
"""Detect missing values. Treat None, NaN, INF, -INF as null.
Parameters
@@ -75,22 +79,22 @@ def _isnull_old(obj):
return lib.checknull_old(obj)
# hack (for now) because MI registers as ndarray
elif isinstance(obj, ABCMultiIndex):
- raise NotImplementedError("isnull is not defined for MultiIndex")
+ raise NotImplementedError("isna is not defined for MultiIndex")
elif isinstance(obj, (ABCSeries, np.ndarray, ABCIndexClass)):
- return _isnull_ndarraylike_old(obj)
+ return _isna_ndarraylike_old(obj)
elif isinstance(obj, ABCGeneric):
- return obj._constructor(obj._data.isnull(func=_isnull_old))
+ return obj._constructor(obj._data.isna(func=_isna_old))
elif isinstance(obj, list) or hasattr(obj, '__array__'):
- return _isnull_ndarraylike_old(np.asarray(obj))
+ return _isna_ndarraylike_old(np.asarray(obj))
else:
return obj is None
-_isnull = _isnull_new
+_isna = _isna_new
-def _use_inf_as_null(key):
- """Option change callback for null/inf behaviour
+def _use_inf_as_na(key):
+ """Option change callback for na/inf behaviour
Choose which replacement for numpy.isnan / -numpy.isfinite is used.
Parameters
@@ -111,12 +115,12 @@ def _use_inf_as_null(key):
from pandas.core.config import get_option
flag = get_option(key)
if flag:
- globals()['_isnull'] = _isnull_old
+ globals()['_isna'] = _isna_old
else:
- globals()['_isnull'] = _isnull_new
+ globals()['_isna'] = _isna_new
-def _isnull_ndarraylike(obj):
+def _isna_ndarraylike(obj):
values = getattr(obj, 'values', obj)
dtype = values.dtype
@@ -126,10 +130,10 @@ def _isnull_ndarraylike(obj):
from pandas import Categorical
if not isinstance(values, Categorical):
values = values.values
- result = values.isnull()
+ result = values.isna()
elif is_interval_dtype(values):
from pandas import IntervalIndex
- result = IntervalIndex(obj).isnull()
+ result = IntervalIndex(obj).isna()
else:
# Working around NumPy ticket 1542
@@ -139,7 +143,7 @@ def _isnull_ndarraylike(obj):
result = np.zeros(values.shape, dtype=bool)
else:
result = np.empty(shape, dtype=bool)
- vec = lib.isnullobj(values.ravel())
+ vec = lib.isnaobj(values.ravel())
result[...] = vec.reshape(shape)
elif needs_i8_conversion(obj):
@@ -156,7 +160,7 @@ def _isnull_ndarraylike(obj):
return result
-def _isnull_ndarraylike_old(obj):
+def _isna_ndarraylike_old(obj):
values = getattr(obj, 'values', obj)
dtype = values.dtype
@@ -168,7 +172,7 @@ def _isnull_ndarraylike_old(obj):
result = np.zeros(values.shape, dtype=bool)
else:
result = np.empty(shape, dtype=bool)
- vec = lib.isnullobj_old(values.ravel())
+ vec = lib.isnaobj_old(values.ravel())
result[:] = vec.reshape(shape)
elif is_datetime64_dtype(dtype):
@@ -185,7 +189,7 @@ def _isnull_ndarraylike_old(obj):
return result
-def notnull(obj):
+def notna(obj):
"""Replacement for numpy.isfinite / -numpy.isnan which is suitable for use
on object arrays.
@@ -196,20 +200,24 @@ def notnull(obj):
Returns
-------
- isnulled : array-like of bool or bool
+    notna : array-like of bool or bool
Array or bool indicating whether an object is *not* null or if an array
is given which of the element is *not* null.
See also
--------
- pandas.isnull : boolean inverse of pandas.notnull
+ pandas.isna : boolean inverse of pandas.notna
+ pandas.notnull : alias of notna
"""
- res = isnull(obj)
+ res = isna(obj)
if is_scalar(res):
return not res
return ~res
+notnull = notna
+
+
def is_null_datelike_scalar(other):
""" test whether the object is a null datelike, e.g. Nat
but guard against passing a non-scalar """
@@ -222,11 +230,11 @@ def is_null_datelike_scalar(other):
return other.view('i8') == iNaT
elif is_integer(other) and other == iNaT:
return True
- return isnull(other)
+ return isna(other)
return False
-def _is_na_compat(arr, fill_value=np.nan):
+def _isna_compat(arr, fill_value=np.nan):
"""
Parameters
----------
@@ -238,7 +246,7 @@ def _is_na_compat(arr, fill_value=np.nan):
True if we can fill using this fill_value
"""
dtype = arr.dtype
- if isnull(fill_value):
+ if isna(fill_value):
return not (is_bool_dtype(dtype) or
is_integer_dtype(dtype))
return True
@@ -286,7 +294,7 @@ def array_equivalent(left, right, strict_nan=False):
if is_string_dtype(left) or is_string_dtype(right):
if not strict_nan:
- # isnull considers NaN and None to be equivalent.
+ # isna considers NaN and None to be equivalent.
return lib.array_equivalent_object(
_ensure_object(left.ravel()), _ensure_object(right.ravel()))
@@ -305,7 +313,7 @@ def array_equivalent(left, right, strict_nan=False):
# NaNs can occur in float and complex arrays.
if is_float_dtype(left) or is_complex_dtype(left):
- return ((left == right) | (isnull(left) & isnull(right))).all()
+ return ((left == right) | (isna(left) & isna(right))).all()
# numpy will will not allow this type of datetimelike vs integer comparison
elif is_datetimelike_v_numeric(left, right):
@@ -365,7 +373,7 @@ def _maybe_fill(arr, fill_value=np.nan):
"""
if we have a compatiable fill_value and arr dtype, then fill
"""
- if _is_na_compat(arr, fill_value):
+ if _isna_compat(arr, fill_value):
arr.fill(fill_value)
return arr
@@ -400,4 +408,4 @@ def remove_na_arraylike(arr):
"""
Return array-like containing only true/non-NaN values, possibly empty.
"""
- return arr[notnull(lib.values_from_object(arr))]
+ return arr[notna(lib.values_from_object(arr))]
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 2ceb62dc7a349..6c72fa648559a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -58,7 +58,7 @@
is_iterator,
is_sequence,
is_named_tuple)
-from pandas.core.dtypes.missing import isnull, notnull
+from pandas.core.dtypes.missing import isna, notna
from pandas.core.common import (_try_sort,
@@ -3205,6 +3205,22 @@ def _maybe_casted_values(index, labels=None):
# ----------------------------------------------------------------------
# Reindex-based selection methods
+ @Appender(_shared_docs['isna'] % _shared_doc_kwargs)
+ def isna(self):
+ return super(DataFrame, self).isna()
+
+ @Appender(_shared_docs['isna'] % _shared_doc_kwargs)
+ def isnull(self):
+ return super(DataFrame, self).isnull()
+
+ @Appender(_shared_docs['isna'] % _shared_doc_kwargs)
+ def notna(self):
+ return super(DataFrame, self).notna()
+
+ @Appender(_shared_docs['notna'] % _shared_doc_kwargs)
+ def notnull(self):
+ return super(DataFrame, self).notnull()
+
def dropna(self, axis=0, how='any', thresh=None, subset=None,
inplace=False):
"""
@@ -3689,8 +3705,8 @@ def _combine_frame(self, other, func, fill_value=None, level=None,
def _arith_op(left, right):
if fill_value is not None:
- left_mask = isnull(left)
- right_mask = isnull(right)
+ left_mask = isna(left)
+ right_mask = isna(right)
left = left.copy()
right = right.copy()
@@ -3874,8 +3890,8 @@ def combine(self, other, func, fill_value=None, overwrite=True):
this_dtype = series.dtype
other_dtype = otherSeries.dtype
- this_mask = isnull(series)
- other_mask = isnull(otherSeries)
+ this_mask = isna(series)
+ other_mask = isna(otherSeries)
# don't overwrite columns unecessarily
# DO propagate if this column is not in the intersection
@@ -3954,11 +3970,11 @@ def combiner(x, y, needs_i8_conversion=False):
x_values = x.values if hasattr(x, 'values') else x
y_values = y.values if hasattr(y, 'values') else y
if needs_i8_conversion:
- mask = isnull(x)
+ mask = isna(x)
x_values = x_values.view('i8')
y_values = y_values.view('i8')
else:
- mask = isnull(x_values)
+ mask = isna(x_values)
return expressions.where(mask, y_values, x_values,
raise_on_error=True)
@@ -3998,18 +4014,18 @@ def update(self, other, join='left', overwrite=True, filter_func=None,
that = other[col].values
if filter_func is not None:
with np.errstate(all='ignore'):
- mask = ~filter_func(this) | isnull(that)
+ mask = ~filter_func(this) | isna(that)
else:
if raise_conflict:
- mask_this = notnull(that)
- mask_that = notnull(this)
+ mask_this = notna(that)
+ mask_that = notna(this)
if any(mask_this & mask_that):
raise ValueError("Data overlaps.")
if overwrite:
- mask = isnull(that)
+ mask = isna(that)
else:
- mask = notnull(this)
+ mask = notna(this)
# don't overwrite columns unecessarily
if mask.all():
@@ -5181,7 +5197,7 @@ def cov(self, min_periods=None):
idx = cols.copy()
mat = numeric_df.values
- if notnull(mat).all():
+ if notna(mat).all():
if min_periods is not None and min_periods > len(mat):
baseCov = np.empty((mat.shape[1], mat.shape[1]))
baseCov.fill(np.nan)
@@ -5281,9 +5297,9 @@ def count(self, axis=0, level=None, numeric_only=False):
result = Series(0, index=frame._get_agg_axis(axis))
else:
if frame._is_mixed_type:
- result = notnull(frame).sum(axis=axis)
+ result = notna(frame).sum(axis=axis)
else:
- counts = notnull(frame.values).sum(axis=axis)
+ counts = notna(frame.values).sum(axis=axis)
result = Series(counts, index=frame._get_agg_axis(axis))
return result.astype('int64')
@@ -5302,12 +5318,12 @@ def _count_level(self, level, axis=0, numeric_only=False):
self._get_axis_name(axis))
if frame._is_mixed_type:
- # Since we have mixed types, calling notnull(frame.values) might
+ # Since we have mixed types, calling notna(frame.values) might
# upcast everything to object
- mask = notnull(frame).values
+ mask = notna(frame).values
else:
# But use the speedup when we have homogeneous dtypes
- mask = notnull(frame.values)
+ mask = notna(frame.values)
if axis == 1:
# We're transposing the mask rather than frame to avoid potential
@@ -5400,7 +5416,7 @@ def f(x):
try:
if filter_type is None or filter_type == 'numeric':
result = result.astype(np.float64)
- elif filter_type == 'bool' and notnull(result).all():
+ elif filter_type == 'bool' and notna(result).all():
result = result.astype(np.bool_)
except (ValueError, TypeError):
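The `frame.py` additions above give `DataFrame` thin `isna`/`isnull`/`notna`/`notnull` methods that delegate to the superclass and share one docstring template through `@Appender(_shared_docs['isna'] % _shared_doc_kwargs)`. A hypothetical sketch of that shared-docstring pattern (toy `Appender` and classes, not the pandas versions):

```python
# Hypothetical sketch of the shared-docstring pattern in the diff:
# a decorator attaches a per-class-formatted template docstring, so
# isna and its alias document themselves without repeating text.
_shared_docs = {
    'isna': ("Return a boolean same-sized object indicating "
             "if the values are na.\n\nSee Also\n--------\n"
             "%(klass)s.notna : boolean inverse of isna"),
}


class Appender:
    def __init__(self, text):
        self.text = text

    def __call__(self, func):
        func.__doc__ = self.text
        return func


class NDFrame:
    def isna(self):
        return [v is None for v in self._values]


class DataFrame(NDFrame):
    def __init__(self, values):
        self._values = values

    @Appender(_shared_docs['isna'] % {'klass': 'DataFrame'})
    def isna(self):
        return super().isna()

    isnull = isna  # alias shares both behaviour and docstring


df = DataFrame([1, None, 3])
assert df.isna() == df.isnull() == [False, True, False]
assert 'DataFrame.notna' in DataFrame.isna.__doc__
```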
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f3b7b31557216..abccd76b2fbcb 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -25,9 +25,8 @@
is_dict_like,
is_re_compilable,
pandas_dtype)
-from pandas.core.dtypes.cast import (
- maybe_promote, maybe_upcast_putmask)
-from pandas.core.dtypes.missing import isnull, notnull
+from pandas.core.dtypes.cast import maybe_promote, maybe_upcast_putmask
+from pandas.core.dtypes.missing import isna, notna
from pandas.core.dtypes.generic import ABCSeries, ABCPanel
from pandas.core.common import (_values_from_object,
@@ -54,7 +53,8 @@
isidentifier, set_function_name, cPickle as pkl)
from pandas.core.ops import _align_method_FRAME
import pandas.core.nanops as nanops
-from pandas.util._decorators import Appender, Substitution, deprecate_kwarg
+from pandas.util._decorators import (Appender, Substitution,
+ deprecate_kwarg)
from pandas.util._validators import validate_bool_kwarg
from pandas.core import config
@@ -4000,7 +4000,7 @@ def fillna(self, value=None, method=None, axis=None, inplace=False,
inplace=inplace,
downcast=downcast)
elif isinstance(value, DataFrame) and self.ndim == 2:
- new_data = self.where(self.notnull(), value)
+ new_data = self.where(self.notna(), value)
else:
raise ValueError("invalid fill value with a %s" % type(value))
@@ -4398,7 +4398,7 @@ def interpolate(self, method='linear', axis=0, limit=None, inplace=False,
else:
index = _maybe_transposed_self._get_axis(alt_ax)
- if pd.isnull(index).any():
+ if isna(index).any():
raise NotImplementedError("Interpolation with NaNs in the index "
"has not been implemented. Try filling "
"those NaNs before interpolating.")
@@ -4503,14 +4503,14 @@ def asof(self, where, subset=None):
loc -= 1
values = self._values
- while loc > 0 and isnull(values[loc]):
+ while loc > 0 and isna(values[loc]):
loc -= 1
return values[loc]
if not isinstance(where, Index):
where = Index(where) if is_list else Index([where])
- nulls = self.isnull() if is_series else self[subset].isnull().any(1)
+ nulls = self.isna() if is_series else self[subset].isna().any(1)
if nulls.all():
if is_series:
return self._constructor(np.nan, index=where, name=self.name)
@@ -4533,38 +4533,50 @@ def asof(self, where, subset=None):
# ----------------------------------------------------------------------
# Action Methods
- _shared_docs['isnull'] = """
- Return a boolean same-sized object indicating if the values are null.
+ _shared_docs['isna'] = """
+ Return a boolean same-sized object indicating if the values are na.
See Also
--------
- notnull : boolean inverse of isnull
+ %(klass)s.notna : boolean inverse of isna
+ %(klass)s.isnull : alias of isna
+ isna : top-level isna
"""
- @Appender(_shared_docs['isnull'])
+ @Appender(_shared_docs['isna'] % _shared_doc_kwargs)
+ def isna(self):
+ return isna(self).__finalize__(self)
+
+ @Appender(_shared_docs['isna'] % _shared_doc_kwargs)
def isnull(self):
- return isnull(self).__finalize__(self)
+ return isna(self).__finalize__(self)
- _shared_docs['isnotnull'] = """
+ _shared_docs['notna'] = """
Return a boolean same-sized object indicating if the values are
- not null.
+ not na.
See Also
--------
- isnull : boolean inverse of notnull
+ %(klass)s.isna : boolean inverse of notna
+ %(klass)s.notnull : alias of notna
+ notna : top-level notna
"""
- @Appender(_shared_docs['isnotnull'])
+ @Appender(_shared_docs['notna'] % _shared_doc_kwargs)
+ def notna(self):
+ return notna(self).__finalize__(self)
+
+ @Appender(_shared_docs['notna'] % _shared_doc_kwargs)
def notnull(self):
- return notnull(self).__finalize__(self)
+ return notna(self).__finalize__(self)
def _clip_with_scalar(self, lower, upper, inplace=False):
- if ((lower is not None and np.any(isnull(lower))) or
- (upper is not None and np.any(isnull(upper)))):
+ if ((lower is not None and np.any(isna(lower))) or
+ (upper is not None and np.any(isna(upper)))):
raise ValueError("Cannot use an NA value as a clip threshold")
result = self.values
- mask = isnull(result)
+ mask = isna(result)
with np.errstate(all='ignore'):
if upper is not None:
@@ -4588,7 +4600,7 @@ def _clip_with_one_bound(self, threshold, method, axis, inplace):
if axis is not None:
axis = self._get_axis_number(axis)
- if np.any(isnull(threshold)):
+ if np.any(isna(threshold)):
raise ValueError("Cannot use an NA value as a clip threshold")
# method is self.le for upper bound and self.ge for lower bound
@@ -4597,7 +4609,7 @@ def _clip_with_one_bound(self, threshold, method, axis, inplace):
return self._clip_with_scalar(None, threshold, inplace=inplace)
return self._clip_with_scalar(threshold, None, inplace=inplace)
- subset = method(threshold, axis=axis) | isnull(self)
+ subset = method(threshold, axis=axis) | isna(self)
# GH #15390
# In order for where method to work, the threshold must
@@ -5472,7 +5484,7 @@ def _align_series(self, other, join='outer', axis=None, level=None,
right = other.reindex(join_index, level=level)
# fill
- fill_na = notnull(fill_value) or (method is not None)
+ fill_na = notna(fill_value) or (method is not None)
if fill_na:
left = left.fillna(fill_value, method=method, limit=limit,
axis=fill_axis)
@@ -6405,7 +6417,7 @@ def pct_change(self, periods=1, fill_method='pad', limit=None, freq=None,
rs = (data.div(data.shift(periods=periods, freq=freq, axis=axis,
**kwargs)) - 1)
if freq is None:
- mask = isnull(_values_from_object(self))
+ mask = isna(_values_from_object(self))
np.putmask(rs.values, mask, np.nan)
return rs
@@ -6767,10 +6779,10 @@ def cum_func(self, axis=None, skipna=True, *args, **kwargs):
if (skipna and
issubclass(y.dtype.type, (np.datetime64, np.timedelta64))):
result = accum_func(y, axis)
- mask = isnull(self)
+ mask = isna(self)
np.putmask(result, mask, tslib.iNaT)
elif skipna and not issubclass(y.dtype.type, (np.integer, np.bool_)):
- mask = isnull(self)
+ mask = isna(self)
np.putmask(y, mask, mask_a)
result = accum_func(y, axis)
np.putmask(result, mask, mask_b)
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index c8a7ee752d243..a388892e925b6 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -36,7 +36,7 @@
_ensure_categorical,
_ensure_float)
from pandas.core.dtypes.cast import maybe_downcast_to_dtype
-from pandas.core.dtypes.missing import isnull, notnull, _maybe_fill
+from pandas.core.dtypes.missing import isna, notna, _maybe_fill
from pandas.core.common import (_values_from_object, AbstractMethodError,
_default_index)
@@ -1168,7 +1168,7 @@ def first_compat(x, axis=0):
def first(x):
x = np.asarray(x)
- x = x[notnull(x)]
+ x = x[notna(x)]
if len(x) == 0:
return np.nan
return x[0]
@@ -1183,7 +1183,7 @@ def last_compat(x, axis=0):
def last(x):
x = np.asarray(x)
- x = x[notnull(x)]
+ x = x[notna(x)]
if len(x) == 0:
return np.nan
return x[-1]
@@ -2357,7 +2357,7 @@ def ngroups(self):
@cache_readonly
def result_index(self):
- if len(self.binlabels) != 0 and isnull(self.binlabels[0]):
+ if len(self.binlabels) != 0 and isna(self.binlabels[0]):
return self.binlabels[1:]
return self.binlabels
@@ -3114,13 +3114,13 @@ def filter(self, func, dropna=True, *args, **kwargs): # noqa
wrapper = lambda x: func(x, *args, **kwargs)
# Interpret np.nan as False.
- def true_and_notnull(x, *args, **kwargs):
+ def true_and_notna(x, *args, **kwargs):
b = wrapper(x, *args, **kwargs)
- return b and notnull(b)
+ return b and notna(b)
try:
indices = [self._get_index(name) for name, group in self
- if true_and_notnull(group)]
+ if true_and_notna(group)]
except ValueError:
raise TypeError("the filter must return a boolean result")
except TypeError:
@@ -3142,9 +3142,9 @@ def nunique(self, dropna=True):
'val.dtype must be object, got %s' % val.dtype
val, _ = algorithms.factorize(val, sort=False)
sorter = np.lexsort((val, ids))
- _isnull = lambda a: a == -1
+ _isna = lambda a: a == -1
else:
- _isnull = isnull
+ _isna = isna
ids, val = ids[sorter], val[sorter]
@@ -3154,7 +3154,7 @@ def nunique(self, dropna=True):
inc = np.r_[1, val[1:] != val[:-1]]
# 1st item of each group is a new unique observation
- mask = _isnull(val)
+ mask = _isna(val)
if dropna:
inc[idx] = 1
inc[mask] = 0
@@ -3316,7 +3316,7 @@ def count(self):
ids, _, ngroups = self.grouper.group_info
val = self.obj.get_values()
- mask = (ids != -1) & ~isnull(val)
+ mask = (ids != -1) & ~isna(val)
ids = _ensure_platform_int(ids)
out = np.bincount(ids[mask], minlength=ngroups or None)
@@ -3870,7 +3870,7 @@ def _choose_path(self, fast_path, slow_path, group):
if res.shape == res_fast.shape:
res_r = res.values.ravel()
res_fast_r = res_fast.values.ravel()
- mask = notnull(res_r)
+ mask = notna(res_r)
if (res_r[mask] == res_fast_r[mask]).all():
path = fast_path
@@ -3950,8 +3950,8 @@ def filter(self, func, dropna=True, *args, **kwargs): # noqa
pass
# interpret the result of the filter
- if is_bool(res) or (is_scalar(res) and isnull(res)):
- if res and notnull(res):
+ if is_bool(res) or (is_scalar(res) and isna(res)):
+ if res and notna(res):
indices.append(self._get_index(name))
else:
# non scalars aren't allowed
@@ -4204,13 +4204,13 @@ def _apply_to_column_groupbys(self, func):
def count(self):
""" Compute count of group, excluding missing values """
from functools import partial
- from pandas.core.dtypes.missing import _isnull_ndarraylike as isnull
+ from pandas.core.dtypes.missing import _isna_ndarraylike as isna
data, _ = self._get_data_to_aggregate()
ids, _, ngroups = self.grouper.group_info
mask = ids != -1
- val = ((mask & ~isnull(blk.get_values())) for blk in data.blocks)
+ val = ((mask & ~isna(blk.get_values())) for blk in data.blocks)
loc = (blk.mgr_locs for blk in data.blocks)
counter = partial(count_level_2d, labels=ids, max_bin=ngroups, axis=1)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 714b952217c9d..fd9abcfb726bf 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -14,7 +14,7 @@
from pandas.core.dtypes.generic import ABCSeries, ABCMultiIndex, ABCPeriodIndex
-from pandas.core.dtypes.missing import isnull, array_equivalent
+from pandas.core.dtypes.missing import isna, array_equivalent
from pandas.core.dtypes.common import (
_ensure_int64,
_ensure_object,
@@ -42,8 +42,8 @@
from pandas.core.base import PandasObject, IndexOpsMixin
import pandas.core.base as base
-from pandas.util._decorators import (Appender, Substitution,
- cache_readonly, deprecate_kwarg)
+from pandas.util._decorators import (
+ Appender, Substitution, cache_readonly, deprecate_kwarg)
from pandas.core.indexes.frozen import FrozenList
import pandas.core.common as com
import pandas.core.dtypes.concat as _concat
@@ -216,7 +216,7 @@ def __new__(cls, data=None, dtype=None, copy=False, name=None,
if inferred == 'integer':
data = np.array(data, copy=copy, dtype=dtype)
elif inferred in ['floating', 'mixed-integer-float']:
- if isnull(data).any():
+ if isna(data).any():
raise ValueError('cannot convert float '
'NaN to integer')
@@ -624,7 +624,7 @@ def where(self, cond, other=None):
values = np.where(cond, values, other)
- if self._is_numeric_dtype and np.any(isnull(values)):
+ if self._is_numeric_dtype and np.any(isna(values)):
# We can't coerce to the numeric dtype of "self" (unless
# it's float) if there are NaN values in our output.
dtype = None
@@ -735,7 +735,7 @@ def _coerce_scalar_to_index(self, item):
"""
dtype = self.dtype
- if self._is_numeric_dtype and isnull(item):
+ if self._is_numeric_dtype and isna(item):
# We can't coerce to the numeric dtype of "self" (unless
# it's float) if there are NaN values in our output.
dtype = None
@@ -1821,7 +1821,7 @@ def _assert_take_fillable(self, values, indices, allow_fill=True,
def _isnan(self):
""" return if each value is nan"""
if self._can_hold_na:
- return isnull(self)
+ return isna(self)
else:
# shouldn't reach to this condition by checking hasnans beforehand
values = np.empty(len(self), dtype=np.bool_)
@@ -1844,7 +1844,7 @@ def hasnans(self):
else:
return False
- def isnull(self):
+ def isna(self):
"""
Detect missing values
@@ -1852,29 +1852,33 @@ def isnull(self):
Returns
-------
- a boolean array of whether my values are null
+ a boolean array of whether my values are na
See also
--------
- pandas.isnull : pandas version
+ isnull : alias of isna
+ pandas.isna : top-level isna
"""
return self._isnan
+ isnull = isna
- def notnull(self):
+ def notna(self):
"""
- Reverse of isnull
+ Inverse of isna
.. versionadded:: 0.20.0
Returns
-------
- a boolean array of whether my values are not null
+ a boolean array of whether my values are not na
See also
--------
- pandas.notnull : pandas version
+ notnull : alias of notna
+ pandas.notna : top-level notna
"""
- return ~self.isnull()
+ return ~self.isna()
+ notnull = notna
def putmask(self, mask, value):
"""
@@ -1922,7 +1926,7 @@ def _format_with_header(self, header, na_rep='NaN', **kwargs):
for x in values]
# could have nans
- mask = isnull(values)
+ mask = isna(values)
if mask.any():
result = np.array(result)
result[mask] = na_rep
@@ -1960,7 +1964,7 @@ def to_native_types(self, slicer=None, **kwargs):
def _format_native_types(self, na_rep='', quoting=None, **kwargs):
""" actually format my specific types """
- mask = isnull(self)
+ mask = isna(self)
if not self.is_object() and not quoting:
values = np.asarray(self).astype(str)
else:
@@ -2411,7 +2415,7 @@ def _get_unique_index(self, dropna=False):
if dropna:
try:
if self.hasnans:
- values = values[~isnull(values)]
+ values = values[~isna(values)]
except NotImplementedError:
pass
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index cd8559bcca03c..845c71b6c41d8 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -17,7 +17,7 @@
from pandas.core.dtypes.generic import (
ABCIndex, ABCSeries,
ABCPeriodIndex, ABCIndexClass)
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna
from pandas.core import common as com, algorithms
from pandas.core.algorithms import checked_add_with_arr
from pandas.core.common import AbstractMethodError
@@ -857,7 +857,7 @@ def _append_same_dtype(self, to_concat, name):
def _ensure_datetimelike_to_i8(other):
""" helper for coercing an input scalar or array to i8 """
- if lib.isscalar(other) and isnull(other):
+ if lib.isscalar(other) and isna(other):
other = iNaT
elif isinstance(other, ABCIndexClass):
# convert tz if needed
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index e6bc1790f2992..5a04c550f4502 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -23,7 +23,7 @@
_ensure_int64)
from pandas.core.dtypes.generic import ABCSeries
from pandas.core.dtypes.dtypes import DatetimeTZDtype
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna
import pandas.core.dtypes.concat as _concat
from pandas.errors import PerformanceWarning
@@ -109,7 +109,7 @@ def wrapper(self, other):
isinstance(other, compat.string_types)):
other = _to_m8(other, tz=self.tz)
result = func(other)
- if isnull(other):
+ if isna(other):
result.fill(nat_result)
else:
if isinstance(other, list):
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index e6b2bc0953680..aa2ad21ae37fd 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -2,7 +2,7 @@
import numpy as np
-from pandas.core.dtypes.missing import notnull, isnull
+from pandas.core.dtypes.missing import notna, isna
from pandas.core.dtypes.generic import ABCPeriodIndex
from pandas.core.dtypes.dtypes import IntervalDtype
from pandas.core.dtypes.common import (
@@ -222,8 +222,8 @@ def _validate(self):
raise ValueError("invalid options for 'closed': %s" % self.closed)
if len(self.left) != len(self.right):
raise ValueError('left and right must have the same length')
- left_mask = notnull(self.left)
- right_mask = notnull(self.right)
+ left_mask = notna(self.left)
+ right_mask = notna(self.right)
if not (left_mask == right_mask).all():
raise ValueError('missing values must be missing in the same '
'location both left and right sides')
@@ -240,7 +240,7 @@ def hasnans(self):
def _isnan(self):
""" return if each value is nan"""
if self._mask is None:
- self._mask = isnull(self.left)
+ self._mask = isna(self.left)
return self._mask
@cache_readonly
@@ -415,7 +415,7 @@ def from_tuples(cls, data, closed='right', name=None, copy=False):
right = []
for d in data:
- if isnull(d):
+ if isna(d):
left.append(np.nan)
right.append(np.nan)
continue
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index ed7ca079a07b5..420788f9008cd 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -19,7 +19,7 @@
is_iterator,
is_list_like,
is_scalar)
-from pandas.core.dtypes.missing import isnull, array_equivalent
+from pandas.core.dtypes.missing import isna, array_equivalent
from pandas.errors import PerformanceWarning, UnsortedIndexError
from pandas.core.common import (_values_from_object,
is_bool_indexer,
@@ -783,8 +783,8 @@ def duplicated(self, keep='first'):
@Appender(ibase._index_shared_docs['fillna'])
def fillna(self, value=None, downcast=None):
- # isnull is not implemented for MultiIndex
- raise NotImplementedError('isnull is not defined for MultiIndex')
+ # isna is not implemented for MultiIndex
+ raise NotImplementedError('isna is not defined for MultiIndex')
@Appender(_index_shared_docs['dropna'])
def dropna(self, how='any'):
@@ -920,7 +920,7 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
else:
# weird all NA case
- formatted = [pprint_thing(na if isnull(x) else x,
+ formatted = [pprint_thing(na if isna(x) else x,
escape_chars=('\t', '\r', '\n'))
for x in algos.take_1d(lev._values, lab)]
stringified_levels.append(formatted)
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 68713743d72ed..2823951c0f348 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -13,7 +13,7 @@
is_timedelta64_dtype,
is_timedelta64_ns_dtype,
_ensure_int64)
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna
from pandas.core.dtypes.generic import ABCSeries
from pandas.core.common import _maybe_box, _values_from_object
@@ -51,7 +51,7 @@ def wrapper(self, other):
# failed to parse as timedelta
raise TypeError(msg.format(type(other)))
result = func(other)
- if isnull(other):
+ if isna(other):
result.fill(nat_result)
else:
if not is_list_like(other):
@@ -331,7 +331,7 @@ def _evaluate_with_timedelta_like(self, other, op, opstr):
if opstr in ['__div__', '__truediv__', '__floordiv__']:
if _is_convertible_to_td(other):
other = Timedelta(other)
- if isnull(other):
+ if isna(other):
raise NotImplementedError(
"division by pd.NaT not implemented")
@@ -430,7 +430,7 @@ def components(self):
hasnans = self.hasnans
if hasnans:
def f(x):
- if isnull(x):
+ if isna(x):
return [np.nan] * len(columns)
return x.components
else:
@@ -685,7 +685,7 @@ def get_loc(self, key, method=None, tolerance=None):
if is_list_like(key):
raise TypeError
- if isnull(key):
+ if isna(key):
key = NaT
if tolerance is not None:
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 38cc5431a004f..8f6b00fd204cc 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -15,7 +15,7 @@
is_sparse,
_is_unorderable_exception,
_ensure_platform_int)
-from pandas.core.dtypes.missing import isnull, _infer_fill_value
+from pandas.core.dtypes.missing import isna, _infer_fill_value
from pandas.core.index import Index, MultiIndex
@@ -1428,7 +1428,7 @@ def _has_valid_type(self, key, axis):
else:
def error():
- if isnull(key):
+ if isna(key):
raise TypeError("cannot use label indexing with a null "
"key")
raise KeyError("the label [%s] is not in the [%s]" %
@@ -1940,7 +1940,7 @@ def check_bool_indexer(ax, key):
result = key
if isinstance(key, ABCSeries) and not key.index.equals(ax):
result = result.reindex(ax)
- mask = isnull(result._values)
+ mask = isna(result._values)
if mask.any():
raise IndexingError('Unalignable boolean Series provided as '
'indexer (index of the boolean Series and of '
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 8f3667edf68e6..25c367fcbd968 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -44,8 +44,8 @@
astype_nansafe,
find_common_type)
from pandas.core.dtypes.missing import (
- isnull, notnull, array_equivalent,
- _is_na_compat,
+ isna, notna, array_equivalent,
+ _isna_compat,
is_null_datelike_scalar)
import pandas.core.dtypes.concat as _concat
@@ -371,7 +371,7 @@ def fillna(self, value, limit=None, inplace=False, downcast=None,
else:
return self.copy()
- mask = isnull(self.values)
+ mask = isna(self.values)
if limit is not None:
if not is_integer(limit):
raise ValueError('Limit must be an integer')
@@ -633,7 +633,7 @@ def _try_cast_result(self, result, dtype=None):
dtype = dtype.type
if issubclass(dtype, (np.bool_, np.object_)):
if issubclass(dtype, np.bool_):
- if isnull(result).all():
+ if isna(result).all():
return result.astype(np.bool_)
else:
result = result.astype(np.object_)
@@ -651,7 +651,7 @@ def _try_cast_result(self, result, dtype=None):
def _try_coerce_args(self, values, other):
""" provide coercion to our input arguments """
- if np.any(notnull(other)) and not self._can_hold_element(other):
+ if np.any(notna(other)) and not self._can_hold_element(other):
# coercion issues
# let higher levels handle
raise TypeError("cannot convert {} to an {}".format(
@@ -676,7 +676,7 @@ def to_native_types(self, slicer=None, na_rep='nan', quoting=None,
values = self.values
if slicer is not None:
values = values[:, slicer]
- mask = isnull(values)
+ mask = isna(values)
if not self.is_object and not quoting:
values = values.astype(str)
@@ -764,7 +764,7 @@ def setitem(self, indexer, value, mgr=None):
find_dtype = True
elif is_scalar(value):
- if isnull(value):
+ if isna(value):
# NaN promotion is handled in latter path
dtype = False
else:
@@ -894,7 +894,7 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0,
mask = mask.values
# if we are passed a scalar None, convert it here
- if not is_list_like(new) and isnull(new) and not self.is_object:
+ if not is_list_like(new) and isna(new) and not self.is_object:
new = self.fill_value
if self._can_hold_element(new):
@@ -1504,7 +1504,7 @@ def _nanpercentile1D(values, mask, q, **kw):
def _nanpercentile(values, q, axis, **kw):
- mask = isnull(self.values)
+ mask = isna(self.values)
if not is_scalar(mask) and mask.any():
if self.ndim == 1:
return _nanpercentile1D(values, mask, q, **kw)
@@ -1750,7 +1750,7 @@ def to_native_types(self, slicer=None, na_rep='', float_format=None,
# output (important for appropriate 'quoting' behaviour),
# so do not pass it through the FloatArrayFormatter
if float_format is None and decimal == '.':
- mask = isnull(values)
+ mask = isna(values)
if not quoting:
values = values.astype(str)
@@ -1869,7 +1869,7 @@ def _try_coerce_args(self, values, other):
base-type values, values mask, base-type other, other mask
"""
- values_mask = isnull(values)
+ values_mask = isna(values)
values = values.view('i8')
other_mask = False
@@ -1879,15 +1879,15 @@ def _try_coerce_args(self, values, other):
other = tslib.iNaT
other_mask = True
elif isinstance(other, Timedelta):
- other_mask = isnull(other)
+ other_mask = isna(other)
other = other.value
elif isinstance(other, timedelta):
other = Timedelta(other).value
elif isinstance(other, np.timedelta64):
- other_mask = isnull(other)
+ other_mask = isna(other)
other = Timedelta(other).value
elif hasattr(other, 'dtype') and is_timedelta64_dtype(other):
- other_mask = isnull(other)
+ other_mask = isna(other)
other = other.astype('i8', copy=False).view('i8')
else:
# coercion issues
@@ -1899,7 +1899,7 @@ def _try_coerce_args(self, values, other):
def _try_coerce_result(self, result):
""" reverse of try_coerce_args / try_operate """
if isinstance(result, np.ndarray):
- mask = isnull(result)
+ mask = isna(result)
if result.dtype.kind in ['i', 'f', 'O']:
result = result.astype('m8[ns]')
result[mask] = tslib.iNaT
@@ -1917,7 +1917,7 @@ def to_native_types(self, slicer=None, na_rep=None, quoting=None,
values = self.values
if slicer is not None:
values = values[:, slicer]
- mask = isnull(values)
+ mask = isna(values)
rvalues = np.empty(values.shape, dtype=object)
if na_rep is None:
@@ -2178,7 +2178,7 @@ def _replace_single(self, to_replace, value, inplace=False, filter=None,
# deal with replacing values with objects (strings) that match but
# whose replacement is not a string (numeric, nan, object)
- if isnull(value) or not isinstance(value, compat.string_types):
+ if isna(value) or not isinstance(value, compat.string_types):
def re_replacer(s):
try:
@@ -2333,7 +2333,7 @@ def to_native_types(self, slicer=None, na_rep='', quoting=None, **kwargs):
if slicer is not None:
# Categorical is always one dimension
values = values[slicer]
- mask = isnull(values)
+ mask = isna(values)
values = np.array(values, dtype='object')
values[mask] = na_rep
@@ -2377,7 +2377,7 @@ def _can_hold_element(self, element):
element = np.array(element)
return element.dtype == _NS_DTYPE or element.dtype == np.int64
return (is_integer(element) or isinstance(element, datetime) or
- isnull(element))
+ isna(element))
def _try_coerce_args(self, values, other):
"""
@@ -2396,7 +2396,7 @@ def _try_coerce_args(self, values, other):
base-type values, values mask, base-type other, other mask
"""
- values_mask = isnull(values)
+ values_mask = isna(values)
values = values.view('i8')
other_mask = False
@@ -2410,10 +2410,10 @@ def _try_coerce_args(self, values, other):
if getattr(other, 'tz') is not None:
raise TypeError("cannot coerce a Timestamp with a tz on a "
"naive Block")
- other_mask = isnull(other)
+ other_mask = isna(other)
other = other.asm8.view('i8')
elif hasattr(other, 'dtype') and is_datetime64_dtype(other):
- other_mask = isnull(other)
+ other_mask = isna(other)
other = other.astype('i8', copy=False).view('i8')
else:
# coercion issues
@@ -2540,26 +2540,26 @@ def _try_coerce_args(self, values, other):
-------
base-type values, values mask, base-type other, other mask
"""
- values_mask = _block_shape(isnull(values), ndim=self.ndim)
+ values_mask = _block_shape(isna(values), ndim=self.ndim)
# asi8 is a view, needs copy
values = _block_shape(values.asi8, ndim=self.ndim)
other_mask = False
if isinstance(other, ABCSeries):
other = self._holder(other)
- other_mask = isnull(other)
+ other_mask = isna(other)
if isinstance(other, bool):
raise TypeError
elif (is_null_datelike_scalar(other) or
- (is_scalar(other) and isnull(other))):
+ (is_scalar(other) and isna(other))):
other = tslib.iNaT
other_mask = True
elif isinstance(other, self._holder):
if other.tz != self.values.tz:
raise ValueError("incompatible or non tz-aware value")
other = other.asi8
- other_mask = isnull(other)
+ other_mask = isna(other)
elif isinstance(other, (np.datetime64, datetime, date)):
other = lib.Timestamp(other)
tz = getattr(other, 'tz', None)
@@ -2567,7 +2567,7 @@ def _try_coerce_args(self, values, other):
# test we can have an equal time zone
if tz is None or str(tz) != str(self.values.tz):
raise ValueError("incompatible or non tz-aware value")
- other_mask = isnull(other)
+ other_mask = isna(other)
other = other.value
else:
raise TypeError
@@ -3292,7 +3292,7 @@ def reduction(self, f, axis=0, consolidate=True, transposed=False,
placement=np.arange(len(values)))],
axes[0])
- def isnull(self, **kwargs):
+ def isna(self, **kwargs):
return self.apply('apply', **kwargs)
def where(self, **kwargs):
@@ -3347,8 +3347,8 @@ def replace_list(self, src_list, dest_list, inplace=False, regex=False,
values = self.as_matrix()
def comp(s):
- if isnull(s):
- return isnull(values)
+ if isna(s):
+ return isna(values)
return _maybe_compare(values, getattr(s, 'asm8', s), operator.eq)
masks = [comp(s) for i, s in enumerate(src_list)]
@@ -3681,10 +3681,10 @@ def get(self, item, fastpath=True):
"""
if self.items.is_unique:
- if not isnull(item):
+ if not isna(item):
loc = self.items.get_loc(item)
else:
- indexer = np.arange(len(self.items))[isnull(self.items)]
+ indexer = np.arange(len(self.items))[isna(self.items)]
# allow a single nan location indexer
if not is_scalar(indexer):
@@ -3696,7 +3696,7 @@ def get(self, item, fastpath=True):
return self.iget(loc, fastpath=fastpath)
else:
- if isnull(item):
+ if isna(item):
raise TypeError("cannot label index with a null key")
indexer = self.items.get_indexer_for([item])
@@ -4886,7 +4886,7 @@ def _putmask_smart(v, m, n):
# make sure that we have a nullable type
# if we have nulls
- if not _is_na_compat(v, nn[0]):
+ if not _isna_compat(v, nn[0]):
raise ValueError
# we ignore ComplexWarning here
@@ -5010,7 +5010,7 @@ def get_empty_dtype_and_na(join_units):
# Null blocks should not influence upcast class selection, unless there
# are only null blocks, when same upcasting rules must be applied to
# null upcast classes.
- if unit.is_null:
+ if unit.is_na:
null_upcast_classes[upcast_cls].append(dtype)
else:
upcast_classes[upcast_cls].append(dtype)
@@ -5280,7 +5280,7 @@ def dtype(self):
self.block.fill_value)[0])
@cache_readonly
- def is_null(self):
+ def is_na(self):
if self.block is None:
return True
@@ -5303,7 +5303,7 @@ def is_null(self):
total_len = values_flat.shape[0]
chunk_len = max(total_len // 40, 1000)
for i in range(0, total_len, chunk_len):
- if not isnull(values_flat[i:i + chunk_len]).all():
+ if not isna(values_flat[i:i + chunk_len]).all():
return False
return True
@@ -5316,7 +5316,7 @@ def get_reindexed_values(self, empty_dtype, upcasted_na):
else:
fill_value = upcasted_na
- if self.is_null:
+ if self.is_na:
if getattr(self.block, 'is_object', False):
# we want to avoid filling with np.nan if we are
# using None; we already know that we are all
diff --git a/pandas/core/missing.py b/pandas/core/missing.py
index 5aabc9d8730dd..93281e20a2a96 100644
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -20,7 +20,7 @@
_ensure_float64)
from pandas.core.dtypes.cast import infer_dtype_from_array
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna
def mask_missing(arr, values_to_mask):
@@ -36,7 +36,7 @@ def mask_missing(arr, values_to_mask):
except Exception:
values_to_mask = np.array(values_to_mask, dtype=object)
- na_mask = isnull(values_to_mask)
+ na_mask = isna(values_to_mask)
nonna = values_to_mask[~na_mask]
mask = None
@@ -63,9 +63,9 @@ def mask_missing(arr, values_to_mask):
if na_mask.any():
if mask is None:
- mask = isnull(arr)
+ mask = isna(arr)
else:
- mask |= isnull(arr)
+ mask |= isna(arr)
return mask
@@ -122,7 +122,7 @@ def interpolate_1d(xvalues, yvalues, method='linear', limit=None,
"""
# Treat the original, non-scipy methods first.
- invalid = isnull(yvalues)
+ invalid = isna(yvalues)
valid = ~invalid
if not valid.any():
@@ -479,7 +479,7 @@ def pad_1d(values, limit=None, mask=None, dtype=None):
raise ValueError('Invalid dtype for pad_1d [%s]' % dtype.name)
if mask is None:
- mask = isnull(values)
+ mask = isna(values)
mask = mask.view(np.uint8)
_method(values, mask, limit=limit)
return values
@@ -503,7 +503,7 @@ def backfill_1d(values, limit=None, mask=None, dtype=None):
raise ValueError('Invalid dtype for backfill_1d [%s]' % dtype.name)
if mask is None:
- mask = isnull(values)
+ mask = isna(values)
mask = mask.view(np.uint8)
_method(values, mask, limit=limit)
@@ -528,7 +528,7 @@ def pad_2d(values, limit=None, mask=None, dtype=None):
raise ValueError('Invalid dtype for pad_2d [%s]' % dtype.name)
if mask is None:
- mask = isnull(values)
+ mask = isna(values)
mask = mask.view(np.uint8)
if np.all(values.shape):
@@ -557,7 +557,7 @@ def backfill_2d(values, limit=None, mask=None, dtype=None):
raise ValueError('Invalid dtype for backfill_2d [%s]' % dtype.name)
if mask is None:
- mask = isnull(values)
+ mask = isna(values)
mask = mask.view(np.uint8)
if np.all(values.shape):
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 1d64f87b15761..5bebb8eb65b23 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -16,7 +16,7 @@
is_datetime_or_timedelta_dtype,
is_int_or_datetime_dtype, is_any_int_dtype)
from pandas.core.dtypes.cast import _int64_max, maybe_upcast_putmask
-from pandas.core.dtypes.missing import isnull, notnull
+from pandas.core.dtypes.missing import isna, notna
from pandas.core.config import get_option
from pandas.core.common import _values_from_object
@@ -195,7 +195,7 @@ def _get_values(values, skipna, fill_value=None, fill_value_typ=None,
if isfinite:
mask = _isfinite(values)
else:
- mask = isnull(values)
+ mask = isna(values)
dtype = values.dtype
dtype_ok = _na_ok_dtype(dtype)
@@ -232,7 +232,7 @@ def _get_values(values, skipna, fill_value=None, fill_value_typ=None,
def _isfinite(values):
if is_datetime_or_timedelta_dtype(values):
- return isnull(values)
+ return isna(values)
if (is_complex_dtype(values) or is_float_dtype(values) or
is_integer_dtype(values) or is_bool_dtype(values)):
return ~np.isfinite(values)
@@ -329,7 +329,7 @@ def nanmedian(values, axis=None, skipna=True):
values, mask, dtype, dtype_max = _get_values(values, skipna)
def get_median(x):
- mask = notnull(x)
+ mask = notna(x)
if not skipna and not mask.all():
return np.nan
return algos.median(_values_from_object(x[mask]))
@@ -395,7 +395,7 @@ def nanvar(values, axis=None, skipna=True, ddof=1):
values = _values_from_object(values)
dtype = values.dtype
- mask = isnull(values)
+ mask = isna(values)
if is_any_int_dtype(values):
values = values.astype('f8')
values[mask] = np.nan
@@ -434,7 +434,7 @@ def nanvar(values, axis=None, skipna=True, ddof=1):
def nansem(values, axis=None, skipna=True, ddof=1):
var = nanvar(values, axis, skipna, ddof=ddof)
- mask = isnull(values)
+ mask = isna(values)
if not is_float_dtype(values.dtype):
values = values.astype('f8')
count, _ = _get_counts_nanvar(mask, axis, ddof, values.dtype)
@@ -503,7 +503,7 @@ def nanskew(values, axis=None, skipna=True):
"""
values = _values_from_object(values)
- mask = isnull(values)
+ mask = isna(values)
if not is_float_dtype(values.dtype):
values = values.astype('f8')
count = _get_counts(mask, axis)
@@ -558,7 +558,7 @@ def nankurt(values, axis=None, skipna=True):
"""
values = _values_from_object(values)
- mask = isnull(values)
+ mask = isna(values)
if not is_float_dtype(values.dtype):
values = values.astype('f8')
count = _get_counts(mask, axis)
@@ -615,7 +615,7 @@ def nankurt(values, axis=None, skipna=True):
@disallow('M8', 'm8')
def nanprod(values, axis=None, skipna=True):
- mask = isnull(values)
+ mask = isna(values)
if skipna and not is_any_int_dtype(values):
values = values.copy()
values[mask] = 1
@@ -696,7 +696,7 @@ def nancorr(a, b, method='pearson', min_periods=None):
if min_periods is None:
min_periods = 1
- valid = notnull(a) & notnull(b)
+ valid = notna(a) & notna(b)
if not valid.all():
a = a[valid]
b = b[valid]
@@ -740,7 +740,7 @@ def nancov(a, b, min_periods=None):
if min_periods is None:
min_periods = 1
- valid = notnull(a) & notnull(b)
+ valid = notna(a) & notna(b)
if not valid.all():
a = a[valid]
b = b[valid]
@@ -778,8 +778,8 @@ def _ensure_numeric(x):
def make_nancomp(op):
def f(x, y):
- xmask = isnull(x)
- ymask = isnull(y)
+ xmask = isna(x)
+ ymask = isna(y)
mask = xmask | ymask
with np.errstate(all='ignore'):
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index bc201be26b756..4e08e1483d617 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -23,7 +23,7 @@
from pandas.errors import PerformanceWarning
from pandas.core.common import _values_from_object, _maybe_match_name
-from pandas.core.dtypes.missing import notnull, isnull
+from pandas.core.dtypes.missing import notna, isna
from pandas.core.dtypes.common import (
needs_i8_conversion,
is_datetimelike_v_numeric,
@@ -465,7 +465,7 @@ def _convert_to_array(self, values, name=None, other=None):
# we are in the wrong path
if (supplied_dtype is None and other is not None and
(other.dtype in ('timedelta64[ns]', 'datetime64[ns]')) and
- isnull(values).all()):
+ isna(values).all()):
values = np.empty(values.shape, dtype='timedelta64[ns]')
values[:] = iNaT
@@ -496,7 +496,7 @@ def _convert_to_array(self, values, name=None, other=None):
raise TypeError("incompatible type for a datetime/timedelta "
"operation [{0}]".format(name))
elif inferred_type == 'floating':
- if (isnull(values).all() and
+ if (isna(values).all() and
name in ('__add__', '__radd__', '__sub__', '__rsub__')):
values = np.empty(values.shape, dtype=other.dtype)
values[:] = iNaT
@@ -512,7 +512,7 @@ def _convert_to_array(self, values, name=None, other=None):
def _convert_for_datetime(self, lvalues, rvalues):
from pandas.core.tools.timedeltas import to_timedelta
- mask = isnull(lvalues) | isnull(rvalues)
+ mask = isna(lvalues) | isna(rvalues)
# datetimes require views
if self.is_datetime_lhs or self.is_datetime_rhs:
@@ -662,11 +662,11 @@ def na_op(x, y):
if isinstance(y, (np.ndarray, ABCSeries, pd.Index)):
dtype = find_common_type([x.dtype, y.dtype])
result = np.empty(x.size, dtype=dtype)
- mask = notnull(x) & notnull(y)
+ mask = notna(x) & notna(y)
result[mask] = op(x[mask], _values_from_object(y[mask]))
elif isinstance(x, np.ndarray):
result = np.empty(len(x), dtype=x.dtype)
- mask = notnull(x)
+ mask = notna(x)
result[mask] = op(x[mask], y)
else:
raise TypeError("{typ} cannot perform the operation "
@@ -776,7 +776,7 @@ def na_op(x, y):
raise TypeError("invalid type comparison")
# numpy does not like comparisons vs None
- if is_scalar(y) and isnull(y):
+ if is_scalar(y) and isna(y):
if name == '__ne__':
return np.ones(len(x), dtype=bool)
else:
@@ -788,10 +788,10 @@ def na_op(x, y):
(not is_scalar(y) and needs_i8_conversion(y))):
if is_scalar(y):
- mask = isnull(x)
+ mask = isna(x)
y = libindex.convert_scalar(x, _values_from_object(y))
else:
- mask = isnull(x) | isnull(y)
+ mask = isna(x) | isna(y)
y = y.view('i8')
x = x.view('i8')
@@ -898,7 +898,7 @@ def na_op(x, y):
try:
# let null fall thru
- if not isnull(y):
+ if not isna(y):
y = bool(y)
result = lib.scalar_binop(x, y, op)
except:
@@ -1182,7 +1182,7 @@ def na_op(x, y):
dtype = np.find_common_type([x.dtype, y.dtype], [])
result = np.empty(x.size, dtype=dtype)
yrav = y.ravel()
- mask = notnull(xrav) & notnull(yrav)
+ mask = notna(xrav) & notna(yrav)
xrav = xrav[mask]
# we may need to manually
@@ -1197,7 +1197,7 @@ def na_op(x, y):
result[mask] = op(xrav, yrav)
elif hasattr(x, 'size'):
result = np.empty(x.size, dtype=x.dtype)
- mask = notnull(xrav)
+ mask = notna(xrav)
xrav = xrav[mask]
if np.prod(xrav.shape):
with np.errstate(all='ignore'):
@@ -1259,11 +1259,11 @@ def na_op(x, y):
result = np.empty(x.size, dtype=bool)
if isinstance(y, (np.ndarray, ABCSeries)):
yrav = y.ravel()
- mask = notnull(xrav) & notnull(yrav)
+ mask = notna(xrav) & notna(yrav)
result[mask] = op(np.array(list(xrav[mask])),
np.array(list(yrav[mask])))
else:
- mask = notnull(xrav)
+ mask = notna(xrav)
result[mask] = op(np.array(list(xrav[mask])), y)
if op == operator.ne: # pragma: no cover
@@ -1335,7 +1335,7 @@ def na_op(x, y):
# TODO: might need to find_common_type here?
result = np.empty(len(x), dtype=x.dtype)
- mask = notnull(x)
+ mask = notna(x)
result[mask] = op(x[mask], y)
result, changed = maybe_upcast_putmask(result, ~mask, np.nan)
@@ -1365,11 +1365,11 @@ def na_op(x, y):
result = np.empty(x.size, dtype=bool)
if isinstance(y, np.ndarray):
yrav = y.ravel()
- mask = notnull(xrav) & notnull(yrav)
+ mask = notna(xrav) & notna(yrav)
result[mask] = op(np.array(list(xrav[mask])),
np.array(list(yrav[mask])))
else:
- mask = notnull(xrav)
+ mask = notna(xrav)
result[mask] = op(np.array(list(xrav[mask])), y)
if op == operator.ne: # pragma: no cover
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 609bf3186344a..e4515efe109c5 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -13,7 +13,7 @@
from pandas.core.dtypes.common import (
is_integer, is_list_like,
is_string_like, is_scalar)
-from pandas.core.dtypes.missing import notnull
+from pandas.core.dtypes.missing import notna
import pandas.core.computation.expressions as expressions
import pandas.core.common as com
@@ -685,7 +685,7 @@ def dropna(self, axis=0, how='any', inplace=False):
axis = self._get_axis_number(axis)
values = self.values
- mask = notnull(values)
+ mask = notna(values)
for ax in reversed(sorted(set(range(self._AXIS_LEN)) - set([axis]))):
mask = mask.sum(ax)
@@ -907,7 +907,7 @@ def to_frame(self, filter_observations=True):
if filter_observations:
# shaped like the return DataFrame
- mask = notnull(self.values).all(axis=0)
+ mask = notna(self.values).all(axis=0)
# size = mask.sum()
selector = mask.ravel()
else:
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 890555477425d..c2fb81178433e 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -175,7 +175,7 @@ def pivot_table(data, values=None, index=None, columns=None, aggfunc='mean',
if margins:
if dropna:
- data = data[data.notnull().all(axis=1)]
+ data = data[data.notna().all(axis=1)]
table = _add_margins(table, data, values, rows=index,
cols=columns, aggfunc=aggfunc,
margins_name=margins_name)
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index dcb83d225699d..b7638471f2ad0 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -12,7 +12,7 @@
is_list_like, is_bool_dtype,
needs_i8_conversion)
from pandas.core.dtypes.cast import maybe_promote
-from pandas.core.dtypes.missing import notnull
+from pandas.core.dtypes.missing import notna
import pandas.core.dtypes.concat as _concat
from pandas.core.series import Series
@@ -547,7 +547,7 @@ def factorize(index):
new_values = frame.values.ravel()
if dropna:
- mask = notnull(new_values)
+ mask = notna(new_values)
new_values = new_values[mask]
new_index = new_index[mask]
return Series(new_values, index=new_index)
@@ -835,7 +835,7 @@ def lreshape(data, groups, dropna=True, label=None):
if dropna:
mask = np.ones(len(mdata[pivot_cols[0]]), dtype=bool)
for c in pivot_cols:
- mask &= notnull(mdata[c])
+ mask &= notna(mdata[c])
if not mask.all():
mdata = dict((k, v[mask]) for k, v in compat.iteritems(mdata))
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py
index d8398023a5083..1cb39faa2e869 100644
--- a/pandas/core/reshape/tile.py
+++ b/pandas/core/reshape/tile.py
@@ -2,7 +2,7 @@
Quantilization functions and related stuff
"""
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna
from pandas.core.dtypes.common import (
is_integer,
is_scalar,
@@ -241,7 +241,7 @@ def _bins_to_cuts(x, bins, right=True, labels=None,
if include_lowest:
ids[x == bins[0]] = 1
- na_mask = isnull(x) | (ids == len(bins)) | (ids == 0)
+ na_mask = isna(x) | (ids == len(bins)) | (ids == 0)
has_nas = na_mask.any()
if labels is not False:
diff --git a/pandas/core/series.py b/pandas/core/series.py
index c7ead292c8b63..fb5819b2748a0 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -36,7 +36,8 @@
maybe_upcast, infer_dtype_from_scalar,
maybe_convert_platform,
maybe_cast_to_datetime, maybe_castable)
-from pandas.core.dtypes.missing import isnull, notnull, remove_na_arraylike
+from pandas.core.dtypes.missing import isna, notna, remove_na_arraylike
+
from pandas.core.common import (is_bool_indexer,
_default_index,
_asarray_tuplesafe,
@@ -745,7 +746,7 @@ def setitem(key, value):
pass
elif is_timedelta64_dtype(self.dtype):
# reassign a null value to iNaT
- if isnull(value):
+ if isna(value):
value = iNaT
try:
@@ -1226,7 +1227,7 @@ def count(self, level=None):
from pandas.core.index import _get_na_value
if level is None:
- return notnull(_values_from_object(self)).sum()
+ return notna(_values_from_object(self)).sum()
if isinstance(level, compat.string_types):
level = self.index._get_level_number(level)
@@ -1239,7 +1240,7 @@ def count(self, level=None):
lab[mask] = cnt = len(lev)
lev = lev.insert(cnt, _get_na_value(lev.dtype.type))
- obs = lab[notnull(self.values)]
+ obs = lab[notna(self.values)]
out = np.bincount(obs, minlength=len(lev) or None)
return self._constructor(out, index=lev,
dtype='int64').__finalize__(self)
@@ -1665,8 +1666,8 @@ def _binop(self, other, func, level=None, fill_value=None):
other_vals = other.values
if fill_value is not None:
- this_mask = isnull(this_vals)
- other_mask = isnull(other_vals)
+ this_mask = isna(this_vals)
+ other_mask = isna(other_vals)
this_vals = this_vals.copy()
other_vals = other_vals.copy()
@@ -1735,7 +1736,7 @@ def combine_first(self, other):
other = other.reindex(new_index, copy=False)
# TODO: do we need name?
name = _maybe_match_name(self, other) # noqa
- rs_vals = com._where_compat(isnull(this), other._values, this._values)
+ rs_vals = com._where_compat(isna(this), other._values, this._values)
return self._constructor(rs_vals, index=new_index).__finalize__(self)
def update(self, other):
@@ -1748,7 +1749,7 @@ def update(self, other):
other : Series
"""
other = other.reindex_like(self)
- mask = notnull(other)
+ mask = notna(other)
self._data = self._data.putmask(mask=mask, new=other, inplace=True)
self._maybe_update_cacher()
@@ -1781,7 +1782,7 @@ def _try_kind_sort(arr):
arr = self._values
sortedIdx = np.empty(len(self), dtype=np.int32)
- bad = isnull(arr)
+ bad = isna(arr)
good = ~bad
idx = _default_index(len(self))
@@ -1886,7 +1887,7 @@ def argsort(self, axis=0, kind='quicksort', order=None):
numpy.ndarray.argsort
"""
values = self._values
- mask = isnull(values)
+ mask = isna(values)
if mask.any():
result = Series(-1, index=self.index, name=self.name,
@@ -2215,7 +2216,7 @@ def map(self, arg, na_action=None):
if na_action == 'ignore':
def map_f(values, f):
return lib.map_infer_mask(values, f,
- isnull(values).view(np.uint8))
+ isna(values).view(np.uint8))
else:
map_f = lib.map_infer
@@ -2783,6 +2784,22 @@ def to_excel(self, excel_writer, sheet_name='Sheet1', na_rep='',
merge_cells=merge_cells, encoding=encoding,
inf_rep=inf_rep, verbose=verbose)
+ @Appender(generic._shared_docs['isna'] % _shared_doc_kwargs)
+ def isna(self):
+ return super(Series, self).isna()
+
+ @Appender(generic._shared_docs['isna'] % _shared_doc_kwargs)
+ def isnull(self):
+ return super(Series, self).isnull()
+
+ @Appender(generic._shared_docs['notna'] % _shared_doc_kwargs)
+ def notna(self):
+ return super(Series, self).notna()
+
+ @Appender(generic._shared_docs['notna'] % _shared_doc_kwargs)
+ def notnull(self):
+ return super(Series, self).notnull()
+
def dropna(self, axis=0, inplace=False, **kwargs):
"""
Return Series without null values
@@ -2824,7 +2841,7 @@ def first_valid_index(self):
if len(self) == 0:
return None
- mask = isnull(self._values)
+ mask = isna(self._values)
i = mask.argmin()
if mask[i]:
return None
@@ -2838,7 +2855,7 @@ def last_valid_index(self):
if len(self) == 0:
return None
- mask = isnull(self._values[::-1])
+ mask = isna(self._values[::-1])
i = mask.argmin()
if mask[i]:
return None
@@ -3010,7 +3027,7 @@ def _try_cast(arr, take_fast_path):
# possibility of nan -> garbage
if is_float_dtype(data.dtype) and is_integer_dtype(dtype):
- if not isnull(data).any():
+ if not isna(data).any():
subarr = _try_cast(data, True)
elif copy:
subarr = data.copy()
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 44a27bb5cbae1..12e8d8aba9177 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -9,7 +9,7 @@
is_list_like,
is_categorical_dtype)
from pandas.core.dtypes.cast import infer_dtype_from_array
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna
import pandas.core.algorithms as algorithms
from pandas._libs import lib, algos, hashtable
from pandas._libs.hashtable import unique_label_indices
@@ -239,7 +239,7 @@ def nargsort(items, kind='quicksort', ascending=True, na_position='last'):
items = np.asanyarray(items)
idx = np.arange(len(items))
- mask = isnull(items)
+ mask = isna(items)
non_nans = items[~mask]
non_nan_idx = idx[~mask]
nan_idx = np.nonzero(mask)[0]
diff --git a/pandas/core/sparse/array.py b/pandas/core/sparse/array.py
index 42fc5189eebd8..4a12dd1af28c9 100644
--- a/pandas/core/sparse/array.py
+++ b/pandas/core/sparse/array.py
@@ -27,7 +27,7 @@
from pandas.core.dtypes.cast import (
maybe_convert_platform, maybe_promote,
astype_nansafe, find_common_type)
-from pandas.core.dtypes.missing import isnull, notnull, na_value_for_dtype
+from pandas.core.dtypes.missing import isna, notna, na_value_for_dtype
import pandas._libs.sparse as splib
from pandas._libs.sparse import SparseIndex, BlockIndex, IntIndex
@@ -579,12 +579,12 @@ def count(self):
@property
def _null_fill_value(self):
- return isnull(self.fill_value)
+ return isna(self.fill_value)
@property
def _valid_sp_values(self):
sp_vals = self.sp_values
- mask = notnull(sp_vals)
+ mask = notna(sp_vals)
return sp_vals[mask]
@Appender(_index_shared_docs['fillna'] % _sparray_doc_kwargs)
@@ -595,7 +595,7 @@ def fillna(self, value, downcast=None):
if issubclass(self.dtype.type, np.floating):
value = float(value)
- new_values = np.where(isnull(self.sp_values), value, self.sp_values)
+ new_values = np.where(isna(self.sp_values), value, self.sp_values)
fill_value = value if self._null_fill_value else self.fill_value
return self._simple_new(new_values, self.sp_index,
@@ -687,7 +687,7 @@ def value_counts(self, dropna=True):
pass
else:
if self._null_fill_value:
- mask = pd.isnull(keys)
+ mask = pd.isna(keys)
else:
mask = keys == self.fill_value
@@ -767,8 +767,8 @@ def make_sparse(arr, kind='block', fill_value=None):
if fill_value is None:
fill_value = na_value_for_dtype(arr.dtype)
- if isnull(fill_value):
- mask = notnull(arr)
+ if isna(fill_value):
+ mask = notna(arr)
else:
# For str arrays in NumPy 1.12.0, operator!= below isn't
# element-wise but just returns False if fill_value is not str,
diff --git a/pandas/core/sparse/frame.py b/pandas/core/sparse/frame.py
index 462fb18618949..d8c0aa41edac1 100644
--- a/pandas/core/sparse/frame.py
+++ b/pandas/core/sparse/frame.py
@@ -10,7 +10,7 @@
from pandas import compat
import numpy as np
-from pandas.core.dtypes.missing import isnull, notnull
+from pandas.core.dtypes.missing import isna, notna
from pandas.core.dtypes.cast import maybe_upcast, find_common_type
from pandas.core.dtypes.common import _ensure_platform_int, is_scipy_sparse
@@ -565,7 +565,7 @@ def _combine_match_index(self, other, func, level=None, fill_value=None,
new_data[col] = func(series.values, other.values)
# fill_value is a function of our operator
- if isnull(other.fill_value) or isnull(self.default_fill_value):
+ if isna(other.fill_value) or isna(self.default_fill_value):
fill_value = np.nan
else:
fill_value = func(np.float64(self.default_fill_value),
@@ -651,7 +651,7 @@ def _reindex_columns(self, columns, method, copy, level, fill_value=None,
if level is not None:
raise TypeError('Reindex by level not supported for sparse')
- if notnull(fill_value):
+ if notna(fill_value):
raise NotImplementedError("'fill_value' argument is not supported")
if limit:
@@ -785,13 +785,15 @@ def cumsum(self, axis=0, *args, **kwargs):
return self.apply(lambda x: x.cumsum(), axis=axis)
- @Appender(generic._shared_docs['isnull'])
- def isnull(self):
- return self._apply_columns(lambda x: x.isnull())
+ @Appender(generic._shared_docs['isna'])
+ def isna(self):
+ return self._apply_columns(lambda x: x.isna())
+ isnull = isna
- @Appender(generic._shared_docs['isnotnull'])
- def isnotnull(self):
- return self._apply_columns(lambda x: x.isnotnull())
+ @Appender(generic._shared_docs['notna'])
+ def notna(self):
+ return self._apply_columns(lambda x: x.notna())
+ notnull = notna
def apply(self, func, axis=0, broadcast=False, reduce=False):
"""
diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py
index 1bc9cf5379930..62d20e73dbfcb 100644
--- a/pandas/core/sparse/series.py
+++ b/pandas/core/sparse/series.py
@@ -8,7 +8,7 @@
import numpy as np
import warnings
-from pandas.core.dtypes.missing import isnull, notnull
+from pandas.core.dtypes.missing import isna, notna
from pandas.core.dtypes.common import is_scalar
from pandas.core.common import _values_from_object, _maybe_match_name
@@ -172,7 +172,7 @@ def __init__(self, data=None, index=None, sparse_index=None, kind='block',
else:
length = len(index)
- if data == fill_value or (isnull(data) and isnull(fill_value)):
+ if data == fill_value or (isna(data) and isna(fill_value)):
if kind == 'block':
sparse_index = BlockIndex(length, [], [])
else:
@@ -641,19 +641,21 @@ def cumsum(self, axis=0, *args, **kwargs):
new_array, index=self.index,
sparse_index=new_array.sp_index).__finalize__(self)
- @Appender(generic._shared_docs['isnull'])
- def isnull(self):
- arr = SparseArray(isnull(self.values.sp_values),
+ @Appender(generic._shared_docs['isna'])
+ def isna(self):
+ arr = SparseArray(isna(self.values.sp_values),
sparse_index=self.values.sp_index,
- fill_value=isnull(self.fill_value))
+ fill_value=isna(self.fill_value))
return self._constructor(arr, index=self.index).__finalize__(self)
+ isnull = isna
- @Appender(generic._shared_docs['isnotnull'])
- def isnotnull(self):
- arr = SparseArray(notnull(self.values.sp_values),
+ @Appender(generic._shared_docs['notna'])
+ def notna(self):
+ arr = SparseArray(notna(self.values.sp_values),
sparse_index=self.values.sp_index,
- fill_value=notnull(self.fill_value))
+ fill_value=notna(self.fill_value))
return self._constructor(arr, index=self.index).__finalize__(self)
+ notnull = notna
def dropna(self, axis=0, inplace=False, **kwargs):
"""
@@ -665,7 +667,7 @@ def dropna(self, axis=0, inplace=False, **kwargs):
if inplace:
raise NotImplementedError("Cannot perform inplace dropna"
" operations on a SparseSeries")
- if isnull(self.fill_value):
+ if isna(self.fill_value):
return dense_valid
else:
dense_valid = dense_valid[dense_valid != self.fill_value]
@@ -677,7 +679,7 @@ def shift(self, periods, freq=None, axis=0):
return self.copy()
# no special handling of fill values yet
- if not isnull(self.fill_value):
+ if not isna(self.fill_value):
shifted = self.to_dense().shift(periods, freq=freq,
axis=axis)
return shifted.to_sparse(fill_value=self.fill_value,
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index cd7e313b13f1e..30465561a911c 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -2,7 +2,7 @@
from pandas.compat import zip
from pandas.core.dtypes.generic import ABCSeries, ABCIndex
-from pandas.core.dtypes.missing import isnull, notnull
+from pandas.core.dtypes.missing import isna, notna
from pandas.core.dtypes.common import (
is_bool_dtype,
is_categorical_dtype,
@@ -101,7 +101,7 @@ def str_cat(arr, others=None, sep=None, na_rep=None):
arrays = _get_array_list(arr, others)
n = _length_check(arrays)
- masks = np.array([isnull(x) for x in arrays])
+ masks = np.array([isna(x) for x in arrays])
cats = None
if na_rep is None:
@@ -129,12 +129,12 @@ def str_cat(arr, others=None, sep=None, na_rep=None):
return result
else:
arr = np.asarray(arr, dtype=object)
- mask = isnull(arr)
+ mask = isna(arr)
if na_rep is None and mask.any():
if sep == '':
na_rep = ''
else:
- return sep.join(arr[notnull(arr)])
+ return sep.join(arr[notna(arr)])
return sep.join(np.where(mask, na_rep, arr))
@@ -165,7 +165,7 @@ def _map(f, arr, na_mask=False, na_value=np.nan, dtype=object):
if not isinstance(arr, np.ndarray):
arr = np.asarray(arr, dtype=object)
if na_mask:
- mask = isnull(arr)
+ mask = isna(arr)
try:
convert = not all(mask)
result = lib.map_infer_mask(arr, f, mask.view(np.uint8), convert)
@@ -1391,7 +1391,7 @@ def __getitem__(self, key):
def __iter__(self):
i = 0
g = self.get(i)
- while g.notnull().any():
+ while g.notna().any():
yield g
i += 1
g = self.get(i)
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 9c02a6212c412..a1f323aff7c1a 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -18,7 +18,7 @@
from pandas.core.dtypes.generic import (
ABCIndexClass, ABCSeries,
ABCDataFrame)
-from pandas.core.dtypes.missing import notnull
+from pandas.core.dtypes.missing import notna
from pandas.core import algorithms
import pandas.compat as compat
@@ -176,7 +176,7 @@ def _guess_datetime_format(dt_str, dayfirst=False,
def _guess_datetime_format_for_array(arr, **kwargs):
# Try to guess the format based on the first non-NaN element
- non_nan_elements = notnull(arr).nonzero()[0]
+ non_nan_elements = notna(arr).nonzero()[0]
if len(non_nan_elements):
return _guess_datetime_format(arr[non_nan_elements[0]], **kwargs)
@@ -665,7 +665,7 @@ def calc_with_mask(carg, mask):
# a float with actual np.nan
try:
carg = arg.astype(np.float64)
- return calc_with_mask(carg, notnull(carg))
+ return calc_with_mask(carg, notna(carg))
except:
pass
@@ -744,7 +744,7 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
def _guess_time_format_for_array(arr):
# Try to guess the format based on the first non-NaN element
- non_nan_elements = notnull(arr).nonzero()[0]
+ non_nan_elements = notna(arr).nonzero()[0]
if len(non_nan_elements):
element = arr[non_nan_elements[0]]
for time_format in _time_formats:
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index e41ffae9d03c2..07e993d7ef509 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -12,7 +12,7 @@
ABCDataFrame)
from pandas.core.dtypes.common import (
is_categorical_dtype, is_list_like)
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna
from pandas.core.dtypes.cast import infer_dtype_from_scalar
@@ -215,7 +215,7 @@ def _hash_categorical(c, encoding, hash_key):
#
# TODO: GH 15362
- mask = c.isnull()
+ mask = c.isna()
if len(hashed):
result = hashed.take(c.codes)
else:
@@ -313,7 +313,7 @@ def _hash_scalar(val, encoding='utf8', hash_key=None):
1d uint64 numpy array of hash value, of length 1
"""
- if isnull(val):
+ if isna(val):
# this is to be consistent with the _hash_categorical implementation
return np.array([np.iinfo(np.uint64).max], dtype='u8')
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 57611794c375f..5866f1e8a76bd 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -828,7 +828,7 @@ def count(self):
results = []
for b in blocks:
- result = b.notnull().astype(int)
+ result = b.notna().astype(int)
result = self._constructor(result, window=window, min_periods=0,
center=self.center,
closed=self.closed).sum()
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 23eb3bb05fd0a..2b322431bd301 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -10,7 +10,7 @@
from textwrap import dedent
-from pandas.core.dtypes.missing import isnull, notnull
+from pandas.core.dtypes.missing import isna, notna
from pandas.core.dtypes.common import (
is_categorical_dtype,
is_float_dtype,
@@ -1562,7 +1562,7 @@ def __init__(self, obj, path_or_buf=None, sep=",", na_rep='',
self.data_index = obj.index
if (isinstance(self.data_index, (DatetimeIndex, PeriodIndex)) and
date_format is not None):
- self.data_index = Index([x.strftime(date_format) if notnull(x) else
+ self.data_index = Index([x.strftime(date_format) if notna(x) else
'' for x in self.data_index])
self.nlevels = getattr(self.data_index, 'nlevels', 1)
@@ -1816,7 +1816,7 @@ def _format(x):
elif isinstance(vals, ABCSparseArray):
vals = vals.values
- is_float_type = lib.map_infer(vals, is_float) & notnull(vals)
+ is_float_type = lib.map_infer(vals, is_float) & notna(vals)
leading_space = is_float_type.any()
fmt_values = []
@@ -1862,10 +1862,10 @@ def _value_formatter(self, float_format=None, threshold=None):
# because str(0.0) = '0.0' while '%g' % 0.0 = '0'
if float_format:
def base_formatter(v):
- return (float_format % v) if notnull(v) else self.na_rep
+ return (float_format % v) if notna(v) else self.na_rep
else:
def base_formatter(v):
- return str(v) if notnull(v) else self.na_rep
+ return str(v) if notna(v) else self.na_rep
if self.decimal != '.':
def decimal_formatter(v):
@@ -1877,7 +1877,7 @@ def decimal_formatter(v):
return decimal_formatter
def formatter(value):
- if notnull(value):
+ if notna(value):
if abs(value) > threshold:
return decimal_formatter(value)
else:
@@ -1907,7 +1907,7 @@ def format_values_with(float_format):
# separate the wheat from the chaff
values = self.values
- mask = isnull(values)
+ mask = isna(values)
if hasattr(values, 'to_dense'): # sparse numpy ndarray
values = values.to_dense()
values = np.array(values, dtype='object')
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index d88a230b42403..445fceb4b8146 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -771,7 +771,7 @@ def set_table_styles(self, table_styles):
@staticmethod
def _highlight_null(v, null_color):
- return 'background-color: %s' % null_color if pd.isnull(v) else ''
+ return 'background-color: %s' % null_color if pd.isna(v) else ''
def highlight_null(self, null_color='red'):
"""
diff --git a/pandas/io/json/json.py b/pandas/io/json/json.py
index 31907ad586817..a1d48719ba9c0 100644
--- a/pandas/io/json/json.py
+++ b/pandas/io/json/json.py
@@ -5,7 +5,7 @@
import pandas._libs.json as json
from pandas._libs.tslib import iNaT
from pandas.compat import StringIO, long, u
-from pandas import compat, isnull
+from pandas import compat, isna
from pandas import Series, DataFrame, to_datetime, MultiIndex
from pandas.io.common import (get_filepath_or_buffer, _get_handle,
_stringify_path)
@@ -535,7 +535,7 @@ def _try_convert_to_date(self, data):
# ignore numbers that are out of range
if issubclass(new_data.dtype.type, np.number):
- in_range = (isnull(new_data.values) | (new_data > self.min_stamp) |
+ in_range = (isna(new_data.values) | (new_data > self.min_stamp) |
(new_data.values == iNaT))
if not in_range.all():
return data, False
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 1e7d9d420b35d..9cf0a11a65270 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -21,7 +21,7 @@
is_float, is_dtype_equal,
is_object_dtype, is_string_dtype,
is_scalar, is_categorical_dtype)
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna
from pandas.core.dtypes.cast import astype_nansafe
from pandas.core.index import Index, MultiIndex, RangeIndex
from pandas.core.series import Series
@@ -1532,7 +1532,7 @@ def _infer_types(self, values, na_values, try_num_bool=True):
if try_num_bool:
try:
result = lib.maybe_convert_numeric(values, na_values, False)
- na_count = isnull(result).sum()
+ na_count = isna(result).sum()
except Exception:
result = values
if values.dtype == np.object_:
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 4e343556c083b..82c80a13372d7 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -25,7 +25,7 @@
import numpy as np
from pandas import (Series, DataFrame, Panel, Panel4D, Index,
- MultiIndex, Int64Index, isnull, concat,
+ MultiIndex, Int64Index, isna, concat,
SparseSeries, SparseDataFrame, PeriodIndex,
DatetimeIndex, TimedeltaIndex)
from pandas.core import config
@@ -2136,7 +2136,7 @@ def convert(self, values, nan_rep, encoding):
# if we have stored a NaN in the categories
# then strip it; in theory we could have BOTH
# -1s in the codes and nulls :<
- mask = isnull(categories)
+ mask = isna(categories)
if mask.any():
categories = categories[~mask]
codes[codes != -1] -= mask.astype(int).cumsum().values
@@ -3941,7 +3941,7 @@ def write_data(self, chunksize, dropna=False):
# figure the mask: only do if we can successfully process this
# column, otherwise ignore the mask
- mask = isnull(a.data).all(axis=0)
+ mask = isna(a.data).all(axis=0)
if isinstance(mask, np.ndarray):
masks.append(mask.astype('u1', copy=False))
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 0dbef66616e43..9aa47e5c69850 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -12,7 +12,7 @@
import numpy as np
import pandas._libs.lib as lib
-from pandas.core.dtypes.missing import isnull
+from pandas.core.dtypes.missing import isna
from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas.core.dtypes.common import (
is_list_like, is_dict_like,
@@ -632,7 +632,7 @@ def insert_data(self):
# replace NaN with None
if b._can_hold_na:
- mask = isnull(d)
+ mask = isna(d)
d[mask] = None
for col_loc, col in zip(b.mgr_locs, d):
@@ -845,7 +845,7 @@ def _harmonize_columns(self, parse_dates=None):
except KeyError:
pass # this column not in results
- def _get_notnull_col_dtype(self, col):
+ def _get_notna_col_dtype(self, col):
"""
Infer datatype of the Series col. In case the dtype of col is 'object'
and it contains NA values, this infers the datatype of the not-NA
@@ -853,9 +853,9 @@ def _get_notnull_col_dtype(self, col):
"""
col_for_inference = col
if col.dtype == 'object':
- notnulldata = col[~isnull(col)]
- if len(notnulldata):
- col_for_inference = notnulldata
+ notnadata = col[~isna(col)]
+ if len(notnadata):
+ col_for_inference = notnadata
return lib.infer_dtype(col_for_inference)
@@ -865,7 +865,7 @@ def _sqlalchemy_type(self, col):
if col.name in dtype:
return self.dtype[col.name]
- col_type = self._get_notnull_col_dtype(col)
+ col_type = self._get_notna_col_dtype(col)
from sqlalchemy.types import (BigInteger, Integer, Float,
Text, Boolean,
@@ -1345,7 +1345,7 @@ def _sql_type_name(self, col):
if col.name in dtype:
return dtype[col.name]
- col_type = self._get_notnull_col_dtype(col)
+ col_type = self._get_notna_col_dtype(col)
if col_type == 'timedelta64':
warnings.warn("the 'timedelta' type is not supported, and will be "
"written as integer values (ns frequency) to the "
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 30991d8a24c63..253ed03c25db9 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -24,7 +24,7 @@
from pandas.core.frame import DataFrame
from pandas.core.series import Series
import datetime
-from pandas import compat, to_timedelta, to_datetime, isnull, DatetimeIndex
+from pandas import compat, to_timedelta, to_datetime, isna, DatetimeIndex
from pandas.compat import lrange, lmap, lzip, text_type, string_types, range, \
zip, BytesIO
from pandas.util._decorators import Appender
@@ -402,7 +402,7 @@ def parse_dates_safe(dates, delta=False, year=False, days=False):
return DataFrame(d, index=index)
- bad_loc = isnull(dates)
+ bad_loc = isna(dates)
index = dates.index
if bad_loc.any():
dates = Series(dates)
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index de96d17da2a9f..b8d7cebe8a274 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -11,14 +11,14 @@
from pandas.util._decorators import cache_readonly
from pandas.core.base import PandasObject
-from pandas.core.dtypes.missing import notnull, remove_na_arraylike
+from pandas.core.dtypes.missing import isna, notna, remove_na_arraylike
from pandas.core.dtypes.common import (
is_list_like,
is_integer,
is_number,
is_hashable,
is_iterator)
-from pandas.core.common import AbstractMethodError, isnull, _try_sort
+from pandas.core.common import AbstractMethodError, _try_sort
from pandas.core.generic import _shared_docs, _shared_doc_kwargs
from pandas.core.index import Index, MultiIndex
from pandas.core.series import Series
@@ -554,7 +554,7 @@ def _get_xticks(self, convert_period=False):
"""
x = index._mpl_repr()
elif is_datetype:
- self.data = self.data[notnull(self.data.index)]
+ self.data = self.data[notna(self.data.index)]
self.data = self.data.sort_index()
x = self.data.index._mpl_repr()
else:
@@ -567,7 +567,7 @@ def _get_xticks(self, convert_period=False):
@classmethod
def _plot(cls, ax, x, y, style=None, is_errorbar=False, **kwds):
- mask = isnull(y)
+ mask = isna(y)
if mask.any():
y = np.ma.array(y)
y = np.ma.masked_where(mask, y)
@@ -1290,7 +1290,7 @@ def _args_adjust(self):
# create common bin edge
values = (self.data._convert(datetime=True)._get_numeric_data())
values = np.ravel(values)
- values = values[~isnull(values)]
+ values = values[~isna(values)]
hist, self.bins = np.histogram(
values, bins=self.bins,
@@ -1305,7 +1305,7 @@ def _plot(cls, ax, y, style=None, bins=None, bottom=0, column_num=0,
stacking_id=None, **kwds):
if column_num == 0:
cls._initialize_stacker(ax, stacking_id, len(bins) - 1)
- y = y[~isnull(y)]
+ y = y[~isna(y)]
base = np.zeros(len(bins) - 1)
bottom = bottom + \
diff --git a/pandas/plotting/_misc.py b/pandas/plotting/_misc.py
index 20ada033c0f58..db2211fb55135 100644
--- a/pandas/plotting/_misc.py
+++ b/pandas/plotting/_misc.py
@@ -5,7 +5,7 @@
import numpy as np
from pandas.util._decorators import deprecate_kwarg
-from pandas.core.dtypes.missing import notnull
+from pandas.core.dtypes.missing import notna
from pandas.compat import range, lrange, lmap, zip
from pandas.io.formats.printing import pprint_thing
@@ -62,7 +62,7 @@ def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False,
# no gaps between subplots
fig.subplots_adjust(wspace=0, hspace=0)
- mask = notnull(df)
+ mask = notna(df)
marker = _get_marker_compat(marker)
diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py
index b1652cf6eb6db..433ed7e517b1c 100644
--- a/pandas/tests/api/test_api.py
+++ b/pandas/tests/api/test_api.py
@@ -64,8 +64,8 @@ class TestPDApi(Base):
funcs = ['bdate_range', 'concat', 'crosstab', 'cut',
'date_range', 'interval_range', 'eval',
'factorize', 'get_dummies',
- 'infer_freq', 'isnull', 'lreshape',
- 'melt', 'notnull', 'offsets',
+ 'infer_freq', 'isna', 'isnull', 'lreshape',
+ 'melt', 'notna', 'notnull', 'offsets',
'merge', 'merge_ordered', 'merge_asof',
'period_range',
'pivot', 'pivot_table', 'qcut',
@@ -88,6 +88,9 @@ class TestPDApi(Base):
funcs_to = ['to_datetime', 'to_msgpack',
'to_numeric', 'to_pickle', 'to_timedelta']
+ # top-level to deprecate in the future
+ deprecated_funcs_in_future = []
+
# these are already deprecated; awaiting removal
deprecated_funcs = ['ewma', 'ewmcorr', 'ewmcov', 'ewmstd', 'ewmvar',
'ewmvol', 'expanding_apply', 'expanding_corr',
@@ -113,6 +116,7 @@ def test_api(self):
self.deprecated_classes_in_future +
self.funcs + self.funcs_option +
self.funcs_read + self.funcs_to +
+ self.deprecated_funcs_in_future +
self.deprecated_funcs,
self.ignored)
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index ec5fe45d7f610..d26ea047bb41f 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -18,7 +18,7 @@
from pandas._libs import tslib, lib
from pandas import (Series, Index, DataFrame, Timedelta,
DatetimeIndex, TimedeltaIndex, Timestamp,
- Panel, Period, Categorical)
+ Panel, Period, Categorical, isna)
from pandas.compat import u, PY2, PY3, StringIO, lrange
from pandas.core.dtypes import inference
from pandas.core.dtypes.common import (
@@ -36,7 +36,6 @@
is_scipy_sparse,
_ensure_int32,
_ensure_categorical)
-from pandas.core.dtypes.missing import isnull
from pandas.util import testing as tm
@@ -1014,7 +1013,7 @@ def test_nan_to_nat_conversions():
s = df['B'].copy()
s._data = s._data.setitem(indexer=tuple([slice(8, 9)]), value=np.nan)
- assert (isnull(s[8]))
+ assert (isna(s[8]))
# numpy < 1.7.0 is wrong
from distutils.version import LooseVersion
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index ea4f5da04a271..d3c9ca51af18f 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -15,151 +15,153 @@
from pandas.core.dtypes.common import is_scalar
from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas.core.dtypes.missing import (
- array_equivalent, isnull, notnull,
+ array_equivalent, isna, notna, isnull, notnull,
na_value_for_dtype)
-def test_notnull():
- assert notnull(1.)
- assert not notnull(None)
- assert not notnull(np.NaN)
+@pytest.mark.parametrize('notna_f', [notna, notnull])
+def test_notna_notnull(notna_f):
+ assert notna_f(1.)
+ assert not notna_f(None)
+ assert not notna_f(np.NaN)
- with cf.option_context("mode.use_inf_as_null", False):
- assert notnull(np.inf)
- assert notnull(-np.inf)
+ with cf.option_context("mode.use_inf_as_na", False):
+ assert notna_f(np.inf)
+ assert notna_f(-np.inf)
arr = np.array([1.5, np.inf, 3.5, -np.inf])
- result = notnull(arr)
+ result = notna_f(arr)
assert result.all()
- with cf.option_context("mode.use_inf_as_null", True):
- assert not notnull(np.inf)
- assert not notnull(-np.inf)
+ with cf.option_context("mode.use_inf_as_na", True):
+ assert not notna_f(np.inf)
+ assert not notna_f(-np.inf)
arr = np.array([1.5, np.inf, 3.5, -np.inf])
- result = notnull(arr)
+ result = notna_f(arr)
assert result.sum() == 2
- with cf.option_context("mode.use_inf_as_null", False):
+ with cf.option_context("mode.use_inf_as_na", False):
for s in [tm.makeFloatSeries(), tm.makeStringSeries(),
tm.makeObjectSeries(), tm.makeTimeSeries(),
tm.makePeriodSeries()]:
- assert (isinstance(isnull(s), Series))
+ assert (isinstance(notna_f(s), Series))
-class TestIsNull(object):
+class TestIsNA(object):
def test_0d_array(self):
- assert isnull(np.array(np.nan))
- assert not isnull(np.array(0.0))
- assert not isnull(np.array(0))
+ assert isna(np.array(np.nan))
+ assert not isna(np.array(0.0))
+ assert not isna(np.array(0))
# test object dtype
- assert isnull(np.array(np.nan, dtype=object))
- assert not isnull(np.array(0.0, dtype=object))
- assert not isnull(np.array(0, dtype=object))
+ assert isna(np.array(np.nan, dtype=object))
+ assert not isna(np.array(0.0, dtype=object))
+ assert not isna(np.array(0, dtype=object))
def test_empty_object(self):
for shape in [(4, 0), (4,)]:
arr = np.empty(shape=shape, dtype=object)
- result = isnull(arr)
+ result = isna(arr)
expected = np.ones(shape=shape, dtype=bool)
tm.assert_numpy_array_equal(result, expected)
- def test_isnull(self):
- assert not isnull(1.)
- assert isnull(None)
- assert isnull(np.NaN)
+ @pytest.mark.parametrize('isna_f', [isna, isnull])
+ def test_isna_isnull(self, isna_f):
+ assert not isna_f(1.)
+ assert isna_f(None)
+ assert isna_f(np.NaN)
assert float('nan')
- assert not isnull(np.inf)
- assert not isnull(-np.inf)
+ assert not isna_f(np.inf)
+ assert not isna_f(-np.inf)
# series
for s in [tm.makeFloatSeries(), tm.makeStringSeries(),
tm.makeObjectSeries(), tm.makeTimeSeries(),
tm.makePeriodSeries()]:
- assert isinstance(isnull(s), Series)
+ assert isinstance(isna_f(s), Series)
# frame
for df in [tm.makeTimeDataFrame(), tm.makePeriodFrame(),
tm.makeMixedDataFrame()]:
- result = isnull(df)
- expected = df.apply(isnull)
+ result = isna_f(df)
+ expected = df.apply(isna_f)
tm.assert_frame_equal(result, expected)
# panel
with catch_warnings(record=True):
for p in [tm.makePanel(), tm.makePeriodPanel(),
tm.add_nans(tm.makePanel())]:
- result = isnull(p)
- expected = p.apply(isnull)
+ result = isna_f(p)
+ expected = p.apply(isna_f)
tm.assert_panel_equal(result, expected)
# panel 4d
with catch_warnings(record=True):
for p in [tm.makePanel4D(), tm.add_nans_panel4d(tm.makePanel4D())]:
- result = isnull(p)
- expected = p.apply(isnull)
+ result = isna_f(p)
+ expected = p.apply(isna_f)
tm.assert_panel4d_equal(result, expected)
- def test_isnull_lists(self):
- result = isnull([[False]])
+ def test_isna_lists(self):
+ result = isna([[False]])
exp = np.array([[False]])
tm.assert_numpy_array_equal(result, exp)
- result = isnull([[1], [2]])
+ result = isna([[1], [2]])
exp = np.array([[False], [False]])
tm.assert_numpy_array_equal(result, exp)
# list of strings / unicode
- result = isnull(['foo', 'bar'])
+ result = isna(['foo', 'bar'])
exp = np.array([False, False])
tm.assert_numpy_array_equal(result, exp)
- result = isnull([u('foo'), u('bar')])
+ result = isna([u('foo'), u('bar')])
exp = np.array([False, False])
tm.assert_numpy_array_equal(result, exp)
- def test_isnull_nat(self):
- result = isnull([NaT])
+ def test_isna_nat(self):
+ result = isna([NaT])
exp = np.array([True])
tm.assert_numpy_array_equal(result, exp)
- result = isnull(np.array([NaT], dtype=object))
+ result = isna(np.array([NaT], dtype=object))
exp = np.array([True])
tm.assert_numpy_array_equal(result, exp)
- def test_isnull_numpy_nat(self):
+ def test_isna_numpy_nat(self):
arr = np.array([NaT, np.datetime64('NaT'), np.timedelta64('NaT'),
np.datetime64('NaT', 's')])
- result = isnull(arr)
+ result = isna(arr)
expected = np.array([True] * 4)
tm.assert_numpy_array_equal(result, expected)
- def test_isnull_datetime(self):
- assert not isnull(datetime.now())
- assert notnull(datetime.now())
+ def test_isna_datetime(self):
+ assert not isna(datetime.now())
+ assert notna(datetime.now())
idx = date_range('1/1/1990', periods=20)
exp = np.ones(len(idx), dtype=bool)
- tm.assert_numpy_array_equal(notnull(idx), exp)
+ tm.assert_numpy_array_equal(notna(idx), exp)
idx = np.asarray(idx)
idx[0] = iNaT
idx = DatetimeIndex(idx)
- mask = isnull(idx)
+ mask = isna(idx)
assert mask[0]
exp = np.array([True] + [False] * (len(idx) - 1), dtype=bool)
tm.assert_numpy_array_equal(mask, exp)
# GH 9129
pidx = idx.to_period(freq='M')
- mask = isnull(pidx)
+ mask = isna(pidx)
assert mask[0]
exp = np.array([True] + [False] * (len(idx) - 1), dtype=bool)
tm.assert_numpy_array_equal(mask, exp)
- mask = isnull(pidx[1:])
+ mask = isna(pidx[1:])
exp = np.zeros(len(mask), dtype=bool)
tm.assert_numpy_array_equal(mask, exp)
@@ -174,7 +176,7 @@ def test_isnull_datetime(self):
(np.array([1, 1 + 0j, np.nan, 3]).astype(object),
np.array([False, False, True, False]))])
def test_complex(self, value, expected):
- result = isnull(value)
+ result = isna(value)
if is_scalar(result):
assert result is expected
else:
@@ -183,10 +185,10 @@ def test_complex(self, value, expected):
def test_datetime_other_units(self):
idx = pd.DatetimeIndex(['2011-01-01', 'NaT', '2011-01-02'])
exp = np.array([False, True, False])
- tm.assert_numpy_array_equal(isnull(idx), exp)
- tm.assert_numpy_array_equal(notnull(idx), ~exp)
- tm.assert_numpy_array_equal(isnull(idx.values), exp)
- tm.assert_numpy_array_equal(notnull(idx.values), ~exp)
+ tm.assert_numpy_array_equal(isna(idx), exp)
+ tm.assert_numpy_array_equal(notna(idx), ~exp)
+ tm.assert_numpy_array_equal(isna(idx.values), exp)
+ tm.assert_numpy_array_equal(notna(idx.values), ~exp)
for dtype in ['datetime64[D]', 'datetime64[h]', 'datetime64[m]',
'datetime64[s]', 'datetime64[ms]', 'datetime64[us]',
@@ -194,24 +196,24 @@ def test_datetime_other_units(self):
values = idx.values.astype(dtype)
exp = np.array([False, True, False])
- tm.assert_numpy_array_equal(isnull(values), exp)
- tm.assert_numpy_array_equal(notnull(values), ~exp)
+ tm.assert_numpy_array_equal(isna(values), exp)
+ tm.assert_numpy_array_equal(notna(values), ~exp)
exp = pd.Series([False, True, False])
s = pd.Series(values)
- tm.assert_series_equal(isnull(s), exp)
- tm.assert_series_equal(notnull(s), ~exp)
+ tm.assert_series_equal(isna(s), exp)
+ tm.assert_series_equal(notna(s), ~exp)
s = pd.Series(values, dtype=object)
- tm.assert_series_equal(isnull(s), exp)
- tm.assert_series_equal(notnull(s), ~exp)
+ tm.assert_series_equal(isna(s), exp)
+ tm.assert_series_equal(notna(s), ~exp)
def test_timedelta_other_units(self):
idx = pd.TimedeltaIndex(['1 days', 'NaT', '2 days'])
exp = np.array([False, True, False])
- tm.assert_numpy_array_equal(isnull(idx), exp)
- tm.assert_numpy_array_equal(notnull(idx), ~exp)
- tm.assert_numpy_array_equal(isnull(idx.values), exp)
- tm.assert_numpy_array_equal(notnull(idx.values), ~exp)
+ tm.assert_numpy_array_equal(isna(idx), exp)
+ tm.assert_numpy_array_equal(notna(idx), ~exp)
+ tm.assert_numpy_array_equal(isna(idx.values), exp)
+ tm.assert_numpy_array_equal(notna(idx.values), ~exp)
for dtype in ['timedelta64[D]', 'timedelta64[h]', 'timedelta64[m]',
'timedelta64[s]', 'timedelta64[ms]', 'timedelta64[us]',
@@ -219,30 +221,30 @@ def test_timedelta_other_units(self):
values = idx.values.astype(dtype)
exp = np.array([False, True, False])
- tm.assert_numpy_array_equal(isnull(values), exp)
- tm.assert_numpy_array_equal(notnull(values), ~exp)
+ tm.assert_numpy_array_equal(isna(values), exp)
+ tm.assert_numpy_array_equal(notna(values), ~exp)
exp = pd.Series([False, True, False])
s = pd.Series(values)
- tm.assert_series_equal(isnull(s), exp)
- tm.assert_series_equal(notnull(s), ~exp)
+ tm.assert_series_equal(isna(s), exp)
+ tm.assert_series_equal(notna(s), ~exp)
s = pd.Series(values, dtype=object)
- tm.assert_series_equal(isnull(s), exp)
- tm.assert_series_equal(notnull(s), ~exp)
+ tm.assert_series_equal(isna(s), exp)
+ tm.assert_series_equal(notna(s), ~exp)
def test_period(self):
idx = pd.PeriodIndex(['2011-01', 'NaT', '2012-01'], freq='M')
exp = np.array([False, True, False])
- tm.assert_numpy_array_equal(isnull(idx), exp)
- tm.assert_numpy_array_equal(notnull(idx), ~exp)
+ tm.assert_numpy_array_equal(isna(idx), exp)
+ tm.assert_numpy_array_equal(notna(idx), ~exp)
exp = pd.Series([False, True, False])
s = pd.Series(idx)
- tm.assert_series_equal(isnull(s), exp)
- tm.assert_series_equal(notnull(s), ~exp)
+ tm.assert_series_equal(isna(s), exp)
+ tm.assert_series_equal(notna(s), ~exp)
s = pd.Series(idx, dtype=object)
- tm.assert_series_equal(isnull(s), exp)
- tm.assert_series_equal(notnull(s), ~exp)
+ tm.assert_series_equal(isna(s), exp)
+ tm.assert_series_equal(notna(s), ~exp)
def test_array_equivalent():
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index da1c68005b9b2..484a09f11b58a 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -13,7 +13,7 @@
import numpy as np
from pandas.compat import lrange, product
-from pandas import (compat, isnull, notnull, DataFrame, Series,
+from pandas import (compat, isna, notna, DataFrame, Series,
MultiIndex, date_range, Timestamp)
import pandas as pd
import pandas.core.nanops as nanops
@@ -81,11 +81,11 @@ def test_corr_nooverlap(self):
'C': [np.nan, np.nan, np.nan, np.nan,
np.nan, np.nan]})
rs = df.corr(meth)
- assert isnull(rs.loc['A', 'B'])
- assert isnull(rs.loc['B', 'A'])
+ assert isna(rs.loc['A', 'B'])
+ assert isna(rs.loc['B', 'A'])
assert rs.loc['A', 'A'] == 1
assert rs.loc['B', 'B'] == 1
- assert isnull(rs.loc['C', 'C'])
+ assert isna(rs.loc['C', 'C'])
def test_corr_constant(self):
tm._skip_if_no_scipy()
@@ -96,7 +96,7 @@ def test_corr_constant(self):
df = DataFrame({'A': [1, 1, 1, np.nan, np.nan, np.nan],
'B': [np.nan, np.nan, np.nan, 1, 1, 1]})
rs = df.corr(meth)
- assert isnull(rs.values).all()
+ assert isna(rs.values).all()
def test_corr_int(self):
# dtypes other than float64 #1761
@@ -136,7 +136,7 @@ def test_cov(self):
tm.assert_frame_equal(expected, result)
result = self.frame.cov(min_periods=len(self.frame) + 1)
- assert isnull(result.values).all()
+ assert isna(result.values).all()
# with NAs
frame = self.frame.copy()
@@ -389,7 +389,7 @@ def test_reduce_mixed_frame(self):
tm.assert_series_equal(test, df.T.sum(axis=1))
def test_count(self):
- f = lambda s: notnull(s).sum()
+ f = lambda s: notna(s).sum()
self._check_stat_op('count', f,
has_skipna=False,
has_numeric_only=True,
@@ -477,7 +477,7 @@ def test_product(self):
def test_median(self):
def wrapper(x):
- if isnull(x).any():
+ if isna(x).any():
return np.nan
return np.median(x)
@@ -974,7 +974,7 @@ def test_stats_mixed_type(self):
def test_median_corner(self):
def wrapper(x):
- if isnull(x).any():
+ if isna(x).any():
return np.nan
return np.median(x)
@@ -998,7 +998,7 @@ def test_cumsum_corner(self):
def test_sum_bools(self):
df = DataFrame(index=lrange(1), columns=lrange(10))
- bools = isnull(df)
+ bools = isna(df)
assert bools.sum(axis=1)[0] == 10
# Index of max / min
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
index a6f39cabb60ed..ab2e810d77634 100644
--- a/pandas/tests/frame/test_apply.py
+++ b/pandas/tests/frame/test_apply.py
@@ -9,7 +9,7 @@
import warnings
import numpy as np
-from pandas import (notnull, DataFrame, Series, MultiIndex, date_range,
+from pandas import (notna, DataFrame, Series, MultiIndex, date_range,
Timestamp, compat)
import pandas as pd
from pandas.core.dtypes.dtypes import CategoricalDtype
@@ -278,7 +278,7 @@ def transform(row):
return row
def transform2(row):
- if (notnull(row['C']) and row['C'].startswith('shin') and
+ if (notna(row['C']) and row['C'].startswith('shin') and
row['A'] == 'foo'):
row['D'] = 7
return row
diff --git a/pandas/tests/frame/test_asof.py b/pandas/tests/frame/test_asof.py
index d4e3d541937dc..fea6a5370109e 100644
--- a/pandas/tests/frame/test_asof.py
+++ b/pandas/tests/frame/test_asof.py
@@ -23,13 +23,13 @@ def test_basic(self):
freq='25s')
result = df.asof(dates)
- assert result.notnull().all(1).all()
+ assert result.notna().all(1).all()
lb = df.index[14]
ub = df.index[30]
dates = list(dates)
result = df.asof(dates)
- assert result.notnull().all(1).all()
+ assert result.notna().all(1).all()
mask = (result.index >= lb) & (result.index < ub)
rs = result[mask]
diff --git a/pandas/tests/frame/test_axis_select_reindex.py b/pandas/tests/frame/test_axis_select_reindex.py
index 87d942101f5f1..e76869bf6712b 100644
--- a/pandas/tests/frame/test_axis_select_reindex.py
+++ b/pandas/tests/frame/test_axis_select_reindex.py
@@ -11,7 +11,7 @@
from pandas.compat import lrange, lzip, u
from pandas import (compat, DataFrame, Series, Index, MultiIndex,
- date_range, isnull)
+ date_range, isna)
import pandas as pd
from pandas.util.testing import assert_frame_equal
@@ -852,11 +852,11 @@ def test_reindex_boolean(self):
reindexed = frame.reindex(np.arange(10))
assert reindexed.values.dtype == np.object_
- assert isnull(reindexed[0][1])
+ assert isna(reindexed[0][1])
reindexed = frame.reindex(columns=lrange(3))
assert reindexed.values.dtype == np.object_
- assert isnull(reindexed[1]).all()
+ assert isna(reindexed[1]).all()
def test_reindex_objects(self):
reindexed = self.mixed_frame.reindex(columns=['foo', 'A', 'B'])
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index f66070fd66813..afa3c4f25789a 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -533,7 +533,7 @@ def test_stale_cached_series_bug_473(self):
repr(Y)
result = Y.sum() # noqa
exp = Y['g'].sum() # noqa
- assert pd.isnull(Y['g']['c'])
+ assert pd.isna(Y['g']['c'])
def test_get_X_columns(self):
# numeric and object columns
@@ -566,6 +566,6 @@ def test_strange_column_corruption_issue(self):
myid = 100
- first = len(df.loc[pd.isnull(df[myid]), [myid]])
- second = len(df.loc[pd.isnull(df[myid]), [myid]])
+ first = len(df.loc[pd.isna(df[myid]), [myid]])
+ second = len(df.loc[pd.isna(df[myid]), [myid]])
assert first == second == 0
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 97cf3ce8a7216..d942330ecd8a6 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -17,7 +17,7 @@
from pandas.compat import (lmap, long, zip, range, lrange, lzip,
OrderedDict, is_platform_little_endian)
from pandas import compat
-from pandas import (DataFrame, Index, Series, isnull,
+from pandas import (DataFrame, Index, Series, isna,
MultiIndex, Timedelta, Timestamp,
date_range)
import pandas as pd
@@ -224,7 +224,7 @@ def test_constructor_dict(self):
assert len(frame) == len(self.ts2)
assert 'col1' not in frame
- assert isnull(frame['col3']).all()
+ assert isna(frame['col3']).all()
# Corner cases
assert len(DataFrame({})) == 0
@@ -279,12 +279,12 @@ def test_constructor_multi_index(self):
tuples = [(2, 3), (3, 3), (3, 3)]
mi = MultiIndex.from_tuples(tuples)
df = DataFrame(index=mi, columns=mi)
- assert pd.isnull(df).values.ravel().all()
+ assert pd.isna(df).values.ravel().all()
tuples = [(3, 3), (2, 3), (3, 3)]
mi = MultiIndex.from_tuples(tuples)
df = DataFrame(index=mi, columns=mi)
- assert pd.isnull(df).values.ravel().all()
+ assert pd.isna(df).values.ravel().all()
def test_constructor_error_msgs(self):
msg = "Empty data passed with indices specified."
@@ -625,7 +625,7 @@ def test_constructor_maskedarray_nonfloat(self):
assert len(frame.index) == 2
assert len(frame.columns) == 3
- assert isnull(frame).values.all()
+ assert isna(frame).values.all()
# cast type
frame = DataFrame(mat, columns=['A', 'B', 'C'],
@@ -1496,7 +1496,7 @@ def check(df):
df.iloc[:, i]
# allow single nans to succeed
- indexer = np.arange(len(df.columns))[isnull(df.columns)]
+ indexer = np.arange(len(df.columns))[isna(df.columns)]
if len(indexer) == 1:
tm.assert_series_equal(df.iloc[:, indexer[0]],
@@ -1966,7 +1966,7 @@ def test_frame_datetime64_mixed_index_ctor_1681(self):
# it works!
d = DataFrame({'A': 'foo', 'B': ts}, index=dr)
- assert d['B'].isnull().all()
+ assert d['B'].isna().all()
def test_frame_timeseries_to_records(self):
index = date_range('1/1/2000', periods=10)
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 065580d56a683..5941b2ab7c2cb 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -382,7 +382,7 @@ def test_dtypes_gh8722(self):
assert_series_equal(result, expected)
# compat, GH 8722
- with option_context('use_inf_as_null', True):
+ with option_context('use_inf_as_na', True):
df = DataFrame([[1]])
result = df.dtypes
assert_series_equal(result, Series({0: np.dtype('int64')}))
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index ff79bedbc60f6..dd2759cd3ef8e 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -15,7 +15,7 @@
import numpy as np
import pandas.core.common as com
-from pandas import (DataFrame, Index, Series, notnull, isnull,
+from pandas import (DataFrame, Index, Series, notna, isna,
MultiIndex, DatetimeIndex, Timestamp,
date_range)
import pandas as pd
@@ -312,7 +312,7 @@ def test_getitem_boolean_casting(self):
df = DataFrame(data=np.random.randn(100, 50))
df = df.where(df > 0) # create nans
bools = df > 0
- mask = isnull(df)
+ mask = isna(df)
expected = bools.astype(float).mask(mask)
result = bools.mask(mask)
assert_frame_equal(result, expected)
@@ -395,7 +395,7 @@ def test_getitem_setitem_ix_negative_integers(self):
df = DataFrame(np.random.randn(8, 4))
with catch_warnings(record=True):
- assert isnull(df.ix[:, [-1]].values).all()
+ assert isna(df.ix[:, [-1]].values).all()
# #1942
a = DataFrame(randn(20, 2), index=[chr(x + 65) for x in range(20)])
@@ -487,7 +487,7 @@ def test_setitem_always_copy(self):
self.frame['E'] = s
self.frame['E'][5:10] = nan
- assert notnull(s[5:10]).all()
+ assert notna(s[5:10]).all()
def test_setitem_boolean(self):
df = self.frame.copy()
@@ -705,7 +705,7 @@ def test_setitem_empty(self):
'c': ['111', '222', '333']})
result = df.copy()
- result.loc[result.b.isnull(), 'a'] = result.a
+ result.loc[result.b.isna(), 'a'] = result.a
assert_frame_equal(result, df)
def test_setitem_empty_frame_with_boolean(self):
@@ -795,7 +795,7 @@ def test_getitem_fancy_slice_integers_step(self):
# this is OK
result = df.iloc[:8:2] # noqa
df.iloc[:8:2] = np.nan
- assert isnull(df.iloc[:8:2]).values.all()
+ assert isna(df.iloc[:8:2]).values.all()
def test_getitem_setitem_integer_slice_keyerrors(self):
df = DataFrame(np.random.randn(10, 5), index=lrange(0, 20, 2))
@@ -1020,7 +1020,7 @@ def test_setitem_fancy_mixed_2d(self):
assert (result.values == 5).all()
self.mixed_frame.ix[5] = np.nan
- assert isnull(self.mixed_frame.ix[5]).all()
+ assert isna(self.mixed_frame.ix[5]).all()
self.mixed_frame.ix[5] = self.mixed_frame.ix[6]
assert_series_equal(self.mixed_frame.ix[5], self.mixed_frame.ix[6],
@@ -1492,15 +1492,15 @@ def test_setitem_single_column_mixed_datetime(self):
# set an allowable datetime64 type
df.loc['b', 'timestamp'] = iNaT
- assert isnull(df.loc['b', 'timestamp'])
+ assert isna(df.loc['b', 'timestamp'])
# allow this syntax
df.loc['c', 'timestamp'] = nan
- assert isnull(df.loc['c', 'timestamp'])
+ assert isna(df.loc['c', 'timestamp'])
# allow this syntax
df.loc['d', :] = nan
- assert not isnull(df.loc['c', :]).all()
+ assert not isna(df.loc['c', :]).all()
# as of GH 3216 this will now work!
# try to set with a list like item
@@ -1695,7 +1695,7 @@ def test_set_value_resize(self):
res = self.frame.copy()
res3 = res.set_value('foobar', 'baz', 5)
assert is_float_dtype(res3['baz'])
- assert isnull(res3['baz'].drop(['foobar'])).all()
+ assert isna(res3['baz'].drop(['foobar'])).all()
pytest.raises(ValueError, res3.set_value, 'foobar', 'baz', 'sam')
def test_set_value_with_index_dtype_change(self):
@@ -1935,7 +1935,7 @@ def test_reindex_frame_add_nat(self):
result = df.reindex(lrange(15))
assert np.issubdtype(result['B'].dtype, np.dtype('M8[ns]'))
- mask = com.isnull(result)['B']
+ mask = com.isna(result)['B']
assert mask[-5:].all()
assert not mask[:-5].any()
@@ -2590,7 +2590,7 @@ def test_where_bug(self):
# GH7506
a = DataFrame({0: [1, 2], 1: [3, 4], 2: [5, 6]})
b = DataFrame({0: [np.nan, 8], 1: [9, np.nan], 2: [np.nan, np.nan]})
- do_not_replace = b.isnull() | (a > b)
+ do_not_replace = b.isna() | (a > b)
expected = a.copy()
expected[~do_not_replace] = b
@@ -2600,7 +2600,7 @@ def test_where_bug(self):
a = DataFrame({0: [4, 6], 1: [1, 0]})
b = DataFrame({0: [np.nan, 3], 1: [3, np.nan]})
- do_not_replace = b.isnull() | (a > b)
+ do_not_replace = b.isna() | (a > b)
expected = a.copy()
expected[~do_not_replace] = b
@@ -2633,10 +2633,10 @@ def test_where_none(self):
# GH 7656
df = DataFrame([{'A': 1, 'B': np.nan, 'C': 'Test'}, {
'A': np.nan, 'B': 'Test', 'C': np.nan}])
- expected = df.where(~isnull(df), None)
+ expected = df.where(~isna(df), None)
with tm.assert_raises_regex(TypeError, 'boolean setting '
'on mixed-type'):
- df.where(~isnull(df), None, inplace=True)
+ df.where(~isna(df), None, inplace=True)
def test_where_align(self):
@@ -2650,10 +2650,10 @@ def create():
# series
df = create()
expected = df.fillna(df.mean())
- result = df.where(pd.notnull(df), df.mean(), axis='columns')
+ result = df.where(pd.notna(df), df.mean(), axis='columns')
assert_frame_equal(result, expected)
- df.where(pd.notnull(df), df.mean(), inplace=True, axis='columns')
+ df.where(pd.notna(df), df.mean(), inplace=True, axis='columns')
assert_frame_equal(df, expected)
df = create().fillna(0)
@@ -2666,7 +2666,7 @@ def create():
# frame
df = create()
expected = df.fillna(1)
- result = df.where(pd.notnull(df), DataFrame(
+ result = df.where(pd.notna(df), DataFrame(
1, index=df.index, columns=df.columns))
assert_frame_equal(result, expected)
@@ -2948,7 +2948,7 @@ def test_setitem(self):
df2.iloc[1, 1] = pd.NaT
df2.iloc[1, 2] = pd.NaT
result = df2['B']
- assert_series_equal(notnull(result), Series(
+ assert_series_equal(notna(result), Series(
[True, False, True], name='B'))
assert_series_equal(df2.dtypes, df.dtypes)
@@ -3000,7 +3000,7 @@ def test_setitem(self):
df2.iloc[1, 1] = pd.NaT
df2.iloc[1, 2] = pd.NaT
result = df2['B']
- assert_series_equal(notnull(result), Series(
+ assert_series_equal(notna(result), Series(
[True, False, True], name='B'))
assert_series_equal(df2.dtypes, Series([np.dtype('uint64'),
np.dtype('O'), np.dtype('O')],
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index 438d7481ecc3e..5052bef24e95a 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -137,12 +137,12 @@ def test_operators_none_as_na(self):
filled = df.fillna(np.nan)
result = op(df, 3)
expected = op(filled, 3).astype(object)
- expected[com.isnull(expected)] = None
+ expected[com.isna(expected)] = None
assert_frame_equal(result, expected)
result = op(df, df)
expected = op(filled, filled).astype(object)
- expected[com.isnull(expected)] = None
+ expected[com.isna(expected)] = None
assert_frame_equal(result, expected)
result = op(df, df.fillna(7))
@@ -1045,8 +1045,8 @@ def test_combine_generic(self):
combined = df1.combine(df2, np.add)
combined2 = df2.combine(df1, np.add)
- assert combined['D'].isnull().all()
- assert combined2['D'].isnull().all()
+ assert combined['D'].isna().all()
+ assert combined2['D'].isna().all()
chunk = combined.loc[combined.index[:-5], ['A', 'B', 'C']]
chunk2 = combined2.loc[combined2.index[:-5], ['A', 'B', 'C']]
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index fdb0119d8ae60..e2f362ebdc895 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -528,7 +528,7 @@ def test_unstack_nan_index(self): # GH7466
def verify(df):
mk_list = lambda a: list(a) if isinstance(a, tuple) else [a]
- rows, cols = df.notnull().values.nonzero()
+ rows, cols = df.notna().values.nonzero()
for i, j in zip(rows, cols):
left = sorted(df.iloc[i, j].split('.'))
right = mk_list(df.index[i]) + mk_list(df.columns[j])
@@ -547,7 +547,7 @@ def verify(df):
mi = df.set_index(list(idx))
for lev in range(2):
udf = mi.unstack(level=lev)
- assert udf.notnull().values.sum() == len(df)
+ assert udf.notna().values.sum() == len(df)
verify(udf['jolie'])
df = DataFrame({'1st': ['d'] * 3 + [nan] * 5 + ['a'] * 2 +
@@ -565,7 +565,7 @@ def verify(df):
mi = df.set_index(list(idx))
for lev in range(3):
udf = mi.unstack(level=lev)
- assert udf.notnull().values.sum() == 2 * len(df)
+ assert udf.notna().values.sum() == 2 * len(df)
for col in ['4th', '5th']:
verify(udf[col])
@@ -670,7 +670,7 @@ def verify(df):
df.loc[1, '3rd'] = df.loc[4, '3rd'] = nan
left = df.set_index(['1st', '2nd', '3rd']).unstack(['2nd', '3rd'])
- assert left.notnull().values.sum() == 2 * len(df)
+ assert left.notna().values.sum() == 2 * len(df)
for col in ['jim', 'joe']:
for _, r in df.iterrows():
diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py
index aaca8a60fe062..19fbf854256c6 100644
--- a/pandas/tests/frame/test_timeseries.py
+++ b/pandas/tests/frame/test_timeseries.py
@@ -281,7 +281,7 @@ def test_shift_duplicate_columns(self):
shifted.append(df)
# sanity check the base case
- nulls = shifted[0].isnull().sum()
+ nulls = shifted[0].isna().sum()
assert_series_equal(nulls, Series(range(1, 6), dtype='int64'))
# check all answers are the same
diff --git a/pandas/tests/groupby/test_bin_groupby.py b/pandas/tests/groupby/test_bin_groupby.py
index f527c732fb76b..8b95455b53d22 100644
--- a/pandas/tests/groupby/test_bin_groupby.py
+++ b/pandas/tests/groupby/test_bin_groupby.py
@@ -6,7 +6,7 @@
import numpy as np
from pandas.core.dtypes.common import _ensure_int64
-from pandas import Index, isnull
+from pandas import Index, isna
from pandas.util.testing import assert_almost_equal
import pandas.util.testing as tm
from pandas._libs import lib, groupby
@@ -97,7 +97,7 @@ def _check(dtype):
func(out, counts, obj[:, None], labels)
def _ohlc(group):
- if isnull(group).all():
+ if isna(group).all():
return np.repeat(nan, 4)
return [group[0], group.max(), group.min(), group[-1]]
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 19124a33bdbcb..0dea1e8447b2b 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -2562,7 +2562,7 @@ def test_cython_grouper_series_bug_noncontig(self):
inds = np.tile(lrange(10), 10)
result = obj.groupby(inds).agg(Series.median)
- assert result.isnull().all()
+ assert result.isna().all()
def test_series_grouper_noncontig_index(self):
index = Index(tm.rands_array(10, 100))
@@ -3540,7 +3540,7 @@ def test_max_nan_bug(self):
r = gb[['File']].max()
e = gb['File'].max().to_frame()
tm.assert_frame_equal(r, e)
- assert not r['File'].isnull().any()
+ assert not r['File'].isna().any()
def test_nlargest(self):
a = Series([1, 3, 5, 7, 2, 9, 0, 4, 6, 10])
diff --git a/pandas/tests/groupby/test_nth.py b/pandas/tests/groupby/test_nth.py
index 47e6e7839422a..28392537be3c6 100644
--- a/pandas/tests/groupby/test_nth.py
+++ b/pandas/tests/groupby/test_nth.py
@@ -1,6 +1,6 @@
import numpy as np
import pandas as pd
-from pandas import DataFrame, MultiIndex, Index, Series, isnull
+from pandas import DataFrame, MultiIndex, Index, Series, isna
from pandas.compat import lrange
from pandas.util.testing import assert_frame_equal, assert_series_equal
@@ -41,9 +41,9 @@ def test_first_last_nth(self):
grouped['B'].nth(0)
self.df.loc[self.df['A'] == 'foo', 'B'] = np.nan
- assert isnull(grouped['B'].first()['foo'])
- assert isnull(grouped['B'].last()['foo'])
- assert isnull(grouped['B'].nth(0)['foo'])
+ assert isna(grouped['B'].first()['foo'])
+ assert isna(grouped['B'].last()['foo'])
+ assert isna(grouped['B'].nth(0)['foo'])
# v0.14.0 whatsnew
df = DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=['A', 'B'])
diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py
index 70b6b1e439691..df0a93d783375 100644
--- a/pandas/tests/groupby/test_timegrouper.py
+++ b/pandas/tests/groupby/test_timegrouper.py
@@ -599,7 +599,7 @@ def test_first_last_max_min_on_time_data(self):
'td': [nan, td(days=1), td(days=2), td(days=3), nan]})
df_test.dt = pd.to_datetime(df_test.dt)
df_test['group'] = 'A'
- df_ref = df_test[df_test.dt.notnull()]
+ df_ref = df_test[df_test.dt.notna()]
grouped_test = df_test.groupby('group')
grouped_ref = df_ref.groupby('group')
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 1513a1c690014..1fdc08d68eb26 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -10,7 +10,7 @@
from pandas import (Series, Index, Float64Index, Int64Index, UInt64Index,
RangeIndex, MultiIndex, CategoricalIndex, DatetimeIndex,
TimedeltaIndex, PeriodIndex, IntervalIndex,
- notnull, isnull)
+ notna, isna)
from pandas.core.indexes.datetimelike import DatetimeIndexOpsMixin
from pandas.core.dtypes.common import needs_i8_conversion
from pandas._libs.tslib import iNaT
@@ -514,7 +514,7 @@ def test_numpy_repeat(self):
def test_where(self):
i = self.create_index()
- result = i.where(notnull(i))
+ result = i.where(notna(i))
expected = i
tm.assert_index_equal(result, expected)
@@ -884,7 +884,7 @@ def test_fillna(self):
pass
elif isinstance(index, MultiIndex):
idx = index.copy()
- msg = "isnull is not defined for MultiIndex"
+ msg = "isna is not defined for MultiIndex"
with tm.assert_raises_regex(NotImplementedError, msg):
idx.fillna(idx[0])
else:
@@ -924,23 +924,23 @@ def test_nulls(self):
for name, index in self.indices.items():
if len(index) == 0:
tm.assert_numpy_array_equal(
- index.isnull(), np.array([], dtype=bool))
+ index.isna(), np.array([], dtype=bool))
elif isinstance(index, MultiIndex):
idx = index.copy()
- msg = "isnull is not defined for MultiIndex"
+ msg = "isna is not defined for MultiIndex"
with tm.assert_raises_regex(NotImplementedError, msg):
- idx.isnull()
+ idx.isna()
else:
if not index.hasnans:
tm.assert_numpy_array_equal(
- index.isnull(), np.zeros(len(index), dtype=bool))
+ index.isna(), np.zeros(len(index), dtype=bool))
tm.assert_numpy_array_equal(
- index.notnull(), np.ones(len(index), dtype=bool))
+ index.notna(), np.ones(len(index), dtype=bool))
else:
- result = isnull(index)
- tm.assert_numpy_array_equal(index.isnull(), result)
- tm.assert_numpy_array_equal(index.notnull(), ~result)
+ result = isna(index)
+ tm.assert_numpy_array_equal(index.isna(), result)
+ tm.assert_numpy_array_equal(index.notna(), ~result)
def test_empty(self):
# GH 15270
diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py
index 4ef5cc5499f4d..9416b08f9654a 100644
--- a/pandas/tests/indexes/datetimes/test_indexing.py
+++ b/pandas/tests/indexes/datetimes/test_indexing.py
@@ -5,7 +5,7 @@
import pandas as pd
import pandas.util.testing as tm
import pandas.compat as compat
-from pandas import notnull, Index, DatetimeIndex, datetime, date_range
+from pandas import notna, Index, DatetimeIndex, datetime, date_range
class TestDatetimeIndex(object):
@@ -16,29 +16,29 @@ def test_where_other(self):
i = pd.date_range('20130101', periods=3, tz='US/Eastern')
for arr in [np.nan, pd.NaT]:
- result = i.where(notnull(i), other=np.nan)
+ result = i.where(notna(i), other=np.nan)
expected = i
tm.assert_index_equal(result, expected)
i2 = i.copy()
i2 = Index([pd.NaT, pd.NaT] + i[2:].tolist())
- result = i.where(notnull(i2), i2)
+ result = i.where(notna(i2), i2)
tm.assert_index_equal(result, i2)
i2 = i.copy()
i2 = Index([pd.NaT, pd.NaT] + i[2:].tolist())
- result = i.where(notnull(i2), i2.values)
+ result = i.where(notna(i2), i2.values)
tm.assert_index_equal(result, i2)
def test_where_tz(self):
i = pd.date_range('20130101', periods=3, tz='US/Eastern')
- result = i.where(notnull(i))
+ result = i.where(notna(i))
expected = i
tm.assert_index_equal(result, expected)
i2 = i.copy()
i2 = Index([pd.NaT, pd.NaT] + i[2:].tolist())
- result = i.where(notnull(i2))
+ result = i.where(notna(i2))
expected = i2
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/datetimes/test_ops.py b/pandas/tests/indexes/datetimes/test_ops.py
index f33cdf8800791..86e65feec04f3 100644
--- a/pandas/tests/indexes/datetimes/test_ops.py
+++ b/pandas/tests/indexes/datetimes/test_ops.py
@@ -116,13 +116,13 @@ def test_minmax(self):
for op in ['min', 'max']:
# Return NaT
obj = DatetimeIndex([])
- assert pd.isnull(getattr(obj, op)())
+ assert pd.isna(getattr(obj, op)())
obj = DatetimeIndex([pd.NaT])
- assert pd.isnull(getattr(obj, op)())
+ assert pd.isna(getattr(obj, op)())
obj = DatetimeIndex([pd.NaT, pd.NaT, pd.NaT])
- assert pd.isnull(getattr(obj, op)())
+ assert pd.isna(getattr(obj, op)())
def test_numpy_minmax(self):
dr = pd.date_range(start='2016-01-15', end='2016-01-20')
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index a47db755b44af..7ff9c2b23cbfb 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -20,7 +20,7 @@
from pandas.core.dtypes.common import is_datetime64_ns_dtype
from pandas.util import testing as tm
from pandas.util.testing import assert_series_equal, _skip_if_has_locale
-from pandas import (isnull, to_datetime, Timestamp, Series, DataFrame,
+from pandas import (isna, to_datetime, Timestamp, Series, DataFrame,
Index, DatetimeIndex, NaT, date_range, bdate_range,
compat)
@@ -683,7 +683,7 @@ def test_to_datetime_types(self):
assert result is NaT
result = to_datetime(['', ''])
- assert isnull(result).all()
+ assert isna(result).all()
# ints
result = Timestamp(0)
@@ -751,7 +751,7 @@ def test_string_na_nat_conversion(self):
expected = np.empty(4, dtype='M8[ns]')
for i, val in enumerate(strings):
- if isnull(val):
+ if isna(val):
expected[i] = tslib.iNaT
else:
expected[i] = parse_date(val)
@@ -787,7 +787,7 @@ def test_string_na_nat_conversion(self):
expected = Series(np.empty(5, dtype='M8[ns]'), index=idx)
for i in range(5):
x = series[i]
- if isnull(x):
+ if isna(x):
expected[i] = tslib.iNaT
else:
expected[i] = to_datetime(x)
@@ -977,13 +977,13 @@ class TestDaysInMonth(object):
# tests for issue #10154
def test_day_not_in_month_coerce(self):
- assert isnull(to_datetime('2015-02-29', errors='coerce'))
- assert isnull(to_datetime('2015-02-29', format="%Y-%m-%d",
- errors='coerce'))
- assert isnull(to_datetime('2015-02-32', format="%Y-%m-%d",
- errors='coerce'))
- assert isnull(to_datetime('2015-04-31', format="%Y-%m-%d",
- errors='coerce'))
+ assert isna(to_datetime('2015-02-29', errors='coerce'))
+ assert isna(to_datetime('2015-02-29', format="%Y-%m-%d",
+ errors='coerce'))
+ assert isna(to_datetime('2015-02-32', format="%Y-%m-%d",
+ errors='coerce'))
+ assert isna(to_datetime('2015-04-31', format="%Y-%m-%d",
+ errors='coerce'))
def test_day_not_in_month_raise(self):
pytest.raises(ValueError, to_datetime, '2015-02-29',
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 291ca317f8fae..e24e2ad936e2c 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -6,7 +6,7 @@
import pandas as pd
from pandas.util import testing as tm
-from pandas import (PeriodIndex, period_range, notnull, DatetimeIndex, NaT,
+from pandas import (PeriodIndex, period_range, notna, DatetimeIndex, NaT,
Index, Period, Int64Index, Series, DataFrame, date_range,
offsets, compat)
@@ -92,13 +92,13 @@ def test_get_loc(self):
def test_where(self):
i = self.create_index()
- result = i.where(notnull(i))
+ result = i.where(notna(i))
expected = i
tm.assert_index_equal(result, expected)
i2 = pd.PeriodIndex([pd.NaT, pd.NaT] + i[2:].tolist(),
freq='D')
- result = i.where(notnull(i2))
+ result = i.where(notna(i2))
expected = i2
tm.assert_index_equal(result, expected)
@@ -116,20 +116,20 @@ def test_where_other(self):
i = self.create_index()
for arr in [np.nan, pd.NaT]:
- result = i.where(notnull(i), other=np.nan)
+ result = i.where(notna(i), other=np.nan)
expected = i
tm.assert_index_equal(result, expected)
i2 = i.copy()
i2 = pd.PeriodIndex([pd.NaT, pd.NaT] + i[2:].tolist(),
freq='D')
- result = i.where(notnull(i2), i2)
+ result = i.where(notna(i2), i2)
tm.assert_index_equal(result, i2)
i2 = i.copy()
i2 = pd.PeriodIndex([pd.NaT, pd.NaT] + i[2:].tolist(),
freq='D')
- result = i.where(notnull(i2), i2.values)
+ result = i.where(notna(i2), i2.values)
tm.assert_index_equal(result, i2)
def test_get_indexer(self):
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 842e8fea0df9b..ef36e4a91aa1c 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -16,7 +16,7 @@
from pandas import (period_range, date_range, Series,
DataFrame, Float64Index, Int64Index,
CategoricalIndex, DatetimeIndex, TimedeltaIndex,
- PeriodIndex, isnull)
+ PeriodIndex, isna)
from pandas.core.index import _get_combined_index
from pandas.util.testing import assert_almost_equal
from pandas.compat.numpy import np_datetime64_compat
@@ -504,7 +504,7 @@ def test_is_(self):
def test_asof(self):
d = self.dateIndex[0]
assert self.dateIndex.asof(d) == d
- assert isnull(self.dateIndex.asof(d - timedelta(1)))
+ assert isna(self.dateIndex.asof(d - timedelta(1)))
d = self.dateIndex[-1]
assert self.dateIndex.asof(d + timedelta(1)) == d
diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index e8d780e041316..a3d72fdb88239 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -10,7 +10,7 @@
import numpy as np
-from pandas import Categorical, IntervalIndex, compat, notnull
+from pandas import Categorical, IntervalIndex, compat, notna
from pandas.util.testing import assert_almost_equal
import pandas.core.config as cf
import pandas as pd
@@ -236,13 +236,13 @@ def f(x):
def test_where(self):
i = self.create_index()
- result = i.where(notnull(i))
+ result = i.where(notna(i))
expected = i
tm.assert_index_equal(result, expected)
i2 = pd.CategoricalIndex([np.nan, np.nan] + i[2:].tolist(),
categories=i.categories)
- result = i.where(notnull(i2))
+ result = i.where(notna(i2))
expected = i2
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/test_interval.py b/pandas/tests/indexes/test_interval.py
index 33745017fe3d6..fe86a2121761a 100644
--- a/pandas/tests/indexes/test_interval.py
+++ b/pandas/tests/indexes/test_interval.py
@@ -3,7 +3,7 @@
import pytest
import numpy as np
-from pandas import (Interval, IntervalIndex, Index, isnull,
+from pandas import (Interval, IntervalIndex, Index, isna,
interval_range, Timestamp, Timedelta,
compat)
from pandas._libs.interval import IntervalTree
@@ -152,16 +152,16 @@ def test_properties(self):
def test_with_nans(self):
index = self.index
assert not index.hasnans
- tm.assert_numpy_array_equal(index.isnull(),
+ tm.assert_numpy_array_equal(index.isna(),
np.array([False, False]))
- tm.assert_numpy_array_equal(index.notnull(),
+ tm.assert_numpy_array_equal(index.notna(),
np.array([True, True]))
index = self.index_with_nan
assert index.hasnans
- tm.assert_numpy_array_equal(index.notnull(),
+ tm.assert_numpy_array_equal(index.notna(),
np.array([True, False, True]))
- tm.assert_numpy_array_equal(index.isnull(),
+ tm.assert_numpy_array_equal(index.isna(),
np.array([False, True, False]))
def test_copy(self):
@@ -228,7 +228,7 @@ def test_astype(self):
def test_where(self):
expected = self.index
- result = self.index.where(self.index.notnull())
+ result = self.index.where(self.index.notna())
tm.assert_index_equal(result, expected)
idx = IntervalIndex.from_breaks([1, 2])
@@ -311,7 +311,7 @@ def test_get_item(self):
closed='right')
assert i[0] == Interval(0.0, 1.0)
assert i[1] == Interval(1.0, 2.0)
- assert isnull(i[2])
+ assert isna(i[2])
result = i[0:1]
expected = IntervalIndex.from_arrays((0.,), (1.,), closed='right')
@@ -620,7 +620,7 @@ def test_missing_values(self):
with pytest.raises(ValueError):
IntervalIndex.from_arrays([np.nan, 0, 1], np.array([0, 1, 2]))
- tm.assert_numpy_array_equal(isnull(idx),
+ tm.assert_numpy_array_equal(isna(idx),
np.array([True, False, False]))
def test_sort_values(self):
@@ -631,15 +631,15 @@ def test_sort_values(self):
# nan
idx = self.index_with_nan
- mask = idx.isnull()
+ mask = idx.isna()
tm.assert_numpy_array_equal(mask, np.array([False, True, False]))
result = idx.sort_values()
- mask = result.isnull()
+ mask = result.isna()
tm.assert_numpy_array_equal(mask, np.array([False, False, True]))
result = idx.sort_values(ascending=False)
- mask = result.isnull()
+ mask = result.isna()
tm.assert_numpy_array_equal(mask, np.array([True, False, False]))
def test_datetime(self):
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index 719cd2f7e01a4..da1b309f5a621 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -2366,12 +2366,12 @@ def test_slice_keep_name(self):
names=['x', 'y'])
assert x[1:].names == x.names
- def test_isnull_behavior(self):
+ def test_isna_behavior(self):
# should not segfault GH5123
# NOTE: if MI representation changes, may make sense to allow
- # isnull(MI)
+ # isna(MI)
with pytest.raises(NotImplementedError):
- pd.isnull(self.index)
+ pd.isna(self.index)
def test_level_setting_resets_attributes(self):
ind = MultiIndex.from_arrays([
@@ -2889,13 +2889,13 @@ def test_nan_stays_float(self):
labels=[[0], [0]],
names=[0, 1])
idxm = idx0.join(idx1, how='outer')
- assert pd.isnull(idx0.get_level_values(1)).all()
+ assert pd.isna(idx0.get_level_values(1)).all()
# the following failed in 0.14.1
- assert pd.isnull(idxm.get_level_values(1)[:-1]).all()
+ assert pd.isna(idxm.get_level_values(1)[:-1]).all()
df0 = pd.DataFrame([[1, 2]], index=idx0)
df1 = pd.DataFrame([[3, 4]], index=idx1)
dfm = df0 - df1
- assert pd.isnull(df0.index.get_level_values(1)).all()
+ assert pd.isna(df0.index.get_level_values(1)).all()
# the following failed in 0.14.1
- assert pd.isnull(dfm.index.get_level_values(1)[:-1]).all()
+ assert pd.isna(dfm.index.get_level_values(1)[:-1]).all()
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index 62ac337d02727..1a0a38c173284 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -7,7 +7,7 @@
import numpy as np
-from pandas import (date_range, notnull, Series, Index, Float64Index,
+from pandas import (date_range, notna, Series, Index, Float64Index,
Int64Index, UInt64Index, RangeIndex)
import pandas.util.testing as tm
@@ -228,11 +228,11 @@ def test_constructor(self):
# nan handling
result = Float64Index([np.nan, np.nan])
- assert pd.isnull(result.values).all()
+ assert pd.isna(result.values).all()
result = Float64Index(np.array([np.nan]))
- assert pd.isnull(result.values).all()
+ assert pd.isna(result.values).all()
result = Index(np.array([np.nan]))
- assert pd.isnull(result.values).all()
+ assert pd.isna(result.values).all()
def test_constructor_invalid(self):
@@ -717,7 +717,7 @@ def test_coerce_list(self):
def test_where(self):
i = self.create_index()
- result = i.where(notnull(i))
+ result = i.where(notna(i))
expected = i
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/test_range.py b/pandas/tests/indexes/test_range.py
index 0d88e88030604..566354da4870d 100644
--- a/pandas/tests/indexes/test_range.py
+++ b/pandas/tests/indexes/test_range.py
@@ -10,7 +10,7 @@
import numpy as np
-from pandas import (notnull, Series, Index, Float64Index,
+from pandas import (notna, Series, Index, Float64Index,
Int64Index, RangeIndex)
import pandas.util.testing as tm
@@ -929,7 +929,7 @@ def test_len_specialised(self):
def test_where(self):
i = self.create_index()
- result = i.where(notnull(i))
+ result = i.where(notna(i))
expected = i
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/timedeltas/test_ops.py b/pandas/tests/indexes/timedeltas/test_ops.py
index 9a9912d4f0ab1..f4f669ee1d087 100644
--- a/pandas/tests/indexes/timedeltas/test_ops.py
+++ b/pandas/tests/indexes/timedeltas/test_ops.py
@@ -71,13 +71,13 @@ def test_minmax(self):
for op in ['min', 'max']:
# Return NaT
obj = TimedeltaIndex([])
- assert pd.isnull(getattr(obj, op)())
+ assert pd.isna(getattr(obj, op)())
obj = TimedeltaIndex([pd.NaT])
- assert pd.isnull(getattr(obj, op)())
+ assert pd.isna(getattr(obj, op)())
obj = TimedeltaIndex([pd.NaT, pd.NaT, pd.NaT])
- assert pd.isnull(getattr(obj, op)())
+ assert pd.isna(getattr(obj, op)())
def test_numpy_minmax(self):
dr = pd.date_range(start='2016-01-15', end='2016-01-20')
diff --git a/pandas/tests/indexes/timedeltas/test_tools.py b/pandas/tests/indexes/timedeltas/test_tools.py
index a991b7bbe140a..1a4d1b1d7abaa 100644
--- a/pandas/tests/indexes/timedeltas/test_tools.py
+++ b/pandas/tests/indexes/timedeltas/test_tools.py
@@ -6,7 +6,7 @@
import pandas as pd
import pandas.util.testing as tm
from pandas.util.testing import assert_series_equal
-from pandas import (Series, Timedelta, to_timedelta, isnull,
+from pandas import (Series, Timedelta, to_timedelta, isna,
TimedeltaIndex)
from pandas._libs.tslib import iNaT
@@ -31,7 +31,7 @@ def conv(v):
assert result.astype('int64') == iNaT
result = to_timedelta(['', ''])
- assert isnull(result).all()
+ assert isna(result).all()
# pass thru
result = to_timedelta(np.array([np.timedelta64(1, 's')]))
diff --git a/pandas/tests/indexing/test_chaining_and_caching.py b/pandas/tests/indexing/test_chaining_and_caching.py
index 27a889e58e55e..25e572ee09a6b 100644
--- a/pandas/tests/indexing/test_chaining_and_caching.py
+++ b/pandas/tests/indexing/test_chaining_and_caching.py
@@ -321,7 +321,7 @@ def test_setting_with_copy_bug(self):
df = pd.DataFrame({'a': list(range(4)),
'b': list('ab..'),
'c': ['a', 'b', np.nan, 'd']})
- mask = pd.isnull(df.c)
+ mask = pd.isna(df.c)
def f():
df[['c']][mask] = df[['b']][mask]
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index 769cf8ec395dd..1ba9f3101e7b6 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -7,7 +7,7 @@
import pandas as pd
from pandas.compat import lrange, lmap
-from pandas import Series, DataFrame, date_range, concat, isnull
+from pandas import Series, DataFrame, date_range, concat, isna
from pandas.util import testing as tm
from pandas.tests.indexing.common import Base
@@ -191,7 +191,7 @@ def test_iloc_getitem_dups(self):
# cross-sectional indexing
result = df.iloc[0, 0]
- assert isnull(result)
+ assert isna(result)
result = df.iloc[0, :]
expected = Series([np.nan, 1, 3, 3], index=['A', 'B', 'A', 'B'],
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index e5b70a9fadb8f..3ecd1f3029cad 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -363,7 +363,7 @@ def test_multi_assign(self):
df.iloc[1, 0] = np.nan
df2 = df.copy()
- mask = ~df2.FC.isnull()
+ mask = ~df2.FC.isna()
cols = ['col1', 'col2']
dft = df2 * 2
diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
index 4d1f9936af983..34ed8782b346c 100644
--- a/pandas/tests/io/parser/common.py
+++ b/pandas/tests/io/parser/common.py
@@ -704,7 +704,7 @@ def test_missing_trailing_delimiters(self):
1,3,3,
1,4,5"""
result = self.read_csv(StringIO(data))
- assert result['D'].isnull()[1:].all()
+ assert result['D'].isna()[1:].all()
def test_skipinitialspace(self):
s = ('"09-Apr-2012", "01:10:18.300", 2456026.548822908, 12849, '
@@ -718,7 +718,7 @@ def test_skipinitialspace(self):
# it's 33 columns
result = self.read_csv(sfile, names=lrange(33), na_values=['-9999.0'],
header=None, skipinitialspace=True)
- assert pd.isnull(result.iloc[0, 29])
+ assert pd.isna(result.iloc[0, 29])
def test_utf16_bom_skiprows(self):
# #2298
diff --git a/pandas/tests/io/parser/converters.py b/pandas/tests/io/parser/converters.py
index 8fde709e39cae..1176b1e84e29b 100644
--- a/pandas/tests/io/parser/converters.py
+++ b/pandas/tests/io/parser/converters.py
@@ -133,7 +133,7 @@ def convert_score(x):
result = self.read_csv(fh, converters={'score': convert_score,
'days': convert_days},
na_values=['', None])
- assert pd.isnull(result['days'][1])
+ assert pd.isna(result['days'][1])
fh = StringIO(data)
result2 = self.read_csv(fh, converters={'score': convert_score,
diff --git a/pandas/tests/io/parser/na_values.py b/pandas/tests/io/parser/na_values.py
index c6d1cc79b82d7..7fbf174e19eee 100644
--- a/pandas/tests/io/parser/na_values.py
+++ b/pandas/tests/io/parser/na_values.py
@@ -249,7 +249,7 @@ def test_na_trailing_columns(self):
result = self.read_csv(StringIO(data))
assert result['Date'][1] == '2012-05-12'
- assert result['UnitPrice'].isnull().all()
+ assert result['UnitPrice'].isna().all()
def test_na_values_scalar(self):
# see gh-12224
diff --git a/pandas/tests/io/parser/parse_dates.py b/pandas/tests/io/parser/parse_dates.py
index 4507db108b684..e1ae1b577ea29 100644
--- a/pandas/tests/io/parser/parse_dates.py
+++ b/pandas/tests/io/parser/parse_dates.py
@@ -461,7 +461,7 @@ def test_parse_dates_empty_string(self):
data = "Date, test\n2012-01-01, 1\n,2"
result = self.read_csv(StringIO(data), parse_dates=["Date"],
na_filter=False)
- assert result['Date'].isnull()[1]
+ assert result['Date'].isna()[1]
def test_parse_dates_noconvert_thousands(self):
# see gh-14066
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index 0455ffb069322..6fc080c8d9090 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -385,7 +385,7 @@ def test_thousands_macau_stats(self):
attrs={'class': 'style1'})
df = dfs[all_non_nan_table_index]
- assert not any(s.isnull().any() for _, s in df.iteritems())
+ assert not any(s.isna().any() for _, s in df.iteritems())
@pytest.mark.slow
def test_thousands_macau_index_col(self):
@@ -394,7 +394,7 @@ def test_thousands_macau_index_col(self):
dfs = self.read_html(macau_data, index_col=0, header=0)
df = dfs[all_non_nan_table_index]
- assert not any(s.isnull().any() for _, s in df.iteritems())
+ assert not any(s.isna().any() for _, s in df.iteritems())
def test_empty_tables(self):
"""
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index c0d200560b477..fc17b5f85b68c 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -14,7 +14,7 @@
from pandas import (Series, DataFrame, Panel, Panel4D, MultiIndex, Int64Index,
RangeIndex, Categorical, bdate_range,
date_range, timedelta_range, Index, DatetimeIndex,
- isnull)
+ isna)
from pandas.compat import is_platform_windows, PY3, PY35, BytesIO, text_type
from pandas.io.formats.printing import pprint_thing
@@ -3948,7 +3948,7 @@ def test_string_select(self):
store.append('df2', df2, data_columns=['x'])
result = store.select('df2', 'x!=none')
- expected = df2[isnull(df2.x)]
+ expected = df2[isna(df2.x)]
assert_frame_equal(result, expected)
# int ==/!=
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index deeb8cba2b228..a7c42391effe6 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -33,7 +33,7 @@
from pandas.core.dtypes.common import (
is_object_dtype, is_datetime64_dtype,
is_datetime64tz_dtype)
-from pandas import DataFrame, Series, Index, MultiIndex, isnull, concat
+from pandas import DataFrame, Series, Index, MultiIndex, isna, concat
from pandas import date_range, to_datetime, to_timedelta, Timestamp
import pandas.compat as compat
from pandas.compat import range, lrange, string_types, PY36
@@ -1530,7 +1530,7 @@ def test_dtype(self):
assert isinstance(sqltypea, sqlalchemy.TEXT)
assert isinstance(sqltypeb, sqlalchemy.TEXT)
- def test_notnull_dtype(self):
+ def test_notna_dtype(self):
cols = {'Bool': Series([True, None]),
'Date': Series([datetime(2012, 5, 1), None]),
'Int': Series([1, None], dtype='object'),
@@ -1538,7 +1538,7 @@ def test_notnull_dtype(self):
}
df = DataFrame(cols)
- tbl = 'notnull_dtype_test'
+ tbl = 'notna_dtype_test'
df.to_sql(tbl, self.conn)
returned_df = sql.read_sql_table(tbl, self.conn) # noqa
meta = sqlalchemy.schema.MetaData(bind=self.conn)
@@ -2005,7 +2005,7 @@ def test_dtype(self):
assert self._get_sqlite_column_type(
'single_dtype_test', 'B') == 'STRING'
- def test_notnull_dtype(self):
+ def test_notna_dtype(self):
if self.flavor == 'mysql':
pytest.skip('Not applicable to MySQL legacy')
@@ -2016,7 +2016,7 @@ def test_notnull_dtype(self):
}
df = DataFrame(cols)
- tbl = 'notnull_dtype_test'
+ tbl = 'notna_dtype_test'
df.to_sql(tbl, self.conn)
assert self._get_sqlite_column_type(tbl, 'Bool') == 'INTEGER'
@@ -2069,7 +2069,7 @@ def format_query(sql, *args):
"""
processed_args = []
for arg in args:
- if isinstance(arg, float) and isnull(arg):
+ if isinstance(arg, float) and isna(arg):
arg = None
formatter = _formatters[type(arg)]
diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py
index 7486c32f57fdb..46fea86c45925 100644
--- a/pandas/tests/reshape/test_concat.py
+++ b/pandas/tests/reshape/test_concat.py
@@ -8,7 +8,7 @@
from pandas.compat import StringIO, iteritems
import pandas as pd
from pandas import (DataFrame, concat,
- read_csv, isnull, Series, date_range,
+ read_csv, isna, Series, date_range,
Index, Panel, MultiIndex, Timestamp,
DatetimeIndex)
from pandas.util import testing as tm
@@ -789,8 +789,8 @@ def test_append_different_columns(self):
b = df[5:].loc[:, ['strings', 'ints', 'floats']]
appended = a.append(b)
- assert isnull(appended['strings'][0:4]).all()
- assert isnull(appended['bools'][5:]).all()
+ assert isna(appended['strings'][0:4]).all()
+ assert isna(appended['bools'][5:]).all()
def test_append_many(self):
chunks = [self.frame[:5], self.frame[5:10],
@@ -804,7 +804,7 @@ def test_append_many(self):
result = chunks[0].append(chunks[1:])
tm.assert_frame_equal(result.loc[:, self.frame.columns], self.frame)
assert (result['foo'][15:] == 'bar').all()
- assert result['foo'][:15].isnull().all()
+ assert result['foo'][:15].isna().all()
def test_append_preserve_index_name(self):
# #980
diff --git a/pandas/tests/reshape/test_join.py b/pandas/tests/reshape/test_join.py
index e4894307918c6..75c01fabea8f6 100644
--- a/pandas/tests/reshape/test_join.py
+++ b/pandas/tests/reshape/test_join.py
@@ -252,7 +252,7 @@ def test_join_with_len0(self):
merged = self.target.join(self.source.reindex([]), on='C')
for col in self.source:
assert col in merged
- assert merged[col].isnull().all()
+ assert merged[col].isna().all()
merged2 = self.target.join(self.source.reindex([]), on='C',
how='inner')
@@ -266,7 +266,7 @@ def test_join_on_inner(self):
joined = df.join(df2, on='key', how='inner')
expected = df.join(df2, on='key')
- expected = expected[expected['value'].notnull()]
+ expected = expected[expected['value'].notna()]
tm.assert_series_equal(joined['key'], expected['key'],
check_dtype=False)
tm.assert_series_equal(joined['value'], expected['value'],
@@ -734,7 +734,7 @@ def _check_join(left, right, result, join_col, how='left',
# some smoke tests
for c in join_col:
- assert(result[c].notnull().all())
+ assert(result[c].notna().all())
left_grouped = left.groupby(join_col)
right_grouped = right.groupby(join_col)
@@ -797,7 +797,7 @@ def _assert_all_na(join_chunk, source_columns, join_col):
for c in source_columns:
if c in join_col:
continue
- assert(join_chunk[c].isnull().all())
+ assert(join_chunk[c].isna().all())
def _join_by_hand(a, b, how='left'):
diff --git a/pandas/tests/reshape/test_merge.py b/pandas/tests/reshape/test_merge.py
index 765e8e28b43fd..338596d1523e4 100644
--- a/pandas/tests/reshape/test_merge.py
+++ b/pandas/tests/reshape/test_merge.py
@@ -229,8 +229,8 @@ def test_handle_join_key_pass_array(self):
merged2 = merge(right, left, left_on=key, right_on='key', how='outer')
assert_series_equal(merged['key'], merged2['key'])
- assert merged['key'].notnull().all()
- assert merged2['key'].notnull().all()
+ assert merged['key'].notna().all()
+ assert merged2['key'].notna().all()
left = DataFrame({'value': lrange(5)}, columns=['value'])
right = DataFrame({'rvalue': lrange(6)})
@@ -926,8 +926,8 @@ def run_asserts(left, right):
res = left.join(right, on=icols, how='left', sort=sort)
assert len(left) < len(res) + 1
- assert not res['4th'].isnull().any()
- assert not res['5th'].isnull().any()
+ assert not res['4th'].isna().any()
+ assert not res['5th'].isna().any()
tm.assert_series_equal(
res['4th'], - res['5th'], check_names=False)
diff --git a/pandas/tests/reshape/test_merge_ordered.py b/pandas/tests/reshape/test_merge_ordered.py
index 9469e98f336fd..9b1806ee52c1d 100644
--- a/pandas/tests/reshape/test_merge_ordered.py
+++ b/pandas/tests/reshape/test_merge_ordered.py
@@ -57,7 +57,7 @@ def test_multigroup(self):
assert_frame_equal(result, result2.loc[:, result.columns])
result = merge_ordered(left, self.right, on='key', left_by='group')
- assert result['group'].notnull().all()
+ assert result['group'].notna().all()
def test_merge_type(self):
class NotADataFrame(DataFrame):
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index ff9f35b0253b0..5e5852ac5381d 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -267,7 +267,7 @@ def test_pivot_index_with_nan(self):
df.loc[1, 'b'] = df.loc[4, 'b'] = nan
pv = df.pivot('a', 'b', 'c')
- assert pv.notnull().values.sum() == len(df)
+ assert pv.notna().values.sum() == len(df)
for _, row in df.iterrows():
assert pv.loc[row['a'], row['b']] == row['c']
diff --git a/pandas/tests/reshape/test_tile.py b/pandas/tests/reshape/test_tile.py
index 2523f8ab9f776..91000747b41bb 100644
--- a/pandas/tests/reshape/test_tile.py
+++ b/pandas/tests/reshape/test_tile.py
@@ -4,7 +4,7 @@
import numpy as np
from pandas.compat import zip
-from pandas import (Series, Index, isnull,
+from pandas import (Series, Index, isna,
to_datetime, DatetimeIndex, Timestamp,
Interval, IntervalIndex, Categorical,
cut, qcut, date_range)
@@ -140,12 +140,12 @@ def test_na_handling(self):
result_arr = np.asarray(result)
- ex_arr = np.where(isnull(arr), np.nan, result_arr)
+ ex_arr = np.where(isna(arr), np.nan, result_arr)
tm.assert_almost_equal(result_arr, ex_arr)
result = cut(arr, 4, labels=False)
- ex_result = np.where(isnull(arr), np.nan, result)
+ ex_result = np.where(isna(arr), np.nan, result)
tm.assert_almost_equal(result, ex_result)
def test_inf_handling(self):
@@ -200,7 +200,7 @@ def test_cut_out_of_bounds(self):
result = cut(arr, [-1, 0, 1])
- mask = isnull(result)
+ mask = isna(result)
ex_mask = (arr < -1) | (arr > 1)
tm.assert_numpy_array_equal(mask, ex_mask)
@@ -244,7 +244,7 @@ def test_qcut_nas(self):
arr[:20] = np.nan
result = qcut(arr, 4)
- assert isnull(result[:20]).all()
+ assert isna(result[:20]).all()
def test_qcut_index(self):
result = qcut([0, 2], 2)
@@ -502,9 +502,9 @@ def f():
result = cut(date_range('20130102', periods=5),
bins=date_range('20130101', periods=2))
- mask = result.categories.isnull()
+ mask = result.categories.isna()
tm.assert_numpy_array_equal(mask, np.array([False]))
- mask = result.isnull()
+ mask = result.isna()
tm.assert_numpy_array_equal(
mask, np.array([False, True, True, True, True]))
diff --git a/pandas/tests/scalar/test_nat.py b/pandas/tests/scalar/test_nat.py
index 0695fe2243947..5f247cae1099b 100644
--- a/pandas/tests/scalar/test_nat.py
+++ b/pandas/tests/scalar/test_nat.py
@@ -6,7 +6,7 @@
import numpy as np
from pandas import (NaT, Index, Timestamp, Timedelta, Period,
DatetimeIndex, PeriodIndex,
- TimedeltaIndex, Series, isnull)
+ TimedeltaIndex, Series, isna)
from pandas.util import testing as tm
from pandas._libs.tslib import iNaT
@@ -95,7 +95,7 @@ def test_identity(klass):
result = klass('NaT')
assert result is NaT
- assert isnull(klass('nat'))
+ assert isna(klass('nat'))
@pytest.mark.parametrize('klass', [Timestamp, Timedelta, Period])
@@ -108,7 +108,7 @@ def test_equality(klass):
klass('NAT').value == iNaT
klass(None).value == iNaT
klass(np.nan).value == iNaT
- assert isnull(klass('nat'))
+ assert isna(klass('nat'))
@pytest.mark.parametrize('klass', [Timestamp, Timedelta])
diff --git a/pandas/tests/scalar/test_timedelta.py b/pandas/tests/scalar/test_timedelta.py
index ecc44204924d3..bc9a0388df9d9 100644
--- a/pandas/tests/scalar/test_timedelta.py
+++ b/pandas/tests/scalar/test_timedelta.py
@@ -638,8 +638,8 @@ def test_components(self):
s[1] = np.nan
result = s.dt.components
- assert not result.iloc[0].isnull().all()
- assert result.iloc[1].isnull().all()
+ assert not result.iloc[0].isna().all()
+ assert result.iloc[1].isna().all()
def test_isoformat(self):
td = Timedelta(days=6, minutes=50, seconds=3,
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index a736f3aa74558..44da0968d7024 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -10,7 +10,7 @@
import numpy as np
import pandas as pd
-from pandas import (Series, Categorical, DataFrame, isnull, notnull,
+from pandas import (Series, Categorical, DataFrame, isna, notna,
bdate_range, date_range, _np_version_under1p10)
from pandas.core.index import MultiIndex
from pandas.core.indexes.datetimes import Timestamp
@@ -130,7 +130,7 @@ def test_sum_inf(self):
arr = np.random.randn(100, 100).astype('f4')
arr[:, 2] = np.inf
- with cf.option_context("mode.use_inf_as_null", True):
+ with cf.option_context("mode.use_inf_as_na", True):
assert_almost_equal(s.sum(), s2.sum())
res = nanops.nansum(arr, axis=1)
@@ -269,10 +269,10 @@ def test_var_std(self):
# 1 - element series with ddof=1
s = self.ts.iloc[[0]]
result = s.var(ddof=1)
- assert isnull(result)
+ assert isna(result)
result = s.std(ddof=1)
- assert isnull(result)
+ assert isna(result)
def test_sem(self):
alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x))
@@ -286,7 +286,7 @@ def test_sem(self):
# 1 - element series with ddof=1
s = self.ts.iloc[[0]]
result = s.sem(ddof=1)
- assert isnull(result)
+ assert isna(result)
def test_skew(self):
tm._skip_if_no_scipy()
@@ -365,7 +365,7 @@ def test_argsort(self):
assert s.dtype == 'datetime64[ns]'
shifted = s.shift(-1)
assert shifted.dtype == 'datetime64[ns]'
- assert isnull(shifted[4])
+ assert isna(shifted[4])
result = s.argsort()
expected = Series(lrange(5), dtype='int64')
@@ -524,8 +524,8 @@ def testit():
pytest.raises(TypeError, f, ds)
# skipna or no
- assert notnull(f(self.series))
- assert isnull(f(self.series, skipna=False))
+ assert notna(f(self.series))
+ assert isna(f(self.series, skipna=False))
# check the result is correct
nona = self.series.dropna()
@@ -743,10 +743,10 @@ def test_ops_consistency_on_empty(self):
assert result == 0
result = Series(dtype=float).mean()
- assert isnull(result)
+ assert isna(result)
result = Series(dtype=float).median()
- assert isnull(result)
+ assert isna(result)
# timedelta64[ns]
result = Series(dtype='m8[ns]').sum()
@@ -769,11 +769,11 @@ def test_corr(self):
# partial overlap
tm.assert_almost_equal(self.ts[:15].corr(self.ts[5:]), 1)
- assert isnull(self.ts[:15].corr(self.ts[5:], min_periods=12))
+ assert isna(self.ts[:15].corr(self.ts[5:], min_periods=12))
ts1 = self.ts[:15].reindex(self.ts.index)
ts2 = self.ts[5:].reindex(self.ts.index)
- assert isnull(ts1.corr(ts2, min_periods=12))
+ assert isna(ts1.corr(ts2, min_periods=12))
# No overlap
assert np.isnan(self.ts[::2].corr(self.ts[1::2]))
@@ -781,7 +781,7 @@ def test_corr(self):
# all NA
cp = self.ts[:10].copy()
cp[:] = np.nan
- assert isnull(cp.corr(cp))
+ assert isna(cp.corr(cp))
A = tm.makeTimeSeries()
B = tm.makeTimeSeries()
@@ -838,14 +838,14 @@ def test_cov(self):
# all NA
cp = self.ts[:10].copy()
cp[:] = np.nan
- assert isnull(cp.cov(cp))
+ assert isna(cp.cov(cp))
# min_periods
- assert isnull(self.ts[:15].cov(self.ts[5:], min_periods=12))
+ assert isna(self.ts[:15].cov(self.ts[5:], min_periods=12))
ts1 = self.ts[:15].reindex(self.ts.index)
ts2 = self.ts[5:].reindex(self.ts.index)
- assert isnull(ts1.cov(ts2, min_periods=12))
+ assert isna(ts1.cov(ts2, min_periods=12))
def test_count(self):
assert self.ts.count() == len(self.ts)
@@ -995,10 +995,10 @@ def test_clip_types_and_nulls(self):
thresh = s[2]
l = s.clip_lower(thresh)
u = s.clip_upper(thresh)
- assert l[notnull(l)].min() == thresh
- assert u[notnull(u)].max() == thresh
- assert list(isnull(s)) == list(isnull(l))
- assert list(isnull(s)) == list(isnull(u))
+ assert l[notna(l)].min() == thresh
+ assert u[notna(u)].max() == thresh
+ assert list(isna(s)) == list(isna(l))
+ assert list(isna(s)) == list(isna(u))
def test_clip_against_series(self):
# GH #6966
@@ -1202,14 +1202,14 @@ def test_timedelta64_analytics(self):
def test_idxmin(self):
# test idxmin
- # _check_stat_op approach can not be used here because of isnull check.
+ # _check_stat_op approach can not be used here because of isna check.
# add some NaNs
self.series[5:15] = np.NaN
# skipna or no
assert self.series[self.series.idxmin()] == self.series.min()
- assert isnull(self.series.idxmin(skipna=False))
+ assert isna(self.series.idxmin(skipna=False))
# no NaNs
nona = self.series.dropna()
@@ -1219,7 +1219,7 @@ def test_idxmin(self):
# all NaNs
allna = self.series * nan
- assert isnull(allna.idxmin())
+ assert isna(allna.idxmin())
# datetime64[ns]
from pandas import date_range
@@ -1244,14 +1244,14 @@ def test_numpy_argmin(self):
def test_idxmax(self):
# test idxmax
- # _check_stat_op approach can not be used here because of isnull check.
+ # _check_stat_op approach can not be used here because of isna check.
# add some NaNs
self.series[5:15] = np.NaN
# skipna or no
assert self.series[self.series.idxmax()] == self.series.max()
- assert isnull(self.series.idxmax(skipna=False))
+ assert isna(self.series.idxmax(skipna=False))
# no NaNs
nona = self.series.dropna()
@@ -1261,7 +1261,7 @@ def test_idxmax(self):
# all NaNs
allna = self.series * nan
- assert isnull(allna.idxmax())
+ assert isna(allna.idxmax())
from pandas import date_range
s = Series(date_range('20130102', periods=6))
@@ -1307,7 +1307,7 @@ def test_ptp(self):
# GH11163
s = Series([3, 5, np.nan, -3, 10])
assert s.ptp() == 13
- assert pd.isnull(s.ptp(skipna=False))
+ assert pd.isna(s.ptp(skipna=False))
mi = pd.MultiIndex.from_product([['a', 'b'], [1, 2, 3]])
s = pd.Series([1, np.nan, 7, 3, 5, np.nan], index=mi)
diff --git a/pandas/tests/series/test_apply.py b/pandas/tests/series/test_apply.py
index 2c5f0d7772cc2..e3be5427588b3 100644
--- a/pandas/tests/series/test_apply.py
+++ b/pandas/tests/series/test_apply.py
@@ -8,7 +8,7 @@
import numpy as np
import pandas as pd
-from pandas import (Index, Series, DataFrame, isnull)
+from pandas import (Index, Series, DataFrame, isna)
from pandas.compat import lrange
from pandas import compat
from pandas.util.testing import assert_series_equal, assert_frame_equal
@@ -393,8 +393,8 @@ def test_map_int(self):
merged = left.map(right)
assert merged.dtype == np.float_
- assert isnull(merged['d'])
- assert not isnull(merged['c'])
+ assert isna(merged['d'])
+ assert not isna(merged['c'])
def test_map_type_inference(self):
s = Series(lrange(3))
diff --git a/pandas/tests/series/test_asof.py b/pandas/tests/series/test_asof.py
index 1f62d618b20e1..3104d85601434 100644
--- a/pandas/tests/series/test_asof.py
+++ b/pandas/tests/series/test_asof.py
@@ -3,8 +3,8 @@
import pytest
import numpy as np
-from pandas import (offsets, Series, notnull,
- isnull, date_range, Timestamp)
+from pandas import (offsets, Series, notna,
+ isna, date_range, Timestamp)
import pandas.util.testing as tm
@@ -23,12 +23,12 @@ def test_basic(self):
dates = date_range('1/1/1990', periods=N * 3, freq='25s')
result = ts.asof(dates)
- assert notnull(result).all()
+ assert notna(result).all()
lb = ts.index[14]
ub = ts.index[30]
result = ts.asof(list(dates))
- assert notnull(result).all()
+ assert notna(result).all()
lb = ts.index[14]
ub = ts.index[30]
@@ -98,12 +98,12 @@ def test_periodindex(self):
dates = date_range('1/1/1990', periods=N * 3, freq='37min')
result = ts.asof(dates)
- assert notnull(result).all()
+ assert notna(result).all()
lb = ts.index[14]
ub = ts.index[30]
result = ts.asof(list(dates))
- assert notnull(result).all()
+ assert notna(result).all()
lb = ts.index[14]
ub = ts.index[30]
@@ -130,7 +130,7 @@ def test_periodindex(self):
# no as of value
d = ts.index[0].to_timestamp() - offsets.BDay()
- assert isnull(ts.asof(d))
+ assert isna(ts.asof(d))
def test_errors(self):
@@ -170,7 +170,7 @@ def test_all_nans(self):
# testing scalar input
date = date_range('1/1/1990', periods=N * 3, freq='25s')[0]
result = Series(np.nan, index=rng).asof(date)
- assert isnull(result)
+ assert isna(result)
# test name is propagated
result = Series(np.nan, index=[1, 2, 3, 4], name='test').asof([4, 5])
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index a916c42c007f9..3b95c2803dd9e 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -13,7 +13,7 @@
from pandas.core.dtypes.common import (
is_categorical_dtype,
is_datetime64tz_dtype)
-from pandas import (Index, Series, isnull, date_range,
+from pandas import (Index, Series, isna, date_range,
NaT, period_range, MultiIndex, IntervalIndex)
from pandas.core.indexes.datetimes import Timestamp, DatetimeIndex
@@ -348,22 +348,22 @@ def test_constructor_datetimes_with_nulls(self):
def test_constructor_dtype_datetime64(self):
s = Series(iNaT, dtype='M8[ns]', index=lrange(5))
- assert isnull(s).all()
+ assert isna(s).all()
# in theory this should be all nulls, but since
# we are not specifying a dtype is ambiguous
s = Series(iNaT, index=lrange(5))
- assert not isnull(s).all()
+ assert not isna(s).all()
s = Series(nan, dtype='M8[ns]', index=lrange(5))
- assert isnull(s).all()
+ assert isna(s).all()
s = Series([datetime(2001, 1, 2, 0, 0), iNaT], dtype='M8[ns]')
- assert isnull(s[1])
+ assert isna(s[1])
assert s.dtype == 'M8[ns]'
s = Series([datetime(2001, 1, 2, 0, 0), nan], dtype='M8[ns]')
- assert isnull(s[1])
+ assert isna(s[1])
assert s.dtype == 'M8[ns]'
# GH3416
@@ -760,10 +760,10 @@ def test_NaT_scalar(self):
series = Series([0, 1000, 2000, iNaT], dtype='M8[ns]')
val = series[3]
- assert isnull(val)
+ assert isna(val)
series[2] = val
- assert isnull(series[2])
+ assert isna(series[2])
def test_NaT_cast(self):
# GH10747
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 23283733c492a..45a92f6d6f50b 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -11,7 +11,7 @@
import pandas._libs.index as _index
from pandas.core.dtypes.common import is_integer, is_scalar
-from pandas import (Index, Series, DataFrame, isnull,
+from pandas import (Index, Series, DataFrame, isna,
date_range, NaT, MultiIndex,
Timestamp, DatetimeIndex, Timedelta)
from pandas.core.indexing import IndexingError
@@ -254,7 +254,7 @@ def test_getitem_boolean(self):
def test_getitem_boolean_empty(self):
s = Series([], dtype=np.int64)
s.index.name = 'index_name'
- s = s[s.isnull()]
+ s = s[s.isna()]
assert s.index.name == 'index_name'
assert s.dtype == np.int64
@@ -1190,11 +1190,11 @@ def f():
s = Series(range(10)).astype(float)
s[8] = None
result = s[8]
- assert isnull(result)
+ assert isna(result)
s = Series(range(10)).astype(float)
s[s > 8] = None
- result = s[isnull(s)]
+ result = s[isna(s)]
expected = Series(np.nan, index=[9])
assert_series_equal(result, expected)
@@ -1988,7 +1988,7 @@ def test_reindex_series_add_nat(self):
result = series.reindex(lrange(15))
assert np.issubdtype(result.dtype, np.dtype('M8[ns]'))
- mask = result.isnull()
+ mask = result.isna()
assert mask[-5:].all()
assert not mask[:-5].any()
@@ -2114,7 +2114,7 @@ def test_reindex_bool_pad(self):
ts = self.ts[5:]
bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
filled_bool = bool_ts.reindex(self.ts.index, method='pad')
- assert isnull(filled_bool[:5]).all()
+ assert isna(filled_bool[:5]).all()
def test_reindex_like(self):
other = self.ts[::2]
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index 24dd90e40fa35..2d20ac9685914 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -11,7 +11,7 @@
import numpy as np
import pandas as pd
-from pandas import (Series, DataFrame, isnull, date_range,
+from pandas import (Series, DataFrame, isna, date_range,
MultiIndex, Index, Timestamp, NaT, IntervalIndex)
from pandas.compat import range
from pandas._libs.tslib import iNaT
@@ -159,7 +159,7 @@ def test_datetime64_tz_fillna(self):
Timestamp('2011-01-02 10:00')])
tm.assert_series_equal(expected, result)
# check s is not changed
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
result = s.fillna(pd.Timestamp('2011-01-02 10:00', tz=tz))
expected = Series([Timestamp('2011-01-01 10:00'),
@@ -167,14 +167,14 @@ def test_datetime64_tz_fillna(self):
Timestamp('2011-01-03 10:00'),
Timestamp('2011-01-02 10:00', tz=tz)])
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
result = s.fillna('AAA')
expected = Series([Timestamp('2011-01-01 10:00'), 'AAA',
Timestamp('2011-01-03 10:00'), 'AAA'],
dtype=object)
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz),
3: pd.Timestamp('2011-01-04 10:00')})
@@ -183,7 +183,7 @@ def test_datetime64_tz_fillna(self):
Timestamp('2011-01-03 10:00'),
Timestamp('2011-01-04 10:00')])
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
result = s.fillna({1: pd.Timestamp('2011-01-02 10:00'),
3: pd.Timestamp('2011-01-04 10:00')})
@@ -192,14 +192,14 @@ def test_datetime64_tz_fillna(self):
Timestamp('2011-01-03 10:00'),
Timestamp('2011-01-04 10:00')])
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
# DatetimeBlockTZ
idx = pd.DatetimeIndex(['2011-01-01 10:00', pd.NaT,
'2011-01-03 10:00', pd.NaT], tz=tz)
s = pd.Series(idx)
assert s.dtype == 'datetime64[ns, {0}]'.format(tz)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
result = s.fillna(pd.Timestamp('2011-01-02 10:00'))
expected = Series([Timestamp('2011-01-01 10:00', tz=tz),
@@ -207,7 +207,7 @@ def test_datetime64_tz_fillna(self):
Timestamp('2011-01-03 10:00', tz=tz),
Timestamp('2011-01-02 10:00')])
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
result = s.fillna(pd.Timestamp('2011-01-02 10:00', tz=tz))
idx = pd.DatetimeIndex(['2011-01-01 10:00', '2011-01-02 10:00',
@@ -215,7 +215,7 @@ def test_datetime64_tz_fillna(self):
tz=tz)
expected = Series(idx)
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
result = s.fillna(pd.Timestamp('2011-01-02 10:00',
tz=tz).to_pydatetime())
@@ -224,14 +224,14 @@ def test_datetime64_tz_fillna(self):
tz=tz)
expected = Series(idx)
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
result = s.fillna('AAA')
expected = Series([Timestamp('2011-01-01 10:00', tz=tz), 'AAA',
Timestamp('2011-01-03 10:00', tz=tz), 'AAA'],
dtype=object)
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz),
3: pd.Timestamp('2011-01-04 10:00')})
@@ -240,7 +240,7 @@ def test_datetime64_tz_fillna(self):
Timestamp('2011-01-03 10:00', tz=tz),
Timestamp('2011-01-04 10:00')])
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
result = s.fillna({1: pd.Timestamp('2011-01-02 10:00', tz=tz),
3: pd.Timestamp('2011-01-04 10:00', tz=tz)})
@@ -249,7 +249,7 @@ def test_datetime64_tz_fillna(self):
Timestamp('2011-01-03 10:00', tz=tz),
Timestamp('2011-01-04 10:00', tz=tz)])
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
# filling with a naive/other zone, coerce to object
result = s.fillna(Timestamp('20130101'))
@@ -258,7 +258,7 @@ def test_datetime64_tz_fillna(self):
Timestamp('2011-01-03 10:00', tz=tz),
Timestamp('2013-01-01')])
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
result = s.fillna(Timestamp('20130101', tz='US/Pacific'))
expected = Series([Timestamp('2011-01-01 10:00', tz=tz),
@@ -266,7 +266,7 @@ def test_datetime64_tz_fillna(self):
Timestamp('2011-01-03 10:00', tz=tz),
Timestamp('2013-01-01', tz='US/Pacific')])
tm.assert_series_equal(expected, result)
- tm.assert_series_equal(pd.isnull(s), null_loc)
+ tm.assert_series_equal(pd.isna(s), null_loc)
# with timezone
# GH 15855
@@ -400,10 +400,10 @@ def test_fillna_nat(self):
assert_frame_equal(filled, expected)
assert_frame_equal(filled2, expected)
- def test_isnull_for_inf(self):
+ def test_isna_for_inf(self):
s = Series(['a', np.inf, np.nan, 1.0])
- with pd.option_context('mode.use_inf_as_null', True):
- r = s.isnull()
+ with pd.option_context('mode.use_inf_as_na', True):
+ r = s.isna()
dr = s.dropna()
e = Series([False, True, True, False])
de = Series(['a', 1.0], index=[0, 3])
@@ -526,28 +526,28 @@ def test_timedelta64_nan(self):
# nan ops on timedeltas
td1 = td.copy()
td1[0] = np.nan
- assert isnull(td1[0])
+ assert isna(td1[0])
assert td1[0].value == iNaT
td1[0] = td[0]
- assert not isnull(td1[0])
+ assert not isna(td1[0])
td1[1] = iNaT
- assert isnull(td1[1])
+ assert isna(td1[1])
assert td1[1].value == iNaT
td1[1] = td[1]
- assert not isnull(td1[1])
+ assert not isna(td1[1])
td1[2] = NaT
- assert isnull(td1[2])
+ assert isna(td1[2])
assert td1[2].value == iNaT
td1[2] = td[2]
- assert not isnull(td1[2])
+ assert not isna(td1[2])
# boolean setting
# this doesn't work, not sure numpy even supports it
# result = td[(td>np.timedelta64(timedelta(days=3))) &
# td<np.timedelta64(timedelta(days=7)))] = np.nan
- # assert isnull(result).sum() == 7
+ # assert isna(result).sum() == 7
# NumPy limitiation =(
@@ -616,21 +616,21 @@ def test_valid(self):
result = ts.valid()
assert len(result) == ts.count()
tm.assert_series_equal(result, ts[1::2])
- tm.assert_series_equal(result, ts[pd.notnull(ts)])
+ tm.assert_series_equal(result, ts[pd.notna(ts)])
- def test_isnull(self):
+ def test_isna(self):
ser = Series([0, 5.4, 3, nan, -0.001])
- np.array_equal(ser.isnull(),
+ np.array_equal(ser.isna(),
Series([False, False, False, True, False]).values)
ser = Series(["hi", "", nan])
- np.array_equal(ser.isnull(), Series([False, False, True]).values)
+ np.array_equal(ser.isna(), Series([False, False, True]).values)
- def test_notnull(self):
+ def test_notna(self):
ser = Series([0, 5.4, 3, nan, -0.001])
- np.array_equal(ser.notnull(),
+ np.array_equal(ser.notna(),
Series([True, True, True, False, True]).values)
ser = Series(["hi", "", nan])
- np.array_equal(ser.notnull(), Series([True, True, False]).values)
+ np.array_equal(ser.notna(), Series([True, True, False]).values)
def test_pad_nan(self):
x = Series([np.nan, 1., np.nan, 3., np.nan], ['z', 'a', 'b', 'c', 'd'],
@@ -849,7 +849,7 @@ def test_interpolate_index_values(self):
result = s.interpolate(method='index')
expected = s.copy()
- bad = isnull(expected.values)
+ bad = isna(expected.values)
good = ~bad
expected = Series(np.interp(vals[bad], vals[good],
s.values[good]),
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index 2e400812e0331..991c5ff625554 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -13,7 +13,7 @@
import numpy as np
import pandas as pd
-from pandas import (Index, Series, DataFrame, isnull, bdate_range,
+from pandas import (Index, Series, DataFrame, isna, bdate_range,
NaT, date_range, timedelta_range,
_np_version_under1p8)
from pandas.core.indexes.datetimes import Timestamp
@@ -999,7 +999,7 @@ def test_comparison_operators_with_nas(self):
# boolean &, |, ^ should work with object arrays and propagate NAs
ops = ['and_', 'or_', 'xor']
- mask = s.isnull()
+ mask = s.isna()
for bool_op in ops:
f = getattr(operator, bool_op)
@@ -1720,8 +1720,8 @@ def _check_fill(meth, op, a, b, fill_value=0):
a = a.reindex(exp_index)
b = b.reindex(exp_index)
- amask = isnull(a)
- bmask = isnull(b)
+ amask = isna(a)
+ bmask = isna(b)
exp_values = []
for i in range(len(exp_index)):
@@ -1787,8 +1787,8 @@ def test_operators_na_handling(self):
result = s + s.shift(1)
result2 = s.shift(1) + s
- assert isnull(result[0])
- assert isnull(result2[0])
+ assert isna(result[0])
+ assert isna(result2[0])
s = Series(['foo', 'bar', 'baz', np.nan])
result = 'prefix_' + s
diff --git a/pandas/tests/series/test_period.py b/pandas/tests/series/test_period.py
index 6e8ee38d366e2..e907b0edd5c6a 100644
--- a/pandas/tests/series/test_period.py
+++ b/pandas/tests/series/test_period.py
@@ -33,12 +33,12 @@ def test_getitem(self):
tm.assert_series_equal(result, exp)
assert result.dtype == 'object'
- def test_isnull(self):
+ def test_isna(self):
# GH 13737
s = Series([pd.Period('2011-01', freq='M'),
pd.Period('NaT', freq='M')])
- tm.assert_series_equal(s.isnull(), Series([False, True]))
- tm.assert_series_equal(s.notnull(), Series([True, False]))
+ tm.assert_series_equal(s.isna(), Series([False, True]))
+ tm.assert_series_equal(s.notna(), Series([True, False]))
def test_fillna(self):
# GH 13737
@@ -89,10 +89,10 @@ def test_NaT_scalar(self):
series = Series([0, 1000, 2000, iNaT], dtype='period[D]')
val = series[3]
- assert isnull(val)
+ assert isna(val)
series[2] = val
- assert isnull(series[2])
+ assert isna(series[2])
def test_NaT_cast(self):
result = Series([np.nan]).astype('period[D]')
diff --git a/pandas/tests/series/test_quantile.py b/pandas/tests/series/test_quantile.py
index 2d02260ac7303..21379641a78d8 100644
--- a/pandas/tests/series/test_quantile.py
+++ b/pandas/tests/series/test_quantile.py
@@ -166,8 +166,8 @@ def test_quantile_box(self):
def test_datetime_timedelta_quantiles(self):
# covers #9694
- assert pd.isnull(Series([], dtype='M8[ns]').quantile(.5))
- assert pd.isnull(Series([], dtype='m8[ns]').quantile(.5))
+ assert pd.isna(Series([], dtype='M8[ns]').quantile(.5))
+ assert pd.isna(Series([], dtype='m8[ns]').quantile(.5))
def test_quantile_nat(self):
res = Series([pd.NaT, pd.NaT]).quantile(0.5)
diff --git a/pandas/tests/series/test_replace.py b/pandas/tests/series/test_replace.py
index 35d13a62ca083..2c07d87865f53 100644
--- a/pandas/tests/series/test_replace.py
+++ b/pandas/tests/series/test_replace.py
@@ -40,7 +40,7 @@ def test_replace(self):
assert (rs[:5] == -1).all()
assert (rs[6:10] == -1).all()
assert (rs[20:30] == -1).all()
- assert (pd.isnull(ser[:5])).all()
+ assert (pd.isna(ser[:5])).all()
# replace with different values
rs = ser.replace({np.nan: -1, 'foo': -2, 'bar': -3})
@@ -48,7 +48,7 @@ def test_replace(self):
assert (rs[:5] == -1).all()
assert (rs[6:10] == -2).all()
assert (rs[20:30] == -3).all()
- assert (pd.isnull(ser[:5])).all()
+ assert (pd.isna(ser[:5])).all()
# replace with different values with 2 lists
rs2 = ser.replace([np.nan, 'foo', 'bar'], [-1, -2, -3])
@@ -203,7 +203,7 @@ def test_replace2(self):
assert (rs[:5] == -1).all()
assert (rs[6:10] == -1).all()
assert (rs[20:30] == -1).all()
- assert (pd.isnull(ser[:5])).all()
+ assert (pd.isna(ser[:5])).all()
# replace with different values
rs = ser.replace({np.nan: -1, 'foo': -2, 'bar': -3})
@@ -211,7 +211,7 @@ def test_replace2(self):
assert (rs[:5] == -1).all()
assert (rs[6:10] == -2).all()
assert (rs[20:30] == -3).all()
- assert (pd.isnull(ser[:5])).all()
+ assert (pd.isna(ser[:5])).all()
# replace with different values with 2 lists
rs2 = ser.replace([np.nan, 'foo', 'bar'], [-1, -2, -3])
diff --git a/pandas/tests/sparse/test_frame.py b/pandas/tests/sparse/test_frame.py
index d9cb69d56528c..f0f8954e5785b 100644
--- a/pandas/tests/sparse/test_frame.py
+++ b/pandas/tests/sparse/test_frame.py
@@ -1105,12 +1105,12 @@ def test_nan_columnname(self):
nan_colname_sparse = nan_colname.to_sparse()
assert np.isnan(nan_colname_sparse.columns[0])
- def test_isnull(self):
+ def test_isna(self):
# GH 8276
df = pd.SparseDataFrame({'A': [np.nan, np.nan, 1, 2, np.nan],
'B': [0, np.nan, np.nan, 2, np.nan]})
- res = df.isnull()
+ res = df.isna()
exp = pd.SparseDataFrame({'A': [True, True, False, False, True],
'B': [False, True, True, False, True]},
default_fill_value=True)
@@ -1121,18 +1121,18 @@ def test_isnull(self):
df = pd.SparseDataFrame({'A': [0, 0, 1, 2, np.nan],
'B': [0, np.nan, 0, 2, np.nan]},
default_fill_value=0.)
- res = df.isnull()
+ res = df.isna()
assert isinstance(res, pd.SparseDataFrame)
exp = pd.DataFrame({'A': [False, False, False, False, True],
'B': [False, True, False, False, True]})
tm.assert_frame_equal(res.to_dense(), exp)
- def test_isnotnull(self):
+ def test_notna(self):
# GH 8276
df = pd.SparseDataFrame({'A': [np.nan, np.nan, 1, 2, np.nan],
'B': [0, np.nan, np.nan, 2, np.nan]})
- res = df.isnotnull()
+ res = df.notna()
exp = pd.SparseDataFrame({'A': [False, False, True, True, False],
'B': [True, False, False, True, False]},
default_fill_value=False)
@@ -1143,7 +1143,7 @@ def test_isnotnull(self):
df = pd.SparseDataFrame({'A': [0, 0, 1, 2, np.nan],
'B': [0, np.nan, 0, 2, np.nan]},
default_fill_value=0.)
- res = df.isnotnull()
+ res = df.notna()
assert isinstance(res, pd.SparseDataFrame)
exp = pd.DataFrame({'A': [True, True, True, True, False],
'B': [True, False, True, True, False]})
diff --git a/pandas/tests/sparse/test_series.py b/pandas/tests/sparse/test_series.py
index a7685abd5ba4d..b44314d4e733b 100644
--- a/pandas/tests/sparse/test_series.py
+++ b/pandas/tests/sparse/test_series.py
@@ -9,12 +9,10 @@
import numpy as np
import pandas as pd
-from pandas import Series, DataFrame, bdate_range
-from pandas.core.common import isnull
+from pandas import Series, DataFrame, bdate_range, isna, compat
from pandas.tseries.offsets import BDay
import pandas.util.testing as tm
from pandas.compat import range
-from pandas import compat
from pandas.core.reshape.util import cartesian_product
import pandas.core.sparse.frame as spf
@@ -314,7 +312,7 @@ def test_constructor_scalar(self):
sp = SparseSeries(data, np.arange(100))
sp = sp.reindex(np.arange(200))
assert (sp.loc[:99] == data).all()
- assert isnull(sp.loc[100:]).all()
+ assert isna(sp.loc[100:]).all()
data = np.nan
sp = SparseSeries(data, np.arange(100))
@@ -1292,11 +1290,11 @@ def test_value_counts_int(self):
tm.assert_series_equal(sparse.value_counts(dropna=False),
dense.value_counts(dropna=False))
- def test_isnull(self):
+ def test_isna(self):
# GH 8276
s = pd.SparseSeries([np.nan, np.nan, 1, 2, np.nan], name='xxx')
- res = s.isnull()
+ res = s.isna()
exp = pd.SparseSeries([True, True, False, False, True], name='xxx',
fill_value=True)
tm.assert_sp_series_equal(res, exp)
@@ -1304,16 +1302,16 @@ def test_isnull(self):
# if fill_value is not nan, True can be included in sp_values
s = pd.SparseSeries([np.nan, 0., 1., 2., 0.], name='xxx',
fill_value=0.)
- res = s.isnull()
+ res = s.isna()
assert isinstance(res, pd.SparseSeries)
exp = pd.Series([True, False, False, False, False], name='xxx')
tm.assert_series_equal(res.to_dense(), exp)
- def test_isnotnull(self):
+ def test_notna(self):
# GH 8276
s = pd.SparseSeries([np.nan, np.nan, 1, 2, np.nan], name='xxx')
- res = s.isnotnull()
+ res = s.notna()
exp = pd.SparseSeries([False, False, True, True, False], name='xxx',
fill_value=False)
tm.assert_sp_series_equal(res, exp)
@@ -1321,7 +1319,7 @@ def test_isnotnull(self):
# if fill_value is not nan, True can be included in sp_values
s = pd.SparseSeries([np.nan, 0., 1., 2., 0.], name='xxx',
fill_value=0.)
- res = s.isnotnull()
+ res = s.notna()
assert isinstance(res, pd.SparseSeries)
exp = pd.Series([False, True, True, True, True], name='xxx')
tm.assert_series_equal(res.to_dense(), exp)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 9e7b97f19e0c3..0e86ec123efea 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -177,7 +177,7 @@ def test_factorize_nan(self):
ids = rizer.factorize(key, sort=True, na_sentinel=na_sentinel)
expected = np.array([0, 1, 0, na_sentinel], dtype='int32')
assert len(set(key)) == len(set(expected))
- tm.assert_numpy_array_equal(pd.isnull(key),
+ tm.assert_numpy_array_equal(pd.isna(key),
expected == na_sentinel)
# nan still maps to na_sentinel when sort=False
@@ -189,7 +189,7 @@ def test_factorize_nan(self):
expected = np.array([2, -1, 0], dtype='int32')
assert len(set(key)) == len(set(expected))
- tm.assert_numpy_array_equal(pd.isnull(key), expected == na_sentinel)
+ tm.assert_numpy_array_equal(pd.isna(key), expected == na_sentinel)
def test_complex_sorting(self):
# gh 12666 - check no segfault
@@ -1150,7 +1150,7 @@ def test_combineFunc(self):
def test_reindex(self):
pass
- def test_isnull(self):
+ def test_isna(self):
pass
def test_groupby(self):
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 85976b9fabd66..9af4a9edeb8b1 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -353,10 +353,10 @@ def test_nanops(self):
assert getattr(obj, op)() == 2.0
obj = klass([np.nan])
- assert pd.isnull(getattr(obj, op)())
+ assert pd.isna(getattr(obj, op)())
obj = klass([])
- assert pd.isnull(getattr(obj, op)())
+ assert pd.isna(getattr(obj, op)())
obj = klass([pd.NaT, datetime(2011, 11, 1)])
# check DatetimeIndex monotonic path
@@ -495,10 +495,10 @@ def test_value_counts_unique_nunique_null(self):
nanloc = np.zeros(len(o), dtype=np.bool)
nanloc[:3] = True
if isinstance(o, Index):
- tm.assert_numpy_array_equal(pd.isnull(o), nanloc)
+ tm.assert_numpy_array_equal(pd.isna(o), nanloc)
else:
exp = pd.Series(nanloc, o.index, name='a')
- tm.assert_series_equal(pd.isnull(o), exp)
+ tm.assert_series_equal(pd.isna(o), exp)
expected_s_na = Series(list(range(10, 2, -1)) + [3],
index=expected_index[9:0:-1],
@@ -528,7 +528,7 @@ def test_value_counts_unique_nunique_null(self):
else:
tm.assert_numpy_array_equal(result[1:], values[2:])
- assert pd.isnull(result[0])
+ assert pd.isna(result[0])
assert result.dtype == orig.dtype
assert o.nunique() == 8
@@ -689,7 +689,7 @@ def test_value_counts_datetime64(self):
tm.assert_index_equal(unique, exp_idx)
else:
tm.assert_numpy_array_equal(unique[:3], expected)
- assert pd.isnull(unique[3])
+ assert pd.isna(unique[3])
assert s.nunique() == 3
assert s.nunique(dropna=False) == 4
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 667b26c24c662..eecdd672095b0 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -19,7 +19,7 @@
import pandas.compat as compat
import pandas.util.testing as tm
from pandas import (Categorical, Index, Series, DataFrame,
- Timestamp, CategoricalIndex, isnull,
+ Timestamp, CategoricalIndex, isna,
date_range, DatetimeIndex,
period_range, PeriodIndex,
timedelta_range, TimedeltaIndex, NaT,
@@ -233,7 +233,7 @@ def f():
# the original type" feature to try to cast the array interface result
# to...
- # vals = np.asarray(cat[cat.notnull()])
+ # vals = np.asarray(cat[cat.notna()])
# assert is_integer_dtype(vals)
# corner cases
@@ -309,7 +309,7 @@ def test_constructor_with_index(self):
categories=ci.categories))
def test_constructor_with_generator(self):
- # This was raising an Error in isnull(single_val).any() because isnull
+ # This was raising an Error in isna(single_val).any() because isna
# returned a scalar for a generator
xrange = range
@@ -606,7 +606,7 @@ def test_na_flags_int_categories(self):
cat = Categorical(labels, categories, fastpath=True)
repr(cat)
- tm.assert_numpy_array_equal(isnull(cat), labels == -1)
+ tm.assert_numpy_array_equal(isna(cat), labels == -1)
def test_categories_none(self):
factor = Categorical(['a', 'b', 'b', 'a',
@@ -1118,10 +1118,10 @@ def test_nan_handling(self):
tm.assert_numpy_array_equal(c._codes, np.array([0, 1, -1, 0],
dtype=np.int8))
- def test_isnull(self):
+ def test_isna(self):
exp = np.array([False, False, True])
c = Categorical(["a", "b", np.nan])
- res = c.isnull()
+ res = c.isna()
tm.assert_numpy_array_equal(res, exp)
@@ -2780,7 +2780,7 @@ def test_info(self):
df = DataFrame({'int64': np.random.randint(100, size=n)})
df['category'] = Series(np.array(list('abcdefghij')).take(
np.random.randint(0, 10, size=n))).astype('category')
- df.isnull()
+ df.isna()
buf = compat.StringIO()
df.info(buf=buf)
diff --git a/pandas/tests/test_lib.py b/pandas/tests/test_lib.py
index 6be687e26e985..2662720bb436d 100644
--- a/pandas/tests/test_lib.py
+++ b/pandas/tests/test_lib.py
@@ -201,20 +201,20 @@ def test_get_reverse_indexer(self):
assert np.array_equal(result, expected)
-class TestNullObj(object):
+class TestNAObj(object):
- _1d_methods = ['isnullobj', 'isnullobj_old']
- _2d_methods = ['isnullobj2d', 'isnullobj2d_old']
+ _1d_methods = ['isnaobj', 'isnaobj_old']
+ _2d_methods = ['isnaobj2d', 'isnaobj2d_old']
def _check_behavior(self, arr, expected):
- for method in TestNullObj._1d_methods:
+ for method in TestNAObj._1d_methods:
result = getattr(lib, method)(arr)
tm.assert_numpy_array_equal(result, expected)
arr = np.atleast_2d(arr)
expected = np.atleast_2d(expected)
- for method in TestNullObj._2d_methods:
+ for method in TestNAObj._2d_methods:
result = getattr(lib, method)(arr)
tm.assert_numpy_array_equal(result, expected)
@@ -237,7 +237,7 @@ def test_empty_arr(self):
self._check_behavior(arr, expected)
def test_empty_str_inp(self):
- arr = np.array([""]) # empty but not null
+ arr = np.array([""]) # empty but not na
expected = np.array([False])
self._check_behavior(arr, expected)
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index a56ff0fc2d158..0b2dc9ba70f03 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -10,7 +10,7 @@
import numpy as np
from pandas.core.index import Index, MultiIndex
-from pandas import Panel, DataFrame, Series, notnull, isnull, Timestamp
+from pandas import Panel, DataFrame, Series, notna, isna, Timestamp
from pandas.core.dtypes.common import is_float_dtype, is_integer_dtype
import pandas.core.common as com
@@ -287,12 +287,12 @@ def test_series_setitem(self):
s = self.ymd['A']
s[2000, 3] = np.nan
- assert isnull(s.values[42:65]).all()
- assert notnull(s.values[:42]).all()
- assert notnull(s.values[65:]).all()
+ assert isna(s.values[42:65]).all()
+ assert notna(s.values[:42]).all()
+ assert notna(s.values[65:]).all()
s[2000, 3, 10] = np.nan
- assert isnull(s[49])
+ assert isna(s[49])
def test_series_slice_partial(self):
pass
@@ -1902,7 +1902,7 @@ def test_dataframe_insert_column_all_na(self):
df = DataFrame([[1, 2], [3, 4], [5, 6]], index=mix)
s = Series({(1, 1): 1, (1, 2): 2})
df['new'] = s
- assert df['new'].isnull().all()
+ assert df['new'].isna().all()
def test_join_segfault(self):
# 1532
@@ -2014,8 +2014,8 @@ def test_tuples_have_na(self):
labels=[[1, 1, 1, 1, -1, 0, 0, 0], [0, 1, 2, 3, 0,
1, 2, 3]])
- assert isnull(index[4][0])
- assert isnull(index.values[4][0])
+ assert isna(index[4][0])
+ assert isna(index.values[4][0])
def test_duplicate_groupby_issues(self):
idx_tp = [('600809', '20061231'), ('600809', '20070331'),
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 6798e64b01d7e..2a22fc9d32919 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -8,7 +8,7 @@
import numpy as np
import pandas as pd
-from pandas import Series, isnull, _np_version_under1p9
+from pandas import Series, isna, _np_version_under1p9
from pandas.core.dtypes.common import is_integer_dtype
import pandas.core.nanops as nanops
import pandas.util.testing as tm
@@ -408,7 +408,7 @@ def test_nanmax(self):
def _argminmax_wrap(self, value, axis=None, func=None):
res = func(value, axis)
nans = np.min(value, axis)
- nullnan = isnull(nans)
+ nullnan = isna(nans)
if res.ndim:
res[nullnan] = -1
elif (hasattr(nullnan, 'all') and nullnan.all() or
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 445611c1696f5..a6113f231f8f2 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -11,7 +11,7 @@
from pandas.core.dtypes.common import is_float_dtype
from pandas.core.dtypes.missing import remove_na_arraylike
-from pandas import (Series, DataFrame, Index, date_range, isnull, notnull,
+from pandas import (Series, DataFrame, Index, date_range, isna, notna,
pivot, MultiIndex)
from pandas.core.nanops import nanall, nanany
from pandas.core.panel import Panel
@@ -79,7 +79,7 @@ def test_iter(self):
tm.equalContents(list(self.panel), self.panel.items)
def test_count(self):
- f = lambda s: notnull(s).sum()
+ f = lambda s: notna(s).sum()
self._check_stat_op('count', f, obj=self.panel, has_skipna=False)
def test_sum(self):
@@ -93,7 +93,7 @@ def test_prod(self):
def test_median(self):
def wrapper(x):
- if isnull(x).any():
+ if isna(x).any():
return np.nan
return np.median(x)
@@ -540,12 +540,12 @@ def test_set_minor_major(self):
df2 = DataFrame([1.0, np.nan, 1.0, np.nan, 1.0, 1.0])
panel = Panel({'Item1': df1, 'Item2': df2})
- newminor = notnull(panel.iloc[:, :, 0])
+ newminor = notna(panel.iloc[:, :, 0])
panel.loc[:, :, 'NewMinor'] = newminor
assert_frame_equal(panel.loc[:, :, 'NewMinor'],
newminor.astype(object))
- newmajor = notnull(panel.iloc[:, 0, :])
+ newmajor = notna(panel.iloc[:, 0, :])
panel.loc[:, 'NewMajor', :] = newmajor
assert_frame_equal(panel.loc[:, 'NewMajor', :],
newmajor.astype(object))
@@ -1694,7 +1694,7 @@ def test_transpose_copy(self):
assert_panel_equal(result, expected)
panel.values[0, 1, 1] = np.nan
- assert notnull(result.values[1, 0, 1])
+ assert notna(result.values[1, 0, 1])
def test_to_frame(self):
with catch_warnings(record=True):
@@ -1863,7 +1863,7 @@ def test_to_panel_na_handling(self):
[0, 1, 2, 3, 4, 5, 2, 3, 4, 5]])
panel = df.to_panel()
- assert isnull(panel[0].loc[1, [0, 1]]).all()
+ assert isna(panel[0].loc[1, [0, 1]]).all()
def test_to_panel_duplicates(self):
# #2441
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 18643aff15e9b..863671feb4ed8 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -6,9 +6,9 @@
from warnings import catch_warnings
import numpy as np
+from pandas import Series, Index, isna, notna
from pandas.core.dtypes.common import is_float_dtype
from pandas.core.dtypes.missing import remove_na_arraylike
-from pandas import Series, Index, isnull, notnull
from pandas.core.panel import Panel
from pandas.core.panel4d import Panel4D
from pandas.tseries.offsets import BDay
@@ -33,7 +33,7 @@ def test_iter(self):
tm.equalContents(list(self.panel4d), self.panel4d.labels)
def test_count(self):
- f = lambda s: notnull(s).sum()
+ f = lambda s: notna(s).sum()
self._check_stat_op('count', f, obj=self.panel4d, has_skipna=False)
def test_sum(self):
@@ -47,7 +47,7 @@ def test_prod(self):
def test_median(self):
def wrapper(x):
- if isnull(x).any():
+ if isna(x).any():
return np.nan
return np.median(x)
diff --git a/pandas/tests/test_resample.py b/pandas/tests/test_resample.py
index 15bbd7a9ef5e9..08fa7992e8da1 100644
--- a/pandas/tests/test_resample.py
+++ b/pandas/tests/test_resample.py
@@ -13,8 +13,8 @@
import pandas as pd
import pandas.tseries.offsets as offsets
import pandas.util.testing as tm
-from pandas import (Series, DataFrame, Panel, Index, isnull,
- notnull, Timestamp)
+from pandas import (Series, DataFrame, Panel, Index, isna,
+ notna, Timestamp)
from pandas.core.dtypes.generic import ABCSeries, ABCDataFrame
from pandas.compat import range, lrange, zip, product, OrderedDict
@@ -103,7 +103,7 @@ def test_api_changes_v018(self):
tm.assert_frame_equal(result, expected)
# compat for pandas-like methods
- for how in ['sort_values', 'isnull']:
+ for how in ['sort_values', 'isna']:
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
getattr(r, how)()
@@ -891,7 +891,7 @@ def test_custom_grouper(self):
g._cython_agg_general(f)
assert g.ngroups == 2593
- assert notnull(g.mean()).all()
+ assert notna(g.mean()).all()
# construct expected val
arr = [1] + [5] * 2592
@@ -948,7 +948,7 @@ def test_resample_how(self):
args = downsample_methods
def _ohlc(group):
- if isnull(group).all():
+ if isna(group).all():
return np.repeat(np.nan, 4)
return [group[0], group.max(), group.min(), group[-1]]
@@ -1445,7 +1445,7 @@ def test_resample_timestamp_to_period(self):
def test_ohlc_5min(self):
def _ohlc(group):
- if isnull(group).all():
+ if isna(group).all():
return np.repeat(np.nan, 4)
return [group[0], group.max(), group.min(), group[-1]]
@@ -2606,7 +2606,7 @@ def test_resample_weekly_all_na(self):
result = ts.resample('W-THU').asfreq()
- assert result.isnull().all()
+ assert result.isna().all()
result = ts.resample('W-THU').asfreq().ffill()[:-1]
expected = ts.asfreq('W-THU').ffill()
diff --git a/pandas/tests/test_sorting.py b/pandas/tests/test_sorting.py
index f6973cccb82b0..e58042961129d 100644
--- a/pandas/tests/test_sorting.py
+++ b/pandas/tests/test_sorting.py
@@ -305,9 +305,9 @@ def verify_order(df):
out = DataFrame(vals, columns=list('ABCDEFG') + ['left', 'right'])
out = align(out)
- jmask = {'left': out['left'].notnull(),
- 'right': out['right'].notnull(),
- 'inner': out['left'].notnull() & out['right'].notnull(),
+ jmask = {'left': out['left'].notna(),
+ 'right': out['right'].notna(),
+ 'inner': out['left'].notna() & out['right'].notna(),
'outer': np.ones(len(out), dtype='bool')}
for how in 'left', 'right', 'outer', 'inner':
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index bb31fb9260160..ec2b0b75b9eed 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -11,7 +11,7 @@
from pandas.compat import range, u
import pandas.compat as compat
-from pandas import (Index, Series, DataFrame, isnull, MultiIndex, notnull)
+from pandas import Index, Series, DataFrame, isna, MultiIndex, notna
from pandas.util.testing import assert_series_equal
import pandas.util.testing as tm
@@ -49,7 +49,7 @@ def test_iter(self):
for el in s:
# each element of the series is either a basestring/str or nan
- assert isinstance(el, compat.string_types) or isnull(el)
+ assert isinstance(el, compat.string_types) or isna(el)
# desired behavior is to iterate until everything would be nan on the
# next iter so make sure the last element of the iterator was 'l' in
@@ -1413,7 +1413,7 @@ def test_len(self):
values = Series(['foo', 'fooo', 'fooooo', np.nan, 'fooooooo'])
result = values.str.len()
- exp = values.map(lambda x: len(x) if notnull(x) else NA)
+ exp = values.map(lambda x: len(x) if notna(x) else NA)
tm.assert_series_equal(result, exp)
# mixed
@@ -1431,7 +1431,7 @@ def test_len(self):
'fooooooo')])
result = values.str.len()
- exp = values.map(lambda x: len(x) if notnull(x) else NA)
+ exp = values.map(lambda x: len(x) if notna(x) else NA)
tm.assert_series_equal(result, exp)
def test_findall(self):
@@ -2281,7 +2281,7 @@ def test_slice(self):
(3, 0, -1)]:
try:
result = values.str.slice(start, stop, step)
- expected = Series([s[start:stop:step] if not isnull(s) else NA
+ expected = Series([s[start:stop:step] if not isna(s) else NA
for s in values])
tm.assert_series_equal(result, expected)
except:
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index dd35e4375841e..5ab33bd6cc5e1 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -10,8 +10,8 @@
from distutils.version import LooseVersion
import pandas as pd
-from pandas import (Series, DataFrame, bdate_range, isnull,
- notnull, concat, Timestamp, Index)
+from pandas import (Series, DataFrame, bdate_range, isna,
+ notna, concat, Timestamp, Index)
import pandas.stats.moments as mom
import pandas.core.window as rwindow
import pandas.tseries.offsets as offsets
@@ -249,7 +249,7 @@ def test_count_nonnumeric_types(self):
tm.assert_frame_equal(result, expected)
result = df.rolling(1).count()
- expected = df.notnull().astype(float)
+ expected = df.notna().astype(float)
tm.assert_frame_equal(result, expected)
def test_window_with_args(self):
@@ -1223,7 +1223,7 @@ def test_rolling_apply_out_of_bounds(self):
# it works!
with catch_warnings(record=True):
result = mom.rolling_apply(arr, 10, np.sum)
- assert isnull(result).all()
+ assert isna(result).all()
with catch_warnings(record=True):
result = mom.rolling_apply(arr, 10, np.sum, min_periods=1)
@@ -1384,8 +1384,8 @@ def get_result(arr, window, min_periods=None, center=False):
arr2 = randn(20)
result = get_result(arr2, 10, min_periods=5)
- assert isnull(result[3])
- assert notnull(result[4])
+ assert isna(result[3])
+ assert notna(result[4])
# min_periods=0
result0 = get_result(arr, 20, min_periods=0)
@@ -1974,10 +1974,10 @@ def create_dataframes():
def is_constant(x):
values = x.values.ravel()
- return len(set(values[notnull(values)])) == 1
+ return len(set(values[notna(values)])) == 1
def no_nans(x):
- return x.notnull().all().all()
+ return x.notna().all().all()
# data is a tuple(object, is_contant, no_nans)
data = create_series() + create_dataframes()
@@ -2047,7 +2047,7 @@ def _test_moments_consistency(self, min_periods, count, mean, mock_mean,
var_debiasing_factors=None):
def _non_null_values(x):
values = x.values.ravel()
- return set(values[notnull(values)].tolist())
+ return set(values[notna(values)].tolist())
for (x, is_constant, no_nans) in self.data:
count_x = count(x)
@@ -2119,7 +2119,7 @@ def _non_null_values(x):
if isinstance(x, Series):
for (y, is_constant, no_nans) in self.data:
- if not x.isnull().equals(y.isnull()):
+ if not x.isna().equals(y.isna()):
# can only easily test two Series with similar
# structure
continue
@@ -2172,8 +2172,8 @@ def _weights(s, com, adjust, ignore_na):
w = Series(np.nan, index=s.index)
alpha = 1. / (1. + com)
if ignore_na:
- w[s.notnull()] = _weights(s[s.notnull()], com=com,
- adjust=adjust, ignore_na=False)
+ w[s.notna()] = _weights(s[s.notna()], com=com,
+ adjust=adjust, ignore_na=False)
elif adjust:
for i in range(len(s)):
if s.iat[i] == s.iat[i]:
@@ -2975,8 +2975,8 @@ def _check_expanding_ndarray(self, func, static_comp, has_min_periods=True,
arr2 = randn(20)
result = func(arr2, min_periods=5)
- assert isnull(result[3])
- assert notnull(result[4])
+ assert isna(result[3])
+ assert notna(result[4])
# min_periods=0
result0 = func(arr, min_periods=0)
diff --git a/pandas/tests/tseries/test_timezones.py b/pandas/tests/tseries/test_timezones.py
index c034a9c60ef1b..a9ecfd797a32b 100644
--- a/pandas/tests/tseries/test_timezones.py
+++ b/pandas/tests/tseries/test_timezones.py
@@ -18,7 +18,7 @@
from pandas.core.indexes.datetimes import bdate_range, date_range
from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas._libs import tslib
-from pandas import (Index, Series, DataFrame, isnull, Timestamp, NaT,
+from pandas import (Index, Series, DataFrame, isna, Timestamp, NaT,
DatetimeIndex, to_datetime)
from pandas.util.testing import (assert_frame_equal, assert_series_equal,
set_timezone)
@@ -931,7 +931,7 @@ def test_datetimeindex_tz_nat(self):
idx = to_datetime([Timestamp("2013-1-1", tz=self.tzstr('US/Eastern')),
NaT])
- assert isnull(idx[1])
+ assert isna(idx[1])
assert idx[0].tzinfo is not None
diff --git a/pandas/util/__init__.py b/pandas/util/__init__.py
index e86af930fef7c..202e58c916e47 100644
--- a/pandas/util/__init__.py
+++ b/pandas/util/__init__.py
@@ -1,2 +1,2 @@
-from pandas.core.util.hashing import hash_pandas_object, hash_array # noqa
from pandas.util._decorators import Appender, Substitution, cache_readonly # noqa
+from pandas.core.util.hashing import hash_pandas_object, hash_array # noqa
diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py
index 772b206f82e69..e406698fafe63 100644
--- a/pandas/util/_decorators.py
+++ b/pandas/util/_decorators.py
@@ -6,12 +6,30 @@
from functools import wraps, update_wrapper
-def deprecate(name, alternative, alt_name=None):
+def deprecate(name, alternative, alt_name=None, klass=None,
+ stacklevel=2):
+ """
+
+ Return a new function that emits a deprecation warning on use
+
+ Parameters
+ ----------
+ name : str
+ Name of function to deprecate
+ alternative : str
+ Name of function to use instead
+ alt_name : str, optional
+ Name to use in preference of alternative.__name__
+ klass : Warning, default FutureWarning
+ stacklevel : int, default 2
+
+ """
alt_name = alt_name or alternative.__name__
+ klass = klass or FutureWarning
def wrapper(*args, **kwargs):
warnings.warn("%s is deprecated. Use %s instead" % (name, alt_name),
- FutureWarning, stacklevel=2)
+ klass, stacklevel=stacklevel)
return alternative(*args, **kwargs)
return wrapper
| The existing `isnull` and `notnull` remain user-facing and will show a `DeprecationWarning`.
closes #15001
| https://api.github.com/repos/pandas-dev/pandas/pulls/16972 | 2017-07-15T23:34:33Z | 2017-07-25T10:18:24Z | 2017-07-25T10:18:24Z | 2017-10-27T19:17:10Z |
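The rename in the diff above is mechanical — every `isnull`/`notnull` call becomes `isna`/`notna`. A minimal sketch of the equivalence (assuming only that both spellings are importable, as the PR body says the old names remain user-facing):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

# the new spellings introduced by this PR
assert pd.isna(s).tolist() == [False, True, False]
assert pd.notna(s).tolist() == [True, False, True]

# the old spellings remain available as user-facing aliases
assert pd.isnull(s).equals(pd.isna(s))
assert pd.notnull(s).equals(pd.notna(s))
```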
DEPR: deprecate html.border option | diff --git a/doc/source/options.rst b/doc/source/options.rst
index 6ff5b76014c95..f373705a96f48 100644
--- a/doc/source/options.rst
+++ b/doc/source/options.rst
@@ -400,7 +400,7 @@ display.width 80 Width of the display in charact
display.html.table_schema False Whether to publish a Table Schema
representation for frontends that
support it.
-html.border 1 A ``border=value`` attribute is
+display.html.border 1 A ``border=value`` attribute is
inserted in the ``<table>`` tag
for the DataFrame HTML repr.
io.excel.xls.writer xlwt The default Excel writer engine for
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 34095d55b8cc9..f9fb4d231ccf8 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -116,6 +116,7 @@ Deprecations
~~~~~~~~~~~~
- :func:`read_excel()` has deprecated ``sheetname`` in favor of ``sheet_name`` for consistency with ``.to_excel()`` (:issue:`10559`).
+- ``pd.options.html.border`` has been deprecated in favor of ``pd.options.display.html.border`` (:issue:`15793`).
.. _whatsnew_0210.prior_deprecations:
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index e70db1d13e376..893064fc48038 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -202,6 +202,17 @@ def use_numexpr_cb(key):
(default: False)
"""
+pc_html_border_doc = """
+: int
+ A ``border=value`` attribute is inserted in the ``<table>`` tag
+ for the DataFrame HTML repr.
+"""
+
+pc_html_border_deprecation_warning = """\
+html.border has been deprecated, use display.html.border instead
+(currently both are identical)
+"""
+
pc_line_width_deprecation_warning = """\
line_width has been deprecated, use display.width instead (currently both are
identical)
@@ -381,6 +392,8 @@ def table_schema_cb(key):
validator=is_bool)
cf.register_option('html.table_schema', False, pc_table_schema_doc,
validator=is_bool, cb=table_schema_cb)
+ cf.register_option('html.border', 1, pc_html_border_doc,
+ validator=is_int)
cf.deprecate_option('display.line_width',
@@ -390,16 +403,13 @@ def table_schema_cb(key):
cf.deprecate_option('display.height', msg=pc_height_deprecation_warning,
rkey='display.max_rows')
-pc_html_border_doc = """
-: int
- A ``border=value`` attribute is inserted in the ``<table>`` tag
- for the DataFrame HTML repr.
-"""
-
with cf.config_prefix('html'):
cf.register_option('border', 1, pc_html_border_doc,
validator=is_int)
+cf.deprecate_option('html.border', msg=pc_html_border_deprecation_warning,
+ rkey='display.html.border')
+
tc_sim_interactive_doc = """
: boolean
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 0627ca9179509..23eb3bb05fd0a 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1064,7 +1064,7 @@ def __init__(self, formatter, classes=None, max_rows=None, max_cols=None,
self.max_cols < len(self.fmt.columns))
self.notebook = notebook
if border is None:
- border = get_option('html.border')
+ border = get_option('display.html.border')
self.border = border
def write(self, s, indent=0):
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 9f4e532ec2287..1e174c34221d5 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -1401,7 +1401,7 @@ def test_to_html_border(self):
def test_to_html_border_option(self):
df = DataFrame({'A': [1, 2]})
- with pd.option_context('html.border', 0):
+ with pd.option_context('display.html.border', 0):
result = df.to_html()
assert 'border="0"' in result
assert 'border="0"' in df._repr_html_()
@@ -1411,6 +1411,11 @@ def test_to_html_border_zero(self):
result = df.to_html(border=0)
assert 'border="0"' in result
+ def test_display_option_warning(self):
+ with tm.assert_produces_warning(DeprecationWarning,
+ check_stacklevel=False):
+ pd.options.html.border
+
def test_to_html(self):
# big mixed
biggie = DataFrame({'A': np.random.randn(200),
| Use display.html.border instead
- [x] closes #15793
- [x] tests added / passed
- [x] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16970 | 2017-07-15T22:57:20Z | 2017-07-16T15:32:45Z | 2017-07-16T15:32:45Z | 2017-07-16T15:40:21Z |
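The relocated option can be exercised directly; a short sketch (only the new `display.html.border` spelling is shown, since the deprecated `html.border` location was later removed entirely):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2]})

# the option now lives under the display namespace
with pd.option_context('display.html.border', 0):
    assert pd.get_option('display.html.border') == 0
    html = df.to_html()

# the default (1) is restored once the context exits
assert pd.get_option('display.html.border') == 1
```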
ERR: Raise ValueError when setting scalars in a dataframe with no index ( #16823) | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index b8b06ee0fe94e..1e9c402dac73e 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -706,6 +706,8 @@ Other API Changes
- Restricted DateOffset keyword arguments. Previously, ``DateOffset`` subclasses allowed arbitrary keyword arguments which could lead to unexpected behavior. Now, only valid arguments will be accepted. (:issue:`17176`).
- Pandas no longer registers matplotlib converters on import. The converters
will be registered and used when the first plot is draw (:issue:`17710`)
+- Setting on a column with a scalar value and 0-len index now raises a ``ValueError`` (:issue:`16823`)
+
.. _whatsnew_0210.deprecations:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 142ccf1f034bc..d907492759dbd 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2531,13 +2531,17 @@ def _ensure_valid_index(self, value):
passed value
"""
# GH5632, make sure that we are a Series convertible
- if not len(self.index) and is_list_like(value):
+ if not len(self.index):
+ if not is_list_like(value):
+ # GH16823, Raise an error due to loss of information
+ raise ValueError('If using all scalar values, you must pass'
+ ' an index')
try:
value = Series(value)
except:
- raise ValueError('Cannot set a frame with no defined index '
- 'and a value that cannot be converted to a '
- 'Series')
+ raise ValueError('Cannot set a frame with no defined'
+ 'index and a value that cannot be '
+ 'converted to a Series')
self._data = self._data.reindex_axis(value.index.copy(), axis=1,
fill_value=np.nan)
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index d19de6030d473..38c28af4d6ecb 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -454,6 +454,9 @@ def crosstab(index, columns, values=None, rownames=None, colnames=None,
from pandas import DataFrame
df = DataFrame(data, index=common_idx)
+ if not len(df):
+ return DataFrame(index=common_idx)
+
if values is None:
df['__dummy__'] = 0
kwargs = {'aggfunc': len, 'fill_value': 0}
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index d00f56830a6fa..1a16e4ef48b64 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -721,6 +721,11 @@ def test_setitem_empty_frame_with_boolean(self):
df[df > df2] = 47
assert_frame_equal(df, df2)
+ def test_setitem_scalars_no_index(self):
+ # GH16823
+ df = DataFrame()
+ pytest.raises(ValueError, df.__setitem__, 'foo', 1)
+
def test_getitem_empty_frame_with_boolean(self):
# Test for issue #11859
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index c6f38aeba9e87..bf3a840aced8c 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -423,15 +423,14 @@ def test_loc_setitem_consistency(self):
def test_loc_setitem_consistency_empty(self):
# empty (essentially noops)
- expected = DataFrame(columns=['x', 'y'])
- expected['x'] = expected['x'].astype(np.int64)
+ # GH16823
df = DataFrame(columns=['x', 'y'])
- df.loc[:, 'x'] = 1
- tm.assert_frame_equal(df, expected)
+ with tm.assert_raises_regex(ValueError, 'If using all scalar values'):
+ df.loc[:, 'x'] = 1
df = DataFrame(columns=['x', 'y'])
- df['x'] = 1
- tm.assert_frame_equal(df, expected)
+ with tm.assert_raises_regex(ValueError, 'If using all scalar values'):
+ df['x'] = 1
def test_loc_setitem_consistency_slice_column_len(self):
# .loc[:,column] setting with slice == len of the column
diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py
index 41ddfe934a131..16f325393649f 100644
--- a/pandas/tests/indexing/test_partial.py
+++ b/pandas/tests/indexing/test_partial.py
@@ -575,24 +575,16 @@ def f():
def test_partial_set_empty_frame_row(self):
# GH5720, GH5744
# don't create rows when empty
- expected = DataFrame(columns=['A', 'B', 'New'],
- index=pd.Index([], dtype='int64'))
- expected['A'] = expected['A'].astype('int64')
- expected['B'] = expected['B'].astype('float64')
- expected['New'] = expected['New'].astype('float64')
-
df = DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
y = df[df.A > 5]
- y['New'] = np.nan
- tm.assert_frame_equal(y, expected)
- # tm.assert_frame_equal(y,expected)
+ # GH16823
+ # Setting a column with a scalar and no index should raise
+ with tm.assert_raises_regex(ValueError, 'If using all scalar values'):
+ y['New'] = np.nan
- expected = DataFrame(columns=['a', 'b', 'c c', 'd'])
- expected['d'] = expected['d'].astype('int64')
df = DataFrame(columns=['a', 'b', 'c c'])
- df['d'] = 3
- tm.assert_frame_equal(df, expected)
- tm.assert_series_equal(df['c c'], Series(name='c c', dtype=object))
+ with tm.assert_raises_regex(ValueError, 'If using all scalar values'):
+ df['d'] = 3
# reindex columns is ok
df = DataFrame({"A": [1, 2, 3], "B": [1.2, 4.2, 5.2]})
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index 07d3052c16756..4126bb1de84d7 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -1226,7 +1226,8 @@ def test_crosstab_no_overlap(self):
s2 = pd.Series([4, 5, 6], index=[4, 5, 6])
actual = crosstab(s1, s2)
- expected = pd.DataFrame()
+ expected = pd.DataFrame(
+ index=pd.Index([], dtype='int64')).astype('int64')
tm.assert_frame_equal(actual, expected)
| - [x] closes #16823
- [x] tests added / passed
- [x] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [x] whatsnew entry
Trying to set a column with a scalar value and no index now raises a ValueError, similar to the behaviour of the DataFrame constructor.
| https://api.github.com/repos/pandas-dev/pandas/pulls/16968 | 2017-07-15T22:30:07Z | 2017-10-08T17:04:56Z | 2017-10-08T17:04:56Z | 2017-10-09T20:12:52Z |
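The rationale is that a bare scalar carries no length information for an empty index. A hedged sketch (this exact `ValueError` on column assignment was relaxed again in later pandas versions, so the assertions below stick to behavior that has stayed stable: supplying an index up front, and the parallel constructor check the PR body cites):

```python
import pandas as pd

# with an index already in place, a scalar broadcasts unambiguously
df = pd.DataFrame(index=range(3))
df['x'] = 1
assert df['x'].tolist() == [1, 1, 1]

# the constructor draws the same line for all-scalar input without an index
raised = False
try:
    pd.DataFrame({'x': 1})
except ValueError as err:
    raised = True
    assert 'index' in str(err)
assert raised
```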
DOC: channel from pandas to conda-forge | diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index bfcf560565977..b44d0f36b86a1 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -171,7 +171,7 @@ other dependencies, you can install them as follows::
To install *all* pandas dependencies you can do the following::
- conda install -n pandas_dev -c pandas --file ci/requirements_all.txt
+ conda install -n pandas_dev -c conda-forge --file ci/requirements_all.txt
To work in this environment, Windows users should ``activate`` it as follows::
| Fixes the copy/paste command for installing the `requirements_all.txt` file.
It currently fails because `nbsphinx` is not in the `pandas` channel. | https://api.github.com/repos/pandas-dev/pandas/pulls/16966 | 2017-07-15T21:57:11Z | 2017-07-15T23:49:09Z | 2017-07-15T23:49:09Z | 2017-07-15T23:49:33Z |
DOC: document convention argument for resample() | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a4bb746722c1e..e4e2e0093b1a6 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4826,6 +4826,8 @@ def resample(self, rule, how=None, axis=0, fill_method=None, closed=None,
label : {'right', 'left'}
Which bin edge label to label bucket with
convention : {'start', 'end', 's', 'e'}
+ For PeriodIndex only, controls whether to use the start or end of
+ `rule`
loffset : timedelta
Adjust the resampled time labels
base : int, default 0
@@ -4946,6 +4948,47 @@ def resample(self, rule, how=None, axis=0, fill_method=None, closed=None,
2000-01-01 00:06:00 26
Freq: 3T, dtype: int64
+ For a Series with a PeriodIndex, the keyword `convention` can be
+ used to control whether to use the start or end of `rule`.
+
+ >>> s = pd.Series([1, 2], index=pd.period_range('2012-01-01',
+ freq='A',
+ periods=2))
+ >>> s
+ 2012 1
+ 2013 2
+ Freq: A-DEC, dtype: int64
+
+ Resample by month using 'start' `convention`. Values are assigned to
+ the first month of the period.
+
+ >>> s.resample('M', convention='start').asfreq().head()
+ 2012-01 1.0
+ 2012-02 NaN
+ 2012-03 NaN
+ 2012-04 NaN
+ 2012-05 NaN
+ Freq: M, dtype: float64
+
+ Resample by month using 'end' `convention`. Values are assigned to
+ the last month of the period.
+
+ >>> s.resample('M', convention='end').asfreq()
+ 2012-12 1.0
+ 2013-01 NaN
+ 2013-02 NaN
+ 2013-03 NaN
+ 2013-04 NaN
+ 2013-05 NaN
+ 2013-06 NaN
+ 2013-07 NaN
+ 2013-08 NaN
+ 2013-09 NaN
+ 2013-10 NaN
+ 2013-11 NaN
+ 2013-12 2.0
+ Freq: M, dtype: float64
+
For DataFrame objects, the keyword ``on`` can be used to specify the
column instead of the index for resampling.
| - [ ] closes #15432
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16965 | 2017-07-15T21:51:46Z | 2017-07-16T15:55:34Z | 2017-07-16T15:55:34Z | 2017-07-16T15:55:35Z |
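The documented `convention` keyword applies only to `PeriodIndex` resampling (which newer pandas has since deprecated), so here is a hedged sketch of the surrounding `resample` machinery on a `DatetimeIndex`, where the docstring's other examples live and which remains stable across versions:

```python
import pandas as pd

s = pd.Series(range(9),
              index=pd.date_range('2000-01-01', periods=9, freq='min'))

# downsample into 3-minute bins, as in the docstring examples above;
# for a PeriodIndex, convention='start'/'end' would instead control
# which end of each period the values are anchored to
result = s.resample('3min').sum()
assert result.tolist() == [3, 12, 21]
```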
DOC: misspelling in DatetimeIndex.indexer_between_time [CI skip] | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index d8aae2367976b..e6bc1790f2992 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -1882,7 +1882,7 @@ def indexer_between_time(self, start_time, end_time, include_start=True,
Select values between particular times of day (e.g., 9:00-9:30AM).
Return values of the index between two times. If start_time or
- end_time are strings then tseres.tools.to_time is used to convert to
+ end_time are strings then tseries.tools.to_time is used to convert to
a time object.
Parameters
| - [ ] fix typo in DatetimeIndex.indexer_between_time
- [ ] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
| https://api.github.com/repos/pandas-dev/pandas/pulls/16963 | 2017-07-15T21:21:24Z | 2017-07-16T01:20:56Z | 2017-07-16T01:20:56Z | 2017-07-16T01:20:58Z |
Fixes SparseSeries initiated with dictionary raising AttributeError | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 6ddf6029b99bb..f8bd643f05ca0 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -180,7 +180,7 @@ Groupby/Resample/Rolling
Sparse
^^^^^^
-
+- Bug in ``SparseSeries`` raises ``AttributeError`` when a dictionary is passed in as data (:issue:`16777`)
Reshaping
diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py
index 9dd061e26ba06..1bc9cf5379930 100644
--- a/pandas/core/sparse/series.py
+++ b/pandas/core/sparse/series.py
@@ -146,10 +146,9 @@ def __init__(self, data=None, index=None, sparse_index=None, kind='block',
data = data._data
elif isinstance(data, (Series, dict)):
- if index is None:
- index = data.index.view()
+ data = Series(data, index=index)
+ index = data.index.view()
- data = Series(data)
res = make_sparse(data, kind=kind, fill_value=fill_value)
data, sparse_index, fill_value = res
diff --git a/pandas/tests/sparse/test_series.py b/pandas/tests/sparse/test_series.py
index b524d6bfab418..bb56f8a51897a 100644
--- a/pandas/tests/sparse/test_series.py
+++ b/pandas/tests/sparse/test_series.py
@@ -88,6 +88,24 @@ def setup_method(self, method):
self.ziseries2 = SparseSeries(arr, index=index, kind='integer',
fill_value=0)
+ def test_constructor_dict_input(self):
+ # gh-16905
+ constructor_dict = {1: 1.}
+ index = [0, 1, 2]
+
+ # Series with index passed in
+ series = pd.Series(constructor_dict)
+ expected = SparseSeries(series, index=index)
+
+ result = SparseSeries(constructor_dict, index=index)
+ tm.assert_sp_series_equal(result, expected)
+
+ # Series with index and dictionary with no index
+ expected = SparseSeries(series)
+
+ result = SparseSeries(constructor_dict)
+ tm.assert_sp_series_equal(result, expected)
+
def test_constructor_dtype(self):
arr = SparseSeries([np.nan, 1, 2, np.nan])
assert arr.dtype == np.float64
| - [x] closes #16905
- [x] tests added / passed
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16960 | 2017-07-15T20:56:55Z | 2017-07-19T10:17:57Z | 2017-07-19T10:17:57Z | 2017-07-19T10:23:32Z |
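The fix works by routing the dict through the `Series` constructor so that a passed `index` is honored before sparsifying. `SparseSeries` itself was later removed from pandas, so this sketches only the underlying dense `Series` semantics the fix relies on:

```python
import pandas as pd

# a dict is reindexed against the passed index; missing keys become NaN
s = pd.Series({1: 1.0}, index=[0, 1, 2])
assert pd.isna(s.loc[0])
assert s.loc[1] == 1.0
assert pd.isna(s.loc[2])
```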
Create a 'Y' alias for date_range yearly frequency | diff --git a/pandas/tests/tseries/test_frequencies.py b/pandas/tests/tseries/test_frequencies.py
index 54d12317b0bf8..4bcd0b49db7e0 100644
--- a/pandas/tests/tseries/test_frequencies.py
+++ b/pandas/tests/tseries/test_frequencies.py
@@ -248,9 +248,10 @@ def test_anchored_shortcuts(self):
# ensure invalid cases fail as expected
invalid_anchors = ['SM-0', 'SM-28', 'SM-29',
- 'SM-FOO', 'BSM', 'SM--1'
+ 'SM-FOO', 'BSM', 'SM--1',
'SMS-1', 'SMS-28', 'SMS-30',
- 'SMS-BAR', 'BSMS', 'SMS--2']
+ 'SMS-BAR', 'SMS-BYR' 'BSMS',
+ 'SMS--2']
for invalid_anchor in invalid_anchors:
with tm.assert_raises_regex(ValueError,
'Invalid frequency: '):
@@ -292,11 +293,15 @@ def test_get_rule_month():
result = frequencies._get_rule_month('A-DEC')
assert (result == 'DEC')
+ result = frequencies._get_rule_month('Y-DEC')
+ assert (result == 'DEC')
result = frequencies._get_rule_month(offsets.YearEnd())
assert (result == 'DEC')
result = frequencies._get_rule_month('A-MAY')
assert (result == 'MAY')
+ result = frequencies._get_rule_month('Y-MAY')
+ assert (result == 'MAY')
result = frequencies._get_rule_month(offsets.YearEnd(month=5))
assert (result == 'MAY')
@@ -305,6 +310,10 @@ def test_period_str_to_code():
assert (frequencies._period_str_to_code('A') == 1000)
assert (frequencies._period_str_to_code('A-DEC') == 1000)
assert (frequencies._period_str_to_code('A-JAN') == 1001)
+ assert (frequencies._period_str_to_code('Y') == 1000)
+ assert (frequencies._period_str_to_code('Y-DEC') == 1000)
+ assert (frequencies._period_str_to_code('Y-JAN') == 1001)
+
assert (frequencies._period_str_to_code('Q') == 2000)
assert (frequencies._period_str_to_code('Q-DEC') == 2000)
assert (frequencies._period_str_to_code('Q-FEB') == 2002)
@@ -349,6 +358,10 @@ def test_freq_code(self):
assert frequencies.get_freq('3A') == 1000
assert frequencies.get_freq('-1A') == 1000
+ assert frequencies.get_freq('Y') == 1000
+ assert frequencies.get_freq('3Y') == 1000
+ assert frequencies.get_freq('-1Y') == 1000
+
assert frequencies.get_freq('W') == 4000
assert frequencies.get_freq('W-MON') == 4001
assert frequencies.get_freq('W-FRI') == 4005
@@ -369,6 +382,13 @@ def test_freq_group(self):
assert frequencies.get_freq_group('-1A') == 1000
assert frequencies.get_freq_group('A-JAN') == 1000
assert frequencies.get_freq_group('A-MAY') == 1000
+
+ assert frequencies.get_freq_group('Y') == 1000
+ assert frequencies.get_freq_group('3Y') == 1000
+ assert frequencies.get_freq_group('-1Y') == 1000
+ assert frequencies.get_freq_group('Y-JAN') == 1000
+ assert frequencies.get_freq_group('Y-MAY') == 1000
+
assert frequencies.get_freq_group(offsets.YearEnd()) == 1000
assert frequencies.get_freq_group(offsets.YearEnd(month=1)) == 1000
assert frequencies.get_freq_group(offsets.YearEnd(month=5)) == 1000
@@ -790,12 +810,6 @@ def test_series(self):
for freq in [None, 'L']:
s = Series(period_range('2013', periods=10, freq=freq))
pytest.raises(TypeError, lambda: frequencies.infer_freq(s))
- for freq in ['Y']:
-
- msg = frequencies._INVALID_FREQ_ERROR
- with tm.assert_raises_regex(ValueError, msg):
- s = Series(period_range('2013', periods=10, freq=freq))
- pytest.raises(TypeError, lambda: frequencies.infer_freq(s))
# DateTimeIndex
for freq in ['M', 'L', 'S']:
@@ -812,11 +826,12 @@ def test_legacy_offset_warnings(self):
'W@FRI', 'W@SAT', 'W@SUN', 'Q@JAN', 'Q@FEB', 'Q@MAR',
'A@JAN', 'A@FEB', 'A@MAR', 'A@APR', 'A@MAY', 'A@JUN',
'A@JUL', 'A@AUG', 'A@SEP', 'A@OCT', 'A@NOV', 'A@DEC',
- 'WOM@1MON', 'WOM@2MON', 'WOM@3MON', 'WOM@4MON',
- 'WOM@1TUE', 'WOM@2TUE', 'WOM@3TUE', 'WOM@4TUE',
- 'WOM@1WED', 'WOM@2WED', 'WOM@3WED', 'WOM@4WED',
- 'WOM@1THU', 'WOM@2THU', 'WOM@3THU', 'WOM@4THU'
- 'WOM@1FRI', 'WOM@2FRI', 'WOM@3FRI', 'WOM@4FRI']
+ 'Y@JAN', 'WOM@1MON', 'WOM@2MON', 'WOM@3MON',
+ 'WOM@4MON', 'WOM@1TUE', 'WOM@2TUE', 'WOM@3TUE',
+ 'WOM@4TUE', 'WOM@1WED', 'WOM@2WED', 'WOM@3WED',
+ 'WOM@4WED', 'WOM@1THU', 'WOM@2THU', 'WOM@3THU',
+ 'WOM@4THU', 'WOM@1FRI', 'WOM@2FRI', 'WOM@3FRI',
+ 'WOM@4FRI']
msg = frequencies._INVALID_FREQ_ERROR
for freq in freqs:
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index c5f6c00a4005a..5c3c90520d1c3 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -422,6 +422,27 @@ def get_period_alias(offset_str):
return _offset_to_period_map.get(offset_str, None)
+_pure_alias = {
+ # 'A' is equivalent to 'Y'.
+ 'Y': 'A',
+ 'YS': 'AS',
+ 'BY': 'BA',
+ 'BYS': 'BAS',
+ 'Y-DEC': 'A-DEC',
+ 'Y-JAN': 'A-JAN',
+ 'Y-FEB': 'A-FEB',
+ 'Y-MAR': 'A-MAR',
+ 'Y-APR': 'A-APR',
+ 'Y-MAY': 'A-MAY',
+ 'Y-JUN': 'A-JUN',
+ 'Y-JUL': 'A-JUL',
+ 'Y-AUG': 'A-AUG',
+ 'Y-SEP': 'A-SEP',
+ 'Y-OCT': 'A-OCT',
+ 'Y-NOV': 'A-NOV',
+}
+
+
_lite_rule_alias = {
'W': 'W-SUN',
'Q': 'Q-DEC',
@@ -718,6 +739,7 @@ def get_standard_freq(freq):
def _period_str_to_code(freqstr):
+ freqstr = _pure_alias.get(freqstr, freqstr)
freqstr = _lite_rule_alias.get(freqstr, freqstr)
if freqstr not in _dont_uppercase:
| - [x] closes #9313
- [x] tests added / passed
- [x] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [ ] whatsnew entry
- [ ] Add alias 'Y' to docs.
What's the best place to introduce this in docs?
Moreover, note that frequency inference still reports yearly frequencies as 'A' rather than 'Y'. | https://api.github.com/repos/pandas-dev/pandas/pulls/16958 | 2017-07-15T20:26:52Z | 2017-07-16T08:04:36Z | 2017-07-16T08:04:36Z | 2017-07-16T10:09:06Z
Deprecating Series.argmin and Series.argmax (#16830) | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 36551fa30c3ad..61e81387f8fec 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -487,11 +487,33 @@ Other API Changes
Deprecations
~~~~~~~~~~~~
+
- :func:`read_excel()` has deprecated ``sheetname`` in favor of ``sheet_name`` for consistency with ``.to_excel()`` (:issue:`10559`).
- ``pd.options.html.border`` has been deprecated in favor of ``pd.options.display.html.border`` (:issue:`15793`).
- :func:`SeriesGroupBy.nth` has deprecated ``True`` in favor of ``'all'`` for its kwarg ``dropna`` (:issue:`11038`).
- :func:`DataFrame.as_blocks` is deprecated, as this is exposing the internal implementation (:issue:`17302`)
+.. _whatsnew_0210.deprecations.argmin_min:
+
+Series.argmax and Series.argmin
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- The behavior of :func:`Series.argmax` has been deprecated in favor of :func:`Series.idxmax` (:issue:`16830`)
+- The behavior of :func:`Series.argmin` has been deprecated in favor of :func:`Series.idxmin` (:issue:`16830`)
+
+For compatibility with NumPy arrays, ``pd.Series`` implements ``argmax`` and
+``argmin``. Since pandas 0.13.0, ``argmax`` has been an alias for
+:meth:`pandas.Series.idxmax`, and ``argmin`` has been an alias for
+:meth:`pandas.Series.idxmin`. They return the *label* of the maximum or minimum,
+rather than the *position*.
+
+We've deprecated the current behavior of ``Series.argmax`` and
+``Series.argmin``. Using either of these will emit a ``FutureWarning``. Use
+:meth:`Series.idxmax` if you want the label of the maximum. Use
+``Series.values.argmax()`` if you want the position of the maximum. Likewise for
+the minimum. In a future release ``Series.argmax`` and ``Series.argmin`` will
+return the position of the maximum or minimum.
+
.. _whatsnew_0210.prior_deprecations:
Removal of prior version deprecations/changes
diff --git a/pandas/core/series.py b/pandas/core/series.py
index db8ee2529ef57..ac3cd02583d24 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -69,7 +69,8 @@
import pandas.core.common as com
import pandas.core.nanops as nanops
import pandas.io.formats.format as fmt
-from pandas.util._decorators import Appender, deprecate_kwarg, Substitution
+from pandas.util._decorators import (
+ Appender, deprecate, deprecate_kwarg, Substitution)
from pandas.util._validators import validate_bool_kwarg
from pandas._libs import index as libindex, tslib as libts, lib, iNaT
@@ -1272,7 +1273,7 @@ def duplicated(self, keep='first'):
def idxmin(self, axis=None, skipna=True, *args, **kwargs):
"""
- Index of first occurrence of minimum of values.
+ Index *label* of the first occurrence of minimum of values.
Parameters
----------
@@ -1285,7 +1286,9 @@ def idxmin(self, axis=None, skipna=True, *args, **kwargs):
Notes
-----
- This method is the Series version of ``ndarray.argmin``.
+ This method is the Series version of ``ndarray.argmin``. This method
+ returns the label of the minimum, while ``ndarray.argmin`` returns
+ the position. To get the position, use ``series.values.argmin()``.
See Also
--------
@@ -1300,7 +1303,7 @@ def idxmin(self, axis=None, skipna=True, *args, **kwargs):
def idxmax(self, axis=None, skipna=True, *args, **kwargs):
"""
- Index of first occurrence of maximum of values.
+ Index *label* of the first occurrence of maximum of values.
Parameters
----------
@@ -1313,7 +1316,9 @@ def idxmax(self, axis=None, skipna=True, *args, **kwargs):
Notes
-----
- This method is the Series version of ``ndarray.argmax``.
+ This method is the Series version of ``ndarray.argmax``. This method
+ returns the label of the maximum, while ``ndarray.argmax`` returns
+ the position. To get the position, use ``series.values.argmax()``.
See Also
--------
@@ -1327,8 +1332,18 @@ def idxmax(self, axis=None, skipna=True, *args, **kwargs):
return self.index[i]
# ndarray compat
- argmin = idxmin
- argmax = idxmax
+ argmin = deprecate('argmin', idxmin,
+ msg="'argmin' is deprecated. Use 'idxmin' instead. "
+ "The behavior of 'argmin' will be corrected to "
+ "return the positional minimum in the future. "
+ "Use 'series.values.argmin' to get the position of "
+ "the minimum now.")
+ argmax = deprecate('argmax', idxmax,
+ msg="'argmax' is deprecated. Use 'idxmax' instead. "
+ "The behavior of 'argmax' will be corrected to "
+ "return the positional maximum in the future. "
+ "Use 'series.values.argmax' to get the position of "
+ "the maximum now.")
def round(self, decimals=0, *args, **kwargs):
"""
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 547b9676717c9..386d9c3ffe30d 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -598,9 +598,7 @@ def to_string(self):
text = self._join_multiline(*strcols)
else: # max_cols == 0. Try to fit frame to terminal
text = self.adj.adjoin(1, *strcols).split('\n')
- row_lens = Series(text).apply(len)
- max_len_col_ix = np.argmax(row_lens)
- max_len = row_lens[max_len_col_ix]
+ max_len = Series(text).str.len().max()
headers = [ele[0] for ele in strcols]
# Size of last col determines dot col size. See
# `self._to_str_columns
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 914181dc94154..9f5e4f2ac4b6e 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -1242,16 +1242,31 @@ def test_idxmin(self):
result = s.idxmin()
assert result == 1
- def test_numpy_argmin(self):
- # argmin is aliased to idxmin
- data = np.random.randint(0, 11, size=10)
- result = np.argmin(Series(data))
- assert result == np.argmin(data)
+ def test_numpy_argmin_deprecated(self):
+ # See gh-16830
+ data = np.arange(1, 11)
+
+ s = Series(data, index=data)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # The deprecation of Series.argmin also causes a deprecation
+ # warning when calling np.argmin. This behavior is temporary
+ # until the implementation of Series.argmin is corrected.
+ result = np.argmin(s)
+
+ assert result == 1
+
+ with tm.assert_produces_warning(FutureWarning):
+ # argmin is aliased to idxmin
+ result = s.argmin()
+
+ assert result == 1
if not _np_version_under1p10:
- msg = "the 'out' parameter is not supported"
- tm.assert_raises_regex(ValueError, msg, np.argmin,
- Series(data), out=data)
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ msg = "the 'out' parameter is not supported"
+ tm.assert_raises_regex(ValueError, msg, np.argmin,
+ s, out=data)
def test_idxmax(self):
# test idxmax
@@ -1297,17 +1312,30 @@ def test_idxmax(self):
result = s.idxmin()
assert result == 1.1
- def test_numpy_argmax(self):
+ def test_numpy_argmax_deprecated(self):
+ # See gh-16830
+ data = np.arange(1, 11)
+
+ s = Series(data, index=data)
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # The deprecation of Series.argmax also causes a deprecation
+ # warning when calling np.argmax. This behavior is temporary
+ # until the implementation of Series.argmax is corrected.
+ result = np.argmax(s)
+ assert result == 10
+
+ with tm.assert_produces_warning(FutureWarning):
+ # argmax is aliased to idxmax
+ result = s.argmax()
- # argmax is aliased to idxmax
- data = np.random.randint(0, 11, size=10)
- result = np.argmax(Series(data))
- assert result == np.argmax(data)
+ assert result == 10
if not _np_version_under1p10:
- msg = "the 'out' parameter is not supported"
- tm.assert_raises_regex(ValueError, msg, np.argmax,
- Series(data), out=data)
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ msg = "the 'out' parameter is not supported"
+ tm.assert_raises_regex(ValueError, msg, np.argmax,
+ s, out=data)
def test_ptp(self):
N = 1000
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index d0805e2bb54d2..26a0078efd35d 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -345,7 +345,7 @@ def test_ndarray_compat(self):
index=date_range('1/1/2000', periods=1000))
def f(x):
- return x[x.argmax()]
+ return x[x.idxmax()]
result = tsdf.apply(f)
expected = tsdf.max()
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index 114a055de8195..c8cc80b1cf4b1 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -1872,33 +1872,33 @@ def test_op_duplicate_index(self):
),
]
)
- def test_assert_argminmax_raises(self, test_input, error_type):
+ def test_assert_idxminmax_raises(self, test_input, error_type):
"""
Cases where ``Series.argmax`` and related should raise an exception
"""
with pytest.raises(error_type):
- test_input.argmin()
+ test_input.idxmin()
with pytest.raises(error_type):
- test_input.argmin(skipna=False)
+ test_input.idxmin(skipna=False)
with pytest.raises(error_type):
- test_input.argmax()
+ test_input.idxmax()
with pytest.raises(error_type):
- test_input.argmax(skipna=False)
+ test_input.idxmax(skipna=False)
- def test_argminmax_with_inf(self):
+ def test_idxminmax_with_inf(self):
# For numeric data with NA and Inf (GH #13595)
s = pd.Series([0, -np.inf, np.inf, np.nan])
- assert s.argmin() == 1
- assert np.isnan(s.argmin(skipna=False))
+ assert s.idxmin() == 1
+ assert np.isnan(s.idxmin(skipna=False))
- assert s.argmax() == 2
- assert np.isnan(s.argmax(skipna=False))
+ assert s.idxmax() == 2
+ assert np.isnan(s.idxmax(skipna=False))
# Using old-style behavior that treats floating point nan, -inf, and
# +inf as missing
with pd.option_context('mode.use_inf_as_na', True):
- assert s.argmin() == 0
- assert np.isnan(s.argmin(skipna=False))
- assert s.argmax() == 0
- np.isnan(s.argmax(skipna=False))
+ assert s.idxmin() == 0
+ assert np.isnan(s.idxmin(skipna=False))
+ assert s.idxmax() == 0
+ np.isnan(s.idxmax(skipna=False))
diff --git a/pandas/tests/sparse/test_series.py b/pandas/tests/sparse/test_series.py
index b44314d4e733b..451f369593347 100644
--- a/pandas/tests/sparse/test_series.py
+++ b/pandas/tests/sparse/test_series.py
@@ -1379,11 +1379,25 @@ def test_numpy_func_call(self):
# numpy passes in 'axis=None' or `axis=-1'
funcs = ['sum', 'cumsum', 'var', 'mean',
'prod', 'cumprod', 'std', 'argsort',
- 'argmin', 'argmax', 'min', 'max']
+ 'min', 'max']
for func in funcs:
for series in ('bseries', 'zbseries'):
getattr(np, func)(getattr(self, series))
+ def test_deprecated_numpy_func_call(self):
+ # NOTE: These should be added to the 'test_numpy_func_call' test above
+ # once the behavior of argmin/argmax is corrected.
+ funcs = ['argmin', 'argmax']
+ for func in funcs:
+ for series in ('bseries', 'zbseries'):
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ getattr(np, func)(getattr(self, series))
+
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ getattr(getattr(self, series), func)()
+
@pytest.mark.parametrize(
'datetime_type', (np.datetime64,
diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py
index 3733e4311aa73..9e4e5515a292b 100644
--- a/pandas/util/_decorators.py
+++ b/pandas/util/_decorators.py
@@ -7,7 +7,7 @@
def deprecate(name, alternative, alt_name=None, klass=None,
- stacklevel=2):
+ stacklevel=2, msg=None):
"""
Return a new function that emits a deprecation warning on use.
@@ -21,14 +21,16 @@ def deprecate(name, alternative, alt_name=None, klass=None,
Name to use in preference of alternative.__name__
klass : Warning, default FutureWarning
stacklevel : int, default 2
+ msg : str
+ The message to display in the warning.
+ Default is '{name} is deprecated. Use {alt_name} instead.'
"""
alt_name = alt_name or alternative.__name__
klass = klass or FutureWarning
+ msg = msg or "{} is deprecated. Use {} instead".format(name, alt_name)
def wrapper(*args, **kwargs):
- msg = "{name} is deprecated. Use {alt_name} instead".format(
- name=name, alt_name=alt_name)
warnings.warn(msg, klass, stacklevel=stacklevel)
return alternative(*args, **kwargs)
return wrapper
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [x] whatsnew entry
This is one step toward closing issue #16830. After this, I will post another pull request containing the code to implement the expected behavior of argmax and argmin for both Series and DataFrame. | https://api.github.com/repos/pandas-dev/pandas/pulls/16955 | 2017-07-15T19:43:43Z | 2017-09-27T15:07:48Z | 2017-09-27T15:07:48Z | 2017-09-27T15:07:48Z |
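The deprecation rests on the small `deprecate` helper extended in `pandas/util/_decorators.py`: wrap the alternative, warn, delegate. A self-contained sketch of the same pattern, using a toy `idxmax` over a plain list rather than a Series:

```python
import warnings

def deprecate(name, alternative, alt_name=None, klass=None,
              stacklevel=2, msg=None):
    """Return a wrapper that warns on use, then delegates to ``alternative``."""
    alt_name = alt_name or alternative.__name__
    klass = klass or FutureWarning
    msg = msg or "{} is deprecated. Use {} instead".format(name, alt_name)

    def wrapper(*args, **kwargs):
        warnings.warn(msg, klass, stacklevel=stacklevel)
        return alternative(*args, **kwargs)
    return wrapper

def idxmax(values):
    """Toy idxmax: position of the maximum in a plain list."""
    return max(range(len(values)), key=values.__getitem__)

argmax = deprecate('argmax', idxmax,
                   msg="'argmax' is deprecated. Use 'idxmax' instead.")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    result = argmax([3, 9, 1])  # warns, then returns idxmax's answer

print(result)  # -> 1
```

The custom `msg` argument is exactly what lets this PR explain *why* the alias is deprecated, instead of emitting only the generic "X is deprecated. Use Y instead" text.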
BUG Fix for Issue #12565 - font size on secondary_y | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index bd19d71182762..8d5b390801318 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -166,7 +166,7 @@ I/O
Plotting
^^^^^^^^
-
+- Bug in plotting methods using ``secondary_y`` and ``fontsize`` not setting secondary axis font size (:issue:`12565`)
Groupby/Resample/Rolling
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index f8e83aea03594..0fa2b6b79da75 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -379,6 +379,11 @@ def _post_plot_logic_common(self, ax, data):
self._apply_axis_properties(ax.xaxis, rot=self.rot,
fontsize=self.fontsize)
self._apply_axis_properties(ax.yaxis, fontsize=self.fontsize)
+
+ if hasattr(ax, 'right_ax'):
+ self._apply_axis_properties(ax.right_ax.yaxis,
+ fontsize=self.fontsize)
+
elif self.orientation == 'horizontal':
if self._need_to_set_index:
yticklabels = [labels.get(y, '') for y in ax.get_yticks()]
@@ -386,6 +391,10 @@ def _post_plot_logic_common(self, ax, data):
self._apply_axis_properties(ax.yaxis, rot=self.rot,
fontsize=self.fontsize)
self._apply_axis_properties(ax.xaxis, fontsize=self.fontsize)
+
+ if hasattr(ax, 'right_ax'):
+ self._apply_axis_properties(ax.right_ax.yaxis,
+ fontsize=self.fontsize)
else: # pragma no cover
raise ValueError
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 7878740f64e55..6d813ac76cc4e 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -2733,6 +2733,23 @@ def test_rcParams_bar_colors(self):
barplot = pd.DataFrame([[1, 2, 3]]).plot(kind="bar")
assert color_tuples == [c.get_facecolor() for c in barplot.patches]
+ @pytest.mark.parametrize('method', ['line', 'barh', 'bar'])
+ def test_secondary_axis_font_size(self, method):
+ # GH: 12565
+ df = (pd.DataFrame(np.random.randn(15, 2),
+ columns=list('AB'))
+ .assign(C=lambda df: df.B.cumsum())
+ .assign(D=lambda df: df.C * 1.1))
+
+ fontsize = 20
+ sy = ['C', 'D']
+
+ kwargs = dict(secondary_y=sy, fontsize=fontsize,
+ mark_right=True)
+ ax = getattr(df.plot, method)(**kwargs)
+ self._check_ticks_props(axes=ax.right_ax,
+ ylabelsize=fontsize)
+
def _generate_4_axes_via_gridspec():
import matplotlib.pyplot as plt
| There may be other plotting methods that need testing.
- [ ] closes #12565
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16952 | 2017-07-15T18:05:00Z | 2017-07-15T23:13:50Z | 2017-07-15T23:13:49Z | 2017-07-15T23:17:22Z |
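The fix boils down to one guard: when a plot has a `secondary_y` axis, the primary axes object exposes it as `ax.right_ax`, and font-size properties must be applied there as well. A toy sketch with stub classes (not real matplotlib objects) isolates that pattern:

```python
class Axis:
    """Stub for a matplotlib axis: just holds a font size."""
    def __init__(self):
        self.fontsize = None

class Axes:
    """Stub axes; a secondary axes, when present, hangs off ``right_ax``."""
    def __init__(self, secondary=None):
        self.yaxis = Axis()
        if secondary is not None:
            self.right_ax = secondary

def apply_fontsize(ax, fontsize):
    """Apply the font size to the primary y-axis and, when present,
    to the secondary (right) axis too -- the hasattr guard is the fix."""
    ax.yaxis.fontsize = fontsize
    if hasattr(ax, 'right_ax'):
        ax.right_ax.yaxis.fontsize = fontsize

primary = Axes(secondary=Axes())
apply_fontsize(primary, 20)
print(primary.right_ax.yaxis.fontsize)  # -> 20
```

Before the patch, only `ax.yaxis` received the size, so columns plotted on the secondary axis kept the default tick-label font size.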
ENH: Add warning when setting into nonexistent attribute | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 1659d57b33b84..53a259ad6eb15 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -227,10 +227,6 @@ as an attribute:
dfa.A
panel.one
-You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful;
-if you try to use attribute access to create a new column, it fails silently, creating a new attribute rather than a
-new column.
-
.. ipython:: python
sa.a = 5
@@ -267,6 +263,37 @@ You can also assign a ``dict`` to a row of a ``DataFrame``:
x.iloc[1] = dict(x=9, y=99)
x
+You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful;
+if you try to use attribute access to create a new column, it creates a new attribute rather than a
+new column. In 0.21.0 and later, this will raise a ``UserWarning``:
+
+.. code-block:: ipython
+
+ In[1]: df = pd.DataFrame({'one': [1., 2., 3.]})
+ In[2]: df.two = [4, 5, 6]
+ UserWarning: Pandas doesn't allow Series to be assigned into nonexistent columns - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute_access
+ In[3]: df
+ Out[3]:
+ one
+ 0 1.0
+ 1 2.0
+ 2 3.0
+
+Similarly, it is possible to create a column with a name which collides with one of Pandas's
+built-in methods or attributes, which can cause confusion later when attempting to access
+that column as an attribute. This behavior now warns:
+
+.. code-block:: ipython
+
+ In[4]: df['sum'] = [5., 7., 9.]
+ UserWarning: Column name 'sum' collides with a built-in method, which will cause unexpected attribute behavior
+ In[5]: df.sum
+ Out[5]:
+ <bound method DataFrame.sum of one sum
+ 0 1.0 5.0
+ 1 2.0 7.0
+ 2 3.0 9.0>
+
Slicing ranges
--------------
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 2f61b71d06019..d9439e0d785f6 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -29,7 +29,6 @@ New features
- Added ``skipna`` parameter to :func:`~pandas.api.types.infer_dtype` to
support type inference in the presence of missing values (:issue:`17059`).
-
.. _whatsnew_0210.enhancements.infer_objects:
``infer_objects`` type conversion
@@ -62,6 +61,51 @@ using the :func:`to_numeric` function (or :func:`to_datetime`, :func:`to_timedel
df['C'] = pd.to_numeric(df['C'], errors='coerce')
df.dtypes
+.. _whatsnew_0210.enhancements.attribute_access:
+
+Improved warnings when attempting to create columns
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+New users are often flummoxed by the relationship between column operations and attribute
+access on ``DataFrame`` instances (:issue:`5904` & :issue:`7175`). Two specific instances
+of this confusion include attempting to create a new column by setting into an attribute:
+
+.. code-block:: ipython
+
+ In[1]: df = pd.DataFrame({'one': [1., 2., 3.]})
+ In[2]: df.two = [4, 5, 6]
+
+This does not raise any obvious exceptions, but also does not create a new column:
+
+.. code-block:: ipython
+
+ In[3]: df
+ Out[3]:
+ one
+ 0 1.0
+ 1 2.0
+ 2 3.0
+
+The second source of confusion is creating a column whose name collides with a method or
+attribute already in the instance namespace:
+
+.. code-block:: ipython
+
+ In[4]: df['sum'] = [5., 7., 9.]
+
+This does not permit that column to be accessed as an attribute:
+
+.. code-block:: ipython
+
+ In[5]: df.sum
+ Out[5]:
+ <bound method DataFrame.sum of one sum
+ 0 1.0 5.0
+ 1 2.0 7.0
+ 2 3.0 9.0>
+
+Both of these now raise a ``UserWarning`` about the potential for unexpected behavior. See :ref:`Attribute Access <indexing.attribute_access>`.
+
.. _whatsnew_0210.enhancements.other:
Other Enhancements
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 442ec93d94023..2d52eed81d22b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -27,7 +27,7 @@
pandas_dtype)
from pandas.core.dtypes.cast import maybe_promote, maybe_upcast_putmask
from pandas.core.dtypes.missing import isna, notna
-from pandas.core.dtypes.generic import ABCSeries, ABCPanel
+from pandas.core.dtypes.generic import ABCSeries, ABCPanel, ABCDataFrame
from pandas.core.common import (_values_from_object,
_maybe_box_datetimelike,
@@ -1907,6 +1907,10 @@ def _slice(self, slobj, axis=0, kind=None):
return result
def _set_item(self, key, value):
+ if isinstance(key, str) and callable(getattr(self, key, None)):
+ warnings.warn("Column name '{key}' collides with a built-in "
+ "method, which will cause unexpected attribute "
+ "behavior".format(key=key), stacklevel=3)
self._data.set(key, value)
self._clear_item_cache()
@@ -3357,6 +3361,12 @@ def __setattr__(self, name, value):
else:
object.__setattr__(self, name, value)
except (AttributeError, TypeError):
+ if isinstance(self, ABCDataFrame) and (is_list_like(value)):
+ warnings.warn("Pandas doesn't allow Series to be assigned "
+ "into nonexistent columns - see "
+ "https://pandas.pydata.org/pandas-docs/"
+ "stable/indexing.html#attribute-access",
+ stacklevel=2)
object.__setattr__(self, name, value)
# ----------------------------------------------------------------------
diff --git a/pandas/tests/dtypes/test_generic.py b/pandas/tests/dtypes/test_generic.py
index 653d7d3082c08..ec850cc34e23b 100644
--- a/pandas/tests/dtypes/test_generic.py
+++ b/pandas/tests/dtypes/test_generic.py
@@ -4,6 +4,7 @@
import numpy as np
import pandas as pd
from pandas.core.dtypes import generic as gt
+from pandas.util import testing as tm
class TestABCClasses(object):
@@ -38,3 +39,40 @@ def test_abc_types(self):
assert isinstance(self.sparse_array, gt.ABCSparseArray)
assert isinstance(self.categorical, gt.ABCCategorical)
assert isinstance(pd.Period('2012', freq='A-DEC'), gt.ABCPeriod)
+
+
+def test_setattr_warnings():
+ # GH5904 - Suggestion: Warning for DataFrame colname-methodname clash
+ # GH7175 - GOTCHA: You can't use dot notation to add a column...
+ d = {'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
+ 'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
+ df = pd.DataFrame(d)
+
+ with catch_warnings(record=True) as w:
+ # successfully add new column
+ # this should not raise a warning
+ df['three'] = df.two + 1
+ assert len(w) == 0
+ assert df.three.sum() > df.two.sum()
+
+ with catch_warnings(record=True) as w:
+ # successfully modify column in place
+ # this should not raise a warning
+ df.one += 1
+ assert len(w) == 0
+ assert df.one.iloc[0] == 2
+
+ with catch_warnings(record=True) as w:
+ # successfully add an attribute to a series
+ # this should not raise a warning
+ df.two.not_an_index = [1, 2]
+ assert len(w) == 0
+
+ with tm.assert_produces_warning(UserWarning):
+ # warn when setting column to nonexistent name
+ df.four = df.two + 2
+ assert df.four.sum() > df.two.sum()
+
+ with tm.assert_produces_warning(UserWarning):
+ # warn when column has same name as method
+ df['sum'] = df.two
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index fc17b5f85b68c..f33ba7627101e 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -2011,7 +2011,7 @@ def check(obj, comparator):
df['string'] = 'foo'
df['float322'] = 1.
df['float322'] = df['float322'].astype('float32')
- df['bool'] = df['float322'] > 0
+ df['boolean'] = df['float322'] > 0
df['time1'] = Timestamp('20130101')
df['time2'] = Timestamp('20130102')
check(df, tm.assert_frame_equal)
@@ -2141,7 +2141,7 @@ def test_table_values_dtypes_roundtrip(self):
df1['string'] = 'foo'
df1['float322'] = 1.
df1['float322'] = df1['float322'].astype('float32')
- df1['bool'] = df1['float32'] > 0
+ df1['boolean'] = df1['float32'] > 0
df1['time1'] = Timestamp('20130101')
df1['time2'] = Timestamp('20130102')
| - [ ] closes #7175
- [ ] closes #5904
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16951 | 2017-07-15T16:52:52Z | 2017-08-07T11:05:59Z | 2017-08-07T11:05:58Z | 2017-08-16T20:41:45Z |
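A minimal sketch of the attribute-assignment warning, using a toy class in place of `DataFrame` — the class, its `_data` dict, and the message text are all illustrative, not pandas internals:

```python
import warnings

def is_list_like(obj):
    return hasattr(obj, '__iter__') and not isinstance(obj, str)

class Frame:
    """Toy stand-in for a DataFrame: columns live in ``_data``, and
    assigning a list-like to an unknown attribute warns instead of
    silently creating an instance attribute."""
    def __init__(self):
        object.__setattr__(self, '_data', {})

    def __setattr__(self, name, value):
        if name in self._data:
            self._data[name] = value  # existing column: modify in place
        else:
            if is_list_like(value):
                warnings.warn("list-like assigned to a nonexistent column; "
                              "this creates an attribute, not a column",
                              UserWarning, stacklevel=2)
            object.__setattr__(self, name, value)

f = Frame()
f._data['one'] = [1, 2, 3]
with warnings.catch_warnings(record=True) as rec:
    warnings.simplefilter('always')
    f.two = [4, 5, 6]  # warns: becomes a plain attribute, not a column
```

The real patch performs this check in `NDFrame.__setattr__` only after normal attribute resolution fails, so modifying an existing column via attribute access stays warning-free.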
Clarify whitespace behavior in read_fwf documentation (#16772) | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 9bf84e5419ffa..495d4e9c3a5a3 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1258,7 +1258,8 @@ Files with Fixed Width Columns
While ``read_csv`` reads delimited data, the :func:`read_fwf` function works
with data files that have known and fixed column widths. The function parameters
-to ``read_fwf`` are largely the same as `read_csv` with two extra parameters:
+to ``read_fwf`` are largely the same as `read_csv` with two extra parameters, and
+a different usage of the ``delimiter`` parameter:
- ``colspecs``: A list of pairs (tuples) giving the extents of the
fixed-width fields of each line as half-open intervals (i.e., [from, to[ ).
@@ -1267,6 +1268,9 @@ to ``read_fwf`` are largely the same as `read_csv` with two extra parameters:
behaviour, if not specified, is to infer.
- ``widths``: A list of field widths which can be used instead of 'colspecs'
if the intervals are contiguous.
+ - ``delimiter``: Characters to consider as filler characters in the fixed-width file.
+ Can be used to specify the filler character of the fields
+ if it is not spaces (e.g., '~').
.. ipython:: python
:suppress:
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 343bc7a74fde8..1e7d9d420b35d 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -63,8 +63,6 @@
file. For file URLs, a host is expected. For instance, a local file could
be file ://localhost/path/to/table.csv
%s
-delimiter : str, default ``None``
- Alternative argument name for sep.
delim_whitespace : boolean, default False
Specifies whether or not whitespace (e.g. ``' '`` or ``'\t'``) will be
used as the sep. Equivalent to setting ``sep='\s+'``. If this option
@@ -316,7 +314,9 @@
be used automatically. In addition, separators longer than 1 character and
different from ``'\s+'`` will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
- delimiters are prone to ignoring quoted data. Regex example: ``'\r\t'``"""
+ delimiters are prone to ignoring quoted data. Regex example: ``'\r\t'``
+delimiter : str, default ``None``
+ Alternative argument name for sep."""
_read_csv_doc = """
Read CSV (comma-separated) file into DataFrame
@@ -341,15 +341,16 @@
widths : list of ints. optional
A list of field widths which can be used instead of 'colspecs' if
the intervals are contiguous.
+delimiter : str, default ``'\t' + ' '``
+ Characters to consider as filler characters in the fixed-width file.
+ Can be used to specify the filler character of the fields
+ if it is not spaces (e.g., '~').
"""
_read_fwf_doc = """
Read a table of fixed-width formatted lines into DataFrame
%s
-
-Also, 'delimiter' is used to specify the filler character of the
-fields if it is not spaces (e.g., '~').
""" % (_parser_params % (_fwf_widths, ''))
diff --git a/pandas/tests/io/parser/test_read_fwf.py b/pandas/tests/io/parser/test_read_fwf.py
index 0bfeb5215f370..ec1d1a2a51cdc 100644
--- a/pandas/tests/io/parser/test_read_fwf.py
+++ b/pandas/tests/io/parser/test_read_fwf.py
@@ -405,3 +405,32 @@ def test_skiprows_inference_empty(self):
with pytest.raises(EmptyDataError):
read_fwf(StringIO(test), skiprows=3)
+
+ def test_whitespace_preservation(self):
+ # Addresses Issue #16772
+ data_expected = """
+ a ,bbb
+ cc,dd """
+ expected = read_csv(StringIO(data_expected), header=None)
+
+ test_data = """
+ a bbb
+ ccdd """
+ result = read_fwf(StringIO(test_data), widths=[3, 3],
+ header=None, skiprows=[0], delimiter="\n\t")
+
+ tm.assert_frame_equal(result, expected)
+
+ def test_default_delimiter(self):
+ data_expected = """
+a,bbb
+cc,dd"""
+ expected = read_csv(StringIO(data_expected), header=None)
+
+ test_data = """
+a \tbbb
+cc\tdd """
+ result = read_fwf(StringIO(test_data), widths=[3, 3],
+ header=None, skiprows=[0])
+
+ tm.assert_frame_equal(result, expected)
| - [x] closes #16772
- [x] tests added / passed
- [x] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16950 | 2017-07-15T16:24:27Z | 2017-07-18T05:01:27Z | 2017-07-18T05:01:27Z | 2017-07-18T12:41:20Z |
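The heart of the `delimiter` behavior documented above is that `read_fwf` treats those characters as field *filler* to strip, not as separators. A naive one-line-at-a-time sketch (not pandas' parser) makes that concrete:

```python
def parse_fwf_line(line, colspecs, delimiter=' \t'):
    """Slice one fixed-width line into fields by half-open column
    intervals, stripping the filler characters in ``delimiter``
    (spaces and tabs by default)."""
    return [line[start:end].strip(delimiter) for start, end in colspecs]

# Fields padded with '~' instead of spaces:
print(parse_fwf_line("a~~bbb", [(0, 3), (3, 6)], delimiter='~'))
# -> ['a', 'bbb']
```

Note the column boundaries alone decide where fields begin and end; `delimiter` only controls which padding characters get trimmed from each slice.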
Support non unique period indexes on join and merge operations | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 6ddf6029b99bb..34ff73082627a 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -185,7 +185,7 @@ Sparse
Reshaping
^^^^^^^^^
-
+- Joining/Merging with a non unique ``PeriodIndex`` raised a TypeError (:issue:`16871`)
Numeric
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index e1053c1610175..bbbc19b36964d 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3119,14 +3119,14 @@ def _join_multi(self, other, how, return_indexers=True):
def _join_non_unique(self, other, how='left', return_indexers=False):
from pandas.core.reshape.merge import _get_join_indexers
- left_idx, right_idx = _get_join_indexers([self.values],
+ left_idx, right_idx = _get_join_indexers([self._values],
[other._values], how=how,
sort=True)
left_idx = _ensure_platform_int(left_idx)
right_idx = _ensure_platform_int(right_idx)
- join_index = np.asarray(self.values.take(left_idx))
+ join_index = np.asarray(self._values.take(left_idx))
mask = left_idx == -1
np.putmask(join_index, mask, other._values.take(right_idx))
diff --git a/pandas/tests/reshape/test_join.py b/pandas/tests/reshape/test_join.py
index e25661fb65271..e4894307918c6 100644
--- a/pandas/tests/reshape/test_join.py
+++ b/pandas/tests/reshape/test_join.py
@@ -550,6 +550,18 @@ def test_join_mixed_non_unique_index(self):
index=[1, 2, 2, 'a'])
tm.assert_frame_equal(result, expected)
+ def test_join_non_unique_period_index(self):
+ # GH #16871
+ index = pd.period_range('2016-01-01', periods=16, freq='M')
+ df = DataFrame([i for i in range(len(index))],
+ index=index, columns=['pnum'])
+ df2 = concat([df, df])
+ result = df.join(df2, how='inner', rsuffix='_df2')
+ expected = DataFrame(
+ np.tile(np.arange(16, dtype=np.int64).repeat(2).reshape(-1, 1), 2),
+ columns=['pnum', 'pnum_df2'], index=df2.sort_index().index)
+ tm.assert_frame_equal(result, expected)
+
def test_mixed_type_join_with_suffix(self):
# GH #916
df = DataFrame(np.random.randn(20, 6),
diff --git a/pandas/tests/reshape/test_merge.py b/pandas/tests/reshape/test_merge.py
index 4ac376a9752cb..919675188576e 100644
--- a/pandas/tests/reshape/test_merge.py
+++ b/pandas/tests/reshape/test_merge.py
@@ -585,6 +585,18 @@ def test_merge_on_datetime64tz(self):
assert result['value_x'].dtype == 'datetime64[ns, US/Eastern]'
assert result['value_y'].dtype == 'datetime64[ns, US/Eastern]'
+ def test_merge_non_unique_period_index(self):
+ # GH #16871
+ index = pd.period_range('2016-01-01', periods=16, freq='M')
+ df = DataFrame([i for i in range(len(index))],
+ index=index, columns=['pnum'])
+ df2 = concat([df, df])
+ result = df.merge(df2, left_index=True, right_index=True, how='inner')
+ expected = DataFrame(
+ np.tile(np.arange(16, dtype=np.int64).repeat(2).reshape(-1, 1), 2),
+ columns=['pnum_x', 'pnum_y'], index=df2.sort_index().index)
+ tm.assert_frame_equal(result, expected)
+
def test_merge_on_periods(self):
left = pd.DataFrame({'key': pd.period_range('20151010', periods=2,
freq='D'),
| - [x] closes #16871
- [x] tests added / passed
- [x] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [x] whatsnew entry
Join and merge operations on DataFrames with non-unique period indexes are now supported; test included. | https://api.github.com/repos/pandas-dev/pandas/pulls/16949 | 2017-07-15T16:19:30Z | 2017-07-15T21:28:24Z | 2017-07-15T21:28:23Z | 2017-07-15T23:18:40Z |
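A minimal sketch of what this change enables, adapted from the added test (the data here is illustrative):

```python
import pandas as pd

# A DataFrame indexed by a PeriodIndex, duplicated so the index is non-unique
index = pd.period_range('2016-01-01', periods=4, freq='M')
df = pd.DataFrame({'pnum': range(4)}, index=index)
df2 = pd.concat([df, df])

# Before this fix, joining on the non-unique PeriodIndex raised TypeError
result = df.join(df2, how='inner', rsuffix='_df2')
```

The same applies to `df.merge(df2, left_index=True, right_index=True)`.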
DOC: Improving docstring of take method | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d9d75c870b20c..c83b1073afc8e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2063,18 +2063,77 @@ def __delitem__(self, key):
def take(self, indices, axis=0, convert=True, is_copy=True, **kwargs):
"""
- Analogous to ndarray.take
+ Return the elements in the given *positional* indices along an axis.
+
+ This means that we are not indexing according to actual values in
+ the index attribute of the object. We are indexing according to the
+ actual position of the element in the object.
Parameters
----------
- indices : list / array of ints
+ indices : array-like
+ An array of ints indicating which positions to take.
axis : int, default 0
- convert : translate neg to pos indices (default)
- is_copy : mark the returned frame as a copy
+ The axis on which to select elements. "0" means that we are
+ selecting rows, "1" means that we are selecting columns, etc.
+ convert : bool, default True
+ Whether to convert negative indices to positive ones, just as with
+ indexing into Python lists. For example, if `-1` was passed in,
+ this index would be converted ``n - 1``.
+ is_copy : bool, default True
+ Whether to return a copy of the original object or not.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame([('falcon', 'bird', 389.0),
+ ('parrot', 'bird', 24.0),
+ ('lion', 'mammal', 80.5),
+ ('monkey', 'mammal', np.nan)],
+ columns=('name', 'class', 'max_speed'),
+ index=[0, 2, 3, 1])
+ >>> df
+ name class max_speed
+ 0 falcon bird 389.0
+ 2 parrot bird 24.0
+ 3 lion mammal 80.5
+ 1 monkey mammal NaN
+
+ Take elements at positions 0 and 3 along the axis 0 (default).
+
+ Note how the actual indices selected (0 and 1) do not correspond to
+ our selected indices 0 and 3. That's because we are selecting the 0th
+ and 3rd rows, not rows whose indices equal 0 and 3.
+
+ >>> df.take([0, 3])
+ 0 falcon bird 389.0
+ 1 monkey mammal NaN
+
+ Take elements at indices 1 and 2 along the axis 1 (column selection).
+
+ >>> df.take([1, 2], axis=1)
+ class max_speed
+ 0 bird 389.0
+ 2 bird 24.0
+ 3 mammal 80.5
+ 1 mammal NaN
+
+ We may take elements using negative integers for positive indices,
+ starting from the end of the object, just like with Python lists.
+
+ >>> df.take([-1, -2])
+ name class max_speed
+ 1 monkey mammal NaN
+ 3 lion mammal 80.5
Returns
-------
taken : type of caller
+ An array-like containing the elements taken from the object.
+
+ See Also
+ --------
+ numpy.ndarray.take
+ numpy.take
"""
nv.validate_take(tuple(), kwargs)
self._consolidate_inplace()
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16948 | 2017-07-15T15:49:16Z | 2017-08-21T08:27:25Z | 2017-08-21T08:27:24Z | 2017-08-21T08:38:19Z |
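The semantics described by the new docstring can be exercised with a short sketch; the index labels are chosen to differ from the positions to make the positional behavior visible:

```python
import pandas as pd

df = pd.DataFrame([('falcon', 389.0), ('parrot', 24.0), ('lion', 80.5)],
                  columns=['name', 'max_speed'], index=[9, 7, 5])

rows = df.take([0, 2])        # positions 0 and 2, not labels 0 and 2
cols = df.take([1], axis=1)   # positional column selection
last = df.take([-1])          # negative positions count from the end
```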
Change "pls" to "please" in warning message | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 6559fc4c24ce2..4d8b831b7d63f 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3359,7 +3359,7 @@ def sort_index(self, axis=0, level=None, ascending=True, inplace=False,
inplace = validate_bool_kwarg(inplace, 'inplace')
# 10726
if by is not None:
- warnings.warn("by argument to sort_index is deprecated, pls use "
+ warnings.warn("by argument to sort_index is deprecated, please use "
".sort_values(by=...)", FutureWarning, stacklevel=2)
if level is not None:
raise ValueError("unable to simultaneously sort by and level")
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [x] whatsnew entry
Code is now more formal when talking to users, as to not disrespect them!
Full warning:
`"by argument to sort_index is deprecated, please use .sort_values(by=...)"` | https://api.github.com/repos/pandas-dev/pandas/pulls/16947 | 2017-07-15T15:18:22Z | 2017-07-15T15:34:05Z | 2017-07-15T15:34:05Z | 2017-10-02T20:44:24Z |
DOC: Expand docstrings for head / tail methods | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 5a7f37bba91aa..d9d75c870b20c 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2978,14 +2978,36 @@ def filter(self, items=None, like=None, regex=None, axis=None):
def head(self, n=5):
"""
- Returns first n rows
+ Return the first n rows.
+
+ Parameters
+ ----------
+ n : int, default 5
+ Number of rows to select.
+
+ Returns
+ -------
+ obj_head : type of caller
+ The first n rows of the caller object.
"""
+
return self.iloc[:n]
def tail(self, n=5):
"""
- Returns last n rows
+ Return the last n rows.
+
+ Parameters
+ ----------
+ n : int, default 5
+ Number of rows to select.
+
+ Returns
+ -------
+ obj_tail : type of caller
+ The last n rows of the caller object.
"""
+
if n == 0:
return self.iloc[0:0]
return self.iloc[-n:]
| This is just a first draft of some documentation changes to the `head` and `tail` methods.
From EuroPython sprint. | https://api.github.com/repos/pandas-dev/pandas/pulls/16941 | 2017-07-15T09:38:29Z | 2017-08-21T07:50:45Z | 2017-08-21T07:50:45Z | 2017-08-21T07:58:18Z |
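A quick sketch of the documented behavior, including the `n=0` special case visible in the diff:

```python
import pandas as pd

df = pd.DataFrame({'x': range(10)})

first = df.head(3)   # equivalent to df.iloc[:3]
last = df.tail(2)    # equivalent to df.iloc[-2:]
empty = df.tail(0)   # special-cased, since df.iloc[-0:] would be the whole frame
```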
TST: Don't assert that a bug exists in numpy | diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 9504d2a9426f0..993dcc4f527b2 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -2,6 +2,7 @@
import numpy as np
import pytest
+import warnings
from numpy.random import RandomState
from numpy import nan
@@ -127,7 +128,7 @@ def test_unsortable(self):
arr = np.array([1, 2, datetime.now(), 0, 3], dtype=object)
if compat.PY2 and not pd._np_version_under1p10:
# RuntimeWarning: tp_compare didn't return -1 or -2 for exception
- with tm.assert_produces_warning(RuntimeWarning):
+ with warnings.catch_warnings():
pytest.raises(TypeError, algos.safe_sort, arr)
else:
pytest.raises(TypeError, algos.safe_sort, arr)
| It is better to ignore the warning from the bug than to assert that the bug is still there.
After this change, numpy/numpy#9412 _could_ be backported to fix the bug. | https://api.github.com/repos/pandas-dev/pandas/pulls/16940 | 2017-07-15T09:26:10Z | 2017-07-15T12:30:03Z | 2017-07-15T12:30:03Z | 2017-07-15T13:41:10Z |
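The pattern in this change — suppress an incidental warning instead of asserting it — can be sketched in isolation. The function below is a stand-in for the numpy behavior, and `simplefilter` is added here to make the suppression explicit:

```python
import warnings

def buggy_sort(values):
    # Stand-in for the numpy bug: emits a RuntimeWarning as a side effect
    # before raising the error the test actually cares about
    warnings.warn("tp_compare side effect", RuntimeWarning)
    raise TypeError("unorderable types")

caught_type_error = False
with warnings.catch_warnings():
    # Ignore the warning rather than asserting it exists, so the test
    # keeps passing if the upstream bug is later fixed
    warnings.simplefilter("ignore", RuntimeWarning)
    try:
        buggy_sort([1, "a"])
    except TypeError:
        caught_type_error = True
```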
CLN16668: remove OrderedDefaultDict | diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
index 9eacb9acef2c9..33b41d61aa978 100644
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -21,7 +21,6 @@
given metaclass instead (and avoids intermediary class creation)
Other items:
-* OrderedDefaultDict
* platform checker
"""
# pylint disable=W0611
@@ -373,30 +372,6 @@ def parse_date(timestr, *args, **kwargs):
parse_date = _date_parser.parse
-class OrderedDefaultdict(OrderedDict):
-
- def __init__(self, *args, **kwargs):
- newdefault = None
- newargs = ()
- if args:
- newdefault = args[0]
- if not (newdefault is None or callable(newdefault)):
- raise TypeError('first argument must be callable or None')
- newargs = args[1:]
- self.default_factory = newdefault
- super(self.__class__, self).__init__(*newargs, **kwargs)
-
- def __missing__(self, key):
- if self.default_factory is None:
- raise KeyError(key)
- self[key] = value = self.default_factory()
- return value
-
- def __reduce__(self): # optional, for pickle support
- args = self.default_factory if self.default_factory else tuple()
- return type(self), args, None, None, list(self.items())
-
-
# https://github.com/pandas-dev/pandas/pull/9123
def is_platform_little_endian():
""" am I little endian """
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index d1f5b4587059c..69a8468552f54 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -19,7 +19,7 @@
import pandas.core.ops as ops
import pandas.core.missing as missing
from pandas import compat
-from pandas.compat import (map, zip, range, u, OrderedDict, OrderedDefaultdict)
+from pandas.compat import (map, zip, range, u, OrderedDict)
from pandas.compat.numpy import function as nv
from pandas.core.common import _try_sort, _default_index
from pandas.core.frame import DataFrame
@@ -260,9 +260,11 @@ def from_dict(cls, data, intersect=False, orient='items', dtype=None):
-------
Panel
"""
+ from collections import defaultdict
+
orient = orient.lower()
if orient == 'minor':
- new_data = OrderedDefaultdict(dict)
+ new_data = defaultdict(OrderedDict)
for col, df in compat.iteritems(data):
for item, s in compat.iteritems(df):
new_data[item][col] = s
| - [x] closes #16668
- [x] tests added / passed
- [x] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [ ] whatsnew entry
The replaced usage is covered by existing tests. | https://api.github.com/repos/pandas-dev/pandas/pulls/16939 | 2017-07-15T09:16:21Z | 2017-07-15T13:58:25Z | 2017-07-15T13:58:25Z | 2017-07-15T13:58:28Z |
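The replacement pattern — a plain `defaultdict` whose factory is `OrderedDict` — reproduces the behavior `Panel.from_dict` needed from the removed class:

```python
from collections import OrderedDict, defaultdict

# Missing outer keys get a fresh ordered inner mapping, which is all the
# removed OrderedDefaultdict provided for this use
new_data = defaultdict(OrderedDict)
new_data['item1']['colA'] = 1
new_data['item1']['colB'] = 2
new_data['item2']['colA'] = 3
```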
BUG: MultiIndex sort with ascending as list | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index bd19d71182762..6ddf6029b99bb 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -156,6 +156,7 @@ Indexing
- When called on an unsorted ``MultiIndex``, the ``loc`` indexer now will raise ``UnsortedIndexError`` only if proper slicing is used on non-sorted levels (:issue:`16734`).
- Fixes regression in 0.20.3 when indexing with a string on a ``TimedeltaIndex`` (:issue:`16896`).
- Fixed ``TimedeltaIndex.get_loc`` handling of ``np.timedelta64`` inputs (:issue:`16909`).
+- Fix :meth:`MultiIndex.sort_index` ordering when ``ascending`` argument is a list, but not all levels are specified, or are in a different order (:issue:`16934`).
I/O
^^^
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 81eac0ac0684f..ed7ca079a07b5 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1697,7 +1697,8 @@ def sortlevel(self, level=0, ascending=True, sort_remaining=True):
raise ValueError("level must have same length as ascending")
from pandas.core.sorting import lexsort_indexer
- indexer = lexsort_indexer(self.labels, orders=ascending)
+ indexer = lexsort_indexer([self.labels[lev] for lev in level],
+ orders=ascending)
# level ordering
else:
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index c8c210c42eac2..a56ff0fc2d158 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -2781,3 +2781,26 @@ def test_sort_index_nan(self):
result = s.sort_index(na_position='first')
expected = s.iloc[[1, 2, 3, 0]]
tm.assert_series_equal(result, expected)
+
+ def test_sort_ascending_list(self):
+ # GH: 16934
+
+ # Set up a Series with a three level MultiIndex
+ arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
+ ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'],
+ [4, 3, 2, 1, 4, 3, 2, 1]]
+ tuples = list(zip(*arrays))
+ index = pd.MultiIndex.from_tuples(tuples,
+ names=['first', 'second', 'third'])
+ s = pd.Series(range(8), index=index)
+
+ # Sort with boolean ascending
+ result = s.sort_index(level=['third', 'first'], ascending=False)
+ expected = s.iloc[[4, 0, 5, 1, 6, 2, 7, 3]]
+ tm.assert_series_equal(result, expected)
+
+ # Sort with list of boolean ascending
+ result = s.sort_index(level=['third', 'first'],
+ ascending=[False, True])
+ expected = s.iloc[[0, 4, 1, 5, 2, 6, 3, 7]]
+ tm.assert_series_equal(result, expected)
| MultiIndex sorting with `sort_index` would fail when the `ascending`
argument was specified as a list but not all levels of the index were
specified in the `level` argument, or the levels were specified in
a different order from the MultiIndex.
- [X] closes #16934
- [X] tests added / passed
- [X] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [X] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/16937 | 2017-07-15T05:59:40Z | 2017-07-15T15:34:32Z | 2017-07-15T15:34:32Z | 2017-07-15T15:34:34Z |
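A reduced sketch of the fixed behavior (smaller than the added test, same idea): sort on a subset of levels, named in a different order from the index's own, with a per-level list of ascending flags.

```python
import pandas as pd

arrays = [['bar', 'bar', 'foo', 'foo'],
          ['one', 'two', 'one', 'two'],
          [2, 1, 2, 1]]
index = pd.MultiIndex.from_arrays(arrays, names=['first', 'second', 'third'])
s = pd.Series(range(4), index=index)

# Sort by 'third' descending, then 'first' ascending
result = s.sort_index(level=['third', 'first'], ascending=[False, True])
```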
ERR: add stacklevel to to_dict() UserWarning (#16927) | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 6559fc4c24ce2..44e43138e6911 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -972,7 +972,8 @@ def to_dict(self, orient='dict', into=dict):
"""
if not self.columns.is_unique:
warnings.warn("DataFrame columns are not unique, some "
- "columns will be omitted.", UserWarning)
+ "columns will be omitted.", UserWarning,
+ stacklevel=2)
# GH16122
into_c = standardize_mapping(into)
if orient.lower().startswith('d'):
diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py
index 34dd138ee1c80..629c695b702fe 100644
--- a/pandas/tests/frame/test_convert_to.py
+++ b/pandas/tests/frame/test_convert_to.py
@@ -216,6 +216,13 @@ def test_to_dict_errors(self, mapping):
with pytest.raises(TypeError):
df.to_dict(into=mapping)
+ def test_to_dict_not_unique_warning(self):
+ # GH16927: When converting to a dict, if a column has a non-unique name
+ # it will be dropped, throwing a warning.
+ df = DataFrame([[1, 2, 3]], columns=['a', 'a', 'b'])
+ with tm.assert_produces_warning(UserWarning):
+ df.to_dict()
+
@pytest.mark.parametrize('tz', ['UTC', 'GMT', 'US/Eastern'])
def test_to_records_datetimeindex_with_tz(self, tz):
# GH13937
| - [x] closes #16927
- [ ] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [x] The warning now correctly points to the calling site.
I would like some guidance on how to implement testing for this quick fix. I was thinking of using pytest's [warning assertion](https://docs.pytest.org/en/latest/warnings.html#warns), but couldn't figure out how to do it elegantly.
Cheers! :beers:
| https://api.github.com/repos/pandas-dev/pandas/pulls/16936 | 2017-07-15T04:29:54Z | 2017-07-15T16:01:39Z | 2017-07-15T16:01:38Z | 2017-07-15T16:04:31Z |
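For reference, one self-contained way to assert the warning without pandas' internal test helpers (the merged test uses `tm.assert_produces_warning` instead):

```python
import warnings

import pandas as pd

df = pd.DataFrame([[1, 2, 3]], columns=['a', 'a', 'b'])

# Record warnings so the emission can be asserted after the call
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    d = df.to_dict()

user_warnings = [w for w in caught if issubclass(w.category, UserWarning)]
```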
Move series.remove_na to core.dtypes.missing.remove_na_arraylike | diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index af3a873bc2866..9913923cb7807 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -394,3 +394,10 @@ def na_value_for_dtype(dtype):
elif is_bool_dtype(dtype):
return False
return np.nan
+
+
+def remove_na_arraylike(arr):
+ """
+ Return array-like containing only true/non-NaN values, possibly empty.
+ """
+ return arr[notnull(lib.values_from_object(arr))]
diff --git a/pandas/core/series.py b/pandas/core/series.py
index e1f668dd3afda..98b548f8ab3b5 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -36,7 +36,7 @@
maybe_upcast, infer_dtype_from_scalar,
maybe_convert_platform,
maybe_cast_to_datetime, maybe_castable)
-from pandas.core.dtypes.missing import isnull, notnull
+from pandas.core.dtypes.missing import isnull, notnull, remove_na_arraylike
from pandas.core.common import (is_bool_indexer,
_default_index,
@@ -2749,7 +2749,7 @@ def dropna(self, axis=0, inplace=False, **kwargs):
axis = self._get_axis_number(axis or 0)
if self._can_hold_na:
- result = remove_na(self)
+ result = remove_na_arraylike(self)
if inplace:
self._update_inplace(result)
else:
@@ -2888,13 +2888,6 @@ def _dir_additions(self):
# Supplementary functions
-def remove_na(series):
- """
- Return series containing only true/non-NaN values, possibly empty.
- """
- return series[notnull(_values_from_object(series))]
-
-
def _sanitize_index(data, index, copy=False):
""" sanitize an index type to return an ndarray of the underlying, pass
thru a non-Index
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index f8e83aea03594..9cceebb5c4cdb 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -11,7 +11,7 @@
from pandas.util._decorators import cache_readonly
from pandas.core.base import PandasObject
-from pandas.core.dtypes.missing import notnull
+from pandas.core.dtypes.missing import notnull, remove_na_arraylike
from pandas.core.dtypes.common import (
is_list_like,
is_integer,
@@ -21,7 +21,7 @@
from pandas.core.common import AbstractMethodError, isnull, _try_sort
from pandas.core.generic import _shared_docs, _shared_doc_kwargs
from pandas.core.index import Index, MultiIndex
-from pandas.core.series import Series, remove_na
+from pandas.core.series import Series
from pandas.core.indexes.period import PeriodIndex
from pandas.compat import range, lrange, map, zip, string_types
import pandas.compat as compat
@@ -1376,7 +1376,7 @@ def _plot(cls, ax, y, style=None, bw_method=None, ind=None,
from scipy.stats import gaussian_kde
from scipy import __version__ as spv
- y = remove_na(y)
+ y = remove_na_arraylike(y)
if LooseVersion(spv) >= '0.11.0':
gkde = gaussian_kde(y, bw_method=bw_method)
@@ -1495,13 +1495,13 @@ def _args_adjust(self):
@classmethod
def _plot(cls, ax, y, column_num=None, return_type='axes', **kwds):
if y.ndim == 2:
- y = [remove_na(v) for v in y]
+ y = [remove_na_arraylike(v) for v in y]
# Boxplot fails with empty arrays, so need to add a NaN
# if any cols are empty
# GH 8181
y = [v if v.size > 0 else np.array([np.nan]) for v in y]
else:
- y = remove_na(y)
+ y = remove_na_arraylike(y)
bp = ax.boxplot(y, **kwds)
if return_type == 'dict':
@@ -1969,7 +1969,7 @@ def maybe_color_bp(bp):
def plot_group(keys, values, ax):
keys = [pprint_thing(x) for x in keys]
- values = [remove_na(v) for v in values]
+ values = [remove_na_arraylike(v) for v in values]
bp = ax.boxplot(values, **kwds)
if fontsize is not None:
ax.tick_params(axis='both', labelsize=fontsize)
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index e19e42e062932..445611c1696f5 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -10,11 +10,11 @@
import pandas as pd
from pandas.core.dtypes.common import is_float_dtype
+from pandas.core.dtypes.missing import remove_na_arraylike
from pandas import (Series, DataFrame, Index, date_range, isnull, notnull,
pivot, MultiIndex)
from pandas.core.nanops import nanall, nanany
from pandas.core.panel import Panel
-from pandas.core.series import remove_na
from pandas.io.formats.printing import pprint_thing
from pandas import compat
@@ -155,7 +155,7 @@ def _check_stat_op(self, name, alternative, obj=None, has_skipna=True):
if has_skipna:
def skipna_wrapper(x):
- nona = remove_na(x)
+ nona = remove_na_arraylike(x)
if len(nona) == 0:
return np.nan
return alternative(nona)
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index e1995316e7b7c..18643aff15e9b 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -7,10 +7,10 @@
import numpy as np
from pandas.core.dtypes.common import is_float_dtype
+from pandas.core.dtypes.missing import remove_na_arraylike
from pandas import Series, Index, isnull, notnull
from pandas.core.panel import Panel
from pandas.core.panel4d import Panel4D
-from pandas.core.series import remove_na
from pandas.tseries.offsets import BDay
from pandas.util.testing import (assert_frame_equal, assert_series_equal,
@@ -118,7 +118,7 @@ def _check_stat_op(self, name, alternative, obj=None, has_skipna=True):
if has_skipna:
def skipna_wrapper(x):
- nona = remove_na(x)
+ nona = remove_na_arraylike(x)
if len(nona) == 0:
return np.nan
return alternative(nona)
| See discussion on #16931.
| https://api.github.com/repos/pandas-dev/pandas/pulls/16935 | 2017-07-14T23:32:27Z | 2017-07-15T19:14:34Z | 2017-07-15T19:14:34Z | 2017-08-02T23:54:29Z |
DOC: behavior when slicing with missing bounds | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index f988fb7cd6806..1659d57b33b84 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -78,8 +78,10 @@ of multi-axis indexing.
*label* of the index. This use is **not** an integer position along the
index)
- A list or array of labels ``['a', 'b', 'c']``
- - A slice object with labels ``'a':'f'``, (note that contrary to usual python
- slices, **both** the start and the stop are included!)
+ - A slice object with labels ``'a':'f'`` (note that contrary to usual python
+ slices, **both** the start and the stop are included, when present in the
+ index! - also see :ref:`Slicing with labels
+ <indexing.slicing_with_labels>`)
- A boolean array
- A ``callable`` function with one argument (the calling Series, DataFrame or Panel) and
that returns valid output for indexing (one of the above)
@@ -330,13 +332,16 @@ Selection By Label
dfl.loc['20130102':'20130104']
pandas provides a suite of methods in order to have **purely label based indexing**. This is a strict inclusion based protocol.
-**At least 1** of the labels for which you ask, must be in the index or a ``KeyError`` will be raised! When slicing, the start bound is *included*, **AND** the stop bound is *included*. Integers are valid labels, but they refer to the label **and not the position**.
+**At least 1** of the labels for which you ask, must be in the index or a ``KeyError`` will be raised! When slicing, both the start bound **AND** the stop bound are *included*, if present in the index. Integers are valid labels, but they refer to the label **and not the position**.
The ``.loc`` attribute is the primary access method. The following are valid inputs:
- A single label, e.g. ``5`` or ``'a'``, (note that ``5`` is interpreted as a *label* of the index. This use is **not** an integer position along the index)
- A list or array of labels ``['a', 'b', 'c']``
-- A slice object with labels ``'a':'f'`` (note that contrary to usual python slices, **both** the start and the stop are included!)
+- A slice object with labels ``'a':'f'`` (note that contrary to usual python
+ slices, **both** the start and the stop are included, when present in the
+ index! - also See :ref:`Slicing with labels
+ <indexing.slicing_with_labels>`)
- A boolean array
- A ``callable``, see :ref:`Selection By Callable <indexing.callable>`
@@ -390,6 +395,34 @@ For getting a value explicitly (equiv to deprecated ``df.get_value('a','A')``)
# this is also equivalent to ``df1.at['a','A']``
df1.loc['a', 'A']
+.. _indexing.slicing_with_labels:
+
+Slicing with labels
+~~~~~~~~~~~~~~~~~~~
+
+When using ``.loc`` with slices, if both the start and the stop labels are
+present in the index, then elements *located* between the two (including them)
+are returned:
+
+.. ipython:: python
+
+ s = pd.Series(list('abcde'), index=[0,3,2,5,4])
+ s.loc[3:5]
+
+If at least one of the two is absent, but the index is sorted, and can be
+compared against start and stop labels, then slicing will still work as
+expected, by selecting labels which *rank* between the two:
+
+.. ipython:: python
+
+ s.sort_index()
+ s.sort_index().loc[1:6]
+
+However, if at least one of the two is absent *and* the index is not sorted, an
+error will be raised (since doing otherwise would be computationally expensive,
+as well as potentially ambiguous for mixed type indexes). For instance, in the
+above example, ``s.loc[1:6]`` would raise ``KeyError``.
+
.. _indexing.integer:
Selection By Position
| - [x] closes #16917
- [ ] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [ ] whatsnew entry
This documents the behavior discussed in #16917. I don't have time now to produce tests (and I suspect some already exist), so feel free to not close that issue, or to open a new one for tests only. | https://api.github.com/repos/pandas-dev/pandas/pulls/16932 | 2017-07-14T21:54:32Z | 2017-07-16T15:23:31Z | 2017-07-16T15:23:30Z | 2017-07-16T17:16:19Z |
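The documented behavior is runnable as-is (same series as in the added doc section):

```python
import pandas as pd

s = pd.Series(list('abcde'), index=[0, 3, 2, 5, 4])

# Both bounds present in the unsorted index: elements *located* between them
both_present = s.loc[3:5]

# A missing bound on a sorted index: labels *ranking* between the bounds
ranked = s.sort_index().loc[1:6]

# A missing bound on an unsorted index raises KeyError
try:
    s.loc[1:6]
    raised = False
except KeyError:
    raised = True
```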
EHN: Improved thread safety for read_html() GH16928 | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 762107a261090..5c7a00475733a 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -164,6 +164,8 @@ I/O
- Bug in :func:`read_stata` where value labels could not be read when using an iterator (:issue:`16923`)
+- Bug in :func:`read_html` where import check fails when run in multiple threads (:issue:`16928`)
+
Plotting
^^^^^^^^
- Bug in plotting methods using ``secondary_y`` and ``fontsize`` not setting secondary axis font size (:issue:`12565`)
diff --git a/pandas/io/html.py b/pandas/io/html.py
index 2613f26ae5f52..a4acb26af5259 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -37,8 +37,6 @@ def _importers():
if _IMPORTS:
return
- _IMPORTS = True
-
global _HAS_BS4, _HAS_LXML, _HAS_HTML5LIB
try:
@@ -59,6 +57,8 @@ def _importers():
except ImportError:
pass
+ _IMPORTS = True
+
#############
# READ HTML #
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index 4ef265dcd5113..0455ffb069322 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -3,13 +3,17 @@
import glob
import os
import re
+import threading
import warnings
+
+# imports needed for Python 3.x but will fail under Python 2.x
try:
- from importlib import import_module
+ from importlib import import_module, reload
except ImportError:
import_module = __import__
+
from distutils.version import LooseVersion
import pytest
@@ -22,6 +26,7 @@
from pandas.compat import (map, zip, StringIO, string_types, BytesIO,
is_platform_windows, PY3)
from pandas.io.common import URLError, urlopen, file_path_to_url
+import pandas.io.html
from pandas.io.html import read_html
from pandas._libs.parsers import ParserError
@@ -931,3 +936,32 @@ def test_same_ordering():
dfs_lxml = read_html(filename, index_col=0, flavor=['lxml'])
dfs_bs4 = read_html(filename, index_col=0, flavor=['bs4'])
assert_framelist_equal(dfs_lxml, dfs_bs4)
+
+
+class ErrorThread(threading.Thread):
+ def run(self):
+ try:
+ super(ErrorThread, self).run()
+ except Exception as e:
+ self.err = e
+ else:
+ self.err = None
+
+
+@pytest.mark.slow
+def test_importcheck_thread_safety():
+ # see gh-16928
+
+ # force import check by reinitalising global vars in html.py
+ reload(pandas.io.html)
+
+ filename = os.path.join(DATA_PATH, 'valid_markup.html')
+ helper_thread1 = ErrorThread(target=read_html, args=(filename,))
+ helper_thread2 = ErrorThread(target=read_html, args=(filename,))
+
+ helper_thread1.start()
+ helper_thread2.start()
+
+ while helper_thread1.is_alive() or helper_thread2.is_alive():
+ pass
+ assert None is helper_thread1.err is helper_thread2.err
| - [X] closes #16928
- [x] tests added / passed
- [X] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [X] whatsnew entry
It failed a total of 6 tests when running `test_fast.sh` but failed none when running `pytest pandas/tests/io/test_html.py`.
It seems that these 6 failed tests are unrelated to my changes since they also occur on the master branch. | https://api.github.com/repos/pandas-dev/pandas/pulls/16930 | 2017-07-14T19:25:11Z | 2017-07-21T23:25:12Z | 2017-07-21T23:25:12Z | 2017-07-21T23:25:15Z |
DOC: Recommend sphinx 1.5 for now | diff --git a/ci/requirements_all.txt b/ci/requirements_all.txt
index e9f49ed879c86..de37ec4d20be4 100644
--- a/ci/requirements_all.txt
+++ b/ci/requirements_all.txt
@@ -2,7 +2,7 @@ pytest
pytest-cov
pytest-xdist
flake8
-sphinx
+sphinx=1.5*
nbsphinx
ipython
python-dateutil
| xref https://github.com/pandas-dev/pandas/issues/16705
I've started to debug the slowdown (it's on the writing side), but haven't diagnosed the cause. For the SciPy sprints tomorrow we should recommend 1.5 | https://api.github.com/repos/pandas-dev/pandas/pulls/16929 | 2017-07-14T19:14:52Z | 2017-07-14T19:46:42Z | 2017-07-14T19:46:42Z | 2017-10-27T12:04:55Z |
BUG: Allow value labels to be read with iterator | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 2716d9b09eaa9..bd19d71182762 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -162,6 +162,7 @@ I/O
- Bug in :func:`read_csv` in which non integer values for the header argument generated an unhelpful / unrelated error message (:issue:`16338`)
+- Bug in :func:`read_stata` where value labels could not be read when using an iterator (:issue:`16923`)
Plotting
^^^^^^^^
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 107dccfc8175c..30991d8a24c63 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -997,6 +997,7 @@ def __init__(self, path_or_buf, convert_dates=True,
self.path_or_buf = BytesIO(contents)
self._read_header()
+ self._setup_dtype()
def __enter__(self):
""" enter context manager """
@@ -1299,6 +1300,23 @@ def _read_old_header(self, first_char):
# necessary data to continue parsing
self.data_location = self.path_or_buf.tell()
+ def _setup_dtype(self):
+ """Map between numpy and state dtypes"""
+ if self._dtype is not None:
+ return self._dtype
+
+ dtype = [] # Convert struct data types to numpy data type
+ for i, typ in enumerate(self.typlist):
+ if typ in self.NUMPY_TYPE_MAP:
+ dtype.append(('s' + str(i), self.byteorder +
+ self.NUMPY_TYPE_MAP[typ]))
+ else:
+ dtype.append(('s' + str(i), 'S' + str(typ)))
+ dtype = np.dtype(dtype)
+ self._dtype = dtype
+
+ return self._dtype
+
def _calcsize(self, fmt):
return (type(fmt) is int and fmt or
struct.calcsize(self.byteorder + fmt))
@@ -1472,22 +1490,10 @@ def read(self, nrows=None, convert_dates=None,
if nrows is None:
nrows = self.nobs
- if (self.format_version >= 117) and (self._dtype is None):
+ if (self.format_version >= 117) and (not self._value_labels_read):
self._can_read_value_labels = True
self._read_strls()
- # Setup the dtype.
- if self._dtype is None:
- dtype = [] # Convert struct data types to numpy data type
- for i, typ in enumerate(self.typlist):
- if typ in self.NUMPY_TYPE_MAP:
- dtype.append(('s' + str(i), self.byteorder +
- self.NUMPY_TYPE_MAP[typ]))
- else:
- dtype.append(('s' + str(i), 'S' + str(typ)))
- dtype = np.dtype(dtype)
- self._dtype = dtype
-
# Read data
dtype = self._dtype
max_read_len = (self.nobs - self._lines_read) * dtype.itemsize
@@ -1958,7 +1964,6 @@ def _prepare_categoricals(self, data):
return data
get_base_missing_value = StataMissingValue.get_base_missing_value
- index = data.index
data_formatted = []
for col, col_is_cat in zip(data, is_cat):
if col_is_cat:
@@ -1981,8 +1986,7 @@ def _prepare_categoricals(self, data):
# Replace missing values with Stata missing value for type
values[values == -1] = get_base_missing_value(dtype)
- data_formatted.append((col, values, index))
-
+ data_formatted.append((col, values))
else:
data_formatted.append((col, data[col]))
return DataFrame.from_items(data_formatted)
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index b9c6736563160..a414928d318c4 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -9,18 +9,18 @@
from datetime import datetime
from distutils.version import LooseVersion
-import pytest
import numpy as np
import pandas as pd
import pandas.util.testing as tm
+import pytest
from pandas import compat
+from pandas._libs.tslib import NaT
from pandas.compat import iterkeys
+from pandas.core.dtypes.common import is_categorical_dtype
from pandas.core.frame import DataFrame, Series
from pandas.io.parsers import read_csv
from pandas.io.stata import (read_stata, StataReader, InvalidColumnName,
PossiblePrecisionLoss, StataMissingValue)
-from pandas._libs.tslib import NaT
-from pandas.core.dtypes.common import is_categorical_dtype
class TestStata(object):
@@ -1297,3 +1297,15 @@ def test_pickle_path_localpath(self):
reader = lambda x: read_stata(x).set_index('index')
result = tm.round_trip_localpath(df.to_stata, reader)
tm.assert_frame_equal(df, result)
+
+ @pytest.mark.parametrize('write_index', [True, False])
+ def test_value_labels_iterator(self, write_index):
+ # GH 16923
+ d = {'A': ['B', 'E', 'C', 'A', 'E']}
+ df = pd.DataFrame(data=d)
+ df['A'] = df['A'].astype('category')
+ with tm.ensure_clean() as path:
+ df.to_stata(path, write_index=write_index)
+ dta_iter = pd.read_stata(path, iterator=True)
+ value_labels = dta_iter.value_labels()
+ assert value_labels == {'A': {0: 'A', 1: 'B', 2: 'C', 3: 'E'}}
| Allow all value labels to be read before the iterator has been used.
Fix an issue where categorical data was incorrectly reformatted when
``write_index`` was False.
closes #16923
- [X] closes #16923
- [X] tests added / passed
- [X] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [X] whatsnew entry
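The `_setup_dtype` refactor in the diff above can be sketched in isolation. The type-code map below is a hypothetical stand-in for the reader's `NUMPY_TYPE_MAP` attribute, not the exact pandas table:

```python
def build_dtype_spec(typlist, byteorder='<', numpy_type_map=None):
    """Sketch of _setup_dtype: map Stata column type codes to numpy
    structured-dtype field specs named 's0', 's1', ...

    numpy_type_map is an illustrative stand-in; any code not found in
    it is treated as the width of a fixed-length string column, as in
    the diff above.
    """
    if numpy_type_map is None:
        numpy_type_map = {251: 'i1', 252: 'i2', 253: 'i4',
                          254: 'f4', 255: 'f8'}
    spec = []
    for i, typ in enumerate(typlist):
        if typ in numpy_type_map:
            # numeric column: prepend the file's byte order
            spec.append(('s%d' % i, byteorder + numpy_type_map[typ]))
        else:
            # string column of fixed width `typ`
            spec.append(('s%d' % i, 'S%d' % typ))
    return spec
```

Caching the resulting spec (as `_setup_dtype` does with `self._dtype`) lets `value_labels()` run before `read()` without rebuilding it.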
| https://api.github.com/repos/pandas-dev/pandas/pulls/16926 | 2017-07-14T17:22:36Z | 2017-07-14T21:20:28Z | 2017-07-14T21:20:28Z | 2018-04-22T21:12:04Z |
DOC: Update flake8 command instructions | diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 959858fb50f89..e8b6ee21ad104 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,4 +1,4 @@
- [ ] closes #xxxx
- [ ] tests added / passed
- - [ ] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
+ - [ ] passes ``git diff upstream/master -u -- "*.py" | flake8 --diff``
- [ ] whatsnew entry
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index cd444f796fabb..bfcf560565977 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -509,7 +509,7 @@ the `flake8 <http://pypi.python.org/pypi/flake8>`_ tool
and report any stylistic errors in your code. Therefore, it is helpful before
submitting code to run the check yourself on the diff::
- git diff master --name-only -- '*.py' | flake8 --diff
+ git diff master -u -- "*.py" | flake8 --diff
This command will catch any stylistic errors in your changes specifically, but
be beware it may not catch all of them. For example, if you delete the only
@@ -518,18 +518,28 @@ unused function. However, style-checking the diff will not catch this because
the actual import is not part of the diff. Thus, for completeness, you should
run this command, though it will take longer::
- git diff master --name-only -- '*.py' | grep 'pandas/' | xargs -r flake8
+ git diff master --name-only -- "*.py" | grep "pandas/" | xargs -r flake8
Note that on OSX, the ``-r`` flag is not available, so you have to omit it and
run this slightly modified command::
- git diff master --name-only -- '*.py' | grep 'pandas/' | xargs flake8
+ git diff master --name-only -- "*.py" | grep "pandas/" | xargs flake8
-Note that on Windows, ``grep``, ``xargs``, and other tools are likely
-unavailable. However, this has been shown to work on smaller commits in the
-standard Windows command line::
+Note that on Windows, these commands are unfortunately not possible because
+commands like ``grep`` and ``xargs`` are not available natively. To imitate the
+behavior with the commands above, you should run::
- git diff master -u -- "*.py" | flake8 --diff
+ git diff master --name-only -- "*.py"
+
+This will list all of the Python files that have been modified. The only ones
+that matter during linting are any whose directory filepath begins with "pandas."
+For each filepath, copy and paste it after the ``flake8`` command as shown below::
+
+ flake8 <python-filepath>
+
+Alternatively, you can install the ``grep`` and ``xargs`` commands via the
+`MinGW <http://www.mingw.org/>`__ toolchain, and it will allow you to run the
+commands above.
Backwards Compatibility
~~~~~~~~~~~~~~~~~~~~~~~
| xref <a href="https://github.com/pandas-dev/pandas/pull/16738#issuecomment-315211329">#16738 (comment)</a>
cc @thequackdaddy
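For readers on Windows without ``grep``/``xargs``, the filtering step described above can be imitated in a few lines of Python. This is only a sketch, not part of the pandas tooling:

```python
def pandas_files(changed_paths):
    """Keep only the changed .py files under pandas/ -- the ones worth
    passing to `flake8 <python-filepath>` by hand, as the updated
    contributing guide describes."""
    return [p for p in changed_paths
            if p.startswith('pandas/') and p.endswith('.py')]
```

Feed it the lines printed by ``git diff master --name-only -- "*.py"``.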
| https://api.github.com/repos/pandas-dev/pandas/pulls/16919 | 2017-07-14T06:54:42Z | 2017-07-15T04:16:00Z | 2017-07-15T04:16:00Z | 2017-07-15T04:16:48Z |
API: add infer_objects for soft conversions | diff --git a/doc/source/api.rst b/doc/source/api.rst
index d6053791d6f4b..77d095a965221 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -270,6 +270,7 @@ Conversion
:toctree: generated/
Series.astype
+ Series.infer_objects
Series.copy
Series.isnull
Series.notnull
@@ -777,6 +778,7 @@ Conversion
DataFrame.astype
DataFrame.convert_objects
+ DataFrame.infer_objects
DataFrame.copy
DataFrame.isnull
DataFrame.notnull
diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index d8b1602fb104d..4211b15203721 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -2024,7 +2024,28 @@ object conversion
~~~~~~~~~~~~~~~~~
pandas offers various functions to try to force conversion of types from the ``object`` dtype to other types.
-The following functions are available for one dimensional object arrays or scalars:
+In cases where the data is already of the correct type, but stored in an ``object`` array, the
+:meth:`~DataFrame.infer_objects` and :meth:`~Series.infer_objects` can be used to soft convert
+to the correct type.
+
+ .. ipython:: python
+
+ df = pd.DataFrame([[1, 2],
+ ['a', 'b'],
+ [datetime.datetime(2016, 3, 2), datetime.datetime(2016, 3, 2)]])
+ df = df.T
+ df
+ df.dtypes
+
+Because the data transposed the original inference stored all columns as object, which
+``infer_objects`` will correct.
+
+ .. ipython:: python
+
+ df.infer_objects().dtypes
+
+The following functions are available for one dimensional object arrays or scalars to perform
+hard conversion of objects to a specified type:
- :meth:`~pandas.to_numeric` (conversion to numeric dtypes)
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 8ba57c0fa50be..5f2bb03ecf98a 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -25,6 +25,38 @@ New features
- Added ``__fspath__`` method to :class:`~pandas.HDFStore`, :class:`~pandas.ExcelFile`,
and :class:`~pandas.ExcelWriter` to work properly with the file system path protocol (:issue:`13823`)
+
+.. _whatsnew_0210.enhancements.infer_objects:
+
+``infer_objects`` type conversion
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The :meth:`~DataFrame.infer_objects` and :meth:`~Series.infer_objects`
+methods have been added to perform dtype inference on object columns, replacing
+some of the functionality of the deprecated ``convert_objects``
+method. See the documentation :ref:`here <basics.object_conversion>`
+for more details. (:issue:`11221`)
+
+This function only performs soft conversions on object columns, converting Python objects
+to native types, but not any coercive conversions. For example:
+
+.. ipython:: python
+
+ df = pd.DataFrame({'A': [1, 2, 3],
+ 'B': np.array([1, 2, 3], dtype='object'),
+ 'C': ['1', '2', '3']})
+ df.dtypes
+ df.infer_objects().dtypes
+
+Note that column ``'C'`` was not converted - only scalar numeric types
+will be inferred to a new type. Other types of conversion should be accomplished
+using the :func:`to_numeric` function (or :func:`to_datetime`, :func:`to_timedelta`).
+
+.. ipython:: python
+
+ df = df.infer_objects()
+ df['C'] = pd.to_numeric(df['C'], errors='coerce')
+ df.dtypes
+
.. _whatsnew_0210.enhancements.other:
Other Enhancements
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 5722539b87aec..7a064440b5273 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3634,9 +3634,12 @@ def convert_objects(self, convert_dates=True, convert_numeric=False,
converted : same as input object
"""
from warnings import warn
- warn("convert_objects is deprecated. Use the data-type specific "
- "converters pd.to_datetime, pd.to_timedelta and pd.to_numeric.",
- FutureWarning, stacklevel=2)
+ msg = ("convert_objects is deprecated. To re-infer data dtypes for "
+ "object columns, use {klass}.infer_objects()\nFor all "
+ "other conversions use the data-type specific converters "
+ "pd.to_datetime, pd.to_timedelta and pd.to_numeric."
+ ).format(klass=self.__class__.__name__)
+ warn(msg, FutureWarning, stacklevel=2)
return self._constructor(
self._data.convert(convert_dates=convert_dates,
@@ -3644,6 +3647,53 @@ def convert_objects(self, convert_dates=True, convert_numeric=False,
convert_timedeltas=convert_timedeltas,
copy=copy)).__finalize__(self)
+ def infer_objects(self):
+ """
+ Attempt to infer better dtypes for object columns.
+
+ Attempts soft conversion of object-dtyped
+ columns, leaving non-object and unconvertible
+ columns unchanged. The inference rules are the
+ same as during normal Series/DataFrame construction.
+
+ .. versionadded:: 0.21.0
+
+ See Also
+ --------
+ pandas.to_datetime : Convert argument to datetime.
+ pandas.to_timedelta : Convert argument to timedelta.
+ pandas.to_numeric : Convert argument to numeric type.
+
+ Returns
+ -------
+ converted : same type as input object
+
+ Examples
+ --------
+ >>> df = pd.DataFrame({"A": ["a", 1, 2, 3]})
+ >>> df = df.iloc[1:]
+ >>> df
+ A
+ 1 1
+ 2 2
+ 3 3
+
+ >>> df.dtypes
+ A object
+ dtype: object
+
+ >>> df.infer_objects().dtypes
+ A int64
+ dtype: object
+ """
+ # numeric=False necessary to only soft convert;
+ # python objects will still be converted to
+ # native numpy numeric types
+ return self._constructor(
+ self._data.convert(datetime=True, numeric=False,
+ timedelta=True, coerce=False,
+ copy=True)).__finalize__(self)
+
# ----------------------------------------------------------------------
# Filling NA's
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index c1a5b437be5d0..f66070fd66813 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -495,6 +495,32 @@ def test_convert_objects_no_conversion(self):
mixed2 = mixed1._convert(datetime=True)
assert_frame_equal(mixed1, mixed2)
+ def test_infer_objects(self):
+ # GH 11221
+ df = DataFrame({'a': ['a', 1, 2, 3],
+ 'b': ['b', 2.0, 3.0, 4.1],
+ 'c': ['c', datetime(2016, 1, 1),
+ datetime(2016, 1, 2),
+ datetime(2016, 1, 3)],
+ 'd': [1, 2, 3, 'd']},
+ columns=['a', 'b', 'c', 'd'])
+ df = df.iloc[1:].infer_objects()
+
+ assert df['a'].dtype == 'int64'
+ assert df['b'].dtype == 'float64'
+ assert df['c'].dtype == 'M8[ns]'
+ assert df['d'].dtype == 'object'
+
+ expected = DataFrame({'a': [1, 2, 3],
+ 'b': [2.0, 3.0, 4.1],
+ 'c': [datetime(2016, 1, 1),
+ datetime(2016, 1, 2),
+ datetime(2016, 1, 3)],
+ 'd': [2, 3, 'd']},
+ columns=['a', 'b', 'c', 'd'])
+ # reconstruct frame to verify inference is same
+ tm.assert_frame_equal(df.reset_index(drop=True), expected)
+
def test_stale_cached_series_bug_473(self):
# this is chained, but ok
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index 2ec579842e33f..c214280ee8386 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -268,3 +268,21 @@ def test_series_to_categorical(self):
expected = Series(['a', 'b', 'c'], dtype='category')
tm.assert_series_equal(result, expected)
+
+ def test_infer_objects_series(self):
+ # GH 11221
+ actual = Series(np.array([1, 2, 3], dtype='O')).infer_objects()
+ expected = Series([1, 2, 3])
+ tm.assert_series_equal(actual, expected)
+
+ actual = Series(np.array([1, 2, 3, None], dtype='O')).infer_objects()
+ expected = Series([1., 2., 3., np.nan])
+ tm.assert_series_equal(actual, expected)
+
+ # only soft conversions; unconvertible values pass through unchanged
+ actual = (Series(np.array([1, 2, 3, None, 'a'], dtype='O'))
+ .infer_objects())
+ expected = Series([1, 2, 3, None, 'a'])
+
+ assert actual.dtype == 'object'
+ tm.assert_series_equal(actual, expected)
| - [x] closes #11221 (more or less)
- [ ] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [x] whatsnew entry
Need to add more test cases still
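The "soft only" rule above — convert object columns whose Python values are already native types, never coerce strings — can be sketched in pure Python. This is a hypothetical helper for illustration, not the pandas implementation:

```python
def infer_soft(values):
    """Return the dtype name a soft conversion would pick for a list
    of Python objects: all-bool -> bool, all-int -> int64, a numeric
    mix -> float64, anything unconvertible (e.g. strings) -> object,
    mirroring the infer_objects behaviour described above."""
    if values and all(type(v) is bool for v in values):
        return 'bool'
    if values and all(type(v) is int for v in values):
        return 'int64'
    if values and all(type(v) in (int, float) for v in values):
        return 'float64'
    # strings and mixed junk are left alone -- no coercion
    return 'object'
```

Column ``'C'`` from the whatsnew example would come back ``'object'`` here; only `to_numeric` performs that hard conversion.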
Fix for #16909 (TimedeltaIndex.get_loc is not working on np.timedelta64 data type) | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 039b24cc63217..2716d9b09eaa9 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -155,6 +155,7 @@ Indexing
- When called with a null slice (e.g. ``df.iloc[:]``), the ``.iloc`` and ``.loc`` indexers return a shallow copy of the original object. Previously they returned the original object. (:issue:`13873`).
- When called on an unsorted ``MultiIndex``, the ``loc`` indexer now will raise ``UnsortedIndexError`` only if proper slicing is used on non-sorted levels (:issue:`16734`).
- Fixes regression in 0.20.3 when indexing with a string on a ``TimedeltaIndex`` (:issue:`16896`).
+- Fixed ``TimedeltaIndex.get_loc`` handling of ``np.timedelta64`` inputs (:issue:`16909`).
I/O
^^^
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index faec813df3993..68713743d72ed 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -15,7 +15,7 @@
_ensure_int64)
from pandas.core.dtypes.missing import isnull
from pandas.core.dtypes.generic import ABCSeries
-from pandas.core.common import _maybe_box, _values_from_object, is_bool_indexer
+from pandas.core.common import _maybe_box, _values_from_object
from pandas.core.indexes.base import Index
from pandas.core.indexes.numeric import Int64Index
@@ -682,7 +682,7 @@ def get_loc(self, key, method=None, tolerance=None):
-------
loc : int
"""
- if is_bool_indexer(key) or is_timedelta64_dtype(key):
+ if is_list_like(key):
raise TypeError
if isnull(key):
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py
index a4fc26382fb9b..59e4b1432b8bc 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta.py
@@ -66,6 +66,9 @@ def test_get_loc(self):
for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]:
assert idx.get_loc('1 day 1 hour', method) == loc
+ # GH 16909
+ assert idx.get_loc(idx[1].to_timedelta64()) == 1
+
# GH 16896
assert idx.get_loc('0 days') == 0
| ``get_loc`` should now work for the ``np.timedelta64`` data type.
- [x] closes #16909
- [ ] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [x] whatsnew entry
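The widened guard in the diff can be sketched in plain Python — `is_list_like` below is a simplified stand-in for pandas' helper, not its real implementation:

```python
def get_loc_guard(key):
    """Sketch of the new check: any list-like key (a boolean mask, an
    array of timedelta64, a plain list) raises TypeError, while
    scalars -- including strings like '0 days' and np.timedelta64
    values, which are not iterable -- fall through to the normal
    lookup."""
    is_list_like = hasattr(key, '__iter__') and not isinstance(key, str)
    if is_list_like:
        raise TypeError('cannot use a list-like key here')
    return key
```

Replacing the two separate `is_bool_indexer`/`is_timedelta64_dtype` checks with one list-like test is what lets a scalar `np.timedelta64` through.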
| https://api.github.com/repos/pandas-dev/pandas/pulls/16912 | 2017-07-13T20:17:20Z | 2017-07-14T14:13:54Z | 2017-07-14T14:13:54Z | 2017-07-14T14:14:00Z |
MAINT: Remove unused mock import | diff --git a/pandas/tests/io/formats/test_printing.py b/pandas/tests/io/formats/test_printing.py
index aae3ba31648ff..ec34e7656e01f 100644
--- a/pandas/tests/io/formats/test_printing.py
+++ b/pandas/tests/io/formats/test_printing.py
@@ -127,14 +127,7 @@ class TestTableSchemaRepr(object):
@classmethod
def setup_class(cls):
pytest.importorskip('IPython')
- try:
- import mock
- except ImportError:
- try:
- from unittest import mock
- except ImportError:
- pytest.skip("Mock is not installed")
- cls.mock = mock
+
from IPython.core.interactiveshell import InteractiveShell
cls.display_formatter = InteractiveShell.instance().display_formatter
| We import it, set it as an attribute, and then don't use it.
xref <a href="https://github.com/pandas-dev/pandas/issues/16716#issuecomment-315127204">#16716 (comment)</a>
cc @TomAugspurger | https://api.github.com/repos/pandas-dev/pandas/pulls/16908 | 2017-07-13T17:38:44Z | 2017-07-13T19:15:27Z | 2017-07-13T19:15:27Z | 2017-07-13T19:32:56Z |
Fixes for #16896(TimedeltaIndex indexing regression for strings) | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index a5ee0e0ce2653..0fbf865b24d50 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -155,7 +155,7 @@ Indexing
- When called with a null slice (e.g. ``df.iloc[:]``), the ``.iloc`` and ``.loc`` indexers return a shallow copy of the original object. Previously they returned the original object. (:issue:`13873`).
- When called on an unsorted ``MultiIndex``, the ``loc`` indexer now will raise ``UnsortedIndexError`` only if proper slicing is used on non-sorted levels (:issue:`16734`).
-
+- Fixes regression in 0.20.3 when indexing with a string on a ``TimedeltaIndex`` (:issue:`16896`).
I/O
^^^
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 2eebf3704253e..ac7189d108b0a 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -392,13 +392,15 @@ def is_timedelta64_dtype(arr_or_dtype):
False
>>> is_timedelta64_dtype(pd.Series([], dtype="timedelta64[ns]"))
True
+ >>> is_timedelta64_dtype('0 days')
+ False
"""
if arr_or_dtype is None:
return False
try:
tipo = _get_dtype_type(arr_or_dtype)
- except ValueError:
+ except:
return False
return issubclass(tipo, np.timedelta64)
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index c32e8590c5675..19a2deecee1f4 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -199,12 +199,17 @@ def test_is_datetime64tz_dtype():
def test_is_timedelta64_dtype():
assert not com.is_timedelta64_dtype(object)
+ assert not com.is_timedelta64_dtype(None)
assert not com.is_timedelta64_dtype([1, 2, 3])
assert not com.is_timedelta64_dtype(np.array([], dtype=np.datetime64))
+ assert not com.is_timedelta64_dtype('0 days')
+ assert not com.is_timedelta64_dtype("0 days 00:00:00")
+ assert not com.is_timedelta64_dtype(["0 days 00:00:00"])
+ assert not com.is_timedelta64_dtype("NO DATE")
+
assert com.is_timedelta64_dtype(np.timedelta64)
assert com.is_timedelta64_dtype(pd.Series([], dtype="timedelta64[ns]"))
-
- assert not com.is_timedelta64_dtype("0 days 00:00:00")
+ assert com.is_timedelta64_dtype(pd.to_timedelta(['0 days', '1 days']))
def test_is_period_dtype():
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py
index 08cf5108ffdb1..a4fc26382fb9b 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta.py
@@ -66,6 +66,9 @@ def test_get_loc(self):
for method, loc in [('pad', 1), ('backfill', 2), ('nearest', 1)]:
assert idx.get_loc('1 day 1 hour', method) == loc
+ # GH 16896
+ assert idx.get_loc('0 days') == 0
+
def test_get_loc_nat(self):
tidx = TimedeltaIndex(['1 days 01:00:00', 'NaT', '2 days 01:00:00'])
diff --git a/pandas/tests/indexing/test_timedelta.py b/pandas/tests/indexing/test_timedelta.py
index be3ea8f0c371d..32609362e49af 100644
--- a/pandas/tests/indexing/test_timedelta.py
+++ b/pandas/tests/indexing/test_timedelta.py
@@ -5,7 +5,6 @@
class TestTimedeltaIndexing(object):
-
def test_boolean_indexing(self):
# GH 14946
df = pd.DataFrame({'x': range(10)})
@@ -40,3 +39,11 @@ def test_list_like_indexing(self, indexer, expected):
dtype="int64")
tm.assert_frame_equal(expected, df)
+
+ def test_string_indexing(self):
+ # GH 16896
+ df = pd.DataFrame({'x': range(3)},
+ index=pd.to_timedelta(range(3), unit='days'))
+ expected = df.iloc[0]
+ sliced = df.loc['0 days']
+ tm.assert_series_equal(sliced, expected)
| Fixed a regression when indexing with a string on a ``TimedeltaIndex`` (#16896).
I am not a fan of the catch-all ``except``, but there is no guarantee about what `_get_dtype_type` may raise. As part of this PR, we can ensure that all type checks return True/False. `_get_dtype_type` should probably return a type or a TypeError... Nothing else. Let me know if you want to do that as well.
- [x] closes #16896
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [x] whatsnew entry
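The pattern behind the catch-all — a dtype predicate should report `False` on any resolution failure rather than raise — can be sketched generically. This wrapper is hypothetical illustration, not pandas code:

```python
def never_raises(predicate):
    """Wrap a dtype predicate so that any exception raised while
    resolving the input (e.g. a string like '0 days' that is not a
    dtype at all) is reported as False instead of propagating."""
    def wrapped(arg):
        if arg is None:
            return False
        try:
            return bool(predicate(arg))
        except Exception:
            return False
    return wrapped
```

For example, `never_raises(lambda a: a.dtype.kind == 'm')('0 days')` is `False` instead of an `AttributeError` leaking out of the check.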
| https://api.github.com/repos/pandas-dev/pandas/pulls/16907 | 2017-07-13T14:41:25Z | 2017-07-13T23:04:30Z | 2017-07-13T23:04:30Z | 2017-07-14T12:32:22Z |
Added test for _get_dtype_type. | diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 290cdd732b6d6..b02691e957366 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -568,3 +568,40 @@ def test__get_dtype(input_param, result):
def test__get_dtype_fails(input_param):
# python objects
pytest.raises(TypeError, com._get_dtype, input_param)
+
+
+@pytest.mark.parametrize('input_param,result', [
+ (int, np.dtype(int).type),
+ ('int32', np.int32),
+ (float, np.dtype(float).type),
+ ('float64', np.float64),
+ (np.dtype('float64'), np.float64),
+ (str, np.dtype(str).type),
+ (pd.Series([1, 2], dtype=np.dtype('int16')), np.int16),
+ (pd.Series(['a', 'b']), np.object_),
+ (pd.Index([1, 2], dtype='int64'), np.int64),
+ (pd.Index(['a', 'b']), np.object_),
+ ('category', com.CategoricalDtypeType),
+ (pd.Categorical(['a', 'b']).dtype, com.CategoricalDtypeType),
+ (pd.Categorical(['a', 'b']), com.CategoricalDtypeType),
+ (pd.CategoricalIndex(['a', 'b']).dtype, com.CategoricalDtypeType),
+ (pd.CategoricalIndex(['a', 'b']), com.CategoricalDtypeType),
+ (pd.DatetimeIndex([1, 2]), np.datetime64),
+ (pd.DatetimeIndex([1, 2]).dtype, np.datetime64),
+ ('<M8[ns]', np.datetime64),
+ (pd.DatetimeIndex([1, 2], tz='Europe/London'), com.DatetimeTZDtypeType),
+ (pd.DatetimeIndex([1, 2], tz='Europe/London').dtype,
+ com.DatetimeTZDtypeType),
+ ('datetime64[ns, Europe/London]', com.DatetimeTZDtypeType),
+ (pd.SparseSeries([1, 2], dtype='int32'), np.int32),
+ (pd.SparseSeries([1, 2], dtype='int32').dtype, np.int32),
+ (PeriodDtype(freq='D'), com.PeriodDtypeType),
+ ('period[D]', com.PeriodDtypeType),
+ (IntervalDtype(), com.IntervalDtypeType),
+ (None, type(None)),
+ (1, type(None)),
+ (1.2, type(None)),
+ (pd.DataFrame([1, 2]), type(None)), # composite dtype
+])
+def test__get_dtype_type(input_param, result):
+ assert com._get_dtype_type(input_param) == result
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [ ] whatsnew entry
Tests for ``core.dtype.common._get_dtype_type``. | https://api.github.com/repos/pandas-dev/pandas/pulls/16899 | 2017-07-12T22:44:52Z | 2017-07-21T11:01:39Z | 2017-07-21T11:01:39Z | 2017-09-11T21:11:20Z |
BUG: setitem with boolean mask and tz-aware DTI | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 5146bd35dff30..4270d08b96e39 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -196,7 +196,8 @@ Indexing
- Fix :func:`MultiIndex.sort_index` ordering when ``ascending`` argument is a list, but not all levels are specified, or are in a different order (:issue:`16934`).
- Fixes bug where indexing with ``np.inf`` caused an ``OverflowError`` to be raised (:issue:`16957`)
- Bug in reindexing on an empty ``CategoricalIndex`` (:issue:`16770`)
-
+- Fixes ``DataFrame.loc`` for setting with alignment and tz-aware ``DatetimeIndex`` (:issue:`16889`)
+
I/O
^^^
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index ae0aaf98fdf02..38cc5431a004f 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -760,10 +760,12 @@ def _align_frame(self, indexer, df):
for i, ix in enumerate(indexer):
ax = self.obj.axes[i]
if is_sequence(ix) or isinstance(ix, slice):
+ if isinstance(ix, np.ndarray):
+ ix = ix.ravel()
if idx is None:
- idx = ax[ix].ravel()
+ idx = ax[ix]
elif cols is None:
- cols = ax[ix].ravel()
+ cols = ax[ix]
else:
break
else:
diff --git a/pandas/tests/indexing/test_datetime.py b/pandas/tests/indexing/test_datetime.py
index da8a896cb6f4a..8e8fc835b11f7 100644
--- a/pandas/tests/indexing/test_datetime.py
+++ b/pandas/tests/indexing/test_datetime.py
@@ -8,6 +8,33 @@
class TestDatetimeIndex(object):
+ def test_setitem_with_datetime_tz(self):
+ # 16889
+ # support .loc with alignment and tz-aware DatetimeIndex
+ mask = np.array([True, False, True, False])
+
+ idx = pd.date_range('20010101', periods=4, tz='UTC')
+ df = pd.DataFrame({'a': np.arange(4)}, index=idx).astype('float64')
+
+ result = df.copy()
+ result.loc[mask, :] = df.loc[mask, :]
+ tm.assert_frame_equal(result, df)
+
+ result = df.copy()
+ result.loc[mask] = df.loc[mask]
+ tm.assert_frame_equal(result, df)
+
+ idx = pd.date_range('20010101', periods=4)
+ df = pd.DataFrame({'a': np.arange(4)}, index=idx).astype('float64')
+
+ result = df.copy()
+ result.loc[mask, :] = df.loc[mask, :]
+ tm.assert_frame_equal(result, df)
+
+ result = df.copy()
+ result.loc[mask] = df.loc[mask]
+ tm.assert_frame_equal(result, df)
+
def test_indexing_with_datetime_tz(self):
# 8260
| closes #16889
- [x] closes #xxxx
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [x] whatsnew entry
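The shape of the fix — flatten the boolean indexer *before* selecting, not the selected labels afterwards — can be sketched in plain Python, with lists standing in for ndarrays and for a tz-aware `DatetimeIndex` (whose selection result must not be pushed through `.ravel()`):

```python
def select(ax, ix):
    """Flatten a possibly 2-D boolean mask first, then use the flat
    mask to pick axis labels -- the order of operations the diff
    above switches to."""
    # a list of single-element lists stands in for a 2-D ndarray mask
    if ix and isinstance(ix[0], list):
        flat = [b for row in ix for b in row]
    else:
        flat = ix
    return [label for label, keep in zip(ax, flat) if keep]
```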
| https://api.github.com/repos/pandas-dev/pandas/pulls/16897 | 2017-07-12T16:59:52Z | 2017-07-21T11:01:00Z | 2017-07-21T11:01:00Z | 2017-07-21T12:46:14Z |
BUG: coercing of bools in groupby transform | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index a5d4259480ba8..762107a261090 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -175,7 +175,7 @@ Groupby/Resample/Rolling
- Bug in ``DataFrame.resample(...).size()`` where an empty ``DataFrame`` did not return a ``Series`` (:issue:`14962`)
- Bug in :func:`infer_freq` causing indices with 2-day gaps during the working week to be wrongly inferred as business daily (:issue:`16624`)
- Bug in ``.rolling(...).quantile()`` which incorrectly used different defaults than :func:`Series.quantile()` and :func:`DataFrame.quantile()` (:issue:`9413`, :issue:`16211`)
-
+- Bug in ``groupby.transform()`` that would coerce boolean dtypes back to float (:issue:`16875`)
Sparse
^^^^^^
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 16b0a5c8a74ca..6532e17695c86 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -110,9 +110,7 @@ def trans(x): # noqa
np.prod(result.shape)):
return result
- if issubclass(dtype.type, np.floating):
- return result.astype(dtype)
- elif is_bool_dtype(dtype) or is_integer_dtype(dtype):
+ if is_bool_dtype(dtype) or is_integer_dtype(dtype):
# if we don't have any elements, just astype it
if not np.prod(result.shape):
@@ -144,6 +142,9 @@ def trans(x): # noqa
# hit here
if (new_result == result).all():
return new_result
+ elif (issubclass(dtype.type, np.floating) and
+ not is_bool_dtype(result.dtype)):
+ return result.astype(dtype)
# a datetimelike
# GH12821, iNaT is casted to float
diff --git a/pandas/tests/dtypes/test_cast.py b/pandas/tests/dtypes/test_cast.py
index 767e99d98cf29..6e07487b3e04f 100644
--- a/pandas/tests/dtypes/test_cast.py
+++ b/pandas/tests/dtypes/test_cast.py
@@ -9,7 +9,7 @@
from datetime import datetime, timedelta, date
import numpy as np
-from pandas import Timedelta, Timestamp, DatetimeIndex, DataFrame, NaT
+from pandas import Timedelta, Timestamp, DatetimeIndex, DataFrame, NaT, Series
from pandas.core.dtypes.cast import (
maybe_downcast_to_dtype,
@@ -45,6 +45,12 @@ def test_downcast_conv(self):
expected = np.array([8, 8, 8, 8, 9])
assert (np.array_equal(result, expected))
+ # GH16875 coercing of bools
+ ser = Series([True, True, False])
+ result = maybe_downcast_to_dtype(ser, np.dtype(np.float64))
+ expected = ser
+ tm.assert_series_equal(result, expected)
+
# conversions
expected = np.array([1, 2])
diff --git a/pandas/tests/groupby/test_transform.py b/pandas/tests/groupby/test_transform.py
index 40434ff510421..98839a17d6e0c 100644
--- a/pandas/tests/groupby/test_transform.py
+++ b/pandas/tests/groupby/test_transform.py
@@ -195,6 +195,19 @@ def test_transform_bug(self):
expected = Series(np.arange(5, 0, step=-1), name='B')
assert_series_equal(result, expected)
+ def test_transform_numeric_to_boolean(self):
+ # GH 16875
+ # inconsistency in transforming boolean values
+ expected = pd.Series([True, True], name='A')
+
+ df = pd.DataFrame({'A': [1.1, 2.2], 'B': [1, 2]})
+ result = df.groupby('B').A.transform(lambda x: True)
+ assert_series_equal(result, expected)
+
+ df = pd.DataFrame({'A': [1, 2], 'B': [1, 2]})
+ result = df.groupby('B').A.transform(lambda x: True)
+ assert_series_equal(result, expected)
+
def test_transform_datetime_to_timedelta(self):
# GH 15429
# transforming a datetime to timedelta
| This bug is happening because of the following lines of code(3054-3059 in groupby.py)
```
# we will only try to coerce the result type if
# we have a numeric dtype, as these are *always* udfs
# the cython take a different path (and casting)
dtype = self._selected_obj.dtype
if is_numeric_dtype(dtype):
result = maybe_downcast_to_dtype(result, dtype)
```
the call to `maybe_downcast_to_dtype` successfully downcasts bool to float, so the result is 1.0.
I've put a guard in place to check whether the result is boolean. This seems like an optimal solution to me, since the behaviour of `maybe_downcast_to_dtype` stays consistent.
Fix inconsistency in groupby transformations
- [ ] closes #16875
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [ ] whatsnew entry
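The reordered check can be sketched without numpy — the point is simply that the float-astype branch must first test the *result's* dtype, not only the original column's. Names here are hypothetical:

```python
def maybe_downcast(result, target_kind):
    """Only cast the transform result back to float when the result
    itself is not boolean: a lambda returning True/False keeps its
    bools even though the original column's dtype was numeric."""
    result_is_bool = all(type(v) is bool for v in result)
    if target_kind == 'float' and not result_is_bool:
        return [float(v) for v in result]
    return result
```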
| https://api.github.com/repos/pandas-dev/pandas/pulls/16895 | 2017-07-12T14:57:47Z | 2017-07-16T01:12:58Z | 2017-07-16T01:12:58Z | 2017-07-16T01:13:00Z |
Let _get_dtype accept Categoricals and CategoricalIndex | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 015fdf1f45f47..40df813cf4b3b 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -148,7 +148,6 @@ Conversion
^^^^^^^^^^
-
Indexing
^^^^^^^^
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 2eebf3704253e..a386c04cc4fdd 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -11,7 +11,7 @@
ExtensionDtype)
from .generic import (ABCCategorical, ABCPeriodIndex,
ABCDatetimeIndex, ABCSeries,
- ABCSparseArray, ABCSparseSeries)
+ ABCSparseArray, ABCSparseSeries, ABCCategoricalIndex)
from .inference import is_string_like
from .inference import * # noqa
@@ -1713,6 +1713,8 @@ def _get_dtype(arr_or_dtype):
return PeriodDtype.construct_from_string(arr_or_dtype)
elif is_interval_dtype(arr_or_dtype):
return IntervalDtype.construct_from_string(arr_or_dtype)
+ elif isinstance(arr_or_dtype, (ABCCategorical, ABCCategoricalIndex)):
+ return arr_or_dtype.dtype
if hasattr(arr_or_dtype, 'dtype'):
arr_or_dtype = arr_or_dtype.dtype
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index c32e8590c5675..7188e397c0617 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -532,16 +532,16 @@ def test_is_complex_dtype():
(float, np.dtype(float)),
('float64', np.dtype('float64')),
(np.dtype('float64'), np.dtype('float64')),
- pytest.mark.xfail((str, np.dtype('<U')), ),
+ (str, np.dtype(str)),
(pd.Series([1, 2], dtype=np.dtype('int16')), np.dtype('int16')),
(pd.Series(['a', 'b']), np.dtype(object)),
(pd.Index([1, 2]), np.dtype('int64')),
(pd.Index(['a', 'b']), np.dtype(object)),
('category', 'category'),
(pd.Categorical(['a', 'b']).dtype, CategoricalDtype()),
- pytest.mark.xfail((pd.Categorical(['a', 'b']), CategoricalDtype()),),
+ (pd.Categorical(['a', 'b']), CategoricalDtype()),
(pd.CategoricalIndex(['a', 'b']).dtype, CategoricalDtype()),
- pytest.mark.xfail((pd.CategoricalIndex(['a', 'b']), CategoricalDtype()),),
+ (pd.CategoricalIndex(['a', 'b']), CategoricalDtype()),
(pd.DatetimeIndex([1, 2]), np.dtype('<M8[ns]')),
(pd.DatetimeIndex([1, 2]).dtype, np.dtype('<M8[ns]')),
('<M8[ns]', np.dtype('<M8[ns]')),
| - [x] closes #16817
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [ ] whatsnew entry
Makes ``_get_dtype`` accept ``Categorical``s and ``CategoricalIndex``. Also passes relevant tests in ``test_common.py``.
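The effect is visible through the public API — both objects carry a ``CategoricalDtype``, which is what ``_get_dtype`` now returns when handed the objects themselves rather than their ``.dtype`` (a minimal sketch, assuming a pandas version that includes this change):

```python
import pandas as pd

cat = pd.Categorical(['a', 'b'])
idx = pd.CategoricalIndex(['a', 'b'])

# Both the array and the index expose a CategoricalDtype:
print(cat.dtype)  # category
print(idx.dtype)  # category
```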
See also discussion in #16845. Is there a manual for checking performance (can't imagine this affects performance, though)? | https://api.github.com/repos/pandas-dev/pandas/pulls/16887 | 2017-07-11T22:00:47Z | 2017-07-13T20:35:48Z | 2017-07-13T20:35:48Z | 2017-07-13T20:43:31Z |
PERF: SparseDataFrame._init_dict uses intermediary dict, not DataFrame | diff --git a/asv_bench/benchmarks/sparse.py b/asv_bench/benchmarks/sparse.py
index 500149b89b08b..7259e8cdb7d61 100644
--- a/asv_bench/benchmarks/sparse.py
+++ b/asv_bench/benchmarks/sparse.py
@@ -1,3 +1,5 @@
+from itertools import repeat
+
from .pandas_vb_common import *
import scipy.sparse
from pandas import SparseSeries, SparseDataFrame
@@ -27,6 +29,12 @@ class sparse_frame_constructor(object):
def time_sparse_frame_constructor(self):
SparseDataFrame(columns=np.arange(100), index=np.arange(1000))
+ def time_sparse_from_scipy(self):
+ SparseDataFrame(scipy.sparse.rand(1000, 1000, 0.005))
+
+ def time_sparse_from_dict(self):
+ SparseDataFrame(dict(zip(range(1000), repeat([0]))))
+
class sparse_series_from_coo(object):
goal_time = 0.2
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 015fdf1f45f47..6e60b77611492 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -135,6 +135,7 @@ Removal of prior version deprecations/changes
Performance Improvements
~~~~~~~~~~~~~~~~~~~~~~~~
+- Improved performance of instantiating :class:`SparseDataFrame` (:issue:`16773`)
.. _whatsnew_0210.bug_fixes:
diff --git a/pandas/core/sparse/frame.py b/pandas/core/sparse/frame.py
index 461dd50c5da6e..e157ae16e71f9 100644
--- a/pandas/core/sparse/frame.py
+++ b/pandas/core/sparse/frame.py
@@ -143,7 +143,7 @@ def _init_dict(self, data, index, columns, dtype=None):
sp_maker = lambda x: SparseArray(x, kind=self._default_kind,
fill_value=self._default_fill_value,
copy=True, dtype=dtype)
- sdict = DataFrame()
+ sdict = {}
for k, v in compat.iteritems(data):
if isinstance(v, Series):
# Force alignment, no copy necessary
@@ -163,11 +163,8 @@ def _init_dict(self, data, index, columns, dtype=None):
# TODO: figure out how to handle this case, all nan's?
# add in any other columns we want to have (completeness)
- nan_vec = np.empty(len(index))
- nan_vec.fill(nan)
- for c in columns:
- if c not in sdict:
- sdict[c] = sp_maker(nan_vec)
+ nan_arr = sp_maker(np.full(len(index), np.nan))
+ sdict.update((c, nan_arr) for c in columns if c not in sdict)
return to_manager(sdict, columns, index)
diff --git a/pandas/tests/reshape/test_reshape.py b/pandas/tests/reshape/test_reshape.py
index d47a95924bd10..632d3b4ad2e7a 100644
--- a/pandas/tests/reshape/test_reshape.py
+++ b/pandas/tests/reshape/test_reshape.py
@@ -643,6 +643,10 @@ def test_dataframe_dummies_preserve_categorical_dtype(self):
class TestGetDummiesSparse(TestGetDummies):
sparse = True
+ @pytest.mark.xfail(reason='nan in index is problematic (GH 16894)')
+ def test_include_na(self):
+ super(TestGetDummiesSparse, self).test_include_na()
+
class TestMakeAxisDummies(object):
diff --git a/pandas/tests/sparse/test_frame.py b/pandas/tests/sparse/test_frame.py
index 654d12b782f37..a5d514644a8f1 100644
--- a/pandas/tests/sparse/test_frame.py
+++ b/pandas/tests/sparse/test_frame.py
@@ -1095,6 +1095,8 @@ def test_as_blocks(self):
assert list(df_blocks.keys()) == ['float64']
tm.assert_frame_equal(df_blocks['float64'], df)
+ @pytest.mark.xfail(reason='nan column names in _init_dict problematic '
+ '(GH 16894)')
def test_nan_columnname(self):
# GH 8822
nan_colname = DataFrame(Series(1.0, index=[0]), columns=[nan])
| - [X] closes https://github.com/pandas-dev/pandas/issues/16773
- [x] tests added / passed
- [X] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16883 | 2017-07-11T17:15:30Z | 2017-07-17T15:11:38Z | 2017-07-17T15:11:37Z | 2017-07-17T15:11:46Z |
fix 4 lgtm alerts | diff --git a/asv_bench/benchmarks/rolling.py b/asv_bench/benchmarks/rolling.py
index 9da9d0b855323..899349cd21f84 100644
--- a/asv_bench/benchmarks/rolling.py
+++ b/asv_bench/benchmarks/rolling.py
@@ -26,7 +26,7 @@ def time_rolling_quantile_median(self):
def time_rolling_median(self):
(self.df.rolling(self.wins).median())
- def time_rolling_median(self):
+ def time_rolling_mean(self):
(self.df.rolling(self.wins).mean())
def time_rolling_max(self):
@@ -68,7 +68,7 @@ def time_rolling_quantile_median_l(self):
def time_rolling_median_l(self):
(self.df.rolling(self.winl).median())
- def time_rolling_median_l(self):
+ def time_rolling_mean_l(self):
(self.df.rolling(self.winl).mean())
def time_rolling_max_l(self):
@@ -118,7 +118,7 @@ def time_rolling_quantile_median(self):
def time_rolling_median(self):
(self.sr.rolling(self.wins).median())
- def time_rolling_median(self):
+ def time_rolling_mean(self):
(self.sr.rolling(self.wins).mean())
def time_rolling_max(self):
@@ -160,7 +160,7 @@ def time_rolling_quantile_median_l(self):
def time_rolling_median_l(self):
(self.sr.rolling(self.winl).median())
- def time_rolling_median_l(self):
+ def time_rolling_mean_l(self):
(self.sr.rolling(self.winl).mean())
def time_rolling_max_l(self):
Fixes the names of duplicated benchmark methods to `rolling_mean` instead of `rolling_median`, following a43c1576ce3d94bc82f7cdd63531280ced5a9fa0.
I'm a dev on lgtm.com so I'm very biased but these were flagged [here](https://lgtm.com/projects/g/pydata/pandas/rev/a43c1576ce3d94bc82f7cdd63531280ced5a9fa0) and you can catch such issues (and more subtle ones) before they find their way into master by giving a go at [PR integration](https://lgtm.com/docs/lgtm/config/pr-integration) :)
Hope that helps!
| https://api.github.com/repos/pandas-dev/pandas/pulls/16881 | 2017-07-11T11:08:28Z | 2017-07-11T16:40:21Z | 2017-07-11T16:40:21Z | 2017-07-11T16:41:17Z |
COMPAT with dateutil 2.6.1, fixed ambiguous tz dst behavior | diff --git a/ci/requirements-3.5.run b/ci/requirements-3.5.run
index 43e6814ed6c8e..52828b5220997 100644
--- a/ci/requirements-3.5.run
+++ b/ci/requirements-3.5.run
@@ -1,4 +1,3 @@
-python-dateutil
pytz
numpy=1.11.3
openpyxl
diff --git a/ci/requirements-3.5.sh b/ci/requirements-3.5.sh
index d0f0b81802dc6..917439a8765a2 100644
--- a/ci/requirements-3.5.sh
+++ b/ci/requirements-3.5.sh
@@ -5,3 +5,7 @@ source activate pandas
echo "install 35"
conda install -n pandas -c conda-forge feather-format
+
+# pip install python-dateutil to get latest
+conda remove -n pandas python-dateutil --force
+pip install python-dateutil
diff --git a/ci/requirements-3.6_NUMPY_DEV.run b/ci/requirements-3.6_NUMPY_DEV.run
index 0aa987baefb1d..af44f198c687e 100644
--- a/ci/requirements-3.6_NUMPY_DEV.run
+++ b/ci/requirements-3.6_NUMPY_DEV.run
@@ -1,2 +1 @@
-python-dateutil
pytz
diff --git a/pandas/tests/tseries/test_offsets.py b/pandas/tests/tseries/test_offsets.py
index 47b15a2b66fc4..e03b3e0a85e5e 100644
--- a/pandas/tests/tseries/test_offsets.py
+++ b/pandas/tests/tseries/test_offsets.py
@@ -4844,7 +4844,7 @@ def test_fallback_plural(self):
hrs_pre = utc_offsets['utc_offset_daylight']
hrs_post = utc_offsets['utc_offset_standard']
- if dateutil.__version__ != LooseVersion('2.6.0'):
+ if dateutil.__version__ < LooseVersion('2.6.0'):
# buggy ambiguous behavior in 2.6.0
# GH 14621
# https://github.com/dateutil/dateutil/issues/321
@@ -4852,6 +4852,9 @@ def test_fallback_plural(self):
n=3, tstart=self._make_timestamp(self.ts_pre_fallback,
hrs_pre, tz),
expected_utc_offset=hrs_post)
+ elif dateutil.__version__ > LooseVersion('2.6.0'):
+ # fixed, but skip the test
+ continue
def test_springforward_plural(self):
# test moving from standard to daylight savings
diff --git a/pandas/tests/tseries/test_timezones.py b/pandas/tests/tseries/test_timezones.py
index de6978d52968b..c034a9c60ef1b 100644
--- a/pandas/tests/tseries/test_timezones.py
+++ b/pandas/tests/tseries/test_timezones.py
@@ -552,8 +552,16 @@ def f():
tz=tz, ambiguous='infer')
assert times[0] == Timestamp('2013-10-26 23:00', tz=tz, freq="H")
- if dateutil.__version__ != LooseVersion('2.6.0'):
- # see gh-14621
+ if str(tz).startswith('dateutil'):
+ if dateutil.__version__ < LooseVersion('2.6.0'):
+ # see gh-14621
+ assert times[-1] == Timestamp('2013-10-27 01:00:00+0000',
+ tz=tz, freq="H")
+ elif dateutil.__version__ > LooseVersion('2.6.0'):
+ # fixed ambiguous behavior
+ assert times[-1] == Timestamp('2013-10-27 01:00:00+0100',
+ tz=tz, freq="H")
+ else:
assert times[-1] == Timestamp('2013-10-27 01:00:00+0000',
tz=tz, freq="H")
@@ -1233,13 +1241,18 @@ def test_ambiguous_compat(self):
assert result_pytz.value == result_dateutil.value
assert result_pytz.value == 1382835600000000000
- # dateutil 2.6 buggy w.r.t. ambiguous=0
- if dateutil.__version__ != LooseVersion('2.6.0'):
+ if dateutil.__version__ < LooseVersion('2.6.0'):
+ # dateutil 2.6 buggy w.r.t. ambiguous=0
# see gh-14621
# see https://github.com/dateutil/dateutil/issues/321
assert (result_pytz.to_pydatetime().tzname() ==
result_dateutil.to_pydatetime().tzname())
assert str(result_pytz) == str(result_dateutil)
+ elif dateutil.__version__ > LooseVersion('2.6.0'):
+ # fixed ambiguous behavior
+ assert result_pytz.to_pydatetime().tzname() == 'GMT'
+ assert result_dateutil.to_pydatetime().tzname() == 'BST'
+ assert str(result_pytz) != str(result_dateutil)
# 1 hour difference
result_pytz = (Timestamp('2013-10-27 01:00:00')
| https://api.github.com/repos/pandas-dev/pandas/pulls/16880 | 2017-07-11T10:32:16Z | 2017-07-11T16:39:40Z | 2017-07-11T16:39:40Z | 2017-07-11T16:40:43Z | |
BUG: Fix css for displaying DataFrames in notebook #16792 | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index c808babeee5d9..faf66afb16f1e 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -520,6 +520,7 @@ I/O
- Bug in :func:`read_stata` where the index was not set (:issue:`16342`)
- Bug in :func:`read_html` where import check fails when run in multiple threads (:issue:`16928`)
- Bug in :func:`read_csv` where automatic delimiter detection caused a ``TypeError`` to be thrown when a bad line was encountered rather than the correct error message (:issue:`13374`)
+- Bug in ``DataFrame.to_html()`` with ``notebook=True`` where DataFrames with named indices or non-MultiIndex indices had undesired horizontal or vertical alignment for column or row labels, respectively (:issue:`16792`)
Plotting
^^^^^^^^
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 6a98497aa1bfe..547b9676717c9 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1132,20 +1132,42 @@ def write_tr(self, line, indent=0, indent_delta=4, header=False,
self.write('</tr>', indent)
def write_style(self):
- template = dedent("""\
- <style>
- .dataframe thead tr:only-child th {
- text-align: right;
- }
-
- .dataframe thead th {
- text-align: left;
- }
-
- .dataframe tbody tr th {
- vertical-align: top;
- }
- </style>""")
+ # We use the "scoped" attribute here so that the desired
+ # style properties for the data frame are not then applied
+ # throughout the entire notebook.
+ template_first = """\
+ <style scoped>"""
+ template_last = """\
+ </style>"""
+ template_select = """\
+ .dataframe %s {
+ %s: %s;
+ }"""
+ element_props = [('tbody tr th:only-of-type',
+ 'vertical-align',
+ 'middle'),
+ ('tbody tr th',
+ 'vertical-align',
+ 'top')]
+ if isinstance(self.columns, MultiIndex):
+ element_props.append(('thead tr th',
+ 'text-align',
+ 'left'))
+ if all((self.fmt.has_index_names,
+ self.fmt.index,
+ self.fmt.show_index_names)):
+ element_props.append(('thead tr:last-of-type th',
+ 'text-align',
+ 'right'))
+ else:
+ element_props.append(('thead th',
+ 'text-align',
+ 'right'))
+ template_mid = '\n\n'.join(map(lambda t: template_select % t,
+ element_props))
+ template = dedent('\n'.join((template_first,
+ template_mid,
+ template_last)))
if self.notebook:
self.write(template)
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 1e174c34221d5..194b5ba3e0276 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -1868,12 +1868,16 @@ def test_to_html_no_index_max_rows(self):
def test_to_html_notebook_has_style(self):
df = pd.DataFrame({"A": [1, 2, 3]})
result = df.to_html(notebook=True)
- assert "thead tr:only-child" in result
+ assert "tbody tr th:only-of-type" in result
+ assert "vertical-align: middle;" in result
+ assert "thead th" in result
def test_to_html_notebook_has_no_style(self):
df = pd.DataFrame({"A": [1, 2, 3]})
result = df.to_html()
- assert "thead tr:only-child" not in result
+ assert "tbody tr th:only-of-type" not in result
+ assert "vertical-align: middle;" not in result
+ assert "thead th" not in result
def test_to_html_with_index_names_false(self):
# gh-16493
| - [ ] closes #16792
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16879 | 2017-07-10T22:14:57Z | 2017-09-22T07:19:40Z | 2017-09-22T07:19:39Z | 2023-03-29T14:47:51Z |
Confirm that `select` was *not* clearer in 0.12 | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 7d1a8adf381fe..5722539b87aec 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2098,7 +2098,6 @@ def xs(self, key, axis=0, level=None, drop_level=True):
_xs = xs
- # TODO: Check if this was clearer in 0.12
def select(self, crit, axis=0):
"""
Return data corresponding to axis labels matching criteria
| This just deletes a comment `# TODO: Check if this was clearer in 0.12`.
Checking, the answer is "no". It was nearly identical. The docstring was exactly identical. The only change in the code from 0.12.0 is using `_get_axis_number` in the newer version.
This is based on comparison against 0.12.0 found at https://pypi.python.org/pypi/pandas/0.12.0/#downloads
| https://api.github.com/repos/pandas-dev/pandas/pulls/16878 | 2017-07-10T21:41:48Z | 2017-07-11T10:01:12Z | 2017-07-11T10:01:12Z | 2017-08-03T00:07:10Z |
COMPAT: moar 32-bit compat for testing of indexers | diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index 9dc2cfdecb98f..14f344acbefb2 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -393,7 +393,7 @@ def test_reindex_dtype(self):
res, indexer = c.reindex(['a', 'c'])
tm.assert_index_equal(res, Index(['a', 'a', 'c']), exact=True)
tm.assert_numpy_array_equal(indexer,
- np.array([0, 3, 2], dtype=np.int64))
+ np.array([0, 3, 2], dtype=np.intp))
c = CategoricalIndex(['a', 'b', 'c', 'a'])
res, indexer = c.reindex(Categorical(['a', 'c']))
| xref #16826 | https://api.github.com/repos/pandas-dev/pandas/pulls/16869 | 2017-07-10T10:06:54Z | 2017-07-10T10:36:32Z | 2017-07-10T10:36:32Z | 2017-07-10T10:37:52Z |
MAINT: Drop the get_offset_name method | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index d5cc3d6ddca8e..43bfebd0c2e59 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -126,6 +126,7 @@ Removal of prior version deprecations/changes
- ``Index`` has dropped the ``.sym_diff()`` method in favor of ``.symmetric_difference()`` (:issue:`12591`)
- ``Categorical`` has dropped the ``.order()`` and ``.sort()`` methods in favor of ``.sort_values()`` (:issue:`12882`)
- :func:`eval` and :method:`DataFrame.eval` have changed the default of ``inplace`` from ``None`` to ``False`` (:issue:`11149`)
+- The function ``get_offset_name`` has been dropped in favor of the ``.freqstr`` attribute for an offset (:issue:`11834`)
.. _whatsnew_0210.performance:
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 8640f106a048a..c5f6c00a4005a 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -637,20 +637,6 @@ def get_offset(name):
getOffset = get_offset
-def get_offset_name(offset):
- """
- Return rule name associated with a DateOffset object
-
- Examples
- --------
- get_offset_name(BMonthEnd(1)) --> 'EOM'
- """
-
- msg = "get_offset_name(offset) is deprecated. Use offset.freqstr instead"
- warnings.warn(msg, FutureWarning, stacklevel=2)
- return offset.freqstr
-
-
def get_standard_freq(freq):
"""
Return the standardized frequency string
| Deprecated since 0.18.0
xref #11834 ( :-1: for no test being added for its deprecation back then! :smile:)
| https://api.github.com/repos/pandas-dev/pandas/pulls/16863 | 2017-07-09T07:48:09Z | 2017-07-10T10:12:50Z | 2017-07-10T10:12:50Z | 2017-07-10T15:06:05Z |
DOC: Fix missing parentheses in documentation | diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index 61f43146aba85..937d682d238b3 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -933,7 +933,7 @@ The dimension of the returned result can also change:
d = pd.DataFrame({"a":["x", "y"], "b":[1,2]})
def identity(df):
- print df
+ print(df)
return df
d.groupby("a").apply(identity)
diff --git a/doc/source/io.rst b/doc/source/io.rst
index e1e82f686f182..9bf84e5419ffa 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -3194,7 +3194,7 @@ You can pass ``iterator=True`` to iterate over the unpacked results
.. ipython:: python
for o in pd.read_msgpack('foo.msg',iterator=True):
- print o
+ print(o)
You can pass ``append=True`` to the writer to append to an existing pack
@@ -3912,7 +3912,7 @@ chunks.
evens = [2,4,6,8,10]
coordinates = store.select_as_coordinates('dfeq','number=evens')
for c in chunks(coordinates, 2):
- print store.select('dfeq',where=c)
+ print(store.select('dfeq',where=c))
Advanced Queries
++++++++++++++++
diff --git a/doc/source/whatsnew/v0.13.0.txt b/doc/source/whatsnew/v0.13.0.txt
index 3347b05a5df37..f440be1ddd56e 100644
--- a/doc/source/whatsnew/v0.13.0.txt
+++ b/doc/source/whatsnew/v0.13.0.txt
@@ -790,7 +790,7 @@ Experimental
.. ipython:: python
for o in pd.read_msgpack('foo.msg',iterator=True):
- print o
+ print(o)
.. ipython:: python
:suppress:
| Add parentheses to fix `SyntaxError: Missing parentheses in call to 'print'` inside documentations.
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16862 | 2017-07-09T07:25:14Z | 2017-07-10T10:13:50Z | 2017-07-10T10:13:49Z | 2017-07-10T10:13:52Z |
COMPAT: moar 32-bit compat for testing of indexers | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index cefb080a3ee78..e1053c1610175 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2704,7 +2704,7 @@ def get_indexer_non_unique(self, target):
tgt_values = target._values
indexer, missing = self._engine.get_indexer_non_unique(tgt_values)
- return indexer, missing
+ return _ensure_platform_int(indexer), missing
def get_indexer_for(self, target, **kwargs):
"""
diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index 493274fff43e0..9dc2cfdecb98f 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -401,7 +401,7 @@ def test_reindex_dtype(self):
exp = CategoricalIndex(['a', 'a', 'c'], categories=['a', 'c'])
tm.assert_index_equal(res, exp, exact=True)
tm.assert_numpy_array_equal(indexer,
- np.array([0, 3, 2], dtype=np.int64))
+ np.array([0, 3, 2], dtype=np.intp))
c = CategoricalIndex(['a', 'b', 'c', 'a'],
categories=['a', 'b', 'c', 'd'])
@@ -409,7 +409,7 @@ def test_reindex_dtype(self):
exp = Index(['a', 'a', 'c'], dtype='object')
tm.assert_index_equal(res, exp, exact=True)
tm.assert_numpy_array_equal(indexer,
- np.array([0, 3, 2], dtype=np.int64))
+ np.array([0, 3, 2], dtype=np.intp))
c = CategoricalIndex(['a', 'b', 'c', 'a'],
categories=['a', 'b', 'c', 'd'])
@@ -417,7 +417,7 @@ def test_reindex_dtype(self):
exp = CategoricalIndex(['a', 'a', 'c'], categories=['a', 'c'])
tm.assert_index_equal(res, exp, exact=True)
tm.assert_numpy_array_equal(indexer,
- np.array([0, 3, 2], dtype=np.int64))
+ np.array([0, 3, 2], dtype=np.intp))
def test_duplicates(self):
| xref #16826
| https://api.github.com/repos/pandas-dev/pandas/pulls/16861 | 2017-07-08T15:13:22Z | 2017-07-08T16:08:31Z | 2017-07-08T16:08:31Z | 2017-07-08T16:09:27Z |
ENH - Modify Dataframe.select_dtypes to accept scalar values | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 134cc5106015b..d8b1602fb104d 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -2229,7 +2229,3 @@ All numpy dtypes are subclasses of ``numpy.generic``:
Pandas also defines the types ``category``, and ``datetime64[ns, tz]``, which are not integrated into the normal
numpy hierarchy and wont show up with the above function.
-
-.. note::
-
- The ``include`` and ``exclude`` parameters must be non-string sequences.
diff --git a/doc/source/style.ipynb b/doc/source/style.ipynb
index 4eeda491426b1..c250787785e14 100644
--- a/doc/source/style.ipynb
+++ b/doc/source/style.ipynb
@@ -935,7 +935,7 @@
"\n",
"<span style=\"color: red\">*Experimental: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.*</span>\n",
"\n",
- "Some support is available for exporting styled `DataFrames` to Excel worksheets using the `OpenPyXL` engine. CSS2.2 properties handled include:\n",
+ "Some support is available for exporting styled `DataFrames` to Excel worksheets using the `OpenPyXL` engine. CSS2.2 properties handled include:\n",
"\n",
"- `background-color`\n",
"- `border-style`, `border-width`, `border-color` and their {`top`, `right`, `bottom`, `left` variants}\n",
diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index d5cc3d6ddca8e..6968bbebc836c 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -39,6 +39,7 @@ Other Enhancements
- :func:`read_feather` has gained the ``nthreads`` parameter for multi-threaded operations (:issue:`16359`)
- :func:`DataFrame.clip()` and :func:`Series.clip()` have gained an ``inplace`` argument. (:issue:`15388`)
- :func:`crosstab` has gained a ``margins_name`` parameter to define the name of the row / column that will contain the totals when ``margins=True``. (:issue:`15972`)
+- :func:`Dataframe.select_dtypes` now accepts scalar values for include/exclude as well as list-like. (:issue:`16855`)
.. _whatsnew_0210.api_breaking:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 80cdebc24c39d..6559fc4c24ce2 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2285,9 +2285,9 @@ def select_dtypes(self, include=None, exclude=None):
Parameters
----------
- include, exclude : list-like
- A list of dtypes or strings to be included/excluded. You must pass
- in a non-empty sequence for at least one of these.
+ include, exclude : scalar or list-like
+ A selection of dtypes or strings to be included/excluded. At least
+ one of these parameters must be supplied.
Raises
------
@@ -2295,8 +2295,6 @@ def select_dtypes(self, include=None, exclude=None):
* If both of ``include`` and ``exclude`` are empty
* If ``include`` and ``exclude`` have overlapping elements
* If any kind of string dtype is passed in.
- TypeError
- * If either of ``include`` or ``exclude`` is not a sequence
Returns
-------
@@ -2331,6 +2329,14 @@ def select_dtypes(self, include=None, exclude=None):
3 0.0764 False 2
4 -0.9703 True 1
5 -1.2094 False 2
+ >>> df.select_dtypes(include='bool')
+ c
+ 0 True
+ 1 False
+ 2 True
+ 3 False
+ 4 True
+ 5 False
>>> df.select_dtypes(include=['float64'])
c
0 1
@@ -2348,10 +2354,12 @@ def select_dtypes(self, include=None, exclude=None):
4 True
5 False
"""
- include, exclude = include or (), exclude or ()
- if not (is_list_like(include) and is_list_like(exclude)):
- raise TypeError('include and exclude must both be non-string'
- ' sequences')
+
+ if not is_list_like(include):
+ include = (include,) if include is not None else ()
+ if not is_list_like(exclude):
+ exclude = (exclude,) if exclude is not None else ()
+
selection = tuple(map(frozenset, (include, exclude)))
if not any(selection):
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 335b76ff2aade..065580d56a683 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -104,7 +104,7 @@ def test_dtypes_are_correct_after_column_slice(self):
('b', np.float_),
('c', np.float_)])))
- def test_select_dtypes_include(self):
+ def test_select_dtypes_include_using_list_like(self):
df = DataFrame({'a': list('abc'),
'b': list(range(1, 4)),
'c': np.arange(3, 6).astype('u1'),
@@ -145,14 +145,10 @@ def test_select_dtypes_include(self):
ei = df[['h', 'i']]
assert_frame_equal(ri, ei)
- ri = df.select_dtypes(include=['timedelta'])
- ei = df[['k']]
- assert_frame_equal(ri, ei)
-
pytest.raises(NotImplementedError,
lambda: df.select_dtypes(include=['period']))
- def test_select_dtypes_exclude(self):
+ def test_select_dtypes_exclude_using_list_like(self):
df = DataFrame({'a': list('abc'),
'b': list(range(1, 4)),
'c': np.arange(3, 6).astype('u1'),
@@ -162,7 +158,7 @@ def test_select_dtypes_exclude(self):
ee = df[['a', 'e']]
assert_frame_equal(re, ee)
- def test_select_dtypes_exclude_include(self):
+ def test_select_dtypes_exclude_include_using_list_like(self):
df = DataFrame({'a': list('abc'),
'b': list(range(1, 4)),
'c': np.arange(3, 6).astype('u1'),
@@ -181,6 +177,114 @@ def test_select_dtypes_exclude_include(self):
e = df[['b', 'e']]
assert_frame_equal(r, e)
+ def test_select_dtypes_include_using_scalars(self):
+ df = DataFrame({'a': list('abc'),
+ 'b': list(range(1, 4)),
+ 'c': np.arange(3, 6).astype('u1'),
+ 'd': np.arange(4.0, 7.0, dtype='float64'),
+ 'e': [True, False, True],
+ 'f': pd.Categorical(list('abc')),
+ 'g': pd.date_range('20130101', periods=3),
+ 'h': pd.date_range('20130101', periods=3,
+ tz='US/Eastern'),
+ 'i': pd.date_range('20130101', periods=3,
+ tz='CET'),
+ 'j': pd.period_range('2013-01', periods=3,
+ freq='M'),
+ 'k': pd.timedelta_range('1 day', periods=3)})
+
+ ri = df.select_dtypes(include=np.number)
+ ei = df[['b', 'c', 'd', 'k']]
+ assert_frame_equal(ri, ei)
+
+ ri = df.select_dtypes(include='datetime')
+ ei = df[['g']]
+ assert_frame_equal(ri, ei)
+
+ ri = df.select_dtypes(include='datetime64')
+ ei = df[['g']]
+ assert_frame_equal(ri, ei)
+
+ ri = df.select_dtypes(include='category')
+ ei = df[['f']]
+ assert_frame_equal(ri, ei)
+
+ pytest.raises(NotImplementedError,
+ lambda: df.select_dtypes(include='period'))
+
+ def test_select_dtypes_exclude_using_scalars(self):
+ df = DataFrame({'a': list('abc'),
+ 'b': list(range(1, 4)),
+ 'c': np.arange(3, 6).astype('u1'),
+ 'd': np.arange(4.0, 7.0, dtype='float64'),
+ 'e': [True, False, True],
+ 'f': pd.Categorical(list('abc')),
+ 'g': pd.date_range('20130101', periods=3),
+ 'h': pd.date_range('20130101', periods=3,
+ tz='US/Eastern'),
+ 'i': pd.date_range('20130101', periods=3,
+ tz='CET'),
+ 'j': pd.period_range('2013-01', periods=3,
+ freq='M'),
+ 'k': pd.timedelta_range('1 day', periods=3)})
+
+ ri = df.select_dtypes(exclude=np.number)
+ ei = df[['a', 'e', 'f', 'g', 'h', 'i', 'j']]
+ assert_frame_equal(ri, ei)
+
+ ri = df.select_dtypes(exclude='category')
+ ei = df[['a', 'b', 'c', 'd', 'e', 'g', 'h', 'i', 'j', 'k']]
+ assert_frame_equal(ri, ei)
+
+ pytest.raises(NotImplementedError,
+ lambda: df.select_dtypes(exclude='period'))
+
+ def test_select_dtypes_include_exclude_using_scalars(self):
+ df = DataFrame({'a': list('abc'),
+ 'b': list(range(1, 4)),
+ 'c': np.arange(3, 6).astype('u1'),
+ 'd': np.arange(4.0, 7.0, dtype='float64'),
+ 'e': [True, False, True],
+ 'f': pd.Categorical(list('abc')),
+ 'g': pd.date_range('20130101', periods=3),
+ 'h': pd.date_range('20130101', periods=3,
+ tz='US/Eastern'),
+ 'i': pd.date_range('20130101', periods=3,
+ tz='CET'),
+ 'j': pd.period_range('2013-01', periods=3,
+ freq='M'),
+ 'k': pd.timedelta_range('1 day', periods=3)})
+
+ ri = df.select_dtypes(include=np.number, exclude='floating')
+ ei = df[['b', 'c', 'k']]
+ assert_frame_equal(ri, ei)
+
+ def test_select_dtypes_include_exclude_mixed_scalars_lists(self):
+ df = DataFrame({'a': list('abc'),
+ 'b': list(range(1, 4)),
+ 'c': np.arange(3, 6).astype('u1'),
+ 'd': np.arange(4.0, 7.0, dtype='float64'),
+ 'e': [True, False, True],
+ 'f': pd.Categorical(list('abc')),
+ 'g': pd.date_range('20130101', periods=3),
+ 'h': pd.date_range('20130101', periods=3,
+ tz='US/Eastern'),
+ 'i': pd.date_range('20130101', periods=3,
+ tz='CET'),
+ 'j': pd.period_range('2013-01', periods=3,
+ freq='M'),
+ 'k': pd.timedelta_range('1 day', periods=3)})
+
+ ri = df.select_dtypes(include=np.number,
+ exclude=['floating', 'timedelta'])
+ ei = df[['b', 'c']]
+ assert_frame_equal(ri, ei)
+
+ ri = df.select_dtypes(include=[np.number, 'category'],
+ exclude='floating')
+ ei = df[['b', 'c', 'f', 'k']]
+ assert_frame_equal(ri, ei)
+
def test_select_dtypes_not_an_attr_but_still_valid_dtype(self):
df = DataFrame({'a': list('abc'),
'b': list(range(1, 4)),
@@ -205,18 +309,6 @@ def test_select_dtypes_empty(self):
'must be nonempty'):
df.select_dtypes()
- def test_select_dtypes_raises_on_string(self):
- df = DataFrame({'a': list('abc'), 'b': list(range(1, 4))})
- with tm.assert_raises_regex(TypeError, 'include and exclude '
- '.+ non-'):
- df.select_dtypes(include='object')
- with tm.assert_raises_regex(TypeError, 'include and exclude '
- '.+ non-'):
- df.select_dtypes(exclude='object')
- with tm.assert_raises_regex(TypeError, 'include and exclude '
- '.+ non-'):
- df.select_dtypes(include=int, exclude='object')
-
def test_select_dtypes_bad_datetime64(self):
df = DataFrame({'a': list('abc'),
'b': list(range(1, 4)),
This commit relates to GH16855. It allows the DataFrame.select_dtypes
function to accept scalar values as well as list-like values. As such,
it should maintain backwards compatibility.
- [x] closes #16855
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [x] whatsnew entry
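A minimal sketch of the behavior this PR enables (column names follow the PR's own tests; the equivalence claim is this PR's stated intent, not an exhaustive check):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': list('abc'),           # object dtype
                   'b': list(range(1, 4)),     # integer dtype
                   'd': np.arange(4.0, 7.0)})  # float dtype

# Scalars are now accepted where previously only list-likes were allowed:
ri = df.select_dtypes(include=np.number, exclude='floating')

# ...and remain equivalent to the older list-like spelling:
same = df.select_dtypes(include=[np.number], exclude=['floating'])
```

Both calls keep only the non-floating numeric column ``b``.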
| https://api.github.com/repos/pandas-dev/pandas/pulls/16860 | 2017-07-08T14:52:02Z | 2017-07-10T10:36:01Z | 2017-07-10T10:36:01Z | 2017-07-10T10:38:02Z |
Bug: groupby multiindex levels equals rows | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index c5fe89282bf52..944f88d257496 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -361,7 +361,7 @@ Groupby/Resample/Rolling
- Bug in ``groupby.transform()`` that would coerce boolean dtypes back to float (:issue:`16875`)
- Bug in ``Series.resample(...).apply()`` where an empty ``Series`` modified the source index and did not return the name of a ``Series`` (:issue:`14313`)
- Bug in ``.rolling(...).apply(...)`` with a ``DataFrame`` with a ``DatetimeIndex``, a ``window`` of a timedelta-convertible and ``min_periods >= 1` (:issue:`15305`)
-
+- Bug in ``DataFrame.groupby`` where index and column keys were not recognized correctly when the number of keys equaled the number of elements on the groupby axis (:issue:`16859`)
Sparse
^^^^^^
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index a388892e925b6..e73852ec3d973 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -2629,13 +2629,14 @@ def _get_grouper(obj, key=None, axis=0, level=None, sort=True,
try:
if isinstance(obj, DataFrame):
- all_in_columns = all(g in obj.columns for g in keys)
+ all_in_columns_index = all(g in obj.columns or g in obj.index.names
+ for g in keys)
else:
- all_in_columns = False
+ all_in_columns_index = False
except Exception:
- all_in_columns = False
+ all_in_columns_index = False
- if not any_callable and not all_in_columns and \
+ if not any_callable and not all_in_columns_index and \
not any_arraylike and not any_groupers and \
match_axis_length and level is None:
keys = [com._asarray_tuplesafe(keys)]
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index f9e1a0d2e744a..8957beacab376 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -3891,6 +3891,19 @@ def predictions(tool):
result = df2.groupby('Key').apply(predictions).p1
tm.assert_series_equal(expected, result)
+ def test_gb_key_len_equal_axis_len(self):
+ # GH16843
+ # test ensures that index and column keys are recognized correctly
+ # when number of keys equals axis length of groupby
+ df = pd.DataFrame([['foo', 'bar', 'B', 1],
+ ['foo', 'bar', 'B', 2],
+ ['foo', 'baz', 'C', 3]],
+ columns=['first', 'second', 'third', 'one'])
+ df = df.set_index(['first', 'second'])
+ df = df.groupby(['first', 'second', 'third']).size()
+ assert df.loc[('foo', 'bar', 'B')] == 2
+ assert df.loc[('foo', 'baz', 'C')] == 1
+
def _check_groupby(df, result, keys, field, f=lambda x: x.sum()):
tups = lmap(tuple, df[keys].values)
| Fix for GH16843, where groupby gives the wrong result when the number of
grouped-by index and column keys equals the length of the axis
- [x] closes #16843
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [ ] whatsnew entry
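The regression can be checked with the reproduction from the test added in this PR:

```python
import pandas as pd

# Two keys come from the index, one from the columns; with three keys and
# three rows, the key count equals the groupby axis length (GH16843).
df = pd.DataFrame([['foo', 'bar', 'B', 1],
                   ['foo', 'bar', 'B', 2],
                   ['foo', 'baz', 'C', 3]],
                  columns=['first', 'second', 'third', 'one'])
df = df.set_index(['first', 'second'])

# Before the fix, the mixed index/column keys were not recognized and the
# keys were wrongly coerced into a single tuple-safe array.
sizes = df.groupby(['first', 'second', 'third']).size()
```

The expected group sizes are 2 for ``('foo', 'bar', 'B')`` and 1 for ``('foo', 'baz', 'C')``.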
| https://api.github.com/repos/pandas-dev/pandas/pulls/16859 | 2017-07-07T22:49:52Z | 2017-08-24T10:38:28Z | 2017-08-24T10:38:28Z | 2017-08-25T08:35:55Z |
BUG: Series.isin fails on categoricals | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index d5cc3d6ddca8e..c6046aac831b1 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -188,7 +188,7 @@ Numeric
Categorical
^^^^^^^^^^^
-
+- Bug in :func:`Series.isin` when called with a categorical (:issue:`16639`)
Other
^^^^^
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index d74c5e66ea1a9..b490bf787a037 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -38,7 +38,6 @@
# --------------- #
# dtype access #
# --------------- #
-
def _ensure_data(values, dtype=None):
"""
routine to ensure that our data is of the correct
@@ -113,7 +112,8 @@ def _ensure_data(values, dtype=None):
return values.asi8, dtype, 'int64'
- elif is_categorical_dtype(values) or is_categorical_dtype(dtype):
+ elif (is_categorical_dtype(values) and
+ (is_categorical_dtype(dtype) or dtype is None)):
values = getattr(values, 'values', values)
values = values.codes
dtype = 'category'
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 063dcea5c76d6..9504d2a9426f0 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -586,6 +586,16 @@ def test_large(self):
expected[1] = True
tm.assert_numpy_array_equal(result, expected)
+ def test_categorical_from_codes(self):
+ # GH 16639
+ vals = np.array([0, 1, 2, 0])
+ cats = ['a', 'b', 'c']
+ Sd = pd.Series(pd.Categorical(1).from_codes(vals, cats))
+ St = pd.Series(pd.Categorical(1).from_codes(np.array([0, 1]), cats))
+ expected = np.array([True, True, False, True])
+ result = algos.isin(Sd, St)
+ tm.assert_numpy_array_equal(expected, result)
+
class TestValueCounts(object):
| - closes #16639
- [x] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [x] whatsnew entry
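The fixed behavior mirrors the regression test above, here via `Series.isin` (which routes through `algos.isin`) and using the `Categorical.from_codes` classmethod directly rather than the `pd.Categorical(1).from_codes` spelling in the test:

```python
import numpy as np
import pandas as pd

cats = ['a', 'b', 'c']
s = pd.Series(pd.Categorical.from_codes(np.array([0, 1, 2, 0]), cats))  # a, b, c, a
t = pd.Series(pd.Categorical.from_codes(np.array([0, 1]), cats))        # a, b

# Previously _ensure_data compared categorical codes against raw values;
# after the fix the categorical is handled correctly.
result = s.isin(t)
```

The expected elementwise membership is ``[True, True, False, True]``.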
| https://api.github.com/repos/pandas-dev/pandas/pulls/16858 | 2017-07-07T21:30:40Z | 2017-07-11T10:40:51Z | 2017-07-11T10:40:51Z | 2017-08-08T18:19:24Z |
TST/PKG: Move test HDF5 file to legacy | diff --git a/pandas/tests/io/data/periodindex_0.20.1_x86_64_darwin_2.7.13.h5 b/pandas/tests/io/data/legacy_hdf/periodindex_0.20.1_x86_64_darwin_2.7.13.h5
similarity index 100%
rename from pandas/tests/io/data/periodindex_0.20.1_x86_64_darwin_2.7.13.h5
rename to pandas/tests/io/data/legacy_hdf/periodindex_0.20.1_x86_64_darwin_2.7.13.h5
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index 69c92dd775b9a..c0d200560b477 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -5279,7 +5279,8 @@ def test_read_py2_hdf_file_in_py3(self):
['2015-01-01', '2015-01-02', '2015-01-05'], freq='B'))
with ensure_clean_store(
- tm.get_data_path('periodindex_0.20.1_x86_64_darwin_2.7.13.h5'),
+ tm.get_data_path(
+ 'legacy_hdf/periodindex_0.20.1_x86_64_darwin_2.7.13.h5'),
mode='r') as store:
result = store['p']
assert_frame_equal(result, expected)
| @jreback this failed `pd.test()` from a non-git build, since our setup.py only includes HDF5 files under the `legacy_hdf` directory.
```python
>>> pd.test(['-k', 'test_read_py2_hdf_file_in_py3'])
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/16856 | 2017-07-07T15:47:18Z | 2017-07-07T16:54:05Z | 2017-07-07T16:54:04Z | 2017-12-12T02:38:30Z |
DOC: Whatsnew updates | diff --git a/doc/source/whatsnew/v0.20.3.txt b/doc/source/whatsnew/v0.20.3.txt
index 644a3047ae7a9..582f975f81a7a 100644
--- a/doc/source/whatsnew/v0.20.3.txt
+++ b/doc/source/whatsnew/v0.20.3.txt
@@ -1,100 +1,60 @@
.. _whatsnew_0203:
-v0.20.3 (June ??, 2017)
+v0.20.3 (July 7, 2017)
-----------------------
-This is a minor bug-fix release in the 0.20.x series and includes some small regression fixes,
-bug fixes and performance improvements.
-We recommend that all users upgrade to this version.
+This is a minor bug-fix release in the 0.20.x series and includes some small regression fixes
+and bug fixes. We recommend that all users upgrade to this version.
.. contents:: What's new in v0.20.3
:local:
:backlinks: none
-
-.. _whatsnew_0203.enhancements:
-
-Enhancements
-~~~~~~~~~~~~
-
-
-
-
-
-
-.. _whatsnew_0203.performance:
-
-Performance Improvements
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-
-
-
-
-
.. _whatsnew_0203.bug_fixes:
Bug Fixes
~~~~~~~~~
-- Fixed issue with dataframe scatter plot for categorical data that reports incorrect column key not found when categorical data is used for plotting (:issue:`16199`)
-- Fixed issue with :meth:`DataFrame.style` where element id's were not unique (:issue:`16780`)
-- Fixed a pytest marker failing downstream packages' tests suites (:issue:`16680`)
-- Fixed compat with loading a ``DataFrame`` with a ``PeriodIndex``, from a ``format='fixed'`` HDFStore, in Python 3, that was written in Python 2 (:issue:`16781`)
-- Fixed a bug in failing to compute rolling computations of a column-MultiIndexed ``DataFrame`` (:issue:`16789`, :issue:`16825`)
-- Bug in a DataFrame/Series with a ``TimedeltaIndex`` when slice indexing (:issue:`16637`)
+- Fixed a bug in failing to compute rolling computations of a column-MultiIndexed ``DataFrame`` (:issue:`16789`, :issue:`16825`)
+- Fixed a pytest marker failing downstream packages' tests suites (:issue:`16680`)
Conversion
^^^^^^^^^^
- Bug in pickle compat prior to the v0.20.x series, when ``UTC`` is a timezone in a Series/DataFrame/Index (:issue:`16608`)
- Bug in ``Series`` construction when passing a ``Series`` with ``dtype='category'`` (:issue:`16524`).
-- Bug in ``DataFrame.astype()`` when passing a ``Series`` as the ``dtype`` kwarg. (:issue:`16717`).
+- Bug in :meth:`DataFrame.astype` when passing a ``Series`` as the ``dtype`` kwarg. (:issue:`16717`).
Indexing
^^^^^^^^
- Bug in ``Float64Index`` causing an empty array instead of ``None`` to be returned from ``.get(np.nan)`` on a Series whose index did not contain any ``NaN`` s (:issue:`8569`)
- Bug in ``MultiIndex.isin`` causing an error when passing an empty iterable (:issue:`16777`)
+- Fixed a bug in a slicing DataFrame/Series that have a ``TimedeltaIndex`` (:issue:`16637`)
I/O
^^^
- Bug in :func:`read_csv` in which files weren't opened as binary files by the C engine on Windows, causing EOF characters mid-field, which would fail (:issue:`16039`, :issue:`16559`, :issue:`16675`)
- Bug in :func:`read_hdf` in which reading a ``Series`` saved to an HDF file in 'fixed' format fails when an explicit ``mode='r'`` argument is supplied (:issue:`16583`)
-- Bug in :func:`DataFrame.to_latex` where ``bold_rows`` was wrongly specified to be ``True`` by default, whereas in reality row labels remained non-bold whatever parameter provided. (:issue:`16707`)
+- Bug in :meth:`DataFrame.to_latex` where ``bold_rows`` was wrongly specified to be ``True`` by default, whereas in reality row labels remained non-bold whatever parameter provided. (:issue:`16707`)
+- Fixed an issue with :meth:`DataFrame.style` where generated element ids were not unique (:issue:`16780`)
+- Fixed loading a ``DataFrame`` with a ``PeriodIndex``, from a ``format='fixed'`` HDFStore, in Python 3, that was written in Python 2 (:issue:`16781`)
Plotting
^^^^^^^^
-- Fix regression in series plotting that prevented RGB and RGBA tuples from being used as color arguments (:issue:`16233`)
-
-
-
-Groupby/Resample/Rolling
-^^^^^^^^^^^^^^^^^^^^^^^^
-
-
-Sparse
-^^^^^^
-
-
+- Fixed regression that prevented RGB and RGBA tuples from being used as color arguments (:issue:`16233`)
+- Fixed an issue with :meth:`DataFrame.plot.scatter` that incorrectly raised a ``KeyError`` when categorical data is used for plotting (:issue:`16199`)
Reshaping
^^^^^^^^^
+
- ``PeriodIndex`` / ``TimedeltaIndex.join`` was missing the ``sort=`` kwarg (:issue:`16541`)
- Bug in joining on a ``MultiIndex`` with a ``category`` dtype for a level (:issue:`16627`).
- Bug in :func:`merge` when merging/joining with multiple categorical columns (:issue:`16767`)
-
-Numeric
-^^^^^^^
-
-
Categorical
^^^^^^^^^^^
-- Bug in ``DataFrame.sort_values`` not respecting the ``kind`` with categorical data (:issue:`16793`)
-
-Other
-^^^^^
+- Bug in ``DataFrame.sort_values`` not respecting the ``kind`` parameter with categorical data (:issue:`16793`)
| [ci skip] | https://api.github.com/repos/pandas-dev/pandas/pulls/16853 | 2017-07-07T13:40:30Z | 2017-07-07T13:46:05Z | 2017-07-07T13:46:05Z | 2017-12-12T02:38:29Z |
TST/PKG: Remove definition of pandas.util.testing.slow | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 4b97fb83cb13b..4406df316f3f4 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -54,6 +54,7 @@ Backwards incompatible API changes
- :class:`pandas.HDFStore`'s string representation is now faster and less detailed. For the previous behavior, use ``pandas.HDFStore.info()``. (:issue:`16503`).
- Compression defaults in HDF stores now follow pytable standards. Default is no compression and if ``complib`` is missing and ``complevel`` > 0 ``zlib`` is used (:issue:`15943`)
- ``Index.get_indexer_non_unique()`` now returns a ndarray indexer rather than an ``Index``; this is consistent with ``Index.get_indexer()`` (:issue:`16819`)
+- Removed the ``@slow`` decorator from ``pandas.util.testing``, which caused issues for some downstream packages' test suites. Use ``@pytest.mark.slow`` instead, which achieves the same thing (:issue:`16850`)
.. _whatsnew_0210.api:
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 89ab4531877a4..6bdd1b51b0bef 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -30,7 +30,7 @@
import pandas.util.testing as tm
from pandas.util.testing import (assert_frame_equal, randbool,
assert_numpy_array_equal, assert_series_equal,
- assert_produces_warning, slow)
+ assert_produces_warning)
from pandas.compat import PY3, reduce
_series_frame_incompatible = _bool_ops_syms
@@ -144,7 +144,7 @@ def teardown_method(self, method):
del self.lhses, self.rhses, self.scalar_rhses, self.scalar_lhses
del self.pandas_rhses, self.pandas_lhses, self.current_engines
- @slow
+ @pytest.mark.slow
def test_complex_cmp_ops(self):
cmp_ops = ('!=', '==', '<=', '>=', '<', '>')
cmp2_ops = ('>', '<')
@@ -161,7 +161,7 @@ def test_simple_cmp_ops(self):
for lhs, rhs, cmp_op in product(bool_lhses, bool_rhses, self.cmp_ops):
self.check_simple_cmp_op(lhs, cmp_op, rhs)
- @slow
+ @pytest.mark.slow
def test_binary_arith_ops(self):
for lhs, op, rhs in product(self.lhses, self.arith_ops, self.rhses):
self.check_binary_arith_op(lhs, op, rhs)
@@ -181,17 +181,17 @@ def test_pow(self):
for lhs, rhs in product(self.lhses, self.rhses):
self.check_pow(lhs, '**', rhs)
- @slow
+ @pytest.mark.slow
def test_single_invert_op(self):
for lhs, op, rhs in product(self.lhses, self.cmp_ops, self.rhses):
self.check_single_invert_op(lhs, op, rhs)
- @slow
+ @pytest.mark.slow
def test_compound_invert_op(self):
for lhs, op, rhs in product(self.lhses, self.cmp_ops, self.rhses):
self.check_compound_invert_op(lhs, op, rhs)
- @slow
+ @pytest.mark.slow
def test_chained_cmp_op(self):
mids = self.lhses
cmp_ops = '<', '>'
@@ -870,7 +870,7 @@ def test_frame_comparison(self, engine, parser):
res = pd.eval('df < df3', engine=engine, parser=parser)
assert_frame_equal(res, df < df3)
- @slow
+ @pytest.mark.slow
def test_medium_complex_frame_alignment(self, engine, parser):
args = product(self.lhs_index_types, self.index_types,
self.index_types, self.index_types)
@@ -974,7 +974,7 @@ def test_series_frame_commutativity(self, engine, parser):
if engine == 'numexpr':
assert_frame_equal(a, b)
- @slow
+ @pytest.mark.slow
def test_complex_series_frame_alignment(self, engine, parser):
import random
args = product(self.lhs_index_types, self.index_types,
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index cc37f8cc3cb02..c317ad542659a 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -8,6 +8,7 @@
from numpy import nan
import numpy as np
+import pytest
from pandas import (DataFrame, compat, option_context)
from pandas.compat import StringIO, lrange, u
@@ -40,7 +41,7 @@ def test_repr_mixed(self):
foo = repr(self.mixed_frame) # noqa
self.mixed_frame.info(verbose=False, buf=buf)
- @tm.slow
+ @pytest.mark.slow
def test_repr_mixed_big(self):
# big mixed
biggie = DataFrame({'A': np.random.randn(200),
@@ -87,7 +88,7 @@ def test_repr_dimensions(self):
with option_context('display.show_dimensions', 'truncate'):
assert "2 rows x 2 columns" not in repr(df)
- @tm.slow
+ @pytest.mark.slow
def test_repr_big(self):
# big one
biggie = DataFrame(np.zeros((200, 4)), columns=lrange(4),
diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index 69bd2b008416f..6a4b1686a31e2 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -17,7 +17,7 @@
from pandas.util.testing import (assert_almost_equal,
assert_series_equal,
assert_frame_equal,
- ensure_clean, slow,
+ ensure_clean,
makeCustomDataframe as mkdf)
import pandas.util.testing as tm
@@ -205,7 +205,7 @@ def _check_df(df, cols=None):
cols = ['b', 'a']
_check_df(df, cols)
- @slow
+ @pytest.mark.slow
def test_to_csv_dtnat(self):
# GH3437
from pandas import NaT
@@ -236,7 +236,7 @@ def make_dtnat_arr(n, nnat=None):
assert_frame_equal(df, recons, check_names=False,
check_less_precise=True)
- @slow
+ @pytest.mark.slow
def test_to_csv_moar(self):
def _do_test(df, r_dtype=None, c_dtype=None,
@@ -728,7 +728,7 @@ def test_to_csv_chunking(self):
rs = read_csv(filename, index_col=0)
assert_frame_equal(rs, aa)
- @slow
+ @pytest.mark.slow
def test_to_csv_wide_frame_formatting(self):
# Issue #8621
df = DataFrame(np.random.randn(1, 100010), columns=None, index=None)
diff --git a/pandas/tests/indexing/test_indexing_slow.py b/pandas/tests/indexing/test_indexing_slow.py
index 08d390a6a213e..1b3fb18d9ff1d 100644
--- a/pandas/tests/indexing/test_indexing_slow.py
+++ b/pandas/tests/indexing/test_indexing_slow.py
@@ -6,11 +6,12 @@
import pandas as pd
from pandas.core.api import Series, DataFrame, MultiIndex
import pandas.util.testing as tm
+import pytest
class TestIndexingSlow(object):
- @tm.slow
+ @pytest.mark.slow
def test_multiindex_get_loc(self): # GH7724, GH2646
with warnings.catch_warnings(record=True):
@@ -80,7 +81,7 @@ def loop(mi, df, keys):
assert not mi.index.lexsort_depth < i
loop(mi, df, keys)
- @tm.slow
+ @pytest.mark.slow
def test_large_dataframe_indexing(self):
# GH10692
result = DataFrame({'x': range(10 ** 6)}, dtype='int64')
@@ -88,7 +89,7 @@ def test_large_dataframe_indexing(self):
expected = DataFrame({'x': range(10 ** 6 + 1)}, dtype='int64')
tm.assert_frame_equal(result, expected)
- @tm.slow
+ @pytest.mark.slow
def test_large_mi_dataframe_indexing(self):
# GH10645
result = MultiIndex.from_arrays([range(10 ** 6), range(10 ** 6)])
diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
index 4b4f44b44c163..584a6561b505b 100644
--- a/pandas/tests/io/parser/common.py
+++ b/pandas/tests/io/parser/common.py
@@ -664,7 +664,7 @@ def test_url(self):
tm.assert_frame_equal(url_table, local_table)
# TODO: ftp testing
- @tm.slow
+ @pytest.mark.slow
def test_file(self):
dirpath = tm.get_data_path()
localtable = os.path.join(dirpath, 'salaries.csv')
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index abe3757ec64f3..856e8d6466526 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -614,7 +614,7 @@ def test_read_from_s3_url(self):
local_table = self.get_exceldf('test1')
tm.assert_frame_equal(url_table, local_table)
- @tm.slow
+ @pytest.mark.slow
def test_read_from_file_url(self):
# FILE
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index 1e1d653cf94d1..4ef265dcd5113 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -130,7 +130,7 @@ def test_spam_url(self):
assert_framelist_equal(df1, df2)
- @tm.slow
+ @pytest.mark.slow
def test_banklist(self):
df1 = self.read_html(self.banklist_data, '.*Florida.*',
attrs={'id': 'table'})
@@ -292,7 +292,7 @@ def test_invalid_url(self):
except ValueError as e:
assert str(e) == 'No tables found'
- @tm.slow
+ @pytest.mark.slow
def test_file_url(self):
url = self.banklist_data
dfs = self.read_html(file_path_to_url(url), 'First',
@@ -301,7 +301,7 @@ def test_file_url(self):
for df in dfs:
assert isinstance(df, DataFrame)
- @tm.slow
+ @pytest.mark.slow
def test_invalid_table_attrs(self):
url = self.banklist_data
with tm.assert_raises_regex(ValueError, 'No tables found'):
@@ -312,39 +312,39 @@ def _bank_data(self, *args, **kwargs):
return self.read_html(self.banklist_data, 'Metcalf',
attrs={'id': 'table'}, *args, **kwargs)
- @tm.slow
+ @pytest.mark.slow
def test_multiindex_header(self):
df = self._bank_data(header=[0, 1])[0]
assert isinstance(df.columns, MultiIndex)
- @tm.slow
+ @pytest.mark.slow
def test_multiindex_index(self):
df = self._bank_data(index_col=[0, 1])[0]
assert isinstance(df.index, MultiIndex)
- @tm.slow
+ @pytest.mark.slow
def test_multiindex_header_index(self):
df = self._bank_data(header=[0, 1], index_col=[0, 1])[0]
assert isinstance(df.columns, MultiIndex)
assert isinstance(df.index, MultiIndex)
- @tm.slow
+ @pytest.mark.slow
def test_multiindex_header_skiprows_tuples(self):
df = self._bank_data(header=[0, 1], skiprows=1, tupleize_cols=True)[0]
assert isinstance(df.columns, Index)
- @tm.slow
+ @pytest.mark.slow
def test_multiindex_header_skiprows(self):
df = self._bank_data(header=[0, 1], skiprows=1)[0]
assert isinstance(df.columns, MultiIndex)
- @tm.slow
+ @pytest.mark.slow
def test_multiindex_header_index_skiprows(self):
df = self._bank_data(header=[0, 1], index_col=[0, 1], skiprows=1)[0]
assert isinstance(df.index, MultiIndex)
assert isinstance(df.columns, MultiIndex)
- @tm.slow
+ @pytest.mark.slow
def test_regex_idempotency(self):
url = self.banklist_data
dfs = self.read_html(file_path_to_url(url),
@@ -372,7 +372,7 @@ def test_python_docs_table(self):
zz = [df.iloc[0, 0][0:4] for df in dfs]
assert sorted(zz) == sorted(['Repo', 'What'])
- @tm.slow
+ @pytest.mark.slow
def test_thousands_macau_stats(self):
all_non_nan_table_index = -2
macau_data = os.path.join(DATA_PATH, 'macau.html')
@@ -382,7 +382,7 @@ def test_thousands_macau_stats(self):
assert not any(s.isnull().any() for _, s in df.iteritems())
- @tm.slow
+ @pytest.mark.slow
def test_thousands_macau_index_col(self):
all_non_nan_table_index = -2
macau_data = os.path.join(DATA_PATH, 'macau.html')
@@ -523,7 +523,7 @@ def test_nyse_wsj_commas_table(self):
assert df.shape[0] == nrows
tm.assert_index_equal(df.columns, columns)
- @tm.slow
+ @pytest.mark.slow
def test_banklist_header(self):
from pandas.io.html import _remove_whitespace
@@ -562,7 +562,7 @@ def try_remove_ws(x):
coerce=True)
tm.assert_frame_equal(converted, gtnew)
- @tm.slow
+ @pytest.mark.slow
def test_gold_canyon(self):
gc = 'Gold Canyon'
with open(self.banklist_data, 'r') as f:
@@ -855,7 +855,7 @@ def test_works_on_valid_markup(self):
assert isinstance(dfs, list)
assert isinstance(dfs[0], DataFrame)
- @tm.slow
+ @pytest.mark.slow
def test_fallback_success(self):
_skip_if_none_of(('bs4', 'html5lib'))
banklist_data = os.path.join(DATA_PATH, 'banklist.html')
@@ -898,7 +898,7 @@ def get_elements_from_file(url, element='table'):
return soup.find_all(element)
-@tm.slow
+@pytest.mark.slow
def test_bs4_finds_tables():
filepath = os.path.join(DATA_PATH, "spam.html")
with warnings.catch_warnings():
@@ -913,13 +913,13 @@ def get_lxml_elements(url, element):
return doc.xpath('.//{0}'.format(element))
-@tm.slow
+@pytest.mark.slow
def test_lxml_finds_tables():
filepath = os.path.join(DATA_PATH, "spam.html")
assert get_lxml_elements(filepath, 'table')
-@tm.slow
+@pytest.mark.slow
def test_lxml_finds_tbody():
filepath = os.path.join(DATA_PATH, "spam.html")
assert get_lxml_elements(filepath, 'tbody')
diff --git a/pandas/tests/plotting/test_boxplot_method.py b/pandas/tests/plotting/test_boxplot_method.py
index ce8fb7a57c912..8fe119d28644c 100644
--- a/pandas/tests/plotting/test_boxplot_method.py
+++ b/pandas/tests/plotting/test_boxplot_method.py
@@ -8,7 +8,6 @@
from pandas import Series, DataFrame, MultiIndex
from pandas.compat import range, lzip
import pandas.util.testing as tm
-from pandas.util.testing import slow
import numpy as np
from numpy import random
@@ -35,7 +34,7 @@ def _skip_if_mpl_14_or_dev_boxplot():
class TestDataFramePlots(TestPlotBase):
- @slow
+ @pytest.mark.slow
def test_boxplot_legacy(self):
df = DataFrame(randn(6, 4),
index=list(string.ascii_letters[:6]),
@@ -93,13 +92,13 @@ def test_boxplot_legacy(self):
lines = list(itertools.chain.from_iterable(d.values()))
assert len(ax.get_lines()) == len(lines)
- @slow
+ @pytest.mark.slow
def test_boxplot_return_type_none(self):
# GH 12216; return_type=None & by=None -> axes
result = self.hist_df.boxplot()
assert isinstance(result, self.plt.Axes)
- @slow
+ @pytest.mark.slow
def test_boxplot_return_type_legacy(self):
# API change in https://github.com/pandas-dev/pandas/pull/7096
import matplotlib as mpl # noqa
@@ -125,7 +124,7 @@ def test_boxplot_return_type_legacy(self):
result = df.boxplot(return_type='both')
self._check_box_return_type(result, 'both')
- @slow
+ @pytest.mark.slow
def test_boxplot_axis_limits(self):
def _check_ax_limits(col, ax):
@@ -153,14 +152,14 @@ def _check_ax_limits(col, ax):
assert age_ax._sharey == height_ax
assert dummy_ax._sharey is None
- @slow
+ @pytest.mark.slow
def test_boxplot_empty_column(self):
_skip_if_mpl_14_or_dev_boxplot()
df = DataFrame(np.random.randn(20, 4))
df.loc[:, 0] = np.nan
_check_plot_works(df.boxplot, return_type='axes')
- @slow
+ @pytest.mark.slow
def test_figsize(self):
df = DataFrame(np.random.rand(10, 5),
columns=['A', 'B', 'C', 'D', 'E'])
@@ -176,7 +175,7 @@ def test_fontsize(self):
class TestDataFrameGroupByPlots(TestPlotBase):
- @slow
+ @pytest.mark.slow
def test_boxplot_legacy(self):
grouped = self.hist_df.groupby(by='gender')
with tm.assert_produces_warning(UserWarning):
@@ -206,7 +205,7 @@ def test_boxplot_legacy(self):
return_type='axes')
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
- @slow
+ @pytest.mark.slow
def test_grouped_plot_fignums(self):
n = 10
weight = Series(np.random.normal(166, 20, size=n))
@@ -230,7 +229,7 @@ def test_grouped_plot_fignums(self):
res = df.groupby('gender').hist()
tm.close()
- @slow
+ @pytest.mark.slow
def test_grouped_box_return_type(self):
df = self.hist_df
@@ -267,7 +266,7 @@ def test_grouped_box_return_type(self):
returned = df2.boxplot(by='category', return_type=t)
self._check_box_return_type(returned, t, expected_keys=columns2)
- @slow
+ @pytest.mark.slow
def test_grouped_box_layout(self):
df = self.hist_df
@@ -341,7 +340,7 @@ def test_grouped_box_layout(self):
return_type='dict')
self._check_axes_shape(self.plt.gcf().axes, axes_num=3, layout=(1, 3))
- @slow
+ @pytest.mark.slow
def test_grouped_box_multiple_axes(self):
# GH 6970, GH 7069
df = self.hist_df
diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index 0cff365be3ec8..e9c7d806fd65d 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -14,7 +14,7 @@
from pandas.core.indexes.period import period_range, Period, PeriodIndex
from pandas.core.resample import DatetimeIndex
-from pandas.util.testing import assert_series_equal, ensure_clean, slow
+from pandas.util.testing import assert_series_equal, ensure_clean
import pandas.util.testing as tm
from pandas.tests.plotting.common import (TestPlotBase,
@@ -45,7 +45,7 @@ def setup_method(self, method):
def teardown_method(self, method):
tm.close()
- @slow
+ @pytest.mark.slow
def test_ts_plot_with_tz(self):
# GH2877
index = date_range('1/1/2011', periods=2, freq='H',
@@ -61,7 +61,7 @@ def test_fontsize_set_correctly(self):
for label in (ax.get_xticklabels() + ax.get_yticklabels()):
assert label.get_fontsize() == 2
- @slow
+ @pytest.mark.slow
def test_frame_inferred(self):
# inferred freq
idx = date_range('1/1/1987', freq='MS', periods=100)
@@ -99,7 +99,7 @@ def test_nonnumeric_exclude(self):
pytest.raises(TypeError, df['A'].plot)
- @slow
+ @pytest.mark.slow
def test_tsplot(self):
from pandas.tseries.plotting import tsplot
@@ -133,7 +133,7 @@ def test_both_style_and_color(self):
s = ts.reset_index(drop=True)
pytest.raises(ValueError, s.plot, style='b-', color='#000099')
- @slow
+ @pytest.mark.slow
def test_high_freq(self):
freaks = ['ms', 'us']
for freq in freaks:
@@ -151,7 +151,7 @@ def test_get_datevalue(self):
assert (get_datevalue('1/1/1987', 'D') ==
Period('1987-1-1', 'D').ordinal)
- @slow
+ @pytest.mark.slow
def test_ts_plot_format_coord(self):
def check_format_of_first_point(ax, expected_string):
first_line = ax.get_lines()[0]
@@ -185,28 +185,28 @@ def check_format_of_first_point(ax, expected_string):
tsplot(daily, self.plt.Axes.plot, ax=ax)
check_format_of_first_point(ax, 't = 2014-01-01 y = 1.000000')
- @slow
+ @pytest.mark.slow
def test_line_plot_period_series(self):
for s in self.period_ser:
_check_plot_works(s.plot, s.index.freq)
- @slow
+ @pytest.mark.slow
def test_line_plot_datetime_series(self):
for s in self.datetime_ser:
_check_plot_works(s.plot, s.index.freq.rule_code)
- @slow
+ @pytest.mark.slow
def test_line_plot_period_frame(self):
for df in self.period_df:
_check_plot_works(df.plot, df.index.freq)
- @slow
+ @pytest.mark.slow
def test_line_plot_datetime_frame(self):
for df in self.datetime_df:
freq = df.index.to_period(df.index.freq.rule_code).freq
_check_plot_works(df.plot, freq)
- @slow
+ @pytest.mark.slow
def test_line_plot_inferred_freq(self):
for ser in self.datetime_ser:
ser = Series(ser.values, Index(np.asarray(ser.index)))
@@ -223,7 +223,7 @@ def test_fake_inferred_business(self):
ts.plot(ax=ax)
assert not hasattr(ax, 'freq')
- @slow
+ @pytest.mark.slow
def test_plot_offset_freq(self):
ser = tm.makeTimeSeries()
_check_plot_works(ser.plot)
@@ -232,14 +232,14 @@ def test_plot_offset_freq(self):
ser = Series(np.random.randn(len(dr)), dr)
_check_plot_works(ser.plot)
- @slow
+ @pytest.mark.slow
def test_plot_multiple_inferred_freq(self):
dr = Index([datetime(2000, 1, 1), datetime(2000, 1, 6), datetime(
2000, 1, 11)])
ser = Series(np.random.randn(len(dr)), dr)
_check_plot_works(ser.plot)
- @slow
+ @pytest.mark.slow
def test_uhf(self):
import pandas.plotting._converter as conv
idx = date_range('2012-6-22 21:59:51.960928', freq='L', periods=500)
@@ -257,7 +257,7 @@ def test_uhf(self):
if len(rs):
assert xp == rs
- @slow
+ @pytest.mark.slow
def test_irreg_hf(self):
idx = date_range('2012-6-22 21:59:51', freq='S', periods=100)
df = DataFrame(np.random.randn(len(idx), 2), idx)
@@ -297,7 +297,7 @@ def test_business_freq(self):
idx = ax.get_lines()[0].get_xdata()
assert PeriodIndex(data=idx).freqstr == 'B'
- @slow
+ @pytest.mark.slow
def test_business_freq_convert(self):
n = tm.N
tm.N = 300
@@ -327,7 +327,7 @@ def test_dataframe(self):
idx = ax.get_lines()[0].get_xdata()
tm.assert_index_equal(bts.index.to_period(), PeriodIndex(idx))
- @slow
+ @pytest.mark.slow
def test_axis_limits(self):
def _test(ax):
@@ -384,7 +384,7 @@ def test_get_finder(self):
assert conv.get_finder('A') == conv._annual_finder
assert conv.get_finder('W') == conv._daily_finder
- @slow
+ @pytest.mark.slow
def test_finder_daily(self):
xp = Period('1999-1-1', freq='B').ordinal
day_lst = [10, 40, 252, 400, 950, 2750, 10000]
@@ -402,7 +402,7 @@ def test_finder_daily(self):
assert xp == rs
self.plt.close(ax.get_figure())
- @slow
+ @pytest.mark.slow
def test_finder_quarterly(self):
xp = Period('1988Q1').ordinal
yrs = [3.5, 11]
@@ -420,7 +420,7 @@ def test_finder_quarterly(self):
assert xp == rs
self.plt.close(ax.get_figure())
- @slow
+ @pytest.mark.slow
def test_finder_monthly(self):
xp = Period('Jan 1988').ordinal
yrs = [1.15, 2.5, 4, 11]
@@ -448,7 +448,7 @@ def test_finder_monthly_long(self):
xp = Period('1989Q1', 'M').ordinal
assert rs == xp
- @slow
+ @pytest.mark.slow
def test_finder_annual(self):
xp = [1987, 1988, 1990, 1990, 1995, 2020, 2070, 2170]
for i, nyears in enumerate([5, 10, 19, 49, 99, 199, 599, 1001]):
@@ -461,7 +461,7 @@ def test_finder_annual(self):
assert rs == Period(xp[i], freq='A').ordinal
self.plt.close(ax.get_figure())
- @slow
+ @pytest.mark.slow
def test_finder_minutely(self):
nminutes = 50 * 24 * 60
rng = date_range('1/1/1999', freq='Min', periods=nminutes)
@@ -484,7 +484,7 @@ def test_finder_hourly(self):
xp = Period('1/1/1999', freq='H').ordinal
assert rs == xp
- @slow
+ @pytest.mark.slow
def test_gaps(self):
ts = tm.makeTimeSeries()
ts[5:25] = np.nan
@@ -529,7 +529,7 @@ def test_gaps(self):
mask = data.mask
assert mask[2:5, 1].all()
- @slow
+ @pytest.mark.slow
def test_gap_upsample(self):
low = tm.makeTimeSeries()
low[5:25] = np.nan
@@ -551,7 +551,7 @@ def test_gap_upsample(self):
mask = data.mask
assert mask[5:25, 1].all()
- @slow
+ @pytest.mark.slow
def test_secondary_y(self):
ser = Series(np.random.randn(10))
ser2 = Series(np.random.randn(10))
@@ -581,7 +581,7 @@ def test_secondary_y(self):
assert hasattr(ax2, 'left_ax')
assert not hasattr(ax2, 'right_ax')
- @slow
+ @pytest.mark.slow
def test_secondary_y_ts(self):
idx = date_range('1/1/2000', periods=10)
ser = Series(np.random.randn(10), idx)
@@ -608,7 +608,7 @@ def test_secondary_y_ts(self):
ax2 = ser.plot(secondary_y=True)
assert ax.get_yaxis().get_visible()
- @slow
+ @pytest.mark.slow
def test_secondary_kde(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
@@ -621,7 +621,7 @@ def test_secondary_kde(self):
axes = fig.get_axes()
assert axes[1].get_yaxis().get_ticks_position() == 'right'
- @slow
+ @pytest.mark.slow
def test_secondary_bar(self):
ser = Series(np.random.randn(10))
fig, ax = self.plt.subplots()
@@ -629,7 +629,7 @@ def test_secondary_bar(self):
axes = fig.get_axes()
assert axes[1].get_yaxis().get_ticks_position() == 'right'
- @slow
+ @pytest.mark.slow
def test_secondary_frame(self):
df = DataFrame(np.random.randn(5, 3), columns=['a', 'b', 'c'])
axes = df.plot(secondary_y=['a', 'c'], subplots=True)
@@ -638,7 +638,7 @@ def test_secondary_frame(self):
self.default_tick_position)
assert axes[2].get_yaxis().get_ticks_position() == 'right'
- @slow
+ @pytest.mark.slow
def test_secondary_bar_frame(self):
df = DataFrame(np.random.randn(5, 3), columns=['a', 'b', 'c'])
axes = df.plot(kind='bar', secondary_y=['a', 'c'], subplots=True)
@@ -666,7 +666,7 @@ def test_mixed_freq_regular_first(self):
assert left == pidx[0].ordinal
assert right == pidx[-1].ordinal
- @slow
+ @pytest.mark.slow
def test_mixed_freq_irregular_first(self):
s1 = tm.makeTimeSeries()
s2 = s1[[0, 5, 10, 11, 12, 13, 14, 15]]
@@ -697,7 +697,7 @@ def test_mixed_freq_regular_first_df(self):
assert left == pidx[0].ordinal
assert right == pidx[-1].ordinal
- @slow
+ @pytest.mark.slow
def test_mixed_freq_irregular_first_df(self):
# GH 9852
s1 = tm.makeTimeSeries().to_frame()
@@ -723,7 +723,7 @@ def test_mixed_freq_hf_first(self):
for l in ax.get_lines():
assert PeriodIndex(data=l.get_xdata()).freq == 'D'
- @slow
+ @pytest.mark.slow
def test_mixed_freq_alignment(self):
ts_ind = date_range('2012-01-01 13:00', '2012-01-02', freq='H')
ts_data = np.random.randn(12)
@@ -737,7 +737,7 @@ def test_mixed_freq_alignment(self):
assert ax.lines[0].get_xdata()[0] == ax.lines[1].get_xdata()[0]
- @slow
+ @pytest.mark.slow
def test_mixed_freq_lf_first(self):
idxh = date_range('1/1/1999', periods=365, freq='D')
@@ -819,7 +819,7 @@ def test_nat_handling(self):
assert s.index.min() <= Series(xdata).min()
assert Series(xdata).max() <= s.index.max()
- @slow
+ @pytest.mark.slow
def test_to_weekly_resampling(self):
idxh = date_range('1/1/1999', periods=52, freq='W')
idxl = date_range('1/1/1999', periods=12, freq='M')
@@ -840,7 +840,7 @@ def test_to_weekly_resampling(self):
for l in lines:
assert PeriodIndex(data=l.get_xdata()).freq == idxh.freq
- @slow
+ @pytest.mark.slow
def test_from_weekly_resampling(self):
idxh = date_range('1/1/1999', periods=52, freq='W')
idxl = date_range('1/1/1999', periods=12, freq='M')
@@ -876,7 +876,7 @@ def test_from_weekly_resampling(self):
else:
tm.assert_numpy_array_equal(xdata, expected_h)
- @slow
+ @pytest.mark.slow
def test_from_resampling_area_line_mixed(self):
idxh = date_range('1/1/1999', periods=52, freq='W')
idxl = date_range('1/1/1999', periods=12, freq='M')
@@ -950,7 +950,7 @@ def test_from_resampling_area_line_mixed(self):
tm.assert_numpy_array_equal(l.get_ydata(orig=False),
expected_y)
- @slow
+ @pytest.mark.slow
def test_mixed_freq_second_millisecond(self):
# GH 7772, GH 7760
idxh = date_range('2014-07-01 09:00', freq='S', periods=50)
@@ -974,7 +974,7 @@ def test_mixed_freq_second_millisecond(self):
for l in ax.get_lines():
assert PeriodIndex(data=l.get_xdata()).freq == 'L'
- @slow
+ @pytest.mark.slow
def test_irreg_dtypes(self):
# date
idx = [date(2000, 1, 1), date(2000, 1, 5), date(2000, 1, 20)]
@@ -988,7 +988,7 @@ def test_irreg_dtypes(self):
_, ax = self.plt.subplots()
_check_plot_works(df.plot, ax=ax)
- @slow
+ @pytest.mark.slow
def test_time(self):
t = datetime(1, 1, 1, 3, 30, 0)
deltas = np.random.randint(1, 20, 3).cumsum()
@@ -1024,7 +1024,7 @@ def test_time(self):
rs = time(h, m, s).strftime('%H:%M:%S')
assert xp == rs
- @slow
+ @pytest.mark.slow
def test_time_musec(self):
t = datetime(1, 1, 1, 3, 30, 0)
deltas = np.random.randint(1, 20, 3).cumsum()
@@ -1051,7 +1051,7 @@ def test_time_musec(self):
rs = time(h, m, s).strftime('%H:%M:%S.%f')
assert xp == rs
- @slow
+ @pytest.mark.slow
def test_secondary_upsample(self):
idxh = date_range('1/1/1999', periods=365, freq='D')
idxl = date_range('1/1/1999', periods=12, freq='M')
@@ -1067,7 +1067,7 @@ def test_secondary_upsample(self):
for l in ax.left_ax.get_lines():
assert PeriodIndex(l.get_xdata()).freq == 'D'
- @slow
+ @pytest.mark.slow
def test_secondary_legend(self):
fig = self.plt.figure()
ax = fig.add_subplot(211)
@@ -1169,7 +1169,7 @@ def test_format_date_axis(self):
if len(l.get_text()) > 0:
assert l.get_rotation() == 30
- @slow
+ @pytest.mark.slow
def test_ax_plot(self):
x = DatetimeIndex(start='2012-01-02', periods=10, freq='D')
y = lrange(len(x))
@@ -1177,7 +1177,7 @@ def test_ax_plot(self):
lines = ax.plot(x, y, label='Y')
tm.assert_index_equal(DatetimeIndex(lines[0].get_xdata()), x)
- @slow
+ @pytest.mark.slow
def test_mpl_nopandas(self):
dates = [date(2008, 12, 31), date(2009, 1, 31)]
values1 = np.arange(10.0, 11.0, 0.5)
@@ -1196,7 +1196,7 @@ def test_mpl_nopandas(self):
exp = np.array([x.toordinal() for x in dates], dtype=np.float64)
tm.assert_numpy_array_equal(line2.get_xydata()[:, 0], exp)
- @slow
+ @pytest.mark.slow
def test_irregular_ts_shared_ax_xlim(self):
# GH 2960
ts = tm.makeTimeSeries()[:20]
@@ -1212,7 +1212,7 @@ def test_irregular_ts_shared_ax_xlim(self):
assert left == ts_irregular.index.min().toordinal()
assert right == ts_irregular.index.max().toordinal()
- @slow
+ @pytest.mark.slow
def test_secondary_y_non_ts_xlim(self):
# GH 3490 - non-timeseries with secondary y
index_1 = [1, 2, 3, 4]
@@ -1229,7 +1229,7 @@ def test_secondary_y_non_ts_xlim(self):
assert left_before == left_after
assert right_before < right_after
- @slow
+ @pytest.mark.slow
def test_secondary_y_regular_ts_xlim(self):
# GH 3490 - regular-timeseries with secondary y
index_1 = date_range(start='2000-01-01', periods=4, freq='D')
@@ -1246,7 +1246,7 @@ def test_secondary_y_regular_ts_xlim(self):
assert left_before == left_after
assert right_before < right_after
- @slow
+ @pytest.mark.slow
def test_secondary_y_mixed_freq_ts_xlim(self):
# GH 3490 - mixed frequency timeseries with secondary y
rng = date_range('2000-01-01', periods=10000, freq='min')
@@ -1262,7 +1262,7 @@ def test_secondary_y_mixed_freq_ts_xlim(self):
assert left_before == left_after
assert right_before == right_after
- @slow
+ @pytest.mark.slow
def test_secondary_y_irregular_ts_xlim(self):
# GH 3490 - irregular-timeseries with secondary y
ts = tm.makeTimeSeries()[:20]
@@ -1361,7 +1361,7 @@ def test_hist(self):
_, ax = self.plt.subplots()
ax.hist([x, x], weights=[w1, w2])
- @slow
+ @pytest.mark.slow
def test_overlapping_datetime(self):
        # GH 6608
s1 = Series([1, 2, 3], index=[datetime(1995, 12, 31),
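
The change above is mechanical: the `slow` alias imported from `pandas.util.testing` (historically just a re-export of `pytest.mark.slow`) is replaced with the pytest marker used directly. A minimal, self-contained sketch of how such a marker behaves (the test name and the `-m "not slow"` invocation are illustrative, not taken from this PR):

```python
import pytest

# Marking a test as slow does not change how it runs by default; it only
# attaches metadata that pytest can select or deselect on, e.g.
#   pytest -m "not slow"   # skips every @pytest.mark.slow test
#   pytest -m slow         # runs only the slow tests
@pytest.mark.slow
def test_expensive_path():
    # Placeholder for an expensive computation; here just a cheap stand-in.
    assert sum(range(10_000)) == 49_995_000
```

Because the marker is pure metadata, a search-and-replace of the decorator (as done throughout this diff) is behavior-preserving for the test suite.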
diff --git a/pandas/tests/plotting/test_deprecated.py b/pandas/tests/plotting/test_deprecated.py
index ca03bcb060e25..970de6ff881ab 100644
--- a/pandas/tests/plotting/test_deprecated.py
+++ b/pandas/tests/plotting/test_deprecated.py
@@ -4,7 +4,7 @@
import pandas as pd
import pandas.util.testing as tm
-from pandas.util.testing import slow
+import pytest
from numpy.random import randn
@@ -23,7 +23,7 @@
class TestDeprecatedNameSpace(TestPlotBase):
- @slow
+ @pytest.mark.slow
def test_scatter_plot_legacy(self):
tm._skip_if_no_scipy()
@@ -35,7 +35,7 @@ def test_scatter_plot_legacy(self):
with tm.assert_produces_warning(FutureWarning):
pd.scatter_matrix(df)
- @slow
+ @pytest.mark.slow
def test_boxplot_deprecated(self):
df = pd.DataFrame(randn(6, 4),
index=list(string.ascii_letters[:6]),
@@ -46,13 +46,13 @@ def test_boxplot_deprecated(self):
plotting.boxplot(df, column=['one', 'two'],
by='indic')
- @slow
+ @pytest.mark.slow
def test_radviz_deprecated(self):
df = self.iris
with tm.assert_produces_warning(FutureWarning):
plotting.radviz(frame=df, class_column='Name')
- @slow
+ @pytest.mark.slow
def test_plot_params(self):
with tm.assert_produces_warning(FutureWarning):
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 352c03582db93..7878740f64e55 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -15,7 +15,6 @@
from pandas.compat import range, lrange, lmap, lzip, u, zip, PY3
from pandas.io.formats.printing import pprint_thing
import pandas.util.testing as tm
-from pandas.util.testing import slow
import numpy as np
from numpy.random import rand, randn
@@ -41,7 +40,7 @@ def setup_method(self, method):
"C": np.arange(20) + np.random.uniform(
size=20)})
- @slow
+ @pytest.mark.slow
def test_plot(self):
df = self.tdf
_check_plot_works(df.plot, grid=False)
@@ -188,13 +187,13 @@ def test_nonnumeric_exclude(self):
ax = df.plot()
assert len(ax.get_lines()) == 1 # B was plotted
- @slow
+ @pytest.mark.slow
def test_implicit_label(self):
df = DataFrame(randn(10, 3), columns=['a', 'b', 'c'])
ax = df.plot(x='a', y='b')
self._check_text_labels(ax.xaxis.get_label(), 'a')
- @slow
+ @pytest.mark.slow
def test_donot_overwrite_index_name(self):
# GH 8494
df = DataFrame(randn(2, 2), columns=['a', 'b'])
@@ -202,7 +201,7 @@ def test_donot_overwrite_index_name(self):
df.plot(y='b', label='LABEL')
assert df.index.name == 'NAME'
- @slow
+ @pytest.mark.slow
def test_plot_xy(self):
# columns.inferred_type == 'string'
df = self.tdf
@@ -228,7 +227,7 @@ def test_plot_xy(self):
# columns.inferred_type == 'mixed'
# TODO add MultiIndex test
- @slow
+ @pytest.mark.slow
def test_logscales(self):
df = DataFrame({'a': np.arange(100)}, index=np.arange(100))
ax = df.plot(logy=True)
@@ -240,7 +239,7 @@ def test_logscales(self):
ax = df.plot(loglog=True)
self._check_ax_scales(ax, xaxis='log', yaxis='log')
- @slow
+ @pytest.mark.slow
def test_xcompat(self):
import pandas as pd
@@ -305,7 +304,7 @@ def test_unsorted_index(self):
rs = Series(rs[:, 1], rs[:, 0], dtype=np.int64, name='y')
tm.assert_series_equal(rs, df.y)
- @slow
+ @pytest.mark.slow
def test_subplots(self):
df = DataFrame(np.random.rand(10, 3),
index=list(string.ascii_letters[:10]))
@@ -345,7 +344,7 @@ def test_subplots(self):
for ax in axes:
assert ax.get_legend() is None
- @slow
+ @pytest.mark.slow
def test_subplots_timeseries(self):
idx = date_range(start='2014-07-01', freq='M', periods=10)
df = DataFrame(np.random.rand(10, 3), index=idx)
@@ -381,7 +380,7 @@ def test_subplots_timeseries(self):
self._check_ticks_props(ax, xlabelsize=7, xrot=45,
ylabelsize=7)
- @slow
+ @pytest.mark.slow
def test_subplots_layout(self):
# GH 6667
df = DataFrame(np.random.rand(10, 3),
@@ -427,7 +426,7 @@ def test_subplots_layout(self):
self._check_axes_shape(axes, axes_num=1, layout=(3, 3))
assert axes.shape == (3, 3)
- @slow
+ @pytest.mark.slow
def test_subplots_warnings(self):
# GH 9464
warnings.simplefilter('error')
@@ -442,7 +441,7 @@ def test_subplots_warnings(self):
self.fail(w)
warnings.simplefilter('default')
- @slow
+ @pytest.mark.slow
def test_subplots_multiple_axes(self):
# GH 5353, 6970, GH 7069
fig, axes = self.plt.subplots(2, 3)
@@ -543,7 +542,7 @@ def test_subplots_sharex_axes_existing_axes(self):
for ax in axes.ravel():
self._check_visible(ax.get_yticklabels(), visible=True)
- @slow
+ @pytest.mark.slow
def test_subplots_dup_columns(self):
# GH 10962
df = DataFrame(np.random.rand(5, 5), columns=list('aaaaa'))
@@ -697,7 +696,7 @@ def test_area_lim(self):
ymin, ymax = ax.get_ylim()
assert ymax == 0
- @slow
+ @pytest.mark.slow
def test_bar_colors(self):
import matplotlib.pyplot as plt
default_colors = self._maybe_unpack_cycler(plt.rcParams)
@@ -733,7 +732,7 @@ def test_bar_colors(self):
self._check_colors(ax.patches[::5], facecolors=['green'] * 5)
tm.close()
- @slow
+ @pytest.mark.slow
def test_bar_linewidth(self):
df = DataFrame(randn(5, 5))
@@ -754,7 +753,7 @@ def test_bar_linewidth(self):
for r in ax.patches:
assert r.get_linewidth() == 2
- @slow
+ @pytest.mark.slow
def test_bar_barwidth(self):
df = DataFrame(randn(5, 5))
@@ -792,7 +791,7 @@ def test_bar_barwidth(self):
for r in ax.patches:
assert r.get_height() == width
- @slow
+ @pytest.mark.slow
def test_bar_barwidth_position(self):
df = DataFrame(randn(5, 5))
self._check_bar_alignment(df, kind='bar', stacked=False, width=0.9,
@@ -808,7 +807,7 @@ def test_bar_barwidth_position(self):
self._check_bar_alignment(df, kind='barh', subplots=True, width=0.9,
position=0.2)
- @slow
+ @pytest.mark.slow
def test_bar_barwidth_position_int(self):
# GH 12979
df = DataFrame(randn(5, 5))
@@ -828,7 +827,7 @@ def test_bar_barwidth_position_int(self):
self._check_bar_alignment(df, kind='bar', subplots=True, width=1)
self._check_bar_alignment(df, kind='barh', subplots=True, width=1)
- @slow
+ @pytest.mark.slow
def test_bar_bottom_left(self):
df = DataFrame(rand(5, 5))
ax = df.plot.bar(stacked=False, bottom=1)
@@ -857,7 +856,7 @@ def test_bar_bottom_left(self):
result = [p.get_x() for p in ax.patches]
assert result == [1] * 5
- @slow
+ @pytest.mark.slow
def test_bar_nan(self):
df = DataFrame({'A': [10, np.nan, 20],
'B': [5, 10, 20],
@@ -875,7 +874,7 @@ def test_bar_nan(self):
expected = [0.0, 0.0, 0.0, 10.0, 0.0, 20.0, 15.0, 10.0, 40.0]
assert result == expected
- @slow
+ @pytest.mark.slow
def test_bar_categorical(self):
# GH 13019
df1 = pd.DataFrame(np.random.randn(6, 5),
@@ -901,7 +900,7 @@ def test_bar_categorical(self):
assert ax.patches[0].get_x() == -0.25
assert ax.patches[-1].get_x() == 4.75
- @slow
+ @pytest.mark.slow
def test_plot_scatter(self):
df = DataFrame(randn(6, 4),
index=list(string.ascii_letters[:6]),
@@ -919,7 +918,7 @@ def test_plot_scatter(self):
axes = df.plot(x='x', y='y', kind='scatter', subplots=True)
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
- @slow
+ @pytest.mark.slow
def test_plot_scatter_with_categorical_data(self):
# GH 16199
df = pd.DataFrame({'x': [1, 2, 3, 4],
@@ -937,7 +936,7 @@ def test_plot_scatter_with_categorical_data(self):
df.plot(x='y', y='y', kind='scatter')
ve.match('requires x column to be numeric')
- @slow
+ @pytest.mark.slow
def test_plot_scatter_with_c(self):
df = DataFrame(randn(6, 4),
index=list(string.ascii_letters[:6]),
@@ -1007,7 +1006,7 @@ def test_scatter_colors(self):
tm.assert_numpy_array_equal(ax.collections[0].get_facecolor()[0],
np.array([1, 1, 1, 1], dtype=np.float64))
- @slow
+ @pytest.mark.slow
def test_plot_bar(self):
df = DataFrame(randn(6, 4),
index=list(string.ascii_letters[:6]),
@@ -1098,7 +1097,7 @@ def _check_bar_alignment(self, df, kind='bar', stacked=False,
return axes
- @slow
+ @pytest.mark.slow
def test_bar_stacked_center(self):
# GH2157
df = DataFrame({'A': [3] * 5, 'B': lrange(5)}, index=lrange(5))
@@ -1107,7 +1106,7 @@ def test_bar_stacked_center(self):
self._check_bar_alignment(df, kind='barh', stacked=True)
self._check_bar_alignment(df, kind='barh', stacked=True, width=0.9)
- @slow
+ @pytest.mark.slow
def test_bar_center(self):
df = DataFrame({'A': [3] * 5, 'B': lrange(5)}, index=lrange(5))
self._check_bar_alignment(df, kind='bar', stacked=False)
@@ -1115,7 +1114,7 @@ def test_bar_center(self):
self._check_bar_alignment(df, kind='barh', stacked=False)
self._check_bar_alignment(df, kind='barh', stacked=False, width=0.9)
- @slow
+ @pytest.mark.slow
def test_bar_subplots_center(self):
df = DataFrame({'A': [3] * 5, 'B': lrange(5)}, index=lrange(5))
self._check_bar_alignment(df, kind='bar', subplots=True)
@@ -1123,7 +1122,7 @@ def test_bar_subplots_center(self):
self._check_bar_alignment(df, kind='barh', subplots=True)
self._check_bar_alignment(df, kind='barh', subplots=True, width=0.9)
- @slow
+ @pytest.mark.slow
def test_bar_align_single_column(self):
df = DataFrame(randn(5))
self._check_bar_alignment(df, kind='bar', stacked=False)
@@ -1133,7 +1132,7 @@ def test_bar_align_single_column(self):
self._check_bar_alignment(df, kind='bar', subplots=True)
self._check_bar_alignment(df, kind='barh', subplots=True)
- @slow
+ @pytest.mark.slow
def test_bar_edge(self):
df = DataFrame({'A': [3] * 5, 'B': lrange(5)}, index=lrange(5))
@@ -1158,7 +1157,7 @@ def test_bar_edge(self):
self._check_bar_alignment(df, kind='barh', subplots=True, width=0.9,
align='edge')
- @slow
+ @pytest.mark.slow
def test_bar_log_no_subplots(self):
# GH3254, GH3298 matplotlib/matplotlib#1882, #1892
# regressions in 1.2.1
@@ -1172,7 +1171,7 @@ def test_bar_log_no_subplots(self):
ax = df.plot.bar(grid=True, log=True)
tm.assert_numpy_array_equal(ax.yaxis.get_ticklocs(), expected)
- @slow
+ @pytest.mark.slow
def test_bar_log_subplots(self):
expected = np.array([1., 10., 100., 1000.])
if not self.mpl_le_1_2_1:
@@ -1184,7 +1183,7 @@ def test_bar_log_subplots(self):
tm.assert_numpy_array_equal(ax[0].yaxis.get_ticklocs(), expected)
tm.assert_numpy_array_equal(ax[1].yaxis.get_ticklocs(), expected)
- @slow
+ @pytest.mark.slow
def test_boxplot(self):
df = self.hist_df
series = df['height']
@@ -1222,7 +1221,7 @@ def test_boxplot(self):
tm.assert_numpy_array_equal(ax.xaxis.get_ticklocs(), positions)
assert len(ax.lines) == self.bp_n_objects * len(numeric_cols)
- @slow
+ @pytest.mark.slow
def test_boxplot_vertical(self):
df = self.hist_df
numeric_cols = df._get_numeric_data().columns
@@ -1250,7 +1249,7 @@ def test_boxplot_vertical(self):
tm.assert_numpy_array_equal(ax.yaxis.get_ticklocs(), positions)
assert len(ax.lines) == self.bp_n_objects * len(numeric_cols)
- @slow
+ @pytest.mark.slow
def test_boxplot_return_type(self):
df = DataFrame(randn(6, 4),
index=list(string.ascii_letters[:6]),
@@ -1270,7 +1269,7 @@ def test_boxplot_return_type(self):
result = df.plot.box(return_type='both')
self._check_box_return_type(result, 'both')
- @slow
+ @pytest.mark.slow
def test_boxplot_subplots_return_type(self):
df = self.hist_df
@@ -1287,7 +1286,7 @@ def test_boxplot_subplots_return_type(self):
expected_keys=['height', 'weight', 'category'],
check_ax_title=False)
- @slow
+ @pytest.mark.slow
def test_kde_df(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
@@ -1308,7 +1307,7 @@ def test_kde_df(self):
axes = df.plot(kind='kde', logy=True, subplots=True)
self._check_ax_scales(axes, yaxis='log')
- @slow
+ @pytest.mark.slow
def test_kde_missing_vals(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
@@ -1316,7 +1315,7 @@ def test_kde_missing_vals(self):
df.loc[0, 0] = np.nan
_check_plot_works(df.plot, kind='kde')
- @slow
+ @pytest.mark.slow
def test_hist_df(self):
from matplotlib.patches import Rectangle
if self.mpl_le_1_2_1:
@@ -1376,7 +1375,7 @@ def _check_box_coord(self, patches, expected_y=None, expected_h=None,
tm.assert_numpy_array_equal(result_width, expected_w,
check_dtype=False)
- @slow
+ @pytest.mark.slow
def test_hist_df_coord(self):
normal_df = DataFrame({'A': np.repeat(np.array([1, 2, 3, 4, 5]),
np.array([10, 9, 8, 7, 6])),
@@ -1467,12 +1466,12 @@ def test_hist_df_coord(self):
expected_x=np.array([0, 0, 0, 0, 0]),
expected_w=np.array([6, 7, 8, 9, 10]))
- @slow
+ @pytest.mark.slow
def test_plot_int_columns(self):
df = DataFrame(randn(100, 4)).cumsum()
_check_plot_works(df.plot, legend=True)
- @slow
+ @pytest.mark.slow
def test_df_legend_labels(self):
kinds = ['line', 'bar', 'barh', 'kde', 'area', 'hist']
df = DataFrame(rand(3, 3), columns=['a', 'b', 'c'])
@@ -1565,7 +1564,7 @@ def test_legend_name(self):
leg_title = ax.legend_.get_title()
self._check_text_labels(leg_title, 'new')
- @slow
+ @pytest.mark.slow
def test_no_legend(self):
kinds = ['line', 'bar', 'barh', 'kde', 'area', 'hist']
df = DataFrame(rand(3, 3), columns=['a', 'b', 'c'])
@@ -1577,7 +1576,7 @@ def test_no_legend(self):
ax = df.plot(kind=kind, legend=False)
self._check_legend_labels(ax, visible=False)
- @slow
+ @pytest.mark.slow
def test_style_by_column(self):
import matplotlib.pyplot as plt
fig = plt.gcf()
@@ -1593,7 +1592,7 @@ def test_style_by_column(self):
for i, l in enumerate(ax.get_lines()[:len(markers)]):
assert l.get_marker() == markers[i]
- @slow
+ @pytest.mark.slow
def test_line_label_none(self):
s = Series([1, 2])
ax = s.plot()
@@ -1602,7 +1601,7 @@ def test_line_label_none(self):
ax = s.plot(legend=True)
assert ax.get_legend().get_texts()[0].get_text() == 'None'
- @slow
+ @pytest.mark.slow
@tm.capture_stdout
def test_line_colors(self):
from matplotlib import cm
@@ -1654,13 +1653,13 @@ def test_line_colors(self):
# Forced show plot
_check_plot_works(df.plot, color=custom_colors)
- @slow
+ @pytest.mark.slow
def test_dont_modify_colors(self):
colors = ['r', 'g', 'b']
pd.DataFrame(np.random.rand(10, 2)).plot(color=colors)
assert len(colors) == 3
- @slow
+ @pytest.mark.slow
def test_line_colors_and_styles_subplots(self):
# GH 9894
from matplotlib import cm
@@ -1738,7 +1737,7 @@ def test_line_colors_and_styles_subplots(self):
self._check_colors(ax.get_lines(), linecolors=[c])
tm.close()
- @slow
+ @pytest.mark.slow
def test_area_colors(self):
from matplotlib import cm
from matplotlib.collections import PolyCollection
@@ -1798,7 +1797,7 @@ def test_area_colors(self):
for h in handles:
assert h.get_alpha() == 0.5
- @slow
+ @pytest.mark.slow
def test_hist_colors(self):
default_colors = self._maybe_unpack_cycler(self.plt.rcParams)
@@ -1832,7 +1831,7 @@ def test_hist_colors(self):
self._check_colors(ax.patches[::10], facecolors=['green'] * 5)
tm.close()
- @slow
+ @pytest.mark.slow
def test_kde_colors(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
@@ -1855,7 +1854,7 @@ def test_kde_colors(self):
rgba_colors = lmap(cm.jet, np.linspace(0, 1, len(df)))
self._check_colors(ax.get_lines(), linecolors=rgba_colors)
- @slow
+ @pytest.mark.slow
def test_kde_colors_and_styles_subplots(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
@@ -1914,7 +1913,7 @@ def test_kde_colors_and_styles_subplots(self):
self._check_colors(ax.get_lines(), linecolors=[c])
tm.close()
- @slow
+ @pytest.mark.slow
def test_boxplot_colors(self):
def _check_colors(bp, box_c, whiskers_c, medians_c, caps_c='k',
fliers_c=None):
@@ -2025,7 +2024,7 @@ def test_all_invalid_plot_data(self):
with pytest.raises(TypeError):
df.plot(kind=kind)
- @slow
+ @pytest.mark.slow
def test_partially_invalid_plot_data(self):
with tm.RNGContext(42):
df = DataFrame(randn(10, 2), dtype=object)
@@ -2050,7 +2049,7 @@ def test_invalid_kind(self):
with pytest.raises(ValueError):
df.plot(kind='aasdf')
- @slow
+ @pytest.mark.slow
def test_hexbin_basic(self):
df = self.hexbin_df
@@ -2066,7 +2065,7 @@ def test_hexbin_basic(self):
# return value is single axes
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
- @slow
+ @pytest.mark.slow
def test_hexbin_with_c(self):
df = self.hexbin_df
@@ -2076,7 +2075,7 @@ def test_hexbin_with_c(self):
ax = df.plot.hexbin(x='A', y='B', C='C', reduce_C_function=np.std)
assert len(ax.collections) == 1
- @slow
+ @pytest.mark.slow
def test_hexbin_cmap(self):
df = self.hexbin_df
@@ -2088,14 +2087,14 @@ def test_hexbin_cmap(self):
ax = df.plot.hexbin(x='A', y='B', colormap=cm)
assert ax.collections[0].cmap.name == cm
- @slow
+ @pytest.mark.slow
def test_no_color_bar(self):
df = self.hexbin_df
ax = df.plot.hexbin(x='A', y='B', colorbar=None)
assert ax.collections[0].colorbar is None
- @slow
+ @pytest.mark.slow
def test_allow_cmap(self):
df = self.hexbin_df
@@ -2105,7 +2104,7 @@ def test_allow_cmap(self):
with pytest.raises(TypeError):
df.plot.hexbin(x='A', y='B', cmap='YlGn', colormap='BuGn')
- @slow
+ @pytest.mark.slow
def test_pie_df(self):
df = DataFrame(np.random.rand(5, 3), columns=['X', 'Y', 'Z'],
index=['a', 'b', 'c', 'd', 'e'])
@@ -2159,7 +2158,7 @@ def test_pie_df_nan(self):
assert ([x.get_text() for x in ax.get_legend().get_texts()] ==
base_expected[:i] + base_expected[i + 1:])
- @slow
+ @pytest.mark.slow
def test_errorbar_plot(self):
d = {'x': np.arange(12), 'y': np.arange(12, 0, -1)}
df = DataFrame(d)
@@ -2227,7 +2226,7 @@ def test_errorbar_plot(self):
with pytest.raises((ValueError, TypeError)):
df.plot(yerr=df_err)
- @slow
+ @pytest.mark.slow
def test_errorbar_with_integer_column_names(self):
# test with integer column names
df = DataFrame(np.random.randn(10, 2))
@@ -2237,7 +2236,7 @@ def test_errorbar_with_integer_column_names(self):
ax = _check_plot_works(df.plot, y=0, yerr=1)
self._check_has_errorbars(ax, xerr=0, yerr=1)
- @slow
+ @pytest.mark.slow
def test_errorbar_with_partial_columns(self):
df = DataFrame(np.random.randn(10, 3))
df_err = DataFrame(np.random.randn(10, 2), columns=[0, 2])
@@ -2260,7 +2259,7 @@ def test_errorbar_with_partial_columns(self):
ax = _check_plot_works(df.plot, yerr=err)
self._check_has_errorbars(ax, xerr=0, yerr=1)
- @slow
+ @pytest.mark.slow
def test_errorbar_timeseries(self):
d = {'x': np.arange(12), 'y': np.arange(12, 0, -1)}
@@ -2370,7 +2369,7 @@ def _check_errorbar_color(containers, expected, has_err='has_xerr'):
self._check_has_errorbars(ax, xerr=0, yerr=1)
_check_errorbar_color(ax.containers, 'green', has_err='has_yerr')
- @slow
+ @pytest.mark.slow
def test_sharex_and_ax(self):
# https://github.com/pandas-dev/pandas/issues/9737 using gridspec,
# the axis in fig.get_axis() are sorted differently than pandas
@@ -2422,7 +2421,7 @@ def _check(axes):
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
tm.close()
- @slow
+ @pytest.mark.slow
def test_sharey_and_ax(self):
# https://github.com/pandas-dev/pandas/issues/9737 using gridspec,
# the axis in fig.get_axis() are sorted differently than pandas
@@ -2505,7 +2504,7 @@ def test_memory_leak(self):
# need to actually access something to get an error
results[key].lines
- @slow
+ @pytest.mark.slow
def test_df_subplots_patterns_minorticks(self):
# GH 10657
import matplotlib.pyplot as plt
@@ -2550,7 +2549,7 @@ def test_df_subplots_patterns_minorticks(self):
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
tm.close()
- @slow
+ @pytest.mark.slow
def test_df_gridspec_patterns(self):
# GH 10819
import matplotlib.pyplot as plt
@@ -2673,7 +2672,7 @@ def _get_boxed_grid():
self._check_visible(ax.get_xticklabels(minor=True), visible=True)
tm.close()
- @slow
+ @pytest.mark.slow
def test_df_grid_settings(self):
# Make sure plot defaults to rcParams['axes.grid'] setting, GH 9792
self._check_grid_settings(
diff --git a/pandas/tests/plotting/test_hist_method.py b/pandas/tests/plotting/test_hist_method.py
index 17a75e5cb287c..5f7b2dd2d6ca9 100644
--- a/pandas/tests/plotting/test_hist_method.py
+++ b/pandas/tests/plotting/test_hist_method.py
@@ -6,7 +6,6 @@
from pandas import Series, DataFrame
import pandas.util.testing as tm
-from pandas.util.testing import slow
import numpy as np
from numpy.random import randn
@@ -28,7 +27,7 @@ def setup_method(self, method):
self.ts = tm.makeTimeSeries()
self.ts.name = 'ts'
- @slow
+ @pytest.mark.slow
def test_hist_legacy(self):
_check_plot_works(self.ts.hist)
_check_plot_works(self.ts.hist, grid=False)
@@ -52,13 +51,13 @@ def test_hist_legacy(self):
with pytest.raises(ValueError):
self.ts.hist(by=self.ts.index, figure=fig)
- @slow
+ @pytest.mark.slow
def test_hist_bins_legacy(self):
df = DataFrame(np.random.randn(10, 2))
ax = df.hist(bins=2)[0][0]
assert len(ax.patches) == 2
- @slow
+ @pytest.mark.slow
def test_hist_layout(self):
df = self.hist_df
with pytest.raises(ValueError):
@@ -67,7 +66,7 @@ def test_hist_layout(self):
with pytest.raises(ValueError):
df.height.hist(layout=[1, 1])
- @slow
+ @pytest.mark.slow
def test_hist_layout_with_by(self):
df = self.hist_df
@@ -113,7 +112,7 @@ def test_hist_layout_with_by(self):
self._check_axes_shape(
axes, axes_num=4, layout=(4, 2), figsize=(12, 7))
- @slow
+ @pytest.mark.slow
def test_hist_no_overlap(self):
from matplotlib.pyplot import subplot, gcf
x = Series(randn(2))
@@ -126,13 +125,13 @@ def test_hist_no_overlap(self):
axes = fig.axes if self.mpl_ge_1_5_0 else fig.get_axes()
assert len(axes) == 2
- @slow
+ @pytest.mark.slow
def test_hist_by_no_extra_plots(self):
df = self.hist_df
axes = df.height.hist(by=df.gender) # noqa
assert len(self.plt.get_fignums()) == 1
- @slow
+ @pytest.mark.slow
def test_plot_fails_when_ax_differs_from_figure(self):
from pylab import figure
fig1 = figure()
@@ -144,7 +143,7 @@ def test_plot_fails_when_ax_differs_from_figure(self):
class TestDataFramePlots(TestPlotBase):
- @slow
+ @pytest.mark.slow
def test_hist_df_legacy(self):
from matplotlib.patches import Rectangle
with tm.assert_produces_warning(UserWarning):
@@ -210,7 +209,7 @@ def test_hist_df_legacy(self):
with pytest.raises(AttributeError):
ser.hist(foo='bar')
- @slow
+ @pytest.mark.slow
def test_hist_layout(self):
df = DataFrame(randn(100, 3))
@@ -241,7 +240,7 @@ def test_hist_layout(self):
with pytest.raises(ValueError):
df.hist(layout=(-1, -1))
- @slow
+ @pytest.mark.slow
# GH 9351
def test_tight_layout(self):
if self.mpl_ge_2_0_1:
@@ -254,7 +253,7 @@ def test_tight_layout(self):
class TestDataFrameGroupByPlots(TestPlotBase):
- @slow
+ @pytest.mark.slow
def test_grouped_hist_legacy(self):
from matplotlib.patches import Rectangle
@@ -303,7 +302,7 @@ def test_grouped_hist_legacy(self):
with tm.assert_produces_warning(FutureWarning):
df.hist(by='C', figsize='default')
- @slow
+ @pytest.mark.slow
def test_grouped_hist_legacy2(self):
n = 10
weight = Series(np.random.normal(166, 20, size=n))
@@ -318,7 +317,7 @@ def test_grouped_hist_legacy2(self):
assert len(self.plt.get_fignums()) == 2
tm.close()
- @slow
+ @pytest.mark.slow
def test_grouped_hist_layout(self):
df = self.hist_df
pytest.raises(ValueError, df.hist, column='weight', by=df.gender,
@@ -367,7 +366,7 @@ def test_grouped_hist_layout(self):
axes = df.hist(column=['height', 'weight', 'category'])
self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
- @slow
+ @pytest.mark.slow
def test_grouped_hist_multiple_axes(self):
# GH 6970, GH 7069
df = self.hist_df
@@ -387,7 +386,7 @@ def test_grouped_hist_multiple_axes(self):
# pass different number of axes from required
axes = df.hist(column='height', ax=axes)
- @slow
+ @pytest.mark.slow
def test_axis_share_x(self):
df = self.hist_df
# GH4089
@@ -401,7 +400,7 @@ def test_axis_share_x(self):
assert not ax1._shared_y_axes.joined(ax1, ax2)
assert not ax2._shared_y_axes.joined(ax1, ax2)
- @slow
+ @pytest.mark.slow
def test_axis_share_y(self):
df = self.hist_df
ax1, ax2 = df.hist(column='height', by=df.gender, sharey=True)
@@ -414,7 +413,7 @@ def test_axis_share_y(self):
assert not ax1._shared_x_axes.joined(ax1, ax2)
assert not ax2._shared_x_axes.joined(ax1, ax2)
- @slow
+ @pytest.mark.slow
def test_axis_share_xy(self):
df = self.hist_df
ax1, ax2 = df.hist(column='height', by=df.gender, sharex=True,
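
With the `pandas.util.testing.slow` shim removed, the `slow` marker must be known to pytest itself, otherwise strict-marker configurations reject it. A hedged sketch of a `conftest.py` registration (the exact mechanism pandas uses may differ; this is the generic pytest idiom):

```python
# conftest.py (sketch): register the "slow" marker so that
# `pytest --strict-markers` does not error on @pytest.mark.slow,
# and so the marker shows up in `pytest --markers`.
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "slow: mark a test as slow to run"
    )
```

Equivalently, the marker can be declared under `[tool:pytest]` / `markers =` in `setup.cfg` or `pytest.ini`.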
diff --git a/pandas/tests/plotting/test_misc.py b/pandas/tests/plotting/test_misc.py
index d93ad90a36a9c..684a943fb5a69 100644
--- a/pandas/tests/plotting/test_misc.py
+++ b/pandas/tests/plotting/test_misc.py
@@ -7,7 +7,6 @@
from pandas import Series, DataFrame
from pandas.compat import lmap
import pandas.util.testing as tm
-from pandas.util.testing import slow
import numpy as np
from numpy import random
@@ -30,7 +29,7 @@ def setup_method(self, method):
self.ts = tm.makeTimeSeries()
self.ts.name = 'ts'
- @slow
+ @pytest.mark.slow
def test_autocorrelation_plot(self):
from pandas.plotting import autocorrelation_plot
_check_plot_works(autocorrelation_plot, series=self.ts)
@@ -39,13 +38,13 @@ def test_autocorrelation_plot(self):
ax = autocorrelation_plot(self.ts, label='Test')
self._check_legend_labels(ax, labels=['Test'])
- @slow
+ @pytest.mark.slow
def test_lag_plot(self):
from pandas.plotting import lag_plot
_check_plot_works(lag_plot, series=self.ts)
_check_plot_works(lag_plot, series=self.ts, lag=5)
- @slow
+ @pytest.mark.slow
def test_bootstrap_plot(self):
from pandas.plotting import bootstrap_plot
_check_plot_works(bootstrap_plot, series=self.ts, size=10)
@@ -53,7 +52,7 @@ def test_bootstrap_plot(self):
class TestDataFramePlots(TestPlotBase):
- @slow
+ @pytest.mark.slow
def test_scatter_plot_legacy(self):
tm._skip_if_no_scipy()
@@ -130,7 +129,7 @@ def test_scatter_matrix_axis(self):
self._check_ticks_props(
axes, xlabelsize=8, xrot=90, ylabelsize=8, yrot=0)
- @slow
+ @pytest.mark.slow
def test_andrews_curves(self):
from pandas.plotting import andrews_curves
from matplotlib import cm
@@ -195,7 +194,7 @@ def test_andrews_curves(self):
with tm.assert_produces_warning(FutureWarning):
andrews_curves(data=df, class_column='Name')
- @slow
+ @pytest.mark.slow
def test_parallel_coordinates(self):
from pandas.plotting import parallel_coordinates
from matplotlib import cm
@@ -263,7 +262,7 @@ def test_parallel_coordinates_with_sorted_labels(self):
# lables and colors are ordered strictly increasing
assert prev[1] < nxt[1] and prev[0] < nxt[0]
- @slow
+ @pytest.mark.slow
def test_radviz(self):
from pandas.plotting import radviz
from matplotlib import cm
@@ -301,7 +300,7 @@ def test_radviz(self):
handles, labels = ax.get_legend_handles_labels()
self._check_colors(handles, facecolors=colors)
- @slow
+ @pytest.mark.slow
def test_subplot_titles(self):
df = self.iris.drop('Name', axis=1).head()
# Use the column names as the subplot titles
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 7c66b5dafb9c7..9c9011ba1ca7b 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -12,7 +12,6 @@
from pandas import Series, DataFrame, date_range
from pandas.compat import range, lrange
import pandas.util.testing as tm
-from pandas.util.testing import slow
import numpy as np
from numpy.random import randn
@@ -41,7 +40,7 @@ def setup_method(self, method):
self.iseries = tm.makePeriodSeries()
self.iseries.name = 'iseries'
- @slow
+ @pytest.mark.slow
def test_plot(self):
_check_plot_works(self.ts.plot, label='foo')
_check_plot_works(self.ts.plot, use_index=False)
@@ -79,7 +78,7 @@ def test_plot(self):
ax = _check_plot_works(self.ts.plot, subplots=True, layout=(1, -1))
self._check_axes_shape(ax, axes_num=1, layout=(1, 1))
- @slow
+ @pytest.mark.slow
def test_plot_figsize_and_title(self):
# figsize and title
_, ax = self.plt.subplots()
@@ -210,7 +209,7 @@ def test_line_use_index_false(self):
label2 = ax2.get_xlabel()
assert label2 == ''
- @slow
+ @pytest.mark.slow
def test_bar_log(self):
expected = np.array([1., 10., 100., 1000.])
@@ -252,7 +251,7 @@ def test_bar_log(self):
tm.assert_almost_equal(res[1], ymax)
tm.assert_numpy_array_equal(ax.xaxis.get_ticklocs(), expected)
- @slow
+ @pytest.mark.slow
def test_bar_ignore_index(self):
df = Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
_, ax = self.plt.subplots()
@@ -280,7 +279,7 @@ def test_irregular_datetime(self):
ax.set_xlim('1/1/1999', '1/1/2001')
assert xp == ax.get_xlim()[0]
- @slow
+ @pytest.mark.slow
def test_pie_series(self):
# if sum of values is less than 1.0, pie handle them as rate and draw
# semicircle.
@@ -339,14 +338,14 @@ def test_pie_nan(self):
result = [x.get_text() for x in ax.texts]
assert result == expected
- @slow
+ @pytest.mark.slow
def test_hist_df_kwargs(self):
df = DataFrame(np.random.randn(10, 2))
_, ax = self.plt.subplots()
ax = df.plot.hist(bins=5, ax=ax)
assert len(ax.patches) == 10
- @slow
+ @pytest.mark.slow
def test_hist_df_with_nonnumerics(self):
# GH 9853
with tm.RNGContext(1):
@@ -361,7 +360,7 @@ def test_hist_df_with_nonnumerics(self):
ax = df.plot.hist(ax=ax) # bins=10
assert len(ax.patches) == 40
- @slow
+ @pytest.mark.slow
def test_hist_legacy(self):
_check_plot_works(self.ts.hist)
_check_plot_works(self.ts.hist, grid=False)
@@ -387,13 +386,13 @@ def test_hist_legacy(self):
with pytest.raises(ValueError):
self.ts.hist(by=self.ts.index, figure=fig)
- @slow
+ @pytest.mark.slow
def test_hist_bins_legacy(self):
df = DataFrame(np.random.randn(10, 2))
ax = df.hist(bins=2)[0][0]
assert len(ax.patches) == 2
- @slow
+ @pytest.mark.slow
def test_hist_layout(self):
df = self.hist_df
with pytest.raises(ValueError):
@@ -402,7 +401,7 @@ def test_hist_layout(self):
with pytest.raises(ValueError):
df.height.hist(layout=[1, 1])
- @slow
+ @pytest.mark.slow
def test_hist_layout_with_by(self):
df = self.hist_df
@@ -446,7 +445,7 @@ def test_hist_layout_with_by(self):
self._check_axes_shape(axes, axes_num=4, layout=(4, 2),
figsize=(12, 7))
- @slow
+ @pytest.mark.slow
def test_hist_no_overlap(self):
from matplotlib.pyplot import subplot, gcf
x = Series(randn(2))
@@ -459,7 +458,7 @@ def test_hist_no_overlap(self):
axes = fig.axes if self.mpl_ge_1_5_0 else fig.get_axes()
assert len(axes) == 2
- @slow
+ @pytest.mark.slow
def test_hist_secondary_legend(self):
# GH 9610
df = DataFrame(np.random.randn(30, 4), columns=list('abcd'))
@@ -499,7 +498,7 @@ def test_hist_secondary_legend(self):
assert ax.get_yaxis().get_visible()
tm.close()
- @slow
+ @pytest.mark.slow
def test_df_series_secondary_legend(self):
# GH 9779
df = DataFrame(np.random.randn(30, 3), columns=list('abc'))
@@ -563,14 +562,14 @@ def test_df_series_secondary_legend(self):
assert ax.get_yaxis().get_visible()
tm.close()
- @slow
+ @pytest.mark.slow
def test_plot_fails_with_dupe_color_and_style(self):
x = Series(randn(2))
with pytest.raises(ValueError):
_, ax = self.plt.subplots()
x.plot(style='k--', color='k', ax=ax)
- @slow
+ @pytest.mark.slow
def test_hist_kde(self):
_, ax = self.plt.subplots()
ax = self.ts.plot.hist(logy=True, ax=ax)
@@ -593,7 +592,7 @@ def test_hist_kde(self):
ylabels = ax.get_yticklabels()
self._check_text_labels(ylabels, [''] * len(ylabels))
- @slow
+ @pytest.mark.slow
def test_kde_kwargs(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
@@ -608,7 +607,7 @@ def test_kde_kwargs(self):
self._check_ax_scales(ax, yaxis='log')
self._check_text_labels(ax.yaxis.get_label(), 'Density')
- @slow
+ @pytest.mark.slow
def test_kde_missing_vals(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
@@ -619,7 +618,7 @@ def test_kde_missing_vals(self):
# gh-14821: check if the values have any missing values
assert any(~np.isnan(axes.lines[0].get_xdata()))
- @slow
+ @pytest.mark.slow
def test_hist_kwargs(self):
_, ax = self.plt.subplots()
ax = self.ts.plot.hist(bins=5, ax=ax)
@@ -637,7 +636,7 @@ def test_hist_kwargs(self):
ax = self.ts.plot.hist(align='left', stacked=True, ax=ax)
tm.close()
- @slow
+ @pytest.mark.slow
def test_hist_kde_color(self):
_, ax = self.plt.subplots()
ax = self.ts.plot.hist(logy=True, bins=10, color='b', ax=ax)
@@ -654,7 +653,7 @@ def test_hist_kde_color(self):
assert len(lines) == 1
self._check_colors(lines, ['r'])
- @slow
+ @pytest.mark.slow
def test_boxplot_series(self):
_, ax = self.plt.subplots()
ax = self.ts.plot.box(logy=True, ax=ax)
@@ -664,7 +663,7 @@ def test_boxplot_series(self):
ylabels = ax.get_yticklabels()
self._check_text_labels(ylabels, [''] * len(ylabels))
- @slow
+ @pytest.mark.slow
def test_kind_both_ways(self):
s = Series(range(3))
kinds = (plotting._core._common_kinds +
@@ -676,7 +675,7 @@ def test_kind_both_ways(self):
s.plot(kind=kind, ax=ax)
getattr(s.plot, kind)()
- @slow
+ @pytest.mark.slow
def test_invalid_plot_data(self):
s = Series(list('abcd'))
_, ax = self.plt.subplots()
@@ -686,7 +685,7 @@ def test_invalid_plot_data(self):
with pytest.raises(TypeError):
s.plot(kind=kind, ax=ax)
- @slow
+ @pytest.mark.slow
def test_valid_object_plot(self):
s = Series(lrange(10), dtype=object)
for kind in plotting._core._common_kinds:
@@ -708,7 +707,7 @@ def test_invalid_kind(self):
with pytest.raises(ValueError):
s.plot(kind='aasdf')
- @slow
+ @pytest.mark.slow
def test_dup_datetime_index_plot(self):
dr1 = date_range('1/1/2009', periods=4)
dr2 = date_range('1/2/2009', periods=4)
@@ -717,7 +716,7 @@ def test_dup_datetime_index_plot(self):
s = Series(values, index=index)
_check_plot_works(s.plot)
- @slow
+ @pytest.mark.slow
def test_errorbar_plot(self):
s = Series(np.arange(10), name='x')
@@ -764,14 +763,14 @@ def test_table(self):
_check_plot_works(self.series.plot, table=True)
_check_plot_works(self.series.plot, table=self.series)
- @slow
+ @pytest.mark.slow
def test_series_grid_settings(self):
# Make sure plot defaults to rcParams['axes.grid'] setting, GH 9792
self._check_grid_settings(Series([1, 2, 3]),
plotting._core._series_kinds +
plotting._core._common_kinds)
- @slow
+ @pytest.mark.slow
def test_standard_colors(self):
from pandas.plotting._style import _get_standard_colors
@@ -788,7 +787,7 @@ def test_standard_colors(self):
result = _get_standard_colors(3, color=[c])
assert result == [c] * 3
- @slow
+ @pytest.mark.slow
def test_standard_colors_all(self):
import matplotlib.colors as colors
from pandas.plotting._style import _get_standard_colors
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 7774d10c5eaf8..6d8a54b538237 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -20,8 +20,7 @@
from pandas.compat import lrange, range
from pandas import compat
-from pandas.util.testing import (slow,
- assert_series_equal,
+from pandas.util.testing import (assert_series_equal,
assert_almost_equal,
assert_frame_equal)
import pandas.util.testing as tm
@@ -2592,7 +2591,7 @@ def test_series_set_value(self):
# s2 = s.set_value(dates[1], index[1])
# assert s2.values.dtype == 'M8[ns]'
- @slow
+ @pytest.mark.slow
def test_slice_locs_indexerror(self):
times = [datetime(2000, 1, 1) + timedelta(minutes=i * 10)
for i in range(100000)]
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index 08c3a25e66b0e..2b972477ae999 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -16,7 +16,7 @@
from pandas import compat, _np_version_under1p11, _np_version_under1p13
from pandas.util.testing import (assert_almost_equal, assert_series_equal,
assert_frame_equal, assert_panel_equal,
- assert_panel4d_equal, slow)
+ assert_panel4d_equal)
from pandas.io.formats.printing import pprint_thing
import pandas.util.testing as tm
@@ -196,7 +196,7 @@ def test_integer_arithmetic_frame(self):
def test_integer_arithmetic_series(self):
self.run_series(self.integer.iloc[:, 0], self.integer.iloc[:, 0])
- @slow
+ @pytest.mark.slow
def test_integer_panel(self):
self.run_panel(_integer2_panel, np.random.randint(1, 100))
@@ -206,11 +206,11 @@ def test_float_arithemtic_frame(self):
def test_float_arithmetic_series(self):
self.run_series(self.frame2.iloc[:, 0], self.frame2.iloc[:, 0])
- @slow
+ @pytest.mark.slow
def test_float_panel(self):
self.run_panel(_frame2_panel, np.random.randn() + 0.1, binary_comp=0.8)
- @slow
+ @pytest.mark.slow
def test_panel4d(self):
with catch_warnings(record=True):
self.run_panel(tm.makePanel4D(), np.random.randn() + 0.5,
@@ -226,7 +226,7 @@ def test_mixed_arithmetic_series(self):
for col in self.mixed2.columns:
self.run_series(self.mixed2[col], self.mixed2[col], binary_comp=4)
- @slow
+ @pytest.mark.slow
def test_mixed_panel(self):
self.run_panel(_mixed2_panel, np.random.randint(1, 100),
binary_comp=-2)
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index 9c3765ffdb716..66a387ff4f0c8 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -2120,7 +2120,7 @@ def _non_null_values(x):
assert_equal(cov_x_y, mean_x_times_y -
(mean_x * mean_y))
- @tm.slow
+ @pytest.mark.slow
def test_ewm_consistency(self):
def _weights(s, com, adjust, ignore_na):
if isinstance(s, DataFrame):
@@ -2219,7 +2219,7 @@ def _ewma(s, com, min_periods, adjust, ignore_na):
_variance_debiasing_factors(x, com=com, adjust=adjust,
ignore_na=ignore_na)))
- @tm.slow
+ @pytest.mark.slow
def test_expanding_consistency(self):
# suppress warnings about empty slices, as we are deliberately testing
@@ -2293,7 +2293,7 @@ def test_expanding_consistency(self):
assert_equal(expanding_f_result,
expanding_apply_f_result)
- @tm.slow
+ @pytest.mark.slow
def test_rolling_consistency(self):
# suppress warnings about empty slices, as we are deliberately testing
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 17e09b38b20e0..d6ba9561340cc 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -50,13 +50,6 @@
from pandas._libs import testing as _testing
from pandas.io.common import urlopen
-try:
- import pytest
- slow = pytest.mark.slow
-except ImportError:
- # Should be ok to just ignore. If you actually need
- # slow then you'll hit an import error long before getting here.
- pass
N = 30
| And just use @pytest.mark.slow instead
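For reference, the replacement pattern looks roughly like this (a minimal sketch; the test body is a hypothetical placeholder, only the marker usage mirrors the change):

```python
import pytest

# Mark a test as slow with pytest's built-in mark mechanism,
# instead of importing a `slow` alias from pandas.util.testing.
# The test body below is purely illustrative.
@pytest.mark.slow
def test_expensive_computation():
    assert sum(range(10000)) == 49995000
```

Slow tests can then be deselected at the command line with `pytest -m "not slow"`.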
closes #16850 | https://api.github.com/repos/pandas-dev/pandas/pulls/16852 | 2017-07-07T12:05:50Z | 2017-07-12T15:51:07Z | 2017-07-12T15:51:07Z | 2017-12-20T16:10:35Z |
COMPAT: 32-bit compat for testing of indexers | diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index d13636e8b43e2..c9e0e3b10875c 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -497,7 +497,6 @@ def get_indexer(self, target, method=None, limit=None, tolerance=None):
codes = self.categories.get_indexer(target)
indexer, _ = self._engine.get_indexer_non_unique(codes)
-
return _ensure_platform_int(indexer)
@Appender(_index_shared_docs['get_indexer_non_unique'] % _index_doc_kwargs)
@@ -508,7 +507,8 @@ def get_indexer_non_unique(self, target):
target = target.categories
codes = self.categories.get_indexer(target)
- return self._engine.get_indexer_non_unique(codes)
+ indexer, missing = self._engine.get_indexer_non_unique(codes)
+ return _ensure_platform_int(indexer), missing
@Appender(_index_shared_docs['_convert_scalar_indexer'])
def _convert_scalar_indexer(self, key, kind=None):
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index a6177104d6273..c19dfac1a182a 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -132,6 +132,20 @@ def test_reindex_base(self):
with tm.assert_raises_regex(ValueError, 'Invalid fill method'):
idx.get_indexer(idx, method='invalid')
+ def test_get_indexer_consistency(self):
+ # See GH 16819
+ for name, index in self.indices.items():
+ if isinstance(index, IntervalIndex):
+ continue
+
+ indexer = index.get_indexer(index[0:2])
+ assert isinstance(indexer, np.ndarray)
+ assert indexer.dtype == np.intp
+
+ indexer, _ = index.get_indexer_non_unique(index[0:2])
+ assert isinstance(indexer, np.ndarray)
+ assert indexer.dtype == np.intp
+
def test_ndarray_compat_properties(self):
idx = self.create_index()
assert idx.T.equals(idx)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 7a81a125467d5..18dbe6624008a 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -1131,17 +1131,6 @@ def test_get_indexer_strings(self):
with pytest.raises(TypeError):
idx.get_indexer(['a', 'b', 'c', 'd'], method='pad', tolerance=2)
- def test_get_indexer_consistency(self):
- # See GH 16819
- for name, index in self.indices.items():
- indexer = index.get_indexer(index[0:2])
- assert isinstance(indexer, np.ndarray)
- assert indexer.dtype == np.intp
-
- indexer, _ = index.get_indexer_non_unique(index[0:2])
- assert isinstance(indexer, np.ndarray)
- assert indexer.dtype == np.intp
-
def test_get_loc(self):
idx = pd.Index([0, 1, 2])
all_methods = [None, 'pad', 'backfill', 'nearest']
| xref #16826
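The fix funnels the non-unique indexer through `_ensure_platform_int`, so that `get_indexer`/`get_indexer_non_unique` return `np.intp` (32-bit on 32-bit builds, 64-bit on 64-bit builds). A rough sketch of what such a helper does, not the actual pandas implementation:

```python
import numpy as np

# Coerce an indexer array to the platform integer type (np.intp),
# which is what the tests in indexes/common.py assert on.
def ensure_platform_int(values):
    values = np.asarray(values)
    if values.dtype != np.intp:
        values = values.astype(np.intp)
    return values

idx = ensure_platform_int(np.array([0, 1, -1], dtype="int64"))
assert idx.dtype == np.intp
```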
| https://api.github.com/repos/pandas-dev/pandas/pulls/16849 | 2017-07-07T10:14:37Z | 2017-07-07T13:10:32Z | 2017-07-07T13:10:32Z | 2017-07-07T13:11:53Z |
Added tests for _get_dtype | diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index ba510e68f9a21..c32e8590c5675 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -524,3 +524,42 @@ def test_is_complex_dtype():
assert com.is_complex_dtype(np.complex)
assert com.is_complex_dtype(np.array([1 + 1j, 5]))
+
+
+@pytest.mark.parametrize('input_param,result', [
+ (int, np.dtype(int)),
+ ('int32', np.dtype('int32')),
+ (float, np.dtype(float)),
+ ('float64', np.dtype('float64')),
+ (np.dtype('float64'), np.dtype('float64')),
+ pytest.mark.xfail((str, np.dtype('<U')), ),
+ (pd.Series([1, 2], dtype=np.dtype('int16')), np.dtype('int16')),
+ (pd.Series(['a', 'b']), np.dtype(object)),
+ (pd.Index([1, 2]), np.dtype('int64')),
+ (pd.Index(['a', 'b']), np.dtype(object)),
+ ('category', 'category'),
+ (pd.Categorical(['a', 'b']).dtype, CategoricalDtype()),
+ pytest.mark.xfail((pd.Categorical(['a', 'b']), CategoricalDtype()),),
+ (pd.CategoricalIndex(['a', 'b']).dtype, CategoricalDtype()),
+ pytest.mark.xfail((pd.CategoricalIndex(['a', 'b']), CategoricalDtype()),),
+ (pd.DatetimeIndex([1, 2]), np.dtype('<M8[ns]')),
+ (pd.DatetimeIndex([1, 2]).dtype, np.dtype('<M8[ns]')),
+ ('<M8[ns]', np.dtype('<M8[ns]')),
+ ('datetime64[ns, Europe/London]', DatetimeTZDtype('ns', 'Europe/London')),
+ (pd.SparseSeries([1, 2], dtype='int32'), np.dtype('int32')),
+ (pd.SparseSeries([1, 2], dtype='int32').dtype, np.dtype('int32')),
+ (PeriodDtype(freq='D'), PeriodDtype(freq='D')),
+ ('period[D]', PeriodDtype(freq='D')),
+ (IntervalDtype(), IntervalDtype()),
+])
+def test__get_dtype(input_param, result):
+ assert com._get_dtype(input_param) == result
+
+
+@pytest.mark.parametrize('input_param', [None,
+ 1, 1.2,
+ 'random string',
+ pd.DataFrame([1, 2])])
+def test__get_dtype_fails(input_param):
+ # python objects
+ pytest.raises(TypeError, com._get_dtype, input_param)
| - [ ] closes #xxxx
- [ x] tests added / passed
- [ ] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [ ] whatsnew entry
See #16817 for a discussion on this.
Added tests for ``pd.core.dtypes.common._get_dtype``. Some things to note:
* Tests for Categoricals fail, I've marked them xfail. This is because I propose allowing categoricals to pass in a later commit.
* ``Interval`` objects don't have a dtype, so they would fail with an ``AttributeError``. I therefore haven't added tests for ``Interval``; whether the absence of a dtype on ``Interval`` is itself a bug/inconsistency could be considered separately.
This is my first serious pull request, and I may have made errors. If anything is wrong (against the style guide, etc.), please let me know.
I will open a pull request with tests for ``_get_dtype_type`` once this one has been accepted.
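For context, a numpy-only sketch of the kind of coercion being tested here (the real ``_get_dtype`` also handles pandas extension dtypes such as categorical, period and datetime-with-tz; the helper name below is illustrative, not the pandas implementation):

```python
import numpy as np

# Illustrative coercion of assorted inputs to a numpy dtype, mirroring
# the plain-numpy cases in the parametrized test above.
def coerce_dtype(obj):
    if hasattr(obj, "dtype"):        # ndarray / Series / Index-like
        return obj.dtype
    return np.dtype(obj)             # str, python type, or np.dtype

assert coerce_dtype(int) == np.dtype(int)
assert coerce_dtype("int32") == np.dtype("int32")
assert coerce_dtype(np.array([1.0])) == np.dtype("float64")
```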
| https://api.github.com/repos/pandas-dev/pandas/pulls/16845 | 2017-07-06T22:27:19Z | 2017-07-11T10:08:58Z | 2017-07-11T10:08:58Z | 2017-07-11T10:51:36Z |
0.20.3 backports | diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 9281c51059087..959858fb50f89 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,4 +1,4 @@
- [ ] closes #xxxx
- [ ] tests added / passed
- - [ ] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff``
+ - [ ] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [ ] whatsnew entry
diff --git a/.travis.yml b/.travis.yml
index b7c18d2850a15..fa3c9fadf2ddd 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -34,57 +34,57 @@ matrix:
language: generic
env:
- JOB="3.5_OSX" TEST_ARGS="--skip-slow --skip-network"
- - os: linux
+ - dist: trusty
env:
- JOB="2.7_LOCALE" TEST_ARGS="--only-slow --skip-network" LOCALE_OVERRIDE="zh_CN.UTF-8"
addons:
apt:
packages:
- language-pack-zh-hans
- - os: linux
+ - dist: trusty
env:
- JOB="2.7" TEST_ARGS="--skip-slow" LINT=true
addons:
apt:
packages:
- python-gtk2
- - os: linux
+ - dist: trusty
env:
- JOB="3.5" TEST_ARGS="--skip-slow --skip-network" COVERAGE=true
addons:
apt:
packages:
- xsel
- - os: linux
+ - dist: trusty
env:
- JOB="3.6" TEST_ARGS="--skip-slow --skip-network" PANDAS_TESTING_MODE="deprecate" CONDA_FORGE=true
# In allow_failures
- - os: linux
+ - dist: trusty
env:
- JOB="2.7_SLOW" TEST_ARGS="--only-slow --skip-network"
# In allow_failures
- - os: linux
+ - dist: trusty
env:
- JOB="2.7_BUILD_TEST" TEST_ARGS="--skip-slow" BUILD_TEST=true
# In allow_failures
- - os: linux
+ - dist: trusty
env:
- JOB="3.6_NUMPY_DEV" TEST_ARGS="--skip-slow --skip-network" PANDAS_TESTING_MODE="deprecate"
# In allow_failures
- - os: linux
+ - dist: trusty
env:
- JOB="3.5_DOC" DOC=true
allow_failures:
- - os: linux
+ - dist: trusty
env:
- JOB="2.7_SLOW" TEST_ARGS="--only-slow --skip-network"
- - os: linux
+ - dist: trusty
env:
- JOB="2.7_BUILD_TEST" TEST_ARGS="--skip-slow" BUILD_TEST=true
- - os: linux
+ - dist: trusty
env:
- JOB="3.6_NUMPY_DEV" TEST_ARGS="--skip-slow --skip-network" PANDAS_TESTING_MODE="deprecate"
- - os: linux
+ - dist: trusty
env:
- JOB="3.5_DOC" DOC=true
diff --git a/ci/requirements-3.5_DOC.build b/ci/requirements-3.5_DOC.build
index 73aeb3192242f..b87409b36c363 100644
--- a/ci/requirements-3.5_DOC.build
+++ b/ci/requirements-3.5_DOC.build
@@ -1,5 +1,5 @@
python=3.5*
python-dateutil
pytz
-numpy
+numpy==1.11
cython
diff --git a/ci/requirements-3.5_DOC.run b/ci/requirements-3.5_DOC.run
index 9647ab53ab835..d8c713a781021 100644
--- a/ci/requirements-3.5_DOC.run
+++ b/ci/requirements-3.5_DOC.run
@@ -1,7 +1,7 @@
ipython
ipykernel
ipywidgets
-sphinx
+sphinx=1.5*
nbconvert
nbformat
notebook
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index aacfe25b91564..cd444f796fabb 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -525,6 +525,12 @@ run this slightly modified command::
git diff master --name-only -- '*.py' | grep 'pandas/' | xargs flake8
+Note that on Windows, ``grep``, ``xargs``, and other tools are likely
+unavailable. However, this has been shown to work on smaller commits in the
+standard Windows command line::
+
+ git diff master -u -- "*.py" | flake8 --diff
+
Backwards Compatibility
~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 9692766505d7a..f44ea170df201 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -712,6 +712,16 @@ index column inference and discard the last column, pass ``index_col=False``:
pd.read_csv(StringIO(data))
pd.read_csv(StringIO(data), index_col=False)
+If a subset of data is being parsed using the ``usecols`` option, the
+``index_col`` specification is based on that subset, not the original data.
+
+.. ipython:: python
+
+ data = 'a,b,c\n4,apple,bat,\n8,orange,cow,'
+ print(data)
+ pd.read_csv(StringIO(data), usecols=['b', 'c'])
+ pd.read_csv(StringIO(data), usecols=['b', 'c'], index_col=0)
+
.. _io.parse_dates:
Date Handling
@@ -4415,34 +4425,6 @@ Performance
`Here <http://stackoverflow.com/questions/14355151/how-to-make-pandas-hdfstore-put-operation-faster/14370190#14370190>`__
for more information and some solutions.
-Experimental
-''''''''''''
-
-HDFStore supports ``Panel4D`` storage.
-
-.. ipython:: python
- :okwarning:
-
- wp = pd.Panel(randn(2, 5, 4), items=['Item1', 'Item2'],
- major_axis=pd.date_range('1/1/2000', periods=5),
- minor_axis=['A', 'B', 'C', 'D'])
- p4d = pd.Panel4D({ 'l1' : wp })
- p4d
- store.append('p4d', p4d)
- store
-
-These, by default, index the three axes ``items, major_axis,
-minor_axis``. On an ``AppendableTable`` it is possible to setup with the
-first append a different indexing scheme, depending on how you want to
-store your data. Pass the ``axes`` keyword with a list of dimensions
-(currently must by exactly 1 less than the total dimensions of the
-object). This cannot be changed after table creation.
-
-.. ipython:: python
- :okwarning:
-
- store.append('p4d2', p4d, axes=['labels', 'major_axis', 'minor_axis'])
- store.select('p4d2', where='labels=l1 and items=Item1 and minor_axis=A')
.. ipython:: python
:suppress:
diff --git a/doc/source/release.rst b/doc/source/release.rst
index bf272e243e0dd..67c62cd45f7c7 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -37,6 +37,44 @@ analysis / manipulation tool available in any language.
* Binary installers on PyPI: http://pypi.python.org/pypi/pandas
* Documentation: http://pandas.pydata.org
+pandas 0.20.3
+-------------
+
+**Release date:** July 7, 2017
+
+This is a minor bug-fix release in the 0.20.x series and includes some small regression fixes
+and bug fixes. We recommend that all users upgrade to this version.
+
+See the :ref:`v0.20.3 Whatsnew <whatsnew_0203>` overview for an extensive list
+of all enhancements and bugs that have been fixed in 0.20.3
+
+Thanks
+~~~~~~
+
+A total of 20 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Bran Yang
+* Chris
+* Chris Kerr +
+* DSM
+* David Gwynne
+* Douglas Rudd
+* Forbidden Donut +
+* Jeff Reback
+* Joris Van den Bossche
+* Karel De Brabandere +
+* Peter Quackenbush +
+* Pradyumna Reddy Chinthala +
+* Telt +
+* Tom Augspurger
+* chris-b1
+* gfyoung
+* ian +
+* jdeschenes +
+* kjford +
+* ri938 +
+
pandas 0.20.2
-------------
diff --git a/doc/source/whatsnew.rst b/doc/source/whatsnew.rst
index ffaeeb78c2799..4b773aab72ad0 100644
--- a/doc/source/whatsnew.rst
+++ b/doc/source/whatsnew.rst
@@ -18,6 +18,8 @@ What's New
These are new features and improvements of note in each release.
+.. include:: whatsnew/v0.20.3.txt
+
.. include:: whatsnew/v0.20.2.txt
.. include:: whatsnew/v0.20.0.txt
diff --git a/doc/source/whatsnew/v0.20.3.txt b/doc/source/whatsnew/v0.20.3.txt
new file mode 100644
index 0000000000000..582f975f81a7a
--- /dev/null
+++ b/doc/source/whatsnew/v0.20.3.txt
@@ -0,0 +1,60 @@
+.. _whatsnew_0203:
+
+v0.20.3 (July 7, 2017)
+-----------------------
+
+This is a minor bug-fix release in the 0.20.x series and includes some small regression fixes
+and bug fixes. We recommend that all users upgrade to this version.
+
+.. contents:: What's new in v0.20.3
+ :local:
+ :backlinks: none
+
+.. _whatsnew_0203.bug_fixes:
+
+Bug Fixes
+~~~~~~~~~
+
+- Fixed a bug in failing to compute rolling computations of a column-MultiIndexed ``DataFrame`` (:issue:`16789`, :issue:`16825`)
+- Fixed a pytest marker failing downstream packages' tests suites (:issue:`16680`)
+
+Conversion
+^^^^^^^^^^
+
+- Bug in pickle compat prior to the v0.20.x series, when ``UTC`` is a timezone in a Series/DataFrame/Index (:issue:`16608`)
+- Bug in ``Series`` construction when passing a ``Series`` with ``dtype='category'`` (:issue:`16524`).
+- Bug in :meth:`DataFrame.astype` when passing a ``Series`` as the ``dtype`` kwarg. (:issue:`16717`).
+
+Indexing
+^^^^^^^^
+
+- Bug in ``Float64Index`` causing an empty array instead of ``None`` to be returned from ``.get(np.nan)`` on a Series whose index did not contain any ``NaN`` s (:issue:`8569`)
+- Bug in ``MultiIndex.isin`` causing an error when passing an empty iterable (:issue:`16777`)
+- Fixed a bug in a slicing DataFrame/Series that have a ``TimedeltaIndex`` (:issue:`16637`)
+
+I/O
+^^^
+
+- Bug in :func:`read_csv` in which files weren't opened as binary files by the C engine on Windows, causing EOF characters mid-field, which would fail (:issue:`16039`, :issue:`16559`, :issue:`16675`)
+- Bug in :func:`read_hdf` in which reading a ``Series`` saved to an HDF file in 'fixed' format fails when an explicit ``mode='r'`` argument is supplied (:issue:`16583`)
+- Bug in :meth:`DataFrame.to_latex` where ``bold_rows`` was wrongly specified to be ``True`` by default, whereas in reality row labels remained non-bold whatever parameter provided. (:issue:`16707`)
+- Fixed an issue with :meth:`DataFrame.style` where generated element ids were not unique (:issue:`16780`)
+- Fixed loading a ``DataFrame`` with a ``PeriodIndex``, from a ``format='fixed'`` HDFStore, in Python 3, that was written in Python 2 (:issue:`16781`)
+
+Plotting
+^^^^^^^^
+
+- Fixed regression that prevented RGB and RGBA tuples from being used as color arguments (:issue:`16233`)
+- Fixed an issue with :meth:`DataFrame.plot.scatter` that incorrectly raised a ``KeyError`` when categorical data is used for plotting (:issue:`16199`)
+
+Reshaping
+^^^^^^^^^
+
+- ``PeriodIndex`` / ``TimedeltaIndex.join`` was missing the ``sort=`` kwarg (:issue:`16541`)
+- Bug in joining on a ``MultiIndex`` with a ``category`` dtype for a level (:issue:`16627`).
+- Bug in :func:`merge` when merging/joining with multiple categorical columns (:issue:`16767`)
+
+Categorical
+^^^^^^^^^^^
+
+- Bug in ``DataFrame.sort_values`` not respecting the ``kind`` parameter with categorical data (:issue:`16793`)
diff --git a/pandas/_libs/src/parser/io.c b/pandas/_libs/src/parser/io.c
index dee7d9d9281c4..8300e889d4157 100644
--- a/pandas/_libs/src/parser/io.c
+++ b/pandas/_libs/src/parser/io.c
@@ -13,6 +13,10 @@ The full license is in the LICENSE file, distributed with this software.
#include <sys/stat.h>
#include <fcntl.h>
+#ifndef O_BINARY
+#define O_BINARY 0
+#endif /* O_BINARY */
+
/*
On-disk FILE, uncompressed
*/
@@ -23,26 +27,25 @@ void *new_file_source(char *fname, size_t buffer_size) {
return NULL;
}
- fs->fd = open(fname, O_RDONLY);
+ fs->fd = open(fname, O_RDONLY | O_BINARY);
if (fs->fd == -1) {
- goto err_free;
+ free(fs);
+ return NULL;
}
// Only allocate this heap memory if we are not memory-mapping the file
fs->buffer = (char *)malloc((buffer_size + 1) * sizeof(char));
if (fs->buffer == NULL) {
- goto err_free;
+ close(fs->fd);
+ free(fs);
+ return NULL;
}
memset(fs->buffer, '\0', buffer_size + 1);
fs->size = buffer_size;
return (void *)fs;
-
-err_free:
- free(fs);
- return NULL;
}
void *new_rd_source(PyObject *obj) {
@@ -184,17 +187,20 @@ void *new_mmap(char *fname) {
fprintf(stderr, "new_file_buffer: malloc() failed.\n");
return (NULL);
}
- mm->fd = open(fname, O_RDONLY);
+ mm->fd = open(fname, O_RDONLY | O_BINARY);
if (mm->fd == -1) {
fprintf(stderr, "new_file_buffer: open(%s) failed. errno =%d\n",
fname, errno);
- goto err_free;
+ free(mm);
+ return NULL;
}
if (fstat(mm->fd, &stat) == -1) {
fprintf(stderr, "new_file_buffer: fstat() failed. errno =%d\n",
errno);
- goto err_close;
+ close(mm->fd);
+ free(mm);
+ return NULL;
}
filesize = stat.st_size; /* XXX This might be 32 bits. */
@@ -202,19 +208,15 @@ void *new_mmap(char *fname) {
if (mm->memmap == MAP_FAILED) {
/* XXX Eventually remove this print statement. */
fprintf(stderr, "new_file_buffer: mmap() failed.\n");
- goto err_close;
+ close(mm->fd);
+ free(mm);
+ return NULL;
}
mm->size = (off_t)filesize;
mm->position = 0;
return mm;
-
-err_close:
- close(mm->fd);
-err_free:
- free(mm);
- return NULL;
}
int del_mmap(void *ptr) {
diff --git a/pandas/compat/numpy/__init__.py b/pandas/compat/numpy/__init__.py
index 4a9a2647ece0f..2c5a18973afa8 100644
--- a/pandas/compat/numpy/__init__.py
+++ b/pandas/compat/numpy/__init__.py
@@ -15,6 +15,7 @@
_np_version_under1p11 = _nlv < '1.11'
_np_version_under1p12 = _nlv < '1.12'
_np_version_under1p13 = _nlv < '1.13'
+_np_version_under1p14 = _nlv < '1.14'
if _nlv < '1.7.0':
raise ImportError('this version of pandas is incompatible with '
@@ -74,4 +75,6 @@ def np_array_datetime64_compat(arr, *args, **kwargs):
'_np_version_under1p10',
'_np_version_under1p11',
'_np_version_under1p12',
+ '_np_version_under1p13',
+ '_np_version_under1p14'
]
diff --git a/pandas/compat/numpy/function.py b/pandas/compat/numpy/function.py
index a324bf94171ce..ccbd3d9704e0c 100644
--- a/pandas/compat/numpy/function.py
+++ b/pandas/compat/numpy/function.py
@@ -107,6 +107,14 @@ def validate_argmax_with_skipna(skipna, args, kwargs):
validate_argsort = CompatValidator(ARGSORT_DEFAULTS, fname='argsort',
max_fname_arg_count=0, method='both')
+# two different signatures of argsort, this second validation
+# for when the `kind` param is supported
+ARGSORT_DEFAULTS_KIND = OrderedDict()
+ARGSORT_DEFAULTS_KIND['axis'] = -1
+ARGSORT_DEFAULTS_KIND['order'] = None
+validate_argsort_kind = CompatValidator(ARGSORT_DEFAULTS_KIND, fname='argsort',
+ max_fname_arg_count=0, method='both')
+
def validate_argsort_with_ascending(ascending, args, kwargs):
"""
@@ -121,7 +129,7 @@ def validate_argsort_with_ascending(ascending, args, kwargs):
args = (ascending,) + args
ascending = True
- validate_argsort(args, kwargs, max_fname_arg_count=1)
+ validate_argsort_kind(args, kwargs, max_fname_arg_count=3)
return ascending
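
The validator pattern above can be sketched in isolation: pandas accepts numpy-compatible keywords but only at their default values. In this simplified stand-in (function body is an assumption, not the real `CompatValidator`), `kind` is deliberately absent from the defaults dict, because `Categorical.argsort` now consumes it as an explicit parameter before validation:

```python
from collections import OrderedDict

# defaults for the argsort signature variant that supports `kind`
ARGSORT_DEFAULTS_KIND = OrderedDict([('axis', -1), ('order', None)])

def validate_argsort_kind(kwargs, defaults=ARGSORT_DEFAULTS_KIND):
    # simplified sketch: unknown keywords are rejected outright,
    # known ones must keep numpy's default value
    for key, value in kwargs.items():
        if key not in defaults:
            raise TypeError('argsort() got an unexpected keyword %r' % key)
        if value != defaults[key]:
            raise ValueError('the %r parameter is not supported' % key)

validate_argsort_kind({'axis': -1, 'order': None})  # defaults: accepted
```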
diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index b875bbb0d63c0..f6223c48994ae 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -15,7 +15,7 @@ def load_reduce(self):
args = stack.pop()
func = stack[-1]
- if type(args[0]) is type:
+ if len(args) and type(args[0]) is type:
n = args[0].__name__ # noqa
try:
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index a5e61797bd478..7b169123006bd 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -1284,7 +1284,7 @@ def check_for_ordered(self, op):
"you can use .as_ordered() to change the "
"Categorical to an ordered one\n".format(op=op))
- def argsort(self, ascending=True, *args, **kwargs):
+ def argsort(self, ascending=True, kind='quicksort', *args, **kwargs):
"""
Returns the indices that would sort the Categorical instance if
'sort_values' was called. This function is implemented to provide
@@ -1305,7 +1305,7 @@ def argsort(self, ascending=True, *args, **kwargs):
numpy.ndarray.argsort
"""
ascending = nv.validate_argsort_with_ascending(ascending, args, kwargs)
- result = np.argsort(self._codes.copy(), **kwargs)
+ result = np.argsort(self._codes.copy(), kind=kind, **kwargs)
if not ascending:
result = result[::-1]
return result
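
Forwarding `kind` matters for sort stability: only mergesort guarantees that ties keep their original relative order, which is what the `nargsort` change below (and the new `test_stable_categorial` test) relies on. A small numpy-only illustration with made-up codes:

```python
import numpy as np

# category codes with ties: two 1s followed by two 0s
codes = np.array([1, 1, 0, 0])

# mergesort is stable, so positions with equal codes keep their
# original relative order after sorting
stable = np.argsort(codes, kind='mergesort')
print(stable.tolist())  # → [2, 3, 0, 1]
```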
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index bfec1ec3ebe8c..2eebf3704253e 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -396,7 +396,10 @@ def is_timedelta64_dtype(arr_or_dtype):
if arr_or_dtype is None:
return False
- tipo = _get_dtype_type(arr_or_dtype)
+ try:
+ tipo = _get_dtype_type(arr_or_dtype)
+ except ValueError:
+ return False
return issubclass(tipo, np.timedelta64)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 1a1bbc37cd816..87ce570e992b9 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1508,7 +1508,7 @@ def to_xarray(self):
`to_latex`-specific options:
- bold_rows : boolean, default True
+ bold_rows : boolean, default False
Make the row labels bold in the output
column_format : str, default None
The columns format as specified in `LaTeX table format
@@ -1557,7 +1557,7 @@ def to_xarray(self):
@Appender(_shared_docs['to_latex'] % _shared_doc_kwargs)
def to_latex(self, buf=None, columns=None, col_space=None, header=True,
index=True, na_rep='NaN', formatters=None, float_format=None,
- sparsify=None, index_names=True, bold_rows=True,
+ sparsify=None, index_names=True, bold_rows=False,
column_format=None, longtable=None, escape=None,
encoding=None, decimal='.', multicolumn=None,
multicolumn_format=None, multirow=None):
@@ -3379,12 +3379,12 @@ def astype(self, dtype, copy=True, errors='raise', **kwargs):
-------
casted : type of caller
"""
- if isinstance(dtype, collections.Mapping):
+ if is_dict_like(dtype):
if self.ndim == 1: # i.e. Series
- if len(dtype) > 1 or list(dtype.keys())[0] != self.name:
+ if len(dtype) > 1 or self.name not in dtype:
raise KeyError('Only the Series name can be used for '
'the key in Series dtype mappings.')
- new_type = list(dtype.values())[0]
+ new_type = dtype[self.name]
return self.astype(new_type, copy, errors, **kwargs)
elif self.ndim > 2:
raise NotImplementedError(
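
Switching from `isinstance(dtype, collections.Mapping)` to `is_dict_like` is what lets a `Series` of dtypes work as well as a plain dict: a Series is not a `Mapping` subclass, but it quacks like one. A rough duck-typing sketch (the real `is_dict_like` lives in pandas' dtype inference helpers; `SeriesLike` is a made-up stand-in):

```python
def is_dict_like(obj):
    # duck-typing check used in place of isinstance(obj, Mapping):
    # anything indexable that also exposes .keys() qualifies
    return hasattr(obj, '__getitem__') and hasattr(obj, 'keys')

class SeriesLike(object):
    # stand-in for pd.Series: indexable with .keys(), but not a
    # Mapping subclass, so the old isinstance check missed it
    def __init__(self, data):
        self._data = data

    def __getitem__(self, key):
        return self._data[key]

    def keys(self):
        return self._data.keys()

print(is_dict_like({'b': 'str'}), is_dict_like(SeriesLike({'b': 'str'})))
```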
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 07ba3c196c8a6..fd7e2578ca736 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3129,7 +3129,7 @@ def _join_non_unique(self, other, how='left', return_indexers=False):
left_idx = _ensure_platform_int(left_idx)
right_idx = _ensure_platform_int(right_idx)
- join_index = self.values.take(left_idx)
+ join_index = np.asarray(self.values.take(left_idx))
mask = left_idx == -1
np.putmask(join_index, mask, other._values.take(right_idx))
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index d9e0c218bfafc..d13636e8b43e2 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -560,6 +560,9 @@ def take(self, indices, axis=0, allow_fill=True,
na_value=-1)
return self._create_from_codes(taken)
+ def is_dtype_equal(self, other):
+ return self._data.is_dtype_equal(other)
+
take_nd = take
def map(self, mapper):
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index f30da5b05f8ae..d54dae2ae421c 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1134,10 +1134,11 @@ def from_tuples(cls, tuples, sortorder=None, names=None):
of iterables
"""
if len(tuples) == 0:
- # I think this is right? Not quite sure...
- raise TypeError('Cannot infer number of levels from empty list')
-
- if isinstance(tuples, (np.ndarray, Index)):
+ if names is None:
+ msg = 'Cannot infer number of levels from empty list'
+ raise TypeError(msg)
+ arrays = [[]] * len(names)
+ elif isinstance(tuples, (np.ndarray, Index)):
if isinstance(tuples, Index):
tuples = tuples._values
@@ -1388,6 +1389,9 @@ def __getitem__(self, key):
# cannot be sure whether the result will be sorted
sortorder = None
+ if isinstance(key, Index):
+ key = np.asarray(key)
+
new_labels = [lab[key] for lab in self.labels]
return MultiIndex(levels=self.levels, labels=new_labels,
@@ -2623,8 +2627,9 @@ def _wrap_joined_index(self, joined, other):
@Appender(Index.isin.__doc__)
def isin(self, values, level=None):
if level is None:
- return algos.isin(self.values,
- MultiIndex.from_tuples(values).values)
+ values = MultiIndex.from_tuples(values,
+ names=self.names).values
+ return algos.isin(self.values, values)
else:
num = self._get_level_number(level)
levs = self.levels[num]
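
Together, the two `multi.py` changes make empty inputs round-trip: `from_tuples([])` works when `names` is given, and `isin` forwards the index's own names so an empty `values` list no longer raises. A quick check (requires pandas):

```python
import pandas as pd

# empty from_tuples is now valid when names are supplied (GH 16777)
mi = pd.MultiIndex.from_tuples([], names=['a', 'b'])
print(len(mi), list(mi.names))  # → 0 ['a', 'b']

# isin with empty values previously went through an unnamed
# from_tuples([]) and raised; now it simply returns all-False
idx = pd.MultiIndex.from_arrays([[1, 2], [3, 4]], names=['a', 'b'])
print(idx.isin([]).tolist())  # → [False, False]
```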
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index bdae0ac7ac5e9..72d521cbe2d60 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -369,6 +369,8 @@ def get_loc(self, key, method=None, tolerance=None):
except (ValueError, IndexError):
# should only need to catch ValueError here but on numpy
# 1.7 .item() can raise IndexError when NaNs are present
+ if not len(nan_idxs):
+ raise KeyError(key)
return nan_idxs
except (TypeError, NotImplementedError):
pass
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 15fd9b7dc2b6a..50d59581dc0aa 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -919,14 +919,16 @@ def insert(self, loc, item):
self[loc:].asi8))
return self._shallow_copy(idx)
- def join(self, other, how='left', level=None, return_indexers=False):
+ def join(self, other, how='left', level=None, return_indexers=False,
+ sort=False):
"""
See Index.join
"""
self._assert_can_do_setop(other)
result = Int64Index.join(self, other, how=how, level=level,
- return_indexers=return_indexers)
+ return_indexers=return_indexers,
+ sort=sort)
if return_indexers:
result, lidx, ridx = result
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index ab94a5bffb4f9..faec813df3993 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -516,7 +516,8 @@ def union(self, other):
result.freq = to_offset(result.inferred_freq)
return result
- def join(self, other, how='left', level=None, return_indexers=False):
+ def join(self, other, how='left', level=None, return_indexers=False,
+ sort=False):
"""
See Index.join
"""
@@ -527,7 +528,8 @@ def join(self, other, how='left', level=None, return_indexers=False):
pass
return Index.join(self, other, how=how, level=level,
- return_indexers=return_indexers)
+ return_indexers=return_indexers,
+ sort=sort)
def _wrap_joined_index(self, joined, other):
name = self.name if self.name == other.name else None
@@ -680,8 +682,7 @@ def get_loc(self, key, method=None, tolerance=None):
-------
loc : int
"""
-
- if is_bool_indexer(key):
+ if is_bool_indexer(key) or is_timedelta64_dtype(key):
raise TypeError
if isnull(key):
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index a01e3dc46dfe9..50f2f9b52e111 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1288,7 +1288,7 @@ def __init__(self, obj, name):
.iloc for positional indexing
See the documentation here:
-http://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate_ix"""
+http://pandas.pydata.org/pandas-docs/stable/indexing.html#ix-indexer-is-deprecated""" # noqa
warnings.warn(_ix_deprecation_warning,
DeprecationWarning, stacklevel=3)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 15851a17274ca..d4e7fc1a59d21 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -471,7 +471,7 @@ def astype(self, dtype, copy=False, errors='raise', values=None, **kwargs):
**kwargs)
def _astype(self, dtype, copy=False, errors='raise', values=None,
- klass=None, mgr=None, **kwargs):
+ klass=None, mgr=None, raise_on_error=False, **kwargs):
"""
Coerce to the new type (if copy=True, return a new copy)
raise on an except if raise == True
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index c55f4b5bf935f..7c2301e3666d4 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1384,13 +1384,14 @@ def _factorize_keys(lk, rk, sort=True):
lk = lk.values
rk = rk.values
- # if we exactly match in categories, allow us to use codes
+ # if we exactly match in categories, allow us to factorize on codes
if (is_categorical_dtype(lk) and
is_categorical_dtype(rk) and
lk.is_dtype_equal(rk)):
- return lk.codes, rk.codes, len(lk.categories)
-
- if is_int_or_datetime_dtype(lk) and is_int_or_datetime_dtype(rk):
+ klass = libhashtable.Int64Factorizer
+ lk = _ensure_int64(lk.codes)
+ rk = _ensure_int64(rk.codes)
+ elif is_int_or_datetime_dtype(lk) and is_int_or_datetime_dtype(rk):
klass = libhashtable.Int64Factorizer
lk = _ensure_int64(com._values_from_object(lk))
rk = _ensure_int64(com._values_from_object(rk))
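
Routing matched categoricals through `Int64Factorizer` on their codes, instead of returning the raw codes directly, keeps the downstream join machinery uniform. The factorizing step itself can be sketched without pandas (a toy stand-in for the hashtable-based `Int64Factorizer`):

```python
def factorize_pair(lk, rk):
    # toy factorizer: assign dense labels in first-seen order across
    # both key arrays, so equal keys on either side share a label
    table = {}

    def labels(values):
        out = []
        for v in values:
            if v not in table:
                table[v] = len(table)
            out.append(table[v])
        return out

    llab = labels(lk)
    rlab = labels(rk)
    return llab, rlab, len(table)

# left/right category codes (made-up example)
llab, rlab, count = factorize_pair([0, 1, 2], [2, 0, 0])
print(llab, rlab, count)  # → [0, 1, 2] [2, 0, 0] 3
```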
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 69b427df981b7..10b80cbc3483d 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -233,7 +233,7 @@ def nargsort(items, kind='quicksort', ascending=True, na_position='last'):
# specially handle Categorical
if is_categorical_dtype(items):
- return items.argsort(ascending=ascending)
+ return items.argsort(ascending=ascending, kind=kind)
items = np.asanyarray(items)
idx = np.arange(len(items))
diff --git a/pandas/core/window.py b/pandas/core/window.py
index ba7e79944ab0e..bbe47ec398628 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -831,7 +831,7 @@ def count(self):
return self._wrap_results(results, blocks, obj)
- _shared_docs['apply'] = dedent("""
+ _shared_docs['apply'] = dedent(r"""
%(name)s function apply
Parameters
@@ -1911,7 +1911,8 @@ def dataframe_from_int_dict(data, frame_template):
# TODO: not the most efficient (perf-wise)
# though not bad code-wise
- from pandas import Panel, MultiIndex, Index
+ from pandas import Panel, MultiIndex
+
with warnings.catch_warnings(record=True):
p = Panel.from_dict(results).swapaxes('items', 'major')
if len(p.major_axis) > 0:
@@ -1934,10 +1935,10 @@ def dataframe_from_int_dict(data, frame_template):
# reset our index names to arg1 names
# reset our column names to arg2 names
# careful not to mutate the original names
- result.columns = Index(result.columns).set_names(
- arg2.columns.name)
+ result.columns = result.columns.set_names(
+ arg2.columns.names)
result.index = result.index.set_names(
- [arg1.index.name, arg1.columns.name])
+ arg1.index.names + arg1.columns.names)
return result
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index 9b0f49ccc45b1..bff5f6c17fdb8 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -79,7 +79,9 @@
index_col : int, list of ints, default None
Column (0-indexed) to use as the row labels of the DataFrame.
Pass None if there is no such column. If a list is passed,
- those columns will be combined into a ``MultiIndex``
+ those columns will be combined into a ``MultiIndex``. If a
+ subset of data is selected with ``parse_cols``, index_col
+ is based on the subset.
names : array-like, default None
List of column names to use. If file contains no header row,
then you should explicitly pass header=None
@@ -90,7 +92,7 @@
content.
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32}
- Use `str` or `object` to preserve and not interpret dtype.
+ Use `object` to preserve data as stored in Excel and not interpret dtype.
If converters are specified, they will be applied INSTEAD
of dtype conversion.
@@ -110,8 +112,9 @@
* If None then parse all columns,
* If int then indicates last column to be parsed
* If list of ints then indicates list of column numbers to be parsed
- * If string then indicates comma separated list of column names and
- column ranges (e.g. "A:E" or "A,C,E:F")
+ * If string then indicates comma-separated list of Excel column letters and
+ column ranges (e.g. "A:E" or "A,C,E:F"). Ranges are inclusive of
+ both sides.
squeeze : boolean, default False
If the parsed data only contains one column then return a Series
na_values : scalar, str, list-like, or dict, default None
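
The clarified `parse_cols` wording ("Excel column letters", ranges inclusive of both sides) can be pinned down with a tiny parser sketch. This helper is hypothetical, not pandas API, and handles single-letter columns only; pandas' real implementation also copes with multi-letter columns like "AA":

```python
def parse_excel_cols(spec):
    # "A:E" or "A,C,E:F" -> 0-indexed column numbers; ranges are
    # inclusive of both endpoints, per the documented behavior
    cols = []
    for part in spec.split(','):
        if ':' in part:
            start, end = part.split(':')
            cols.extend(range(ord(start) - ord('A'),
                              ord(end) - ord('A') + 1))
        else:
            cols.append(ord(part) - ord('A'))
    return cols

print(parse_excel_cols('A:E'))     # → [0, 1, 2, 3, 4]
print(parse_excel_cols('A,C,E:F'))  # → [0, 2, 4, 5]
```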
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 2b0c5899e91cb..78f92930e06a2 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -842,6 +842,7 @@ def __init__(self, formatter, column_format=None, longtable=False,
multicolumn=False, multicolumn_format=None, multirow=False):
self.fmt = formatter
self.frame = self.fmt.frame
+ self.bold_rows = self.fmt.kwds.get('bold_rows', False)
self.column_format = column_format
self.longtable = longtable
self.multicolumn = multicolumn
@@ -940,6 +941,11 @@ def get_col_type(dtype):
if x else '{}') for x in row]
else:
crow = [x if x else '{}' for x in row]
+ if self.bold_rows and self.fmt.index:
+ # bold row labels
+ crow = ['\\textbf{%s}' % x
+ if j < ilevels and x.strip() not in ['', '{}'] else x
+ for j, x in enumerate(crow)]
if i < clevels and self.fmt.header and self.multicolumn:
# sum up columns to multicolumns
crow = self._format_multicolumn(crow, ilevels)
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 3d7e0fcdc69b3..b08d3877f3b03 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -281,13 +281,14 @@ def format_attr(pair):
for r, idx in enumerate(self.data.index):
row_es = []
for c, value in enumerate(rlabels[r]):
+ rid = [ROW_HEADING_CLASS, "level%s" % c, "row%s" % r]
es = {
"type": "th",
"is_visible": _is_visible(r, c, idx_lengths),
"value": value,
"display_value": value,
- "class": " ".join([ROW_HEADING_CLASS, "level%s" % c,
- "row%s" % r]),
+ "id": "_".join(rid[1:]),
+ "class": " ".join(rid)
}
rowspan = idx_lengths.get((c, r), 0)
if rowspan > 1:
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8064191282250..fb60df47c32cb 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1979,7 +1979,8 @@ def __init__(self, f, **kwds):
self.comment = kwds['comment']
self._comment_lines = []
- f, handles = _get_handle(f, 'r', encoding=self.encoding,
+ mode = 'r' if PY3 else 'rb'
+ f, handles = _get_handle(f, mode, encoding=self.encoding,
compression=self.compression,
memory_map=self.memory_map)
self.handles.extend(handles)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 26c4a08bee59f..83c5e2278d339 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -282,7 +282,7 @@ def to_hdf(path_or_buf, key, value, mode=None, complevel=None, complib=None,
f(path_or_buf)
-def read_hdf(path_or_buf, key=None, **kwargs):
+def read_hdf(path_or_buf, key=None, mode='r', **kwargs):
""" read from the store, close it if we opened it
Retrieve pandas object stored in file, optionally based on where
@@ -290,13 +290,16 @@ def read_hdf(path_or_buf, key=None, **kwargs):
Parameters
----------
- path_or_buf : path (string), buffer, or path object (pathlib.Path or
- py._path.local.LocalPath) to read from
+ path_or_buf : path (string), buffer or path object (pathlib.Path or
+ py._path.local.LocalPath) designating the file to open, or an
+ already opened pd.HDFStore object
.. versionadded:: 0.19.0 support for pathlib, py.path.
key : group identifier in the store. Can be omitted if the HDF file
contains a single pandas object.
+ mode : string, {'r', 'r+', 'a'}, default 'r'. Mode to use when opening
+ the file. Ignored if path_or_buf is a pd.HDFStore.
where : list of Term (or convertable) objects, optional
start : optional, integer (defaults to None), row number to start
selection
@@ -313,10 +316,9 @@ def read_hdf(path_or_buf, key=None, **kwargs):
"""
- if kwargs.get('mode', 'a') not in ['r', 'r+', 'a']:
+ if mode not in ['r', 'r+', 'a']:
raise ValueError('mode {0} is not allowed while performing a read. '
- 'Allowed modes are r, r+ and a.'
- .format(kwargs.get('mode')))
+ 'Allowed modes are r, r+ and a.'.format(mode))
# grab the scope
if 'where' in kwargs:
kwargs['where'] = _ensure_term(kwargs['where'], scope_level=1)
@@ -335,9 +337,9 @@ def read_hdf(path_or_buf, key=None, **kwargs):
raise compat.FileNotFoundError(
'File %s does not exist' % path_or_buf)
+ store = HDFStore(path_or_buf, mode=mode, **kwargs)
# can't auto open/close if we are using an iterator
# so delegate to the iterator
- store = HDFStore(path_or_buf, **kwargs)
auto_close = True
elif isinstance(path_or_buf, HDFStore):
@@ -2582,8 +2584,8 @@ def read_index_node(self, node, start=None, stop=None):
if 'name' in node._v_attrs:
name = _ensure_str(node._v_attrs.name)
- index_class = self._alias_to_class(getattr(node._v_attrs,
- 'index_class', ''))
+ index_class = self._alias_to_class(_ensure_decoded(
+ getattr(node._v_attrs, 'index_class', '')))
factory = self._get_index_factory(index_class)
kwargs = {}
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index ec7c1f02f2ee8..ed821bcbbec21 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -186,6 +186,11 @@ def _validate_color_args(self):
# support series.plot(color='green')
self.kwds['color'] = [self.kwds['color']]
+ if ('color' in self.kwds and isinstance(self.kwds['color'], tuple) and
+ self.nseries == 1 and len(self.kwds['color']) in (3, 4)):
+ # support RGB and RGBA tuples in series plot
+ self.kwds['color'] = [self.kwds['color']]
+
if ('color' in self.kwds or 'colors' in self.kwds) and \
self.colormap is not None:
warnings.warn("'color' and 'colormap' cannot be used "
@@ -776,6 +781,11 @@ def __init__(self, data, x, y, **kwargs):
x = self.data.columns[x]
if is_integer(y) and not self.data.columns.holds_integer():
y = self.data.columns[y]
+ if len(self.data[x]._get_numeric_data()) == 0:
+ raise ValueError(self._kind + ' requires x column to be numeric')
+ if len(self.data[y]._get_numeric_data()) == 0:
+ raise ValueError(self._kind + ' requires y column to be numeric')
+
self.x = x
self.y = y
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 4633dde5ed537..ba510e68f9a21 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -200,10 +200,12 @@ def test_is_datetime64tz_dtype():
def test_is_timedelta64_dtype():
assert not com.is_timedelta64_dtype(object)
assert not com.is_timedelta64_dtype([1, 2, 3])
-
+ assert not com.is_timedelta64_dtype(np.array([], dtype=np.datetime64))
assert com.is_timedelta64_dtype(np.timedelta64)
assert com.is_timedelta64_dtype(pd.Series([], dtype="timedelta64[ns]"))
+ assert not com.is_timedelta64_dtype("0 days 00:00:00")
+
def test_is_period_dtype():
assert not com.is_period_dtype(object)
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index b99a6fabfa42b..335b76ff2aade 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -442,8 +442,9 @@ def test_astype_str(self):
expected = DataFrame(['1.12345678901'])
assert_frame_equal(result, expected)
- def test_astype_dict(self):
- # GH7271
+ @pytest.mark.parametrize("dtype_class", [dict, Series])
+ def test_astype_dict_like(self, dtype_class):
+ # GH7271 & GH16717
a = Series(date_range('2010-01-04', periods=5))
b = Series(range(5))
c = Series([0.0, 0.2, 0.4, 0.6, 0.8])
@@ -452,7 +453,8 @@ def test_astype_dict(self):
original = df.copy(deep=True)
# change type of a subset of columns
- result = df.astype({'b': 'str', 'd': 'float32'})
+ dt1 = dtype_class({'b': 'str', 'd': 'float32'})
+ result = df.astype(dt1)
expected = DataFrame({
'a': a,
'b': Series(['0', '1', '2', '3', '4']),
@@ -461,7 +463,8 @@ def test_astype_dict(self):
assert_frame_equal(result, expected)
assert_frame_equal(df, original)
- result = df.astype({'b': np.float32, 'c': 'float32', 'd': np.float64})
+ dt2 = dtype_class({'b': np.float32, 'c': 'float32', 'd': np.float64})
+ result = df.astype(dt2)
expected = DataFrame({
'a': a,
'b': Series([0.0, 1.0, 2.0, 3.0, 4.0], dtype='float32'),
@@ -471,19 +474,31 @@ def test_astype_dict(self):
assert_frame_equal(df, original)
# change all columns
- assert_frame_equal(df.astype({'a': str, 'b': str, 'c': str, 'd': str}),
+ dt3 = dtype_class({'a': str, 'b': str, 'c': str, 'd': str})
+ assert_frame_equal(df.astype(dt3),
df.astype(str))
assert_frame_equal(df, original)
# error should be raised when using something other than column labels
# in the keys of the dtype dict
- pytest.raises(KeyError, df.astype, {'b': str, 2: str})
- pytest.raises(KeyError, df.astype, {'e': str})
+ dt4 = dtype_class({'b': str, 2: str})
+ dt5 = dtype_class({'e': str})
+ pytest.raises(KeyError, df.astype, dt4)
+ pytest.raises(KeyError, df.astype, dt5)
assert_frame_equal(df, original)
# if the dtypes provided are the same as the original dtypes, the
# resulting DataFrame should be the same as the original DataFrame
- equiv = df.astype({col: df[col].dtype for col in df.columns})
+ dt6 = dtype_class({col: df[col].dtype for col in df.columns})
+ equiv = df.astype(dt6)
+ assert_frame_equal(df, equiv)
+ assert_frame_equal(df, original)
+
+ # GH 16717
+ # if dtypes provided is empty, the resulting DataFrame
+ # should be the same as the original DataFrame
+ dt7 = dtype_class({})
+ result = df.astype(dt7)
assert_frame_equal(df, equiv)
assert_frame_equal(df, original)
diff --git a/pandas/tests/frame/test_join.py b/pandas/tests/frame/test_join.py
index 21807cb42aa6e..afecba2026dd7 100644
--- a/pandas/tests/frame/test_join.py
+++ b/pandas/tests/frame/test_join.py
@@ -3,11 +3,19 @@
import pytest
import numpy as np
-from pandas import DataFrame, Index
+from pandas import DataFrame, Index, PeriodIndex
from pandas.tests.frame.common import TestData
import pandas.util.testing as tm
+@pytest.fixture
+def frame_with_period_index():
+ return DataFrame(
+ data=np.arange(20).reshape(4, 5),
+ columns=list('abcde'),
+ index=PeriodIndex(start='2000', freq='A', periods=4))
+
+
@pytest.fixture
def frame():
return TestData().frame
@@ -139,3 +147,21 @@ def test_join_overlap(frame):
# column order not necessarily sorted
tm.assert_frame_equal(joined, expected.loc[:, joined.columns])
+
+
+def test_join_period_index(frame_with_period_index):
+ other = frame_with_period_index.rename(
+ columns=lambda x: '{key}{key}'.format(key=x))
+
+ joined_values = np.concatenate(
+ [frame_with_period_index.values] * 2, axis=1)
+
+ joined_cols = frame_with_period_index.columns.append(other.columns)
+
+ joined = frame_with_period_index.join(other)
+ expected = DataFrame(
+ data=joined_values,
+ columns=joined_cols,
+ index=frame_with_period_index.index)
+
+ tm.assert_frame_equal(joined, expected)
diff --git a/pandas/tests/frame/test_sorting.py b/pandas/tests/frame/test_sorting.py
index 98f7f82c0ace7..891c94b59074a 100644
--- a/pandas/tests/frame/test_sorting.py
+++ b/pandas/tests/frame/test_sorting.py
@@ -238,6 +238,15 @@ def test_stable_descending_multicolumn_sort(self):
kind='mergesort')
assert_frame_equal(sorted_df, expected)
+ def test_stable_categorial(self):
+ # GH 16793
+ df = DataFrame({
+ 'x': pd.Categorical(np.repeat([1, 2, 3, 4], 5), ordered=True)
+ })
+ expected = df.copy()
+ sorted_df = df.sort_values('x', kind='mergesort')
+ assert_frame_equal(sorted_df, expected)
+
def test_sort_datetimes(self):
# GH 3461, argsort / lexsort differences for a datetime column
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index bbde902fb87bf..99759e197b7d3 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -909,7 +909,7 @@ def test_fillna(self):
def test_nulls(self):
# this is really a smoke test for the methods
- # as these are adequantely tested for function elsewhere
+ # as these are adequately tested for function elsewhere
for name, index in self.indices.items():
if len(index) == 0:
@@ -937,3 +937,10 @@ def test_empty(self):
index = self.create_index()
assert not index.empty
assert index[:0].empty
+
+ @pytest.mark.parametrize('how', ['outer', 'inner', 'left', 'right'])
+ def test_join_self_unique(self, how):
+ index = self.create_index()
+ if index.is_unique:
+ joined = index.join(index, how=how)
+ assert (index == joined).all()
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 6f73e7c15e4d9..291ca317f8fae 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -773,3 +773,9 @@ def test_map(self):
result = index.map(lambda x: x.ordinal)
exp = Index([x.ordinal for x in index])
tm.assert_index_equal(result, exp)
+
+ @pytest.mark.parametrize('how', ['outer', 'inner', 'left', 'right'])
+ def test_join_self(self, how):
+ index = period_range('1/1/2000', periods=10)
+ joined = index.join(index, how=how)
+ assert index is joined
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index ba917f33d8595..1f73287edb1f6 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -1172,6 +1172,14 @@ def test_get_loc_level(self):
assert result == expected
assert new_index.equals(index.droplevel(0))
+ def test_get_loc_missing_nan(self):
+ # GH 8569
+ idx = MultiIndex.from_arrays([[1.0, 2.0], [3.0, 4.0]])
+ assert isinstance(idx.get_loc(1), slice)
+ pytest.raises(KeyError, idx.get_loc, 3)
+ pytest.raises(KeyError, idx.get_loc, np.nan)
+ pytest.raises(KeyError, idx.get_loc, [np.nan])
+
def test_slice_locs(self):
df = tm.makeTimeDataFrame()
stacked = df.stack()
@@ -1712,6 +1720,13 @@ def test_from_tuples(self):
idx = MultiIndex.from_tuples(((1, 2), (3, 4)), names=['a', 'b'])
assert len(idx) == 2
+ def test_from_tuples_empty(self):
+ # GH 16777
+ result = MultiIndex.from_tuples([], names=['a', 'b'])
+ expected = MultiIndex.from_arrays(arrays=[[], []],
+ names=['a', 'b'])
+ tm.assert_index_equal(result, expected)
+
def test_argsort(self):
result = self.index.argsort()
expected = self.index.values.argsort()
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index 29d4214fd549b..62ac337d02727 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -371,6 +371,14 @@ def test_get_loc_na(self):
assert idx.get_loc(1) == 1
pytest.raises(KeyError, idx.slice_locs, np.nan)
+ def test_get_loc_missing_nan(self):
+ # GH 8569
+ idx = Float64Index([1, 2])
+ assert idx.get_loc(1) == 0
+ pytest.raises(KeyError, idx.get_loc, 3)
+ pytest.raises(KeyError, idx.get_loc, np.nan)
+ pytest.raises(KeyError, idx.get_loc, [np.nan])
+
def test_contains_nans(self):
i = Float64Index([1.0, 2.0, np.nan])
assert np.nan in i
diff --git a/pandas/tests/indexing/test_timedelta.py b/pandas/tests/indexing/test_timedelta.py
index cf8cc6c2d345d..be3ea8f0c371d 100644
--- a/pandas/tests/indexing/test_timedelta.py
+++ b/pandas/tests/indexing/test_timedelta.py
@@ -1,3 +1,5 @@
+import pytest
+
import pandas as pd
from pandas.util import testing as tm
@@ -16,5 +18,25 @@ def test_boolean_indexing(self):
result = df.assign(x=df.mask(cond, 10).astype('int64'))
expected = pd.DataFrame(data,
index=pd.to_timedelta(range(10), unit='s'),
- columns=['x'])
+ columns=['x'],
+ dtype='int64')
tm.assert_frame_equal(expected, result)
+
+ @pytest.mark.parametrize(
+ "indexer, expected",
+ [(0, [20, 1, 2, 3, 4, 5, 6, 7, 8, 9]),
+ (slice(4, 8), [0, 1, 2, 3, 20, 20, 20, 20, 8, 9]),
+ ([3, 5], [0, 1, 2, 20, 4, 20, 6, 7, 8, 9])])
+ def test_list_like_indexing(self, indexer, expected):
+ # GH 16637
+ df = pd.DataFrame({'x': range(10)}, dtype="int64")
+ df.index = pd.to_timedelta(range(10), unit='s')
+
+ df.loc[df.index[indexer], 'x'] = 20
+
+ expected = pd.DataFrame(expected,
+ index=pd.to_timedelta(range(10), unit='s'),
+ columns=['x'],
+ dtype="int64")
+
+ tm.assert_frame_equal(expected, df)
diff --git a/pandas/tests/io/data/legacy_pickle/0.19.2/0.19.2_x86_64_darwin_3.6.1.pickle b/pandas/tests/io/data/legacy_pickle/0.19.2/0.19.2_x86_64_darwin_3.6.1.pickle
index 6bb02672a4151..75ea95ff402c4 100644
Binary files a/pandas/tests/io/data/legacy_pickle/0.19.2/0.19.2_x86_64_darwin_3.6.1.pickle and b/pandas/tests/io/data/legacy_pickle/0.19.2/0.19.2_x86_64_darwin_3.6.1.pickle differ
diff --git a/pandas/tests/io/data/periodindex_0.20.1_x86_64_darwin_2.7.13.h5 b/pandas/tests/io/data/periodindex_0.20.1_x86_64_darwin_2.7.13.h5
new file mode 100644
index 0000000000000..6fb92d3c564bd
Binary files /dev/null and b/pandas/tests/io/data/periodindex_0.20.1_x86_64_darwin_2.7.13.h5 differ
diff --git a/pandas/tests/io/formats/test_style.py b/pandas/tests/io/formats/test_style.py
index 9911888f758fb..59d9f938734ab 100644
--- a/pandas/tests/io/formats/test_style.py
+++ b/pandas/tests/io/formats/test_style.py
@@ -1,5 +1,6 @@
import copy
import textwrap
+import re
import pytest
import numpy as np
@@ -505,6 +506,14 @@ def test_uuid(self):
assert result is styler
assert result.uuid == 'aaa'
+ def test_unique_id(self):
+ # See https://github.com/pandas-dev/pandas/issues/16780
+ df = pd.DataFrame({'a': [1, 3, 5, 6], 'b': [2, 4, 12, 21]})
+ result = df.style.render(uuid='test')
+ assert 'test' in result
+ ids = re.findall('id="(.*?)"', result)
+ assert np.unique(ids).size == len(ids)
+
def test_table_styles(self):
style = [{'selector': 'th', 'props': [('foo', 'bar')]}]
styler = Styler(self.df, table_styles=style)
@@ -719,12 +728,13 @@ def test_mi_sparse(self):
df = pd.DataFrame({'A': [1, 2]},
index=pd.MultiIndex.from_arrays([['a', 'a'],
[0, 1]]))
+
result = df.style._translate()
body_0 = result['body'][0][0]
expected_0 = {
"value": "a", "display_value": "a", "is_visible": True,
"type": "th", "attributes": ["rowspan=2"],
- "class": "row_heading level0 row0",
+ "class": "row_heading level0 row0", "id": "level0_row0"
}
tm.assert_dict_equal(body_0, expected_0)
@@ -732,6 +742,7 @@ def test_mi_sparse(self):
expected_1 = {
"value": 0, "display_value": 0, "is_visible": True,
"type": "th", "class": "row_heading level1 row0",
+ "id": "level1_row0"
}
tm.assert_dict_equal(body_1, expected_1)
@@ -739,6 +750,7 @@ def test_mi_sparse(self):
expected_10 = {
"value": 'a', "display_value": 'a', "is_visible": False,
"type": "th", "class": "row_heading level0 row1",
+ "id": "level0_row1"
}
tm.assert_dict_equal(body_10, expected_10)
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index 4ee77abb32c26..aa86d1d9231fb 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -506,3 +506,33 @@ def test_to_latex_series(self):
\end{tabular}
"""
assert withindex_result == withindex_expected
+
+ def test_to_latex_bold_rows(self):
+ # GH 16707
+ df = pd.DataFrame({'a': [1, 2], 'b': ['b1', 'b2']})
+ observed = df.to_latex(bold_rows=True)
+ expected = r"""\begin{tabular}{lrl}
+\toprule
+{} & a & b \\
+\midrule
+\textbf{0} & 1 & b1 \\
+\textbf{1} & 2 & b2 \\
+\bottomrule
+\end{tabular}
+"""
+ assert observed == expected
+
+ def test_to_latex_no_bold_rows(self):
+ # GH 16707
+ df = pd.DataFrame({'a': [1, 2], 'b': ['b1', 'b2']})
+ observed = df.to_latex(bold_rows=False)
+ expected = r"""\begin{tabular}{lrl}
+\toprule
+{} & a & b \\
+\midrule
+0 & 1 & b1 \\
+1 & 2 & b2 \\
+\bottomrule
+\end{tabular}
+"""
+ assert observed == expected
diff --git a/pandas/tests/io/generate_legacy_storage_files.py b/pandas/tests/io/generate_legacy_storage_files.py
old mode 100644
new mode 100755
index 22c62b738e6a2..996965999724e
--- a/pandas/tests/io/generate_legacy_storage_files.py
+++ b/pandas/tests/io/generate_legacy_storage_files.py
@@ -1,3 +1,5 @@
+#!/usr/bin/env python
+
""" self-contained to write legacy storage (pickle/msgpack) files """
from __future__ import print_function
from warnings import catch_warnings
@@ -125,7 +127,11 @@ def create_data():
mixed_dup=mixed_dup_df,
dt_mixed_tzs=DataFrame({
u'A': Timestamp('20130102', tz='US/Eastern'),
- u'B': Timestamp('20130603', tz='CET')}, index=range(5))
+ u'B': Timestamp('20130603', tz='CET')}, index=range(5)),
+ dt_mixed2_tzs=DataFrame({
+ u'A': Timestamp('20130102', tz='US/Eastern'),
+ u'B': Timestamp('20130603', tz='CET'),
+ u'C': Timestamp('20130603', tz='UTC')}, index=range(5))
)
with catch_warnings(record=True):
diff --git a/pandas/tests/io/parser/common.py b/pandas/tests/io/parser/common.py
index 31d815a4bca97..4b4f44b44c163 100644
--- a/pandas/tests/io/parser/common.py
+++ b/pandas/tests/io/parser/common.py
@@ -1662,6 +1662,21 @@ def test_internal_eof_byte(self):
result = self.read_csv(StringIO(data))
tm.assert_frame_equal(result, expected)
+ def test_internal_eof_byte_to_file(self):
+ # see gh-16559
+ data = b'c1,c2\r\n"test \x1a test", test\r\n'
+ expected = pd.DataFrame([["test \x1a test", " test"]],
+ columns=["c1", "c2"])
+
+ path = '__%s__.csv' % tm.rands(10)
+
+ with tm.ensure_clean(path) as path:
+ with open(path, "wb") as f:
+ f.write(data)
+
+ result = self.read_csv(path)
+ tm.assert_frame_equal(result, expected)
+
def test_file_handles(self):
# GH 14418 - don't close user provided file handles
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index e12945a6a3102..cfa60248605ad 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -19,6 +19,7 @@ def salaries_table():
return read_table(path)
+@pytest.mark.network
@pytest.mark.parametrize(
"compression,extension",
[('gzip', '.gz'), ('bz2', '.bz2'), ('zip', '.zip'),
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index 82784da094ed4..1a6e3e20bdf89 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -5190,6 +5190,37 @@ def test_query_compare_column_type(self):
expected = df.loc[[], :]
tm.assert_frame_equal(expected, result)
+ @pytest.mark.parametrize('format', ['fixed', 'table'])
+ def test_read_hdf_series_mode_r(self, format):
+ # GH 16583
+ # Tests that reading a Series saved to an HDF file
+ # still works if a mode='r' argument is supplied
+ series = tm.makeFloatSeries()
+ with ensure_clean_path(self.path) as path:
+ series.to_hdf(path, key='data', format=format)
+ result = pd.read_hdf(path, key='data', mode='r')
+ tm.assert_series_equal(result, series)
+
+ def test_read_py2_hdf_file_in_py3(self):
+ # GH 16781
+
+ # tests reading a PeriodIndex DataFrame written in Python2 in Python3
+
+ # the file was generated in Python 2.7 like so:
+ #
+ # df = pd.DataFrame([1.,2,3], index=pd.PeriodIndex(
+ # ['2015-01-01', '2015-01-02', '2015-01-05'], freq='B'))
+ # df.to_hdf('periodindex_0.20.1_x86_64_darwin_2.7.13.h5', 'p')
+
+ expected = pd.DataFrame([1., 2, 3], index=pd.PeriodIndex(
+ ['2015-01-01', '2015-01-02', '2015-01-05'], freq='B'))
+
+ with ensure_clean_store(
+ tm.get_data_path('periodindex_0.20.1_x86_64_darwin_2.7.13.h5'),
+ mode='r') as store:
+ result = store['p']
+ assert_frame_equal(result, expected)
+
class TestHDFComplexValues(Base):
# GH10447
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 2de8c9acff98c..7a19418a15004 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -158,6 +158,12 @@ def test_color_single_series_list(self):
df = DataFrame({"A": [1, 2, 3]})
_check_plot_works(df.plot, color=['red'])
+ def test_rgb_tuple_color(self):
+ # GH 16695
+ df = DataFrame({'x': [1, 2], 'y': [3, 4]})
+ _check_plot_works(df.plot, x='x', y='y', color=(1, 0, 0))
+ _check_plot_works(df.plot, x='x', y='y', color=(1, 0, 0, 0.5))
+
def test_color_empty_string(self):
df = DataFrame(randn(10, 2))
with pytest.raises(ValueError):
@@ -915,6 +921,24 @@ def test_plot_scatter(self):
axes = df.plot(x='x', y='y', kind='scatter', subplots=True)
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
+ @slow
+ def test_plot_scatter_with_categorical_data(self):
+ # GH 16199
+ df = pd.DataFrame({'x': [1, 2, 3, 4],
+ 'y': pd.Categorical(['a', 'b', 'a', 'c'])})
+
+ with pytest.raises(ValueError) as ve:
+ df.plot(x='x', y='y', kind='scatter')
+ ve.match('requires y column to be numeric')
+
+ with pytest.raises(ValueError) as ve:
+ df.plot(x='y', y='x', kind='scatter')
+ ve.match('requires x column to be numeric')
+
+ with pytest.raises(ValueError) as ve:
+ df.plot(x='y', y='y', kind='scatter')
+ ve.match('requires x column to be numeric')
+
@slow
def test_plot_scatter_with_c(self):
df = DataFrame(randn(6, 4),
diff --git a/pandas/tests/reshape/test_merge.py b/pandas/tests/reshape/test_merge.py
index d3257243d7a2c..29a6b8a2ae818 100644
--- a/pandas/tests/reshape/test_merge.py
+++ b/pandas/tests/reshape/test_merge.py
@@ -1356,6 +1356,29 @@ def test_dtype_on_merged_different(self, change, how, left, right):
index=['X', 'Y', 'Z'])
assert_series_equal(result, expected)
+ def test_self_join_multiple_categories(self):
+ # GH 16767
+ # non-duplicates should work with multiple categories
+ m = 5
+ df = pd.DataFrame({
+ 'a': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'] * m,
+ 'b': ['t', 'w', 'x', 'y', 'z'] * 2 * m,
+ 'c': [letter
+ for each in ['m', 'n', 'u', 'p', 'o']
+ for letter in [each] * 2 * m],
+ 'd': [letter
+ for each in ['aa', 'bb', 'cc', 'dd', 'ee',
+ 'ff', 'gg', 'hh', 'ii', 'jj']
+ for letter in [each] * m]})
+
+ # change them all to categorical variables
+ df = df.apply(lambda x: x.astype('category'))
+
+ # self-join should equal ourselves
+ result = pd.merge(df, df, on=list(df.columns))
+
+ assert_frame_equal(result, df)
+
@pytest.fixture
def left_df():
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index e084fa58d6c51..2ec579842e33f 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -152,24 +152,35 @@ def test_astype_unicode(self):
reload(sys) # noqa
sys.setdefaultencoding(former_encoding)
- def test_astype_dict(self):
+ @pytest.mark.parametrize("dtype_class", [dict, Series])
+ def test_astype_dict_like(self, dtype_class):
# see gh-7271
s = Series(range(0, 10, 2), name='abc')
- result = s.astype({'abc': str})
+ dt1 = dtype_class({'abc': str})
+ result = s.astype(dt1)
expected = Series(['0', '2', '4', '6', '8'], name='abc')
tm.assert_series_equal(result, expected)
- result = s.astype({'abc': 'float64'})
+ dt2 = dtype_class({'abc': 'float64'})
+ result = s.astype(dt2)
expected = Series([0.0, 2.0, 4.0, 6.0, 8.0], dtype='float64',
name='abc')
tm.assert_series_equal(result, expected)
+ dt3 = dtype_class({'abc': str, 'def': str})
with pytest.raises(KeyError):
- s.astype({'abc': str, 'def': str})
+ s.astype(dt3)
+ dt4 = dtype_class({0: str})
with pytest.raises(KeyError):
- s.astype({0: str})
+ s.astype(dt4)
+
+ # GH16717
+ # if dtypes provided is empty, it should error
+ dt5 = dtype_class({})
+ with pytest.raises(KeyError):
+ s.astype(dt5)
def test_astype_generic_timestamp_deprecated(self):
# see gh-15524
@@ -248,3 +259,12 @@ def test_intercept_astype_object(self):
result = df.values.squeeze()
assert (result[:, 0] == expected.values).all()
+
+ def test_series_to_categorical(self):
+ # see gh-16524: test conversion of Series to Categorical
+ series = Series(['a', 'b', 'c'])
+
+ result = Series(series, dtype='category')
+ expected = Series(['a', 'b', 'c'], dtype='category')
+
+ tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 7f876357ad3ab..25229b261b91e 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -70,6 +70,21 @@ def test_get(self):
result = vc.get(True, default='Missing')
assert result == 'Missing'
+ def test_get_nan(self):
+ # GH 8569
+ s = pd.Float64Index(range(10)).to_series()
+ assert s.get(np.nan) is None
+ assert s.get(np.nan, default='Missing') == 'Missing'
+
+ # ensure that fixing the above hasn't broken get
+ # with multiple elements
+ idx = [20, 30]
+ assert_series_equal(s.get(idx),
+ Series([np.nan] * 2, index=idx))
+ idx = [np.nan, np.nan]
+ assert_series_equal(s.get(idx),
+ Series([np.nan] * 2, index=idx))
+
def test_delitem(self):
# GH 5542
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 3471f0b13b84b..06a674556cdef 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -586,9 +586,8 @@ def test_numpy_argsort(self):
tm.assert_numpy_array_equal(np.argsort(c), expected,
check_dtype=False)
- msg = "the 'kind' parameter is not supported"
- tm.assert_raises_regex(ValueError, msg, np.argsort,
- c, kind='mergesort')
+ tm.assert_numpy_array_equal(np.argsort(c, kind='mergesort'), expected,
+ check_dtype=False)
msg = "the 'axis' parameter is not supported"
tm.assert_raises_regex(ValueError, msg, np.argsort,
diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py
index 27e3c29a70a9f..61f0c992225c6 100644
--- a/pandas/tests/test_downstream.py
+++ b/pandas/tests/test_downstream.py
@@ -52,6 +52,7 @@ def test_xarray(df):
assert df.to_xarray() is not None
+@tm.network
def test_statsmodels():
statsmodels = import_module('statsmodels') # noqa
@@ -84,6 +85,7 @@ def test_pandas_gbq(df):
pandas_gbq = import_module('pandas_gbq') # noqa
+@tm.network
def test_pandas_datareader():
pandas_datareader = import_module('pandas_datareader') # noqa
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index fae7bfa513dcd..08c3a25e66b0e 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -13,7 +13,7 @@
from pandas.core.api import DataFrame, Panel
from pandas.core.computation import expressions as expr
-from pandas import compat, _np_version_under1p11
+from pandas import compat, _np_version_under1p11, _np_version_under1p13
from pandas.util.testing import (assert_almost_equal, assert_series_equal,
assert_frame_equal, assert_panel_equal,
assert_panel4d_equal, slow)
@@ -420,6 +420,10 @@ def test_bool_ops_warn_on_arithmetic(self):
f = getattr(operator, name)
fe = getattr(operator, sub_funcs[subs[op]])
+ # >= 1.13.0 these are now TypeErrors
+ if op == '-' and not _np_version_under1p13:
+ continue
+
with tm.use_numexpr(True, min_elements=5):
with tm.assert_produces_warning(check_stacklevel=False):
r = f(df, df)
diff --git a/pandas/tests/test_join.py b/pandas/tests/test_join.py
index 3fc13d23b53f7..cde1cab37d09c 100644
--- a/pandas/tests/test_join.py
+++ b/pandas/tests/test_join.py
@@ -1,11 +1,11 @@
# -*- coding: utf-8 -*-
import numpy as np
-from pandas import Index
+from pandas import Index, DataFrame, Categorical, merge
from pandas._libs import join as _join
import pandas.util.testing as tm
-from pandas.util.testing import assert_almost_equal
+from pandas.util.testing import assert_almost_equal, assert_frame_equal
class TestIndexer(object):
@@ -192,3 +192,43 @@ def test_inner_join_indexer2():
exp_ridx = np.array([0, 1, 2, 3], dtype=np.int64)
assert_almost_equal(ridx, exp_ridx)
+
+
+def test_merge_join_categorical_multiindex():
+ # From issue 16627
+ a = {'Cat1': Categorical(['a', 'b', 'a', 'c', 'a', 'b'],
+ ['a', 'b', 'c']),
+ 'Int1': [0, 1, 0, 1, 0, 0]}
+ a = DataFrame(a)
+
+ b = {'Cat': Categorical(['a', 'b', 'c', 'a', 'b', 'c'],
+ ['a', 'b', 'c']),
+ 'Int': [0, 0, 0, 1, 1, 1],
+ 'Factor': [1.1, 1.2, 1.3, 1.4, 1.5, 1.6]}
+ b = DataFrame(b).set_index(['Cat', 'Int'])['Factor']
+
+ expected = merge(a, b.reset_index(), left_on=['Cat1', 'Int1'],
+ right_on=['Cat', 'Int'], how='left')
+ result = a.join(b, on=['Cat1', 'Int1'])
+ expected = expected.drop(['Cat', 'Int'], axis=1)
+ assert_frame_equal(expected, result)
+
+ # Same test, but with ordered categorical
+ a = {'Cat1': Categorical(['a', 'b', 'a', 'c', 'a', 'b'],
+ ['b', 'a', 'c'],
+ ordered=True),
+ 'Int1': [0, 1, 0, 1, 0, 0]}
+ a = DataFrame(a)
+
+ b = {'Cat': Categorical(['a', 'b', 'c', 'a', 'b', 'c'],
+ ['b', 'a', 'c'],
+ ordered=True),
+ 'Int': [0, 0, 0, 1, 1, 1],
+ 'Factor': [1.1, 1.2, 1.3, 1.4, 1.5, 1.6]}
+ b = DataFrame(b).set_index(['Cat', 'Int'])['Factor']
+
+ expected = merge(a, b.reset_index(), left_on=['Cat1', 'Int1'],
+ right_on=['Cat', 'Int'], how='left')
+ result = a.join(b, on=['Cat1', 'Int1'])
+ expected = expected.drop(['Cat', 'Int'], axis=1)
+ assert_frame_equal(expected, result)
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index cbb3c345a9353..9c3765ffdb716 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -455,6 +455,17 @@ def tests_empty_df_rolling(self, roller):
result = DataFrame(index=pd.DatetimeIndex([])).rolling(roller).sum()
tm.assert_frame_equal(result, expected)
+ def test_multi_index_names(self):
+
+ # GH 16789, 16825
+ cols = pd.MultiIndex.from_product([['A', 'B'], ['C', 'D', 'E']],
+ names=['1', '2'])
+ df = pd.DataFrame(np.ones((10, 6)), columns=cols)
+ result = df.rolling(3).cov()
+
+ tm.assert_index_equal(result.columns, df.columns)
+ assert result.index.names == [None, '1', '2']
+
class TestExpanding(Base):
@@ -501,9 +512,10 @@ def test_numpy_compat(self):
'expander',
[1, pytest.mark.xfail(
reason='GH 16425 expanding with offset not supported')('1s')])
- def tests_empty_df_expanding(self, expander):
+ def test_empty_df_expanding(self, expander):
# GH 15819 Verifies that datetime and integer expanding windows can be
# applied to empty DataFrames
+
expected = DataFrame()
result = DataFrame().expanding(expander).sum()
tm.assert_frame_equal(result, expected)
diff --git a/setup.cfg b/setup.cfg
index 8b32f0f62fe28..05d4c84ca56c4 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -22,8 +22,8 @@ split_penalty_after_opening_bracket = 1000000
split_penalty_logical_operator = 30
[tool:pytest]
-# TODO: Change all yield-based (nose-style) fixutures to pytest fixtures
-# Silencing the warning until then
testpaths = pandas
markers =
single: mark a test as single cpu only
+ slow: mark a test as slow
+ network: mark a test as network
| First round of backports. I'll just cherry-pick the remaining PRs to this branch once they're merged. | https://api.github.com/repos/pandas-dev/pandas/pulls/16841 | 2017-07-06T17:52:11Z | 2017-07-07T15:03:08Z | 2017-07-07T15:03:08Z | 2017-07-07T15:13:42Z |
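This record moves CI from Python 3.5 to a Python 3.6 "minimum_versions" build, pinning exact lowest supported dependency versions (e.g. `numpy=1.13.3`, `cython>=0.29.13`). The pinning idea can be sketched as a plain version comparison — the function names here are illustrative, not part of the CI tooling:

```python
def version_tuple(version):
    # "1.13.3" -> (1, 13, 3); assumes plain dotted numeric version strings
    return tuple(int(part) for part in version.split("."))

def meets_minimum(installed, minimum):
    # True when the installed version is at least the pinned minimum
    return version_tuple(installed) >= version_tuple(minimum)

print(meets_minimum("1.13.3", "1.13.3"))   # True: an exact pin is satisfied
print(meets_minimum("0.29.12", "0.29.13"))  # False: below the cython pin
```

Running the whole test suite against exactly these minimums is what catches accidental use of newer-dependency features.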
BUG: kind parameter on categorical argsort | diff --git a/doc/source/whatsnew/v0.20.3.txt b/doc/source/whatsnew/v0.20.3.txt
index 3d1bed2c9f1a9..2f5b108f28fc5 100644
--- a/doc/source/whatsnew/v0.20.3.txt
+++ b/doc/source/whatsnew/v0.20.3.txt
@@ -92,6 +92,7 @@ Numeric
Categorical
^^^^^^^^^^^
+- Bug in ``DataFrame.sort_values`` not respecting the ``kind`` parameter with categorical data (:issue:`16793`)
Other
^^^^^
diff --git a/pandas/compat/numpy/function.py b/pandas/compat/numpy/function.py
index a324bf94171ce..ccbd3d9704e0c 100644
--- a/pandas/compat/numpy/function.py
+++ b/pandas/compat/numpy/function.py
@@ -107,6 +107,14 @@ def validate_argmax_with_skipna(skipna, args, kwargs):
validate_argsort = CompatValidator(ARGSORT_DEFAULTS, fname='argsort',
max_fname_arg_count=0, method='both')
+# two different signatures of argsort, this second validation
+# for when the `kind` param is supported
+ARGSORT_DEFAULTS_KIND = OrderedDict()
+ARGSORT_DEFAULTS_KIND['axis'] = -1
+ARGSORT_DEFAULTS_KIND['order'] = None
+validate_argsort_kind = CompatValidator(ARGSORT_DEFAULTS_KIND, fname='argsort',
+ max_fname_arg_count=0, method='both')
+
def validate_argsort_with_ascending(ascending, args, kwargs):
"""
@@ -121,7 +129,7 @@ def validate_argsort_with_ascending(ascending, args, kwargs):
args = (ascending,) + args
ascending = True
- validate_argsort(args, kwargs, max_fname_arg_count=1)
+ validate_argsort_kind(args, kwargs, max_fname_arg_count=3)
return ascending
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index 796b2696af9ce..afae11163b0dc 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -1288,7 +1288,7 @@ def check_for_ordered(self, op):
"you can use .as_ordered() to change the "
"Categorical to an ordered one\n".format(op=op))
- def argsort(self, ascending=True, *args, **kwargs):
+ def argsort(self, ascending=True, kind='quicksort', *args, **kwargs):
"""
Returns the indices that would sort the Categorical instance if
'sort_values' was called. This function is implemented to provide
@@ -1309,7 +1309,7 @@ def argsort(self, ascending=True, *args, **kwargs):
numpy.ndarray.argsort
"""
ascending = nv.validate_argsort_with_ascending(ascending, args, kwargs)
- result = np.argsort(self._codes.copy(), **kwargs)
+ result = np.argsort(self._codes.copy(), kind=kind, **kwargs)
if not ascending:
result = result[::-1]
return result
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 69b427df981b7..10b80cbc3483d 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -233,7 +233,7 @@ def nargsort(items, kind='quicksort', ascending=True, na_position='last'):
# specially handle Categorical
if is_categorical_dtype(items):
- return items.argsort(ascending=ascending)
+ return items.argsort(ascending=ascending, kind=kind)
items = np.asanyarray(items)
idx = np.arange(len(items))
diff --git a/pandas/tests/frame/test_sorting.py b/pandas/tests/frame/test_sorting.py
index 98f7f82c0ace7..891c94b59074a 100644
--- a/pandas/tests/frame/test_sorting.py
+++ b/pandas/tests/frame/test_sorting.py
@@ -238,6 +238,15 @@ def test_stable_descending_multicolumn_sort(self):
kind='mergesort')
assert_frame_equal(sorted_df, expected)
+ def test_stable_categorial(self):
+ # GH 16793
+ df = DataFrame({
+ 'x': pd.Categorical(np.repeat([1, 2, 3, 4], 5), ordered=True)
+ })
+ expected = df.copy()
+ sorted_df = df.sort_values('x', kind='mergesort')
+ assert_frame_equal(sorted_df, expected)
+
def test_sort_datetimes(self):
# GH 3461, argsort / lexsort differences for a datetime column
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 92177ca07d835..667b26c24c662 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -585,9 +585,8 @@ def test_numpy_argsort(self):
tm.assert_numpy_array_equal(np.argsort(c), expected,
check_dtype=False)
- msg = "the 'kind' parameter is not supported"
- tm.assert_raises_regex(ValueError, msg, np.argsort,
- c, kind='mergesort')
+ tm.assert_numpy_array_equal(np.argsort(c, kind='mergesort'), expected,
+ check_dtype=False)
msg = "the 'axis' parameter is not supported"
tm.assert_raises_regex(ValueError, msg, np.argsort,
| - [x] closes #16793
- [ ] tests added / passed
- [x] passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16834 | 2017-07-06T01:00:27Z | 2017-07-07T10:17:16Z | 2017-07-07T10:17:16Z | 2017-07-07T13:10:43Z |
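The fix above threads numpy's `kind` argument through `Categorical.argsort` down to `np.argsort` on the underlying codes, so stable sorts like mergesort work on categorical columns. The stability property the new `test_stable_categorial` test relies on can be sketched in pure Python — `sorted()` is stable, like `kind='mergesort'`; this is an illustration, not pandas' implementation:

```python
def stable_argsort(codes):
    # indices that sort `codes`, preserving the original order of ties --
    # Python's sorted() is stable, matching numpy's kind='mergesort'
    return sorted(range(len(codes)), key=lambda i: codes[i])

# categorical codes with many ties, as in the repeated-category test case
codes = [1, 1, 0, 0, 2, 2]
print(stable_argsort(codes))  # [2, 3, 0, 1, 4, 5]: ties keep their relative order
```

With `kind='quicksort'` the positions of equal codes could come back in any order, which is why `sort_values(..., kind='mergesort')` previously raised instead of sorting stably.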
Bug issue 16819 Index.get_indexer_not_unique inconsistent return types vs get_indexer | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index de2516d75040b..36f3db98a39b5 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -53,6 +53,7 @@ Backwards incompatible API changes
- :func:`read_csv` now treats ``'n/a'`` strings as missing values by default (:issue:`16078`)
- :class:`pandas.HDFStore`'s string representation is now faster and less detailed. For the previous behavior, use ``pandas.HDFStore.info()``. (:issue:`16503`).
- Compression defaults in HDF stores now follow pytable standards. Default is no compression and if ``complib`` is missing and ``complevel`` > 0 ``zlib`` is used (:issue:`15943`)
+- ``Index.get_indexer_non_unique()`` now returns an ndarray indexer rather than an ``Index``; this is consistent with ``Index.get_indexer()`` (:issue:`16819`)
.. _whatsnew_0210.api:
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index c4b3e25acae7e..daf3381ae4e89 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -896,8 +896,9 @@ def reset_identity(values):
# we can't reindex, so we resort to this
# GH 14776
if isinstance(ax, MultiIndex) and not ax.is_unique:
- result = result.take(result.index.get_indexer_for(
- ax.values).unique(), axis=self.axis)
+ indexer = algorithms.unique1d(
+ result.index.get_indexer_for(ax.values))
+ result = result.take(indexer, axis=self.axis)
else:
result = result.reindex_axis(ax, axis=self.axis)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 695f9f119baa2..8a4878d9cfbcf 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2256,8 +2256,8 @@ def intersection(self, other):
indexer = indexer.take((indexer != -1).nonzero()[0])
except:
# duplicates
- indexer = Index(other._values).get_indexer_non_unique(
- self._values)[0].unique()
+ indexer = algos.unique1d(
+ Index(other._values).get_indexer_non_unique(self._values)[0])
indexer = indexer[indexer != -1]
taken = other.take(indexer)
@@ -2704,7 +2704,7 @@ def get_indexer_non_unique(self, target):
tgt_values = target._values
indexer, missing = self._engine.get_indexer_non_unique(tgt_values)
- return Index(indexer), missing
+ return indexer, missing
def get_indexer_for(self, target, **kwargs):
"""
@@ -2942,7 +2942,6 @@ def _reindex_non_unique(self, target):
else:
# need to retake to have the same size as the indexer
- indexer = indexer.values
indexer[~check] = 0
# reset the new indexer to account for the new size
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 18dbe6624008a..7a81a125467d5 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -1131,6 +1131,17 @@ def test_get_indexer_strings(self):
with pytest.raises(TypeError):
idx.get_indexer(['a', 'b', 'c', 'd'], method='pad', tolerance=2)
+ def test_get_indexer_consistency(self):
+ # See GH 16819
+ for name, index in self.indices.items():
+ indexer = index.get_indexer(index[0:2])
+ assert isinstance(indexer, np.ndarray)
+ assert indexer.dtype == np.intp
+
+ indexer, _ = index.get_indexer_non_unique(index[0:2])
+ assert isinstance(indexer, np.ndarray)
+ assert indexer.dtype == np.intp
+
def test_get_loc(self):
idx = pd.Index([0, 1, 2])
all_methods = [None, 'pad', 'backfill', 'nearest']
diff --git a/pandas/tests/indexes/test_category.py b/pandas/tests/indexes/test_category.py
index 4e4f9b29f9a4c..493274fff43e0 100644
--- a/pandas/tests/indexes/test_category.py
+++ b/pandas/tests/indexes/test_category.py
@@ -386,8 +386,7 @@ def test_reindexing(self):
expected = oidx.get_indexer_non_unique(finder)[0]
actual = ci.get_indexer(finder)
- tm.assert_numpy_array_equal(
- expected.values, actual, check_dtype=False)
+ tm.assert_numpy_array_equal(expected, actual)
def test_reindex_dtype(self):
c = CategoricalIndex(['a', 'b', 'c', 'a'])
| - closes #16819
- tests added / passed
- passes ``git diff upstream/master --name-only -- '*.py' | flake8 --diff`` (On Windows, ``git diff upstream/master -u -- "*.py" | flake8 --diff`` might work as an alternative.)
- whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/16826 | 2017-07-04T22:50:05Z | 2017-07-06T12:37:12Z | 2017-07-06T12:37:12Z | 2017-07-06T13:21:10Z |
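The API change makes `get_indexer_non_unique()` hand back a plain integer ndarray, matching `get_indexer()`. The contract both methods share — each target label mapped to integer positions in the index, with `-1` marking labels that are absent — can be sketched in plain Python (a simplification over lists, not pandas' hash-table engine):

```python
def get_indexer(index, target):
    # position of each target label in `index`; -1 where the label is absent
    # (assumes `index` is unique, as the real method requires)
    positions = {label: i for i, label in enumerate(index)}
    return [positions.get(label, -1) for label in target]

def get_indexer_non_unique(index, target):
    # every matching position per target label, plus which targets were missing
    indexer, missing = [], []
    for t, label in enumerate(target):
        hits = [i for i, lab in enumerate(index) if lab == label]
        if hits:
            indexer.extend(hits)
        else:
            indexer.append(-1)
            missing.append(t)
    return indexer, missing

print(get_indexer(["a", "b", "c"], ["b", "z", "a"]))       # [1, -1, 0]
print(get_indexer_non_unique(["a", "a", "b"], ["a", "z"]))  # ([0, 1, -1], [1])
```

Both results are positional integer arrays in the real API (dtype `intp`), which is exactly what the new `test_get_indexer_consistency` test asserts.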
BUG: rolling.cov with multi-index columns should preserve the MI
index dcce427f8dd84..3d1bed2c9f1a9 100644
--- a/doc/source/whatsnew/v0.20.3.txt
+++ b/doc/source/whatsnew/v0.20.3.txt
@@ -40,7 +40,7 @@ Bug Fixes
- Fixed issue with :meth:`DataFrame.style` where element id's were not unique (:issue:`16780`)
- Fixed a pytest marker failing downstream packages' tests suites (:issue:`16680`)
- Fixed compat with loading a ``DataFrame`` with a ``PeriodIndex``, from a ``format='fixed'`` HDFStore, in Python 3, that was written in Python 2 (:issue:`16781`)
-- Fixed bug where computing the rolling covariance of a MultiIndexed ``DataFrame`` improperly raised a ``ValueError`` (:issue:`16789`)
+- Fixed a bug where rolling computations failed on a ``DataFrame`` with MultiIndexed columns (:issue:`16789`, :issue:`16825`)
Conversion
^^^^^^^^^^
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 1a762732b1213..ee18263cca6ab 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1383,6 +1383,9 @@ def __getitem__(self, key):
# cannot be sure whether the result will be sorted
sortorder = None
+ if isinstance(key, Index):
+ key = np.asarray(key)
+
new_labels = [lab[key] for lab in self.labels]
return MultiIndex(levels=self.levels, labels=new_labels,
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 1e16eff7d56cc..02b508bb94e4c 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -836,7 +836,7 @@ def count(self):
return self._wrap_results(results, blocks, obj)
- _shared_docs['apply'] = dedent("""
+ _shared_docs['apply'] = dedent(r"""
%(name)s function apply
Parameters
@@ -1922,7 +1922,8 @@ def dataframe_from_int_dict(data, frame_template):
# TODO: not the most efficient (perf-wise)
# though not bad code-wise
- from pandas import Panel, MultiIndex, Index
+ from pandas import Panel, MultiIndex
+
with warnings.catch_warnings(record=True):
p = Panel.from_dict(results).swapaxes('items', 'major')
if len(p.major_axis) > 0:
@@ -1945,8 +1946,8 @@ def dataframe_from_int_dict(data, frame_template):
# reset our index names to arg1 names
# reset our column names to arg2 names
# careful not to mutate the original names
- result.columns = Index(result.columns).set_names(
- arg2.columns.name)
+ result.columns = result.columns.set_names(
+ arg2.columns.names)
result.index = result.index.set_names(
arg1.index.names + arg1.columns.names)
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index 03f90b25415bb..ef8806246c2c5 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -61,15 +61,6 @@ def f():
tm.assert_raises_regex(ValueError, 'The truth value of a', f)
- def test_multi_index_names(self):
-
- # GH 16789
- cols = pd.MultiIndex.from_product([['A', 'B'], ['C', 'D', 'E']],
- names=['1', '2'])
- df = pd.DataFrame(np.ones((10, 6)), columns=cols)
- rolling_result = df.rolling(3).cov()
- assert rolling_result.index.names == [None, '1', '2']
-
def test_labels_dtypes(self):
# GH 8456
diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py
index cbb3c345a9353..9c3765ffdb716 100644
--- a/pandas/tests/test_window.py
+++ b/pandas/tests/test_window.py
@@ -455,6 +455,17 @@ def tests_empty_df_rolling(self, roller):
result = DataFrame(index=pd.DatetimeIndex([])).rolling(roller).sum()
tm.assert_frame_equal(result, expected)
+ def test_multi_index_names(self):
+
+ # GH 16789, 16825
+ cols = pd.MultiIndex.from_product([['A', 'B'], ['C', 'D', 'E']],
+ names=['1', '2'])
+ df = pd.DataFrame(np.ones((10, 6)), columns=cols)
+ result = df.rolling(3).cov()
+
+ tm.assert_index_equal(result.columns, df.columns)
+ assert result.index.names == [None, '1', '2']
+
class TestExpanding(Base):
@@ -501,9 +512,10 @@ def test_numpy_compat(self):
'expander',
[1, pytest.mark.xfail(
reason='GH 16425 expanding with offset not supported')('1s')])
- def tests_empty_df_expanding(self, expander):
+ def test_empty_df_expanding(self, expander):
# GH 15819 Verifies that datetime and integer expanding windows can be
# applied to empty DataFrames
+
expected = DataFrame()
result = DataFrame().expanding(expander).sum()
tm.assert_frame_equal(result, expected)
| xref #16814
| https://api.github.com/repos/pandas-dev/pandas/pulls/16825 | 2017-07-04T14:11:56Z | 2017-07-04T20:34:13Z | 2017-07-04T20:34:13Z | 2017-07-07T13:10:43Z |
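The fix itself is MultiIndex bookkeeping (converting an `Index` key to an ndarray in `__getitem__`, and propagating `names` — plural — through `set_names`); the rolling covariance being computed is unaffected. That computation can be sketched independently as a trailing-window sample covariance (an illustration, not pandas' windowed implementation):

```python
def rolling_cov(x, y, window):
    # trailing-window sample covariance; None until a full window is available
    out = []
    for end in range(1, len(x) + 1):
        if end < window:
            out.append(None)
            continue
        xs, ys = x[end - window:end], y[end - window:end]
        mx, my = sum(xs) / window, sum(ys) / window
        out.append(sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (window - 1))
    return out

print(rolling_cov([1, 2, 3, 4], [1, 2, 3, 4], 3))  # [None, None, 1.0, 1.0]
```

In the `DataFrame.rolling(...).cov()` case, this is computed for every pair of columns, which is why the result index must stack the original index with the (possibly MultiIndexed) column labels — the part the fix repairs.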
TST: use network decorator on additional tests | diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index e12945a6a3102..cfa60248605ad 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -19,6 +19,7 @@ def salaries_table():
return read_table(path)
+@pytest.mark.network
@pytest.mark.parametrize(
"compression,extension",
[('gzip', '.gz'), ('bz2', '.bz2'), ('zip', '.zip'),
diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py
index 27e3c29a70a9f..61f0c992225c6 100644
--- a/pandas/tests/test_downstream.py
+++ b/pandas/tests/test_downstream.py
@@ -52,6 +52,7 @@ def test_xarray(df):
assert df.to_xarray() is not None
+@tm.network
def test_statsmodels():
statsmodels = import_module('statsmodels') # noqa
@@ -84,6 +85,7 @@ def test_pandas_gbq(df):
pandas_gbq = import_module('pandas_gbq') # noqa
+@tm.network
def test_pandas_datareader():
pandas_datareader = import_module('pandas_datareader') # noqa
diff --git a/setup.cfg b/setup.cfg
index 1f9bea6718a4d..05d4c84ca56c4 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -26,3 +26,4 @@ testpaths = pandas
markers =
single: mark a test as single cpu only
slow: mark a test as slow
+ network: mark a test as network
| ``--skip-network`` didn't actually skip the parser compressed-url tests, as ``tm.network`` is in an inner function (and it's not easy to make that work with ``parametrize``). Also marks some additional network-required tests. | https://api.github.com/repos/pandas-dev/pandas/pulls/16824 | 2017-07-04T13:22:38Z | 2017-07-04T20:35:37Z | 2017-07-04T20:35:37Z | 2017-07-07T13:10:42Z |
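Registering `network` in `setup.cfg` under `markers =` lets pytest filter these tests by marker (e.g. `pytest -m "not network"`). The decorator side — tagging a test so a runner can recognize and skip it — can be sketched in plain Python; this is a stand-in for `pytest.mark.network` / `tm.network`, not the real implementation:

```python
def network(func):
    # tag the test so a collector can recognize (and optionally skip) it
    func.network = True
    return func

@network
def test_download():
    return "would hit the network"

def test_local():
    return "runs offline"

def collect(tests, skip_network=False):
    # mimic `--skip-network`: drop tests carrying the marker
    return [t for t in tests if not (skip_network and getattr(t, "network", False))]

print([t.__name__ for t in collect([test_download, test_local], skip_network=True)])
# ['test_local']
```

Marking at the function level (rather than inside an inner helper) is what makes the filtering reliable — the collector only sees attributes on the collected test object.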
BUG/API: dtype inconsistencies in .where / .setitem / .putmask / .fillna | diff --git a/doc/source/whatsnew/v0.21.0.txt b/doc/source/whatsnew/v0.21.0.txt
index 5146bd35dff30..bab1cf0fec51b 100644
--- a/doc/source/whatsnew/v0.21.0.txt
+++ b/doc/source/whatsnew/v0.21.0.txt
@@ -127,6 +127,65 @@ the target. Now, a ``ValueError`` will be raised when such an input is passed in
...
ValueError: Cannot operate inplace if there is no assignment
+.. _whatsnew_0210.dtype_conversions:
+
+Dtype Conversions
+^^^^^^^^^^^^^^^^^
+
+- Previously assignments, ``.where()`` and ``.fillna()`` with a ``bool`` assignment, would coerce to
+  same type (e.g. int / float), or raise for datetimelikes. These will now preserve the bools with ``object`` dtypes (:issue:`16821`).
+
+ .. ipython:: python
+
+ s = Series([1, 2, 3])
+
+ .. code-block:: python
+
+ In [5]: s[1] = True
+
+ In [6]: s
+ Out[6]:
+ 0 1
+ 1 1
+ 2 3
+ dtype: int64
+
+ New Behavior
+
+ .. ipython:: python
+
+ s[1] = True
+ s
+
+- Previously an assignment to a datetimelike with a non-datetimelike would coerce the
+ non-datetime-like item being assigned (:issue:`14145`).
+
+ .. ipython:: python
+
+ s = pd.Series([pd.Timestamp('2011-01-01'), pd.Timestamp('2012-01-01')])
+
+ .. code-block:: python
+
+ In [1]: s[1] = 1
+
+ In [2]: s
+ Out[2]:
+ 0 2011-01-01 00:00:00.000000000
+ 1 1970-01-01 00:00:00.000000001
+ dtype: datetime64[ns]
+
+ These now coerce to ``object`` dtype.
+
+ .. ipython:: python
+
+ s[1] = 1
+ s
+
+- Additional bug fixes w.r.t. dtype conversions.
+
+ - Inconsistent behavior in ``.where()`` with datetimelikes which would raise rather than coerce to ``object`` (:issue:`16402`)
+ - Bug in assignment against ``int64`` data with ``np.ndarray`` with ``float64`` dtype may keep ``int64`` dtype (:issue:`14001`)
+
.. _whatsnew_0210.api:
Other API Changes
@@ -185,6 +244,9 @@ Bug Fixes
Conversion
^^^^^^^^^^
+- Bug in assignment against datetime-like data with ``int`` which may incorrectly convert to datetime-like (:issue:`14145`)
+- Bug in assignment against ``int64`` data with a ``float64`` ``np.ndarray`` that may incorrectly keep the ``int64`` dtype (:issue:`14001`)
+
Indexing
^^^^^^^^
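As an aside on the whatsnew entries above: the reason a ``bool`` assignment forces an ``object`` upcast can be seen with plain numpy (this is an illustration only, not the pandas internals):

```python
import numpy as np

# numpy silently coerces a bool assigned into an int64 array, losing it
arr = np.array([1, 2, 3], dtype=np.int64)
arr[1] = True          # stored as 1; the bool is gone

# upcasting to object first preserves the bool, which is the behavior
# the whatsnew entries above describe
obj = np.array([1, 2, 3], dtype=np.int64).astype(object)
obj[1] = True          # True survives as a Python bool
```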
diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index 5e92c506b5d0c..273dc06886088 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -19,6 +19,7 @@ cimport tslib
from hashtable cimport *
from pandas._libs import tslib, algos, hashtable as _hash
from pandas._libs.tslib import Timestamp, Timedelta
+from datetime import datetime, timedelta
from datetime cimport (get_datetime64_value, _pydatetime_to_dts,
pandas_datetimestruct)
@@ -507,24 +508,37 @@ cdef class TimedeltaEngine(DatetimeEngine):
return 'm8[ns]'
cpdef convert_scalar(ndarray arr, object value):
+ # we don't turn integers
+ # into datetimes/timedeltas
+
+ # we don't turn bools into int/float/complex
+
if arr.descr.type_num == NPY_DATETIME:
if isinstance(value, np.ndarray):
pass
- elif isinstance(value, Timestamp):
- return value.value
+ elif isinstance(value, datetime):
+ return Timestamp(value).value
elif value is None or value != value:
return iNaT
- else:
+ elif util.is_string_object(value):
return Timestamp(value).value
+ raise ValueError("cannot set a Timestamp with a non-timestamp")
+
elif arr.descr.type_num == NPY_TIMEDELTA:
if isinstance(value, np.ndarray):
pass
- elif isinstance(value, Timedelta):
- return value.value
+ elif isinstance(value, timedelta):
+ return Timedelta(value).value
elif value is None or value != value:
return iNaT
- else:
+ elif util.is_string_object(value):
return Timedelta(value).value
+ raise ValueError("cannot set a Timedelta with a non-timedelta")
+
+ if (issubclass(arr.dtype.type, (np.integer, np.floating, np.complex)) and
+ not issubclass(arr.dtype.type, np.bool_)):
+ if util.is_bool_object(value):
+ raise ValueError('Cannot assign bool to float/integer series')
if issubclass(arr.dtype.type, (np.integer, np.bool_)):
if util.is_float_object(value) and value != value:
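The new ``convert_scalar`` validation above can be sketched in pure Python/numpy (a hedged approximation of the Cython code, not the actual implementation; the real version uses ``Timestamp``/``Timedelta`` and ``iNaT``):

```python
from datetime import datetime, timedelta
import numpy as np

def convert_scalar(arr, value):
    """Validate/convert `value` for insertion into `arr` (sketch only)."""
    if arr.dtype.kind == 'M':  # datetime64
        if isinstance(value, datetime):
            return np.datetime64(value, 'ns')
        if value is None or value != value:  # NaN-like
            return np.datetime64('NaT')
        if isinstance(value, str):
            return np.datetime64(value, 'ns')
        raise ValueError("cannot set a Timestamp with a non-timestamp")
    if arr.dtype.kind == 'm':  # timedelta64
        if isinstance(value, timedelta):
            return np.timedelta64(value, 'ns')
        if value is None or value != value:
            return np.timedelta64('NaT')
        raise ValueError("cannot set a Timedelta with a non-timedelta")
    # don't turn bools into int/float/complex
    if arr.dtype.kind in 'iufc' and isinstance(value, bool):
        raise ValueError('Cannot assign bool to float/integer series')
    return value
```

The key change mirrored here is that integers are no longer interpreted as nanosecond datetimes/timedeltas, and bools are rejected for numeric arrays.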
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index c471d46262484..44be9ba56b84a 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -14,6 +14,7 @@ cdef bint PY3 = (sys.version_info[0] >= 3)
from cpython cimport (
PyTypeObject,
PyFloat_Check,
+ PyComplex_Check,
PyLong_Check,
PyObject_RichCompareBool,
PyObject_RichCompare,
@@ -902,7 +903,7 @@ cdef inline bint _checknull_with_nat(object val):
cdef inline bint _check_all_nulls(object val):
""" utility to check if a value is any type of null """
cdef bint res
- if PyFloat_Check(val):
+ if PyFloat_Check(val) or PyComplex_Check(val):
res = val != val
elif val is NaT:
res = 1
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 79beb95d93ea1..b4831c0d7d16c 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -151,6 +151,12 @@ def _reconstruct_data(values, dtype, original):
pass
elif is_datetime64tz_dtype(dtype) or is_period_dtype(dtype):
values = Index(original)._shallow_copy(values, name=None)
+ elif is_bool_dtype(dtype):
+ values = values.astype(dtype)
+
+ # we only support object dtypes bool Index
+ if isinstance(original, Index):
+ values = values.astype(object)
elif dtype is not None:
values = values.astype(dtype)
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 6532e17695c86..22d98a89d68d6 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -273,7 +273,7 @@ def maybe_promote(dtype, fill_value=np.nan):
else:
if issubclass(dtype.type, np.datetime64):
try:
- fill_value = lib.Timestamp(fill_value).value
+ fill_value = tslib.Timestamp(fill_value).value
except:
# the proper thing to do here would probably be to upcast
# to object (but numpy 1.6.1 doesn't do this properly)
@@ -334,6 +334,23 @@ def maybe_promote(dtype, fill_value=np.nan):
return dtype, fill_value
+def infer_dtype_from(val, pandas_dtype=False):
+ """
+ interpret the dtype from a scalar or array. This is a convenience
+ routine to infer the dtype from a scalar or an array
+
+ Parameters
+ ----------
+ pandas_dtype : bool, default False
+ whether to infer the dtype including pandas extension types.
+ If False, a scalar/array belonging to a pandas extension type
+ is inferred as object
+ """
+ if is_scalar(val):
+ return infer_dtype_from_scalar(val, pandas_dtype=pandas_dtype)
+ return infer_dtype_from_array(val, pandas_dtype=pandas_dtype)
+
+
def infer_dtype_from_scalar(val, pandas_dtype=False):
"""
interpret the dtype from a scalar
@@ -350,9 +367,9 @@ def infer_dtype_from_scalar(val, pandas_dtype=False):
# a 1-element ndarray
if isinstance(val, np.ndarray):
+ msg = "invalid ndarray passed to _infer_dtype_from_scalar"
if val.ndim != 0:
- raise ValueError(
- "invalid ndarray passed to _infer_dtype_from_scalar")
+ raise ValueError(msg)
dtype = val.dtype
val = val.item()
@@ -409,24 +426,31 @@ def infer_dtype_from_scalar(val, pandas_dtype=False):
return dtype, val
-def infer_dtype_from_array(arr):
+def infer_dtype_from_array(arr, pandas_dtype=False):
"""
infer the dtype from a scalar or array
Parameters
----------
arr : scalar or array
+ pandas_dtype : bool, default False
+ whether to infer the dtype including pandas extension types.
+ If False, an array belonging to a pandas extension type
+ is inferred as object
Returns
-------
- tuple (numpy-compat dtype, array)
+ tuple (numpy-compat/pandas-compat dtype, array)
Notes
-----
- These infer to numpy dtypes exactly
- with the exception that mixed / object dtypes
+ if pandas_dtype=False, these infer to numpy dtypes
+ exactly, with the exception that mixed / object dtypes
are not coerced by stringifying or conversion
+ if pandas_dtype=True, datetime64tz-aware/categorical
+ types will retain their character.
+
Examples
--------
>>> np.asarray([1, '1'])
@@ -443,6 +467,12 @@ def infer_dtype_from_array(arr):
if not is_list_like(arr):
arr = [arr]
+ if pandas_dtype and is_extension_type(arr):
+ return arr.dtype, arr
+
+ elif isinstance(arr, ABCSeries):
+ return arr.dtype, np.asarray(arr)
+
# don't force numpy coerce with nan's
inferred = lib.infer_dtype(arr)
if inferred in ['string', 'bytes', 'unicode',
@@ -553,7 +583,7 @@ def conv(r, dtype):
if isnull(r):
pass
elif dtype == _NS_DTYPE:
- r = lib.Timestamp(r)
+ r = tslib.Timestamp(r)
elif dtype == _TD_DTYPE:
r = _coerce_scalar_to_timedelta_type(r)
elif dtype == np.bool_:
@@ -1027,3 +1057,31 @@ def find_common_type(types):
return np.object
return np.find_common_type(types, [])
+
+
+def cast_scalar_to_array(shape, value, dtype=None):
+ """
+ create np.ndarray of specified shape and dtype, filled with value
+
+ Parameters
+ ----------
+ shape : tuple
+ value : scalar value
+ dtype : np.dtype, optional
+ dtype to coerce
+
+ Returns
+ -------
+ ndarray of shape, filled with value, of specified / inferred dtype
+
+ """
+
+ if dtype is None:
+ dtype, fill_value = infer_dtype_from_scalar(value)
+ else:
+ fill_value = value
+
+ values = np.empty(shape, dtype=dtype)
+ values.fill(fill_value)
+
+ return values
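A numpy-only sketch of ``cast_scalar_to_array`` added above (the real helper calls ``infer_dtype_from_scalar`` when ``dtype`` is None; the inference here is a crude stand-in):

```python
import numpy as np

def cast_scalar_to_array(shape, value, dtype=None):
    # crude stand-in for infer_dtype_from_scalar when dtype is not given
    if dtype is None:
        dtype = np.array(value).dtype
    values = np.empty(shape, dtype=dtype)
    values.fill(value)
    return values
```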
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 114900ce802be..37f99bd344e6c 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -11,7 +11,8 @@
ExtensionDtype)
from .generic import (ABCCategorical, ABCPeriodIndex,
ABCDatetimeIndex, ABCSeries,
- ABCSparseArray, ABCSparseSeries, ABCCategoricalIndex)
+ ABCSparseArray, ABCSparseSeries, ABCCategoricalIndex,
+ ABCIndexClass)
from .inference import is_string_like
from .inference import * # noqa
@@ -1545,6 +1546,16 @@ def is_bool_dtype(arr_or_dtype):
except ValueError:
# this isn't even a dtype
return False
+
+ if isinstance(arr_or_dtype, ABCIndexClass):
+
+ # TODO(jreback)
+ # we don't have a boolean Index class
+ # so it's object, we need to infer to
+ # guess this
+ return (arr_or_dtype.is_object and
+ arr_or_dtype.inferred_type == 'boolean')
+
return issubclass(tipo, np.bool_)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index e554e136cdb80..41b2a8908f6f3 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -25,7 +25,8 @@
import numpy.ma as ma
from pandas.core.dtypes.cast import (
- maybe_upcast, infer_dtype_from_scalar,
+ maybe_upcast,
+ cast_scalar_to_array,
maybe_cast_to_datetime,
maybe_infer_to_datetimelike,
maybe_convert_platform,
@@ -59,6 +60,7 @@
is_named_tuple)
from pandas.core.dtypes.missing import isnull, notnull
+
from pandas.core.common import (_try_sort,
_default_index,
_values_from_object,
@@ -385,15 +387,10 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
raise_with_traceback(exc)
if arr.ndim == 0 and index is not None and columns is not None:
- if isinstance(data, compat.string_types) and dtype is None:
- dtype = np.object_
- if dtype is None:
- dtype, data = infer_dtype_from_scalar(data)
-
- values = np.empty((len(index), len(columns)), dtype=dtype)
- values.fill(data)
- mgr = self._init_ndarray(values, index, columns, dtype=dtype,
- copy=False)
+ values = cast_scalar_to_array((len(index), len(columns)),
+ data, dtype=dtype)
+ mgr = self._init_ndarray(values, index, columns,
+ dtype=values.dtype, copy=False)
else:
raise ValueError('DataFrame constructor not properly called!')
@@ -507,7 +504,7 @@ def _get_axes(N, K, index=index, columns=columns):
values = _prep_ndarray(values, copy=copy)
if dtype is not None:
- if values.dtype != dtype:
+ if not is_dtype_equal(values.dtype, dtype):
try:
values = values.astype(dtype)
except Exception as orig:
@@ -2689,9 +2686,8 @@ def reindexer(value):
else:
# upcast the scalar
- dtype, value = infer_dtype_from_scalar(value)
- value = np.repeat(value, len(self.index)).astype(dtype)
- value = maybe_cast_to_datetime(value, dtype)
+ value = cast_scalar_to_array(len(self.index), value)
+ value = maybe_cast_to_datetime(value, value.dtype)
# return internal types directly
if is_extension_type(value):
@@ -3676,7 +3672,8 @@ def reorder_levels(self, order, axis=0):
# ----------------------------------------------------------------------
# Arithmetic / combination related
- def _combine_frame(self, other, func, fill_value=None, level=None):
+ def _combine_frame(self, other, func, fill_value=None, level=None,
+ try_cast=True):
this, other = self.align(other, join='outer', level=level, copy=False)
new_index, new_columns = this.index, this.columns
@@ -3729,19 +3726,23 @@ def f(i):
copy=False)
def _combine_series(self, other, func, fill_value=None, axis=None,
- level=None):
+ level=None, try_cast=True):
if axis is not None:
axis = self._get_axis_name(axis)
if axis == 'index':
return self._combine_match_index(other, func, level=level,
- fill_value=fill_value)
+ fill_value=fill_value,
+ try_cast=try_cast)
else:
return self._combine_match_columns(other, func, level=level,
- fill_value=fill_value)
+ fill_value=fill_value,
+ try_cast=try_cast)
return self._combine_series_infer(other, func, level=level,
- fill_value=fill_value)
+ fill_value=fill_value,
+ try_cast=try_cast)
- def _combine_series_infer(self, other, func, level=None, fill_value=None):
+ def _combine_series_infer(self, other, func, level=None,
+ fill_value=None, try_cast=True):
if len(other) == 0:
return self * NA
@@ -3751,9 +3752,11 @@ def _combine_series_infer(self, other, func, level=None, fill_value=None):
columns=self.columns)
return self._combine_match_columns(other, func, level=level,
- fill_value=fill_value)
+ fill_value=fill_value,
+ try_cast=try_cast)
- def _combine_match_index(self, other, func, level=None, fill_value=None):
+ def _combine_match_index(self, other, func, level=None,
+ fill_value=None, try_cast=True):
left, right = self.align(other, join='outer', axis=0, level=level,
copy=False)
if fill_value is not None:
@@ -3763,7 +3766,8 @@ def _combine_match_index(self, other, func, level=None, fill_value=None):
index=left.index, columns=self.columns,
copy=False)
- def _combine_match_columns(self, other, func, level=None, fill_value=None):
+ def _combine_match_columns(self, other, func, level=None,
+ fill_value=None, try_cast=True):
left, right = self.align(other, join='outer', axis=1, level=level,
copy=False)
if fill_value is not None:
@@ -3771,15 +3775,17 @@ def _combine_match_columns(self, other, func, level=None, fill_value=None):
fill_value)
new_data = left._data.eval(func=func, other=right,
- axes=[left.columns, self.index])
+ axes=[left.columns, self.index],
+ try_cast=try_cast)
return self._constructor(new_data)
- def _combine_const(self, other, func, raise_on_error=True):
+ def _combine_const(self, other, func, raise_on_error=True, try_cast=True):
new_data = self._data.eval(func=func, other=other,
- raise_on_error=raise_on_error)
+ raise_on_error=raise_on_error,
+ try_cast=try_cast)
return self._constructor(new_data)
- def _compare_frame_evaluate(self, other, func, str_rep):
+ def _compare_frame_evaluate(self, other, func, str_rep, try_cast=True):
# unique
if self.columns.is_unique:
@@ -3803,16 +3809,18 @@ def _compare(a, b):
result.columns = self.columns
return result
- def _compare_frame(self, other, func, str_rep):
+ def _compare_frame(self, other, func, str_rep, try_cast=True):
if not self._indexed_same(other):
raise ValueError('Can only compare identically-labeled '
'DataFrame objects')
- return self._compare_frame_evaluate(other, func, str_rep)
+ return self._compare_frame_evaluate(other, func, str_rep,
+ try_cast=try_cast)
- def _flex_compare_frame(self, other, func, str_rep, level):
+ def _flex_compare_frame(self, other, func, str_rep, level, try_cast=True):
if not self._indexed_same(other):
self, other = self.align(other, 'outer', level=level, copy=False)
- return self._compare_frame_evaluate(other, func, str_rep)
+ return self._compare_frame_evaluate(other, func, str_rep,
+ try_cast=try_cast)
def combine(self, other, func, fill_value=None, overwrite=True):
"""
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b2083a4454f84..68416d85ca659 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -13,7 +13,6 @@
from pandas.core.dtypes.common import (
_ensure_int64,
_ensure_object,
- needs_i8_conversion,
is_scalar,
is_number,
is_integer, is_bool,
@@ -26,7 +25,8 @@
is_dict_like,
is_re_compilable,
pandas_dtype)
-from pandas.core.dtypes.cast import maybe_promote, maybe_upcast_putmask
+from pandas.core.dtypes.cast import (
+ maybe_promote, maybe_upcast_putmask)
from pandas.core.dtypes.missing import isnull, notnull
from pandas.core.dtypes.generic import ABCSeries, ABCPanel
@@ -5465,48 +5465,6 @@ def _where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
raise NotImplementedError("cannot align with a higher "
"dimensional NDFrame")
- elif is_list_like(other):
-
- if self.ndim == 1:
-
- # try to set the same dtype as ourselves
- try:
- new_other = np.array(other, dtype=self.dtype)
- except ValueError:
- new_other = np.array(other)
- except TypeError:
- new_other = other
-
- # we can end up comparing integers and m8[ns]
- # which is a numpy no no
- is_i8 = needs_i8_conversion(self.dtype)
- if is_i8:
- matches = False
- else:
- matches = (new_other == np.array(other))
-
- if matches is False or not matches.all():
-
- # coerce other to a common dtype if we can
- if needs_i8_conversion(self.dtype):
- try:
- other = np.array(other, dtype=self.dtype)
- except:
- other = np.array(other)
- else:
- other = np.asarray(other)
- other = np.asarray(other,
- dtype=np.common_type(other,
- new_other))
-
- # we need to use the new dtype
- try_quick = False
- else:
- other = new_other
- else:
-
- other = np.array(other)
-
if isinstance(other, np.ndarray):
if other.shape != self.shape:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 5d50f961927c7..6cb03c3467139 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -26,6 +26,7 @@
is_object_dtype,
is_categorical_dtype,
is_interval_dtype,
+ is_bool,
is_bool_dtype,
is_signed_integer_dtype,
is_unsigned_integer_dtype,
@@ -610,9 +611,18 @@ def repeat(self, repeats, *args, **kwargs):
def where(self, cond, other=None):
if other is None:
other = self._na_value
- values = np.where(cond, self.values, other)
dtype = self.dtype
+ values = self.values
+
+ if is_bool(other) or is_bool_dtype(other):
+
+ # bools force casting
+ values = values.astype(object)
+ dtype = None
+
+ values = np.where(cond, values, other)
+
if self._is_numeric_dtype and np.any(isnull(values)):
# We can't coerce to the numeric dtype of "self" (unless
# it's float) if there are NaN values in our output.
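The ``Index.where`` change above casts to ``object`` before calling ``np.where`` when ``other`` is a bool; this numpy illustration (not the pandas code) shows why:

```python
import numpy as np

values = np.array([1, 2, 3], dtype=np.int64)
cond = np.array([True, False, True])

# without the cast, numpy's type promotion collapses the bool to 1
coerced = np.where(cond, values, True)

# casting to object first keeps the bool intact
preserved = np.where(cond, values.astype(object), True)
```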
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index 72d521cbe2d60..142e0f36c66ec 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -2,9 +2,14 @@
from pandas._libs import (index as libindex,
algos as libalgos, join as libjoin)
from pandas.core.dtypes.common import (
- is_dtype_equal, pandas_dtype,
- is_float_dtype, is_object_dtype,
- is_integer_dtype, is_scalar)
+ is_dtype_equal,
+ pandas_dtype,
+ is_float_dtype,
+ is_object_dtype,
+ is_integer_dtype,
+ is_bool,
+ is_bool_dtype,
+ is_scalar)
from pandas.core.common import _asarray_tuplesafe, _values_from_object
from pandas import compat
@@ -56,6 +61,16 @@ def _maybe_cast_slice_bound(self, label, side, kind):
# we will try to coerce to integers
return self._maybe_cast_indexer(label)
+ def _convert_for_op(self, value):
+ """ Convert value to be insertable to ndarray """
+
+ if is_bool(value) or is_bool_dtype(value):
+ # force conversion to object
+ # so we don't lose the bools
+ raise TypeError
+
+ return value
+
def _convert_tolerance(self, tolerance):
try:
return float(tolerance)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index f2a7ac76481d4..8f3667edf68e6 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -1,4 +1,5 @@
import copy
+from warnings import catch_warnings
import itertools
import re
import operator
@@ -22,6 +23,7 @@
is_categorical, is_categorical_dtype,
is_integer_dtype,
is_datetime64tz_dtype,
+ is_bool_dtype,
is_object_dtype,
is_datetimelike_v_numeric,
is_float_dtype, is_numeric_dtype,
@@ -33,21 +35,21 @@
_get_dtype)
from pandas.core.dtypes.cast import (
maybe_downcast_to_dtype,
- maybe_convert_string_to_object,
maybe_upcast,
- maybe_convert_scalar, maybe_promote,
+ maybe_promote,
+ infer_dtype_from,
infer_dtype_from_scalar,
soft_convert_objects,
maybe_convert_objects,
astype_nansafe,
find_common_type)
from pandas.core.dtypes.missing import (
- isnull, array_equivalent,
+ isnull, notnull, array_equivalent,
_is_na_compat,
is_null_datelike_scalar)
import pandas.core.dtypes.concat as _concat
-from pandas.core.dtypes.generic import ABCSeries
+from pandas.core.dtypes.generic import ABCSeries, ABCDatetimeIndex
from pandas.core.common import is_null_slice
import pandas.core.algorithms as algos
@@ -169,11 +171,6 @@ def get_values(self, dtype=None):
def to_dense(self):
return self.values.view()
- def to_object_block(self, mgr):
- """ return myself as an object block """
- values = self.get_values(dtype=object)
- return self.make_block(values, klass=ObjectBlock)
-
@property
def _na_value(self):
return np.nan
@@ -374,7 +371,6 @@ def fillna(self, value, limit=None, inplace=False, downcast=None,
else:
return self.copy()
- original_value = value
mask = isnull(self.values)
if limit is not None:
if not is_integer(limit):
@@ -388,7 +384,7 @@ def fillna(self, value, limit=None, inplace=False, downcast=None,
# fillna, but if we cannot coerce, then try again as an ObjectBlock
try:
- values, _, value, _ = self._try_coerce_args(self.values, value)
+ values, _, _, _ = self._try_coerce_args(self.values, value)
blocks = self.putmask(mask, value, inplace=inplace)
blocks = [b.make_block(values=self._try_coerce_result(b.values))
for b in blocks]
@@ -399,12 +395,82 @@ def fillna(self, value, limit=None, inplace=False, downcast=None,
if not mask.any():
return self if inplace else self.copy()
- # we cannot coerce the underlying object, so
- # make an ObjectBlock
- return self.to_object_block(mgr=mgr).fillna(original_value,
- limit=limit,
- inplace=inplace,
- downcast=False)
+ # operate column-by-column
+ def f(m, v, i):
+ block = self.coerce_to_target_dtype(value)
+
+ # slice out our block
+ if i is not None:
+ block = block.getitem_block(slice(i, i + 1))
+ return block.fillna(value,
+ limit=limit,
+ inplace=inplace,
+ downcast=None)
+
+ return self.split_and_operate(mask, f, inplace)
+
+ def split_and_operate(self, mask, f, inplace):
+ """
+ split the block per-column, and apply the callable f
+ per-column, return a new block for each. Handle
+ masking which will not change a block unless needed.
+
+ Parameters
+ ----------
+ mask : 2-d boolean mask
+ f : callable accepting (1d-mask, 1d values, indexer)
+ inplace : boolean
+
+ Returns
+ -------
+ list of blocks
+ """
+
+ if mask is None:
+ mask = np.ones(self.shape, dtype=bool)
+ new_values = self.values
+
+ def make_a_block(nv, ref_loc):
+ if isinstance(nv, Block):
+ block = nv
+ elif isinstance(nv, list):
+ block = nv[0]
+ else:
+ # Put back the dimension that was taken from it and make
+ # a block out of the result.
+ try:
+ nv = _block_shape(nv, ndim=self.ndim)
+ except (AttributeError, NotImplementedError):
+ pass
+ block = self.make_block(values=nv,
+ placement=ref_loc, fastpath=True)
+ return block
+
+ # ndim == 1
+ if self.ndim == 1:
+ if mask.any():
+ nv = f(mask, new_values, None)
+ else:
+ nv = new_values if inplace else new_values.copy()
+ block = make_a_block(nv, self.mgr_locs)
+ return [block]
+
+ # ndim > 1
+ new_blocks = []
+ for i, ref_loc in enumerate(self.mgr_locs):
+ m = mask[i]
+ v = new_values[i]
+
+ # need a new block
+ if m.any():
+ nv = f(m, v, i)
+ else:
+ nv = v if inplace else v.copy()
+
+ block = make_a_block(nv, [ref_loc])
+ new_blocks.append(block)
+
+ return new_blocks
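The ``split_and_operate`` pattern introduced above can be sketched standalone (hypothetical simplified version; the real method also handles the 1-d case, block construction, and reshaping):

```python
import numpy as np

def split_and_operate(values, mask, f, inplace=False):
    """Apply f column-by-column where the mask hits; else reuse/copy."""
    out = []
    for i in range(values.shape[0]):
        m, v = mask[i], values[i]
        if m.any():
            out.append(f(m, v, i))       # column needs work
        else:
            out.append(v if inplace else v.copy())
    return out

vals = np.array([[1., 2.], [3., 4.]])
msk = np.array([[True, False], [False, False]])
cols = split_and_operate(vals, msk, lambda m, v, i: np.where(m, 0.0, v))
```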
def _maybe_downcast(self, blocks, downcast=None):
@@ -415,6 +481,8 @@ def _maybe_downcast(self, blocks, downcast=None):
elif downcast is None and (self.is_timedelta or self.is_datetime):
return blocks
+ if not isinstance(blocks, list):
+ blocks = [blocks]
return _extend_blocks([b.downcast(downcast) for b in blocks])
def downcast(self, dtypes=None, mgr=None):
@@ -444,27 +512,20 @@ def downcast(self, dtypes=None, mgr=None):
raise ValueError("downcast must have a dictionary or 'infer' as "
"its argument")
- # item-by-item
+ # operate column-by-column
# this is expensive as it splits the blocks items-by-item
- blocks = []
- for i, rl in enumerate(self.mgr_locs):
+ def f(m, v, i):
if dtypes == 'infer':
dtype = 'infer'
else:
raise AssertionError("dtypes as dict is not supported yet")
- # TODO: This either should be completed or removed
- dtype = dtypes.get(item, self._downcast_dtype) # noqa
- if dtype is None:
- nv = _block_shape(values[i], ndim=self.ndim)
- else:
- nv = maybe_downcast_to_dtype(values[i], dtype)
- nv = _block_shape(nv, ndim=self.ndim)
+ if dtype is not None:
+ v = maybe_downcast_to_dtype(v, dtype)
+ return v
- blocks.append(self.make_block(nv, fastpath=True, placement=[rl]))
-
- return blocks
+ return self.split_and_operate(None, f, False)
def astype(self, dtype, copy=False, errors='raise', values=None, **kwargs):
return self._astype(dtype, copy=copy, errors=errors, values=values,
@@ -545,11 +606,14 @@ def convert(self, copy=True, **kwargs):
return self.copy() if copy else self
- def _can_hold_element(self, value):
- raise NotImplementedError()
-
- def _try_cast(self, value):
- raise NotImplementedError()
+ def _can_hold_element(self, element):
+ """ require the same dtype as ourselves """
+ dtype = self.values.dtype.type
+ if is_list_like(element):
+ element = np.asarray(element)
+ tipo = element.dtype.type
+ return issubclass(tipo, dtype)
+ return isinstance(element, dtype)
def _try_cast_result(self, result, dtype=None):
""" try to cast the result to our original type, we may have
@@ -584,12 +648,16 @@ def _try_cast_result(self, result, dtype=None):
# may need to change the dtype here
return maybe_downcast_to_dtype(result, dtype)
- def _try_operate(self, values):
- """ return a version to operate on as the input """
- return values
-
def _try_coerce_args(self, values, other):
""" provide coercion to our input arguments """
+
+ if np.any(notnull(other)) and not self._can_hold_element(other):
+ # coercion issues
+ # let higher levels handle
+ raise TypeError("cannot convert {} to an {}".format(
+ type(other).__name__,
+ type(self).__name__.lower().replace('Block', '')))
+
return values, False, other, False
def _try_coerce_result(self, result):
@@ -601,9 +669,6 @@ def _try_coerce_and_cast_result(self, result, dtype=None):
result = self._try_cast_result(result, dtype=dtype)
return result
- def _try_fill(self, value):
- return value
-
def to_native_types(self, slicer=None, na_rep='nan', quoting=None,
**kwargs):
""" convert to our native types format, slicing if desired """
@@ -639,7 +704,7 @@ def replace(self, to_replace, value, inplace=False, filter=None,
inplace = validate_bool_kwarg(inplace, 'inplace')
original_to_replace = to_replace
- mask = isnull(self.values)
+
# try to replace, if we raise an error, convert to ObjectBlock and
# retry
try:
@@ -657,11 +722,9 @@ def replace(self, to_replace, value, inplace=False, filter=None,
return blocks
except (TypeError, ValueError):
- # we can't process the value, but nothing to do
- if not mask.any():
- return self if inplace else self.copy()
-
- return self.to_object_block(mgr=mgr).replace(
+ # try again with a compatible block
+ block = self.astype(object)
+ return block.replace(
to_replace=original_to_replace, value=value, inplace=inplace,
filter=filter, regex=regex, convert=convert)
@@ -676,14 +739,48 @@ def setitem(self, indexer, value, mgr=None):
indexer is a direct slice/positional indexer; value must be a
compatible shape
"""
-
# coerce None values, if appropriate
if value is None:
if self.is_numeric:
value = np.nan
- # coerce args
- values, _, value, _ = self._try_coerce_args(self.values, value)
+ # coerce if block dtype can store value
+ values = self.values
+ try:
+ values, _, value, _ = self._try_coerce_args(values, value)
+ # can keep its own dtype
+ if hasattr(value, 'dtype') and is_dtype_equal(values.dtype,
+ value.dtype):
+ dtype = self.dtype
+ else:
+ dtype = 'infer'
+
+ except (TypeError, ValueError):
+ # current dtype cannot store value, coerce to common dtype
+ find_dtype = False
+
+ if hasattr(value, 'dtype'):
+ dtype = value.dtype
+ find_dtype = True
+
+ elif is_scalar(value):
+ if isnull(value):
+ # NaN promotion is handled in a later path
+ dtype = False
+ else:
+ dtype, _ = infer_dtype_from_scalar(value,
+ pandas_dtype=True)
+ find_dtype = True
+ else:
+ dtype = 'infer'
+
+ if find_dtype:
+ dtype = find_common_type([values.dtype, dtype])
+ if not is_dtype_equal(self.dtype, dtype):
+ b = self.astype(dtype)
+ return b.setitem(indexer, value, mgr=mgr)
+
+ # value must be storeable at this moment
arr_value = np.array(value)
# cast the values to a type that can hold nan (if necessary)
@@ -713,87 +810,58 @@ def setitem(self, indexer, value, mgr=None):
raise ValueError("cannot set using a slice indexer with a "
"different length than the value")
- try:
-
- def _is_scalar_indexer(indexer):
- # return True if we are all scalar indexers
+ def _is_scalar_indexer(indexer):
+ # return True if we are all scalar indexers
- if arr_value.ndim == 1:
- if not isinstance(indexer, tuple):
- indexer = tuple([indexer])
- return all([is_scalar(idx) for idx in indexer])
- return False
-
- def _is_empty_indexer(indexer):
- # return a boolean if we have an empty indexer
-
- if arr_value.ndim == 1:
- if not isinstance(indexer, tuple):
- indexer = tuple([indexer])
+ if arr_value.ndim == 1:
+ if not isinstance(indexer, tuple):
+ indexer = tuple([indexer])
return any(isinstance(idx, np.ndarray) and len(idx) == 0
for idx in indexer)
- return False
-
- # empty indexers
- # 8669 (empty)
- if _is_empty_indexer(indexer):
- pass
-
- # setting a single element for each dim and with a rhs that could
- # be say a list
- # GH 6043
- elif _is_scalar_indexer(indexer):
- values[indexer] = value
-
- # if we are an exact match (ex-broadcasting),
- # then use the resultant dtype
- elif (len(arr_value.shape) and
- arr_value.shape[0] == values.shape[0] and
- np.prod(arr_value.shape) == np.prod(values.shape)):
- values[indexer] = value
- values = values.astype(arr_value.dtype)
-
- # set
- else:
- values[indexer] = value
+ return False
- # coerce and try to infer the dtypes of the result
- if hasattr(value, 'dtype') and is_dtype_equal(values.dtype,
- value.dtype):
- dtype = value.dtype
- elif is_scalar(value):
- dtype, _ = infer_dtype_from_scalar(value)
- else:
- dtype = 'infer'
- values = self._try_coerce_and_cast_result(values, dtype)
- block = self.make_block(transf(values), fastpath=True)
+ def _is_empty_indexer(indexer):
+ # return a boolean if we have an empty indexer
- # may have to soft convert_objects here
- if block.is_object and not self.is_object:
- block = block.convert(numeric=False)
+ if is_list_like(indexer) and not len(indexer):
+ return True
+ if arr_value.ndim == 1:
+ if not isinstance(indexer, tuple):
+ indexer = tuple([indexer])
+ return any(isinstance(idx, np.ndarray) and len(idx) == 0
+ for idx in indexer)
+ return False
- return block
- except ValueError:
- raise
- except TypeError:
+ # empty indexers
+ # 8669 (empty)
+ if _is_empty_indexer(indexer):
+ pass
- # cast to the passed dtype if possible
- # otherwise raise the original error
+ # setting a single element for each dim and with a rhs that could
+ # be say a list
+ # GH 6043
+ elif _is_scalar_indexer(indexer):
+ values[indexer] = value
+
+ # if we are an exact match (ex-broadcasting),
+ # then use the resultant dtype
+ elif (len(arr_value.shape) and
+ arr_value.shape[0] == values.shape[0] and
+ np.prod(arr_value.shape) == np.prod(values.shape)):
+ values[indexer] = value
try:
- # e.g. we are uint32 and our value is uint64
- # this is for compat with older numpies
- block = self.make_block(transf(values.astype(value.dtype)))
- return block.setitem(indexer=indexer, value=value, mgr=mgr)
-
- except:
+ values = values.astype(arr_value.dtype)
+ except ValueError:
pass
- raise
-
- except Exception:
- pass
+ # set
+ else:
+ values[indexer] = value
- return [self]
+ # coerce and try to infer the dtypes of the result
+ values = self._try_coerce_and_cast_result(values, dtype)
+ block = self.make_block(transf(values), fastpath=True)
+ return block
def putmask(self, mask, new, align=True, inplace=False, axis=0,
transpose=False, mgr=None):
@@ -830,11 +898,11 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0,
new = self.fill_value
if self._can_hold_element(new):
+ _, _, new, _ = self._try_coerce_args(new_values, new)
+
if transpose:
new_values = new_values.T
- new = self._try_cast(new)
-
# If the default repeat behavior in np.putmask would go in the
# wrong direction, then explictly repeat and reshape new instead
if getattr(new, 'ndim', 0) >= 1:
@@ -843,6 +911,23 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0,
new, new_values.shape[-1]).reshape(self.shape)
new = new.astype(new_values.dtype)
+ # we require exact matches between the len of the
+ # values we are setting (or is compat). np.putmask
+ # doesn't check this and will simply truncate / pad
+ # the output, but we want sane error messages
+ #
+ # TODO: this prob needs some better checking
+ # for 2D cases
+ if ((is_list_like(new) and
+ np.any(mask[mask]) and
+ getattr(new, 'ndim', 1) == 1)):
+
+ if not (mask.shape[-1] == len(new) or
+ mask[mask].shape[-1] == len(new) or
+ len(new) == 1):
+ raise ValueError("cannot assign mismatch "
+ "length to masked array")
+
np.putmask(new_values, mask, new)
# maybe upcast me
@@ -860,41 +945,29 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0,
new_shape.insert(axis, 1)
new = new.reshape(tuple(new_shape))
- # need to go column by column
- new_blocks = []
- if self.ndim > 1:
- for i, ref_loc in enumerate(self.mgr_locs):
- m = mask[i]
- v = new_values[i]
-
- # need a new block
- if m.any():
- if isinstance(new, np.ndarray):
- n = np.squeeze(new[i % new.shape[0]])
- else:
- n = np.array(new)
-
- # type of the new block
- dtype, _ = maybe_promote(n.dtype)
+ # operate column-by-column
+ def f(m, v, i):
- # we need to explicitly astype here to make a copy
- n = n.astype(dtype)
+ if i is None:
+ # ndim==1 case.
+ n = new
+ else:
- nv = _putmask_smart(v, m, n)
+ if isinstance(new, np.ndarray):
+ n = np.squeeze(new[i % new.shape[0]])
else:
- nv = v if inplace else v.copy()
+ n = np.array(new)
- # Put back the dimension that was taken from it and make
- # a block out of the result.
- block = self.make_block(values=nv[np.newaxis],
- placement=[ref_loc], fastpath=True)
+ # type of the new block
+ dtype, _ = maybe_promote(n.dtype)
- new_blocks.append(block)
+ # we need to explicitly astype here to make a copy
+ n = n.astype(dtype)
- else:
- nv = _putmask_smart(new_values, mask, new)
- new_blocks.append(self.make_block(values=nv, fastpath=True))
+ nv = _putmask_smart(v, m, n)
+ return nv
+ new_blocks = self.split_and_operate(mask, f, inplace)
return new_blocks
if inplace:
@@ -905,6 +978,67 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0,
return [self.make_block(new_values, fastpath=True)]
+ def coerce_to_target_dtype(self, other):
+ """
+ coerce the current block to a dtype compat for other
+ we will return a block, possibly object, and not raise
+
+ we can also safely try to coerce to the same dtype
+ and will receive the same block
+ """
+
+ # if we cannot then coerce to object
+ dtype, _ = infer_dtype_from(other, pandas_dtype=True)
+
+ if is_dtype_equal(self.dtype, dtype):
+ return self
+
+ if self.is_bool or is_object_dtype(dtype) or is_bool_dtype(dtype):
+ # we don't upcast to bool
+ return self.astype(object)
+
+ elif ((self.is_float or self.is_complex) and
+ (is_integer_dtype(dtype) or is_float_dtype(dtype))):
+ # don't coerce float/complex to int
+ return self
+
+ elif (self.is_datetime or
+ is_datetime64_dtype(dtype) or
+ is_datetime64tz_dtype(dtype)):
+
+ # not a datetime
+ if not ((is_datetime64_dtype(dtype) or
+ is_datetime64tz_dtype(dtype)) and self.is_datetime):
+ return self.astype(object)
+
+ # don't upcast timezone with different timezone or no timezone
+ mytz = getattr(self.dtype, 'tz', None)
+ othertz = getattr(dtype, 'tz', None)
+
+ if str(mytz) != str(othertz):
+ return self.astype(object)
+
+ raise AssertionError("possible recursion in "
+ "coerce_to_target_dtype: {} {}".format(
+ self, other))
+
+ elif (self.is_timedelta or is_timedelta64_dtype(dtype)):
+
+ # not a timedelta
+ if not (is_timedelta64_dtype(dtype) and self.is_timedelta):
+ return self.astype(object)
+
+ raise AssertionError("possible recursion in "
+ "coerce_to_target_dtype: {} {}".format(
+ self, other))
+
+ try:
+ return self.astype(dtype)
+ except (ValueError, TypeError):
+ pass
+
+ return self.astype(object)
+
def interpolate(self, method='pad', axis=0, index=None, values=None,
inplace=False, limit=None, limit_direction='forward',
fill_value=None, coerce=False, downcast=None, mgr=None,
@@ -972,7 +1106,6 @@ def _interpolate_with_fill(self, method='pad', axis=0, inplace=False,
values = self.values if inplace else self.values.copy()
values, _, fill_value, _ = self._try_coerce_args(values, fill_value)
- values = self._try_operate(values)
values = missing.interpolate_2d(values, method=method, axis=axis,
limit=limit, fill_value=fill_value,
dtype=self.dtype)
@@ -1111,6 +1244,7 @@ def eval(self, func, other, raise_on_error=True, try_cast=False, mgr=None):
-------
a new block, the result of the func
"""
+ orig_other = other
values = self.values
if hasattr(other, 'reindex_axis'):
@@ -1135,8 +1269,14 @@ def eval(self, func, other, raise_on_error=True, try_cast=False, mgr=None):
transf = (lambda x: x.T) if is_transposed else (lambda x: x)
# coerce/transpose the args if needed
- values, values_mask, other, other_mask = self._try_coerce_args(
- transf(values), other)
+ try:
+ values, values_mask, other, other_mask = self._try_coerce_args(
+ transf(values), other)
+ except TypeError:
+ block = self.coerce_to_target_dtype(orig_other)
+ return block.eval(func, orig_other,
+ raise_on_error=raise_on_error,
+ try_cast=try_cast, mgr=mgr)
# get the result, may need to transpose the other
def get_result(other):
@@ -1163,7 +1303,7 @@ def get_result(other):
result = result.astype('float64', copy=False)
result[other_mask.ravel()] = np.nan
- return self._try_coerce_result(result)
+ return result
# error handler if we have an issue operating with the function
def handle_error():
@@ -1211,6 +1351,7 @@ def handle_error():
if try_cast:
result = self._try_cast_result(result)
+ result = _block_shape(result, ndim=self.ndim)
return [self.make_block(result, fastpath=True, )]
def where(self, other, cond, align=True, raise_on_error=True,
@@ -1233,8 +1374,8 @@ def where(self, other, cond, align=True, raise_on_error=True,
-------
a new block(s), the result of the func
"""
-
values = self.values
+ orig_other = other
if transpose:
values = values.T
@@ -1254,9 +1395,6 @@ def where(self, other, cond, align=True, raise_on_error=True,
raise ValueError("where must have a condition that is ndarray "
"like")
- other = maybe_convert_string_to_object(other)
- other = maybe_convert_scalar(other)
-
# our where function
def func(cond, values, other):
if cond.ravel().all():
@@ -1264,6 +1402,7 @@ def func(cond, values, other):
values, values_mask, other, other_mask = self._try_coerce_args(
values, other)
+
try:
return self._try_coerce_result(expressions.where(
cond, values, other, raise_on_error=True))
@@ -1279,7 +1418,19 @@ def func(cond, values, other):
# see if we can operate on the entire block, or need item-by-item
# or if we are a single block (ndim == 1)
- result = func(cond, values, other)
+ try:
+ result = func(cond, values, other)
+ except TypeError:
+
+            # we cannot coerce, return a compatible dtype
+            # we are explicitly ignoring raise_on_error here
+ block = self.coerce_to_target_dtype(other)
+ blocks = block.where(orig_other, cond, align=align,
+ raise_on_error=raise_on_error,
+ try_cast=try_cast, axis=axis,
+ transpose=transpose)
+ return self._maybe_downcast(blocks, 'infer')
+
if self._can_hold_na or self.ndim == 1:
if transpose:
@@ -1543,6 +1694,7 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0,
new = new[mask]
mask = _safe_reshape(mask, new_values.shape)
+
new_values[mask] = new
new_values = self._try_coerce_result(new_values)
return [self.make_block(values=new_values)]
@@ -1578,20 +1730,14 @@ class FloatBlock(FloatOrComplexBlock):
def _can_hold_element(self, element):
if is_list_like(element):
- element = np.array(element)
+ element = np.asarray(element)
tipo = element.dtype.type
return (issubclass(tipo, (np.floating, np.integer)) and
not issubclass(tipo, (np.datetime64, np.timedelta64)))
- return (isinstance(element, (float, int, np.float_, np.int_)) and
+ return (isinstance(element, (float, int, np.floating, np.int_)) and
not isinstance(element, (bool, np.bool_, datetime, timedelta,
np.datetime64, np.timedelta64)))
- def _try_cast(self, element):
- try:
- return float(element)
- except: # pragma: no cover
- return element
-
def to_native_types(self, slicer=None, na_rep='', float_format=None,
decimal='.', quoting=None, **kwargs):
""" convert to our native types format, slicing if desired """
@@ -1639,13 +1785,7 @@ def _can_hold_element(self, element):
(np.floating, np.integer, np.complexfloating))
return (isinstance(element,
(float, int, complex, np.float_, np.int_)) and
- not isinstance(bool, np.bool_))
-
- def _try_cast(self, element):
- try:
- return complex(element)
- except: # pragma: no cover
- return element
+ not isinstance(element, (bool, np.bool_)))
def should_store(self, value):
return issubclass(value.dtype.type, np.complexfloating)
@@ -1661,15 +1801,10 @@ def _can_hold_element(self, element):
element = np.array(element)
tipo = element.dtype.type
return (issubclass(tipo, np.integer) and
- not issubclass(tipo, (np.datetime64, np.timedelta64)))
+ not issubclass(tipo, (np.datetime64, np.timedelta64)) and
+ self.dtype.itemsize >= element.dtype.itemsize)
return is_integer(element)
- def _try_cast(self, element):
- try:
- return int(element)
- except: # pragma: no cover
- return element
-
def should_store(self, value):
return is_integer_dtype(value) and value.dtype == self.dtype
@@ -1684,10 +1819,6 @@ def _na_value(self):
def fill_value(self):
return tslib.iNaT
- def _try_operate(self, values):
- """ return a version to operate on """
- return values.view('i8')
-
def get_values(self, dtype=None):
"""
return object dtype as boxed values, such as Timestamps/Timedelta
@@ -1708,11 +1839,18 @@ class TimeDeltaBlock(DatetimeLikeBlockMixin, IntBlock):
def _box_func(self):
return lambda x: tslib.Timedelta(x, unit='ns')
+ def _can_hold_element(self, element):
+ if is_list_like(element):
+ element = np.array(element)
+ tipo = element.dtype.type
+ return issubclass(tipo, np.timedelta64)
+ return isinstance(element, (timedelta, np.timedelta64))
+
def fillna(self, value, **kwargs):
# allow filling with integers to be
# interpreted as seconds
- if not isinstance(value, np.timedelta64) and is_integer(value):
+ if is_integer(value) and not isinstance(value, np.timedelta64):
value = Timedelta(value, unit='s')
return super(TimeDeltaBlock, self).fillna(value, **kwargs)
@@ -1743,19 +1881,18 @@ def _try_coerce_args(self, values, other):
elif isinstance(other, Timedelta):
other_mask = isnull(other)
other = other.value
+ elif isinstance(other, timedelta):
+ other = Timedelta(other).value
elif isinstance(other, np.timedelta64):
other_mask = isnull(other)
other = Timedelta(other).value
- elif isinstance(other, timedelta):
- other = Timedelta(other).value
- elif isinstance(other, np.ndarray):
+ elif hasattr(other, 'dtype') and is_timedelta64_dtype(other):
other_mask = isnull(other)
other = other.astype('i8', copy=False).view('i8')
else:
- # scalar
- other = Timedelta(other)
- other_mask = isnull(other)
- other = other.value
+ # coercion issues
+ # let higher levels handle
+ raise TypeError
return values, values_mask, other, other_mask
@@ -1805,15 +1942,9 @@ class BoolBlock(NumericBlock):
def _can_hold_element(self, element):
if is_list_like(element):
- element = np.array(element)
- return issubclass(element.dtype.type, np.integer)
- return isinstance(element, (int, bool))
-
- def _try_cast(self, element):
- try:
- return bool(element)
- except: # pragma: no cover
- return element
+ element = np.asarray(element)
+ return issubclass(element.dtype.type, np.bool_)
+ return isinstance(element, (bool, np.bool_))
def should_store(self, value):
return issubclass(value.dtype.type, np.bool_)
@@ -1881,31 +2012,24 @@ def convert(self, *args, **kwargs):
if key in kwargs:
fn_kwargs[key] = kwargs[key]
- # attempt to create new type blocks
- blocks = []
- if by_item and not self._is_single_block:
-
- for i, rl in enumerate(self.mgr_locs):
- values = self.iget(i)
+ # operate column-by-column
+ def f(m, v, i):
+ shape = v.shape
+ values = fn(v.ravel(), **fn_kwargs)
+ try:
+ values = values.reshape(shape)
+ values = _block_shape(values, ndim=self.ndim)
+ except (AttributeError, NotImplementedError):
+ pass
- shape = values.shape
- values = fn(values.ravel(), **fn_kwargs)
- try:
- values = values.reshape(shape)
- values = _block_shape(values, ndim=self.ndim)
- except (AttributeError, NotImplementedError):
- pass
- newb = make_block(values, ndim=self.ndim, placement=[rl])
- blocks.append(newb)
+ return values
+ if by_item and not self._is_single_block:
+ blocks = self.split_and_operate(None, f, False)
else:
- values = fn(self.values.ravel(), **fn_kwargs)
- try:
- values = values.reshape(self.values.shape)
- except NotImplementedError:
- pass
- blocks.append(make_block(values, ndim=self.ndim,
- placement=self.mgr_locs))
+ values = f(None, self.values.ravel(), None)
+ blocks = [make_block(values, ndim=self.ndim,
+ placement=self.mgr_locs)]
return blocks
@@ -1949,8 +2073,14 @@ def _maybe_downcast(self, blocks, downcast=None):
def _can_hold_element(self, element):
return True
- def _try_cast(self, element):
- return element
+ def _try_coerce_args(self, values, other):
+ """ provide coercion to our input arguments """
+
+ if isinstance(other, ABCDatetimeIndex):
+ # to store DatetimeTZBlock as object
+ other = other.asobject.values
+
+ return values, False, other, False
def should_store(self, value):
return not (issubclass(value.dtype.type,
@@ -2249,12 +2379,6 @@ def _can_hold_element(self, element):
return (is_integer(element) or isinstance(element, datetime) or
isnull(element))
- def _try_cast(self, element):
- try:
- return int(element)
- except:
- return element
-
def _try_coerce_args(self, values, other):
"""
Coerce values and other to dtype 'i8'. NaN and NaT convert to
@@ -2288,19 +2412,13 @@ def _try_coerce_args(self, values, other):
"naive Block")
other_mask = isnull(other)
other = other.asm8.view('i8')
- elif hasattr(other, 'dtype') and is_integer_dtype(other):
- other = other.view('i8')
+ elif hasattr(other, 'dtype') and is_datetime64_dtype(other):
+ other_mask = isnull(other)
+ other = other.astype('i8', copy=False).view('i8')
else:
- try:
- other = np.asarray(other)
- other_mask = isnull(other)
-
- other = other.astype('i8', copy=False).view('i8')
- except ValueError:
-
- # coercion issues
- # let higher levels handle
- raise TypeError
+ # coercion issues
+ # let higher levels handle
+ raise TypeError
return values, values_mask, other, other_mask
@@ -2400,21 +2518,6 @@ def get_values(self, dtype=None):
self.values.ravel(), f).reshape(self.values.shape)
return self.values
- def to_object_block(self, mgr):
- """
- return myself as an object block
-
- Since we keep the DTI as a 1-d object, this is different
- depends on BlockManager's ndim
- """
- values = self.get_values(dtype=object)
- kwargs = {}
- if mgr.ndim > 1:
- values = _block_shape(values, ndim=mgr.ndim)
- kwargs['ndim'] = mgr.ndim
- kwargs['placement'] = [0]
- return self.make_block(values, klass=ObjectBlock, **kwargs)
-
def _slice(self, slicer):
""" return a slice of my values """
if isinstance(slicer, tuple):
@@ -2466,6 +2569,8 @@ def _try_coerce_args(self, values, other):
raise ValueError("incompatible or non tz-aware value")
other_mask = isnull(other)
other = other.value
+ else:
+ raise TypeError
return values, values_mask, other, other_mask
@@ -3246,16 +3351,6 @@ def comp(s):
return isnull(values)
return _maybe_compare(values, getattr(s, 'asm8', s), operator.eq)
- def _cast_scalar(block, scalar):
- dtype, val = infer_dtype_from_scalar(scalar, pandas_dtype=True)
- if not is_dtype_equal(block.dtype, dtype):
- dtype = find_common_type([block.dtype, dtype])
- block = block.astype(dtype)
- # use original value
- val = scalar
-
- return block, val
-
masks = [comp(s) for i, s in enumerate(src_list)]
result_blocks = []
@@ -3278,8 +3373,8 @@ def _cast_scalar(block, scalar):
# particular block
m = masks[i][b.mgr_locs.indexer]
if m.any():
- b, val = _cast_scalar(b, d)
- new_rb.extend(b.putmask(m, val, inplace=True))
+ b = b.coerce_to_target_dtype(d)
+ new_rb.extend(b.putmask(m, d, inplace=True))
else:
new_rb.append(b)
rb = new_rb
@@ -4757,17 +4852,30 @@ def _transform_index(index, func, level=None):
def _putmask_smart(v, m, n):
"""
- Return a new block, try to preserve dtype if possible.
+ Return a new ndarray, try to preserve dtype if possible.
Parameters
----------
v : `values`, updated in-place (array like)
m : `mask`, applies to both sides (array like)
n : `new values` either scalar or an array like aligned with `values`
+
+ Returns
+ -------
+ values : ndarray with updated values
+ this *may* be a copy of the original
+
+ See Also
+ --------
+ ndarray.putmask
"""
+
+    # we cannot use np.asarray() here as we cannot allow the conversions
+    # that numpy performs when numeric values are mixed with strings
+
# n should be the length of the mask or a scalar here
if not is_list_like(n):
- n = np.array([n] * len(m))
+ n = np.repeat(n, len(m))
elif isinstance(n, np.ndarray) and n.ndim == 0: # numpy scalar
n = np.repeat(np.array(n, ndmin=1), len(m))
@@ -4781,10 +4889,21 @@ def _putmask_smart(v, m, n):
if not _is_na_compat(v, nn[0]):
raise ValueError
- nn_at = nn.astype(v.dtype)
+ # we ignore ComplexWarning here
+ with catch_warnings(record=True):
+ nn_at = nn.astype(v.dtype)
# avoid invalid dtype comparisons
- if not is_numeric_v_string_like(nn, nn_at):
+ # between numbers & strings
+
+ # only compare integers/floats
+ # don't compare integers to datetimelikes
+ if (not is_numeric_v_string_like(nn, nn_at) and
+ (is_float_dtype(nn.dtype) or
+ is_integer_dtype(nn.dtype) and
+ is_float_dtype(nn_at.dtype) or
+ is_integer_dtype(nn_at.dtype))):
+
comp = (nn == nn_at)
if is_list_like(comp) and comp.all():
nv = v.copy()
@@ -4793,21 +4912,28 @@ def _putmask_smart(v, m, n):
except (ValueError, IndexError, TypeError):
pass
- # change the dtype
+ n = np.asarray(n)
+
+ def _putmask_preserve(nv, n):
+ try:
+ nv[m] = n[m]
+ except (IndexError, ValueError):
+ nv[m] = n
+ return nv
+
+ # preserves dtype if possible
+ if v.dtype.kind == n.dtype.kind:
+ return _putmask_preserve(v, n)
+
+ # change the dtype if needed
dtype, _ = maybe_promote(n.dtype)
if is_extension_type(v.dtype) and is_object_dtype(dtype):
- nv = v.get_values(dtype)
+ v = v.get_values(dtype)
else:
- nv = v.astype(dtype)
+ v = v.astype(dtype)
- try:
- nv[m] = n[m]
- except ValueError:
- idx, = np.where(np.squeeze(m))
- for mask_index, new_val in zip(idx, n[m]):
- nv[mask_index] = new_val
- return nv
+ return _putmask_preserve(v, n)
def concatenate_block_managers(mgrs_indexers, axes, concat_axis, copy):
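
The length check added to `Block.putmask` above guards against `np.putmask`'s silent recycling/truncation of the replacement values. A standalone sketch of the same guard (plain numpy, not pandas internals; `checked_putmask` is a made-up helper name for illustration):

```python
import numpy as np

def checked_putmask(values, mask, new):
    # Reject length mismatches up front: np.putmask would otherwise
    # silently recycle or truncate `new`. Mirrors the check in the
    # diff: len(new) must equal the mask length, the number of True
    # entries, or 1.
    new = np.asarray(new)
    if new.ndim == 1 and mask.any():
        if len(new) not in (len(mask), int(mask.sum()), 1):
            raise ValueError("cannot assign mismatch "
                             "length to masked array")
    np.putmask(values, mask, new)
    return values

arr = np.arange(5)
mask = np.array([True, False, True, False, False])
checked_putmask(arr, mask, [10, 20, 30, 40, 50])
# arr is now [10, 1, 30, 3, 4]: np.putmask takes new[n] at each flat
# position n where mask is True
```

Note that `np.putmask` indexes `new` by the flat position in `values` (modulo `len(new)`), not by the count of masked elements, which is exactly why an explicit length check produces saner behavior than silent recycling.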
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 55473ec8d7cad..017afcd691194 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1278,12 +1278,14 @@ def f(self, other, axis=default_axis, level=None):
other = _align_method_FRAME(self, other, axis)
if isinstance(other, pd.DataFrame): # Another DataFrame
- return self._flex_compare_frame(other, na_op, str_rep, level)
+ return self._flex_compare_frame(other, na_op, str_rep, level,
+ try_cast=False)
elif isinstance(other, ABCSeries):
- return self._combine_series(other, na_op, None, axis, level)
+ return self._combine_series(other, na_op, None, axis, level,
+ try_cast=False)
else:
- return self._combine_const(other, na_op)
+ return self._combine_const(other, na_op, try_cast=False)
f.__name__ = name
@@ -1296,12 +1298,14 @@ def f(self, other):
if isinstance(other, pd.DataFrame): # Another DataFrame
return self._compare_frame(other, func, str_rep)
elif isinstance(other, ABCSeries):
- return self._combine_series_infer(other, func)
+ return self._combine_series_infer(other, func, try_cast=False)
else:
             # straight boolean comparisons we want to allow all columns
             # (regardless of dtype, to pass through). See #4537 for discussion.
- res = self._combine_const(other, func, raise_on_error=False)
+ res = self._combine_const(other, func,
+ raise_on_error=False,
+ try_cast=False)
return res.fillna(True).astype(bool)
f.__name__ = name
@@ -1381,13 +1385,13 @@ def f(self, other, axis=None):
axis = self._get_axis_number(axis)
if isinstance(other, self._constructor):
- return self._compare_constructor(other, na_op)
+ return self._compare_constructor(other, na_op, try_cast=False)
elif isinstance(other, (self._constructor_sliced, pd.DataFrame,
ABCSeries)):
raise Exception("input needs alignment for this object [%s]" %
self._constructor)
else:
- return self._combine_const(other, na_op)
+ return self._combine_const(other, na_op, try_cast=False)
f.__name__ = name
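
The `try_cast=False` threading above exists because comparison results are boolean and must not be cast back to the operands' dtype, while arithmetic results may be. A minimal illustration of that design choice (the function name and signature here are illustrative, not the pandas API):

```python
import numpy as np

def combine_const(values, other, func, try_cast=True):
    # apply func elementwise; optionally cast the result back to the
    # input dtype -- the behavior the ops.py hunk disables for
    # comparison methods
    result = func(values, other)
    if try_cast and result.dtype != values.dtype:
        try:
            result = result.astype(values.dtype)
        except (TypeError, ValueError):
            pass
    return result

vals = np.array([1.0, 2.0, 3.0])
combine_const(vals, 1, np.add)                      # arithmetic: float64
combine_const(vals, 2.0, np.equal, try_cast=False)  # comparison stays bool
combine_const(vals, 2.0, np.equal, try_cast=True)   # bool coerced to 0.0/1.0
```

With `try_cast=True` a comparison's boolean result would be silently converted to floats, which is why the flex-comparison wrappers pass `try_cast=False`.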
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 69a8468552f54..609bf3186344a 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -8,6 +8,7 @@
import warnings
from pandas.core.dtypes.cast import (
infer_dtype_from_scalar,
+ cast_scalar_to_array,
maybe_cast_item)
from pandas.core.dtypes.common import (
is_integer, is_list_like,
@@ -178,11 +179,9 @@ def _init_data(self, data, copy, dtype, **kwargs):
copy = False
dtype = None
elif is_scalar(data) and all(x is not None for x in passed_axes):
- if dtype is None:
- dtype, data = infer_dtype_from_scalar(data)
- values = np.empty([len(x) for x in passed_axes], dtype=dtype)
- values.fill(data)
- mgr = self._init_matrix(values, passed_axes, dtype=dtype,
+ values = cast_scalar_to_array([len(x) for x in passed_axes],
+ data, dtype=dtype)
+ mgr = self._init_matrix(values, passed_axes, dtype=values.dtype,
copy=False)
copy = False
else: # pragma: no cover
@@ -327,7 +326,7 @@ def _init_matrix(self, data, axes, dtype=None, copy=False):
# ----------------------------------------------------------------------
# Comparison methods
- def _compare_constructor(self, other, func):
+ def _compare_constructor(self, other, func, try_cast=True):
if not self._indexed_same(other):
raise Exception('Can only compare identically-labeled '
'same type objects')
@@ -584,9 +583,7 @@ def __setitem__(self, key, value):
shape[1:], tuple(map(int, value.shape))))
mat = np.asarray(value)
elif is_scalar(value):
- dtype, value = infer_dtype_from_scalar(value)
- mat = np.empty(shape[1:], dtype=dtype)
- mat.fill(value)
+ mat = cast_scalar_to_array(shape[1:], value)
else:
raise TypeError('Cannot set item of type: %s' % str(type(value)))
@@ -719,13 +716,13 @@ def _combine(self, other, func, axis=0):
"operation with %s" %
(str(type(other)), str(type(self))))
- def _combine_const(self, other, func):
+ def _combine_const(self, other, func, try_cast=True):
with np.errstate(all='ignore'):
new_values = func(self.values, other)
d = self._construct_axes_dict()
return self._constructor(new_values, **d)
- def _combine_frame(self, other, func, axis=0):
+ def _combine_frame(self, other, func, axis=0, try_cast=True):
index, columns = self._get_plane_axes(axis)
axis = self._get_axis_number(axis)
@@ -744,7 +741,7 @@ def _combine_frame(self, other, func, axis=0):
return self._constructor(new_values, self.items, self.major_axis,
self.minor_axis)
- def _combine_panel(self, other, func):
+ def _combine_panel(self, other, func, try_cast=True):
items = self.items.union(other.items)
major = self.major_axis.union(other.major_axis)
minor = self.minor_axis.union(other.minor_axis)
diff --git a/pandas/core/sparse/frame.py b/pandas/core/sparse/frame.py
index 5fe96d70fc16f..462fb18618949 100644
--- a/pandas/core/sparse/frame.py
+++ b/pandas/core/sparse/frame.py
@@ -500,7 +500,8 @@ def xs(self, key, axis=0, copy=False):
# ----------------------------------------------------------------------
# Arithmetic-related methods
- def _combine_frame(self, other, func, fill_value=None, level=None):
+ def _combine_frame(self, other, func, fill_value=None, level=None,
+ try_cast=True):
this, other = self.align(other, join='outer', level=level, copy=False)
new_index, new_columns = this.index, this.columns
@@ -543,7 +544,8 @@ def _combine_frame(self, other, func, fill_value=None, level=None):
default_fill_value=new_fill_value
).__finalize__(self)
- def _combine_match_index(self, other, func, level=None, fill_value=None):
+ def _combine_match_index(self, other, func, level=None, fill_value=None,
+ try_cast=True):
new_data = {}
if fill_value is not None:
@@ -573,7 +575,8 @@ def _combine_match_index(self, other, func, level=None, fill_value=None):
new_data, index=new_index, columns=self.columns,
default_fill_value=fill_value).__finalize__(self)
- def _combine_match_columns(self, other, func, level=None, fill_value=None):
+ def _combine_match_columns(self, other, func, level=None, fill_value=None,
+ try_cast=True):
# patched version of DataFrame._combine_match_columns to account for
# NumPy circumventing __rsub__ with float64 types, e.g.: 3.0 - series,
# where 3.0 is numpy.float64 and series is a SparseSeries. Still
@@ -599,7 +602,7 @@ def _combine_match_columns(self, other, func, level=None, fill_value=None):
new_data, index=self.index, columns=union,
default_fill_value=self.default_fill_value).__finalize__(self)
- def _combine_const(self, other, func, raise_on_error=True):
+ def _combine_const(self, other, func, raise_on_error=True, try_cast=True):
return self._apply_columns(lambda x: func(x, other))
def _reindex_index(self, index, method, copy, level, fill_value=np.nan,
diff --git a/pandas/tests/dtypes/test_cast.py b/pandas/tests/dtypes/test_cast.py
index 6e07487b3e04f..d9fb458c83529 100644
--- a/pandas/tests/dtypes/test_cast.py
+++ b/pandas/tests/dtypes/test_cast.py
@@ -9,11 +9,14 @@
from datetime import datetime, timedelta, date
import numpy as np
-from pandas import Timedelta, Timestamp, DatetimeIndex, DataFrame, NaT, Series
+import pandas as pd
+from pandas import (Timedelta, Timestamp, DatetimeIndex,
+ DataFrame, NaT, Period, Series)
from pandas.core.dtypes.cast import (
maybe_downcast_to_dtype,
maybe_convert_objects,
+ cast_scalar_to_array,
infer_dtype_from_scalar,
infer_dtype_from_array,
maybe_convert_string_to_object,
@@ -23,6 +26,8 @@
CategoricalDtype,
DatetimeTZDtype,
PeriodDtype)
+from pandas.core.dtypes.common import (
+ is_dtype_equal)
from pandas.util import testing as tm
@@ -96,8 +101,8 @@ def test_datetime_with_timezone(self):
class TestInferDtype(object):
- def test_infer_dtype_from_scalar(self):
- # Test that _infer_dtype_from_scalar is returning correct dtype for int
+    def test_infer_dtype_from_scalar(self):
+        # Test that infer_dtype_from_scalar is returning correct dtype for int
# and float.
for dtypec in [np.uint8, np.int8, np.uint16, np.int16, np.uint32,
@@ -137,29 +142,93 @@ def test_infer_dtype_from_scalar(self):
dtype, val = infer_dtype_from_scalar(data)
assert dtype == 'm8[ns]'
+ for tz in ['UTC', 'US/Eastern', 'Asia/Tokyo']:
+ dt = Timestamp(1, tz=tz)
+ dtype, val = infer_dtype_from_scalar(dt, pandas_dtype=True)
+ assert dtype == 'datetime64[ns, {0}]'.format(tz)
+ assert val == dt.value
+
+ dtype, val = infer_dtype_from_scalar(dt)
+ assert dtype == np.object_
+ assert val == dt
+
+ for freq in ['M', 'D']:
+ p = Period('2011-01-01', freq=freq)
+ dtype, val = infer_dtype_from_scalar(p, pandas_dtype=True)
+ assert dtype == 'period[{0}]'.format(freq)
+ assert val == p.ordinal
+
+ dtype, val = infer_dtype_from_scalar(p)
+            assert dtype == np.object_
+ assert val == p
+
+ # misc
for data in [date(2000, 1, 1),
Timestamp(1, tz='US/Eastern'), 'foo']:
+
dtype, val = infer_dtype_from_scalar(data)
assert dtype == np.object_
+    def test_infer_dtype_from_scalar_errors(self):
+ with pytest.raises(ValueError):
+ infer_dtype_from_scalar(np.array([1]))
+
@pytest.mark.parametrize(
- "arr, expected",
- [('foo', np.object_),
- (b'foo', np.object_),
- (1, np.int_),
- (1.5, np.float_),
- ([1], np.int_),
- (np.array([1]), np.int_),
- ([np.nan, 1, ''], np.object_),
- (np.array([[1.0, 2.0]]), np.float_),
- (Timestamp('20160101'), np.object_),
- (np.datetime64('2016-01-01'), np.dtype('<M8[D]')),
- ])
- def test_infer_dtype_from_array(self, arr, expected):
-
- # these infer specifically to numpy dtypes
- dtype, _ = infer_dtype_from_array(arr)
- assert dtype == expected
+ "arr, expected, pandas_dtype",
+ [('foo', np.object_, False),
+ (b'foo', np.object_, False),
+ (1, np.int_, False),
+ (1.5, np.float_, False),
+ ([1], np.int_, False),
+ (np.array([1], dtype=np.int64), np.int64, False),
+ ([np.nan, 1, ''], np.object_, False),
+ (np.array([[1.0, 2.0]]), np.float_, False),
+ (pd.Categorical(list('aabc')), np.object_, False),
+ (pd.Categorical([1, 2, 3]), np.int64, False),
+ (pd.Categorical(list('aabc')), 'category', True),
+ (pd.Categorical([1, 2, 3]), 'category', True),
+ (Timestamp('20160101'), np.object_, False),
+ (np.datetime64('2016-01-01'), np.dtype('<M8[D]'), False),
+ (pd.date_range('20160101', periods=3),
+ np.dtype('<M8[ns]'), False),
+ (pd.date_range('20160101', periods=3, tz='US/Eastern'),
+ 'datetime64[ns, US/Eastern]', True),
+ (pd.Series([1., 2, 3]), np.float64, False),
+ (pd.Series(list('abc')), np.object_, False),
+ (pd.Series(pd.date_range('20160101', periods=3, tz='US/Eastern')),
+ 'datetime64[ns, US/Eastern]', True)])
+ def test_infer_dtype_from_array(self, arr, expected, pandas_dtype):
+
+ dtype, _ = infer_dtype_from_array(arr, pandas_dtype=pandas_dtype)
+ assert is_dtype_equal(dtype, expected)
+
+ def test_cast_scalar_to_array(self):
+ arr = cast_scalar_to_array((3, 2), 1, dtype=np.int64)
+ exp = np.ones((3, 2), dtype=np.int64)
+ tm.assert_numpy_array_equal(arr, exp)
+
+ arr = cast_scalar_to_array((3, 2), 1.1)
+ exp = np.empty((3, 2), dtype=np.float64)
+ exp.fill(1.1)
+ tm.assert_numpy_array_equal(arr, exp)
+
+ arr = cast_scalar_to_array((2, 3), Timestamp('2011-01-01'))
+ exp = np.empty((2, 3), dtype='datetime64[ns]')
+ exp.fill(np.datetime64('2011-01-01'))
+ tm.assert_numpy_array_equal(arr, exp)
+
+ # pandas dtype is stored as object dtype
+ obj = Timestamp('2011-01-01', tz='US/Eastern')
+ arr = cast_scalar_to_array((2, 3), obj)
+ exp = np.empty((2, 3), dtype=np.object)
+ exp.fill(obj)
+ tm.assert_numpy_array_equal(arr, exp)
+
+ obj = Period('2011-01-01', freq='D')
+ arr = cast_scalar_to_array((2, 3), obj)
+ exp = np.empty((2, 3), dtype=np.object)
+ exp.fill(obj)
+ tm.assert_numpy_array_equal(arr, exp)
class TestMaybe(object):
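
The `cast_scalar_to_array` tests above pin down the helper's contract: infer a dtype from the scalar when none is supplied, fall back to object dtype for pandas-specific scalars (tz-aware `Timestamp`, `Period`), and return an array filled with the value. A rough numpy-only stand-in (an approximation; it omits the pandas-scalar-to-object handling):

```python
import numpy as np

def cast_scalar_to_array_sketch(shape, value, dtype=None):
    # infer a dtype from the scalar if none is given (1 -> integer,
    # 1.1 -> float64); the real helper additionally maps pandas
    # scalars like tz-aware Timestamp and Period to object dtype
    if dtype is None:
        dtype = np.asarray(value).dtype
    arr = np.empty(shape, dtype=dtype)
    arr.fill(value)
    return arr

cast_scalar_to_array_sketch((3, 2), 1, dtype=np.int64)  # int64 ones
cast_scalar_to_array_sketch((3, 2), 1.1)                # float64, filled 1.1
```

This is the same empty-then-fill pattern the Panel code switches to via `cast_scalar_to_array` earlier in the diff, replacing the inline `infer_dtype_from_scalar` + `np.empty` + `fill` sequence.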
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 290cdd732b6d6..24d6854530ccc 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -493,10 +493,12 @@ def test_is_bool_dtype():
assert not com.is_bool_dtype(str)
assert not com.is_bool_dtype(pd.Series([1, 2]))
assert not com.is_bool_dtype(np.array(['a', 'b']))
+ assert not com.is_bool_dtype(pd.Index(['a', 'b']))
assert com.is_bool_dtype(bool)
assert com.is_bool_dtype(np.bool)
assert com.is_bool_dtype(np.array([True, False]))
+ assert com.is_bool_dtype(pd.Index([True, False]))
def test_is_extension_type():
diff --git a/pandas/tests/dtypes/test_convert.py b/pandas/tests/dtypes/test_convert.py
deleted file mode 100644
index e69de29bb2d1d..0000000000000
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index 90993890b7553..ea4f5da04a271 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
+import pytest
from warnings import catch_warnings
import numpy as np
from datetime import datetime
@@ -11,6 +12,7 @@
from pandas._libs.tslib import iNaT
from pandas import (NaT, Float64Index, Series,
DatetimeIndex, TimedeltaIndex, date_range)
+from pandas.core.dtypes.common import is_scalar
from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas.core.dtypes.missing import (
array_equivalent, isnull, notnull,
@@ -161,6 +163,23 @@ def test_isnull_datetime(self):
exp = np.zeros(len(mask), dtype=bool)
tm.assert_numpy_array_equal(mask, exp)
+ @pytest.mark.parametrize(
+ "value, expected",
+ [(np.complex128(np.nan), True),
+ (np.float64(1), False),
+ (np.array([1, 1 + 0j, np.nan, 3]),
+ np.array([False, False, True, False])),
+ (np.array([1, 1 + 0j, np.nan, 3], dtype=object),
+ np.array([False, False, True, False])),
+ (np.array([1, 1 + 0j, np.nan, 3]).astype(object),
+ np.array([False, False, True, False]))])
+ def test_complex(self, value, expected):
+ result = isnull(value)
+ if is_scalar(result):
+ assert result is expected
+ else:
+ tm.assert_numpy_array_equal(result, expected)
+
def test_datetime_other_units(self):
idx = pd.DatetimeIndex(['2011-01-01', 'NaT', '2011-01-02'])
exp = np.array([False, True, False])
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index f0503b60eeefa..ff79bedbc60f6 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -2522,14 +2522,15 @@ def test_where_dataframe_col_match(self):
df = DataFrame([[1, 2, 3], [4, 5, 6]])
cond = DataFrame([[True, False, True], [False, False, True]])
- out = df.where(cond)
+ result = df.where(cond)
expected = DataFrame([[1.0, np.nan, 3], [np.nan, np.nan, 6]])
- tm.assert_frame_equal(out, expected)
+ tm.assert_frame_equal(result, expected)
- cond.columns = ["a", "b", "c"] # Columns no longer match.
- msg = "Boolean array expected for the condition"
- with tm.assert_raises_regex(ValueError, msg):
- df.where(cond)
+ # this *does* align, though has no matching columns
+ cond.columns = ["a", "b", "c"]
+ result = df.where(cond)
+ expected = DataFrame(np.nan, index=df.index, columns=df.columns)
+ tm.assert_frame_equal(result, expected)
def test_where_ndframe_align(self):
msg = "Array conditional must be same shape as self"
@@ -2712,7 +2713,7 @@ def test_where_axis(self):
result.where(mask, s, axis='index', inplace=True)
assert_frame_equal(result, expected)
- expected = DataFrame([[0, np.nan], [0, np.nan]], dtype='float64')
+ expected = DataFrame([[0, np.nan], [0, np.nan]])
result = df.where(mask, s, axis='columns')
assert_frame_equal(result, expected)
@@ -2723,17 +2724,18 @@ def test_where_axis(self):
assert_frame_equal(result, expected)
# Multiple dtypes (=> multiple Blocks)
- df = pd.concat([DataFrame(np.random.randn(10, 2)),
- DataFrame(np.random.randint(0, 10, size=(10, 2)))],
- ignore_index=True, axis=1)
+ df = pd.concat([
+ DataFrame(np.random.randn(10, 2)),
+ DataFrame(np.random.randint(0, 10, size=(10, 2)), dtype='int64')],
+ ignore_index=True, axis=1)
mask = DataFrame(False, columns=df.columns, index=df.index)
s1 = Series(1, index=df.columns)
s2 = Series(2, index=df.index)
result = df.where(mask, s1, axis='columns')
expected = DataFrame(1.0, columns=df.columns, index=df.index)
- expected[2] = expected[2].astype(int)
- expected[3] = expected[3].astype(int)
+ expected[2] = expected[2].astype('int64')
+ expected[3] = expected[3].astype('int64')
assert_frame_equal(result, expected)
result = df.copy()
@@ -2742,8 +2744,8 @@ def test_where_axis(self):
result = df.where(mask, s2, axis='index')
expected = DataFrame(2.0, columns=df.columns, index=df.index)
- expected[2] = expected[2].astype(int)
- expected[3] = expected[3].astype(int)
+ expected[2] = expected[2].astype('int64')
+ expected[3] = expected[3].astype('int64')
assert_frame_equal(result, expected)
result = df.copy()
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index 8ec6c6e6263d8..438d7481ecc3e 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -188,6 +188,7 @@ def test_timestamp_compare(self):
df.loc[np.random.rand(len(df)) > 0.5, 'dates2'] = pd.NaT
ops = {'gt': 'lt', 'lt': 'gt', 'ge': 'le', 'le': 'ge', 'eq': 'eq',
'ne': 'ne'}
+
for left, right in ops.items():
left_f = getattr(operator, left)
right_f = getattr(operator, right)
@@ -315,14 +316,12 @@ def _check_unary_op(op):
# operator.neg is deprecated in numpy >= 1.9
_check_unary_op(operator.inv)
- def test_logical_typeerror(self):
- if not compat.PY3:
- pytest.raises(TypeError, self.frame.__eq__, 'foo')
- pytest.raises(TypeError, self.frame.__lt__, 'foo')
- pytest.raises(TypeError, self.frame.__gt__, 'foo')
- pytest.raises(TypeError, self.frame.__ne__, 'foo')
- else:
- pytest.skip('test_logical_typeerror not tested on PY3')
+ @pytest.mark.parametrize('op,res', [('__eq__', False),
+ ('__ne__', True)])
+ def test_logical_typeerror_with_non_valid(self, op, res):
+ # we are comparing floats vs a string
+ result = getattr(self.frame, op)('foo')
+ assert bool(result.all().all()) is res
def test_logical_with_nas(self):
d = DataFrame({'a': [np.nan, False], 'b': [True, True]})
@@ -832,9 +831,11 @@ def test_combineSeries(self):
assert 'E' in larger_added
assert np.isnan(larger_added['E']).all()
- # vs mix (upcast) as needed
+ # no upcast needed
added = self.mixed_float + series
- _check_mixed_float(added, dtype='float64')
+ _check_mixed_float(added)
+
+ # vs mix (upcast) as needed
added = self.mixed_float + series.astype('float32')
_check_mixed_float(added, dtype=dict(C=None))
added = self.mixed_float + series.astype('float16')
diff --git a/pandas/tests/indexing/test_coercion.py b/pandas/tests/indexing/test_coercion.py
index 25cc810299678..752d2deb53304 100644
--- a/pandas/tests/indexing/test_coercion.py
+++ b/pandas/tests/indexing/test_coercion.py
@@ -101,9 +101,22 @@ def test_setitem_series_int64(self):
exp = pd.Series([1, 1 + 1j, 3, 4])
self._assert_setitem_series_conversion(obj, 1 + 1j, exp, np.complex128)
- # int + bool -> int
- exp = pd.Series([1, 1, 3, 4])
- self._assert_setitem_series_conversion(obj, True, exp, np.int64)
+ # int + bool -> object
+ exp = pd.Series([1, True, 3, 4])
+ self._assert_setitem_series_conversion(obj, True, exp, np.object)
+
+ def test_setitem_series_int8(self):
+ # integer dtype coercion (no change)
+ obj = pd.Series([1, 2, 3, 4], dtype=np.int8)
+ assert obj.dtype == np.int8
+
+ exp = pd.Series([1, 1, 3, 4], dtype=np.int8)
+ self._assert_setitem_series_conversion(obj, np.int32(1), exp, np.int8)
+
+ # BUG: it must be Series([1, 1, 3, 4], dtype=np.int16)
+ exp = pd.Series([1, 0, 3, 4], dtype=np.int8)
+ self._assert_setitem_series_conversion(obj, np.int16(2**9), exp,
+ np.int8)
def test_setitem_series_float64(self):
obj = pd.Series([1.1, 2.2, 3.3, 4.4])
@@ -122,9 +135,9 @@ def test_setitem_series_float64(self):
self._assert_setitem_series_conversion(obj, 1 + 1j, exp,
np.complex128)
- # float + bool -> float
- exp = pd.Series([1.1, 1.0, 3.3, 4.4])
- self._assert_setitem_series_conversion(obj, True, exp, np.float64)
+ # float + bool -> object
+ exp = pd.Series([1.1, True, 3.3, 4.4])
+ self._assert_setitem_series_conversion(obj, True, exp, np.object)
def test_setitem_series_complex128(self):
obj = pd.Series([1 + 1j, 2 + 2j, 3 + 3j, 4 + 4j])
@@ -132,7 +145,7 @@ def test_setitem_series_complex128(self):
# complex + int -> complex
exp = pd.Series([1 + 1j, 1, 3 + 3j, 4 + 4j])
- self._assert_setitem_series_conversion(obj, True, exp, np.complex128)
+ self._assert_setitem_series_conversion(obj, 1, exp, np.complex128)
# complex + float -> complex
exp = pd.Series([1 + 1j, 1.1, 3 + 3j, 4 + 4j])
@@ -142,9 +155,9 @@ def test_setitem_series_complex128(self):
exp = pd.Series([1 + 1j, 1 + 1j, 3 + 3j, 4 + 4j])
self._assert_setitem_series_conversion(obj, 1 + 1j, exp, np.complex128)
- # complex + bool -> complex
- exp = pd.Series([1 + 1j, 1, 3 + 3j, 4 + 4j])
- self._assert_setitem_series_conversion(obj, True, exp, np.complex128)
+ # complex + bool -> object
+ exp = pd.Series([1 + 1j, True, 3 + 3j, 4 + 4j])
+ self._assert_setitem_series_conversion(obj, True, exp, np.object)
def test_setitem_series_bool(self):
obj = pd.Series([True, False, True, False])
@@ -198,14 +211,18 @@ def test_setitem_series_datetime64(self):
exp, 'datetime64[ns]')
# datetime64 + int -> object
- # ToDo: The result must be object
exp = pd.Series([pd.Timestamp('2011-01-01'),
- pd.Timestamp(1),
+ 1,
pd.Timestamp('2011-01-03'),
pd.Timestamp('2011-01-04')])
- self._assert_setitem_series_conversion(obj, 1, exp, 'datetime64[ns]')
+ self._assert_setitem_series_conversion(obj, 1, exp, 'object')
- # ToDo: add more tests once the above issue has been fixed
+ # datetime64 + object -> object
+ exp = pd.Series([pd.Timestamp('2011-01-01'),
+ 'x',
+ pd.Timestamp('2011-01-03'),
+ pd.Timestamp('2011-01-04')])
+ self._assert_setitem_series_conversion(obj, 'x', exp, np.object)
def test_setitem_series_datetime64tz(self):
tz = 'US/Eastern'
@@ -224,19 +241,59 @@ def test_setitem_series_datetime64tz(self):
self._assert_setitem_series_conversion(obj, value, exp,
'datetime64[ns, US/Eastern]')
+ # datetime64tz + datetime64tz (different tz) -> object
+ exp = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
+ pd.Timestamp('2012-01-01', tz='US/Pacific'),
+ pd.Timestamp('2011-01-03', tz=tz),
+ pd.Timestamp('2011-01-04', tz=tz)])
+ value = pd.Timestamp('2012-01-01', tz='US/Pacific')
+ self._assert_setitem_series_conversion(obj, value, exp, np.object)
+
+ # datetime64tz + datetime64 -> object
+ exp = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
+ pd.Timestamp('2012-01-01'),
+ pd.Timestamp('2011-01-03', tz=tz),
+ pd.Timestamp('2011-01-04', tz=tz)])
+ value = pd.Timestamp('2012-01-01')
+ self._assert_setitem_series_conversion(obj, value, exp, np.object)
+
# datetime64 + int -> object
- # ToDo: The result must be object
exp = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
- pd.Timestamp(1, tz=tz),
+ 1,
pd.Timestamp('2011-01-03', tz=tz),
pd.Timestamp('2011-01-04', tz=tz)])
- self._assert_setitem_series_conversion(obj, 1, exp,
- 'datetime64[ns, US/Eastern]')
+ self._assert_setitem_series_conversion(obj, 1, exp, np.object)
# ToDo: add more tests once the above issue has been fixed
def test_setitem_series_timedelta64(self):
- pass
+ obj = pd.Series([pd.Timedelta('1 day'),
+ pd.Timedelta('2 day'),
+ pd.Timedelta('3 day'),
+ pd.Timedelta('4 day')])
+ assert obj.dtype == 'timedelta64[ns]'
+
+ # timedelta64 + timedelta64 -> timedelta64
+ exp = pd.Series([pd.Timedelta('1 day'),
+ pd.Timedelta('12 day'),
+ pd.Timedelta('3 day'),
+ pd.Timedelta('4 day')])
+ self._assert_setitem_series_conversion(obj, pd.Timedelta('12 day'),
+ exp, 'timedelta64[ns]')
+
+ # timedelta64 + int -> object
+ exp = pd.Series([pd.Timedelta('1 day'),
+ 1,
+ pd.Timedelta('3 day'),
+ pd.Timedelta('4 day')])
+ self._assert_setitem_series_conversion(obj, 1, exp, np.object)
+
+ # timedelta64 + object -> object
+ exp = pd.Series([pd.Timedelta('1 day'),
+ 'x',
+ pd.Timedelta('3 day'),
+ pd.Timedelta('4 day')])
+ self._assert_setitem_series_conversion(obj, 'x', exp, np.object)
def test_setitem_series_period(self):
pass
@@ -610,13 +667,13 @@ def _where_int64_common(self, klass):
self._assert_where_conversion(obj, cond, values, exp,
np.complex128)
- # int + bool -> int
- exp = klass([1, 1, 3, 1])
- self._assert_where_conversion(obj, cond, True, exp, np.int64)
+ # int + bool -> object
+ exp = klass([1, True, 3, True])
+ self._assert_where_conversion(obj, cond, True, exp, np.object)
values = klass([True, False, True, True])
- exp = klass([1, 0, 3, 1])
- self._assert_where_conversion(obj, cond, values, exp, np.int64)
+ exp = klass([1, False, 3, True])
+ self._assert_where_conversion(obj, cond, values, exp, np.object)
def test_where_series_int64(self):
self._where_int64_common(pd.Series)
@@ -656,13 +713,13 @@ def _where_float64_common(self, klass):
self._assert_where_conversion(obj, cond, values, exp,
np.complex128)
- # float + bool -> float
- exp = klass([1.1, 1.0, 3.3, 1.0])
- self._assert_where_conversion(obj, cond, True, exp, np.float64)
+ # float + bool -> object
+ exp = klass([1.1, True, 3.3, True])
+ self._assert_where_conversion(obj, cond, True, exp, np.object)
values = klass([True, False, True, True])
- exp = klass([1.1, 0.0, 3.3, 1.0])
- self._assert_where_conversion(obj, cond, values, exp, np.float64)
+ exp = klass([1.1, False, 3.3, True])
+ self._assert_where_conversion(obj, cond, values, exp, np.object)
def test_where_series_float64(self):
self._where_float64_common(pd.Series)
@@ -699,45 +756,46 @@ def test_where_series_complex128(self):
exp = pd.Series([1 + 1j, 6 + 6j, 3 + 3j, 8 + 8j])
self._assert_where_conversion(obj, cond, values, exp, np.complex128)
- # complex + bool -> complex
- exp = pd.Series([1 + 1j, 1, 3 + 3j, 1])
- self._assert_where_conversion(obj, cond, True, exp, np.complex128)
+ # complex + bool -> object
+ exp = pd.Series([1 + 1j, True, 3 + 3j, True])
+ self._assert_where_conversion(obj, cond, True, exp, np.object)
values = pd.Series([True, False, True, True])
- exp = pd.Series([1 + 1j, 0, 3 + 3j, 1])
- self._assert_where_conversion(obj, cond, values, exp, np.complex128)
+ exp = pd.Series([1 + 1j, False, 3 + 3j, True])
+ self._assert_where_conversion(obj, cond, values, exp, np.object)
def test_where_index_complex128(self):
pass
def test_where_series_bool(self):
+
obj = pd.Series([True, False, True, False])
assert obj.dtype == np.bool
cond = pd.Series([True, False, True, False])
- # bool + int -> int
- exp = pd.Series([1, 1, 1, 1])
- self._assert_where_conversion(obj, cond, 1, exp, np.int64)
+ # bool + int -> object
+ exp = pd.Series([True, 1, True, 1])
+ self._assert_where_conversion(obj, cond, 1, exp, np.object)
values = pd.Series([5, 6, 7, 8])
- exp = pd.Series([1, 6, 1, 8])
- self._assert_where_conversion(obj, cond, values, exp, np.int64)
+ exp = pd.Series([True, 6, True, 8])
+ self._assert_where_conversion(obj, cond, values, exp, np.object)
- # bool + float -> float
- exp = pd.Series([1.0, 1.1, 1.0, 1.1])
- self._assert_where_conversion(obj, cond, 1.1, exp, np.float64)
+ # bool + float -> object
+ exp = pd.Series([True, 1.1, True, 1.1])
+ self._assert_where_conversion(obj, cond, 1.1, exp, np.object)
values = pd.Series([5.5, 6.6, 7.7, 8.8])
- exp = pd.Series([1.0, 6.6, 1.0, 8.8])
- self._assert_where_conversion(obj, cond, values, exp, np.float64)
+ exp = pd.Series([True, 6.6, True, 8.8])
+ self._assert_where_conversion(obj, cond, values, exp, np.object)
- # bool + complex -> complex
- exp = pd.Series([1, 1 + 1j, 1, 1 + 1j])
- self._assert_where_conversion(obj, cond, 1 + 1j, exp, np.complex128)
+ # bool + complex -> object
+ exp = pd.Series([True, 1 + 1j, True, 1 + 1j])
+ self._assert_where_conversion(obj, cond, 1 + 1j, exp, np.object)
values = pd.Series([5 + 5j, 6 + 6j, 7 + 7j, 8 + 8j])
- exp = pd.Series([1, 6 + 6j, 1, 8 + 8j])
- self._assert_where_conversion(obj, cond, values, exp, np.complex128)
+ exp = pd.Series([True, 6 + 6j, True, 8 + 8j])
+ self._assert_where_conversion(obj, cond, values, exp, np.object)
# bool + bool -> bool
exp = pd.Series([True, True, True, True])
@@ -776,10 +834,15 @@ def test_where_series_datetime64(self):
pd.Timestamp('2012-01-04')])
self._assert_where_conversion(obj, cond, values, exp, 'datetime64[ns]')
- # ToDo: coerce to object
- msg = "cannot coerce a Timestamp with a tz on a naive Block"
- with tm.assert_raises_regex(TypeError, msg):
- obj.where(cond, pd.Timestamp('2012-01-01', tz='US/Eastern'))
+ # datetime64 + datetime64tz -> object
+ exp = pd.Series([pd.Timestamp('2011-01-01'),
+ pd.Timestamp('2012-01-01', tz='US/Eastern'),
+ pd.Timestamp('2011-01-03'),
+ pd.Timestamp('2012-01-01', tz='US/Eastern')])
+ self._assert_where_conversion(
+ obj, cond,
+ pd.Timestamp('2012-01-01', tz='US/Eastern'),
+ exp, np.object)
# ToDo: do not coerce to UTC, must be object
values = pd.Series([pd.Timestamp('2012-01-01', tz='US/Eastern'),
@@ -898,7 +961,7 @@ def test_fillna_series_int64(self):
def test_fillna_index_int64(self):
pass
- def _fillna_float64_common(self, klass):
+ def _fillna_float64_common(self, klass, complex):
obj = klass([1.1, np.nan, 3.3, 4.4])
assert obj.dtype == np.float64
@@ -910,26 +973,21 @@ def _fillna_float64_common(self, klass):
exp = klass([1.1, 1.1, 3.3, 4.4])
self._assert_fillna_conversion(obj, 1.1, exp, np.float64)
- if klass is pd.Series:
- # float + complex -> complex
- exp = klass([1.1, 1 + 1j, 3.3, 4.4])
- self._assert_fillna_conversion(obj, 1 + 1j, exp, np.complex128)
- elif klass is pd.Index:
- # float + complex -> object
- exp = klass([1.1, 1 + 1j, 3.3, 4.4])
- self._assert_fillna_conversion(obj, 1 + 1j, exp, np.object)
- else:
- NotImplementedError
+ # float + complex -> we don't support a complex Index
+ # complex for Series,
+ # object for Index
+ exp = klass([1.1, 1 + 1j, 3.3, 4.4])
+ self._assert_fillna_conversion(obj, 1 + 1j, exp, complex)
- # float + bool -> float
- exp = klass([1.1, 1.0, 3.3, 4.4])
- self._assert_fillna_conversion(obj, True, exp, np.float64)
+ # float + bool -> object
+ exp = klass([1.1, True, 3.3, 4.4])
+ self._assert_fillna_conversion(obj, True, exp, np.object)
def test_fillna_series_float64(self):
- self._fillna_float64_common(pd.Series)
+ self._fillna_float64_common(pd.Series, complex=np.complex128)
def test_fillna_index_float64(self):
- self._fillna_float64_common(pd.Index)
+ self._fillna_float64_common(pd.Index, complex=np.object)
def test_fillna_series_complex128(self):
obj = pd.Series([1 + 1j, np.nan, 3 + 3j, 4 + 4j])
@@ -947,12 +1005,12 @@ def test_fillna_series_complex128(self):
exp = pd.Series([1 + 1j, 1 + 1j, 3 + 3j, 4 + 4j])
self._assert_fillna_conversion(obj, 1 + 1j, exp, np.complex128)
- # complex + bool -> complex
- exp = pd.Series([1 + 1j, 1, 3 + 3j, 4 + 4j])
- self._assert_fillna_conversion(obj, True, exp, np.complex128)
+ # complex + bool -> object
+ exp = pd.Series([1 + 1j, True, 3 + 3j, 4 + 4j])
+ self._assert_fillna_conversion(obj, True, exp, np.object)
def test_fillna_index_complex128(self):
- self._fillna_float64_common(pd.Index)
+ self._fillna_float64_common(pd.Index, complex=np.object)
def test_fillna_series_bool(self):
# bool can't hold NaN
@@ -985,12 +1043,11 @@ def test_fillna_series_datetime64(self):
self._assert_fillna_conversion(obj, value, exp, np.object)
# datetime64 + int => object
- # ToDo: must be coerced to object
exp = pd.Series([pd.Timestamp('2011-01-01'),
- pd.Timestamp(1),
+ 1,
pd.Timestamp('2011-01-03'),
pd.Timestamp('2011-01-04')])
- self._assert_fillna_conversion(obj, 1, exp, 'datetime64[ns]')
+ self._assert_fillna_conversion(obj, 1, exp, 'object')
# datetime64 + object => object
exp = pd.Series([pd.Timestamp('2011-01-01'),
@@ -1033,14 +1090,12 @@ def test_fillna_series_datetime64tz(self):
value = pd.Timestamp('2012-01-01', tz='Asia/Tokyo')
self._assert_fillna_conversion(obj, value, exp, np.object)
- # datetime64tz + int => datetime64tz
- # ToDo: must be object
+ # datetime64tz + int => object
exp = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
- pd.Timestamp(1, tz=tz),
+ 1,
pd.Timestamp('2011-01-03', tz=tz),
pd.Timestamp('2011-01-04', tz=tz)])
- self._assert_fillna_conversion(obj, 1, exp,
- 'datetime64[ns, US/Eastern]')
+ self._assert_fillna_conversion(obj, 1, exp, np.object)
# datetime64tz + object => object
exp = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
@@ -1187,8 +1242,8 @@ def _assert_replace_conversion(self, from_key, to_key, how):
(from_key == 'complex128' and
to_key in ('int64', 'float64'))):
- # buggy on 32-bit
- if tm.is_platform_32bit():
+ # buggy on 32-bit / window
+ if compat.is_platform_32bit() or compat.is_platform_windows():
pytest.skip("32-bit platform buggy: {0} -> {1}".format
(from_key, to_key))
diff --git a/pandas/tests/indexing/test_datetime.py b/pandas/tests/indexing/test_datetime.py
index da8a896cb6f4a..7a2f25c72f103 100644
--- a/pandas/tests/indexing/test_datetime.py
+++ b/pandas/tests/indexing/test_datetime.py
@@ -1,5 +1,3 @@
-import pytest
-
import numpy as np
import pandas as pd
from pandas import date_range, Index, DataFrame, Series, Timestamp
@@ -56,10 +54,12 @@ def test_indexing_with_datetime_tz(self):
'US/Pacific')
# trying to set a single element on a part of a different timezone
- def f():
- df.loc[df.new_col == 'new', 'time'] = v
+ # this converts to object
+ df2 = df.copy()
+ df2.loc[df2.new_col == 'new', 'time'] = v
- pytest.raises(ValueError, f)
+ expected = Series([v[0], df.loc[1, 'time']], name='time')
+ tm.assert_series_equal(df2.time, expected)
v = df.loc[df.new_col == 'new', 'time'] + pd.Timedelta('1s')
df.loc[df.new_col == 'new', 'time'] = v
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 98f5d5eb140df..e5b70a9fadb8f 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -382,6 +382,12 @@ def test_multi_assign(self):
tm.assert_frame_equal(df2, expected)
# with an ndarray on rhs
+ # coerces to float64 because values has float64 dtype
+ # GH 14001
+ expected = DataFrame({'FC': ['a', np.nan, 'a', 'b', 'a', 'b'],
+ 'PF': [0, 0, 0, 0, 1, 1],
+ 'col1': [0., 1., 4., 6., 8., 10.],
+ 'col2': [12, 7, 16, np.nan, 20, 22]})
df2 = df.copy()
df2.loc[mask, cols] = dft.loc[mask, cols].values
tm.assert_frame_equal(df2, expected)
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 7aab7df7169d4..a736f3aa74558 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -1033,11 +1033,11 @@ def test_clip_with_datetimes(self):
# naive and tz-aware datetimes
t = Timestamp('2015-12-01 09:30:30')
- s = Series([Timestamp('2015-12-01 09:30:00'), Timestamp(
- '2015-12-01 09:31:00')])
+ s = Series([Timestamp('2015-12-01 09:30:00'),
+ Timestamp('2015-12-01 09:31:00')])
result = s.clip(upper=t)
- expected = Series([Timestamp('2015-12-01 09:30:00'), Timestamp(
- '2015-12-01 09:30:30')])
+ expected = Series([Timestamp('2015-12-01 09:30:00'),
+ Timestamp('2015-12-01 09:30:30')])
assert_series_equal(result, expected)
t = Timestamp('2015-12-01 09:30:30', tz='US/Eastern')
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
index 6d8a54b538237..23283733c492a 100644
--- a/pandas/tests/series/test_indexing.py
+++ b/pandas/tests/series/test_indexing.py
@@ -1094,6 +1094,11 @@ def test_where(self):
rs = s2.where(cond[:3], -s2)
assert_series_equal(rs, expected)
+ def test_where_error(self):
+
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
pytest.raises(ValueError, s.where, 1)
pytest.raises(ValueError, s.where, cond[:3].values, -s)
@@ -1109,6 +1114,8 @@ def test_where(self):
pytest.raises(ValueError, s.__setitem__, tuple([[[True, False]]]),
[])
+ def test_where_unsafe(self):
+
# unsafe dtype changes
for dtype in [np.int8, np.int16, np.int32, np.int64, np.float16,
np.float32, np.float64]:
@@ -1374,9 +1381,9 @@ def test_where_dups(self):
expected = Series([5, 11, 2, 5, 11, 2], index=[0, 1, 2, 0, 1, 2])
assert_series_equal(comb, expected)
- def test_where_datetime(self):
+ def test_where_datetime_conversion(self):
s = Series(date_range('20130102', periods=2))
- expected = Series([10, 10], dtype='datetime64[ns]')
+ expected = Series([10, 10])
mask = np.array([False, False])
rs = s.where(mask, [10, 10])
@@ -1392,7 +1399,7 @@ def test_where_datetime(self):
assert_series_equal(rs, expected)
rs = s.where(mask, [10.0, np.nan])
- expected = Series([10, None], dtype='datetime64[ns]')
+ expected = Series([10, None], dtype='object')
assert_series_equal(rs, expected)
# GH 15701
@@ -1403,9 +1410,9 @@ def test_where_datetime(self):
expected = Series([pd.NaT, s[1]])
assert_series_equal(rs, expected)
- def test_where_timedelta(self):
+ def test_where_timedelta_coerce(self):
s = Series([1, 2], dtype='timedelta64[ns]')
- expected = Series([10, 10], dtype='timedelta64[ns]')
+ expected = Series([10, 10])
mask = np.array([False, False])
rs = s.where(mask, [10, 10])
@@ -1421,7 +1428,7 @@ def test_where_timedelta(self):
assert_series_equal(rs, expected)
rs = s.where(mask, [10.0, np.nan])
- expected = Series([10, None], dtype='timedelta64[ns]')
+ expected = Series([10, None], dtype='object')
assert_series_equal(rs, expected)
def test_mask(self):
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index b5948e75aa73e..24dd90e40fa35 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -58,14 +58,14 @@ def test_remove_na_deprecation(self):
def test_timedelta_fillna(self):
# GH 3371
- s = Series([Timestamp('20130101'), Timestamp('20130101'), Timestamp(
- '20130102'), Timestamp('20130103 9:01:01')])
+ s = Series([Timestamp('20130101'), Timestamp('20130101'),
+ Timestamp('20130102'), Timestamp('20130103 9:01:01')])
td = s.diff()
# reg fillna
result = td.fillna(0)
- expected = Series([timedelta(0), timedelta(0), timedelta(1), timedelta(
- days=1, seconds=9 * 3600 + 60 + 1)])
+ expected = Series([timedelta(0), timedelta(0), timedelta(1),
+ timedelta(days=1, seconds=9 * 3600 + 60 + 1)])
assert_series_equal(result, expected)
# interprested as seconds
@@ -75,8 +75,9 @@ def test_timedelta_fillna(self):
assert_series_equal(result, expected)
result = td.fillna(timedelta(days=1, seconds=1))
- expected = Series([timedelta(days=1, seconds=1), timedelta(
- 0), timedelta(1), timedelta(days=1, seconds=9 * 3600 + 60 + 1)])
+ expected = Series([timedelta(days=1, seconds=1), timedelta(0),
+ timedelta(1),
+ timedelta(days=1, seconds=9 * 3600 + 60 + 1)])
assert_series_equal(result, expected)
result = td.fillna(np.timedelta64(int(1e9)))
@@ -144,6 +145,7 @@ def test_datetime64_fillna(self):
assert_series_equal(result, expected)
def test_datetime64_tz_fillna(self):
+
for tz in ['US/Eastern', 'Asia/Tokyo']:
# DatetimeBlock
s = Series([Timestamp('2011-01-01 10:00'), pd.NaT,
@@ -278,6 +280,40 @@ def test_datetime64_tz_fillna(self):
pd.Timestamp('2012-11-11 00:00:00+01:00')])
assert_series_equal(df.fillna(method='bfill'), exp)
+ def test_fillna_consistency(self):
+ # GH 16402
+ # fillna with a tz aware to a tz-naive, should result in object
+
+ s = Series([Timestamp('20130101'), pd.NaT])
+
+ result = s.fillna(Timestamp('20130101', tz='US/Eastern'))
+ expected = Series([Timestamp('20130101'),
+ Timestamp('2013-01-01', tz='US/Eastern')],
+ dtype='object')
+ assert_series_equal(result, expected)
+
+ # where (we ignore the raise_on_error)
+ result = s.where([True, False],
+ Timestamp('20130101', tz='US/Eastern'),
+ raise_on_error=False)
+ assert_series_equal(result, expected)
+
+ result = s.where([True, False],
+ Timestamp('20130101', tz='US/Eastern'),
+ raise_on_error=True)
+ assert_series_equal(result, expected)
+
+ # with a non-datetime
+ result = s.fillna('foo')
+ expected = Series([Timestamp('20130101'),
+ 'foo'])
+ assert_series_equal(result, expected)
+
+ # assignment
+ s2 = s.copy()
+ s2[1] = 'foo'
+ assert_series_equal(s2, expected)
+
def test_datetime64tz_fillna_round_issue(self):
# GH 14872
| This fixes a number of inconsistencies and API issues w.r.t. dtype conversions. This is a reprise of #14145. This removes some code from the core structures & pushes it to internals, where the primitives are made more consistent. This should allow us to be a bit more consistent for pandas2-type things.
closes #16402
supersedes #14145
supersedes #16408
closes #14001
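The `test_complex` additions above check `isnull` on complex scalars and arrays; the underlying NaN semantics for complex values (an element is NaN if its real *or* imaginary part is NaN) can be sketched with NumPy alone — a minimal illustration, not the pandas code path itself:

```python
import numpy as np

# A Python float NaN placed in a complex array becomes (nan+0j);
# np.isnan flags an element when its real OR imaginary part is NaN.
arr = np.array([1, 1 + 0j, np.nan, 3])  # dtype: complex128
mask = np.isnan(arr)
print(mask)  # [False False  True False]

# The same holds for a complex scalar NaN.
scalar_is_nan = np.isnan(np.complex128(np.nan))  # True
```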
| https://api.github.com/repos/pandas-dev/pandas/pulls/16821 | 2017-07-04T00:30:26Z | 2017-07-21T11:05:20Z | 2017-07-21T11:05:20Z | 2017-07-21T11:06:21Z |
DOC: Extend docstring pandas core index to_frame method | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index d3a8f11a38715..b96bb22199fa1 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1122,7 +1122,26 @@ def to_frame(self, index=True):
Returns
-------
- DataFrame : a DataFrame containing the original Index data.
+ DataFrame
+ DataFrame containing the original Index data.
+
+ Examples
+ --------
+ >>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
+ >>> idx.to_frame()
+ animal
+ animal
+ Ant Ant
+ Bear Bear
+ Cow Cow
+
+ By default, the original Index is reused. To enforce a new Index:
+
+ >>> idx.to_frame(index=False)
+ animal
+ 0 Ant
+ 1 Bear
+ 2 Cow
"""
from pandas import DataFrame
| - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
This PR provides adaptations of the original `to_frame` docstring, taking into account the [docstring guide](https://python-sprints.github.io/pandas/guide/pandas_docstring.html).
| https://api.github.com/repos/pandas-dev/pandas/pulls/20036 | 2018-03-07T14:38:40Z | 2018-03-10T09:15:42Z | 2018-03-10T09:15:42Z | 2018-03-11T15:29:29Z |
TST: xfail numpy-dev clip failures | diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index de4a132e0d613..59a30fc69905f 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -15,7 +15,8 @@
from pandas.compat import lrange, product
from pandas import (compat, isna, notna, DataFrame, Series,
- MultiIndex, date_range, Timestamp, Categorical)
+ MultiIndex, date_range, Timestamp, Categorical,
+ _np_version_under1p15)
import pandas as pd
import pandas.core.nanops as nanops
import pandas.core.algorithms as algorithms
@@ -2057,6 +2058,9 @@ def test_clip_against_list_like(self, inplace, lower, axis, res):
result = original
tm.assert_frame_equal(result, expected, check_exact=True)
+ @pytest.mark.xfail(
+ not _np_version_under1p15,
+ reason="failing under numpy-dev gh-19976")
@pytest.mark.parametrize("axis", [0, 1, None])
def test_clip_against_frame(self, axis):
df = DataFrame(np.random.randn(1000, 2))
| xref #20301
| https://api.github.com/repos/pandas-dev/pandas/pulls/20035 | 2018-03-07T13:12:03Z | 2018-03-07T13:55:34Z | 2018-03-07T13:55:34Z | 2018-03-07T13:55:34Z |
Moving tests in series/indexing to fixtures (#20014.1) | diff --git a/pandas/tests/series/indexing/conftest.py b/pandas/tests/series/indexing/conftest.py
new file mode 100644
index 0000000000000..0e06f6b8e4640
--- /dev/null
+++ b/pandas/tests/series/indexing/conftest.py
@@ -0,0 +1,8 @@
+import pytest
+
+from pandas.tests.series.common import TestData
+
+
+@pytest.fixture(scope='module')
+def test_data():
+ return TestData()
diff --git a/pandas/tests/series/indexing/test_alter_index.py b/pandas/tests/series/indexing/test_alter_index.py
index 7456239d3433a..2629cfde9b4af 100644
--- a/pandas/tests/series/indexing/test_alter_index.py
+++ b/pandas/tests/series/indexing/test_alter_index.py
@@ -18,452 +18,466 @@
from pandas.util.testing import (assert_series_equal)
import pandas.util.testing as tm
-from pandas.tests.series.common import TestData
-
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-class TestAlignment(TestData):
+def test_align(test_data):
+ def _check_align(a, b, how='left', fill=None):
+ aa, ab = a.align(b, join=how, fill_value=fill)
+
+ join_index = a.index.join(b.index, how=how)
+ if fill is not None:
+ diff_a = aa.index.difference(join_index)
+ diff_b = ab.index.difference(join_index)
+ if len(diff_a) > 0:
+ assert (aa.reindex(diff_a) == fill).all()
+ if len(diff_b) > 0:
+ assert (ab.reindex(diff_b) == fill).all()
+
+ ea = a.reindex(join_index)
+ eb = b.reindex(join_index)
+
+ if fill is not None:
+ ea = ea.fillna(fill)
+ eb = eb.fillna(fill)
+
+ assert_series_equal(aa, ea)
+ assert_series_equal(ab, eb)
+ assert aa.name == 'ts'
+ assert ea.name == 'ts'
+ assert ab.name == 'ts'
+ assert eb.name == 'ts'
+
+ for kind in JOIN_TYPES:
+ _check_align(test_data.ts[2:], test_data.ts[:-5], how=kind)
+ _check_align(test_data.ts[2:], test_data.ts[:-5], how=kind, fill=-1)
+
+ # empty left
+ _check_align(test_data.ts[:0], test_data.ts[:-5], how=kind)
+ _check_align(test_data.ts[:0], test_data.ts[:-5], how=kind, fill=-1)
+
+ # empty right
+ _check_align(test_data.ts[:-5], test_data.ts[:0], how=kind)
+ _check_align(test_data.ts[:-5], test_data.ts[:0], how=kind, fill=-1)
+
+ # both empty
+ _check_align(test_data.ts[:0], test_data.ts[:0], how=kind)
+ _check_align(test_data.ts[:0], test_data.ts[:0], how=kind, fill=-1)
- def test_align(self):
- def _check_align(a, b, how='left', fill=None):
- aa, ab = a.align(b, join=how, fill_value=fill)
- join_index = a.index.join(b.index, how=how)
- if fill is not None:
- diff_a = aa.index.difference(join_index)
- diff_b = ab.index.difference(join_index)
- if len(diff_a) > 0:
- assert (aa.reindex(diff_a) == fill).all()
- if len(diff_b) > 0:
- assert (ab.reindex(diff_b) == fill).all()
+def test_align_fill_method(test_data):
+ def _check_align(a, b, how='left', method='pad', limit=None):
+ aa, ab = a.align(b, join=how, method=method, limit=limit)
- ea = a.reindex(join_index)
- eb = b.reindex(join_index)
+ join_index = a.index.join(b.index, how=how)
+ ea = a.reindex(join_index)
+ eb = b.reindex(join_index)
- if fill is not None:
- ea = ea.fillna(fill)
- eb = eb.fillna(fill)
+ ea = ea.fillna(method=method, limit=limit)
+ eb = eb.fillna(method=method, limit=limit)
- assert_series_equal(aa, ea)
- assert_series_equal(ab, eb)
- assert aa.name == 'ts'
- assert ea.name == 'ts'
- assert ab.name == 'ts'
- assert eb.name == 'ts'
+ assert_series_equal(aa, ea)
+ assert_series_equal(ab, eb)
- for kind in JOIN_TYPES:
- _check_align(self.ts[2:], self.ts[:-5], how=kind)
- _check_align(self.ts[2:], self.ts[:-5], how=kind, fill=-1)
+ for kind in JOIN_TYPES:
+ for meth in ['pad', 'bfill']:
+ _check_align(test_data.ts[2:], test_data.ts[:-5],
+ how=kind, method=meth)
+ _check_align(test_data.ts[2:], test_data.ts[:-5],
+ how=kind, method=meth, limit=1)
# empty left
- _check_align(self.ts[:0], self.ts[:-5], how=kind)
- _check_align(self.ts[:0], self.ts[:-5], how=kind, fill=-1)
+ _check_align(test_data.ts[:0], test_data.ts[:-5],
+ how=kind, method=meth)
+ _check_align(test_data.ts[:0], test_data.ts[:-5],
+ how=kind, method=meth, limit=1)
# empty right
- _check_align(self.ts[:-5], self.ts[:0], how=kind)
- _check_align(self.ts[:-5], self.ts[:0], how=kind, fill=-1)
+ _check_align(test_data.ts[:-5], test_data.ts[:0],
+ how=kind, method=meth)
+ _check_align(test_data.ts[:-5], test_data.ts[:0],
+ how=kind, method=meth, limit=1)
# both empty
- _check_align(self.ts[:0], self.ts[:0], how=kind)
- _check_align(self.ts[:0], self.ts[:0], how=kind, fill=-1)
-
- def test_align_fill_method(self):
- def _check_align(a, b, how='left', method='pad', limit=None):
- aa, ab = a.align(b, join=how, method=method, limit=limit)
-
- join_index = a.index.join(b.index, how=how)
- ea = a.reindex(join_index)
- eb = b.reindex(join_index)
-
- ea = ea.fillna(method=method, limit=limit)
- eb = eb.fillna(method=method, limit=limit)
-
- assert_series_equal(aa, ea)
- assert_series_equal(ab, eb)
-
- for kind in JOIN_TYPES:
- for meth in ['pad', 'bfill']:
- _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth)
- _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth,
- limit=1)
-
- # empty left
- _check_align(self.ts[:0], self.ts[:-5], how=kind, method=meth)
- _check_align(self.ts[:0], self.ts[:-5], how=kind, method=meth,
- limit=1)
-
- # empty right
- _check_align(self.ts[:-5], self.ts[:0], how=kind, method=meth)
- _check_align(self.ts[:-5], self.ts[:0], how=kind, method=meth,
- limit=1)
-
- # both empty
- _check_align(self.ts[:0], self.ts[:0], how=kind, method=meth)
- _check_align(self.ts[:0], self.ts[:0], how=kind, method=meth,
- limit=1)
-
- def test_align_nocopy(self):
- b = self.ts[:5].copy()
-
- # do copy
- a = self.ts.copy()
- ra, _ = a.align(b, join='left')
- ra[:5] = 5
- assert not (a[:5] == 5).any()
-
- # do not copy
- a = self.ts.copy()
- ra, _ = a.align(b, join='left', copy=False)
- ra[:5] = 5
- assert (a[:5] == 5).all()
-
- # do copy
- a = self.ts.copy()
- b = self.ts[:5].copy()
- _, rb = a.align(b, join='right')
- rb[:3] = 5
- assert not (b[:3] == 5).any()
-
- # do not copy
- a = self.ts.copy()
- b = self.ts[:5].copy()
- _, rb = a.align(b, join='right', copy=False)
- rb[:2] = 5
- assert (b[:2] == 5).all()
-
- def test_align_same_index(self):
- a, b = self.ts.align(self.ts, copy=False)
- assert a.index is self.ts.index
- assert b.index is self.ts.index
-
- a, b = self.ts.align(self.ts, copy=True)
- assert a.index is not self.ts.index
- assert b.index is not self.ts.index
-
- def test_align_multiindex(self):
- # GH 10665
-
- midx = pd.MultiIndex.from_product([range(2), range(3), range(2)],
- names=('a', 'b', 'c'))
- idx = pd.Index(range(2), name='b')
- s1 = pd.Series(np.arange(12, dtype='int64'), index=midx)
- s2 = pd.Series(np.arange(2, dtype='int64'), index=idx)
-
- # these must be the same results (but flipped)
- res1l, res1r = s1.align(s2, join='left')
- res2l, res2r = s2.align(s1, join='right')
-
- expl = s1
- tm.assert_series_equal(expl, res1l)
- tm.assert_series_equal(expl, res2r)
- expr = pd.Series([0, 0, 1, 1, np.nan, np.nan] * 2, index=midx)
- tm.assert_series_equal(expr, res1r)
- tm.assert_series_equal(expr, res2l)
-
- res1l, res1r = s1.align(s2, join='right')
- res2l, res2r = s2.align(s1, join='left')
-
- exp_idx = pd.MultiIndex.from_product([range(2), range(2), range(2)],
- names=('a', 'b', 'c'))
- expl = pd.Series([0, 1, 2, 3, 6, 7, 8, 9], index=exp_idx)
- tm.assert_series_equal(expl, res1l)
- tm.assert_series_equal(expl, res2r)
- expr = pd.Series([0, 0, 1, 1] * 2, index=exp_idx)
- tm.assert_series_equal(expr, res1r)
- tm.assert_series_equal(expr, res2l)
-
-
-class TestReindexing(TestData):
- def test_reindex(self):
- identity = self.series.reindex(self.series.index)
-
- # __array_interface__ is not defined for older numpies
- # and on some pythons
- try:
- assert np.may_share_memory(self.series.index, identity.index)
- except AttributeError:
- pass
-
- assert identity.index.is_(self.series.index)
- assert identity.index.identical(self.series.index)
-
- subIndex = self.series.index[10:20]
- subSeries = self.series.reindex(subIndex)
-
- for idx, val in compat.iteritems(subSeries):
- assert val == self.series[idx]
-
- subIndex2 = self.ts.index[10:20]
- subTS = self.ts.reindex(subIndex2)
-
- for idx, val in compat.iteritems(subTS):
- assert val == self.ts[idx]
- stuffSeries = self.ts.reindex(subIndex)
-
- assert np.isnan(stuffSeries).all()
-
- # This is extremely important for the Cython code to not screw up
- nonContigIndex = self.ts.index[::2]
- subNonContig = self.ts.reindex(nonContigIndex)
- for idx, val in compat.iteritems(subNonContig):
- assert val == self.ts[idx]
-
- # return a copy the same index here
- result = self.ts.reindex()
- assert not (result is self.ts)
-
- def test_reindex_nan(self):
- ts = Series([2, 3, 5, 7], index=[1, 4, nan, 8])
-
- i, j = [nan, 1, nan, 8, 4, nan], [2, 0, 2, 3, 1, 2]
- assert_series_equal(ts.reindex(i), ts.iloc[j])
-
- ts.index = ts.index.astype('object')
-
- # reindex coerces index.dtype to float, loc/iloc doesn't
- assert_series_equal(ts.reindex(i), ts.iloc[j], check_index_type=False)
-
- def test_reindex_series_add_nat(self):
- rng = date_range('1/1/2000 00:00:00', periods=10, freq='10s')
- series = Series(rng)
-
- result = series.reindex(lrange(15))
- assert np.issubdtype(result.dtype, np.dtype('M8[ns]'))
-
- mask = result.isna()
- assert mask[-5:].all()
- assert not mask[:-5].any()
-
- def test_reindex_with_datetimes(self):
- rng = date_range('1/1/2000', periods=20)
- ts = Series(np.random.randn(20), index=rng)
-
- result = ts.reindex(list(ts.index[5:10]))
- expected = ts[5:10]
- tm.assert_series_equal(result, expected)
-
- result = ts[list(ts.index[5:10])]
- tm.assert_series_equal(result, expected)
-
- def test_reindex_corner(self):
- # (don't forget to fix this) I think it's fixed
- self.empty.reindex(self.ts.index, method='pad') # it works
+ _check_align(test_data.ts[:0], test_data.ts[:0],
+ how=kind, method=meth)
+ _check_align(test_data.ts[:0], test_data.ts[:0],
+ how=kind, method=meth, limit=1)
+
+
+def test_align_nocopy(test_data):
+ b = test_data.ts[:5].copy()
+
+ # do copy
+ a = test_data.ts.copy()
+ ra, _ = a.align(b, join='left')
+ ra[:5] = 5
+ assert not (a[:5] == 5).any()
+
+ # do not copy
+ a = test_data.ts.copy()
+ ra, _ = a.align(b, join='left', copy=False)
+ ra[:5] = 5
+ assert (a[:5] == 5).all()
+
+ # do copy
+ a = test_data.ts.copy()
+ b = test_data.ts[:5].copy()
+ _, rb = a.align(b, join='right')
+ rb[:3] = 5
+ assert not (b[:3] == 5).any()
+
+ # do not copy
+ a = test_data.ts.copy()
+ b = test_data.ts[:5].copy()
+ _, rb = a.align(b, join='right', copy=False)
+ rb[:2] = 5
+ assert (b[:2] == 5).all()
+
+
+def test_align_same_index(test_data):
+ a, b = test_data.ts.align(test_data.ts, copy=False)
+ assert a.index is test_data.ts.index
+ assert b.index is test_data.ts.index
+
+ a, b = test_data.ts.align(test_data.ts, copy=True)
+ assert a.index is not test_data.ts.index
+ assert b.index is not test_data.ts.index
+
+
+def test_align_multiindex():
+ # GH 10665
+
+ midx = pd.MultiIndex.from_product([range(2), range(3), range(2)],
+ names=('a', 'b', 'c'))
+ idx = pd.Index(range(2), name='b')
+ s1 = pd.Series(np.arange(12, dtype='int64'), index=midx)
+ s2 = pd.Series(np.arange(2, dtype='int64'), index=idx)
+
+ # these must be the same results (but flipped)
+ res1l, res1r = s1.align(s2, join='left')
+ res2l, res2r = s2.align(s1, join='right')
+
+ expl = s1
+ tm.assert_series_equal(expl, res1l)
+ tm.assert_series_equal(expl, res2r)
+ expr = pd.Series([0, 0, 1, 1, np.nan, np.nan] * 2, index=midx)
+ tm.assert_series_equal(expr, res1r)
+ tm.assert_series_equal(expr, res2l)
+
+ res1l, res1r = s1.align(s2, join='right')
+ res2l, res2r = s2.align(s1, join='left')
+
+ exp_idx = pd.MultiIndex.from_product([range(2), range(2), range(2)],
+ names=('a', 'b', 'c'))
+ expl = pd.Series([0, 1, 2, 3, 6, 7, 8, 9], index=exp_idx)
+ tm.assert_series_equal(expl, res1l)
+ tm.assert_series_equal(expl, res2r)
+ expr = pd.Series([0, 0, 1, 1] * 2, index=exp_idx)
+ tm.assert_series_equal(expr, res1r)
+ tm.assert_series_equal(expr, res2l)
+
+
+def test_reindex(test_data):
+ identity = test_data.series.reindex(test_data.series.index)
+
+ # __array_interface__ is not defined for older numpies
+ # and on some pythons
+ try:
+ assert np.may_share_memory(test_data.series.index, identity.index)
+ except AttributeError:
+ pass
- # corner case: pad empty series
- reindexed = self.empty.reindex(self.ts.index, method='pad')
+ assert identity.index.is_(test_data.series.index)
+ assert identity.index.identical(test_data.series.index)
- # pass non-Index
- reindexed = self.ts.reindex(list(self.ts.index))
- assert_series_equal(self.ts, reindexed)
+ subIndex = test_data.series.index[10:20]
+ subSeries = test_data.series.reindex(subIndex)
- # bad fill method
- ts = self.ts[::2]
- pytest.raises(Exception, ts.reindex, self.ts.index, method='foo')
+ for idx, val in compat.iteritems(subSeries):
+ assert val == test_data.series[idx]
- def test_reindex_pad(self):
- s = Series(np.arange(10), dtype='int64')
- s2 = s[::2]
+ subIndex2 = test_data.ts.index[10:20]
+ subTS = test_data.ts.reindex(subIndex2)
- reindexed = s2.reindex(s.index, method='pad')
- reindexed2 = s2.reindex(s.index, method='ffill')
- assert_series_equal(reindexed, reindexed2)
+ for idx, val in compat.iteritems(subTS):
+ assert val == test_data.ts[idx]
+ stuffSeries = test_data.ts.reindex(subIndex)
- expected = Series([0, 0, 2, 2, 4, 4, 6, 6, 8, 8], index=np.arange(10))
- assert_series_equal(reindexed, expected)
+ assert np.isnan(stuffSeries).all()
- # GH4604
- s = Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])
- new_index = ['a', 'g', 'c', 'f']
- expected = Series([1, 1, 3, 3], index=new_index)
+ # This is extremely important for the Cython code to not screw up
+ nonContigIndex = test_data.ts.index[::2]
+ subNonContig = test_data.ts.reindex(nonContigIndex)
+ for idx, val in compat.iteritems(subNonContig):
+ assert val == test_data.ts[idx]
- # this changes dtype because the ffill happens after
- result = s.reindex(new_index).ffill()
- assert_series_equal(result, expected.astype('float64'))
+    # return a copy with the same index here
+ result = test_data.ts.reindex()
+ assert not (result is test_data.ts)
- result = s.reindex(new_index).ffill(downcast='infer')
- assert_series_equal(result, expected)
- expected = Series([1, 5, 3, 5], index=new_index)
- result = s.reindex(new_index, method='ffill')
- assert_series_equal(result, expected)
+def test_reindex_nan():
+ ts = Series([2, 3, 5, 7], index=[1, 4, nan, 8])
- # inference of new dtype
- s = Series([True, False, False, True], index=list('abcd'))
- new_index = 'agc'
- result = s.reindex(list(new_index)).ffill()
- expected = Series([True, True, False], index=list(new_index))
- assert_series_equal(result, expected)
+ i, j = [nan, 1, nan, 8, 4, nan], [2, 0, 2, 3, 1, 2]
+ assert_series_equal(ts.reindex(i), ts.iloc[j])
- # GH4618 shifted series downcasting
- s = Series(False, index=lrange(0, 5))
- result = s.shift(1).fillna(method='bfill')
- expected = Series(False, index=lrange(0, 5))
- assert_series_equal(result, expected)
+ ts.index = ts.index.astype('object')
- def test_reindex_nearest(self):
- s = Series(np.arange(10, dtype='int64'))
- target = [0.1, 0.9, 1.5, 2.0]
- actual = s.reindex(target, method='nearest')
- expected = Series(np.around(target).astype('int64'), target)
- assert_series_equal(expected, actual)
+ # reindex coerces index.dtype to float, loc/iloc doesn't
+ assert_series_equal(ts.reindex(i), ts.iloc[j], check_index_type=False)
- actual = s.reindex_like(actual, method='nearest')
- assert_series_equal(expected, actual)
- actual = s.reindex_like(actual, method='nearest', tolerance=1)
- assert_series_equal(expected, actual)
- actual = s.reindex_like(actual, method='nearest',
- tolerance=[1, 2, 3, 4])
- assert_series_equal(expected, actual)
+def test_reindex_series_add_nat():
+ rng = date_range('1/1/2000 00:00:00', periods=10, freq='10s')
+ series = Series(rng)
- actual = s.reindex(target, method='nearest', tolerance=0.2)
- expected = Series([0, 1, np.nan, 2], target)
- assert_series_equal(expected, actual)
+ result = series.reindex(lrange(15))
+ assert np.issubdtype(result.dtype, np.dtype('M8[ns]'))
- actual = s.reindex(target, method='nearest',
- tolerance=[0.3, 0.01, 0.4, 3])
- expected = Series([0, np.nan, np.nan, 2], target)
- assert_series_equal(expected, actual)
-
- def test_reindex_backfill(self):
- pass
+ mask = result.isna()
+ assert mask[-5:].all()
+ assert not mask[:-5].any()
+
+
+def test_reindex_with_datetimes():
+ rng = date_range('1/1/2000', periods=20)
+ ts = Series(np.random.randn(20), index=rng)
+
+ result = ts.reindex(list(ts.index[5:10]))
+ expected = ts[5:10]
+ tm.assert_series_equal(result, expected)
+
+ result = ts[list(ts.index[5:10])]
+ tm.assert_series_equal(result, expected)
+
+
+def test_reindex_corner(test_data):
+ # (don't forget to fix this) I think it's fixed
+ test_data.empty.reindex(test_data.ts.index, method='pad') # it works
+
+ # corner case: pad empty series
+ reindexed = test_data.empty.reindex(test_data.ts.index, method='pad')
+
+ # pass non-Index
+ reindexed = test_data.ts.reindex(list(test_data.ts.index))
+ assert_series_equal(test_data.ts, reindexed)
+
+ # bad fill method
+ ts = test_data.ts[::2]
+ pytest.raises(Exception, ts.reindex, test_data.ts.index, method='foo')
+
+
+def test_reindex_pad():
+ s = Series(np.arange(10), dtype='int64')
+ s2 = s[::2]
+
+ reindexed = s2.reindex(s.index, method='pad')
+ reindexed2 = s2.reindex(s.index, method='ffill')
+ assert_series_equal(reindexed, reindexed2)
+
+ expected = Series([0, 0, 2, 2, 4, 4, 6, 6, 8, 8], index=np.arange(10))
+ assert_series_equal(reindexed, expected)
+
+ # GH4604
+ s = Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])
+ new_index = ['a', 'g', 'c', 'f']
+ expected = Series([1, 1, 3, 3], index=new_index)
+
+ # this changes dtype because the ffill happens after
+ result = s.reindex(new_index).ffill()
+ assert_series_equal(result, expected.astype('float64'))
+
+ result = s.reindex(new_index).ffill(downcast='infer')
+ assert_series_equal(result, expected)
+
+ expected = Series([1, 5, 3, 5], index=new_index)
+ result = s.reindex(new_index, method='ffill')
+ assert_series_equal(result, expected)
+
+ # inference of new dtype
+ s = Series([True, False, False, True], index=list('abcd'))
+ new_index = 'agc'
+ result = s.reindex(list(new_index)).ffill()
+ expected = Series([True, True, False], index=list(new_index))
+ assert_series_equal(result, expected)
+
+ # GH4618 shifted series downcasting
+ s = Series(False, index=lrange(0, 5))
+ result = s.shift(1).fillna(method='bfill')
+ expected = Series(False, index=lrange(0, 5))
+ assert_series_equal(result, expected)
+
+
+def test_reindex_nearest():
+ s = Series(np.arange(10, dtype='int64'))
+ target = [0.1, 0.9, 1.5, 2.0]
+ actual = s.reindex(target, method='nearest')
+ expected = Series(np.around(target).astype('int64'), target)
+ assert_series_equal(expected, actual)
+
+ actual = s.reindex_like(actual, method='nearest')
+ assert_series_equal(expected, actual)
+
+ actual = s.reindex_like(actual, method='nearest', tolerance=1)
+ assert_series_equal(expected, actual)
+ actual = s.reindex_like(actual, method='nearest',
+ tolerance=[1, 2, 3, 4])
+ assert_series_equal(expected, actual)
+
+ actual = s.reindex(target, method='nearest', tolerance=0.2)
+ expected = Series([0, 1, np.nan, 2], target)
+ assert_series_equal(expected, actual)
+
+ actual = s.reindex(target, method='nearest',
+ tolerance=[0.3, 0.01, 0.4, 3])
+ expected = Series([0, np.nan, np.nan, 2], target)
+ assert_series_equal(expected, actual)
+
+
+def test_reindex_backfill():
+ pass
+
+
+def test_reindex_int(test_data):
+ ts = test_data.ts[::2]
+ int_ts = Series(np.zeros(len(ts), dtype=int), index=ts.index)
+
+ # this should work fine
+ reindexed_int = int_ts.reindex(test_data.ts.index)
+
+ # if NaNs introduced
+ assert reindexed_int.dtype == np.float_
+
+ # NO NaNs introduced
+ reindexed_int = int_ts.reindex(int_ts.index[::2])
+ assert reindexed_int.dtype == np.int_
+
+
+def test_reindex_bool(test_data):
+ # A series other than float, int, string, or object
+ ts = test_data.ts[::2]
+ bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
+
+ # this should work fine
+ reindexed_bool = bool_ts.reindex(test_data.ts.index)
+
+ # if NaNs introduced
+ assert reindexed_bool.dtype == np.object_
+
+ # NO NaNs introduced
+ reindexed_bool = bool_ts.reindex(bool_ts.index[::2])
+ assert reindexed_bool.dtype == np.bool_
+
+
+def test_reindex_bool_pad(test_data):
+ # fail
+ ts = test_data.ts[5:]
+ bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
+ filled_bool = bool_ts.reindex(test_data.ts.index, method='pad')
+ assert isna(filled_bool[:5]).all()
+
+
+def test_reindex_categorical():
+ index = date_range('20000101', periods=3)
+
+ # reindexing to an invalid Categorical
+ s = Series(['a', 'b', 'c'], dtype='category')
+ result = s.reindex(index)
+ expected = Series(Categorical(values=[np.nan, np.nan, np.nan],
+ categories=['a', 'b', 'c']))
+ expected.index = index
+ tm.assert_series_equal(result, expected)
+
+ # partial reindexing
+ expected = Series(Categorical(values=['b', 'c'], categories=['a', 'b',
+ 'c']))
+ expected.index = [1, 2]
+ result = s.reindex([1, 2])
+ tm.assert_series_equal(result, expected)
+
+ expected = Series(Categorical(
+ values=['c', np.nan], categories=['a', 'b', 'c']))
+ expected.index = [2, 3]
+ result = s.reindex([2, 3])
+ tm.assert_series_equal(result, expected)
+
+
+def test_reindex_like(test_data):
+ other = test_data.ts[::2]
+ assert_series_equal(test_data.ts.reindex(other.index),
+ test_data.ts.reindex_like(other))
+
+ # GH 7179
+ day1 = datetime(2013, 3, 5)
+ day2 = datetime(2013, 5, 5)
+ day3 = datetime(2014, 3, 5)
+
+ series1 = Series([5, None, None], [day1, day2, day3])
+ series2 = Series([None, None], [day1, day3])
+
+ result = series1.reindex_like(series2, method='pad')
+ expected = Series([5, np.nan], index=[day1, day3])
+ assert_series_equal(result, expected)
+
+
+def test_reindex_fill_value():
+ # -----------------------------------------------------------
+ # floats
+ floats = Series([1., 2., 3.])
+ result = floats.reindex([1, 2, 3])
+ expected = Series([2., 3., np.nan], index=[1, 2, 3])
+ assert_series_equal(result, expected)
+
+ result = floats.reindex([1, 2, 3], fill_value=0)
+ expected = Series([2., 3., 0], index=[1, 2, 3])
+ assert_series_equal(result, expected)
+
+ # -----------------------------------------------------------
+ # ints
+ ints = Series([1, 2, 3])
+
+ result = ints.reindex([1, 2, 3])
+ expected = Series([2., 3., np.nan], index=[1, 2, 3])
+ assert_series_equal(result, expected)
+
+ # don't upcast
+ result = ints.reindex([1, 2, 3], fill_value=0)
+ expected = Series([2, 3, 0], index=[1, 2, 3])
+ assert issubclass(result.dtype.type, np.integer)
+ assert_series_equal(result, expected)
+
+ # -----------------------------------------------------------
+ # objects
+ objects = Series([1, 2, 3], dtype=object)
+
+ result = objects.reindex([1, 2, 3])
+ expected = Series([2, 3, np.nan], index=[1, 2, 3], dtype=object)
+ assert_series_equal(result, expected)
+
+ result = objects.reindex([1, 2, 3], fill_value='foo')
+ expected = Series([2, 3, 'foo'], index=[1, 2, 3], dtype=object)
+ assert_series_equal(result, expected)
- def test_reindex_int(self):
- ts = self.ts[::2]
- int_ts = Series(np.zeros(len(ts), dtype=int), index=ts.index)
-
- # this should work fine
- reindexed_int = int_ts.reindex(self.ts.index)
-
- # if NaNs introduced
- assert reindexed_int.dtype == np.float_
-
- # NO NaNs introduced
- reindexed_int = int_ts.reindex(int_ts.index[::2])
- assert reindexed_int.dtype == np.int_
-
- def test_reindex_bool(self):
- # A series other than float, int, string, or object
- ts = self.ts[::2]
- bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
-
- # this should work fine
- reindexed_bool = bool_ts.reindex(self.ts.index)
-
- # if NaNs introduced
- assert reindexed_bool.dtype == np.object_
-
- # NO NaNs introduced
- reindexed_bool = bool_ts.reindex(bool_ts.index[::2])
- assert reindexed_bool.dtype == np.bool_
-
- def test_reindex_bool_pad(self):
- # fail
- ts = self.ts[5:]
- bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
- filled_bool = bool_ts.reindex(self.ts.index, method='pad')
- assert isna(filled_bool[:5]).all()
-
- def test_reindex_categorical(self):
- index = date_range('20000101', periods=3)
-
- # reindexing to an invalid Categorical
- s = Series(['a', 'b', 'c'], dtype='category')
- result = s.reindex(index)
- expected = Series(Categorical(values=[np.nan, np.nan, np.nan],
- categories=['a', 'b', 'c']))
- expected.index = index
- tm.assert_series_equal(result, expected)
-
- # partial reindexing
- expected = Series(Categorical(values=['b', 'c'], categories=['a', 'b',
- 'c']))
- expected.index = [1, 2]
- result = s.reindex([1, 2])
- tm.assert_series_equal(result, expected)
-
- expected = Series(Categorical(
- values=['c', np.nan], categories=['a', 'b', 'c']))
- expected.index = [2, 3]
- result = s.reindex([2, 3])
- tm.assert_series_equal(result, expected)
-
- def test_reindex_like(self):
- other = self.ts[::2]
- assert_series_equal(self.ts.reindex(other.index),
- self.ts.reindex_like(other))
-
- # GH 7179
- day1 = datetime(2013, 3, 5)
- day2 = datetime(2013, 5, 5)
- day3 = datetime(2014, 3, 5)
-
- series1 = Series([5, None, None], [day1, day2, day3])
- series2 = Series([None, None], [day1, day3])
-
- result = series1.reindex_like(series2, method='pad')
- expected = Series([5, np.nan], index=[day1, day3])
- assert_series_equal(result, expected)
-
- def test_reindex_fill_value(self):
- # -----------------------------------------------------------
- # floats
- floats = Series([1., 2., 3.])
- result = floats.reindex([1, 2, 3])
- expected = Series([2., 3., np.nan], index=[1, 2, 3])
- assert_series_equal(result, expected)
-
- result = floats.reindex([1, 2, 3], fill_value=0)
- expected = Series([2., 3., 0], index=[1, 2, 3])
- assert_series_equal(result, expected)
-
- # -----------------------------------------------------------
- # ints
- ints = Series([1, 2, 3])
-
- result = ints.reindex([1, 2, 3])
- expected = Series([2., 3., np.nan], index=[1, 2, 3])
- assert_series_equal(result, expected)
-
- # don't upcast
- result = ints.reindex([1, 2, 3], fill_value=0)
- expected = Series([2, 3, 0], index=[1, 2, 3])
- assert issubclass(result.dtype.type, np.integer)
- assert_series_equal(result, expected)
-
- # -----------------------------------------------------------
- # objects
- objects = Series([1, 2, 3], dtype=object)
-
- result = objects.reindex([1, 2, 3])
- expected = Series([2, 3, np.nan], index=[1, 2, 3], dtype=object)
- assert_series_equal(result, expected)
-
- result = objects.reindex([1, 2, 3], fill_value='foo')
- expected = Series([2, 3, 'foo'], index=[1, 2, 3], dtype=object)
- assert_series_equal(result, expected)
-
- # ------------------------------------------------------------
- # bools
- bools = Series([True, False, True])
-
- result = bools.reindex([1, 2, 3])
- expected = Series([False, True, np.nan], index=[1, 2, 3], dtype=object)
- assert_series_equal(result, expected)
-
- result = bools.reindex([1, 2, 3], fill_value=False)
- expected = Series([False, True, False], index=[1, 2, 3])
- assert_series_equal(result, expected)
-
-
-class TestRenaming(TestData):
-
- def test_rename(self):
- # GH 17407
- s = Series(range(1, 6), index=pd.Index(range(2, 7), name='IntIndex'))
- result = s.rename(str)
- expected = s.rename(lambda i: str(i))
- assert_series_equal(result, expected)
-
- assert result.name == expected.name
+ # ------------------------------------------------------------
+ # bools
+ bools = Series([True, False, True])
+
+ result = bools.reindex([1, 2, 3])
+ expected = Series([False, True, np.nan], index=[1, 2, 3], dtype=object)
+ assert_series_equal(result, expected)
+
+ result = bools.reindex([1, 2, 3], fill_value=False)
+ expected = Series([False, True, False], index=[1, 2, 3])
+ assert_series_equal(result, expected)
+
+
+def test_rename():
+ # GH 17407
+ s = Series(range(1, 6), index=pd.Index(range(2, 7), name='IntIndex'))
+ result = s.rename(str)
+ expected = s.rename(lambda i: str(i))
+ assert_series_equal(result, expected)
+
+ assert result.name == expected.name
diff --git a/pandas/tests/series/indexing/test_boolean.py b/pandas/tests/series/indexing/test_boolean.py
index a6c3dc268cef9..75aa2898ae773 100644
--- a/pandas/tests/series/indexing/test_boolean.py
+++ b/pandas/tests/series/indexing/test_boolean.py
@@ -10,601 +10,615 @@
from pandas.compat import lrange, range
from pandas.core.dtypes.common import is_integer
-
from pandas.core.indexing import IndexingError
from pandas.tseries.offsets import BDay
from pandas.util.testing import (assert_series_equal)
import pandas.util.testing as tm
-from pandas.tests.series.common import TestData
-
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-class TestBooleanIndexing(TestData):
+def test_getitem_boolean(test_data):
+ s = test_data.series
+ mask = s > s.median()
- def test_getitem_boolean(self):
- s = self.series
- mask = s > s.median()
+ # passing list is OK
+ result = s[list(mask)]
+ expected = s[mask]
+ assert_series_equal(result, expected)
+ tm.assert_index_equal(result.index, s.index[mask])
- # passing list is OK
- result = s[list(mask)]
- expected = s[mask]
- assert_series_equal(result, expected)
- tm.assert_index_equal(result.index, s.index[mask])
-
- def test_getitem_boolean_empty(self):
- s = Series([], dtype=np.int64)
- s.index.name = 'index_name'
- s = s[s.isna()]
- assert s.index.name == 'index_name'
- assert s.dtype == np.int64
-
- # GH5877
- # indexing with empty series
- s = Series(['A', 'B'])
- expected = Series(np.nan, index=['C'], dtype=object)
- result = s[Series(['C'], dtype=object)]
- assert_series_equal(result, expected)
- s = Series(['A', 'B'])
- expected = Series(dtype=object, index=Index([], dtype='int64'))
- result = s[Series([], dtype=object)]
- assert_series_equal(result, expected)
+def test_getitem_boolean_empty():
+ s = Series([], dtype=np.int64)
+ s.index.name = 'index_name'
+ s = s[s.isna()]
+ assert s.index.name == 'index_name'
+ assert s.dtype == np.int64
- # invalid because of the boolean indexer
- # that's empty or not-aligned
- def f():
- s[Series([], dtype=bool)]
+ # GH5877
+ # indexing with empty series
+ s = Series(['A', 'B'])
+ expected = Series(np.nan, index=['C'], dtype=object)
+ result = s[Series(['C'], dtype=object)]
+ assert_series_equal(result, expected)
- pytest.raises(IndexingError, f)
+ s = Series(['A', 'B'])
+ expected = Series(dtype=object, index=Index([], dtype='int64'))
+ result = s[Series([], dtype=object)]
+ assert_series_equal(result, expected)
- def f():
- s[Series([True], dtype=bool)]
+ # invalid because of the boolean indexer
+ # that's empty or not-aligned
+ def f():
+ s[Series([], dtype=bool)]
- pytest.raises(IndexingError, f)
+ pytest.raises(IndexingError, f)
- def test_getitem_boolean_object(self):
- # using column from DataFrame
+ def f():
+ s[Series([True], dtype=bool)]
- s = self.series
- mask = s > s.median()
- omask = mask.astype(object)
+ pytest.raises(IndexingError, f)
- # getitem
- result = s[omask]
- expected = s[mask]
- assert_series_equal(result, expected)
- # setitem
- s2 = s.copy()
- cop = s.copy()
- cop[omask] = 5
- s2[mask] = 5
- assert_series_equal(cop, s2)
-
- # nans raise exception
- omask[5:10] = np.nan
- pytest.raises(Exception, s.__getitem__, omask)
- pytest.raises(Exception, s.__setitem__, omask, 5)
-
- def test_getitem_setitem_boolean_corner(self):
- ts = self.ts
- mask_shifted = ts.shift(1, freq=BDay()) > ts.median()
-
- # these used to raise...??
-
- pytest.raises(Exception, ts.__getitem__, mask_shifted)
- pytest.raises(Exception, ts.__setitem__, mask_shifted, 1)
- # ts[mask_shifted]
- # ts[mask_shifted] = 1
-
- pytest.raises(Exception, ts.loc.__getitem__, mask_shifted)
- pytest.raises(Exception, ts.loc.__setitem__, mask_shifted, 1)
- # ts.loc[mask_shifted]
- # ts.loc[mask_shifted] = 2
-
- def test_setitem_boolean(self):
- mask = self.series > self.series.median()
-
- # similar indexed series
- result = self.series.copy()
- result[mask] = self.series * 2
- expected = self.series * 2
- assert_series_equal(result[mask], expected[mask])
-
- # needs alignment
- result = self.series.copy()
- result[mask] = (self.series * 2)[0:5]
- expected = (self.series * 2)[0:5].reindex_like(self.series)
- expected[-mask] = self.series[mask]
- assert_series_equal(result[mask], expected[mask])
-
- def test_get_set_boolean_different_order(self):
- ordered = self.series.sort_values()
-
- # setting
- copy = self.series.copy()
- copy[ordered > 0] = 0
-
- expected = self.series.copy()
- expected[expected > 0] = 0
-
- assert_series_equal(copy, expected)
-
- # getting
- sel = self.series[ordered > 0]
- exp = self.series[self.series > 0]
- assert_series_equal(sel, exp)
-
- def test_where_unsafe(self):
- # unsafe dtype changes
- for dtype in [np.int8, np.int16, np.int32, np.int64, np.float16,
- np.float32, np.float64]:
- s = Series(np.arange(10), dtype=dtype)
- mask = s < 5
- s[mask] = lrange(2, 7)
- expected = Series(lrange(2, 7) + lrange(5, 10), dtype=dtype)
- assert_series_equal(s, expected)
- assert s.dtype == expected.dtype
-
- # these are allowed operations, but are upcasted
- for dtype in [np.int64, np.float64]:
- s = Series(np.arange(10), dtype=dtype)
- mask = s < 5
- values = [2.5, 3.5, 4.5, 5.5, 6.5]
- s[mask] = values
- expected = Series(values + lrange(5, 10), dtype='float64')
- assert_series_equal(s, expected)
- assert s.dtype == expected.dtype
-
- # GH 9731
- s = Series(np.arange(10), dtype='int64')
- mask = s > 5
- values = [2.5, 3.5, 4.5, 5.5]
- s[mask] = values
- expected = Series(lrange(6) + values, dtype='float64')
- assert_series_equal(s, expected)
+def test_getitem_boolean_object(test_data):
+ # using column from DataFrame
+
+ s = test_data.series
+ mask = s > s.median()
+ omask = mask.astype(object)
+
+ # getitem
+ result = s[omask]
+ expected = s[mask]
+ assert_series_equal(result, expected)
+
+ # setitem
+ s2 = s.copy()
+ cop = s.copy()
+ cop[omask] = 5
+ s2[mask] = 5
+ assert_series_equal(cop, s2)
+
+ # nans raise exception
+ omask[5:10] = np.nan
+ pytest.raises(Exception, s.__getitem__, omask)
+ pytest.raises(Exception, s.__setitem__, omask, 5)
+
+
+def test_getitem_setitem_boolean_corner(test_data):
+ ts = test_data.ts
+ mask_shifted = ts.shift(1, freq=BDay()) > ts.median()
+
+ # these used to raise...??
+
+ pytest.raises(Exception, ts.__getitem__, mask_shifted)
+ pytest.raises(Exception, ts.__setitem__, mask_shifted, 1)
+ # ts[mask_shifted]
+ # ts[mask_shifted] = 1
+
+ pytest.raises(Exception, ts.loc.__getitem__, mask_shifted)
+ pytest.raises(Exception, ts.loc.__setitem__, mask_shifted, 1)
+ # ts.loc[mask_shifted]
+ # ts.loc[mask_shifted] = 2
+
+
+def test_setitem_boolean(test_data):
+ mask = test_data.series > test_data.series.median()
+
+ # similar indexed series
+ result = test_data.series.copy()
+ result[mask] = test_data.series * 2
+ expected = test_data.series * 2
+ assert_series_equal(result[mask], expected[mask])
+
+ # needs alignment
+ result = test_data.series.copy()
+ result[mask] = (test_data.series * 2)[0:5]
+ expected = (test_data.series * 2)[0:5].reindex_like(test_data.series)
+ expected[-mask] = test_data.series[mask]
+ assert_series_equal(result[mask], expected[mask])
+
+
+def test_get_set_boolean_different_order(test_data):
+ ordered = test_data.series.sort_values()
+
+ # setting
+ copy = test_data.series.copy()
+ copy[ordered > 0] = 0
- # can't do these as we are forced to change the itemsize of the input
- # to something we cannot
- for dtype in [np.int8, np.int16, np.int32, np.float16, np.float32]:
- s = Series(np.arange(10), dtype=dtype)
- mask = s < 5
- values = [2.5, 3.5, 4.5, 5.5, 6.5]
- pytest.raises(Exception, s.__setitem__, tuple(mask), values)
+ expected = test_data.series.copy()
+ expected[expected > 0] = 0
- # GH3235
- s = Series(np.arange(10), dtype='int64')
+ assert_series_equal(copy, expected)
+
+ # getting
+ sel = test_data.series[ordered > 0]
+ exp = test_data.series[test_data.series > 0]
+ assert_series_equal(sel, exp)
+
+
+def test_where_unsafe():
+ # unsafe dtype changes
+ for dtype in [np.int8, np.int16, np.int32, np.int64, np.float16,
+ np.float32, np.float64]:
+ s = Series(np.arange(10), dtype=dtype)
mask = s < 5
s[mask] = lrange(2, 7)
- expected = Series(lrange(2, 7) + lrange(5, 10), dtype='int64')
+ expected = Series(lrange(2, 7) + lrange(5, 10), dtype=dtype)
assert_series_equal(s, expected)
assert s.dtype == expected.dtype
- s = Series(np.arange(10), dtype='int64')
- mask = s > 5
- s[mask] = [0] * 4
- expected = Series([0, 1, 2, 3, 4, 5] + [0] * 4, dtype='int64')
+ # these are allowed operations, but are upcasted
+ for dtype in [np.int64, np.float64]:
+ s = Series(np.arange(10), dtype=dtype)
+ mask = s < 5
+ values = [2.5, 3.5, 4.5, 5.5, 6.5]
+ s[mask] = values
+ expected = Series(values + lrange(5, 10), dtype='float64')
assert_series_equal(s, expected)
+ assert s.dtype == expected.dtype
- s = Series(np.arange(10))
- mask = s > 5
+ # GH 9731
+ s = Series(np.arange(10), dtype='int64')
+ mask = s > 5
+ values = [2.5, 3.5, 4.5, 5.5]
+ s[mask] = values
+ expected = Series(lrange(6) + values, dtype='float64')
+ assert_series_equal(s, expected)
+
+ # can't do these as we are forced to change the itemsize of the input
+ # to something we cannot
+ for dtype in [np.int8, np.int16, np.int32, np.float16, np.float32]:
+ s = Series(np.arange(10), dtype=dtype)
+ mask = s < 5
+ values = [2.5, 3.5, 4.5, 5.5, 6.5]
+ pytest.raises(Exception, s.__setitem__, tuple(mask), values)
- def f():
- s[mask] = [5, 4, 3, 2, 1]
+ # GH3235
+ s = Series(np.arange(10), dtype='int64')
+ mask = s < 5
+ s[mask] = lrange(2, 7)
+ expected = Series(lrange(2, 7) + lrange(5, 10), dtype='int64')
+ assert_series_equal(s, expected)
+ assert s.dtype == expected.dtype
- pytest.raises(ValueError, f)
+ s = Series(np.arange(10), dtype='int64')
+ mask = s > 5
+ s[mask] = [0] * 4
+ expected = Series([0, 1, 2, 3, 4, 5] + [0] * 4, dtype='int64')
+ assert_series_equal(s, expected)
+
+ s = Series(np.arange(10))
+ mask = s > 5
+
+ def f():
+ s[mask] = [5, 4, 3, 2, 1]
+
+ pytest.raises(ValueError, f)
+
+ def f():
+ s[mask] = [0] * 5
+
+ pytest.raises(ValueError, f)
+
+ # dtype changes
+ s = Series([1, 2, 3, 4])
+ result = s.where(s > 2, np.nan)
+ expected = Series([np.nan, np.nan, 3, 4])
+ assert_series_equal(result, expected)
+
+ # GH 4667
+ # setting with None changes dtype
+ s = Series(range(10)).astype(float)
+ s[8] = None
+ result = s[8]
+ assert isna(result)
+
+ s = Series(range(10)).astype(float)
+ s[s > 8] = None
+ result = s[isna(s)]
+ expected = Series(np.nan, index=[9])
+ assert_series_equal(result, expected)
+
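A minimal sketch (not part of the patch) of the upcasting behavior that `test_where_unsafe` above exercises: combining an int64 Series with float fill values forces an upcast to float64, since the original dtype cannot hold values like 2.5 losslessly. `Series.where` shows the same rule without mutating in place:

```python
import numpy as np
import pandas as pd

# int64 Series; replacing masked-out entries with a float scalar
# upcasts the whole result to float64.
s = pd.Series(np.arange(10), dtype="int64")
result = s.where(s >= 5, 2.5)  # keep values >= 5, replace the rest with 2.5

print(result.dtype)  # float64
```

The same upcast happens with in-place boolean-mask assignment, which is why the tests assert `s.dtype == expected.dtype` after each set.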
+
+def test_where_raise_on_error_deprecation():
+ # gh-14968
+ # deprecation of raise_on_error
+ s = Series(np.random.randn(5))
+ cond = s > 0
+ with tm.assert_produces_warning(FutureWarning):
+ s.where(cond, raise_on_error=True)
+ with tm.assert_produces_warning(FutureWarning):
+ s.mask(cond, raise_on_error=True)
+
+
+def test_where():
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ rs = s.where(cond).dropna()
+ rs2 = s[cond]
+ assert_series_equal(rs, rs2)
+
+ rs = s.where(cond, -s)
+ assert_series_equal(rs, s.abs())
+
+ rs = s.where(cond)
+ assert (s.shape == rs.shape)
+ assert (rs is not s)
+
+ # test alignment
+ cond = Series([True, False, False, True, False], index=s.index)
+ s2 = -(s.abs())
+
+ expected = s2[cond].reindex(s2.index[:3]).reindex(s2.index)
+ rs = s2.where(cond[:3])
+ assert_series_equal(rs, expected)
+
+ expected = s2.abs()
+ expected.iloc[0] = s2[0]
+ rs = s2.where(cond[:3], -s2)
+ assert_series_equal(rs, expected)
+
+
+def test_where_error():
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ pytest.raises(ValueError, s.where, 1)
+ pytest.raises(ValueError, s.where, cond[:3].values, -s)
+
+ # GH 2745
+ s = Series([1, 2])
+ s[[True, False]] = [0, 1]
+ expected = Series([0, 2])
+ assert_series_equal(s, expected)
+
+ # failures
+ pytest.raises(ValueError, s.__setitem__, tuple([[[True, False]]]),
+ [0, 2, 3])
+ pytest.raises(ValueError, s.__setitem__, tuple([[[True, False]]]),
+ [])
- def f():
- s[mask] = [0] * 5
- pytest.raises(ValueError, f)
+def test_where_array_like():
+ # see gh-15414
+ s = Series([1, 2, 3])
+ cond = [False, True, True]
+ expected = Series([np.nan, 2, 3])
+ klasses = [list, tuple, np.array, Series]
- # dtype changes
- s = Series([1, 2, 3, 4])
- result = s.where(s > 2, np.nan)
- expected = Series([np.nan, np.nan, 3, 4])
+ for klass in klasses:
+ result = s.where(klass(cond))
assert_series_equal(result, expected)
- # GH 4667
- # setting with None changes dtype
- s = Series(range(10)).astype(float)
- s[8] = None
- result = s[8]
- assert isna(result)
-
- s = Series(range(10)).astype(float)
- s[s > 8] = None
- result = s[isna(s)]
- expected = Series(np.nan, index=[9])
- assert_series_equal(result, expected)
+def test_where_invalid_input():
+ # see gh-15414: only boolean arrays accepted
+ s = Series([1, 2, 3])
+ msg = "Boolean array expected for the condition"
-class TestWhereAndMask(TestData):
+ conds = [
+ [1, 0, 1],
+ Series([2, 5, 7]),
+ ["True", "False", "True"],
+ [Timestamp("2017-01-01"),
+ pd.NaT, Timestamp("2017-01-02")]
+ ]
- def test_where_raise_on_error_deprecation(self):
- # gh-14968
- # deprecation of raise_on_error
- s = Series(np.random.randn(5))
- cond = s > 0
- with tm.assert_produces_warning(FutureWarning):
- s.where(cond, raise_on_error=True)
- with tm.assert_produces_warning(FutureWarning):
- s.mask(cond, raise_on_error=True)
+ for cond in conds:
+ with tm.assert_raises_regex(ValueError, msg):
+ s.where(cond)
- def test_where(self):
- s = Series(np.random.randn(5))
- cond = s > 0
+ msg = "Array conditional must be same shape as self"
+ with tm.assert_raises_regex(ValueError, msg):
+ s.where([True])
- rs = s.where(cond).dropna()
- rs2 = s[cond]
- assert_series_equal(rs, rs2)
- rs = s.where(cond, -s)
- assert_series_equal(rs, s.abs())
+def test_where_ndframe_align():
+ msg = "Array conditional must be same shape as self"
+ s = Series([1, 2, 3])
- rs = s.where(cond)
- assert (s.shape == rs.shape)
- assert (rs is not s)
+ cond = [True]
+ with tm.assert_raises_regex(ValueError, msg):
+ s.where(cond)
- # test alignment
- cond = Series([True, False, False, True, False], index=s.index)
- s2 = -(s.abs())
+ expected = Series([1, np.nan, np.nan])
- expected = s2[cond].reindex(s2.index[:3]).reindex(s2.index)
- rs = s2.where(cond[:3])
- assert_series_equal(rs, expected)
+ out = s.where(Series(cond))
+ tm.assert_series_equal(out, expected)
- expected = s2.abs()
- expected.iloc[0] = s2[0]
- rs = s2.where(cond[:3], -s2)
- assert_series_equal(rs, expected)
+ cond = np.array([False, True, False, True])
+ with tm.assert_raises_regex(ValueError, msg):
+ s.where(cond)
- def test_where_error(self):
- s = Series(np.random.randn(5))
- cond = s > 0
+ expected = Series([np.nan, 2, np.nan])
- pytest.raises(ValueError, s.where, 1)
- pytest.raises(ValueError, s.where, cond[:3].values, -s)
+ out = s.where(Series(cond))
+ tm.assert_series_equal(out, expected)
- # GH 2745
- s = Series([1, 2])
- s[[True, False]] = [0, 1]
- expected = Series([0, 2])
- assert_series_equal(s, expected)
- # failures
- pytest.raises(ValueError, s.__setitem__, tuple([[[True, False]]]),
- [0, 2, 3])
- pytest.raises(ValueError, s.__setitem__, tuple([[[True, False]]]),
- [])
-
- def test_where_array_like(self):
- # see gh-15414
- s = Series([1, 2, 3])
- cond = [False, True, True]
- expected = Series([np.nan, 2, 3])
- klasses = [list, tuple, np.array, Series]
-
- for klass in klasses:
- result = s.where(klass(cond))
- assert_series_equal(result, expected)
-
- def test_where_invalid_input(self):
- # see gh-15414: only boolean arrays accepted
- s = Series([1, 2, 3])
- msg = "Boolean array expected for the condition"
-
- conds = [
- [1, 0, 1],
- Series([2, 5, 7]),
- ["True", "False", "True"],
- [Timestamp("2017-01-01"),
- pd.NaT, Timestamp("2017-01-02")]
- ]
-
- for cond in conds:
- with tm.assert_raises_regex(ValueError, msg):
- s.where(cond)
-
- msg = "Array conditional must be same shape as self"
- with tm.assert_raises_regex(ValueError, msg):
- s.where([True])
+def test_where_setitem_invalid():
+ # GH 2702
+ # make sure correct exceptions are raised on invalid list assignment
- def test_where_ndframe_align(self):
- msg = "Array conditional must be same shape as self"
- s = Series([1, 2, 3])
+ # slice
+ s = Series(list('abc'))
- cond = [True]
- with tm.assert_raises_regex(ValueError, msg):
- s.where(cond)
+ def f():
+ s[0:3] = list(range(27))
- expected = Series([1, np.nan, np.nan])
+ pytest.raises(ValueError, f)
- out = s.where(Series(cond))
- tm.assert_series_equal(out, expected)
+ s[0:3] = list(range(3))
+ expected = Series([0, 1, 2])
+ assert_series_equal(s.astype(np.int64), expected)
- cond = np.array([False, True, False, True])
- with tm.assert_raises_regex(ValueError, msg):
- s.where(cond)
+ # slice with step
+ s = Series(list('abcdef'))
- expected = Series([np.nan, 2, np.nan])
+ def f():
+ s[0:4:2] = list(range(27))
- out = s.where(Series(cond))
- tm.assert_series_equal(out, expected)
+ pytest.raises(ValueError, f)
- def test_where_setitem_invalid(self):
- # GH 2702
- # make sure correct exceptions are raised on invalid list assignment
+ s = Series(list('abcdef'))
+ s[0:4:2] = list(range(2))
+ expected = Series([0, 'b', 1, 'd', 'e', 'f'])
+ assert_series_equal(s, expected)
- # slice
- s = Series(list('abc'))
+ # neg slices
+ s = Series(list('abcdef'))
- def f():
- s[0:3] = list(range(27))
+ def f():
+ s[:-1] = list(range(27))
- pytest.raises(ValueError, f)
+ pytest.raises(ValueError, f)
- s[0:3] = list(range(3))
- expected = Series([0, 1, 2])
- assert_series_equal(s.astype(np.int64), expected, )
+ s[-3:-1] = list(range(2))
+ expected = Series(['a', 'b', 'c', 0, 1, 'f'])
+ assert_series_equal(s, expected)
- # slice with step
- s = Series(list('abcdef'))
+ # list
+ s = Series(list('abc'))
- def f():
- s[0:4:2] = list(range(27))
+ def f():
+ s[[0, 1, 2]] = list(range(27))
- pytest.raises(ValueError, f)
+ pytest.raises(ValueError, f)
- s = Series(list('abcdef'))
- s[0:4:2] = list(range(2))
- expected = Series([0, 'b', 1, 'd', 'e', 'f'])
- assert_series_equal(s, expected)
+ s = Series(list('abc'))
- # neg slices
- s = Series(list('abcdef'))
+ def f():
+ s[[0, 1, 2]] = list(range(2))
- def f():
- s[:-1] = list(range(27))
+ pytest.raises(ValueError, f)
- pytest.raises(ValueError, f)
+ # scalar
+ s = Series(list('abc'))
+ s[0] = list(range(10))
+ expected = Series([list(range(10)), 'b', 'c'])
+ assert_series_equal(s, expected)
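An illustrative sketch (not part of the patch) of the length checks in `test_where_setitem_invalid`: assigning a list of the wrong length through a slice raises `ValueError`, while a length-matched assignment succeeds.

```python
import pandas as pd

s = pd.Series(list("abc"))
try:
    s[0:3] = list(range(27))  # 27 values into a 3-element slice
    raised = False
except ValueError:
    raised = True  # mismatched length is rejected

s2 = pd.Series(list("abc"))
s2[0:3] = list(range(3))  # matched length: assignment succeeds
```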
- s[-3:-1] = list(range(2))
- expected = Series(['a', 'b', 'c', 0, 1, 'f'])
- assert_series_equal(s, expected)
- # list
- s = Series(list('abc'))
+def test_where_broadcast():
+ # Test a variety of differently sized series
+ for size in range(2, 6):
+ # Test a variety of boolean indices
+ for selection in [
+ # First element should be set
+ np.resize([True, False, False, False, False], size),
+ # Set alternating elements
+ np.resize([True, False], size),
+ # No element should be set
+ np.resize([False], size)
+ ]:
- def f():
- s[[0, 1, 2]] = list(range(27))
+ # Test a variety of different numbers as content
+ for item in [2.0, np.nan, np.finfo(np.float).max,
+ np.finfo(np.float).min]:
+ # Test numpy arrays, lists and tuples as the input to be
+ # broadcast
+ for arr in [np.array([item]), [item], (item,)]:
+ data = np.arange(size, dtype=float)
+ s = Series(data)
+ s[selection] = arr
+ # Construct the expected series by taking the source
+ # data or item based on the selection
+ expected = Series([item if use_item else data[
+ i] for i, use_item in enumerate(selection)])
+ assert_series_equal(s, expected)
- pytest.raises(ValueError, f)
+ s = Series(data)
+ result = s.where(~selection, arr)
+ assert_series_equal(result, expected)
- s = Series(list('abc'))
- def f():
- s[[0, 1, 2]] = list(range(2))
+def test_where_inplace():
+ s = Series(np.random.randn(5))
+ cond = s > 0
- pytest.raises(ValueError, f)
+ rs = s.copy()
- # scalar
- s = Series(list('abc'))
- s[0] = list(range(10))
- expected = Series([list(range(10)), 'b', 'c'])
- assert_series_equal(s, expected)
+ rs.where(cond, inplace=True)
+ assert_series_equal(rs.dropna(), s[cond])
+ assert_series_equal(rs, s.where(cond))
- def test_where_broadcast(self):
- # Test a variety of differently sized series
- for size in range(2, 6):
- # Test a variety of boolean indices
- for selection in [
- # First element should be set
- np.resize([True, False, False, False, False], size),
- # Set alternating elements]
- np.resize([True, False], size),
- # No element should be set
- np.resize([False], size)
- ]:
-
- # Test a variety of different numbers as content
- for item in [2.0, np.nan, np.finfo(np.float).max,
- np.finfo(np.float).min]:
- # Test numpy arrays, lists and tuples as the input to be
- # broadcast
- for arr in [np.array([item]), [item], (item,)]:
- data = np.arange(size, dtype=float)
- s = Series(data)
- s[selection] = arr
- # Construct the expected series by taking the source
- # data or item based on the selection
- expected = Series([item if use_item else data[
- i] for i, use_item in enumerate(selection)])
- assert_series_equal(s, expected)
-
- s = Series(data)
- result = s.where(~selection, arr)
- assert_series_equal(result, expected)
-
- def test_where_inplace(self):
- s = Series(np.random.randn(5))
- cond = s > 0
-
- rs = s.copy()
-
- rs.where(cond, inplace=True)
- assert_series_equal(rs.dropna(), s[cond])
- assert_series_equal(rs, s.where(cond))
-
- rs = s.copy()
- rs.where(cond, -s, inplace=True)
- assert_series_equal(rs, s.where(cond, -s))
-
- def test_where_dups(self):
- # GH 4550
- # where crashes with dups in index
- s1 = Series(list(range(3)))
- s2 = Series(list(range(3)))
- comb = pd.concat([s1, s2])
- result = comb.where(comb < 2)
- expected = Series([0, 1, np.nan, 0, 1, np.nan],
- index=[0, 1, 2, 0, 1, 2])
- assert_series_equal(result, expected)
+ rs = s.copy()
+ rs.where(cond, -s, inplace=True)
+ assert_series_equal(rs, s.where(cond, -s))
- # GH 4548
- # inplace updating not working with dups
- comb[comb < 1] = 5
- expected = Series([5, 1, 2, 5, 1, 2], index=[0, 1, 2, 0, 1, 2])
- assert_series_equal(comb, expected)
-
- comb[comb < 2] += 10
- expected = Series([5, 11, 2, 5, 11, 2], index=[0, 1, 2, 0, 1, 2])
- assert_series_equal(comb, expected)
-
- def test_where_numeric_with_string(self):
- # GH 9280
- s = pd.Series([1, 2, 3])
- w = s.where(s > 1, 'X')
-
- assert not is_integer(w[0])
- assert is_integer(w[1])
- assert is_integer(w[2])
- assert isinstance(w[0], str)
- assert w.dtype == 'object'
-
- w = s.where(s > 1, ['X', 'Y', 'Z'])
- assert not is_integer(w[0])
- assert is_integer(w[1])
- assert is_integer(w[2])
- assert isinstance(w[0], str)
- assert w.dtype == 'object'
-
- w = s.where(s > 1, np.array(['X', 'Y', 'Z']))
- assert not is_integer(w[0])
- assert is_integer(w[1])
- assert is_integer(w[2])
- assert isinstance(w[0], str)
- assert w.dtype == 'object'
-
- def test_where_timedelta_coerce(self):
- s = Series([1, 2], dtype='timedelta64[ns]')
- expected = Series([10, 10])
- mask = np.array([False, False])
-
- rs = s.where(mask, [10, 10])
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, 10)
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, 10.0)
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, [10.0, 10.0])
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, [10.0, np.nan])
- expected = Series([10, None], dtype='object')
- assert_series_equal(rs, expected)
-
- def test_where_datetime_conversion(self):
- s = Series(date_range('20130102', periods=2))
- expected = Series([10, 10])
- mask = np.array([False, False])
-
- rs = s.where(mask, [10, 10])
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, 10)
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, 10.0)
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, [10.0, 10.0])
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, [10.0, np.nan])
- expected = Series([10, None], dtype='object')
- assert_series_equal(rs, expected)
-
- # GH 15701
- timestamps = ['2016-12-31 12:00:04+00:00',
- '2016-12-31 12:00:04.010000+00:00']
- s = Series([pd.Timestamp(t) for t in timestamps])
- rs = s.where(Series([False, True]))
- expected = Series([pd.NaT, s[1]])
- assert_series_equal(rs, expected)
-
- def test_mask(self):
- # compare with tested results in test_where
- s = Series(np.random.randn(5))
- cond = s > 0
-
- rs = s.where(~cond, np.nan)
- assert_series_equal(rs, s.mask(cond))
-
- rs = s.where(~cond)
- rs2 = s.mask(cond)
- assert_series_equal(rs, rs2)
-
- rs = s.where(~cond, -s)
- rs2 = s.mask(cond, -s)
- assert_series_equal(rs, rs2)
-
- cond = Series([True, False, False, True, False], index=s.index)
- s2 = -(s.abs())
- rs = s2.where(~cond[:3])
- rs2 = s2.mask(cond[:3])
- assert_series_equal(rs, rs2)
-
- rs = s2.where(~cond[:3], -s2)
- rs2 = s2.mask(cond[:3], -s2)
- assert_series_equal(rs, rs2)
-
- pytest.raises(ValueError, s.mask, 1)
- pytest.raises(ValueError, s.mask, cond[:3].values, -s)
-
- # dtype changes
- s = Series([1, 2, 3, 4])
- result = s.mask(s > 2, np.nan)
- expected = Series([1, 2, np.nan, np.nan])
- assert_series_equal(result, expected)
- def test_mask_broadcast(self):
- # GH 8801
- # copied from test_where_broadcast
- for size in range(2, 6):
- for selection in [
- # First element should be set
- np.resize([True, False, False, False, False], size),
- # Set alternating elements]
- np.resize([True, False], size),
- # No element should be set
- np.resize([False], size)
- ]:
- for item in [2.0, np.nan, np.finfo(np.float).max,
- np.finfo(np.float).min]:
- for arr in [np.array([item]), [item], (item,)]:
- data = np.arange(size, dtype=float)
- s = Series(data)
- result = s.mask(selection, arr)
- expected = Series([item if use_item else data[
- i] for i, use_item in enumerate(selection)])
- assert_series_equal(result, expected)
-
- def test_mask_inplace(self):
- s = Series(np.random.randn(5))
- cond = s > 0
-
- rs = s.copy()
- rs.mask(cond, inplace=True)
- assert_series_equal(rs.dropna(), s[~cond])
- assert_series_equal(rs, s.mask(cond))
-
- rs = s.copy()
- rs.mask(cond, -s, inplace=True)
- assert_series_equal(rs, s.mask(cond, -s))
+def test_where_dups():
+ # GH 4550
+ # where crashes with dups in index
+ s1 = Series(list(range(3)))
+ s2 = Series(list(range(3)))
+ comb = pd.concat([s1, s2])
+ result = comb.where(comb < 2)
+ expected = Series([0, 1, np.nan, 0, 1, np.nan],
+ index=[0, 1, 2, 0, 1, 2])
+ assert_series_equal(result, expected)
+
+ # GH 4548
+ # inplace updating not working with dups
+ comb[comb < 1] = 5
+ expected = Series([5, 1, 2, 5, 1, 2], index=[0, 1, 2, 0, 1, 2])
+ assert_series_equal(comb, expected)
+
+ comb[comb < 2] += 10
+ expected = Series([5, 11, 2, 5, 11, 2], index=[0, 1, 2, 0, 1, 2])
+ assert_series_equal(comb, expected)
+
+
+def test_where_numeric_with_string():
+ # GH 9280
+ s = pd.Series([1, 2, 3])
+ w = s.where(s > 1, 'X')
+
+ assert not is_integer(w[0])
+ assert is_integer(w[1])
+ assert is_integer(w[2])
+ assert isinstance(w[0], str)
+ assert w.dtype == 'object'
+
+ w = s.where(s > 1, ['X', 'Y', 'Z'])
+ assert not is_integer(w[0])
+ assert is_integer(w[1])
+ assert is_integer(w[2])
+ assert isinstance(w[0], str)
+ assert w.dtype == 'object'
+
+ w = s.where(s > 1, np.array(['X', 'Y', 'Z']))
+ assert not is_integer(w[0])
+ assert is_integer(w[1])
+ assert is_integer(w[2])
+ assert isinstance(w[0], str)
+ assert w.dtype == 'object'
+
+
+def test_where_timedelta_coerce():
+ s = Series([1, 2], dtype='timedelta64[ns]')
+ expected = Series([10, 10])
+ mask = np.array([False, False])
+
+ rs = s.where(mask, [10, 10])
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, 10)
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, 10.0)
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, [10.0, 10.0])
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, [10.0, np.nan])
+ expected = Series([10, None], dtype='object')
+ assert_series_equal(rs, expected)
+
+
+def test_where_datetime_conversion():
+ s = Series(date_range('20130102', periods=2))
+ expected = Series([10, 10])
+ mask = np.array([False, False])
+
+ rs = s.where(mask, [10, 10])
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, 10)
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, 10.0)
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, [10.0, 10.0])
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, [10.0, np.nan])
+ expected = Series([10, None], dtype='object')
+ assert_series_equal(rs, expected)
+
+ # GH 15701
+ timestamps = ['2016-12-31 12:00:04+00:00',
+ '2016-12-31 12:00:04.010000+00:00']
+ s = Series([pd.Timestamp(t) for t in timestamps])
+ rs = s.where(Series([False, True]))
+ expected = Series([pd.NaT, s[1]])
+ assert_series_equal(rs, expected)
+
+
+def test_mask():
+ # compare with tested results in test_where
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ rs = s.where(~cond, np.nan)
+ assert_series_equal(rs, s.mask(cond))
+
+ rs = s.where(~cond)
+ rs2 = s.mask(cond)
+ assert_series_equal(rs, rs2)
+
+ rs = s.where(~cond, -s)
+ rs2 = s.mask(cond, -s)
+ assert_series_equal(rs, rs2)
+
+ cond = Series([True, False, False, True, False], index=s.index)
+ s2 = -(s.abs())
+ rs = s2.where(~cond[:3])
+ rs2 = s2.mask(cond[:3])
+ assert_series_equal(rs, rs2)
+
+ rs = s2.where(~cond[:3], -s2)
+ rs2 = s2.mask(cond[:3], -s2)
+ assert_series_equal(rs, rs2)
+
+ pytest.raises(ValueError, s.mask, 1)
+ pytest.raises(ValueError, s.mask, cond[:3].values, -s)
+
+ # dtype changes
+ s = Series([1, 2, 3, 4])
+ result = s.mask(s > 2, np.nan)
+ expected = Series([1, 2, np.nan, np.nan])
+ assert_series_equal(result, expected)
+
+
+def test_mask_broadcast():
+ # GH 8801
+ # copied from test_where_broadcast
+ for size in range(2, 6):
+ for selection in [
+ # First element should be set
+ np.resize([True, False, False, False, False], size),
+ # Set alternating elements
+ np.resize([True, False], size),
+ # No element should be set
+ np.resize([False], size)
+ ]:
+ for item in [2.0, np.nan, np.finfo(np.float).max,
+ np.finfo(np.float).min]:
+ for arr in [np.array([item]), [item], (item,)]:
+ data = np.arange(size, dtype=float)
+ s = Series(data)
+ result = s.mask(selection, arr)
+ expected = Series([item if use_item else data[
+ i] for i, use_item in enumerate(selection)])
+ assert_series_equal(result, expected)
+
+
+def test_mask_inplace():
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ rs = s.copy()
+ rs.mask(cond, inplace=True)
+ assert_series_equal(rs.dropna(), s[~cond])
+ assert_series_equal(rs, s.mask(cond))
+
+ rs = s.copy()
+ rs.mask(cond, -s, inplace=True)
+ assert_series_equal(rs, s.mask(cond, -s))
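A short sketch (not part of the patch) of the where/mask duality that `test_mask` above relies on: `s.mask(cond)` keeps values where `cond` is False, so it is equivalent to `s.where(~cond)`, with or without an `other` argument.

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, -2.0, 3.0, -4.0])
cond = s > 0

# mask(cond) == where(~cond): positions where cond is True become NaN
pd.testing.assert_series_equal(s.mask(cond), s.where(~cond))

# the equivalence also holds when supplying a replacement
pd.testing.assert_series_equal(s.mask(cond, -s), s.where(~cond, -s))
```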
diff --git a/pandas/tests/series/indexing/test_datetime.py b/pandas/tests/series/indexing/test_datetime.py
index f8ea50366a73d..db8118384f6f6 100644
--- a/pandas/tests/series/indexing/test_datetime.py
+++ b/pandas/tests/series/indexing/test_datetime.py
@@ -22,669 +22,689 @@
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
+"""
+Also test support for datetime64[ns] in Series / DataFrame
+"""
-class TestDatetimeIndexing(object):
- """
- Also test support for datetime64[ns] in Series / DataFrame
- """
-
- def setup_method(self, method):
- dti = DatetimeIndex(start=datetime(2005, 1, 1),
- end=datetime(2005, 1, 10), freq='Min')
- self.series = Series(np.random.rand(len(dti)), dti)
-
- def test_fancy_getitem(self):
- dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005, 1, 1),
- end=datetime(2010, 1, 1))
-
- s = Series(np.arange(len(dti)), index=dti)
-
- assert s[48] == 48
- assert s['1/2/2009'] == 48
- assert s['2009-1-2'] == 48
- assert s[datetime(2009, 1, 2)] == 48
- assert s[Timestamp(datetime(2009, 1, 2))] == 48
- pytest.raises(KeyError, s.__getitem__, '2009-1-3')
-
- assert_series_equal(s['3/6/2009':'2009-06-05'],
- s[datetime(2009, 3, 6):datetime(2009, 6, 5)])
-
- def test_fancy_setitem(self):
- dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005, 1, 1),
- end=datetime(2010, 1, 1))
-
- s = Series(np.arange(len(dti)), index=dti)
- s[48] = -1
- assert s[48] == -1
- s['1/2/2009'] = -2
- assert s[48] == -2
- s['1/2/2009':'2009-06-05'] = -3
- assert (s[48:54] == -3).all()
-
- def test_dti_snap(self):
- dti = DatetimeIndex(['1/1/2002', '1/2/2002', '1/3/2002', '1/4/2002',
- '1/5/2002', '1/6/2002', '1/7/2002'], freq='D')
-
- res = dti.snap(freq='W-MON')
- exp = date_range('12/31/2001', '1/7/2002', freq='w-mon')
- exp = exp.repeat([3, 4])
- assert (res == exp).all()
-
- res = dti.snap(freq='B')
-
- exp = date_range('1/1/2002', '1/7/2002', freq='b')
- exp = exp.repeat([1, 1, 1, 2, 2])
- assert (res == exp).all()
-
- def test_dti_reset_index_round_trip(self):
- dti = DatetimeIndex(start='1/1/2001', end='6/1/2001', freq='D')
- d1 = DataFrame({'v': np.random.rand(len(dti))}, index=dti)
- d2 = d1.reset_index()
- assert d2.dtypes[0] == np.dtype('M8[ns]')
- d3 = d2.set_index('index')
- assert_frame_equal(d1, d3, check_names=False)
-
- # #2329
- stamp = datetime(2012, 11, 22)
- df = DataFrame([[stamp, 12.1]], columns=['Date', 'Value'])
- df = df.set_index('Date')
-
- assert df.index[0] == stamp
- assert df.reset_index()['Date'][0] == stamp
-
- def test_series_set_value(self):
- # #1561
-
- dates = [datetime(2001, 1, 1), datetime(2001, 1, 2)]
- index = DatetimeIndex(dates)
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- s = Series().set_value(dates[0], 1.)
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- s2 = s.set_value(dates[1], np.nan)
-
- exp = Series([1., np.nan], index=index)
-
- assert_series_equal(s2, exp)
-
- # s = Series(index[:1], index[:1])
- # s2 = s.set_value(dates[1], index[1])
- # assert s2.values.dtype == 'M8[ns]'
-
- @pytest.mark.slow
- def test_slice_locs_indexerror(self):
- times = [datetime(2000, 1, 1) + timedelta(minutes=i * 10)
- for i in range(100000)]
- s = Series(lrange(100000), times)
- s.loc[datetime(1900, 1, 1):datetime(2100, 1, 1)]
-
- def test_slicing_datetimes(self):
- # GH 7523
-
- # unique
- df = DataFrame(np.arange(4., dtype='float64'),
- index=[datetime(2001, 1, i, 10, 00)
- for i in [1, 2, 3, 4]])
- result = df.loc[datetime(2001, 1, 1, 10):]
- assert_frame_equal(result, df)
- result = df.loc[:datetime(2001, 1, 4, 10)]
- assert_frame_equal(result, df)
- result = df.loc[datetime(2001, 1, 1, 10):datetime(2001, 1, 4, 10)]
- assert_frame_equal(result, df)
-
- result = df.loc[datetime(2001, 1, 1, 11):]
- expected = df.iloc[1:]
- assert_frame_equal(result, expected)
- result = df.loc['20010101 11':]
- assert_frame_equal(result, expected)
-
- # duplicates
- df = pd.DataFrame(np.arange(5., dtype='float64'),
- index=[datetime(2001, 1, i, 10, 00)
- for i in [1, 2, 2, 3, 4]])
-
- result = df.loc[datetime(2001, 1, 1, 10):]
- assert_frame_equal(result, df)
- result = df.loc[:datetime(2001, 1, 4, 10)]
- assert_frame_equal(result, df)
- result = df.loc[datetime(2001, 1, 1, 10):datetime(2001, 1, 4, 10)]
- assert_frame_equal(result, df)
-
- result = df.loc[datetime(2001, 1, 1, 11):]
- expected = df.iloc[1:]
- assert_frame_equal(result, expected)
- result = df.loc['20010101 11':]
- assert_frame_equal(result, expected)
-
- def test_frame_datetime64_duplicated(self):
- dates = date_range('2010-07-01', end='2010-08-05')
-
- tst = DataFrame({'symbol': 'AAA', 'date': dates})
- result = tst.duplicated(['date', 'symbol'])
- assert (-result).all()
-
- tst = DataFrame({'date': dates})
- result = tst.duplicated()
- assert (-result).all()
-
- def test_getitem_setitem_datetime_tz_pytz(self):
- from pytz import timezone as tz
- from pandas import date_range
-
- N = 50
- # testing with timezone, GH #2785
- rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
- ts = Series(np.random.randn(N), index=rng)
-
- # also test Timestamp tz handling, GH #2789
- result = ts.copy()
- result["1990-01-01 09:00:00+00:00"] = 0
- result["1990-01-01 09:00:00+00:00"] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts.copy()
- result["1990-01-01 03:00:00-06:00"] = 0
- result["1990-01-01 03:00:00-06:00"] = ts[4]
- assert_series_equal(result, ts)
-
- # repeat with datetimes
- result = ts.copy()
- result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0
- result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts.copy()
-
- # comparison dates with datetime MUST be localized!
- date = tz('US/Central').localize(datetime(1990, 1, 1, 3))
- result[date] = 0
- result[date] = ts[4]
- assert_series_equal(result, ts)
-
- def test_getitem_setitem_datetime_tz_dateutil(self):
- from dateutil.tz import tzutc
- from pandas._libs.tslibs.timezones import dateutil_gettz as gettz
-
- tz = lambda x: tzutc() if x == 'UTC' else gettz(
- x) # handle special case for utc in dateutil
-
- from pandas import date_range
-
- N = 50
-
- # testing with timezone, GH #2785
- rng = date_range('1/1/1990', periods=N, freq='H',
- tz='America/New_York')
- ts = Series(np.random.randn(N), index=rng)
-
- # also test Timestamp tz handling, GH #2789
- result = ts.copy()
- result["1990-01-01 09:00:00+00:00"] = 0
- result["1990-01-01 09:00:00+00:00"] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts.copy()
- result["1990-01-01 03:00:00-06:00"] = 0
- result["1990-01-01 03:00:00-06:00"] = ts[4]
- assert_series_equal(result, ts)
-
- # repeat with datetimes
- result = ts.copy()
- result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0
- result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts.copy()
- result[datetime(1990, 1, 1, 3, tzinfo=tz('America/Chicago'))] = 0
- result[datetime(1990, 1, 1, 3, tzinfo=tz('America/Chicago'))] = ts[4]
- assert_series_equal(result, ts)
-
- def test_getitem_setitem_datetimeindex(self):
- N = 50
- # testing with timezone, GH #2785
- rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
- ts = Series(np.random.randn(N), index=rng)
-
- result = ts["1990-01-01 04:00:00"]
- expected = ts[4]
- assert result == expected
-
- result = ts.copy()
- result["1990-01-01 04:00:00"] = 0
- result["1990-01-01 04:00:00"] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts["1990-01-01 04:00:00":"1990-01-01 07:00:00"]
- expected = ts[4:8]
- assert_series_equal(result, expected)
- result = ts.copy()
- result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = 0
- result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = ts[4:8]
- assert_series_equal(result, ts)
+def test_fancy_getitem():
+ dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005, 1, 1),
+ end=datetime(2010, 1, 1))
- lb = "1990-01-01 04:00:00"
- rb = "1990-01-01 07:00:00"
- # GH#18435 strings get a pass from tzawareness compat
- result = ts[(ts.index >= lb) & (ts.index <= rb)]
- expected = ts[4:8]
- assert_series_equal(result, expected)
+ s = Series(np.arange(len(dti)), index=dti)
- lb = "1990-01-01 04:00:00-0500"
- rb = "1990-01-01 07:00:00-0500"
- result = ts[(ts.index >= lb) & (ts.index <= rb)]
- expected = ts[4:8]
- assert_series_equal(result, expected)
+ assert s[48] == 48
+ assert s['1/2/2009'] == 48
+ assert s['2009-1-2'] == 48
+ assert s[datetime(2009, 1, 2)] == 48
+ assert s[Timestamp(datetime(2009, 1, 2))] == 48
+ pytest.raises(KeyError, s.__getitem__, '2009-1-3')
- # repeat all the above with naive datetimes
- result = ts[datetime(1990, 1, 1, 4)]
- expected = ts[4]
- assert result == expected
+ assert_series_equal(s['3/6/2009':'2009-06-05'],
+ s[datetime(2009, 3, 6):datetime(2009, 6, 5)])
- result = ts.copy()
- result[datetime(1990, 1, 1, 4)] = 0
- result[datetime(1990, 1, 1, 4)] = ts[4]
- assert_series_equal(result, ts)
- result = ts[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)]
- expected = ts[4:8]
- assert_series_equal(result, expected)
+def test_fancy_setitem():
+ dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005, 1, 1),
+ end=datetime(2010, 1, 1))
- result = ts.copy()
- result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = 0
- result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = ts[4:8]
- assert_series_equal(result, ts)
-
- lb = datetime(1990, 1, 1, 4)
- rb = datetime(1990, 1, 1, 7)
- with pytest.raises(TypeError):
- # tznaive vs tzaware comparison is invalid
- # see GH#18376, GH#18162
- ts[(ts.index >= lb) & (ts.index <= rb)]
-
- lb = pd.Timestamp(datetime(1990, 1, 1, 4)).tz_localize(rng.tzinfo)
- rb = pd.Timestamp(datetime(1990, 1, 1, 7)).tz_localize(rng.tzinfo)
- result = ts[(ts.index >= lb) & (ts.index <= rb)]
- expected = ts[4:8]
- assert_series_equal(result, expected)
+ s = Series(np.arange(len(dti)), index=dti)
+ s[48] = -1
+ assert s[48] == -1
+ s['1/2/2009'] = -2
+ assert s[48] == -2
+ s['1/2/2009':'2009-06-05'] = -3
+ assert (s[48:54] == -3).all()
- result = ts[ts.index[4]]
- expected = ts[4]
- assert result == expected
- result = ts[ts.index[4:8]]
- expected = ts[4:8]
- assert_series_equal(result, expected)
+def test_dti_snap():
+ dti = DatetimeIndex(['1/1/2002', '1/2/2002', '1/3/2002', '1/4/2002',
+ '1/5/2002', '1/6/2002', '1/7/2002'], freq='D')
- result = ts.copy()
- result[ts.index[4:8]] = 0
- result[4:8] = ts[4:8]
- assert_series_equal(result, ts)
+ res = dti.snap(freq='W-MON')
+ exp = date_range('12/31/2001', '1/7/2002', freq='w-mon')
+ exp = exp.repeat([3, 4])
+ assert (res == exp).all()
- # also test partial date slicing
- result = ts["1990-01-02"]
- expected = ts[24:48]
- assert_series_equal(result, expected)
+ res = dti.snap(freq='B')
- result = ts.copy()
- result["1990-01-02"] = 0
- result["1990-01-02"] = ts[24:48]
- assert_series_equal(result, ts)
+ exp = date_range('1/1/2002', '1/7/2002', freq='b')
+ exp = exp.repeat([1, 1, 1, 2, 2])
+ assert (res == exp).all()
- def test_getitem_setitem_periodindex(self):
- from pandas import period_range
- N = 50
- rng = period_range('1/1/1990', periods=N, freq='H')
- ts = Series(np.random.randn(N), index=rng)
+def test_dti_reset_index_round_trip():
+ dti = DatetimeIndex(start='1/1/2001', end='6/1/2001', freq='D')
+ d1 = DataFrame({'v': np.random.rand(len(dti))}, index=dti)
+ d2 = d1.reset_index()
+ assert d2.dtypes[0] == np.dtype('M8[ns]')
+ d3 = d2.set_index('index')
+ assert_frame_equal(d1, d3, check_names=False)
- result = ts["1990-01-01 04"]
- expected = ts[4]
- assert result == expected
+ # #2329
+ stamp = datetime(2012, 11, 22)
+ df = DataFrame([[stamp, 12.1]], columns=['Date', 'Value'])
+ df = df.set_index('Date')
- result = ts.copy()
- result["1990-01-01 04"] = 0
- result["1990-01-01 04"] = ts[4]
- assert_series_equal(result, ts)
+ assert df.index[0] == stamp
+ assert df.reset_index()['Date'][0] == stamp
- result = ts["1990-01-01 04":"1990-01-01 07"]
- expected = ts[4:8]
- assert_series_equal(result, expected)
- result = ts.copy()
- result["1990-01-01 04":"1990-01-01 07"] = 0
- result["1990-01-01 04":"1990-01-01 07"] = ts[4:8]
- assert_series_equal(result, ts)
+def test_series_set_value():
+ # #1561
- lb = "1990-01-01 04"
- rb = "1990-01-01 07"
- result = ts[(ts.index >= lb) & (ts.index <= rb)]
- expected = ts[4:8]
- assert_series_equal(result, expected)
+ dates = [datetime(2001, 1, 1), datetime(2001, 1, 2)]
+ index = DatetimeIndex(dates)
- # GH 2782
- result = ts[ts.index[4]]
- expected = ts[4]
- assert result == expected
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ s = Series().set_value(dates[0], 1.)
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ s2 = s.set_value(dates[1], np.nan)
- result = ts[ts.index[4:8]]
- expected = ts[4:8]
- assert_series_equal(result, expected)
+ exp = Series([1., np.nan], index=index)
- result = ts.copy()
- result[ts.index[4:8]] = 0
- result[4:8] = ts[4:8]
- assert_series_equal(result, ts)
+ assert_series_equal(s2, exp)
- def test_getitem_median_slice_bug(self):
- index = date_range('20090415', '20090519', freq='2B')
- s = Series(np.random.randn(13), index=index)
+ # s = Series(index[:1], index[:1])
+ # s2 = s.set_value(dates[1], index[1])
+ # assert s2.values.dtype == 'M8[ns]'
- indexer = [slice(6, 7, None)]
- result = s[indexer]
- expected = s[indexer[0]]
- assert_series_equal(result, expected)
- def test_datetime_indexing(self):
- from pandas import date_range
-
- index = date_range('1/1/2000', '1/7/2000')
- index = index.repeat(3)
-
- s = Series(len(index), index=index)
- stamp = Timestamp('1/8/2000')
-
- pytest.raises(KeyError, s.__getitem__, stamp)
- s[stamp] = 0
- assert s[stamp] == 0
-
- # not monotonic
- s = Series(len(index), index=index)
- s = s[::-1]
-
- pytest.raises(KeyError, s.__getitem__, stamp)
- s[stamp] = 0
- assert s[stamp] == 0
-
-
-class TestTimeSeriesDuplicates(object):
-
- def setup_method(self, method):
- dates = [datetime(2000, 1, 2), datetime(2000, 1, 2),
- datetime(2000, 1, 2), datetime(2000, 1, 3),
- datetime(2000, 1, 3), datetime(2000, 1, 3),
- datetime(2000, 1, 4), datetime(2000, 1, 4),
- datetime(2000, 1, 4), datetime(2000, 1, 5)]
-
- self.dups = Series(np.random.randn(len(dates)), index=dates)
-
- def test_constructor(self):
- assert isinstance(self.dups, Series)
- assert isinstance(self.dups.index, DatetimeIndex)
-
- def test_is_unique_monotonic(self):
- assert not self.dups.index.is_unique
-
- def test_index_unique(self):
- uniques = self.dups.index.unique()
- expected = DatetimeIndex([datetime(2000, 1, 2), datetime(2000, 1, 3),
- datetime(2000, 1, 4), datetime(2000, 1, 5)])
- assert uniques.dtype == 'M8[ns]' # sanity
- tm.assert_index_equal(uniques, expected)
- assert self.dups.index.nunique() == 4
-
- # #2563
- assert isinstance(uniques, DatetimeIndex)
-
- dups_local = self.dups.index.tz_localize('US/Eastern')
- dups_local.name = 'foo'
- result = dups_local.unique()
- expected = DatetimeIndex(expected, name='foo')
- expected = expected.tz_localize('US/Eastern')
- assert result.tz is not None
- assert result.name == 'foo'
- tm.assert_index_equal(result, expected)
-
- # NaT, note this is excluded
- arr = [1370745748 + t for t in range(20)] + [tslib.iNaT]
- idx = DatetimeIndex(arr * 3)
- tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
- assert idx.nunique() == 20
- assert idx.nunique(dropna=False) == 21
-
- arr = [Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t)
- for t in range(20)] + [NaT]
- idx = DatetimeIndex(arr * 3)
- tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
- assert idx.nunique() == 20
- assert idx.nunique(dropna=False) == 21
-
- def test_index_dupes_contains(self):
- d = datetime(2011, 12, 5, 20, 30)
- ix = DatetimeIndex([d, d])
- assert d in ix
-
- def test_duplicate_dates_indexing(self):
- ts = self.dups
-
- uniques = ts.index.unique()
- for date in uniques:
- result = ts[date]
-
- mask = ts.index == date
- total = (ts.index == date).sum()
- expected = ts[mask]
- if total > 1:
- assert_series_equal(result, expected)
- else:
- assert_almost_equal(result, expected[0])
-
- cp = ts.copy()
- cp[date] = 0
- expected = Series(np.where(mask, 0, ts), index=ts.index)
- assert_series_equal(cp, expected)
-
- pytest.raises(KeyError, ts.__getitem__, datetime(2000, 1, 6))
-
- # new index
- ts[datetime(2000, 1, 6)] = 0
- assert ts[datetime(2000, 1, 6)] == 0
-
- def test_range_slice(self):
- idx = DatetimeIndex(['1/1/2000', '1/2/2000', '1/2/2000', '1/3/2000',
- '1/4/2000'])
-
- ts = Series(np.random.randn(len(idx)), index=idx)
-
- result = ts['1/2/2000':]
- expected = ts[1:]
- assert_series_equal(result, expected)
+@pytest.mark.slow
+def test_slice_locs_indexerror():
+ times = [datetime(2000, 1, 1) + timedelta(minutes=i * 10)
+ for i in range(100000)]
+ s = Series(lrange(100000), times)
+ s.loc[datetime(1900, 1, 1):datetime(2100, 1, 1)]
- result = ts['1/2/2000':'1/3/2000']
- expected = ts[1:4]
- assert_series_equal(result, expected)
- def test_groupby_average_dup_values(self):
- result = self.dups.groupby(level=0).mean()
- expected = self.dups.groupby(self.dups.index).mean()
- assert_series_equal(result, expected)
+def test_slicing_datetimes():
+ # GH 7523
+
+ # unique
+ df = DataFrame(np.arange(4., dtype='float64'),
+ index=[datetime(2001, 1, i, 10, 00)
+ for i in [1, 2, 3, 4]])
+ result = df.loc[datetime(2001, 1, 1, 10):]
+ assert_frame_equal(result, df)
+ result = df.loc[:datetime(2001, 1, 4, 10)]
+ assert_frame_equal(result, df)
+ result = df.loc[datetime(2001, 1, 1, 10):datetime(2001, 1, 4, 10)]
+ assert_frame_equal(result, df)
+
+ result = df.loc[datetime(2001, 1, 1, 11):]
+ expected = df.iloc[1:]
+ assert_frame_equal(result, expected)
+ result = df.loc['20010101 11':]
+ assert_frame_equal(result, expected)
+
+ # duplicates
+ df = pd.DataFrame(np.arange(5., dtype='float64'),
+ index=[datetime(2001, 1, i, 10, 00)
+ for i in [1, 2, 2, 3, 4]])
+
+ result = df.loc[datetime(2001, 1, 1, 10):]
+ assert_frame_equal(result, df)
+ result = df.loc[:datetime(2001, 1, 4, 10)]
+ assert_frame_equal(result, df)
+ result = df.loc[datetime(2001, 1, 1, 10):datetime(2001, 1, 4, 10)]
+ assert_frame_equal(result, df)
+
+ result = df.loc[datetime(2001, 1, 1, 11):]
+ expected = df.iloc[1:]
+ assert_frame_equal(result, expected)
+ result = df.loc['20010101 11':]
+ assert_frame_equal(result, expected)
+
+
+def test_frame_datetime64_duplicated():
+ dates = date_range('2010-07-01', end='2010-08-05')
+
+ tst = DataFrame({'symbol': 'AAA', 'date': dates})
+ result = tst.duplicated(['date', 'symbol'])
+ assert (-result).all()
+
+ tst = DataFrame({'date': dates})
+ result = tst.duplicated()
+ assert (-result).all()
+
+
+def test_getitem_setitem_datetime_tz_pytz():
+ from pytz import timezone as tz
+ from pandas import date_range
+
+ N = 50
+ # testing with timezone, GH #2785
+ rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
+ ts = Series(np.random.randn(N), index=rng)
+
+ # also test Timestamp tz handling, GH #2789
+ result = ts.copy()
+ result["1990-01-01 09:00:00+00:00"] = 0
+ result["1990-01-01 09:00:00+00:00"] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts.copy()
+ result["1990-01-01 03:00:00-06:00"] = 0
+ result["1990-01-01 03:00:00-06:00"] = ts[4]
+ assert_series_equal(result, ts)
+
+ # repeat with datetimes
+ result = ts.copy()
+ result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0
+ result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts.copy()
+
+ # comparison dates with datetime MUST be localized!
+ date = tz('US/Central').localize(datetime(1990, 1, 1, 3))
+ result[date] = 0
+ result[date] = ts[4]
+ assert_series_equal(result, ts)
+
+
+def test_getitem_setitem_datetime_tz_dateutil():
+ from dateutil.tz import tzutc
+ from pandas._libs.tslibs.timezones import dateutil_gettz as gettz
+
+ tz = lambda x: tzutc() if x == 'UTC' else gettz(
+ x) # handle special case for utc in dateutil
+
+ from pandas import date_range
+
+ N = 50
+
+ # testing with timezone, GH #2785
+ rng = date_range('1/1/1990', periods=N, freq='H',
+ tz='America/New_York')
+ ts = Series(np.random.randn(N), index=rng)
+
+ # also test Timestamp tz handling, GH #2789
+ result = ts.copy()
+ result["1990-01-01 09:00:00+00:00"] = 0
+ result["1990-01-01 09:00:00+00:00"] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts.copy()
+ result["1990-01-01 03:00:00-06:00"] = 0
+ result["1990-01-01 03:00:00-06:00"] = ts[4]
+ assert_series_equal(result, ts)
+
+ # repeat with datetimes
+ result = ts.copy()
+ result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0
+ result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts.copy()
+ result[datetime(1990, 1, 1, 3, tzinfo=tz('America/Chicago'))] = 0
+ result[datetime(1990, 1, 1, 3, tzinfo=tz('America/Chicago'))] = ts[4]
+ assert_series_equal(result, ts)
+
+
+def test_getitem_setitem_datetimeindex():
+ N = 50
+ # testing with timezone, GH #2785
+ rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
+ ts = Series(np.random.randn(N), index=rng)
+
+ result = ts["1990-01-01 04:00:00"]
+ expected = ts[4]
+ assert result == expected
+
+ result = ts.copy()
+ result["1990-01-01 04:00:00"] = 0
+ result["1990-01-01 04:00:00"] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts["1990-01-01 04:00:00":"1990-01-01 07:00:00"]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = 0
+ result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = ts[4:8]
+ assert_series_equal(result, ts)
+
+ lb = "1990-01-01 04:00:00"
+ rb = "1990-01-01 07:00:00"
+ # GH#18435 strings get a pass from tzawareness compat
+ result = ts[(ts.index >= lb) & (ts.index <= rb)]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ lb = "1990-01-01 04:00:00-0500"
+ rb = "1990-01-01 07:00:00-0500"
+ result = ts[(ts.index >= lb) & (ts.index <= rb)]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ # repeat all the above with naive datetimes
+ result = ts[datetime(1990, 1, 1, 4)]
+ expected = ts[4]
+ assert result == expected
+
+ result = ts.copy()
+ result[datetime(1990, 1, 1, 4)] = 0
+ result[datetime(1990, 1, 1, 4)] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = 0
+ result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = ts[4:8]
+ assert_series_equal(result, ts)
+
+ lb = datetime(1990, 1, 1, 4)
+ rb = datetime(1990, 1, 1, 7)
+ with pytest.raises(TypeError):
+ # tznaive vs tzaware comparison is invalid
+ # see GH#18376, GH#18162
+ ts[(ts.index >= lb) & (ts.index <= rb)]
+
+ lb = pd.Timestamp(datetime(1990, 1, 1, 4)).tz_localize(rng.tzinfo)
+ rb = pd.Timestamp(datetime(1990, 1, 1, 7)).tz_localize(rng.tzinfo)
+ result = ts[(ts.index >= lb) & (ts.index <= rb)]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts[ts.index[4]]
+ expected = ts[4]
+ assert result == expected
+
+ result = ts[ts.index[4:8]]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result[ts.index[4:8]] = 0
+ result[4:8] = ts[4:8]
+ assert_series_equal(result, ts)
+
+ # also test partial date slicing
+ result = ts["1990-01-02"]
+ expected = ts[24:48]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result["1990-01-02"] = 0
+ result["1990-01-02"] = ts[24:48]
+ assert_series_equal(result, ts)
+
+
+def test_getitem_setitem_periodindex():
+ from pandas import period_range
+
+ N = 50
+ rng = period_range('1/1/1990', periods=N, freq='H')
+ ts = Series(np.random.randn(N), index=rng)
+
+ result = ts["1990-01-01 04"]
+ expected = ts[4]
+ assert result == expected
+
+ result = ts.copy()
+ result["1990-01-01 04"] = 0
+ result["1990-01-01 04"] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts["1990-01-01 04":"1990-01-01 07"]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result["1990-01-01 04":"1990-01-01 07"] = 0
+ result["1990-01-01 04":"1990-01-01 07"] = ts[4:8]
+ assert_series_equal(result, ts)
+
+ lb = "1990-01-01 04"
+ rb = "1990-01-01 07"
+ result = ts[(ts.index >= lb) & (ts.index <= rb)]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ # GH 2782
+ result = ts[ts.index[4]]
+ expected = ts[4]
+ assert result == expected
+
+ result = ts[ts.index[4:8]]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result[ts.index[4:8]] = 0
+ result[4:8] = ts[4:8]
+ assert_series_equal(result, ts)
+
+
+def test_getitem_median_slice_bug():
+ index = date_range('20090415', '20090519', freq='2B')
+ s = Series(np.random.randn(13), index=index)
+
+ indexer = [slice(6, 7, None)]
+ result = s[indexer]
+ expected = s[indexer[0]]
+ assert_series_equal(result, expected)
- def test_indexing_over_size_cutoff(self):
- import datetime
- # #1821
-
- old_cutoff = _index._SIZE_CUTOFF
- try:
- _index._SIZE_CUTOFF = 1000
-
- # create large list of non periodic datetime
- dates = []
- sec = datetime.timedelta(seconds=1)
- half_sec = datetime.timedelta(microseconds=500000)
- d = datetime.datetime(2011, 12, 5, 20, 30)
- n = 1100
- for i in range(n):
- dates.append(d)
- dates.append(d + sec)
- dates.append(d + sec + half_sec)
- dates.append(d + sec + sec + half_sec)
- d += 3 * sec
-
- # duplicate some values in the list
- duplicate_positions = np.random.randint(0, len(dates) - 1, 20)
- for p in duplicate_positions:
- dates[p + 1] = dates[p]
-
- df = DataFrame(np.random.randn(len(dates), 4),
- index=dates,
- columns=list('ABCD'))
-
- pos = n * 3
- timestamp = df.index[pos]
- assert timestamp in df.index
-
- # it works!
- df.loc[timestamp]
- assert len(df.loc[[timestamp]]) > 0
- finally:
- _index._SIZE_CUTOFF = old_cutoff
-
- def test_indexing_unordered(self):
- # GH 2437
- rng = date_range(start='2011-01-01', end='2011-01-15')
- ts = Series(np.random.rand(len(rng)), index=rng)
- ts2 = pd.concat([ts[0:4], ts[-4:], ts[4:-4]])
-
- for t in ts.index:
- # TODO: unused?
- s = str(t) # noqa
-
- expected = ts[t]
- result = ts2[t]
- assert expected == result
-
- # GH 3448 (ranges)
- def compare(slobj):
- result = ts2[slobj].copy()
- result = result.sort_index()
- expected = ts[slobj]
- assert_series_equal(result, expected)
- compare(slice('2011-01-01', '2011-01-15'))
- compare(slice('2010-12-30', '2011-01-15'))
- compare(slice('2011-01-01', '2011-01-16'))
+def test_datetime_indexing():
+ from pandas import date_range
- # partial ranges
- compare(slice('2011-01-01', '2011-01-6'))
- compare(slice('2011-01-06', '2011-01-8'))
- compare(slice('2011-01-06', '2011-01-12'))
+ index = date_range('1/1/2000', '1/7/2000')
+ index = index.repeat(3)
- # single values
- result = ts2['2011'].sort_index()
- expected = ts['2011']
+ s = Series(len(index), index=index)
+ stamp = Timestamp('1/8/2000')
+
+ pytest.raises(KeyError, s.__getitem__, stamp)
+ s[stamp] = 0
+ assert s[stamp] == 0
+
+ # not monotonic
+ s = Series(len(index), index=index)
+ s = s[::-1]
+
+ pytest.raises(KeyError, s.__getitem__, stamp)
+ s[stamp] = 0
+ assert s[stamp] == 0
+
+
+"""
+test duplicates in time series
+"""
+
+
+@pytest.fixture(scope='module')
+def dups():
+ dates = [datetime(2000, 1, 2), datetime(2000, 1, 2),
+ datetime(2000, 1, 2), datetime(2000, 1, 3),
+ datetime(2000, 1, 3), datetime(2000, 1, 3),
+ datetime(2000, 1, 4), datetime(2000, 1, 4),
+ datetime(2000, 1, 4), datetime(2000, 1, 5)]
+
+ return Series(np.random.randn(len(dates)), index=dates)
+
+
+def test_constructor(dups):
+ assert isinstance(dups, Series)
+ assert isinstance(dups.index, DatetimeIndex)
+
+
+def test_is_unique_monotonic(dups):
+ assert not dups.index.is_unique
+
+
+def test_index_unique(dups):
+ uniques = dups.index.unique()
+ expected = DatetimeIndex([datetime(2000, 1, 2), datetime(2000, 1, 3),
+ datetime(2000, 1, 4), datetime(2000, 1, 5)])
+ assert uniques.dtype == 'M8[ns]' # sanity
+ tm.assert_index_equal(uniques, expected)
+ assert dups.index.nunique() == 4
+
+ # #2563
+ assert isinstance(uniques, DatetimeIndex)
+
+ dups_local = dups.index.tz_localize('US/Eastern')
+ dups_local.name = 'foo'
+ result = dups_local.unique()
+ expected = DatetimeIndex(expected, name='foo')
+ expected = expected.tz_localize('US/Eastern')
+ assert result.tz is not None
+ assert result.name == 'foo'
+ tm.assert_index_equal(result, expected)
+
+ # NaT, note this is excluded
+ arr = [1370745748 + t for t in range(20)] + [tslib.iNaT]
+ idx = DatetimeIndex(arr * 3)
+ tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
+ assert idx.nunique() == 20
+ assert idx.nunique(dropna=False) == 21
+
+ arr = [Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t)
+ for t in range(20)] + [NaT]
+ idx = DatetimeIndex(arr * 3)
+ tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
+ assert idx.nunique() == 20
+ assert idx.nunique(dropna=False) == 21
+
+
+def test_index_dupes_contains():
+ d = datetime(2011, 12, 5, 20, 30)
+ ix = DatetimeIndex([d, d])
+ assert d in ix
+
+
+def test_duplicate_dates_indexing(dups):
+ ts = dups
+
+ uniques = ts.index.unique()
+ for date in uniques:
+ result = ts[date]
+
+ mask = ts.index == date
+ total = (ts.index == date).sum()
+ expected = ts[mask]
+ if total > 1:
+ assert_series_equal(result, expected)
+ else:
+ assert_almost_equal(result, expected[0])
+
+ cp = ts.copy()
+ cp[date] = 0
+ expected = Series(np.where(mask, 0, ts), index=ts.index)
+ assert_series_equal(cp, expected)
+
+ pytest.raises(KeyError, ts.__getitem__, datetime(2000, 1, 6))
+
+ # new index
+ ts[datetime(2000, 1, 6)] = 0
+ assert ts[datetime(2000, 1, 6)] == 0
+
+
+def test_range_slice():
+ idx = DatetimeIndex(['1/1/2000', '1/2/2000', '1/2/2000', '1/3/2000',
+ '1/4/2000'])
+
+ ts = Series(np.random.randn(len(idx)), index=idx)
+
+ result = ts['1/2/2000':]
+ expected = ts[1:]
+ assert_series_equal(result, expected)
+
+ result = ts['1/2/2000':'1/3/2000']
+ expected = ts[1:4]
+ assert_series_equal(result, expected)
+
+
+def test_groupby_average_dup_values(dups):
+ result = dups.groupby(level=0).mean()
+ expected = dups.groupby(dups.index).mean()
+ assert_series_equal(result, expected)
+
+
+def test_indexing_over_size_cutoff():
+ import datetime
+ # #1821
+
+ old_cutoff = _index._SIZE_CUTOFF
+ try:
+ _index._SIZE_CUTOFF = 1000
+
+ # create large list of non periodic datetime
+ dates = []
+ sec = datetime.timedelta(seconds=1)
+ half_sec = datetime.timedelta(microseconds=500000)
+ d = datetime.datetime(2011, 12, 5, 20, 30)
+ n = 1100
+ for i in range(n):
+ dates.append(d)
+ dates.append(d + sec)
+ dates.append(d + sec + half_sec)
+ dates.append(d + sec + sec + half_sec)
+ d += 3 * sec
+
+ # duplicate some values in the list
+ duplicate_positions = np.random.randint(0, len(dates) - 1, 20)
+ for p in duplicate_positions:
+ dates[p + 1] = dates[p]
+
+ df = DataFrame(np.random.randn(len(dates), 4),
+ index=dates,
+ columns=list('ABCD'))
+
+ pos = n * 3
+ timestamp = df.index[pos]
+ assert timestamp in df.index
+
+ # it works!
+ df.loc[timestamp]
+ assert len(df.loc[[timestamp]]) > 0
+ finally:
+ _index._SIZE_CUTOFF = old_cutoff
+
+
+def test_indexing_unordered():
+ # GH 2437
+ rng = date_range(start='2011-01-01', end='2011-01-15')
+ ts = Series(np.random.rand(len(rng)), index=rng)
+ ts2 = pd.concat([ts[0:4], ts[-4:], ts[4:-4]])
+
+ for t in ts.index:
+ # TODO: unused?
+ s = str(t) # noqa
+
+ expected = ts[t]
+ result = ts2[t]
+ assert expected == result
+
+ # GH 3448 (ranges)
+ def compare(slobj):
+ result = ts2[slobj].copy()
+ result = result.sort_index()
+ expected = ts[slobj]
assert_series_equal(result, expected)
- # diff freq
- rng = date_range(datetime(2005, 1, 1), periods=20, freq='M')
- ts = Series(np.arange(len(rng)), index=rng)
- ts = ts.take(np.random.permutation(20))
+ compare(slice('2011-01-01', '2011-01-15'))
+ compare(slice('2010-12-30', '2011-01-15'))
+ compare(slice('2011-01-01', '2011-01-16'))
+
+ # partial ranges
+ compare(slice('2011-01-01', '2011-01-6'))
+ compare(slice('2011-01-06', '2011-01-8'))
+ compare(slice('2011-01-06', '2011-01-12'))
+
+ # single values
+ result = ts2['2011'].sort_index()
+ expected = ts['2011']
+ assert_series_equal(result, expected)
+
+ # diff freq
+ rng = date_range(datetime(2005, 1, 1), periods=20, freq='M')
+ ts = Series(np.arange(len(rng)), index=rng)
+ ts = ts.take(np.random.permutation(20))
+
+ result = ts['2005']
+ for t in result.index:
+ assert t.year == 2005
- result = ts['2005']
- for t in result.index:
- assert t.year == 2005
- def test_indexing(self):
+def test_indexing():
+ idx = date_range("2001-1-1", periods=20, freq='M')
+ ts = Series(np.random.rand(len(idx)), index=idx)
- idx = date_range("2001-1-1", periods=20, freq='M')
- ts = Series(np.random.rand(len(idx)), index=idx)
+ # getting
- # getting
+ # GH 3070, make sure semantics work on Series/Frame
+ expected = ts['2001']
+ expected.name = 'A'
- # GH 3070, make sure semantics work on Series/Frame
- expected = ts['2001']
- expected.name = 'A'
+ df = DataFrame(dict(A=ts))
+ result = df['2001']['A']
+ assert_series_equal(expected, result)
- df = DataFrame(dict(A=ts))
- result = df['2001']['A']
- assert_series_equal(expected, result)
+ # setting
+ ts['2001'] = 1
+ expected = ts['2001']
+ expected.name = 'A'
- # setting
- ts['2001'] = 1
- expected = ts['2001']
- expected.name = 'A'
+ df.loc['2001', 'A'] = 1
- df.loc['2001', 'A'] = 1
+ result = df['2001']['A']
+ assert_series_equal(expected, result)
- result = df['2001']['A']
- assert_series_equal(expected, result)
+ # GH3546 (not including times on the last day)
+ idx = date_range(start='2013-05-31 00:00', end='2013-05-31 23:00',
+ freq='H')
+ ts = Series(lrange(len(idx)), index=idx)
+ expected = ts['2013-05']
+ assert_series_equal(expected, ts)
- # GH3546 (not including times on the last day)
- idx = date_range(start='2013-05-31 00:00', end='2013-05-31 23:00',
- freq='H')
- ts = Series(lrange(len(idx)), index=idx)
- expected = ts['2013-05']
- assert_series_equal(expected, ts)
+ idx = date_range(start='2013-05-31 00:00', end='2013-05-31 23:59',
+ freq='S')
+ ts = Series(lrange(len(idx)), index=idx)
+ expected = ts['2013-05']
+ assert_series_equal(expected, ts)
- idx = date_range(start='2013-05-31 00:00', end='2013-05-31 23:59',
- freq='S')
- ts = Series(lrange(len(idx)), index=idx)
- expected = ts['2013-05']
- assert_series_equal(expected, ts)
+ idx = [Timestamp('2013-05-31 00:00'),
+ Timestamp(datetime(2013, 5, 31, 23, 59, 59, 999999))]
+ ts = Series(lrange(len(idx)), index=idx)
+ expected = ts['2013']
+ assert_series_equal(expected, ts)
- idx = [Timestamp('2013-05-31 00:00'),
- Timestamp(datetime(2013, 5, 31, 23, 59, 59, 999999))]
- ts = Series(lrange(len(idx)), index=idx)
- expected = ts['2013']
- assert_series_equal(expected, ts)
+ # GH14826, indexing with a seconds resolution string / datetime object
+ df = DataFrame(np.random.rand(5, 5),
+ columns=['open', 'high', 'low', 'close', 'volume'],
+ index=date_range('2012-01-02 18:01:00',
+ periods=5, tz='US/Central', freq='s'))
+ expected = df.loc[[df.index[2]]]
- # GH14826, indexing with a seconds resolution string / datetime object
- df = DataFrame(np.random.rand(5, 5),
- columns=['open', 'high', 'low', 'close', 'volume'],
- index=date_range('2012-01-02 18:01:00',
- periods=5, tz='US/Central', freq='s'))
- expected = df.loc[[df.index[2]]]
+ # this is a single date, so will raise
+ pytest.raises(KeyError, df.__getitem__, '2012-01-02 18:01:02', )
+ pytest.raises(KeyError, df.__getitem__, df.index[2], )
- # this is a single date, so will raise
- pytest.raises(KeyError, df.__getitem__, '2012-01-02 18:01:02', )
- pytest.raises(KeyError, df.__getitem__, df.index[2], )
+"""
+test NaT support
+"""
-class TestNatIndexing(object):
- def setup_method(self, method):
- self.series = Series(date_range('1/1/2000', periods=10))
+def test_set_none_nan():
+ series = Series(date_range('1/1/2000', periods=10))
+ series[3] = None
+ assert series[3] is NaT
- # ---------------------------------------------------------------------
- # NaT support
+ series[3:5] = None
+ assert series[4] is NaT
- def test_set_none_nan(self):
- self.series[3] = None
- assert self.series[3] is NaT
+ series[5] = np.nan
+ assert series[5] is NaT
- self.series[3:5] = None
- assert self.series[4] is NaT
+ series[5:7] = np.nan
+ assert series[6] is NaT
- self.series[5] = np.nan
- assert self.series[5] is NaT
- self.series[5:7] = np.nan
- assert self.series[6] is NaT
+def test_nat_operations():
+ # GH 8617
+ s = Series([0, pd.NaT], dtype='m8[ns]')
+ exp = s[0]
+ assert s.median() == exp
+ assert s.min() == exp
+ assert s.max() == exp
- def test_nat_operations(self):
- # GH 8617
- s = Series([0, pd.NaT], dtype='m8[ns]')
- exp = s[0]
- assert s.median() == exp
- assert s.min() == exp
- assert s.max() == exp
- def test_round_nat(self):
- # GH14940
- s = Series([pd.NaT])
- expected = Series(pd.NaT)
- for method in ["round", "floor", "ceil"]:
- round_method = getattr(s.dt, method)
- for freq in ["s", "5s", "min", "5min", "h", "5h"]:
- assert_series_equal(round_method(freq), expected)
+def test_round_nat():
+ # GH14940
+ s = Series([pd.NaT])
+ expected = Series(pd.NaT)
+ for method in ["round", "floor", "ceil"]:
+ round_method = getattr(s.dt, method)
+ for freq in ["s", "5s", "min", "5min", "h", "5h"]:
+ assert_series_equal(round_method(freq), expected)
diff --git a/pandas/tests/series/indexing/test_iloc.py b/pandas/tests/series/indexing/test_iloc.py
index c86b50a46a665..5908a7708c426 100644
--- a/pandas/tests/series/indexing/test_iloc.py
+++ b/pandas/tests/series/indexing/test_iloc.py
@@ -11,33 +11,30 @@
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-from pandas.tests.series.common import TestData
+def test_iloc():
+ s = Series(np.random.randn(10), index=lrange(0, 20, 2))
-class TestIloc(TestData):
+ for i in range(len(s)):
+ result = s.iloc[i]
+ exp = s[s.index[i]]
+ assert_almost_equal(result, exp)
- def test_iloc(self):
- s = Series(np.random.randn(10), index=lrange(0, 20, 2))
+ # pass a slice
+ result = s.iloc[slice(1, 3)]
+ expected = s.loc[2:4]
+ assert_series_equal(result, expected)
- for i in range(len(s)):
- result = s.iloc[i]
- exp = s[s.index[i]]
- assert_almost_equal(result, exp)
+ # test slice is a view
+ result[:] = 0
+ assert (s[1:3] == 0).all()
- # pass a slice
- result = s.iloc[slice(1, 3)]
- expected = s.loc[2:4]
- assert_series_equal(result, expected)
+ # list of integers
+ result = s.iloc[[0, 2, 3, 4, 5]]
+ expected = s.reindex(s.index[[0, 2, 3, 4, 5]])
+ assert_series_equal(result, expected)
- # test slice is a view
- result[:] = 0
- assert (s[1:3] == 0).all()
- # list of integers
- result = s.iloc[[0, 2, 3, 4, 5]]
- expected = s.reindex(s.index[[0, 2, 3, 4, 5]])
- assert_series_equal(result, expected)
-
- def test_iloc_nonunique(self):
- s = Series([0, 1, 2], index=[0, 1, 0])
- assert s.iloc[2] == 2
+def test_iloc_nonunique():
+ s = Series([0, 1, 2], index=[0, 1, 0])
+ assert s.iloc[2] == 2
diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
index 8b0d5121486ec..9005ac8e97929 100644
--- a/pandas/tests/series/indexing/test_indexing.py
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -20,442 +20,427 @@
from pandas.util.testing import (assert_series_equal)
import pandas.util.testing as tm
-from pandas.tests.series.common import TestData
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-class TestMisc(TestData):
+def test_getitem_setitem_ellipsis():
+ s = Series(np.random.randn(10))
- def test_getitem_setitem_ellipsis(self):
- s = Series(np.random.randn(10))
+ np.fix(s)
- np.fix(s)
+ result = s[...]
+ assert_series_equal(result, s)
- result = s[...]
- assert_series_equal(result, s)
+ s[...] = 5
+ assert (result == 5).all()
- s[...] = 5
- assert (result == 5).all()
- def test_pop(self):
- # GH 6600
- df = DataFrame({'A': 0, 'B': np.arange(5, dtype='int64'), 'C': 0, })
- k = df.iloc[4]
+def test_pop():
+ # GH 6600
+ df = DataFrame({'A': 0, 'B': np.arange(5, dtype='int64'), 'C': 0, })
+ k = df.iloc[4]
- result = k.pop('B')
- assert result == 4
+ result = k.pop('B')
+ assert result == 4
- expected = Series([0, 0], index=['A', 'C'], name=4)
- assert_series_equal(k, expected)
+ expected = Series([0, 0], index=['A', 'C'], name=4)
+ assert_series_equal(k, expected)
- def test_getitem_get(self):
- idx1 = self.series.index[5]
- idx2 = self.objSeries.index[5]
- assert self.series[idx1] == self.series.get(idx1)
- assert self.objSeries[idx2] == self.objSeries.get(idx2)
- assert self.series[idx1] == self.series[5]
- assert self.objSeries[idx2] == self.objSeries[5]
- assert self.series.get(-1) == self.series.get(self.series.index[-1])
- assert self.series[5] == self.series.get(self.series.index[5])
- # missing
- d = self.ts.index[0] - BDay()
- pytest.raises(KeyError, self.ts.__getitem__, d)
- # None
- # GH 5652
- for s in [Series(), Series(index=list('abc'))]:
- result = s.get(None)
- assert result is None
- def test_getitem_int64(self):
- idx = np.int64(5)
- assert self.ts[idx] == self.ts[5]
- def test_getitem_fancy(self):
- slice1 = self.series[[1, 2, 3]]
- slice2 = self.objSeries[[1, 2, 3]]
- assert self.series.index[2] == slice1.index[1]
- assert self.objSeries.index[2] == slice2.index[1]
- assert self.series[2] == slice1[1]
- assert self.objSeries[2] == slice2[1]
+def test_getitem_get(test_data):
+ test_series = test_data.series
+ test_obj_series = test_data.objSeries
+ idx1 = test_series.index[5]
+ idx2 = test_obj_series.index[5]
+ assert test_series[idx1] == test_series.get(idx1)
+ assert test_obj_series[idx2] == test_obj_series.get(idx2)
+ assert test_series[idx1] == test_series[5]
+ assert test_obj_series[idx2] == test_obj_series[5]
+ assert test_series.get(-1) == test_series.get(test_series.index[-1])
+ assert test_series[5] == test_series.get(test_series.index[5])
+ # missing
+ d = test_data.ts.index[0] - BDay()
+ pytest.raises(KeyError, test_data.ts.__getitem__, d)
+ # None
+ # GH 5652
+ for s in [Series(), Series(index=list('abc'))]:
+ result = s.get(None)
+ assert result is None
- def test_getitem_generator(self):
- gen = (x > 0 for x in self.series)
- result = self.series[gen]
- result2 = self.series[iter(self.series > 0)]
- expected = self.series[self.series > 0]
- assert_series_equal(result, expected)
- assert_series_equal(result2, expected)
-
- def test_type_promotion(self):
- # GH12599
- s = pd.Series()
- s["a"] = pd.Timestamp("2016-01-01")
- s["b"] = 3.0
- s["c"] = "foo"
- expected = Series([pd.Timestamp("2016-01-01"), 3.0, "foo"],
- index=["a", "b", "c"])
- assert_series_equal(s, expected)
-
- @pytest.mark.parametrize(
- 'result_1, duplicate_item, expected_1',
+
+def test_getitem_int64(test_data):
+ idx = np.int64(5)
+ assert test_data.ts[idx] == test_data.ts[5]
+
+
+def test_getitem_fancy(test_data):
+ slice1 = test_data.series[[1, 2, 3]]
+ slice2 = test_data.objSeries[[1, 2, 3]]
+ assert test_data.series.index[2] == slice1.index[1]
+ assert test_data.objSeries.index[2] == slice2.index[1]
+ assert test_data.series[2] == slice1[1]
+ assert test_data.objSeries[2] == slice2[1]
+
+
+def test_getitem_generator(test_data):
+ gen = (x > 0 for x in test_data.series)
+ result = test_data.series[gen]
+ result2 = test_data.series[iter(test_data.series > 0)]
+ expected = test_data.series[test_data.series > 0]
+ assert_series_equal(result, expected)
+ assert_series_equal(result2, expected)
+
+
+def test_type_promotion():
+ # GH12599
+ s = pd.Series()
+ s["a"] = pd.Timestamp("2016-01-01")
+ s["b"] = 3.0
+ s["c"] = "foo"
+ expected = Series([pd.Timestamp("2016-01-01"), 3.0, "foo"],
+ index=["a", "b", "c"])
+ assert_series_equal(s, expected)
+
+
+@pytest.mark.parametrize(
+ 'result_1, duplicate_item, expected_1',
+ [
[
- [
- pd.Series({1: 12, 2: [1, 2, 2, 3]}), pd.Series({1: 313}),
- pd.Series({1: 12, }, dtype=object),
- ],
- [
- pd.Series({1: [1, 2, 3], 2: [1, 2, 2, 3]}),
- pd.Series({1: [1, 2, 3]}), pd.Series({1: [1, 2, 3], }),
- ],
- ])
- def test_getitem_with_duplicates_indices(
- self, result_1, duplicate_item, expected_1):
- # GH 17610
- result = result_1.append(duplicate_item)
- expected = expected_1.append(duplicate_item)
- assert_series_equal(result[1], expected)
- assert result[2] == result_1[2]
-
- def test_getitem_out_of_bounds(self):
- # don't segfault, GH #495
- pytest.raises(IndexError, self.ts.__getitem__, len(self.ts))
-
- # GH #917
- s = Series([])
- pytest.raises(IndexError, s.__getitem__, -1)
-
- def test_getitem_setitem_integers(self):
- # caused bug without test
- s = Series([1, 2, 3], ['a', 'b', 'c'])
-
- assert s.iloc[0] == s['a']
- s.iloc[0] = 5
- tm.assert_almost_equal(s['a'], 5)
-
- def test_getitem_box_float64(self):
- value = self.ts[5]
- assert isinstance(value, np.float64)
-
- def test_series_box_timestamp(self):
- rng = pd.date_range('20090415', '20090519', freq='B')
- ser = Series(rng)
-
- assert isinstance(ser[5], pd.Timestamp)
-
- rng = pd.date_range('20090415', '20090519', freq='B')
- ser = Series(rng, index=rng)
- assert isinstance(ser[5], pd.Timestamp)
-
- assert isinstance(ser.iat[5], pd.Timestamp)
-
- def test_getitem_ambiguous_keyerror(self):
- s = Series(lrange(10), index=lrange(0, 20, 2))
- pytest.raises(KeyError, s.__getitem__, 1)
- pytest.raises(KeyError, s.loc.__getitem__, 1)
-
- def test_getitem_unordered_dup(self):
- obj = Series(lrange(5), index=['c', 'a', 'a', 'b', 'b'])
- assert is_scalar(obj['c'])
- assert obj['c'] == 0
-
- def test_getitem_dups_with_missing(self):
-
- # breaks reindex, so need to use .loc internally
- # GH 4246
- s = Series([1, 2, 3, 4], ['foo', 'bar', 'foo', 'bah'])
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- expected = s.loc[['foo', 'bar', 'bah', 'bam']]
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = s[['foo', 'bar', 'bah', 'bam']]
- assert_series_equal(result, expected)
+ pd.Series({1: 12, 2: [1, 2, 2, 3]}), pd.Series({1: 313}),
+ pd.Series({1: 12, }, dtype=object),
+ ],
+ [
+ pd.Series({1: [1, 2, 3], 2: [1, 2, 2, 3]}),
+ pd.Series({1: [1, 2, 3]}), pd.Series({1: [1, 2, 3], }),
+ ],
+ ])
+def test_getitem_with_duplicates_indices(
+ result_1, duplicate_item, expected_1):
+ # GH 17610
+ result = result_1.append(duplicate_item)
+ expected = expected_1.append(duplicate_item)
+ assert_series_equal(result[1], expected)
+ assert result[2] == result_1[2]
- def test_getitem_dups(self):
- s = Series(range(5), index=['A', 'A', 'B', 'C', 'C'], dtype=np.int64)
- expected = Series([3, 4], index=['C', 'C'], dtype=np.int64)
- result = s['C']
- assert_series_equal(result, expected)
- def test_getitem_dataframe(self):
- rng = list(range(10))
- s = pd.Series(10, index=rng)
- df = pd.DataFrame(rng, index=rng)
- pytest.raises(TypeError, s.__getitem__, df > 5)
-
- def test_getitem_callable(self):
- # GH 12533
- s = pd.Series(4, index=list('ABCD'))
- result = s[lambda x: 'A']
- assert result == s.loc['A']
-
- result = s[lambda x: ['A', 'B']]
- tm.assert_series_equal(result, s.loc[['A', 'B']])
-
- result = s[lambda x: [True, False, True, True]]
- tm.assert_series_equal(result, s.iloc[[0, 2, 3]])
+def test_getitem_out_of_bounds(test_data):
+ # don't segfault, GH #495
+ pytest.raises(IndexError, test_data.ts.__getitem__, len(test_data.ts))
- def test_setitem_ambiguous_keyerror(self):
- s = Series(lrange(10), index=lrange(0, 20, 2))
-
- # equivalent of an append
- s2 = s.copy()
- s2[1] = 5
- expected = s.append(Series([5], index=[1]))
- assert_series_equal(s2, expected)
-
- s2 = s.copy()
- s2.loc[1] = 5
- expected = s.append(Series([5], index=[1]))
- assert_series_equal(s2, expected)
-
- def test_setitem_callable(self):
- # GH 12533
- s = pd.Series([1, 2, 3, 4], index=list('ABCD'))
- s[lambda x: 'A'] = -1
- tm.assert_series_equal(s, pd.Series([-1, 2, 3, 4], index=list('ABCD')))
-
- def test_setitem_other_callable(self):
- # GH 13299
- inc = lambda x: x + 1
-
- s = pd.Series([1, 2, -1, 4])
- s[s < 0] = inc
-
- expected = pd.Series([1, 2, inc, 4])
- tm.assert_series_equal(s, expected)
-
- def test_slice(self):
- numSlice = self.series[10:20]
- numSliceEnd = self.series[-10:]
- objSlice = self.objSeries[10:20]
-
- assert self.series.index[9] not in numSlice.index
- assert self.objSeries.index[9] not in objSlice.index
-
- assert len(numSlice) == len(numSlice.index)
- assert self.series[numSlice.index[0]] == numSlice[numSlice.index[0]]
-
- assert numSlice.index[1] == self.series.index[11]
- assert tm.equalContents(numSliceEnd, np.array(self.series)[-10:])
-
- # Test return view.
- sl = self.series[10:20]
- sl[:] = 0
-
- assert (self.series[10:20] == 0).all()
-
- def test_slice_can_reorder_not_uniquely_indexed(self):
- s = Series(1, index=['a', 'a', 'b', 'b', 'c'])
- s[::-1] # it works!
-
- def test_setitem(self):
- self.ts[self.ts.index[5]] = np.NaN
- self.ts[[1, 2, 17]] = np.NaN
- self.ts[6] = np.NaN
- assert np.isnan(self.ts[6])
- assert np.isnan(self.ts[2])
- self.ts[np.isnan(self.ts)] = 5
- assert not np.isnan(self.ts[2])
-
- # caught this bug when writing tests
- series = Series(tm.makeIntIndex(20).astype(float),
- index=tm.makeIntIndex(20))
-
- series[::2] = 0
- assert (series[::2] == 0).all()
-
- # set item that's not contained
- s = self.series.copy()
- s['foobar'] = 1
-
- app = Series([1], index=['foobar'], name='series')
- expected = self.series.append(app)
- assert_series_equal(s, expected)
-
- # Test for issue #10193
- key = pd.Timestamp('2012-01-01')
- series = pd.Series()
- series[key] = 47
- expected = pd.Series(47, [key])
- assert_series_equal(series, expected)
-
- series = pd.Series([], pd.DatetimeIndex([], freq='D'))
- series[key] = 47
- expected = pd.Series(47, pd.DatetimeIndex([key], freq='D'))
- assert_series_equal(series, expected)
-
- def test_setitem_dtypes(self):
-
- # change dtypes
- # GH 4463
- expected = Series([np.nan, 2, 3])
-
- s = Series([1, 2, 3])
- s.iloc[0] = np.nan
- assert_series_equal(s, expected)
-
- s = Series([1, 2, 3])
- s.loc[0] = np.nan
- assert_series_equal(s, expected)
-
- s = Series([1, 2, 3])
- s[0] = np.nan
- assert_series_equal(s, expected)
-
- s = Series([False])
- s.loc[0] = np.nan
- assert_series_equal(s, Series([np.nan]))
-
- s = Series([False, True])
- s.loc[0] = np.nan
- assert_series_equal(s, Series([np.nan, 1.0]))
-
- def test_set_value(self):
- idx = self.ts.index[10]
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- res = self.ts.set_value(idx, 0)
- assert res is self.ts
- assert self.ts[idx] == 0
-
- # equiv
- s = self.series.copy()
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- res = s.set_value('foobar', 0)
- assert res is s
- assert res.index[-1] == 'foobar'
- assert res['foobar'] == 0
-
- s = self.series.copy()
- s.loc['foobar'] = 0
- assert s.index[-1] == 'foobar'
- assert s['foobar'] == 0
-
- def test_setslice(self):
- sl = self.ts[5:20]
- assert len(sl) == len(sl.index)
- assert sl.index.is_unique
-
- def test_basic_getitem_setitem_corner(self):
- # invalid tuples, e.g. self.ts[:, None] vs. self.ts[:, 2]
- with tm.assert_raises_regex(ValueError, 'tuple-index'):
- self.ts[:, 2]
- with tm.assert_raises_regex(ValueError, 'tuple-index'):
- self.ts[:, 2] = 2
-
- # weird lists. [slice(0, 5)] will work but not two slices
- result = self.ts[[slice(None, 5)]]
- expected = self.ts[:5]
- assert_series_equal(result, expected)
+ # GH #917
+ s = Series([])
+ pytest.raises(IndexError, s.__getitem__, -1)
- # OK
- pytest.raises(Exception, self.ts.__getitem__,
- [5, slice(None, None)])
- pytest.raises(Exception, self.ts.__setitem__,
- [5, slice(None, None)], 2)
- def test_basic_getitem_with_labels(self):
- indices = self.ts.index[[5, 10, 15]]
+def test_getitem_setitem_integers():
+ # caused bug without test
+ s = Series([1, 2, 3], ['a', 'b', 'c'])
- result = self.ts[indices]
- expected = self.ts.reindex(indices)
- assert_series_equal(result, expected)
+ assert s.iloc[0] == s['a']
+ s.iloc[0] = 5
+ tm.assert_almost_equal(s['a'], 5)
- result = self.ts[indices[0]:indices[2]]
- expected = self.ts.loc[indices[0]:indices[2]]
- assert_series_equal(result, expected)
- # integer indexes, be careful
- s = Series(np.random.randn(10), index=lrange(0, 20, 2))
- inds = [0, 2, 5, 7, 8]
- arr_inds = np.array([0, 2, 5, 7, 8])
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = s[inds]
- expected = s.reindex(inds)
- assert_series_equal(result, expected)
+def test_getitem_box_float64(test_data):
+ value = test_data.ts[5]
+ assert isinstance(value, np.float64)
+
+
+def test_series_box_timestamp():
+ rng = pd.date_range('20090415', '20090519', freq='B')
+ ser = Series(rng)
+
+ assert isinstance(ser[5], pd.Timestamp)
+
+ rng = pd.date_range('20090415', '20090519', freq='B')
+ ser = Series(rng, index=rng)
+ assert isinstance(ser[5], pd.Timestamp)
+
+ assert isinstance(ser.iat[5], pd.Timestamp)
+
+
+def test_getitem_ambiguous_keyerror():
+ s = Series(lrange(10), index=lrange(0, 20, 2))
+ pytest.raises(KeyError, s.__getitem__, 1)
+ pytest.raises(KeyError, s.loc.__getitem__, 1)
+
+
+def test_getitem_unordered_dup():
+ obj = Series(lrange(5), index=['c', 'a', 'a', 'b', 'b'])
+ assert is_scalar(obj['c'])
+ assert obj['c'] == 0
+
+
+def test_getitem_dups_with_missing():
+ # breaks reindex, so need to use .loc internally
+ # GH 4246
+ s = Series([1, 2, 3, 4], ['foo', 'bar', 'foo', 'bah'])
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ expected = s.loc[['foo', 'bar', 'bah', 'bam']]
+
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ result = s[['foo', 'bar', 'bah', 'bam']]
+ assert_series_equal(result, expected)
+
+
+def test_getitem_dups():
+ s = Series(range(5), index=['A', 'A', 'B', 'C', 'C'], dtype=np.int64)
+ expected = Series([3, 4], index=['C', 'C'], dtype=np.int64)
+ result = s['C']
+ assert_series_equal(result, expected)
+
+
+def test_getitem_dataframe():
+ rng = list(range(10))
+ s = pd.Series(10, index=rng)
+ df = pd.DataFrame(rng, index=rng)
+ pytest.raises(TypeError, s.__getitem__, df > 5)
+
+
+def test_getitem_callable():
+ # GH 12533
+ s = pd.Series(4, index=list('ABCD'))
+ result = s[lambda x: 'A']
+ assert result == s.loc['A']
+
+ result = s[lambda x: ['A', 'B']]
+ tm.assert_series_equal(result, s.loc[['A', 'B']])
+
+ result = s[lambda x: [True, False, True, True]]
+ tm.assert_series_equal(result, s.iloc[[0, 2, 3]])
+
+
+def test_setitem_ambiguous_keyerror():
+ s = Series(lrange(10), index=lrange(0, 20, 2))
+
+ # equivalent of an append
+ s2 = s.copy()
+ s2[1] = 5
+ expected = s.append(Series([5], index=[1]))
+ assert_series_equal(s2, expected)
+
+ s2 = s.copy()
+ s2.loc[1] = 5
+ expected = s.append(Series([5], index=[1]))
+ assert_series_equal(s2, expected)
+
+
+def test_setitem_callable():
+ # GH 12533
+ s = pd.Series([1, 2, 3, 4], index=list('ABCD'))
+ s[lambda x: 'A'] = -1
+ tm.assert_series_equal(s, pd.Series([-1, 2, 3, 4], index=list('ABCD')))
+
+
+def test_setitem_other_callable():
+ # GH 13299
+ inc = lambda x: x + 1
+
+ s = pd.Series([1, 2, -1, 4])
+ s[s < 0] = inc
+
+ expected = pd.Series([1, 2, inc, 4])
+ tm.assert_series_equal(s, expected)
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = s[arr_inds]
- expected = s.reindex(arr_inds)
- assert_series_equal(result, expected)
- # GH12089
- # with tz for values
- s = Series(pd.date_range("2011-01-01", periods=3, tz="US/Eastern"),
- index=['a', 'b', 'c'])
- expected = Timestamp('2011-01-01', tz='US/Eastern')
- result = s.loc['a']
- assert result == expected
- result = s.iloc[0]
- assert result == expected
- result = s['a']
- assert result == expected
-
- def test_setitem_with_tz(self):
- for tz in ['US/Eastern', 'UTC', 'Asia/Tokyo']:
- orig = pd.Series(pd.date_range('2016-01-01', freq='H', periods=3,
- tz=tz))
- assert orig.dtype == 'datetime64[ns, {0}]'.format(tz)
-
- # scalar
- s = orig.copy()
- s[1] = pd.Timestamp('2011-01-01', tz=tz)
- exp = pd.Series([pd.Timestamp('2016-01-01 00:00', tz=tz),
- pd.Timestamp('2011-01-01 00:00', tz=tz),
- pd.Timestamp('2016-01-01 02:00', tz=tz)])
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.loc[1] = pd.Timestamp('2011-01-01', tz=tz)
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.iloc[1] = pd.Timestamp('2011-01-01', tz=tz)
- tm.assert_series_equal(s, exp)
-
- # vector
- vals = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
- pd.Timestamp('2012-01-01', tz=tz)], index=[1, 2])
- assert vals.dtype == 'datetime64[ns, {0}]'.format(tz)
-
- s[[1, 2]] = vals
- exp = pd.Series([pd.Timestamp('2016-01-01 00:00', tz=tz),
- pd.Timestamp('2011-01-01 00:00', tz=tz),
- pd.Timestamp('2012-01-01 00:00', tz=tz)])
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.loc[[1, 2]] = vals
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.iloc[[1, 2]] = vals
- tm.assert_series_equal(s, exp)
-
- def test_setitem_with_tz_dst(self):
- # GH XXX
- tz = 'US/Eastern'
- orig = pd.Series(pd.date_range('2016-11-06', freq='H', periods=3,
+def test_slice(test_data):
+ numSlice = test_data.series[10:20]
+ numSliceEnd = test_data.series[-10:]
+ objSlice = test_data.objSeries[10:20]
+
+ assert test_data.series.index[9] not in numSlice.index
+ assert test_data.objSeries.index[9] not in objSlice.index
+
+ assert len(numSlice) == len(numSlice.index)
+ assert test_data.series[numSlice.index[0]] == numSlice[numSlice.index[0]]
+
+ assert numSlice.index[1] == test_data.series.index[11]
+ assert tm.equalContents(numSliceEnd, np.array(test_data.series)[-10:])
+
+ # Test return view.
+ sl = test_data.series[10:20]
+ sl[:] = 0
+
+ assert (test_data.series[10:20] == 0).all()
+
+
+def test_slice_can_reorder_not_uniquely_indexed():
+ s = Series(1, index=['a', 'a', 'b', 'b', 'c'])
+ s[::-1] # it works!
+
+
+def test_setitem(test_data):
+ test_data.ts[test_data.ts.index[5]] = np.NaN
+ test_data.ts[[1, 2, 17]] = np.NaN
+ test_data.ts[6] = np.NaN
+ assert np.isnan(test_data.ts[6])
+ assert np.isnan(test_data.ts[2])
+ test_data.ts[np.isnan(test_data.ts)] = 5
+ assert not np.isnan(test_data.ts[2])
+
+ # caught this bug when writing tests
+ series = Series(tm.makeIntIndex(20).astype(float),
+ index=tm.makeIntIndex(20))
+
+ series[::2] = 0
+ assert (series[::2] == 0).all()
+
+ # set item that's not contained
+ s = test_data.series.copy()
+ s['foobar'] = 1
+
+ app = Series([1], index=['foobar'], name='series')
+ expected = test_data.series.append(app)
+ assert_series_equal(s, expected)
+
+ # Test for issue #10193
+ key = pd.Timestamp('2012-01-01')
+ series = pd.Series()
+ series[key] = 47
+ expected = pd.Series(47, [key])
+ assert_series_equal(series, expected)
+
+ series = pd.Series([], pd.DatetimeIndex([], freq='D'))
+ series[key] = 47
+ expected = pd.Series(47, pd.DatetimeIndex([key], freq='D'))
+ assert_series_equal(series, expected)
+
+
+def test_setitem_dtypes():
+ # change dtypes
+ # GH 4463
+ expected = Series([np.nan, 2, 3])
+
+ s = Series([1, 2, 3])
+ s.iloc[0] = np.nan
+ assert_series_equal(s, expected)
+
+ s = Series([1, 2, 3])
+ s.loc[0] = np.nan
+ assert_series_equal(s, expected)
+
+ s = Series([1, 2, 3])
+ s[0] = np.nan
+ assert_series_equal(s, expected)
+
+ s = Series([False])
+ s.loc[0] = np.nan
+ assert_series_equal(s, Series([np.nan]))
+
+ s = Series([False, True])
+ s.loc[0] = np.nan
+ assert_series_equal(s, Series([np.nan, 1.0]))
+
+
+def test_set_value(test_data):
+ idx = test_data.ts.index[10]
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ res = test_data.ts.set_value(idx, 0)
+ assert res is test_data.ts
+ assert test_data.ts[idx] == 0
+
+ # equiv
+ s = test_data.series.copy()
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ res = s.set_value('foobar', 0)
+ assert res is s
+ assert res.index[-1] == 'foobar'
+ assert res['foobar'] == 0
+
+ s = test_data.series.copy()
+ s.loc['foobar'] = 0
+ assert s.index[-1] == 'foobar'
+ assert s['foobar'] == 0
+
+
+def test_setslice(test_data):
+ sl = test_data.ts[5:20]
+ assert len(sl) == len(sl.index)
+ assert sl.index.is_unique
+
+
+def test_basic_getitem_setitem_corner(test_data):
+ # invalid tuples, e.g. td.ts[:, None] vs. td.ts[:, 2]
+ with tm.assert_raises_regex(ValueError, 'tuple-index'):
+ test_data.ts[:, 2]
+ with tm.assert_raises_regex(ValueError, 'tuple-index'):
+ test_data.ts[:, 2] = 2
+
+ # weird lists. [slice(0, 5)] will work but not two slices
+ result = test_data.ts[[slice(None, 5)]]
+ expected = test_data.ts[:5]
+ assert_series_equal(result, expected)
+
+ # OK
+ pytest.raises(Exception, test_data.ts.__getitem__,
+ [5, slice(None, None)])
+ pytest.raises(Exception, test_data.ts.__setitem__,
+ [5, slice(None, None)], 2)
+
+
+def test_basic_getitem_with_labels(test_data):
+ indices = test_data.ts.index[[5, 10, 15]]
+
+ result = test_data.ts[indices]
+ expected = test_data.ts.reindex(indices)
+ assert_series_equal(result, expected)
+
+ result = test_data.ts[indices[0]:indices[2]]
+ expected = test_data.ts.loc[indices[0]:indices[2]]
+ assert_series_equal(result, expected)
+
+ # integer indexes, be careful
+ s = Series(np.random.randn(10), index=lrange(0, 20, 2))
+ inds = [0, 2, 5, 7, 8]
+ arr_inds = np.array([0, 2, 5, 7, 8])
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ result = s[inds]
+ expected = s.reindex(inds)
+ assert_series_equal(result, expected)
+
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ result = s[arr_inds]
+ expected = s.reindex(arr_inds)
+ assert_series_equal(result, expected)
+
+ # GH12089
+ # with tz for values
+ s = Series(pd.date_range("2011-01-01", periods=3, tz="US/Eastern"),
+ index=['a', 'b', 'c'])
+ expected = Timestamp('2011-01-01', tz='US/Eastern')
+ result = s.loc['a']
+ assert result == expected
+ result = s.iloc[0]
+ assert result == expected
+ result = s['a']
+ assert result == expected
+
+
+def test_setitem_with_tz():
+ for tz in ['US/Eastern', 'UTC', 'Asia/Tokyo']:
+ orig = pd.Series(pd.date_range('2016-01-01', freq='H', periods=3,
tz=tz))
assert orig.dtype == 'datetime64[ns, {0}]'.format(tz)
# scalar
s = orig.copy()
s[1] = pd.Timestamp('2011-01-01', tz=tz)
- exp = pd.Series([pd.Timestamp('2016-11-06 00:00-04:00', tz=tz),
- pd.Timestamp('2011-01-01 00:00-05:00', tz=tz),
- pd.Timestamp('2016-11-06 01:00-05:00', tz=tz)])
+ exp = pd.Series([pd.Timestamp('2016-01-01 00:00', tz=tz),
+ pd.Timestamp('2011-01-01 00:00', tz=tz),
+ pd.Timestamp('2016-01-01 02:00', tz=tz)])
tm.assert_series_equal(s, exp)
s = orig.copy()
@@ -472,7 +457,7 @@ def test_setitem_with_tz_dst(self):
assert vals.dtype == 'datetime64[ns, {0}]'.format(tz)
s[[1, 2]] = vals
- exp = pd.Series([pd.Timestamp('2016-11-06 00:00', tz=tz),
+ exp = pd.Series([pd.Timestamp('2016-01-01 00:00', tz=tz),
pd.Timestamp('2011-01-01 00:00', tz=tz),
pd.Timestamp('2012-01-01 00:00', tz=tz)])
tm.assert_series_equal(s, exp)
@@ -485,324 +470,378 @@ def test_setitem_with_tz_dst(self):
s.iloc[[1, 2]] = vals
tm.assert_series_equal(s, exp)
- def test_categorial_assigning_ops(self):
- orig = Series(Categorical(["b", "b"], categories=["a", "b"]))
- s = orig.copy()
- s[:] = "a"
- exp = Series(Categorical(["a", "a"], categories=["a", "b"]))
- tm.assert_series_equal(s, exp)
- s = orig.copy()
- s[1] = "a"
- exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
- tm.assert_series_equal(s, exp)
+def test_setitem_with_tz_dst():
+ # GH XXX
+ tz = 'US/Eastern'
+ orig = pd.Series(pd.date_range('2016-11-06', freq='H', periods=3,
+ tz=tz))
+ assert orig.dtype == 'datetime64[ns, {0}]'.format(tz)
- s = orig.copy()
- s[s.index > 0] = "a"
- exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
- tm.assert_series_equal(s, exp)
+ # scalar
+ s = orig.copy()
+ s[1] = pd.Timestamp('2011-01-01', tz=tz)
+ exp = pd.Series([pd.Timestamp('2016-11-06 00:00-04:00', tz=tz),
+ pd.Timestamp('2011-01-01 00:00-05:00', tz=tz),
+ pd.Timestamp('2016-11-06 01:00-05:00', tz=tz)])
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.loc[1] = pd.Timestamp('2011-01-01', tz=tz)
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.iloc[1] = pd.Timestamp('2011-01-01', tz=tz)
+ tm.assert_series_equal(s, exp)
+
+ # vector
+ vals = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
+ pd.Timestamp('2012-01-01', tz=tz)], index=[1, 2])
+ assert vals.dtype == 'datetime64[ns, {0}]'.format(tz)
+
+ s[[1, 2]] = vals
+ exp = pd.Series([pd.Timestamp('2016-11-06 00:00', tz=tz),
+ pd.Timestamp('2011-01-01 00:00', tz=tz),
+ pd.Timestamp('2012-01-01 00:00', tz=tz)])
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.loc[[1, 2]] = vals
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.iloc[[1, 2]] = vals
+ tm.assert_series_equal(s, exp)
+
+
+def test_categorial_assigning_ops():
+ orig = Series(Categorical(["b", "b"], categories=["a", "b"]))
+ s = orig.copy()
+ s[:] = "a"
+ exp = Series(Categorical(["a", "a"], categories=["a", "b"]))
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s[1] = "a"
+ exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
+ tm.assert_series_equal(s, exp)
- s = orig.copy()
- s[[False, True]] = "a"
- exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
- tm.assert_series_equal(s, exp)
+ s = orig.copy()
+ s[s.index > 0] = "a"
+ exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
+ tm.assert_series_equal(s, exp)
- s = orig.copy()
- s.index = ["x", "y"]
- s["y"] = "a"
- exp = Series(Categorical(["b", "a"], categories=["a", "b"]),
- index=["x", "y"])
- tm.assert_series_equal(s, exp)
+ s = orig.copy()
+ s[[False, True]] = "a"
+ exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
+ tm.assert_series_equal(s, exp)
- # ensure that one can set something to np.nan
- s = Series(Categorical([1, 2, 3]))
- exp = Series(Categorical([1, np.nan, 3], categories=[1, 2, 3]))
- s[1] = np.nan
- tm.assert_series_equal(s, exp)
+ s = orig.copy()
+ s.index = ["x", "y"]
+ s["y"] = "a"
+ exp = Series(Categorical(["b", "a"], categories=["a", "b"]),
+ index=["x", "y"])
+ tm.assert_series_equal(s, exp)
- def test_take(self):
- s = Series([-1, 5, 6, 2, 4])
+ # ensure that one can set something to np.nan
+ s = Series(Categorical([1, 2, 3]))
+ exp = Series(Categorical([1, np.nan, 3], categories=[1, 2, 3]))
+ s[1] = np.nan
+ tm.assert_series_equal(s, exp)
- actual = s.take([1, 3, 4])
- expected = Series([5, 2, 4], index=[1, 3, 4])
- tm.assert_series_equal(actual, expected)
- actual = s.take([-1, 3, 4])
- expected = Series([4, 2, 4], index=[4, 3, 4])
- tm.assert_series_equal(actual, expected)
+def test_take():
+ s = Series([-1, 5, 6, 2, 4])
- pytest.raises(IndexError, s.take, [1, 10])
- pytest.raises(IndexError, s.take, [2, 5])
+ actual = s.take([1, 3, 4])
+ expected = Series([5, 2, 4], index=[1, 3, 4])
+ tm.assert_series_equal(actual, expected)
- with tm.assert_produces_warning(FutureWarning):
- s.take([-1, 3, 4], convert=False)
+ actual = s.take([-1, 3, 4])
+ expected = Series([4, 2, 4], index=[4, 3, 4])
+ tm.assert_series_equal(actual, expected)
+
+ pytest.raises(IndexError, s.take, [1, 10])
+ pytest.raises(IndexError, s.take, [2, 5])
+
+ with tm.assert_produces_warning(FutureWarning):
+ s.take([-1, 3, 4], convert=False)
- def test_ix_setitem(self):
- inds = self.series.index[[3, 4, 7]]
- result = self.series.copy()
- result.loc[inds] = 5
+def test_ix_setitem(test_data):
+ inds = test_data.series.index[[3, 4, 7]]
+
+ result = test_data.series.copy()
+ result.loc[inds] = 5
+
+ expected = test_data.series.copy()
+ expected[[3, 4, 7]] = 5
+ assert_series_equal(result, expected)
- expected = self.series.copy()
- expected[[3, 4, 7]] = 5
- assert_series_equal(result, expected)
+ result.iloc[5:10] = 10
+ expected[5:10] = 10
+ assert_series_equal(result, expected)
- result.iloc[5:10] = 10
- expected[5:10] = 10
- assert_series_equal(result, expected)
+ # set slice with indices
+ d1, d2 = test_data.series.index[[5, 15]]
+ result.loc[d1:d2] = 6
+ expected[5:16] = 6 # because it's inclusive
+ assert_series_equal(result, expected)
- # set slice with indices
- d1, d2 = self.series.index[[5, 15]]
- result.loc[d1:d2] = 6
- expected[5:16] = 6 # because it's inclusive
- assert_series_equal(result, expected)
+ # set index value
+ test_data.series.loc[d1] = 4
+ test_data.series.loc[d2] = 6
+ assert test_data.series[d1] == 4
+ assert test_data.series[d2] == 6
- # set index value
- self.series.loc[d1] = 4
- self.series.loc[d2] = 6
- assert self.series[d1] == 4
- assert self.series[d2] == 6
-
- def test_setitem_na(self):
- # these induce dtype changes
- expected = Series([np.nan, 3, np.nan, 5, np.nan, 7, np.nan, 9, np.nan])
- s = Series([2, 3, 4, 5, 6, 7, 8, 9, 10])
- s[::2] = np.nan
- assert_series_equal(s, expected)
-
- # gets coerced to float, right?
- expected = Series([np.nan, 1, np.nan, 0])
- s = Series([True, True, False, False])
- s[::2] = np.nan
- assert_series_equal(s, expected)
-
- expected = Series([np.nan, np.nan, np.nan, np.nan, np.nan, 5, 6, 7, 8,
- 9])
- s = Series(np.arange(10))
- s[:5] = np.nan
- assert_series_equal(s, expected)
-
- def test_basic_indexing(self):
- s = Series(np.random.randn(5), index=['a', 'b', 'a', 'a', 'b'])
-
- pytest.raises(IndexError, s.__getitem__, 5)
- pytest.raises(IndexError, s.__setitem__, 5, 0)
-
- pytest.raises(KeyError, s.__getitem__, 'c')
-
- s = s.sort_index()
-
- pytest.raises(IndexError, s.__getitem__, 5)
- pytest.raises(IndexError, s.__setitem__, 5, 0)
-
- def test_timedelta_assignment(self):
- # GH 8209
- s = Series([])
- s.loc['B'] = timedelta(1)
- tm.assert_series_equal(s, Series(Timedelta('1 days'), index=['B']))
-
- s = s.reindex(s.index.insert(0, 'A'))
- tm.assert_series_equal(s, Series(
- [np.nan, Timedelta('1 days')], index=['A', 'B']))
-
- result = s.fillna(timedelta(1))
- expected = Series(Timedelta('1 days'), index=['A', 'B'])
- tm.assert_series_equal(result, expected)
-
- s.loc['A'] = timedelta(1)
- tm.assert_series_equal(s, expected)
-
- # GH 14155
- s = Series(10 * [np.timedelta64(10, 'm')])
- s.loc[[1, 2, 3]] = np.timedelta64(20, 'm')
- expected = pd.Series(10 * [np.timedelta64(10, 'm')])
- expected.loc[[1, 2, 3]] = pd.Timedelta(np.timedelta64(20, 'm'))
- tm.assert_series_equal(s, expected)
-
- def test_underlying_data_conversion(self):
-
- # GH 4080
- df = DataFrame({c: [1, 2, 3] for c in ['a', 'b', 'c']})
- df.set_index(['a', 'b', 'c'], inplace=True)
- s = Series([1], index=[(2, 2, 2)])
- df['val'] = 0
- df
- df['val'].update(s)
-
- expected = DataFrame(
- dict(a=[1, 2, 3], b=[1, 2, 3], c=[1, 2, 3], val=[0, 1, 0]))
- expected.set_index(['a', 'b', 'c'], inplace=True)
- tm.assert_frame_equal(df, expected)
-
- # GH 3970
- # these are chained assignments as well
- pd.set_option('chained_assignment', None)
- df = DataFrame({"aa": range(5), "bb": [2.2] * 5})
- df["cc"] = 0.0
-
- ck = [True] * len(df)
-
- df["bb"].iloc[0] = .13
-
- # TODO: unused
- df_tmp = df.iloc[ck] # noqa
-
- df["bb"].iloc[0] = .15
- assert df['bb'].iloc[0] == 0.15
- pd.set_option('chained_assignment', 'raise')
-
- # GH 3217
- df = DataFrame(dict(a=[1, 3], b=[np.nan, 2]))
- df['c'] = np.nan
- df['c'].update(pd.Series(['foo'], index=[0]))
-
- expected = DataFrame(dict(a=[1, 3], b=[np.nan, 2], c=['foo', np.nan]))
- tm.assert_frame_equal(df, expected)
-
- def test_preserveRefs(self):
- seq = self.ts[[5, 10, 15]]
- seq[1] = np.NaN
- assert not np.isnan(self.ts[10])
-
- def test_drop(self):
-
- # unique
- s = Series([1, 2], index=['one', 'two'])
- expected = Series([1], index=['one'])
- result = s.drop(['two'])
- assert_series_equal(result, expected)
- result = s.drop('two', axis='rows')
- assert_series_equal(result, expected)
- # non-unique
- # GH 5248
- s = Series([1, 1, 2], index=['one', 'two', 'one'])
- expected = Series([1, 2], index=['one', 'one'])
- result = s.drop(['two'], axis=0)
- assert_series_equal(result, expected)
- result = s.drop('two')
- assert_series_equal(result, expected)
+def test_setitem_na():
+ # these induce dtype changes
+ expected = Series([np.nan, 3, np.nan, 5, np.nan, 7, np.nan, 9, np.nan])
+ s = Series([2, 3, 4, 5, 6, 7, 8, 9, 10])
+ s[::2] = np.nan
+ assert_series_equal(s, expected)
- expected = Series([1], index=['two'])
- result = s.drop(['one'])
- assert_series_equal(result, expected)
- result = s.drop('one')
- assert_series_equal(result, expected)
+ # gets coerced to float, right?
+ expected = Series([np.nan, 1, np.nan, 0])
+ s = Series([True, True, False, False])
+ s[::2] = np.nan
+ assert_series_equal(s, expected)
- # single string/tuple-like
- s = Series(range(3), index=list('abc'))
- pytest.raises(KeyError, s.drop, 'bc')
- pytest.raises(KeyError, s.drop, ('a',))
-
- # errors='ignore'
- s = Series(range(3), index=list('abc'))
- result = s.drop('bc', errors='ignore')
- assert_series_equal(result, s)
- result = s.drop(['a', 'd'], errors='ignore')
- expected = s.iloc[1:]
- assert_series_equal(result, expected)
+ expected = Series([np.nan, np.nan, np.nan, np.nan, np.nan, 5, 6, 7, 8,
+ 9])
+ s = Series(np.arange(10))
+ s[:5] = np.nan
+ assert_series_equal(s, expected)
- # bad axis
- pytest.raises(ValueError, s.drop, 'one', axis='columns')
- # GH 8522
- s = Series([2, 3], index=[True, False])
- assert s.index.is_object()
- result = s.drop(True)
- expected = Series([3], index=[False])
- assert_series_equal(result, expected)
+def test_basic_indexing():
+ s = Series(np.random.randn(5), index=['a', 'b', 'a', 'a', 'b'])
+
+ pytest.raises(IndexError, s.__getitem__, 5)
+ pytest.raises(IndexError, s.__setitem__, 5, 0)
+
+ pytest.raises(KeyError, s.__getitem__, 'c')
+
+ s = s.sort_index()
+
+ pytest.raises(IndexError, s.__getitem__, 5)
+ pytest.raises(IndexError, s.__setitem__, 5, 0)
- # GH 16877
- s = Series([2, 3], index=[0, 1])
- with tm.assert_raises_regex(KeyError, 'not contained in axis'):
- s.drop([False, True])
- def test_select(self):
+def test_timedelta_assignment():
+ # GH 8209
+ s = Series([])
+ s.loc['B'] = timedelta(1)
+ tm.assert_series_equal(s, Series(Timedelta('1 days'), index=['B']))
+
+ s = s.reindex(s.index.insert(0, 'A'))
+ tm.assert_series_equal(s, Series(
+ [np.nan, Timedelta('1 days')], index=['A', 'B']))
+
+ result = s.fillna(timedelta(1))
+ expected = Series(Timedelta('1 days'), index=['A', 'B'])
+ tm.assert_series_equal(result, expected)
- # deprecated: gh-12410
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- n = len(self.ts)
- result = self.ts.select(lambda x: x >= self.ts.index[n // 2])
- expected = self.ts.reindex(self.ts.index[n // 2:])
- assert_series_equal(result, expected)
+ s.loc['A'] = timedelta(1)
+ tm.assert_series_equal(s, expected)
+
+ # GH 14155
+ s = Series(10 * [np.timedelta64(10, 'm')])
+ s.loc[[1, 2, 3]] = np.timedelta64(20, 'm')
+ expected = pd.Series(10 * [np.timedelta64(10, 'm')])
+ expected.loc[[1, 2, 3]] = pd.Timedelta(np.timedelta64(20, 'm'))
+ tm.assert_series_equal(s, expected)
- result = self.ts.select(lambda x: x.weekday() == 2)
- expected = self.ts[self.ts.index.weekday == 2]
- assert_series_equal(result, expected)
- def test_cast_on_putmask(self):
+def test_underlying_data_conversion():
+ # GH 4080
+ df = DataFrame({c: [1, 2, 3] for c in ['a', 'b', 'c']})
+ df.set_index(['a', 'b', 'c'], inplace=True)
+ s = Series([1], index=[(2, 2, 2)])
+ df['val'] = 0
+ df
+ df['val'].update(s)
+
+ expected = DataFrame(
+ dict(a=[1, 2, 3], b=[1, 2, 3], c=[1, 2, 3], val=[0, 1, 0]))
+ expected.set_index(['a', 'b', 'c'], inplace=True)
+ tm.assert_frame_equal(df, expected)
+
+ # GH 3970
+ # these are chained assignments as well
+ pd.set_option('chained_assignment', None)
+ df = DataFrame({"aa": range(5), "bb": [2.2] * 5})
+ df["cc"] = 0.0
+
+ ck = [True] * len(df)
+
+ df["bb"].iloc[0] = .13
+
+ # TODO: unused
+ df_tmp = df.iloc[ck] # noqa
+
+ df["bb"].iloc[0] = .15
+ assert df['bb'].iloc[0] == 0.15
+ pd.set_option('chained_assignment', 'raise')
+
+ # GH 3217
+ df = DataFrame(dict(a=[1, 3], b=[np.nan, 2]))
+ df['c'] = np.nan
+ df['c'].update(pd.Series(['foo'], index=[0]))
+
+ expected = DataFrame(dict(a=[1, 3], b=[np.nan, 2], c=['foo', np.nan]))
+ tm.assert_frame_equal(df, expected)
+
+
+def test_preserve_refs(test_data):
+ seq = test_data.ts[[5, 10, 15]]
+ seq[1] = np.NaN
+ assert not np.isnan(test_data.ts[10])
+
+
+def test_drop():
+ # unique
+ s = Series([1, 2], index=['one', 'two'])
+ expected = Series([1], index=['one'])
+ result = s.drop(['two'])
+ assert_series_equal(result, expected)
+ result = s.drop('two', axis='rows')
+ assert_series_equal(result, expected)
+
+ # non-unique
+ # GH 5248
+ s = Series([1, 1, 2], index=['one', 'two', 'one'])
+ expected = Series([1, 2], index=['one', 'one'])
+ result = s.drop(['two'], axis=0)
+ assert_series_equal(result, expected)
+ result = s.drop('two')
+ assert_series_equal(result, expected)
+
+ expected = Series([1], index=['two'])
+ result = s.drop(['one'])
+ assert_series_equal(result, expected)
+ result = s.drop('one')
+ assert_series_equal(result, expected)
+
+ # single string/tuple-like
+ s = Series(range(3), index=list('abc'))
+ pytest.raises(KeyError, s.drop, 'bc')
+ pytest.raises(KeyError, s.drop, ('a',))
+
+ # errors='ignore'
+ s = Series(range(3), index=list('abc'))
+ result = s.drop('bc', errors='ignore')
+ assert_series_equal(result, s)
+ result = s.drop(['a', 'd'], errors='ignore')
+ expected = s.iloc[1:]
+ assert_series_equal(result, expected)
+
+ # bad axis
+ pytest.raises(ValueError, s.drop, 'one', axis='columns')
+
+ # GH 8522
+ s = Series([2, 3], index=[True, False])
+ assert s.index.is_object()
+ result = s.drop(True)
+ expected = Series([3], index=[False])
+ assert_series_equal(result, expected)
+
+ # GH 16877
+ s = Series([2, 3], index=[0, 1])
+ with tm.assert_raises_regex(KeyError, 'not contained in axis'):
+ s.drop([False, True])
+
+
+def test_select(test_data):
+ # deprecated: gh-12410
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ n = len(test_data.ts)
+ result = test_data.ts.select(lambda x: x >= test_data.ts.index[n // 2])
+ expected = test_data.ts.reindex(test_data.ts.index[n // 2:])
+ assert_series_equal(result, expected)
+
+ result = test_data.ts.select(lambda x: x.weekday() == 2)
+ expected = test_data.ts[test_data.ts.index.weekday == 2]
+ assert_series_equal(result, expected)
- # GH 2746
- # need to upcast
- s = Series([1, 2], index=[1, 2], dtype='int64')
- s[[True, False]] = Series([0], index=[1], dtype='int64')
- expected = Series([0, 2], index=[1, 2], dtype='int64')
+def test_cast_on_putmask():
+ # GH 2746
- assert_series_equal(s, expected)
+ # need to upcast
+ s = Series([1, 2], index=[1, 2], dtype='int64')
+ s[[True, False]] = Series([0], index=[1], dtype='int64')
+ expected = Series([0, 2], index=[1, 2], dtype='int64')
- def test_type_promote_putmask(self):
+ assert_series_equal(s, expected)
- # GH8387: test that changing types does not break alignment
- ts = Series(np.random.randn(100), index=np.arange(100, 0, -1)).round(5)
- left, mask = ts.copy(), ts > 0
- right = ts[mask].copy().map(str)
- left[mask] = right
- assert_series_equal(left, ts.map(lambda t: str(t) if t > 0 else t))
- s = Series([0, 1, 2, 0])
- mask = s > 0
- s2 = s[mask].map(str)
- s[mask] = s2
- assert_series_equal(s, Series([0, '1', '2', 0]))
+def test_type_promote_putmask():
+ # GH8387: test that changing types does not break alignment
+ ts = Series(np.random.randn(100), index=np.arange(100, 0, -1)).round(5)
+ left, mask = ts.copy(), ts > 0
+ right = ts[mask].copy().map(str)
+ left[mask] = right
+ assert_series_equal(left, ts.map(lambda t: str(t) if t > 0 else t))
- s = Series([0, 'foo', 'bar', 0])
- mask = Series([False, True, True, False])
- s2 = s[mask]
- s[mask] = s2
- assert_series_equal(s, Series([0, 'foo', 'bar', 0]))
+ s = Series([0, 1, 2, 0])
+ mask = s > 0
+ s2 = s[mask].map(str)
+ s[mask] = s2
+ assert_series_equal(s, Series([0, '1', '2', 0]))
- def test_head_tail(self):
- assert_series_equal(self.series.head(), self.series[:5])
- assert_series_equal(self.series.head(0), self.series[0:0])
- assert_series_equal(self.series.tail(), self.series[-5:])
- assert_series_equal(self.series.tail(0), self.series[0:0])
+ s = Series([0, 'foo', 'bar', 0])
+ mask = Series([False, True, True, False])
+ s2 = s[mask]
+ s[mask] = s2
+ assert_series_equal(s, Series([0, 'foo', 'bar', 0]))
- def test_multilevel_preserve_name(self):
- index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two',
- 'three']],
- labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
- [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
- names=['first', 'second'])
- s = Series(np.random.randn(len(index)), index=index, name='sth')
- result = s['foo']
- result2 = s.loc['foo']
- assert result.name == s.name
- assert result2.name == s.name
+def test_head_tail(test_data):
+ assert_series_equal(test_data.series.head(), test_data.series[:5])
+ assert_series_equal(test_data.series.head(0), test_data.series[0:0])
+ assert_series_equal(test_data.series.tail(), test_data.series[-5:])
+ assert_series_equal(test_data.series.tail(0), test_data.series[0:0])
- def test_setitem_scalar_into_readonly_backing_data(self):
- # GH14359: test that you cannot mutate a read only buffer
- array = np.zeros(5)
- array.flags.writeable = False # make the array immutable
- series = Series(array)
+def test_multilevel_preserve_name():
+ index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two',
+ 'three']],
+ labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+ [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+ names=['first', 'second'])
+ s = Series(np.random.randn(len(index)), index=index, name='sth')
- for n in range(len(series)):
- with pytest.raises(ValueError):
- series[n] = 1
+ result = s['foo']
+ result2 = s.loc['foo']
+ assert result.name == s.name
+ assert result2.name == s.name
- assert array[n] == 0
- def test_setitem_slice_into_readonly_backing_data(self):
- # GH14359: test that you cannot mutate a read only buffer
+def test_setitem_scalar_into_readonly_backing_data():
+ # GH14359: test that you cannot mutate a read only buffer
- array = np.zeros(5)
- array.flags.writeable = False # make the array immutable
- series = Series(array)
+ array = np.zeros(5)
+ array.flags.writeable = False # make the array immutable
+ series = Series(array)
+ for n in range(len(series)):
with pytest.raises(ValueError):
- series[1:3] = 1
+ series[n] = 1
+
+ assert array[n] == 0
+
+
+def test_setitem_slice_into_readonly_backing_data():
+ # GH14359: test that you cannot mutate a read only buffer
+
+ array = np.zeros(5)
+ array.flags.writeable = False # make the array immutable
+ series = Series(array)
+
+ with pytest.raises(ValueError):
+ series[1:3] = 1
- assert not array.any()
+ assert not array.any()
diff --git a/pandas/tests/series/indexing/test_loc.py b/pandas/tests/series/indexing/test_loc.py
index b2a835616cde2..d78b09a3c6ccb 100644
--- a/pandas/tests/series/indexing/test_loc.py
+++ b/pandas/tests/series/indexing/test_loc.py
@@ -11,138 +11,143 @@
from pandas.compat import lrange
from pandas.util.testing import (assert_series_equal)
-from pandas.tests.series.common import TestData
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-class TestLoc(TestData):
-
- def test_loc_getitem(self):
- inds = self.series.index[[3, 4, 7]]
- assert_series_equal(self.series.loc[inds], self.series.reindex(inds))
- assert_series_equal(self.series.iloc[5::2], self.series[5::2])
-
- # slice with indices
- d1, d2 = self.ts.index[[5, 15]]
- result = self.ts.loc[d1:d2]
- expected = self.ts.truncate(d1, d2)
- assert_series_equal(result, expected)
-
- # boolean
- mask = self.series > self.series.median()
- assert_series_equal(self.series.loc[mask], self.series[mask])
-
- # ask for index value
- assert self.ts.loc[d1] == self.ts[d1]
- assert self.ts.loc[d2] == self.ts[d2]
-
- def test_loc_getitem_not_monotonic(self):
- d1, d2 = self.ts.index[[5, 15]]
-
- ts2 = self.ts[::2][[1, 2, 0]]
-
- pytest.raises(KeyError, ts2.loc.__getitem__, slice(d1, d2))
- pytest.raises(KeyError, ts2.loc.__setitem__, slice(d1, d2), 0)
-
- def test_loc_getitem_setitem_integer_slice_keyerrors(self):
- s = Series(np.random.randn(10), index=lrange(0, 20, 2))
-
- # this is OK
- cp = s.copy()
- cp.iloc[4:10] = 0
- assert (cp.iloc[4:10] == 0).all()
-
- # so is this
- cp = s.copy()
- cp.iloc[3:11] = 0
- assert (cp.iloc[3:11] == 0).values.all()
-
- result = s.iloc[2:6]
- result2 = s.loc[3:11]
- expected = s.reindex([4, 6, 8, 10])
-
- assert_series_equal(result, expected)
- assert_series_equal(result2, expected)
-
- # non-monotonic, raise KeyError
- s2 = s.iloc[lrange(5) + lrange(5, 10)[::-1]]
- pytest.raises(KeyError, s2.loc.__getitem__, slice(3, 11))
- pytest.raises(KeyError, s2.loc.__setitem__, slice(3, 11), 0)
-
- def test_loc_getitem_iterator(self):
- idx = iter(self.series.index[:10])
- result = self.series.loc[idx]
- assert_series_equal(result, self.series[:10])
-
- def test_loc_setitem_boolean(self):
- mask = self.series > self.series.median()
-
- result = self.series.copy()
- result.loc[mask] = 0
- expected = self.series
- expected[mask] = 0
- assert_series_equal(result, expected)
-
- def test_loc_setitem_corner(self):
- inds = list(self.series.index[[5, 8, 12]])
- self.series.loc[inds] = 5
- pytest.raises(Exception, self.series.loc.__setitem__,
- inds + ['foo'], 5)
-
- def test_basic_setitem_with_labels(self):
- indices = self.ts.index[[5, 10, 15]]
-
- cp = self.ts.copy()
- exp = self.ts.copy()
- cp[indices] = 0
- exp.loc[indices] = 0
- assert_series_equal(cp, exp)
-
- cp = self.ts.copy()
- exp = self.ts.copy()
- cp[indices[0]:indices[2]] = 0
- exp.loc[indices[0]:indices[2]] = 0
- assert_series_equal(cp, exp)
-
- # integer indexes, be careful
- s = Series(np.random.randn(10), index=lrange(0, 20, 2))
- inds = [0, 4, 6]
- arr_inds = np.array([0, 4, 6])
-
- cp = s.copy()
- exp = s.copy()
- s[inds] = 0
- s.loc[inds] = 0
- assert_series_equal(cp, exp)
-
- cp = s.copy()
- exp = s.copy()
- s[arr_inds] = 0
- s.loc[arr_inds] = 0
- assert_series_equal(cp, exp)
-
- inds_notfound = [0, 4, 5, 6]
- arr_inds_notfound = np.array([0, 4, 5, 6])
- pytest.raises(Exception, s.__setitem__, inds_notfound, 0)
- pytest.raises(Exception, s.__setitem__, arr_inds_notfound, 0)
-
- # GH12089
- # with tz for values
- s = Series(pd.date_range("2011-01-01", periods=3, tz="US/Eastern"),
- index=['a', 'b', 'c'])
- s2 = s.copy()
- expected = Timestamp('2011-01-03', tz='US/Eastern')
- s2.loc['a'] = expected
- result = s2.loc['a']
- assert result == expected
-
- s2 = s.copy()
- s2.iloc[0] = expected
- result = s2.iloc[0]
- assert result == expected
-
- s2 = s.copy()
- s2['a'] = expected
- result = s2['a']
- assert result == expected
+def test_loc_getitem(test_data):
+ inds = test_data.series.index[[3, 4, 7]]
+ assert_series_equal(
+ test_data.series.loc[inds],
+ test_data.series.reindex(inds))
+ assert_series_equal(test_data.series.iloc[5::2], test_data.series[5::2])
+
+ # slice with indices
+ d1, d2 = test_data.ts.index[[5, 15]]
+ result = test_data.ts.loc[d1:d2]
+ expected = test_data.ts.truncate(d1, d2)
+ assert_series_equal(result, expected)
+
+ # boolean
+ mask = test_data.series > test_data.series.median()
+ assert_series_equal(test_data.series.loc[mask], test_data.series[mask])
+
+ # ask for index value
+ assert test_data.ts.loc[d1] == test_data.ts[d1]
+ assert test_data.ts.loc[d2] == test_data.ts[d2]
+
+
+def test_loc_getitem_not_monotonic(test_data):
+ d1, d2 = test_data.ts.index[[5, 15]]
+
+ ts2 = test_data.ts[::2][[1, 2, 0]]
+
+ pytest.raises(KeyError, ts2.loc.__getitem__, slice(d1, d2))
+ pytest.raises(KeyError, ts2.loc.__setitem__, slice(d1, d2), 0)
+
+
+def test_loc_getitem_setitem_integer_slice_keyerrors():
+ s = Series(np.random.randn(10), index=lrange(0, 20, 2))
+
+ # this is OK
+ cp = s.copy()
+ cp.iloc[4:10] = 0
+ assert (cp.iloc[4:10] == 0).all()
+
+ # so is this
+ cp = s.copy()
+ cp.iloc[3:11] = 0
+ assert (cp.iloc[3:11] == 0).values.all()
+
+ result = s.iloc[2:6]
+ result2 = s.loc[3:11]
+ expected = s.reindex([4, 6, 8, 10])
+
+ assert_series_equal(result, expected)
+ assert_series_equal(result2, expected)
+
+ # non-monotonic, raise KeyError
+ s2 = s.iloc[lrange(5) + lrange(5, 10)[::-1]]
+ pytest.raises(KeyError, s2.loc.__getitem__, slice(3, 11))
+ pytest.raises(KeyError, s2.loc.__setitem__, slice(3, 11), 0)
+
+
+def test_loc_getitem_iterator(test_data):
+ idx = iter(test_data.series.index[:10])
+ result = test_data.series.loc[idx]
+ assert_series_equal(result, test_data.series[:10])
+
+
+def test_loc_setitem_boolean(test_data):
+ mask = test_data.series > test_data.series.median()
+
+ result = test_data.series.copy()
+ result.loc[mask] = 0
+ expected = test_data.series
+ expected[mask] = 0
+ assert_series_equal(result, expected)
+
+
+def test_loc_setitem_corner(test_data):
+ inds = list(test_data.series.index[[5, 8, 12]])
+ test_data.series.loc[inds] = 5
+ pytest.raises(Exception, test_data.series.loc.__setitem__,
+ inds + ['foo'], 5)
+
+
+def test_basic_setitem_with_labels(test_data):
+ indices = test_data.ts.index[[5, 10, 15]]
+
+ cp = test_data.ts.copy()
+ exp = test_data.ts.copy()
+ cp[indices] = 0
+ exp.loc[indices] = 0
+ assert_series_equal(cp, exp)
+
+ cp = test_data.ts.copy()
+ exp = test_data.ts.copy()
+ cp[indices[0]:indices[2]] = 0
+ exp.loc[indices[0]:indices[2]] = 0
+ assert_series_equal(cp, exp)
+
+ # integer indexes, be careful
+ s = Series(np.random.randn(10), index=lrange(0, 20, 2))
+ inds = [0, 4, 6]
+ arr_inds = np.array([0, 4, 6])
+
+ cp = s.copy()
+ exp = s.copy()
+ s[inds] = 0
+ s.loc[inds] = 0
+ assert_series_equal(cp, exp)
+
+ cp = s.copy()
+ exp = s.copy()
+ s[arr_inds] = 0
+ s.loc[arr_inds] = 0
+ assert_series_equal(cp, exp)
+
+ inds_notfound = [0, 4, 5, 6]
+ arr_inds_notfound = np.array([0, 4, 5, 6])
+ pytest.raises(Exception, s.__setitem__, inds_notfound, 0)
+ pytest.raises(Exception, s.__setitem__, arr_inds_notfound, 0)
+
+ # GH12089
+ # with tz for values
+ s = Series(pd.date_range("2011-01-01", periods=3, tz="US/Eastern"),
+ index=['a', 'b', 'c'])
+ s2 = s.copy()
+ expected = Timestamp('2011-01-03', tz='US/Eastern')
+ s2.loc['a'] = expected
+ result = s2.loc['a']
+ assert result == expected
+
+ s2 = s.copy()
+ s2.iloc[0] = expected
+ result = s2.iloc[0]
+ assert result == expected
+
+ s2 = s.copy()
+ s2['a'] = expected
+ result = s2['a']
+ assert result == expected
diff --git a/pandas/tests/series/indexing/test_numeric.py b/pandas/tests/series/indexing/test_numeric.py
index ee53ec572bcae..e6035ccf2d569 100644
--- a/pandas/tests/series/indexing/test_numeric.py
+++ b/pandas/tests/series/indexing/test_numeric.py
@@ -11,213 +11,221 @@
from pandas.compat import lrange, range
from pandas.util.testing import (assert_series_equal)
-from pandas.tests.series.common import TestData
import pandas.util.testing as tm
-class TestNumericIndexers(TestData):
-
- def test_get(self):
- # GH 6383
- s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56, 45,
- 51, 39, 55, 43, 54, 52, 51, 54]))
-
- result = s.get(25, 0)
- expected = 0
- assert result == expected
-
- s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56,
- 45, 51, 39, 55, 43, 54, 52, 51, 54]),
- index=pd.Float64Index(
- [25.0, 36.0, 49.0, 64.0, 81.0, 100.0,
- 121.0, 144.0, 169.0, 196.0, 1225.0,
- 1296.0, 1369.0, 1444.0, 1521.0, 1600.0,
- 1681.0, 1764.0, 1849.0, 1936.0],
- dtype='object'))
-
- result = s.get(25, 0)
- expected = 43
- assert result == expected
-
- # GH 7407
- # with a boolean accessor
- df = pd.DataFrame({'i': [0] * 3, 'b': [False] * 3})
- vc = df.i.value_counts()
- result = vc.get(99, default='Missing')
- assert result == 'Missing'
-
- vc = df.b.value_counts()
- result = vc.get(False, default='Missing')
- assert result == 3
-
- result = vc.get(True, default='Missing')
- assert result == 'Missing'
-
- def test_get_nan(self):
- # GH 8569
- s = pd.Float64Index(range(10)).to_series()
- assert s.get(np.nan) is None
- assert s.get(np.nan, default='Missing') == 'Missing'
-
- # ensure that fixing the above hasn't broken get
- # with multiple elements
- idx = [20, 30]
- assert_series_equal(s.get(idx),
- Series([np.nan] * 2, index=idx))
- idx = [np.nan, np.nan]
- assert_series_equal(s.get(idx),
- Series([np.nan] * 2, index=idx))
-
- def test_delitem(self):
- # GH 5542
- # should delete the item inplace
- s = Series(lrange(5))
+def test_get():
+ # GH 6383
+ s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56, 45,
+ 51, 39, 55, 43, 54, 52, 51, 54]))
+
+ result = s.get(25, 0)
+ expected = 0
+ assert result == expected
+
+ s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56,
+ 45, 51, 39, 55, 43, 54, 52, 51, 54]),
+ index=pd.Float64Index(
+ [25.0, 36.0, 49.0, 64.0, 81.0, 100.0,
+ 121.0, 144.0, 169.0, 196.0, 1225.0,
+ 1296.0, 1369.0, 1444.0, 1521.0, 1600.0,
+ 1681.0, 1764.0, 1849.0, 1936.0],
+ dtype='object'))
+
+ result = s.get(25, 0)
+ expected = 43
+ assert result == expected
+
+ # GH 7407
+ # with a boolean accessor
+ df = pd.DataFrame({'i': [0] * 3, 'b': [False] * 3})
+ vc = df.i.value_counts()
+ result = vc.get(99, default='Missing')
+ assert result == 'Missing'
+
+ vc = df.b.value_counts()
+ result = vc.get(False, default='Missing')
+ assert result == 3
+
+ result = vc.get(True, default='Missing')
+ assert result == 'Missing'
+
+
+def test_get_nan():
+ # GH 8569
+ s = pd.Float64Index(range(10)).to_series()
+ assert s.get(np.nan) is None
+ assert s.get(np.nan, default='Missing') == 'Missing'
+
+ # ensure that fixing the above hasn't broken get
+ # with multiple elements
+ idx = [20, 30]
+ assert_series_equal(s.get(idx),
+ Series([np.nan] * 2, index=idx))
+ idx = [np.nan, np.nan]
+ assert_series_equal(s.get(idx),
+ Series([np.nan] * 2, index=idx))
+
+
+def test_delitem():
+ # GH 5542
+ # should delete the item inplace
+ s = Series(lrange(5))
+ del s[0]
+
+ expected = Series(lrange(1, 5), index=lrange(1, 5))
+ assert_series_equal(s, expected)
+
+ del s[1]
+ expected = Series(lrange(2, 5), index=lrange(2, 5))
+ assert_series_equal(s, expected)
+
+ # empty
+ s = Series()
+
+ def f():
del s[0]
- expected = Series(lrange(1, 5), index=lrange(1, 5))
- assert_series_equal(s, expected)
+ pytest.raises(KeyError, f)
- del s[1]
- expected = Series(lrange(2, 5), index=lrange(2, 5))
- assert_series_equal(s, expected)
+ # only 1 left, del, add, del
+ s = Series(1)
+ del s[0]
+ assert_series_equal(s, Series(dtype='int64', index=Index(
+ [], dtype='int64')))
+ s[0] = 1
+ assert_series_equal(s, Series(1))
+ del s[0]
+ assert_series_equal(s, Series(dtype='int64', index=Index(
+ [], dtype='int64')))
- # empty
- s = Series()
+ # Index(dtype=object)
+ s = Series(1, index=['a'])
+ del s['a']
+ assert_series_equal(s, Series(dtype='int64', index=Index(
+ [], dtype='object')))
+ s['a'] = 1
+ assert_series_equal(s, Series(1, index=['a']))
+ del s['a']
+ assert_series_equal(s, Series(dtype='int64', index=Index(
+ [], dtype='object')))
- def f():
- del s[0]
- pytest.raises(KeyError, f)
+def test_slice_float64():
+ values = np.arange(10., 50., 2)
+ index = Index(values)
- # only 1 left, del, add, del
- s = Series(1)
- del s[0]
- assert_series_equal(s, Series(dtype='int64', index=Index(
- [], dtype='int64')))
- s[0] = 1
- assert_series_equal(s, Series(1))
- del s[0]
- assert_series_equal(s, Series(dtype='int64', index=Index(
- [], dtype='int64')))
+ start, end = values[[5, 15]]
+
+ s = Series(np.random.randn(20), index=index)
+
+ result = s[start:end]
+ expected = s.iloc[5:16]
+ assert_series_equal(result, expected)
+
+ result = s.loc[start:end]
+ assert_series_equal(result, expected)
+
+ df = DataFrame(np.random.randn(20, 3), index=index)
+
+ result = df[start:end]
+ expected = df.iloc[5:16]
+ tm.assert_frame_equal(result, expected)
- # Index(dtype=object)
- s = Series(1, index=['a'])
- del s['a']
- assert_series_equal(s, Series(dtype='int64', index=Index(
- [], dtype='object')))
- s['a'] = 1
- assert_series_equal(s, Series(1, index=['a']))
- del s['a']
- assert_series_equal(s, Series(dtype='int64', index=Index(
- [], dtype='object')))
+ result = df.loc[start:end]
+ tm.assert_frame_equal(result, expected)
- def test_slice_float64(self):
- values = np.arange(10., 50., 2)
- index = Index(values)
- start, end = values[[5, 15]]
+def test_getitem_negative_out_of_bounds():
+ s = Series(tm.rands_array(5, 10), index=tm.rands_array(10, 10))
- s = Series(np.random.randn(20), index=index)
+ pytest.raises(IndexError, s.__getitem__, -11)
+ pytest.raises(IndexError, s.__setitem__, -11, 'foo')
- result = s[start:end]
- expected = s.iloc[5:16]
- assert_series_equal(result, expected)
- result = s.loc[start:end]
- assert_series_equal(result, expected)
+def test_getitem_regression():
+ s = Series(lrange(5), index=lrange(5))
+ result = s[lrange(5)]
+ assert_series_equal(result, s)
- df = DataFrame(np.random.randn(20, 3), index=index)
- result = df[start:end]
- expected = df.iloc[5:16]
- tm.assert_frame_equal(result, expected)
+def test_getitem_setitem_slice_bug():
+ s = Series(lrange(10), lrange(10))
+ result = s[-12:]
+ assert_series_equal(result, s)
- result = df.loc[start:end]
- tm.assert_frame_equal(result, expected)
+ result = s[-7:]
+ assert_series_equal(result, s[3:])
- def test_getitem_negative_out_of_bounds(self):
- s = Series(tm.rands_array(5, 10), index=tm.rands_array(10, 10))
+ result = s[:-12]
+ assert_series_equal(result, s[:0])
- pytest.raises(IndexError, s.__getitem__, -11)
- pytest.raises(IndexError, s.__setitem__, -11, 'foo')
+ s = Series(lrange(10), lrange(10))
+ s[-12:] = 0
+ assert (s == 0).all()
- def test_getitem_regression(self):
- s = Series(lrange(5), index=lrange(5))
- result = s[lrange(5)]
- assert_series_equal(result, s)
+ s[:-12] = 5
+ assert (s == 0).all()
- def test_getitem_setitem_slice_bug(self):
- s = Series(lrange(10), lrange(10))
- result = s[-12:]
- assert_series_equal(result, s)
- result = s[-7:]
- assert_series_equal(result, s[3:])
+def test_getitem_setitem_slice_integers():
+ s = Series(np.random.randn(8), index=[2, 4, 6, 8, 10, 12, 14, 16])
- result = s[:-12]
- assert_series_equal(result, s[:0])
+ result = s[:4]
+ expected = s.reindex([2, 4, 6, 8])
+ assert_series_equal(result, expected)
- s = Series(lrange(10), lrange(10))
- s[-12:] = 0
- assert (s == 0).all()
+ s[:4] = 0
+ assert (s[:4] == 0).all()
+ assert not (s[4:] == 0).any()
- s[:-12] = 5
- assert (s == 0).all()
- def test_getitem_setitem_slice_integers(self):
- s = Series(np.random.randn(8), index=[2, 4, 6, 8, 10, 12, 14, 16])
+def test_setitem_float_labels():
+ # note labels are floats
+ s = Series(['a', 'b', 'c'], index=[0, 0.5, 1])
+ tmp = s.copy()
- result = s[:4]
- expected = s.reindex([2, 4, 6, 8])
- assert_series_equal(result, expected)
+ s.loc[1] = 'zoo'
+ tmp.iloc[2] = 'zoo'
- s[:4] = 0
- assert (s[:4] == 0).all()
- assert not (s[4:] == 0).any()
+ assert_series_equal(s, tmp)
- def test_setitem_float_labels(self):
- # note labels are floats
- s = Series(['a', 'b', 'c'], index=[0, 0.5, 1])
- tmp = s.copy()
- s.loc[1] = 'zoo'
- tmp.iloc[2] = 'zoo'
+def test_slice_float_get_set(test_data):
+ pytest.raises(TypeError, lambda: test_data.ts[4.0:10.0])
- assert_series_equal(s, tmp)
+ def f():
+ test_data.ts[4.0:10.0] = 0
- def test_slice_float_get_set(self):
- pytest.raises(TypeError, lambda: self.ts[4.0:10.0])
+ pytest.raises(TypeError, f)
- def f():
- self.ts[4.0:10.0] = 0
+ pytest.raises(TypeError, test_data.ts.__getitem__, slice(4.5, 10.0))
+ pytest.raises(TypeError, test_data.ts.__setitem__, slice(4.5, 10.0), 0)
- pytest.raises(TypeError, f)
- pytest.raises(TypeError, self.ts.__getitem__, slice(4.5, 10.0))
- pytest.raises(TypeError, self.ts.__setitem__, slice(4.5, 10.0), 0)
+def test_slice_floats2():
+ s = Series(np.random.rand(10), index=np.arange(10, 20, dtype=float))
- def test_slice_floats2(self):
- s = Series(np.random.rand(10), index=np.arange(10, 20, dtype=float))
+ assert len(s.loc[12.0:]) == 8
+ assert len(s.loc[12.5:]) == 7
- assert len(s.loc[12.0:]) == 8
- assert len(s.loc[12.5:]) == 7
+ i = np.arange(10, 20, dtype=float)
+ i[2] = 12.2
+ s.index = i
+ assert len(s.loc[12.0:]) == 8
+ assert len(s.loc[12.5:]) == 7
- i = np.arange(10, 20, dtype=float)
- i[2] = 12.2
- s.index = i
- assert len(s.loc[12.0:]) == 8
- assert len(s.loc[12.5:]) == 7
- def test_int_indexing(self):
- s = Series(np.random.randn(6), index=[0, 0, 1, 1, 2, 2])
+def test_int_indexing():
+ s = Series(np.random.randn(6), index=[0, 0, 1, 1, 2, 2])
- pytest.raises(KeyError, s.__getitem__, 5)
+ pytest.raises(KeyError, s.__getitem__, 5)
- pytest.raises(KeyError, s.__getitem__, 'c')
+ pytest.raises(KeyError, s.__getitem__, 'c')
- # not monotonic
- s = Series(np.random.randn(6), index=[2, 2, 0, 0, 1, 1])
+ # not monotonic
+ s = Series(np.random.randn(6), index=[2, 2, 0, 0, 1, 1])
- pytest.raises(KeyError, s.__getitem__, 5)
+ pytest.raises(KeyError, s.__getitem__, 5)
- pytest.raises(KeyError, s.__getitem__, 'c')
+ pytest.raises(KeyError, s.__getitem__, 'c')
| Moving series/indexing tests from classes to fixtures (issue #20014, part 1)
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/20034 | 2018-03-07T13:05:32Z | 2018-03-07T21:09:42Z | 2018-03-07T21:09:42Z | 2018-03-08T07:58:03Z |
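The diff above replaces `unittest`-style test classes (methods taking `self`, state inherited from a shared `TestData` base) with module-level functions that receive a `test_data` fixture argument. A minimal sketch of that migration pattern, using an illustrative `SimpleData` holder rather than pandas' actual `TestData` class:

```python
import numpy as np
import pandas as pd
import pytest


class SimpleData:
    """Stand-in for the shared-state class the old tests inherited from.

    The field names (`ts`, `series`) mirror the attributes used in the
    diff; their contents here are illustrative.
    """

    def __init__(self):
        self.ts = pd.Series(np.arange(10.0),
                            index=pd.date_range("2000-01-01", periods=10))
        self.series = pd.Series(np.arange(5.0), index=list("abcde"))


@pytest.fixture
def test_data():
    # Each test gets a fresh instance, instead of every method reading
    # (and possibly mutating) attributes on a shared class.
    return SimpleData()


def test_head_tail(test_data):
    # Mirrors the rewritten test above: the fixture argument replaces `self`.
    pd.testing.assert_series_equal(test_data.series.head(),
                                   test_data.series[:5])
```

One benefit of this shape, visible in the diff itself: tests that never touch the shared data (`test_cast_on_putmask`, `test_drop`, ...) simply drop the parameter, making their independence explicit.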
TST: xfail some tests for mpl 2.2 compat | diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index 18eefa6a14a37..9b721e45099fe 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -1356,10 +1356,10 @@ def test_plot_outofbounds_datetime(self):
values = [datetime(1677, 1, 1, 12), datetime(1677, 1, 2, 12)]
ax.plot(values)
+ @td.xfail_if_mpl_2_2
@pytest.mark.skip(
is_platform_mac(),
"skip on mac for precision display issue on older mpl")
- @pytest.mark.xfail(reason="suspect on mpl 2.2.2")
def test_format_timedelta_ticks_narrow(self):
if self.mpl_ge_2_0_0:
@@ -1381,10 +1381,10 @@ def test_format_timedelta_ticks_narrow(self):
for l, l_expected in zip(labels, expected_labels):
assert l.get_text() == l_expected
+ @td.xfail_if_mpl_2_2
@pytest.mark.skip(
is_platform_mac(),
"skip on mac for precision display issue on older mpl")
- @pytest.mark.xfail(reason="suspect on mpl 2.2.2")
def test_format_timedelta_ticks_wide(self):
if self.mpl_ge_2_0_0:
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 3d25b0b51e052..b29afcb404ac6 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -2461,6 +2461,7 @@ def test_errorbar_asymmetrical(self):
tm.close()
+ @td.xfail_if_mpl_2_2
def test_table(self):
df = DataFrame(np.random.rand(10, 3),
index=list(string.ascii_letters[:10]))
diff --git a/pandas/tests/plotting/test_misc.py b/pandas/tests/plotting/test_misc.py
index 9e538ae130a85..c5ce8aba9d80e 100644
--- a/pandas/tests/plotting/test_misc.py
+++ b/pandas/tests/plotting/test_misc.py
@@ -52,6 +52,7 @@ def test_bootstrap_plot(self):
@td.skip_if_no_mpl
class TestDataFramePlots(TestPlotBase):
+ @td.xfail_if_mpl_2_2
@td.skip_if_no_scipy
def test_scatter_matrix_axis(self):
scatter_matrix = plotting.scatter_matrix
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 278be433183fa..5dc7d52e05778 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -792,6 +792,7 @@ def test_errorbar_plot(self):
with pytest.raises((ValueError, TypeError)):
s.plot(yerr=s_err)
+ @td.xfail_if_mpl_2_2
def test_table(self):
_check_plot_works(self.series.plot, table=True)
_check_plot_works(self.series.plot, table=self.series)
diff --git a/pandas/util/_test_decorators.py b/pandas/util/_test_decorators.py
index b2745ab5eec77..8ad73538fbec1 100644
--- a/pandas/util/_test_decorators.py
+++ b/pandas/util/_test_decorators.py
@@ -89,6 +89,17 @@ def _skip_if_mpl_1_5():
mod.use("Agg", warn=False)
+def _skip_if_mpl_2_2():
+ mod = safe_import("matplotlib")
+
+ if mod:
+ v = mod.__version__
+ if LooseVersion(v) > LooseVersion('2.1.2'):
+ return True
+ else:
+ mod.use("Agg", warn=False)
+
+
def _skip_if_has_locale():
lang, _ = locale.getlocale()
if lang is not None:
@@ -151,6 +162,8 @@ def decorated_func(func):
reason="Missing matplotlib dependency")
skip_if_mpl_1_5 = pytest.mark.skipif(_skip_if_mpl_1_5(),
reason="matplotlib 1.5")
+xfail_if_mpl_2_2 = pytest.mark.xfail(_skip_if_mpl_2_2(),
+ reason="matplotlib 2.2")
skip_if_32bit = pytest.mark.skipif(is_platform_32bit(),
reason="skipping for 32 bit")
skip_if_windows = pytest.mark.skipif(is_platform_windows(),
| xref #20031
| https://api.github.com/repos/pandas-dev/pandas/pulls/20033 | 2018-03-07T12:45:19Z | 2018-03-07T13:15:50Z | 2018-03-07T13:15:50Z | 2018-03-07T13:15:50Z |
DOC: misc typos | diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst
index 3d4bb8ec57794..e4ce7ebd01dac 100644
--- a/doc/source/categorical.rst
+++ b/doc/source/categorical.rst
@@ -177,7 +177,7 @@ are consistent among all columns.
.. note::
To perform table-wise conversion, where all labels in the entire ``DataFrame`` are used as
- categories for each column, the ``categories`` parameter can be determined programatically by
+ categories for each column, the ``categories`` parameter can be determined programmatically by
``categories = pd.unique(df.values.ravel())``.
If you already have ``codes`` and ``categories``, you can use the
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 4a09d636ee320..6b10d2ca3b5b2 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -4757,7 +4757,7 @@ def _apply_to_column_groupbys(self, func):
keys=self._selected_obj.columns, axis=1)
def _fill(self, direction, limit=None):
- """Overriden method to join grouped columns in output"""
+ """Overridden method to join grouped columns in output"""
res = super(DataFrameGroupBy, self)._fill(direction, limit=limit)
output = collections.OrderedDict(
(grp.name, grp.grouper) for grp in self.grouper.groupings)
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index 0b80af11520b5..6b39717213c0d 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -1014,7 +1014,7 @@ class _WriterBase(SharedItems):
def set_engine_and_path(self, request, merge_cells, engine, ext):
"""Fixture to set engine and open file for use in each test case
- Rather than requiring `engine=...` to be provided explictly as an
+ Rather than requiring `engine=...` to be provided explicitly as an
argument in each test, this fixture sets a global option to dictate
which engine should be used to write Excel files. After executing
the test it rolls back said change to the global option.
| Found via `codespell -q 3 -I ../pandas-whitelist.txt` | https://api.github.com/repos/pandas-dev/pandas/pulls/20029 | 2018-03-07T05:09:51Z | 2018-03-07T11:10:40Z | 2018-03-07T11:10:40Z | 2018-03-07T15:06:52Z |
CLN: Deprecate Index.summary (GH18217) | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 791365295c268..53b91e79f5477 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -736,6 +736,7 @@ Deprecations
- :attr:`Timestamp.weekday_name`, :attr:`DatetimeIndex.weekday_name`, and :attr:`Series.dt.weekday_name` are deprecated in favor of :meth:`Timestamp.day_name`, :meth:`DatetimeIndex.day_name`, and :meth:`Series.dt.day_name` (:issue:`12806`)
- ``pandas.tseries.plotting.tsplot`` is deprecated. Use :func:`Series.plot` instead (:issue:`18627`)
+- ``Index.summary()`` is deprecated and will be removed in a future version (:issue:`18217`)
.. _whatsnew_0230.prior_deprecations:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index eaab17513aaf4..7dbfd0f733496 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2005,7 +2005,7 @@ def info(self, verbose=None, buf=None, max_cols=None, memory_usage=None,
lines = []
lines.append(str(type(self)))
- lines.append(self.index.summary())
+ lines.append(self.index._summary())
if len(self.columns) == 0:
lines.append('Empty %s' % type(self).__name__)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index f69777af31c9c..a7acd58d8d19e 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1385,7 +1385,19 @@ def _has_complex_internals(self):
# to disable groupby tricks in MultiIndex
return False
- def summary(self, name=None):
+ def _summary(self, name=None):
+ """
+ Return a summarized representation
+
+ Parameters
+ ----------
+ name : str
+ name to use in the summary representation
+
+ Returns
+ -------
+ String with a summarized representation of the index
+ """
if len(self) > 0:
head = self[0]
if (hasattr(head, 'format') and
@@ -1404,6 +1416,15 @@ def summary(self, name=None):
name = type(self).__name__
return '%s: %s entries%s' % (name, len(self), index_summary)
+ def summary(self, name=None):
+ """
+ Return a summarized representation
+ .. deprecated:: 0.23.0
+ """
+ warnings.warn("'summary' is deprecated and will be removed in a "
+ "future version.", FutureWarning, stacklevel=2)
+ return self._summary(name)
+
def _mpl_repr(self):
# how to represent ourselves to matplotlib
return self.values
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 1be71ff68c2fb..b3372859a116c 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -1049,9 +1049,18 @@ def where(self, cond, other=None):
return self._shallow_copy(result,
**self._get_attributes_dict())
- def summary(self, name=None):
+ def _summary(self, name=None):
"""
- return a summarized representation
+ Return a summarized representation
+
+ Parameters
+ ----------
+ name : str
+ name to use in the summary representation
+
+ Returns
+ -------
+ String with a summarized representation of the index
"""
formatter = self._formatter_func
if len(self) > 0:
diff --git a/pandas/tests/indexes/datetimes/test_formats.py b/pandas/tests/indexes/datetimes/test_formats.py
index 0d1a9e65ce6c6..63d5338d88d76 100644
--- a/pandas/tests/indexes/datetimes/test_formats.py
+++ b/pandas/tests/indexes/datetimes/test_formats.py
@@ -182,7 +182,7 @@ def test_dti_summary(self):
for idx, expected in zip([idx1, idx2, idx3, idx4, idx5, idx6],
[exp1, exp2, exp3, exp4, exp5, exp6]):
- result = idx.summary()
+ result = idx._summary()
assert result == expected
def test_dti_business_repr(self):
@@ -191,15 +191,15 @@ def test_dti_business_repr(self):
def test_dti_business_summary(self):
rng = pd.bdate_range(datetime(2009, 1, 1), datetime(2010, 1, 1))
- rng.summary()
- rng[2:2].summary()
+ rng._summary()
+ rng[2:2]._summary()
def test_dti_business_summary_pytz(self):
- pd.bdate_range('1/1/2005', '1/1/2009', tz=pytz.utc).summary()
+ pd.bdate_range('1/1/2005', '1/1/2009', tz=pytz.utc)._summary()
def test_dti_business_summary_dateutil(self):
pd.bdate_range('1/1/2005', '1/1/2009',
- tz=dateutil.tz.tzutc()).summary()
+ tz=dateutil.tz.tzutc())._summary()
def test_dti_custom_business_repr(self):
# only really care that it works
@@ -209,12 +209,13 @@ def test_dti_custom_business_repr(self):
def test_dti_custom_business_summary(self):
rng = pd.bdate_range(datetime(2009, 1, 1), datetime(2010, 1, 1),
freq='C')
- rng.summary()
- rng[2:2].summary()
+ rng._summary()
+ rng[2:2]._summary()
def test_dti_custom_business_summary_pytz(self):
- pd.bdate_range('1/1/2005', '1/1/2009', freq='C', tz=pytz.utc).summary()
+ pd.bdate_range('1/1/2005', '1/1/2009', freq='C',
+ tz=pytz.utc)._summary()
def test_dti_custom_business_summary_dateutil(self):
pd.bdate_range('1/1/2005', '1/1/2009', freq='C',
- tz=dateutil.tz.tzutc()).summary()
+ tz=dateutil.tz.tzutc())._summary()
diff --git a/pandas/tests/indexes/period/test_formats.py b/pandas/tests/indexes/period/test_formats.py
index b1a1060bf86c4..c3926cc5f1633 100644
--- a/pandas/tests/indexes/period/test_formats.py
+++ b/pandas/tests/indexes/period/test_formats.py
@@ -205,5 +205,5 @@ def test_summary(self):
idx6, idx7, idx8, idx9],
[exp1, exp2, exp3, exp4, exp5,
exp6, exp7, exp8, exp9]):
- result = idx.summary()
+ result = idx._summary()
assert result == expected
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index e8f05cb928cad..b30cc4b012f14 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -1055,14 +1055,21 @@ def test_is_all_dates(self):
assert not self.intIndex.is_all_dates
def test_summary(self):
- self._check_method_works(Index.summary)
+ self._check_method_works(Index._summary)
# GH3869
ind = Index(['{other}%s', "~:{range}:0"], name='A')
- result = ind.summary()
+ result = ind._summary()
# shouldn't be formatted accidentally.
assert '~:{range}:0' in result
assert '{other}%s' in result
+ # GH18217
+ def test_summary_deprecated(self):
+ ind = Index(['{other}%s', "~:{range}:0"], name='A')
+
+ with tm.assert_produces_warning(FutureWarning):
+ ind.summary()
+
def test_format(self):
self._check_method_works(Index.format)
diff --git a/pandas/tests/indexes/timedeltas/test_formats.py b/pandas/tests/indexes/timedeltas/test_formats.py
index a8375459d74e4..09921fac80d22 100644
--- a/pandas/tests/indexes/timedeltas/test_formats.py
+++ b/pandas/tests/indexes/timedeltas/test_formats.py
@@ -92,5 +92,5 @@ def test_summary(self):
for idx, expected in zip([idx1, idx2, idx3, idx4, idx5],
[exp1, exp2, exp3, exp4, exp5]):
- result = idx.summary()
+ result = idx._summary()
assert result == expected
| - [X] closes #18217
- [X] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/20028 | 2018-03-07T03:04:48Z | 2018-03-16T10:27:12Z | 2018-03-16T10:27:12Z | 2018-03-17T23:52:10Z |
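The deprecate-and-delegate pattern this PR applies (public `summary()` warns and forwards to the private `_summary()`, so internal callers never trigger the warning) can be sketched in isolation. The `Index` class below is a minimal stand-in for illustration, not pandas' actual implementation:

```python
import warnings

class Index:
    """Minimal stand-in class, not pandas' actual Index."""

    def _summary(self, name=None):
        # The real logic lives on the private method, so internal callers
        # (e.g. DataFrame.info) can use it without emitting a warning.
        name = name or type(self).__name__
        return "{}: 0 entries".format(name)

    def summary(self, name=None):
        # The public method becomes a thin shim that warns, then delegates.
        warnings.warn("'summary' is deprecated and will be removed in a "
                      "future version.", FutureWarning, stacklevel=2)
        return self._summary(name)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = Index().summary()

print(result)                        # Index: 0 entries
print(caught[0].category.__name__)   # FutureWarning
```

Keeping the public name as a shim for a few releases is what lets the tests in this PR assert the `FutureWarning` with `tm.assert_produces_warning` while everything else moves to `_summary`.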
Fix typos in test_interval_new.py | diff --git a/pandas/tests/indexing/interval/test_interval_new.py b/pandas/tests/indexing/interval/test_interval_new.py
index 16326845de1d5..3eb5f38ba0c80 100644
--- a/pandas/tests/indexing/interval/test_interval_new.py
+++ b/pandas/tests/indexing/interval/test_interval_new.py
@@ -1,6 +1,5 @@
import pytest
import numpy as np
-import pandas as pd
from pandas import Series, IntervalIndex, Interval
import pandas.util.testing as tm
@@ -170,17 +169,17 @@ def test_loc_with_overlap(self):
# interval
expected = 0
- result = s.loc[pd.interval(1, 5)]
+ result = s.loc[Interval(1, 5)]
tm.assert_series_equal(expected, result)
- result = s[pd.interval(1, 5)]
+ result = s[Interval(1, 5)]
tm.assert_series_equal(expected, result)
expected = s
- result = s.loc[[pd.interval(1, 5), pd.Interval(3, 7)]]
+ result = s.loc[[Interval(1, 5), Interval(3, 7)]]
tm.assert_series_equal(expected, result)
- result = s[[pd.interval(1, 5), pd.Interval(3, 7)]]
+ result = s[[Interval(1, 5), Interval(3, 7)]]
tm.assert_series_equal(expected, result)
with pytest.raises(KeyError):
@@ -197,17 +196,17 @@ def test_loc_with_overlap(self):
# slices with interval (only exact matches)
expected = s
- result = s.loc[pd.interval(1, 5):pd.Interval(3, 7)]
+ result = s.loc[Interval(1, 5):Interval(3, 7)]
tm.assert_series_equal(expected, result)
- result = s[pd.interval(1, 5):pd.Interval(3, 7)]
+ result = s[Interval(1, 5):Interval(3, 7)]
tm.assert_series_equal(expected, result)
with pytest.raises(KeyError):
- s.loc[pd.interval(1, 6):pd.Interval(3, 8)]
+ s.loc[Interval(1, 6):Interval(3, 8)]
with pytest.raises(KeyError):
- s[pd.interval(1, 6):pd.Interval(3, 8)]
+ s[Interval(1, 6):Interval(3, 8)]
# slices with scalar raise for overlapping intervals
# TODO KeyError is the appropriate error?
@@ -217,7 +216,7 @@ def test_loc_with_overlap(self):
def test_non_unique(self):
idx = IntervalIndex.from_tuples([(1, 3), (3, 7)])
- s = pd.Series(range(len(idx)), index=idx)
+ s = Series(range(len(idx)), index=idx)
result = s.loc[Interval(1, 3)]
assert result == 0
| Very minor. A few instances of lowercase `pd.interval` with an unnecessary `import pandas as pd`. | https://api.github.com/repos/pandas-dev/pandas/pulls/20026 | 2018-03-07T02:04:53Z | 2018-03-07T13:57:51Z | 2018-03-07T13:57:51Z | 2018-09-24T17:25:15Z |
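A typo like `pd.interval` can survive review because Python resolves attribute access at runtime, not at import time — the line only fails when it actually executes. A minimal sketch (the `pd` namespace here is a hypothetical stand-in, not the pandas module):

```python
import types

# Hypothetical stand-in: only the capitalized name exists, mirroring
# pandas exposing Interval but no lowercase interval.
pd = types.SimpleNamespace(Interval=lambda left, right: (left, right))

# The capitalized name works fine...
pair = pd.Interval(1, 5)

# ...while the lowercase typo only blows up when this line runs,
# which is why it can hide inside tests that are skipped or never reached.
try:
    pd.interval(1, 5)
    raised = None
except AttributeError as exc:
    raised = type(exc).__name__

print(pair)    # (1, 5)
print(raised)  # AttributeError
```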
Replace api coverage script | diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index e159af9958fde..61efc6a707d31 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -351,8 +351,10 @@ Some other important things to know about the docs:
pandoc doc/source/contributing.rst -t markdown_github > CONTRIBUTING.md
-The utility script ``scripts/api_rst_coverage.py`` can be used to compare
-the list of methods documented in ``doc/source/api.rst`` (which is used to generate
+The utility script ``scripts/validate_docstrings.py`` can be used to get a csv
+summary of the API documentation. And also validate common errors in the docstring
+of a specific class, function or method. The summary also compares the list of
+methods documented in ``doc/source/api.rst`` (which is used to generate
the `API Reference <http://pandas.pydata.org/pandas-docs/stable/api.html>`_ page)
and the actual public methods.
This will identify methods documented in ``doc/source/api.rst`` that are not actually
diff --git a/scripts/api_rst_coverage.py b/scripts/api_rst_coverage.py
deleted file mode 100755
index 4800e80d82891..0000000000000
--- a/scripts/api_rst_coverage.py
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/usr/bin/env python
-# -*- encoding: utf-8 -*-
-"""
-Script to generate a report with the coverage of the API in the docs.
-
-The output of this script shows the existing methods that are not
-included in the API documentation, as well as the methods documented
-that do not exist. Ideally, no method should be listed. Currently it
-considers the methods of Series, DataFrame and Panel.
-
-Deprecated methods are usually removed from the documentation, while
-still available for three minor versions. They are listed with the
-word deprecated and the version number next to them.
-
-Usage::
-
- $ PYTHONPATH=.. ./api_rst_coverage.py
-
-"""
-import os
-import re
-import inspect
-import pandas as pd
-
-
-def main():
- # classes whose members to check
- classes = [pd.Series, pd.DataFrame, pd.Panel]
-
- def class_name_sort_key(x):
- if x.startswith('Series'):
- # make sure Series precedes DataFrame, and Panel.
- return ' ' + x
- else:
- return x
-
- def get_docstring(x):
- class_name, method = x.split('.')
- obj = getattr(getattr(pd, class_name), method)
- return obj.__doc__
-
- def deprecation_version(x):
- pattern = re.compile('\.\. deprecated:: ([0-9]+\.[0-9]+\.[0-9]+)')
- doc = get_docstring(x)
- match = pattern.search(doc)
- if match:
- return match.groups()[0]
-
- def add_notes(x):
- # Some methods are not documented in api.rst because they
- # have been deprecated. Adding a comment to detect them easier.
- doc = get_docstring(x)
- note = None
- if not doc:
- note = 'no docstring'
- else:
- version = deprecation_version(x)
- if version:
- note = 'deprecated in {}'.format(version)
-
- return '{} ({})'.format(x, note) if note else x
-
- # class members
- class_members = set()
- for cls in classes:
- for member in inspect.getmembers(cls):
- class_members.add('{cls}.{member}'.format(cls=cls.__name__,
- member=member[0]))
-
- # class members referenced in api.rst
- api_rst_members = set()
- base_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
- api_rst_fname = os.path.join(base_path, 'doc', 'source', 'api.rst')
- class_names = (cls.__name__ for cls in classes)
- pattern = re.compile('({})\.(\w+)'.format('|'.join(class_names)))
- with open(api_rst_fname, 'r') as f:
- for line in f:
- match = pattern.search(line)
- if match:
- api_rst_members.add(match.group(0))
-
- print()
- print("Documented members in api.rst that aren't actual class members:")
- for x in sorted(api_rst_members.difference(class_members),
- key=class_name_sort_key):
- print(x)
-
- print()
- print("Class members (other than those beginning with '_') "
- "missing from api.rst:")
- for x in sorted(class_members.difference(api_rst_members),
- key=class_name_sort_key):
- if '._' not in x:
- print(add_notes(x))
-
-
-if __name__ == "__main__":
- main()
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index a90a55c6ce1ab..ce3814a1314c1 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -18,12 +18,13 @@
import csv
import re
import functools
+import collections
import argparse
import contextlib
+import pydoc
import inspect
import importlib
import doctest
-import pydoc
try:
from io import StringIO
except ImportError:
@@ -42,6 +43,28 @@
PRIVATE_CLASSES = ['NDFrame', 'IndexOpsMixin']
+def _load_obj(obj_name):
+ for maxsplit in range(1, obj_name.count('.') + 1):
+ # TODO when py3 only replace by: module, *func_parts = ...
+ func_name_split = obj_name.rsplit('.', maxsplit=maxsplit)
+ module = func_name_split[0]
+ func_parts = func_name_split[1:]
+ try:
+ obj = importlib.import_module(module)
+ except ImportError:
+ pass
+ else:
+ continue
+
+ if 'module' not in locals():
+ raise ImportError('No module can be imported '
+ 'from "{}"'.format(obj_name))
+
+ for part in func_parts:
+ obj = getattr(obj, part)
+ return obj
+
+
def _to_original_callable(obj):
while True:
if inspect.isfunction(obj) or inspect.isclass(obj):
@@ -75,12 +98,17 @@ class Docstring:
def __init__(self, method_name, method_obj):
self.method_name = method_name
self.method_obj = method_obj
- self.raw_doc = pydoc.getdoc(method_obj)
- self.doc = NumpyDocString(self.raw_doc)
+ self.raw_doc = method_obj.__doc__ or ''
+ self.clean_doc = pydoc.getdoc(self.method_obj)
+ self.doc = NumpyDocString(self.clean_doc)
def __len__(self):
return len(self.raw_doc)
+ @property
+ def is_function_or_method(self):
+ return inspect.isfunction(self.method_obj)
+
@property
def source_file_name(self):
fname = inspect.getsourcefile(self.method_obj)
@@ -103,9 +131,31 @@ def github_url(self):
return url
@property
- def first_line_blank(self):
+ def start_blank_lines(self):
+ i = None
+ if self.raw_doc:
+ for i, row in enumerate(self.raw_doc.split('\n')):
+ if row.strip():
+ break
+ return i
+
+ @property
+ def end_blank_lines(self):
+ i = None
if self.raw_doc:
- return not bool(self.raw_doc.split('\n')[0].strip())
+ for i, row in enumerate(reversed(self.raw_doc.split('\n'))):
+ if row.strip():
+ break
+ return i
+
+ @property
+ def double_blank_lines(self):
+ prev = True
+ for row in self.raw_doc.split('\n'):
+ if not prev and not row.strip():
+ return True
+ prev = row.strip()
+ return False
@property
def summary(self):
@@ -125,18 +175,23 @@ def needs_summary(self):
@property
def doc_parameters(self):
- return self.doc['Parameters']
+ return collections.OrderedDict((name, (type_, ''.join(desc)))
+ for name, type_, desc
+ in self.doc['Parameters'])
@property
def signature_parameters(self):
- if not (inspect.isfunction(self.method_obj)
- or inspect.isclass(self.method_obj)):
- return tuple()
if (inspect.isclass(self.method_obj)
and self.method_name.split('.')[-1] in {'dt', 'str', 'cat'}):
# accessor classes have a signature, but don't want to show this
return tuple()
- params = tuple(inspect.signature(self.method_obj).parameters.keys())
+ try:
+ signature = inspect.signature(self.method_obj)
+ except (TypeError, ValueError):
+ # Some objects, mainly in C extensions do not support introspection
+ # of the signature
+ return tuple()
+ params = tuple(signature.parameters.keys())
if params and params[0] in ('self', 'cls'):
return params[1:]
return params
@@ -145,11 +200,7 @@ def signature_parameters(self):
def parameter_mismatches(self):
errs = []
signature_params = self.signature_parameters
- if self.doc_parameters:
- doc_params = list(zip(*self.doc_parameters))[0]
- else:
- doc_params = []
-
+ doc_params = tuple(self.doc_parameters)
missing = set(signature_params) - set(doc_params)
if missing:
errs.append('Parameters {!r} not documented'.format(missing))
@@ -168,9 +219,17 @@ def parameter_mismatches(self):
def correct_parameters(self):
return not bool(self.parameter_mismatches)
+ def parameter_type(self, param):
+ return self.doc_parameters[param][0]
+
+ def parameter_desc(self, param):
+ return self.doc_parameters[param][1]
+
@property
def see_also(self):
- return self.doc['See Also']
+ return collections.OrderedDict((name, ''.join(desc))
+ for name, desc, _
+ in self.doc['See Also'])
@property
def examples(self):
@@ -210,24 +269,34 @@ def examples_errors(self):
def get_api_items():
api_fname = os.path.join(BASE_PATH, 'doc', 'source', 'api.rst')
+ previous_line = current_section = current_subsection = ''
position = None
with open(api_fname) as f:
for line in f:
+ line = line.strip()
+ if len(line) == len(previous_line):
+ if set(line) == set('-'):
+ current_section = previous_line
+ continue
+ if set(line) == set('~'):
+ current_subsection = previous_line
+ continue
+
if line.startswith('.. currentmodule::'):
current_module = line.replace('.. currentmodule::', '').strip()
continue
- if line == '.. autosummary::\n':
+ if line == '.. autosummary::':
position = 'autosummary'
continue
if position == 'autosummary':
- if line == '\n':
+ if line == '':
position = 'items'
continue
if position == 'items':
- if line == '\n':
+ if line == '':
position = None
continue
item = line.strip()
@@ -235,82 +304,107 @@ def get_api_items():
for part in item.split('.'):
func = getattr(func, part)
- yield '.'.join([current_module, item]), func
+ yield ('.'.join([current_module, item]), func,
+ current_section, current_subsection)
+
+ previous_line = line
+
+
+def _csv_row(func_name, func_obj, section, subsection, in_api, seen={}):
+ obj_type = type(func_obj).__name__
+ original_callable = _to_original_callable(func_obj)
+ if original_callable is None:
+ return [func_name, obj_type] + [''] * 12, ''
+ else:
+ doc = Docstring(func_name, original_callable)
+ key = doc.source_file_name, doc.source_file_def_line
+ shared_code = seen.get(key, '')
+ return [func_name,
+ obj_type,
+ in_api,
+ int(doc.deprecated),
+ section,
+ subsection,
+ doc.source_file_name,
+ doc.source_file_def_line,
+ doc.github_url,
+ int(bool(doc.summary)),
+ int(bool(doc.extended_summary)),
+ int(doc.correct_parameters),
+ int(bool(doc.examples)),
+ shared_code], key
def validate_all():
writer = csv.writer(sys.stdout)
- writer.writerow(['Function or method',
- 'Type',
- 'File',
- 'Code line',
- 'GitHub link',
- 'Is deprecated',
- 'Has summary',
- 'Has extended summary',
- 'Parameters ok',
- 'Has examples',
- 'Shared code with'])
+ cols = ('Function or method',
+ 'Type',
+ 'In API doc',
+ 'Is deprecated',
+ 'Section',
+ 'Subsection',
+ 'File',
+ 'Code line',
+ 'GitHub link',
+ 'Has summary',
+ 'Has extended summary',
+ 'Parameters ok',
+ 'Has examples',
+ 'Shared code with')
+ writer.writerow(cols)
seen = {}
- for func_name, func in get_api_items():
- obj_type = type(func).__name__
- original_callable = _to_original_callable(func)
- if original_callable is None:
- writer.writerow([func_name, obj_type] + [''] * 9)
- else:
- doc = Docstring(func_name, original_callable)
- key = doc.source_file_name, doc.source_file_def_line
- shared_code = seen.get(key, '')
- seen[key] = func_name
- writer.writerow([func_name,
- obj_type,
- doc.source_file_name,
- doc.source_file_def_line,
- doc.github_url,
- int(doc.deprecated),
- int(bool(doc.summary)),
- int(bool(doc.extended_summary)),
- int(doc.correct_parameters),
- int(bool(doc.examples)),
- shared_code])
+ api_items = list(get_api_items())
+ for func_name, func, section, subsection in api_items:
+ row, key = _csv_row(func_name, func, section, subsection,
+ in_api=1, seen=seen)
+ seen[key] = func_name
+ writer.writerow(row)
+
+ api_item_names = set(list(zip(*api_items))[0])
+ for class_ in (pandas.Series, pandas.DataFrame, pandas.Panel):
+ for member in inspect.getmembers(class_):
+ func_name = 'pandas.{}.{}'.format(class_.__name__, member[0])
+ if (not member[0].startswith('_') and
+ func_name not in api_item_names):
+ func = _load_obj(func_name)
+ row, key = _csv_row(func_name, func, section='', subsection='',
+ in_api=0)
+ writer.writerow(row)
return 0
def validate_one(func_name):
- for maxsplit in range(1, func_name.count('.') + 1):
- # TODO when py3 only replace by: module, *func_parts = ...
- func_name_split = func_name.rsplit('.', maxsplit=maxsplit)
- module = func_name_split[0]
- func_parts = func_name_split[1:]
- try:
- func_obj = importlib.import_module(module)
- except ImportError:
- pass
- else:
- continue
-
- if 'module' not in locals():
- raise ImportError('No module can be imported '
- 'from "{}"'.format(func_name))
-
- for part in func_parts:
- func_obj = getattr(func_obj, part)
-
+ func_obj = _load_obj(func_name)
doc = Docstring(func_name, func_obj)
sys.stderr.write(_output_header('Docstring ({})'.format(func_name)))
- sys.stderr.write('{}\n'.format(doc.raw_doc))
+ sys.stderr.write('{}\n'.format(doc.clean_doc))
errs = []
+ if doc.start_blank_lines != 1:
+ errs.append('Docstring text (summary) should start in the line '
+ 'immediately after the opening quotes (not in the same '
+ 'line, or leaving a blank line in between)')
+ if doc.end_blank_lines != 1:
+ errs.append('Closing quotes should be placed in the line after '
+ 'the last text in the docstring (do not close the '
+ 'quotes in the same line as the text, or leave a '
+ 'blank line between the last text and the quotes)')
+ if doc.double_blank_lines:
+ errs.append('Use only one blank line to separate sections or '
+ 'paragraphs')
+
if not doc.summary:
- errs.append('No summary found')
+ errs.append('No summary found (a short summary in a single line '
+ 'should be present at the beginning of the docstring)')
else:
if not doc.summary[0].isupper():
errs.append('Summary does not start with capital')
if doc.summary[-1] != '.':
errs.append('Summary does not end with dot')
- if doc.summary.split(' ')[0][-1] == 's':
+ if (doc.is_function_or_method and
+ doc.summary.split(' ')[0][-1] == 's'):
errs.append('Summary must start with infinitive verb, '
'not third person (e.g. use "Generate" instead of '
'"Generates")')
@@ -318,6 +412,25 @@ def validate_one(func_name):
errs.append('No extended summary found')
param_errs = doc.parameter_mismatches
+ for param in doc.doc_parameters:
+ if not doc.parameter_type(param):
+ param_errs.append('Parameter "{}" has no type'.format(param))
+ else:
+ if doc.parameter_type(param)[-1] == '.':
+ param_errs.append('Parameter "{}" type '
+ 'should not finish with "."'.format(param))
+
+ if not doc.parameter_desc(param):
+ param_errs.append('Parameter "{}" '
+ 'has no description'.format(param))
+ else:
+ if not doc.parameter_desc(param)[0].isupper():
+ param_errs.append('Parameter "{}" description '
+ 'should start with '
+ 'capital letter'.format(param))
+ if doc.parameter_desc(param)[-1] != '.':
+ param_errs.append('Parameter "{}" description '
+ 'should finish with "."'.format(param))
if param_errs:
errs.append('Errors in parameters section')
for param_err in param_errs:
@@ -328,6 +441,13 @@ def validate_one(func_name):
errs.append('Private classes ({}) should not be mentioned in public '
'docstring.'.format(mentioned_errs))
+ if not doc.see_also:
+ errs.append('See Also section not found')
+ else:
+ for rel_name, rel_desc in doc.see_also.items():
+ if not rel_desc:
+ errs.append('Missing description for '
+ 'See Also "{}" reference'.format(rel_name))
examples_errs = ''
if not doc.examples:
errs.append('No examples section found')
| - [ ] closes #xxxx
- [ ] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
This PR adds to the output of the script `validate_docstrings.py` the methods from `Series`, `DataFrame` and `Panel` that are not present in `doc/api.rst`. This makes the script `api_rst_coverage.py` obsolete, as its only functionality was to detect these methods, plus the methods in `doc/api.rst` that don't exist, which `validate_docstrings.py` already reported.
Besides that, the PR implements two minor changes. It adds new columns 'section' and 'subsection' to the csv returned by `validate_docstrings.py`, so the csv contains the section and subsection of `api.rst` in which each docstring is defined (besides giving context, this will be useful for dividing the docstrings into similar groups for the sprint).
Also, this PR adds some extra validations, like making sure the docstring starts on the line immediately after the opening triple quotes, and similar checks.
CC @jorisvandenbossche | https://api.github.com/repos/pandas-dev/pandas/pulls/20025 | 2018-03-07T00:24:09Z | 2018-03-08T11:07:09Z | 2018-03-08T11:07:09Z | 2018-03-08T11:07:20Z |
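Two of the new checks described above (the summary starting on the line after the opening quotes, and starting with a capital and ending with a dot) can be sketched with the standard library alone. This is a simplified illustration, not the script's actual code:

```python
import inspect

def summary_errors(obj):
    """Simplified sketch of two checks from validate_docstrings.py."""
    raw = obj.__doc__ or ''
    errs = []
    lines = raw.split('\n')
    # The summary should start on the line right after the opening quotes,
    # i.e. the first non-blank line should be line index 1.
    first_text = next((i for i, row in enumerate(lines) if row.strip()), None)
    if first_text != 1:
        errs.append('summary should start the line after the opening quotes')
    # inspect.getdoc dedents; the first paragraph is the summary.
    clean = inspect.getdoc(obj) or ''
    summary = clean.split('\n\n')[0].replace('\n', ' ')
    if summary and not summary[0].isupper():
        errs.append('summary does not start with a capital')
    if summary and not summary.endswith('.'):
        errs.append('summary does not end with a dot')
    return errs

def good():
    """
    Return the answer.
    """

def bad():
    """return the answer"""

print(summary_errors(good))       # []
print(len(summary_errors(bad)))   # 3
```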
Added .pytest_cache to gitignore | diff --git a/.gitignore b/.gitignore
index 0d4e8c6fb75a6..00dac6e336c37 100644
--- a/.gitignore
+++ b/.gitignore
@@ -88,8 +88,9 @@ scikits
*.c
*.cpp
-# Performance Testing #
-#######################
+# Unit / Performance Testing #
+##############################
+.pytest_cache/
asv_bench/env/
asv_bench/html/
asv_bench/results/
| After running the unit test suite, `.pytest_cache/` keeps showing up in the project root. Figured it was worth adding it to `.gitignore` to prevent it from being inadvertently committed. | https://api.github.com/repos/pandas-dev/pandas/pulls/20021 | 2018-03-06T20:46:15Z | 2018-03-07T13:59:40Z | 2018-03-07T13:59:40Z | 2018-03-07T17:20:47Z |
TST: Fix wrong argument in TestDataFrameAlterAxes.test_set_index_dst | diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index c824f0026af50..3e0ba26c20eb0 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -301,7 +301,7 @@ def test_set_index_timezone(self):
def test_set_index_dst(self):
di = pd.date_range('2006-10-29 00:00:00', periods=3,
- req='H', tz='US/Pacific')
+ freq='H', tz='US/Pacific')
df = pd.DataFrame(data={'a': [0, 1, 2], 'b': [3, 4, 5]},
index=di).reset_index()
| - [x] tests passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/20019 | 2018-03-06T17:07:59Z | 2018-03-07T13:55:49Z | 2018-03-07T13:55:49Z | 2018-03-07T15:52:27Z |
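The `req=` typo above went unnoticed because constructors that accept `**kwargs` silently drop unknown keyword names. The hypothetical stand-ins below sketch the permissive vs. strict signatures (neither is pandas' real code):

```python
def date_range_permissive(start=None, periods=None, freq=None, **kwargs):
    # **kwargs silently swallows any misspelled keyword, e.g. req='H'
    return {'start': start, 'periods': periods, 'freq': freq or 'D'}

def date_range_strict(start=None, periods=None, freq=None):
    # An explicit signature makes the same typo fail loudly.
    return {'start': start, 'periods': periods, 'freq': freq or 'D'}

# The typo passes silently and the intended hourly frequency is lost:
permissive = date_range_permissive('2006-10-29', periods=3, req='H')
print(permissive)  # {'start': '2006-10-29', 'periods': 3, 'freq': 'D'}

try:
    date_range_strict('2006-10-29', periods=3, req='H')
    error = None
except TypeError:
    error = 'TypeError'
print(error)  # TypeError
```

Replacing `**kwargs` with explicit keyword arguments in the index constructors (as the kwargs-validation PR in this dataset does) turns this class of silent bug into an immediate `TypeError`.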
BUG: Check for wrong arguments in index subclasses constructors | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 3afd9cff10e86..f686a042c1a74 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -935,6 +935,7 @@ Indexing
- Bug in :func:`IntervalIndex.symmetric_difference` where the symmetric difference with a non-``IntervalIndex`` did not raise (:issue:`18475`)
- Bug in :class:`IntervalIndex` where set operations that returned an empty ``IntervalIndex`` had the wrong dtype (:issue:`19101`)
- Bug in :meth:`DataFrame.drop_duplicates` where no ``KeyError`` is raised when passing in columns that don't exist on the ``DataFrame`` (issue:`19726`)
+- Bug in ``Index`` subclasses constructors that ignore unexpected keyword arguments (:issue:`19348`)
MultiIndex
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 218851b1713f2..71d39ad812d20 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -76,7 +76,7 @@ class CategoricalIndex(Index, accessor.PandasDelegate):
_attributes = ['name']
def __new__(cls, data=None, categories=None, ordered=None, dtype=None,
- copy=False, name=None, fastpath=False, **kwargs):
+ copy=False, name=None, fastpath=False):
if fastpath:
return cls._simple_new(data, name=name, dtype=dtype)
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index e5e9bba269fd4..491fefe8efee0 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -213,6 +213,10 @@ class DatetimeIndex(DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin,
Attempt to infer fall dst-transition hours based on order
name : object
Name to be stored in the index
+ dayfirst : bool, default False
+ If True, parse dates in `data` with the day first order
+ yearfirst : bool, default False
+ If True parse dates in `data` with the year first order
Attributes
----------
@@ -272,6 +276,7 @@ class DatetimeIndex(DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin,
Index : The base pandas Index type
TimedeltaIndex : Index of timedelta64 data
PeriodIndex : Index of Period data
+ pandas.to_datetime : Convert argument to datetime
"""
_typ = 'datetimeindex'
@@ -327,10 +332,10 @@ def _add_comparison_methods(cls):
@deprecate_kwarg(old_arg_name='infer_dst', new_arg_name='ambiguous',
mapping={True: 'infer', False: 'raise'})
def __new__(cls, data=None,
- freq=None, start=None, end=None, periods=None,
- copy=False, name=None, tz=None,
- verify_integrity=True, normalize=False,
- closed=None, ambiguous='raise', dtype=None, **kwargs):
+ freq=None, start=None, end=None, periods=None, tz=None,
+ normalize=False, closed=None, ambiguous='raise',
+ dayfirst=False, yearfirst=False, dtype=None,
+ copy=False, name=None, verify_integrity=True):
# This allows to later ensure that the 'copy' parameter is honored:
if isinstance(data, Index):
@@ -341,9 +346,6 @@ def __new__(cls, data=None,
if name is None and hasattr(data, 'name'):
name = data.name
- dayfirst = kwargs.pop('dayfirst', None)
- yearfirst = kwargs.pop('yearfirst', None)
-
freq_infer = False
if not isinstance(freq, DateOffset):
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index d431ea1e51e31..ccf2e5e3c4486 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -213,8 +213,8 @@ class IntervalIndex(IntervalMixin, Index):
_mask = None
- def __new__(cls, data, closed=None, name=None, copy=False, dtype=None,
- fastpath=False, verify_integrity=True):
+ def __new__(cls, data, closed=None, dtype=None, copy=False,
+ name=None, fastpath=False, verify_integrity=True):
if fastpath:
return cls._simple_new(data.left, data.right, closed, name,
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 73f4aee1c4880..8b6d945854960 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -208,8 +208,8 @@ class MultiIndex(Index):
rename = Index.set_names
def __new__(cls, levels=None, labels=None, sortorder=None, names=None,
- copy=False, verify_integrity=True, _set_identity=True,
- name=None, **kwargs):
+ dtype=None, copy=False, name=None,
+ verify_integrity=True, _set_identity=True):
# compat with Index
if name is not None:
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 97cb3fbd877dd..705dc36d92522 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -234,8 +234,15 @@ def _add_comparison_methods(cls):
cls.__ge__ = _period_index_cmp('__ge__', cls)
def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
- periods=None, copy=False, name=None, tz=None, dtype=None,
- **kwargs):
+ periods=None, tz=None, dtype=None, copy=False, name=None,
+ **fields):
+
+ valid_field_set = {'year', 'month', 'day', 'quarter',
+ 'hour', 'minute', 'second'}
+
+ if not set(fields).issubset(valid_field_set):
+ raise TypeError('__new__() got an unexpected keyword argument {}'.
+ format(list(set(fields) - valid_field_set)[0]))
if periods is not None:
if is_float(periods):
@@ -267,7 +274,7 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
data = np.asarray(ordinal, dtype=np.int64)
else:
data, freq = cls._generate_range(start, end, periods,
- freq, kwargs)
+ freq, fields)
return cls._from_ordinals(data, name=name, freq=freq)
if isinstance(data, PeriodIndex):
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index 7c266dc889368..4e192548a1f2d 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -65,8 +65,8 @@ class RangeIndex(Int64Index):
_typ = 'rangeindex'
_engine_type = libindex.Int64Engine
- def __new__(cls, start=None, stop=None, step=None, name=None, dtype=None,
- fastpath=False, copy=False, **kwargs):
+ def __new__(cls, start=None, stop=None, step=None,
+ dtype=None, copy=False, name=None, fastpath=False):
if fastpath:
return cls._simple_new(start, stop, step, name=name)
@@ -550,7 +550,7 @@ def __getitem__(self, key):
stop = self._start + self._step * stop
step = self._step * step
- return RangeIndex(start, stop, step, self.name, fastpath=True)
+ return RangeIndex(start, stop, step, name=self.name, fastpath=True)
# fall back to Int64Index
return super_getitem(key)
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index a14de18b1012f..969afccdbc755 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -197,10 +197,9 @@ def _add_comparison_methods(cls):
freq = None
- def __new__(cls, data=None, unit=None,
- freq=None, start=None, end=None, periods=None,
- copy=False, name=None,
- closed=None, verify_integrity=True, **kwargs):
+ def __new__(cls, data=None, unit=None, freq=None, start=None, end=None,
+ periods=None, closed=None, dtype=None, copy=False,
+ name=None, verify_integrity=True):
if isinstance(data, TimedeltaIndex) and freq is None and name is None:
if copy:
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 964a6b14d2b1e..eb429f46a3355 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -2326,3 +2326,10 @@ def test_generated_op_names(opname, indices):
opname = '__{name}__'.format(name=opname)
method = getattr(index, opname)
assert method.__name__ == opname
+
+
+@pytest.mark.parametrize('idx_maker', tm.index_subclass_makers_generator())
+def test_index_subclass_constructor_wrong_kwargs(idx_maker):
+ # GH #19348
+ with tm.assert_raises_regex(TypeError, 'unexpected keyword argument'):
+ idx_maker(foo='bar')
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 942416408e4f0..a223e4d8fd23e 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -1539,16 +1539,16 @@ def makeUnicodeIndex(k=10, name=None):
return Index(randu_array(nchars=10, size=k), name=name)
-def makeCategoricalIndex(k=10, n=3, name=None):
+def makeCategoricalIndex(k=10, n=3, name=None, **kwargs):
""" make a length k index or n categories """
x = rands_array(nchars=4, size=n)
- return CategoricalIndex(np.random.choice(x, k), name=name)
+ return CategoricalIndex(np.random.choice(x, k), name=name, **kwargs)
-def makeIntervalIndex(k=10, name=None):
+def makeIntervalIndex(k=10, name=None, **kwargs):
""" make a length k IntervalIndex """
x = np.linspace(0, 100, num=(k + 1))
- return IntervalIndex.from_breaks(x, name=name)
+ return IntervalIndex.from_breaks(x, name=name, **kwargs)
def makeBoolIndex(k=10, name=None):
@@ -1567,8 +1567,8 @@ def makeUIntIndex(k=10, name=None):
return Index([2**63 + i for i in lrange(k)], name=name)
-def makeRangeIndex(k=10, name=None):
- return RangeIndex(0, k, 1, name=name)
+def makeRangeIndex(k=10, name=None, **kwargs):
+ return RangeIndex(0, k, 1, name=name, **kwargs)
def makeFloatIndex(k=10, name=None):
@@ -1576,22 +1576,28 @@ def makeFloatIndex(k=10, name=None):
return Index(values * (10 ** np.random.randint(0, 9)), name=name)
-def makeDateIndex(k=10, freq='B', name=None):
+def makeDateIndex(k=10, freq='B', name=None, **kwargs):
dt = datetime(2000, 1, 1)
dr = bdate_range(dt, periods=k, freq=freq, name=name)
- return DatetimeIndex(dr, name=name)
+ return DatetimeIndex(dr, name=name, **kwargs)
-def makeTimedeltaIndex(k=10, freq='D', name=None):
- return TimedeltaIndex(start='1 day', periods=k, freq=freq, name=name)
+def makeTimedeltaIndex(k=10, freq='D', name=None, **kwargs):
+ return TimedeltaIndex(start='1 day', periods=k, freq=freq,
+ name=name, **kwargs)
-def makePeriodIndex(k=10, name=None):
+def makePeriodIndex(k=10, name=None, **kwargs):
dt = datetime(2000, 1, 1)
- dr = PeriodIndex(start=dt, periods=k, freq='B', name=name)
+ dr = PeriodIndex(start=dt, periods=k, freq='B', name=name, **kwargs)
return dr
+def makeMultiIndex(k=10, names=None, **kwargs):
+ return MultiIndex.from_product(
+ (('foo', 'bar'), (1, 2)), names=names, **kwargs)
+
+
def all_index_generator(k=10):
"""Generator which can be iterated over to get instances of all the various
index classes.
@@ -1609,6 +1615,17 @@ def all_index_generator(k=10):
yield make_index_func(k=k)
+def index_subclass_makers_generator():
+ make_index_funcs = [
+ makeDateIndex, makePeriodIndex,
+ makeTimedeltaIndex, makeRangeIndex,
+ makeIntervalIndex, makeCategoricalIndex,
+ makeMultiIndex
+ ]
+ for make_index_func in make_index_funcs:
+ yield make_index_func
+
+
def all_timeseries_index_generator(k=10):
"""Generator which can be iterated over to get instances of all the classes
which represent time-series.
| - [x] closes #19348
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
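The keyword-validation pattern added to `PeriodIndex.__new__` in the diff above (accept a fixed field set, raise `TypeError` for anything else) can be sketched stand-alone; `make_period_fields` is an illustrative name, not a pandas API:

```python
def make_period_fields(**fields):
    # Mirror of the PeriodIndex.__new__ check: only a fixed set of
    # date/time fields may be passed as keyword arguments.
    valid_field_set = {'year', 'month', 'day', 'quarter',
                       'hour', 'minute', 'second'}
    invalid = set(fields) - valid_field_set
    if invalid:
        raise TypeError('__new__() got an unexpected keyword argument {}'
                        .format(sorted(invalid)[0]))
    return fields

make_period_fields(year=2018, month=3)   # accepted
try:
    make_period_fields(foo='bar')        # rejected, like idx_maker(foo='bar')
except TypeError as err:
    assert 'unexpected keyword argument' in str(err)
```

This is what lets the new `test_index_subclass_constructor_wrong_kwargs` test assert a `TypeError` for every index subclass.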
| https://api.github.com/repos/pandas-dev/pandas/pulls/20017 | 2018-03-06T16:49:07Z | 2018-03-10T11:42:53Z | 2018-03-10T11:42:53Z | 2018-03-16T11:03:09Z |
DOC: add guide on shared docstrings | diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index ff0aa8af611db..967d1fe3369f0 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -1088,5 +1088,4 @@ The branch will still exist on GitHub, so to delete it there do::
git push origin --delete shiny-new-feature
-
.. _Gitter: https://gitter.im/pydata/pandas
diff --git a/doc/source/contributing_docstring.rst b/doc/source/contributing_docstring.rst
index c210bb7050fb8..f80bfd9253764 100644
--- a/doc/source/contributing_docstring.rst
+++ b/doc/source/contributing_docstring.rst
@@ -82,6 +82,9 @@ about reStructuredText can be found in:
- `Quick reStructuredText reference <http://docutils.sourceforge.net/docs/user/rst/quickref.html>`_
- `Full reStructuredText specification <http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html>`_
+Pandas has some helpers for sharing docstrings between related classes, see
+:ref:`docstring.sharing`.
+
The rest of this document will summarize all the above guides, and will
provide additional conventions specific to the pandas project.
@@ -916,3 +919,79 @@ plot will be generated automatically when building the documentation.
>>> s.plot()
"""
pass
+
+.. _docstring.sharing:
+
+Sharing Docstrings
+------------------
+
+Pandas has a system for sharing docstrings, with slight variations, between
+classes. This helps us keep docstrings consistent, while keeping things clear
+for the reader. It comes at the cost of some complexity when writing.
+
+Each shared docstring will have a base template with variables, like
+``%(klass)s``. The variables are filled in later using the ``Substitution``
+decorator. Finally, docstrings can be appended to with the ``Appender``
+decorator.
+
+In this example, we'll create a parent docstring normally (this is like
+``pandas.core.generic.NDFrame``). Then we'll have two children (like
+``pandas.core.series.Series`` and ``pandas.core.frame.DataFrame``). We'll
+substitute the children's class names in this docstring.
+
+.. code-block:: python
+
+ class Parent:
+ def my_function(self):
+ """Apply my function to %(klass)s."""
+ ...
+
+ class ChildA(Parent):
+ @Substitution(klass="ChildA")
+ @Appender(Parent.my_function.__doc__)
+ def my_function(self):
+ ...
+
+ class ChildB(Parent):
+ @Substitution(klass="ChildB")
+ @Appender(Parent.my_function.__doc__)
+ def my_function(self):
+ ...
+
+The resulting docstrings are
+
+.. code-block:: python
+
+ >>> print(Parent.my_function.__doc__)
+ Apply my function to %(klass)s.
+ >>> print(ChildA.my_function.__doc__)
+ Apply my function to ChildA.
+ >>> print(ChildB.my_function.__doc__)
+ Apply my function to ChildB.
+
+Notice two things:
+
+1. We "append" the parent docstring to the children docstrings, which are
+ initially empty.
+2. Python decorators are applied inside out. So the order is Append then
+ Substitution, even though Substitution comes first in the file.
+
+Our files will often contain a module-level ``_shared_doc_kwargs`` with some
+common substitution values (things like ``klass``, ``axes``, etc).
+
+You can substitute and append in one shot with something like
+
+.. code-block:: python
+
+ @Appender(template % _shared_doc_kwargs)
+ def my_function(self):
+ ...
+
+where ``template`` may come from a module-level ``_shared_docs`` dictionary
+mapping function names to docstrings. Wherever possible, we prefer using
+``Appender`` and ``Substitution``, since the docstring-writing process is
+slightly closer to normal.
+
+See ``pandas.core.generic.NDFrame.fillna`` for an example template, and
+``pandas.core.series.Series.fillna`` and ``pandas.core.frame.DataFrame.fillna``
+for the filled versions.
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index d2617305d220a..ace975385ce32 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3696,7 +3696,8 @@ def rename(self, *args, **kwargs):
kwargs.pop('mapper', None)
return super(DataFrame, self).rename(**kwargs)
- @Appender(_shared_docs['fillna'] % _shared_doc_kwargs)
+ @Substitution(**_shared_doc_kwargs)
+ @Appender(NDFrame.fillna.__doc__)
def fillna(self, value=None, method=None, axis=None, inplace=False,
limit=None, downcast=None, **kwargs):
return super(DataFrame,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 80885fd9ef139..f1fa43818ce64 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -5252,7 +5252,9 @@ def infer_objects(self):
# ----------------------------------------------------------------------
# Filling NA's
- _shared_docs['fillna'] = ("""
+ def fillna(self, value=None, method=None, axis=None, inplace=False,
+ limit=None, downcast=None):
+ """
Fill NA/NaN values using the specified method
Parameters
@@ -5343,11 +5345,7 @@ def infer_objects(self):
1 3.0 4.0 NaN 1
2 NaN 1.0 NaN 5
3 NaN 3.0 NaN 4
- """)
-
- @Appender(_shared_docs['fillna'] % _shared_doc_kwargs)
- def fillna(self, value=None, method=None, axis=None, inplace=False,
- limit=None, downcast=None):
+ """
inplace = validate_bool_kwarg(inplace, 'inplace')
value, method = validate_fillna_kwargs(value, method)
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 5bb4b72a0562d..7c087ac7deafc 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -31,7 +31,7 @@
create_block_manager_from_blocks)
from pandas.core.series import Series
from pandas.core.reshape.util import cartesian_product
-from pandas.util._decorators import Appender
+from pandas.util._decorators import Appender, Substitution
from pandas.util._validators import validate_axis_style_args
_shared_doc_kwargs = dict(
@@ -1254,7 +1254,8 @@ def transpose(self, *args, **kwargs):
return super(Panel, self).transpose(*axes, **kwargs)
- @Appender(_shared_docs['fillna'] % _shared_doc_kwargs)
+ @Substitution(**_shared_doc_kwargs)
+ @Appender(NDFrame.fillna.__doc__)
def fillna(self, value=None, method=None, axis=None, inplace=False,
limit=None, downcast=None, **kwargs):
return super(Panel, self).fillna(value=value, method=method, axis=axis,
diff --git a/pandas/core/series.py b/pandas/core/series.py
index da598259d272d..98ba53bdf5176 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3341,7 +3341,8 @@ def drop(self, labels=None, axis=0, index=None, columns=None,
columns=columns, level=level,
inplace=inplace, errors=errors)
- @Appender(generic._shared_docs['fillna'] % _shared_doc_kwargs)
+ @Substitution(**_shared_doc_kwargs)
+ @Appender(generic.NDFrame.fillna.__doc__)
def fillna(self, value=None, method=None, axis=None, inplace=False,
limit=None, downcast=None, **kwargs):
return super(Series, self).fillna(value=value, method=method,
| closes https://github.com/pandas-dev/pandas/issues/16446
xref https://github.com/pandas-dev/pandas/issues/19932
cc @jorisvandenbossche
The main change in best practices here is the recommendation that `Appender` is only for appending; `Substitution` should be used for substituting values.
See the changes to `fillna` that this policy requires. Personally, I find the new way clearer than the old way.
- We write the base template normally
- I like `Appender(NDFrame.method.__doc__)` better than `Appender(_shared_docs['name'] % kwargs)`
- I like `Substitution(**kwargs)` better than `%` formatting.
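A minimal self-contained sketch of the pattern (these are stand-in implementations for illustration; pandas' real decorators live in `pandas.util._decorators`):

```python
class Substitution(object):
    # Stand-in: fill %-style placeholders in the decorated docstring.
    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def __call__(self, func):
        func.__doc__ = func.__doc__ % self.kwargs
        return func


class Appender(object):
    # Stand-in: append `addendum` to the (possibly empty) docstring.
    def __init__(self, addendum):
        self.addendum = addendum

    def __call__(self, func):
        func.__doc__ = (func.__doc__ or '') + self.addendum
        return func


class Parent(object):
    def fillna(self):
        """Fill NA values in a %(klass)s."""


class Child(Parent):
    @Substitution(klass='Child')
    @Appender(Parent.fillna.__doc__)
    def fillna(self):
        pass


assert Child.fillna.__doc__ == 'Fill NA values in a Child.'
```

Note the decorator order: `Appender` runs first (it is innermost), copying the parent template onto the empty child docstring, and `Substitution` then fills in `%(klass)s`.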
The downsides are that we don't have the template in a dictionary anymore, and that the docstring in `NDFrame` has to be the template; it can't have its variables substituted. I think that's OK. | https://api.github.com/repos/pandas-dev/pandas/pulls/20016 | 2018-03-06T16:41:54Z | 2018-03-28T07:49:48Z | 2018-03-28T07:49:48Z | 2018-05-02T13:10:01Z |
DOC: enable matplotlib plot_directive to include figures in docstrings | diff --git a/doc/source/conf.py b/doc/source/conf.py
index c81d38db05cca..46249af8a5a56 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -63,6 +63,7 @@
'ipython_sphinxext.ipython_console_highlighting',
# lowercase didn't work
'IPython.sphinxext.ipython_console_highlighting',
+ 'matplotlib.sphinxext.plot_directive',
'sphinx.ext.intersphinx',
'sphinx.ext.coverage',
'sphinx.ext.mathjax',
@@ -85,6 +86,14 @@
if any(re.match("\s*api\s*", l) for l in index_rst_lines):
autosummary_generate = True
+# matplotlib plot directive
+plot_include_source = True
+plot_formats = [("png", 90)]
+plot_html_show_formats = False
+plot_html_show_source_link = False
+plot_pre_code = """import numpy as np
+import pandas as pd"""
+
# Add any paths that contain templates here, relative to this directory.
templates_path = ['../_templates']
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index b15c5271ae321..98fdcf8f94ae0 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -2537,6 +2537,15 @@ def line(self, **kwds):
Returns
-------
axes : matplotlib.AxesSubplot or np.array of them
+
+ Examples
+ --------
+
+ .. plot::
+ :context: close-figs
+
+ >>> s = pd.Series([1, 3, 2])
+ >>> s.plot.line()
"""
return self(kind='line', **kwds)
| This adds the matplotlib.sphinxext.plot_directive as an extension to conf.py, and adds a small example in the `Series.plot.line` docstring. | https://api.github.com/repos/pandas-dev/pandas/pulls/20015 | 2018-03-06T15:20:53Z | 2018-03-07T13:57:18Z | 2018-03-07T13:57:18Z | 2018-03-07T13:57:21Z |
TST: clean deprecation warnings for xref #19980 | diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index 92ca68257eabf..18eefa6a14a37 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -111,12 +111,15 @@ def test_tsplot_deprecated(self):
@pytest.mark.slow
def test_tsplot(self):
+
from pandas.tseries.plotting import tsplot
_, ax = self.plt.subplots()
ts = tm.makeTimeSeries()
- f = lambda *args, **kwds: tsplot(s, self.plt.Axes.plot, *args, **kwds)
+ def f(*args, **kwds):
+ with tm.assert_produces_warning(FutureWarning):
+ return tsplot(s, self.plt.Axes.plot, *args, **kwds)
for s in self.period_ser:
_check_plot_works(f, s.index.freq, ax=ax, series=s)
@@ -188,11 +191,13 @@ def check_format_of_first_point(ax, expected_string):
tm.close()
# tsplot
- _, ax = self.plt.subplots()
from pandas.tseries.plotting import tsplot
- tsplot(annual, self.plt.Axes.plot, ax=ax)
+ _, ax = self.plt.subplots()
+ with tm.assert_produces_warning(FutureWarning):
+ tsplot(annual, self.plt.Axes.plot, ax=ax)
check_format_of_first_point(ax, 't = 2014 y = 1.000000')
- tsplot(daily, self.plt.Axes.plot, ax=ax)
+ with tm.assert_produces_warning(FutureWarning):
+ tsplot(daily, self.plt.Axes.plot, ax=ax)
check_format_of_first_point(ax, 't = 2014-01-01 y = 1.000000')
@pytest.mark.slow
@@ -879,12 +884,12 @@ def test_to_weekly_resampling(self):
for l in ax.get_lines():
assert PeriodIndex(data=l.get_xdata()).freq == idxh.freq
- # tsplot
- from pandas.tseries.plotting import tsplot
-
_, ax = self.plt.subplots()
- tsplot(high, self.plt.Axes.plot, ax=ax)
- lines = tsplot(low, self.plt.Axes.plot, ax=ax)
+ from pandas.tseries.plotting import tsplot
+ with tm.assert_produces_warning(FutureWarning):
+ tsplot(high, self.plt.Axes.plot, ax=ax)
+ with tm.assert_produces_warning(FutureWarning):
+ lines = tsplot(low, self.plt.Axes.plot, ax=ax)
for l in lines:
assert PeriodIndex(data=l.get_xdata()).freq == idxh.freq
@@ -910,12 +915,12 @@ def test_from_weekly_resampling(self):
tm.assert_numpy_array_equal(xdata, expected_h)
tm.close()
- # tsplot
- from pandas.tseries.plotting import tsplot
-
_, ax = self.plt.subplots()
- tsplot(low, self.plt.Axes.plot, ax=ax)
- lines = tsplot(high, self.plt.Axes.plot, ax=ax)
+ from pandas.tseries.plotting import tsplot
+ with tm.assert_produces_warning(FutureWarning):
+ tsplot(low, self.plt.Axes.plot, ax=ax)
+ with tm.assert_produces_warning(FutureWarning):
+ lines = tsplot(high, self.plt.Axes.plot, ax=ax)
for l in lines:
assert PeriodIndex(data=l.get_xdata()).freq == idxh.freq
xdata = l.get_xdata(orig=False)
@@ -1351,9 +1356,11 @@ def test_plot_outofbounds_datetime(self):
values = [datetime(1677, 1, 1, 12), datetime(1677, 1, 2, 12)]
ax.plot(values)
+ @pytest.mark.skip(
+ is_platform_mac(),
+ "skip on mac for precision display issue on older mpl")
+ @pytest.mark.xfail(reason="suspect on mpl 2.2.2")
def test_format_timedelta_ticks_narrow(self):
- if is_platform_mac():
- pytest.skip("skip on mac for precision display issue on older mpl")
if self.mpl_ge_2_0_0:
expected_labels = [''] + [
@@ -1374,9 +1381,11 @@ def test_format_timedelta_ticks_narrow(self):
for l, l_expected in zip(labels, expected_labels):
assert l.get_text() == l_expected
+ @pytest.mark.skip(
+ is_platform_mac(),
+ "skip on mac for precision display issue on older mpl")
+ @pytest.mark.xfail(reason="suspect on mpl 2.2.2")
def test_format_timedelta_ticks_wide(self):
- if is_platform_mac():
- pytest.skip("skip on mac for precision display issue on older mpl")
if self.mpl_ge_2_0_0:
expected_labels = [
| xfail some mpl > 2.1.2 tests
| https://api.github.com/repos/pandas-dev/pandas/pulls/20013 | 2018-03-06T11:29:53Z | 2018-03-06T12:25:18Z | 2018-03-06T12:25:18Z | 2018-03-07T10:48:19Z |
month_name/day_name warnings followup | diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 5bb53cf20b478..9818d53e386bd 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -766,7 +766,7 @@ class Timestamp(_Timestamp):
"""
warnings.warn("`weekday_name` is deprecated and will be removed in a "
"future version. Use `day_name` instead",
- DeprecationWarning)
+ FutureWarning)
return self.day_name()
@property
diff --git a/pandas/tests/indexes/datetimes/test_misc.py b/pandas/tests/indexes/datetimes/test_misc.py
index a65b80efc7911..056924f2c6663 100644
--- a/pandas/tests/indexes/datetimes/test_misc.py
+++ b/pandas/tests/indexes/datetimes/test_misc.py
@@ -271,7 +271,9 @@ def test_datetime_name_accessors(self, time_locale):
assert dti.weekday_name[day] == eng_name
assert dti.day_name(locale=time_locale)[day] == name
ts = Timestamp(datetime(2016, 4, day))
- assert ts.weekday_name == eng_name
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ assert ts.weekday_name == eng_name
assert ts.day_name(locale=time_locale) == name
dti = dti.append(DatetimeIndex([pd.NaT]))
assert np.isnan(dti.day_name(locale=time_locale)[-1])
diff --git a/pandas/tests/indexes/datetimes/test_scalar_compat.py b/pandas/tests/indexes/datetimes/test_scalar_compat.py
index 6f0756949edc6..9180bb0af3af3 100644
--- a/pandas/tests/indexes/datetimes/test_scalar_compat.py
+++ b/pandas/tests/indexes/datetimes/test_scalar_compat.py
@@ -47,7 +47,12 @@ def test_dti_timestamp_fields(self, field):
# extra fields from DatetimeIndex like quarter and week
idx = tm.makeDateIndex(100)
expected = getattr(idx, field)[-1]
- result = getattr(Timestamp(idx[-1]), field)
+ if field == 'weekday_name':
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ result = getattr(Timestamp(idx[-1]), field)
+ else:
+ result = getattr(Timestamp(idx[-1]), field)
assert result == expected
def test_dti_timestamp_freq_fields(self):
diff --git a/pandas/tests/scalar/timestamp/test_timestamp.py b/pandas/tests/scalar/timestamp/test_timestamp.py
index 0acf7acb19c0d..cde5baf47c18e 100644
--- a/pandas/tests/scalar/timestamp/test_timestamp.py
+++ b/pandas/tests/scalar/timestamp/test_timestamp.py
@@ -105,7 +105,7 @@ def check(value, equal):
def test_names(self, data, time_locale):
# GH 17354
# Test .weekday_name, .day_name(), .month_name
- with tm.assert_produces_warning(DeprecationWarning,
+ with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
assert data.weekday_name == 'Monday'
if time_locale is None:
| xref https://github.com/pandas-dev/pandas/pull/18164#issuecomment-370390747
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
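A stdlib-only sketch of what the test changes above assert, using `warnings.catch_warnings` in place of pandas' `tm.assert_produces_warning` (the `"Monday"` return value is illustrative):

```python
import warnings

def weekday_name():
    # Deprecated accessor: warn with FutureWarning, which is shown to end
    # users by default (unlike DeprecationWarning), then delegate.
    warnings.warn("`weekday_name` is deprecated and will be removed in a "
                  "future version. Use `day_name` instead", FutureWarning)
    return "Monday"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert weekday_name() == "Monday"

assert len(caught) == 1
assert caught[0].category is FutureWarning
```

The switch from `DeprecationWarning` to `FutureWarning` matters because Python silences `DeprecationWarning` by default, so end users of the deprecated accessor would never see it.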
Changed the `weekday_name` warning to a `FutureWarning` and caught it in 2 additional tests. | https://api.github.com/repos/pandas-dev/pandas/pulls/20010 | 2018-03-06T04:45:01Z | 2018-03-06T12:25:56Z | 2018-03-06T12:25:55Z | 2018-03-06T17:33:43Z |
TST: split series/test_indexing.py (#18614) | diff --git a/pandas/tests/series/indexing/__init__.py b/pandas/tests/series/indexing/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/series/indexing/test_alter_index.py b/pandas/tests/series/indexing/test_alter_index.py
new file mode 100644
index 0000000000000..7456239d3433a
--- /dev/null
+++ b/pandas/tests/series/indexing/test_alter_index.py
@@ -0,0 +1,469 @@
+# coding=utf-8
+# pylint: disable-msg=E1101,W0612
+
+import pytest
+
+from datetime import datetime
+
+import pandas as pd
+import numpy as np
+
+from numpy import nan
+
+from pandas import compat
+
+from pandas import (Series, date_range, isna, Categorical)
+from pandas.compat import lrange, range
+
+from pandas.util.testing import (assert_series_equal)
+import pandas.util.testing as tm
+
+from pandas.tests.series.common import TestData
+
+JOIN_TYPES = ['inner', 'outer', 'left', 'right']
+
+
+class TestAlignment(TestData):
+
+ def test_align(self):
+ def _check_align(a, b, how='left', fill=None):
+ aa, ab = a.align(b, join=how, fill_value=fill)
+
+ join_index = a.index.join(b.index, how=how)
+ if fill is not None:
+ diff_a = aa.index.difference(join_index)
+ diff_b = ab.index.difference(join_index)
+ if len(diff_a) > 0:
+ assert (aa.reindex(diff_a) == fill).all()
+ if len(diff_b) > 0:
+ assert (ab.reindex(diff_b) == fill).all()
+
+ ea = a.reindex(join_index)
+ eb = b.reindex(join_index)
+
+ if fill is not None:
+ ea = ea.fillna(fill)
+ eb = eb.fillna(fill)
+
+ assert_series_equal(aa, ea)
+ assert_series_equal(ab, eb)
+ assert aa.name == 'ts'
+ assert ea.name == 'ts'
+ assert ab.name == 'ts'
+ assert eb.name == 'ts'
+
+ for kind in JOIN_TYPES:
+ _check_align(self.ts[2:], self.ts[:-5], how=kind)
+ _check_align(self.ts[2:], self.ts[:-5], how=kind, fill=-1)
+
+ # empty left
+ _check_align(self.ts[:0], self.ts[:-5], how=kind)
+ _check_align(self.ts[:0], self.ts[:-5], how=kind, fill=-1)
+
+ # empty right
+ _check_align(self.ts[:-5], self.ts[:0], how=kind)
+ _check_align(self.ts[:-5], self.ts[:0], how=kind, fill=-1)
+
+ # both empty
+ _check_align(self.ts[:0], self.ts[:0], how=kind)
+ _check_align(self.ts[:0], self.ts[:0], how=kind, fill=-1)
+
+ def test_align_fill_method(self):
+ def _check_align(a, b, how='left', method='pad', limit=None):
+ aa, ab = a.align(b, join=how, method=method, limit=limit)
+
+ join_index = a.index.join(b.index, how=how)
+ ea = a.reindex(join_index)
+ eb = b.reindex(join_index)
+
+ ea = ea.fillna(method=method, limit=limit)
+ eb = eb.fillna(method=method, limit=limit)
+
+ assert_series_equal(aa, ea)
+ assert_series_equal(ab, eb)
+
+ for kind in JOIN_TYPES:
+ for meth in ['pad', 'bfill']:
+ _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth)
+ _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth,
+ limit=1)
+
+ # empty left
+ _check_align(self.ts[:0], self.ts[:-5], how=kind, method=meth)
+ _check_align(self.ts[:0], self.ts[:-5], how=kind, method=meth,
+ limit=1)
+
+ # empty right
+ _check_align(self.ts[:-5], self.ts[:0], how=kind, method=meth)
+ _check_align(self.ts[:-5], self.ts[:0], how=kind, method=meth,
+ limit=1)
+
+ # both empty
+ _check_align(self.ts[:0], self.ts[:0], how=kind, method=meth)
+ _check_align(self.ts[:0], self.ts[:0], how=kind, method=meth,
+ limit=1)
+
+ def test_align_nocopy(self):
+ b = self.ts[:5].copy()
+
+ # do copy
+ a = self.ts.copy()
+ ra, _ = a.align(b, join='left')
+ ra[:5] = 5
+ assert not (a[:5] == 5).any()
+
+ # do not copy
+ a = self.ts.copy()
+ ra, _ = a.align(b, join='left', copy=False)
+ ra[:5] = 5
+ assert (a[:5] == 5).all()
+
+ # do copy
+ a = self.ts.copy()
+ b = self.ts[:5].copy()
+ _, rb = a.align(b, join='right')
+ rb[:3] = 5
+ assert not (b[:3] == 5).any()
+
+ # do not copy
+ a = self.ts.copy()
+ b = self.ts[:5].copy()
+ _, rb = a.align(b, join='right', copy=False)
+ rb[:2] = 5
+ assert (b[:2] == 5).all()
+
+ def test_align_same_index(self):
+ a, b = self.ts.align(self.ts, copy=False)
+ assert a.index is self.ts.index
+ assert b.index is self.ts.index
+
+ a, b = self.ts.align(self.ts, copy=True)
+ assert a.index is not self.ts.index
+ assert b.index is not self.ts.index
+
+ def test_align_multiindex(self):
+ # GH 10665
+
+ midx = pd.MultiIndex.from_product([range(2), range(3), range(2)],
+ names=('a', 'b', 'c'))
+ idx = pd.Index(range(2), name='b')
+ s1 = pd.Series(np.arange(12, dtype='int64'), index=midx)
+ s2 = pd.Series(np.arange(2, dtype='int64'), index=idx)
+
+ # these must be the same results (but flipped)
+ res1l, res1r = s1.align(s2, join='left')
+ res2l, res2r = s2.align(s1, join='right')
+
+ expl = s1
+ tm.assert_series_equal(expl, res1l)
+ tm.assert_series_equal(expl, res2r)
+ expr = pd.Series([0, 0, 1, 1, np.nan, np.nan] * 2, index=midx)
+ tm.assert_series_equal(expr, res1r)
+ tm.assert_series_equal(expr, res2l)
+
+ res1l, res1r = s1.align(s2, join='right')
+ res2l, res2r = s2.align(s1, join='left')
+
+ exp_idx = pd.MultiIndex.from_product([range(2), range(2), range(2)],
+ names=('a', 'b', 'c'))
+ expl = pd.Series([0, 1, 2, 3, 6, 7, 8, 9], index=exp_idx)
+ tm.assert_series_equal(expl, res1l)
+ tm.assert_series_equal(expl, res2r)
+ expr = pd.Series([0, 0, 1, 1] * 2, index=exp_idx)
+ tm.assert_series_equal(expr, res1r)
+ tm.assert_series_equal(expr, res2l)
+
+
+class TestReindexing(TestData):
+ def test_reindex(self):
+ identity = self.series.reindex(self.series.index)
+
+ # __array_interface__ is not defined for older numpies
+ # and on some pythons
+ try:
+ assert np.may_share_memory(self.series.index, identity.index)
+ except AttributeError:
+ pass
+
+ assert identity.index.is_(self.series.index)
+ assert identity.index.identical(self.series.index)
+
+ subIndex = self.series.index[10:20]
+ subSeries = self.series.reindex(subIndex)
+
+ for idx, val in compat.iteritems(subSeries):
+ assert val == self.series[idx]
+
+ subIndex2 = self.ts.index[10:20]
+ subTS = self.ts.reindex(subIndex2)
+
+ for idx, val in compat.iteritems(subTS):
+ assert val == self.ts[idx]
+ stuffSeries = self.ts.reindex(subIndex)
+
+ assert np.isnan(stuffSeries).all()
+
+ # This is extremely important for the Cython code to not screw up
+ nonContigIndex = self.ts.index[::2]
+ subNonContig = self.ts.reindex(nonContigIndex)
+ for idx, val in compat.iteritems(subNonContig):
+ assert val == self.ts[idx]
+
+ # return a copy the same index here
+ result = self.ts.reindex()
+ assert not (result is self.ts)
+
+ def test_reindex_nan(self):
+ ts = Series([2, 3, 5, 7], index=[1, 4, nan, 8])
+
+ i, j = [nan, 1, nan, 8, 4, nan], [2, 0, 2, 3, 1, 2]
+ assert_series_equal(ts.reindex(i), ts.iloc[j])
+
+ ts.index = ts.index.astype('object')
+
+ # reindex coerces index.dtype to float, loc/iloc doesn't
+ assert_series_equal(ts.reindex(i), ts.iloc[j], check_index_type=False)
+
+ def test_reindex_series_add_nat(self):
+ rng = date_range('1/1/2000 00:00:00', periods=10, freq='10s')
+ series = Series(rng)
+
+ result = series.reindex(lrange(15))
+ assert np.issubdtype(result.dtype, np.dtype('M8[ns]'))
+
+ mask = result.isna()
+ assert mask[-5:].all()
+ assert not mask[:-5].any()
+
+ def test_reindex_with_datetimes(self):
+ rng = date_range('1/1/2000', periods=20)
+ ts = Series(np.random.randn(20), index=rng)
+
+ result = ts.reindex(list(ts.index[5:10]))
+ expected = ts[5:10]
+ tm.assert_series_equal(result, expected)
+
+ result = ts[list(ts.index[5:10])]
+ tm.assert_series_equal(result, expected)
+
+ def test_reindex_corner(self):
+ # (don't forget to fix this) I think it's fixed
+ self.empty.reindex(self.ts.index, method='pad') # it works
+
+ # corner case: pad empty series
+ reindexed = self.empty.reindex(self.ts.index, method='pad')
+
+ # pass non-Index
+ reindexed = self.ts.reindex(list(self.ts.index))
+ assert_series_equal(self.ts, reindexed)
+
+ # bad fill method
+ ts = self.ts[::2]
+ pytest.raises(Exception, ts.reindex, self.ts.index, method='foo')
+
+ def test_reindex_pad(self):
+ s = Series(np.arange(10), dtype='int64')
+ s2 = s[::2]
+
+ reindexed = s2.reindex(s.index, method='pad')
+ reindexed2 = s2.reindex(s.index, method='ffill')
+ assert_series_equal(reindexed, reindexed2)
+
+ expected = Series([0, 0, 2, 2, 4, 4, 6, 6, 8, 8], index=np.arange(10))
+ assert_series_equal(reindexed, expected)
+
+ # GH4604
+ s = Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])
+ new_index = ['a', 'g', 'c', 'f']
+ expected = Series([1, 1, 3, 3], index=new_index)
+
+ # this changes dtype because the ffill happens after
+ result = s.reindex(new_index).ffill()
+ assert_series_equal(result, expected.astype('float64'))
+
+ result = s.reindex(new_index).ffill(downcast='infer')
+ assert_series_equal(result, expected)
+
+ expected = Series([1, 5, 3, 5], index=new_index)
+ result = s.reindex(new_index, method='ffill')
+ assert_series_equal(result, expected)
+
+ # inference of new dtype
+ s = Series([True, False, False, True], index=list('abcd'))
+ new_index = 'agc'
+ result = s.reindex(list(new_index)).ffill()
+ expected = Series([True, True, False], index=list(new_index))
+ assert_series_equal(result, expected)
+
+ # GH4618 shifted series downcasting
+ s = Series(False, index=lrange(0, 5))
+ result = s.shift(1).fillna(method='bfill')
+ expected = Series(False, index=lrange(0, 5))
+ assert_series_equal(result, expected)
+
+ def test_reindex_nearest(self):
+ s = Series(np.arange(10, dtype='int64'))
+ target = [0.1, 0.9, 1.5, 2.0]
+ actual = s.reindex(target, method='nearest')
+ expected = Series(np.around(target).astype('int64'), target)
+ assert_series_equal(expected, actual)
+
+ actual = s.reindex_like(actual, method='nearest')
+ assert_series_equal(expected, actual)
+
+ actual = s.reindex_like(actual, method='nearest', tolerance=1)
+ assert_series_equal(expected, actual)
+ actual = s.reindex_like(actual, method='nearest',
+ tolerance=[1, 2, 3, 4])
+ assert_series_equal(expected, actual)
+
+ actual = s.reindex(target, method='nearest', tolerance=0.2)
+ expected = Series([0, 1, np.nan, 2], target)
+ assert_series_equal(expected, actual)
+
+ actual = s.reindex(target, method='nearest',
+ tolerance=[0.3, 0.01, 0.4, 3])
+ expected = Series([0, np.nan, np.nan, 2], target)
+ assert_series_equal(expected, actual)
+
+ def test_reindex_backfill(self):
+ pass
+
+ def test_reindex_int(self):
+ ts = self.ts[::2]
+ int_ts = Series(np.zeros(len(ts), dtype=int), index=ts.index)
+
+ # this should work fine
+ reindexed_int = int_ts.reindex(self.ts.index)
+
+ # if NaNs introduced
+ assert reindexed_int.dtype == np.float_
+
+ # NO NaNs introduced
+ reindexed_int = int_ts.reindex(int_ts.index[::2])
+ assert reindexed_int.dtype == np.int_
+
+ def test_reindex_bool(self):
+ # A series other than float, int, string, or object
+ ts = self.ts[::2]
+ bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
+
+ # this should work fine
+ reindexed_bool = bool_ts.reindex(self.ts.index)
+
+ # if NaNs introduced
+ assert reindexed_bool.dtype == np.object_
+
+ # NO NaNs introduced
+ reindexed_bool = bool_ts.reindex(bool_ts.index[::2])
+ assert reindexed_bool.dtype == np.bool_
+
+ def test_reindex_bool_pad(self):
+        # padding before the first valid observation leaves NaN
+ ts = self.ts[5:]
+ bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
+ filled_bool = bool_ts.reindex(self.ts.index, method='pad')
+ assert isna(filled_bool[:5]).all()
+
+ def test_reindex_categorical(self):
+ index = date_range('20000101', periods=3)
+
+ # reindexing to an invalid Categorical
+ s = Series(['a', 'b', 'c'], dtype='category')
+ result = s.reindex(index)
+ expected = Series(Categorical(values=[np.nan, np.nan, np.nan],
+ categories=['a', 'b', 'c']))
+ expected.index = index
+ tm.assert_series_equal(result, expected)
+
+ # partial reindexing
+        expected = Series(Categorical(values=['b', 'c'],
+                                      categories=['a', 'b', 'c']))
+ expected.index = [1, 2]
+ result = s.reindex([1, 2])
+ tm.assert_series_equal(result, expected)
+
+ expected = Series(Categorical(
+ values=['c', np.nan], categories=['a', 'b', 'c']))
+ expected.index = [2, 3]
+ result = s.reindex([2, 3])
+ tm.assert_series_equal(result, expected)
+
+ def test_reindex_like(self):
+ other = self.ts[::2]
+ assert_series_equal(self.ts.reindex(other.index),
+ self.ts.reindex_like(other))
+
+ # GH 7179
+ day1 = datetime(2013, 3, 5)
+ day2 = datetime(2013, 5, 5)
+ day3 = datetime(2014, 3, 5)
+
+ series1 = Series([5, None, None], [day1, day2, day3])
+ series2 = Series([None, None], [day1, day3])
+
+ result = series1.reindex_like(series2, method='pad')
+ expected = Series([5, np.nan], index=[day1, day3])
+ assert_series_equal(result, expected)
+
+ def test_reindex_fill_value(self):
+ # -----------------------------------------------------------
+ # floats
+ floats = Series([1., 2., 3.])
+ result = floats.reindex([1, 2, 3])
+ expected = Series([2., 3., np.nan], index=[1, 2, 3])
+ assert_series_equal(result, expected)
+
+ result = floats.reindex([1, 2, 3], fill_value=0)
+ expected = Series([2., 3., 0], index=[1, 2, 3])
+ assert_series_equal(result, expected)
+
+ # -----------------------------------------------------------
+ # ints
+ ints = Series([1, 2, 3])
+
+ result = ints.reindex([1, 2, 3])
+ expected = Series([2., 3., np.nan], index=[1, 2, 3])
+ assert_series_equal(result, expected)
+
+ # don't upcast
+ result = ints.reindex([1, 2, 3], fill_value=0)
+ expected = Series([2, 3, 0], index=[1, 2, 3])
+ assert issubclass(result.dtype.type, np.integer)
+ assert_series_equal(result, expected)
+
+ # -----------------------------------------------------------
+ # objects
+ objects = Series([1, 2, 3], dtype=object)
+
+ result = objects.reindex([1, 2, 3])
+ expected = Series([2, 3, np.nan], index=[1, 2, 3], dtype=object)
+ assert_series_equal(result, expected)
+
+ result = objects.reindex([1, 2, 3], fill_value='foo')
+ expected = Series([2, 3, 'foo'], index=[1, 2, 3], dtype=object)
+ assert_series_equal(result, expected)
+
+ # ------------------------------------------------------------
+ # bools
+ bools = Series([True, False, True])
+
+ result = bools.reindex([1, 2, 3])
+ expected = Series([False, True, np.nan], index=[1, 2, 3], dtype=object)
+ assert_series_equal(result, expected)
+
+ result = bools.reindex([1, 2, 3], fill_value=False)
+ expected = Series([False, True, False], index=[1, 2, 3])
+ assert_series_equal(result, expected)
+
+
+class TestRenaming(TestData):
+
+ def test_rename(self):
+ # GH 17407
+ s = Series(range(1, 6), index=pd.Index(range(2, 7), name='IntIndex'))
+ result = s.rename(str)
+ expected = s.rename(lambda i: str(i))
+ assert_series_equal(result, expected)
+
+ assert result.name == expected.name
diff --git a/pandas/tests/series/indexing/test_boolean.py b/pandas/tests/series/indexing/test_boolean.py
new file mode 100644
index 0000000000000..a6c3dc268cef9
--- /dev/null
+++ b/pandas/tests/series/indexing/test_boolean.py
@@ -0,0 +1,610 @@
+# coding=utf-8
+# pylint: disable-msg=E1101,W0612
+
+import pytest
+
+import pandas as pd
+import numpy as np
+
+from pandas import (Series, date_range, isna, Index, Timestamp)
+from pandas.compat import lrange, range
+from pandas.core.dtypes.common import is_integer
+
+
+from pandas.core.indexing import IndexingError
+from pandas.tseries.offsets import BDay
+
+from pandas.util.testing import assert_series_equal
+import pandas.util.testing as tm
+
+from pandas.tests.series.common import TestData
+
+JOIN_TYPES = ['inner', 'outer', 'left', 'right']
+
+
+class TestBooleanIndexing(TestData):
+
+ def test_getitem_boolean(self):
+ s = self.series
+ mask = s > s.median()
+
+ # passing list is OK
+ result = s[list(mask)]
+ expected = s[mask]
+ assert_series_equal(result, expected)
+ tm.assert_index_equal(result.index, s.index[mask])
+
+ def test_getitem_boolean_empty(self):
+ s = Series([], dtype=np.int64)
+ s.index.name = 'index_name'
+ s = s[s.isna()]
+ assert s.index.name == 'index_name'
+ assert s.dtype == np.int64
+
+ # GH5877
+ # indexing with empty series
+ s = Series(['A', 'B'])
+ expected = Series(np.nan, index=['C'], dtype=object)
+ result = s[Series(['C'], dtype=object)]
+ assert_series_equal(result, expected)
+
+ s = Series(['A', 'B'])
+ expected = Series(dtype=object, index=Index([], dtype='int64'))
+ result = s[Series([], dtype=object)]
+ assert_series_equal(result, expected)
+
+ # invalid because of the boolean indexer
+ # that's empty or not-aligned
+ def f():
+ s[Series([], dtype=bool)]
+
+ pytest.raises(IndexingError, f)
+
+ def f():
+ s[Series([True], dtype=bool)]
+
+ pytest.raises(IndexingError, f)
+
+ def test_getitem_boolean_object(self):
+ # using column from DataFrame
+
+ s = self.series
+ mask = s > s.median()
+ omask = mask.astype(object)
+
+ # getitem
+ result = s[omask]
+ expected = s[mask]
+ assert_series_equal(result, expected)
+
+ # setitem
+ s2 = s.copy()
+ cop = s.copy()
+ cop[omask] = 5
+ s2[mask] = 5
+ assert_series_equal(cop, s2)
+
+ # nans raise exception
+ omask[5:10] = np.nan
+ pytest.raises(Exception, s.__getitem__, omask)
+ pytest.raises(Exception, s.__setitem__, omask, 5)
+
+ def test_getitem_setitem_boolean_corner(self):
+ ts = self.ts
+ mask_shifted = ts.shift(1, freq=BDay()) > ts.median()
+
+        # the shifted mask is misaligned with the index, so these raise
+
+ pytest.raises(Exception, ts.__getitem__, mask_shifted)
+ pytest.raises(Exception, ts.__setitem__, mask_shifted, 1)
+ # ts[mask_shifted]
+ # ts[mask_shifted] = 1
+
+ pytest.raises(Exception, ts.loc.__getitem__, mask_shifted)
+ pytest.raises(Exception, ts.loc.__setitem__, mask_shifted, 1)
+ # ts.loc[mask_shifted]
+ # ts.loc[mask_shifted] = 2
+
+ def test_setitem_boolean(self):
+ mask = self.series > self.series.median()
+
+ # similar indexed series
+ result = self.series.copy()
+ result[mask] = self.series * 2
+ expected = self.series * 2
+ assert_series_equal(result[mask], expected[mask])
+
+ # needs alignment
+ result = self.series.copy()
+ result[mask] = (self.series * 2)[0:5]
+ expected = (self.series * 2)[0:5].reindex_like(self.series)
+ expected[-mask] = self.series[mask]
+ assert_series_equal(result[mask], expected[mask])
+
+ def test_get_set_boolean_different_order(self):
+ ordered = self.series.sort_values()
+
+ # setting
+ copy = self.series.copy()
+ copy[ordered > 0] = 0
+
+ expected = self.series.copy()
+ expected[expected > 0] = 0
+
+ assert_series_equal(copy, expected)
+
+ # getting
+ sel = self.series[ordered > 0]
+ exp = self.series[self.series > 0]
+ assert_series_equal(sel, exp)
+
+ def test_where_unsafe(self):
+ # unsafe dtype changes
+ for dtype in [np.int8, np.int16, np.int32, np.int64, np.float16,
+ np.float32, np.float64]:
+ s = Series(np.arange(10), dtype=dtype)
+ mask = s < 5
+ s[mask] = lrange(2, 7)
+ expected = Series(lrange(2, 7) + lrange(5, 10), dtype=dtype)
+ assert_series_equal(s, expected)
+ assert s.dtype == expected.dtype
+
+ # these are allowed operations, but are upcasted
+ for dtype in [np.int64, np.float64]:
+ s = Series(np.arange(10), dtype=dtype)
+ mask = s < 5
+ values = [2.5, 3.5, 4.5, 5.5, 6.5]
+ s[mask] = values
+ expected = Series(values + lrange(5, 10), dtype='float64')
+ assert_series_equal(s, expected)
+ assert s.dtype == expected.dtype
+
+ # GH 9731
+ s = Series(np.arange(10), dtype='int64')
+ mask = s > 5
+ values = [2.5, 3.5, 4.5, 5.5]
+ s[mask] = values
+ expected = Series(lrange(6) + values, dtype='float64')
+ assert_series_equal(s, expected)
+
+ # can't do these as we are forced to change the itemsize of the input
+ # to something we cannot
+ for dtype in [np.int8, np.int16, np.int32, np.float16, np.float32]:
+ s = Series(np.arange(10), dtype=dtype)
+ mask = s < 5
+ values = [2.5, 3.5, 4.5, 5.5, 6.5]
+ pytest.raises(Exception, s.__setitem__, tuple(mask), values)
+
+ # GH3235
+ s = Series(np.arange(10), dtype='int64')
+ mask = s < 5
+ s[mask] = lrange(2, 7)
+ expected = Series(lrange(2, 7) + lrange(5, 10), dtype='int64')
+ assert_series_equal(s, expected)
+ assert s.dtype == expected.dtype
+
+ s = Series(np.arange(10), dtype='int64')
+ mask = s > 5
+ s[mask] = [0] * 4
+ expected = Series([0, 1, 2, 3, 4, 5] + [0] * 4, dtype='int64')
+ assert_series_equal(s, expected)
+
+ s = Series(np.arange(10))
+ mask = s > 5
+
+ def f():
+ s[mask] = [5, 4, 3, 2, 1]
+
+ pytest.raises(ValueError, f)
+
+ def f():
+ s[mask] = [0] * 5
+
+ pytest.raises(ValueError, f)
+
+ # dtype changes
+ s = Series([1, 2, 3, 4])
+ result = s.where(s > 2, np.nan)
+ expected = Series([np.nan, np.nan, 3, 4])
+ assert_series_equal(result, expected)
+
+ # GH 4667
+ # setting with None changes dtype
+ s = Series(range(10)).astype(float)
+ s[8] = None
+ result = s[8]
+ assert isna(result)
+
+ s = Series(range(10)).astype(float)
+ s[s > 8] = None
+ result = s[isna(s)]
+ expected = Series(np.nan, index=[9])
+ assert_series_equal(result, expected)
+
+
+class TestWhereAndMask(TestData):
+
+ def test_where_raise_on_error_deprecation(self):
+ # gh-14968
+ # deprecation of raise_on_error
+ s = Series(np.random.randn(5))
+ cond = s > 0
+ with tm.assert_produces_warning(FutureWarning):
+ s.where(cond, raise_on_error=True)
+ with tm.assert_produces_warning(FutureWarning):
+ s.mask(cond, raise_on_error=True)
+
+ def test_where(self):
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ rs = s.where(cond).dropna()
+ rs2 = s[cond]
+ assert_series_equal(rs, rs2)
+
+ rs = s.where(cond, -s)
+ assert_series_equal(rs, s.abs())
+
+ rs = s.where(cond)
+ assert (s.shape == rs.shape)
+ assert (rs is not s)
+
+ # test alignment
+ cond = Series([True, False, False, True, False], index=s.index)
+ s2 = -(s.abs())
+
+ expected = s2[cond].reindex(s2.index[:3]).reindex(s2.index)
+ rs = s2.where(cond[:3])
+ assert_series_equal(rs, expected)
+
+ expected = s2.abs()
+ expected.iloc[0] = s2[0]
+ rs = s2.where(cond[:3], -s2)
+ assert_series_equal(rs, expected)
+
+ def test_where_error(self):
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ pytest.raises(ValueError, s.where, 1)
+ pytest.raises(ValueError, s.where, cond[:3].values, -s)
+
+ # GH 2745
+ s = Series([1, 2])
+ s[[True, False]] = [0, 1]
+ expected = Series([0, 2])
+ assert_series_equal(s, expected)
+
+ # failures
+ pytest.raises(ValueError, s.__setitem__, tuple([[[True, False]]]),
+ [0, 2, 3])
+ pytest.raises(ValueError, s.__setitem__, tuple([[[True, False]]]),
+ [])
+
+ def test_where_array_like(self):
+ # see gh-15414
+ s = Series([1, 2, 3])
+ cond = [False, True, True]
+ expected = Series([np.nan, 2, 3])
+ klasses = [list, tuple, np.array, Series]
+
+ for klass in klasses:
+ result = s.where(klass(cond))
+ assert_series_equal(result, expected)
+
+ def test_where_invalid_input(self):
+ # see gh-15414: only boolean arrays accepted
+ s = Series([1, 2, 3])
+ msg = "Boolean array expected for the condition"
+
+ conds = [
+ [1, 0, 1],
+ Series([2, 5, 7]),
+ ["True", "False", "True"],
+ [Timestamp("2017-01-01"),
+ pd.NaT, Timestamp("2017-01-02")]
+ ]
+
+ for cond in conds:
+ with tm.assert_raises_regex(ValueError, msg):
+ s.where(cond)
+
+ msg = "Array conditional must be same shape as self"
+ with tm.assert_raises_regex(ValueError, msg):
+ s.where([True])
+
+ def test_where_ndframe_align(self):
+ msg = "Array conditional must be same shape as self"
+ s = Series([1, 2, 3])
+
+ cond = [True]
+ with tm.assert_raises_regex(ValueError, msg):
+ s.where(cond)
+
+ expected = Series([1, np.nan, np.nan])
+
+ out = s.where(Series(cond))
+ tm.assert_series_equal(out, expected)
+
+ cond = np.array([False, True, False, True])
+ with tm.assert_raises_regex(ValueError, msg):
+ s.where(cond)
+
+ expected = Series([np.nan, 2, np.nan])
+
+ out = s.where(Series(cond))
+ tm.assert_series_equal(out, expected)
+
+ def test_where_setitem_invalid(self):
+ # GH 2702
+ # make sure correct exceptions are raised on invalid list assignment
+
+ # slice
+ s = Series(list('abc'))
+
+ def f():
+ s[0:3] = list(range(27))
+
+ pytest.raises(ValueError, f)
+
+ s[0:3] = list(range(3))
+ expected = Series([0, 1, 2])
+        assert_series_equal(s.astype(np.int64), expected)
+
+ # slice with step
+ s = Series(list('abcdef'))
+
+ def f():
+ s[0:4:2] = list(range(27))
+
+ pytest.raises(ValueError, f)
+
+ s = Series(list('abcdef'))
+ s[0:4:2] = list(range(2))
+ expected = Series([0, 'b', 1, 'd', 'e', 'f'])
+ assert_series_equal(s, expected)
+
+ # neg slices
+ s = Series(list('abcdef'))
+
+ def f():
+ s[:-1] = list(range(27))
+
+ pytest.raises(ValueError, f)
+
+ s[-3:-1] = list(range(2))
+ expected = Series(['a', 'b', 'c', 0, 1, 'f'])
+ assert_series_equal(s, expected)
+
+ # list
+ s = Series(list('abc'))
+
+ def f():
+ s[[0, 1, 2]] = list(range(27))
+
+ pytest.raises(ValueError, f)
+
+ s = Series(list('abc'))
+
+ def f():
+ s[[0, 1, 2]] = list(range(2))
+
+ pytest.raises(ValueError, f)
+
+ # scalar
+ s = Series(list('abc'))
+ s[0] = list(range(10))
+ expected = Series([list(range(10)), 'b', 'c'])
+ assert_series_equal(s, expected)
+
+ def test_where_broadcast(self):
+ # Test a variety of differently sized series
+ for size in range(2, 6):
+ # Test a variety of boolean indices
+ for selection in [
+ # First element should be set
+ np.resize([True, False, False, False, False], size),
+                # Set alternating elements
+ np.resize([True, False], size),
+ # No element should be set
+ np.resize([False], size)
+ ]:
+
+ # Test a variety of different numbers as content
+ for item in [2.0, np.nan, np.finfo(np.float).max,
+ np.finfo(np.float).min]:
+ # Test numpy arrays, lists and tuples as the input to be
+ # broadcast
+ for arr in [np.array([item]), [item], (item,)]:
+ data = np.arange(size, dtype=float)
+ s = Series(data)
+ s[selection] = arr
+ # Construct the expected series by taking the source
+ # data or item based on the selection
+                    expected = Series([item if use_item else data[i]
+                                       for i, use_item
+                                       in enumerate(selection)])
+ assert_series_equal(s, expected)
+
+ s = Series(data)
+ result = s.where(~selection, arr)
+ assert_series_equal(result, expected)
+
+ def test_where_inplace(self):
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ rs = s.copy()
+
+ rs.where(cond, inplace=True)
+ assert_series_equal(rs.dropna(), s[cond])
+ assert_series_equal(rs, s.where(cond))
+
+ rs = s.copy()
+ rs.where(cond, -s, inplace=True)
+ assert_series_equal(rs, s.where(cond, -s))
+
+ def test_where_dups(self):
+ # GH 4550
+ # where crashes with dups in index
+ s1 = Series(list(range(3)))
+ s2 = Series(list(range(3)))
+ comb = pd.concat([s1, s2])
+ result = comb.where(comb < 2)
+ expected = Series([0, 1, np.nan, 0, 1, np.nan],
+ index=[0, 1, 2, 0, 1, 2])
+ assert_series_equal(result, expected)
+
+ # GH 4548
+ # inplace updating not working with dups
+ comb[comb < 1] = 5
+ expected = Series([5, 1, 2, 5, 1, 2], index=[0, 1, 2, 0, 1, 2])
+ assert_series_equal(comb, expected)
+
+ comb[comb < 2] += 10
+ expected = Series([5, 11, 2, 5, 11, 2], index=[0, 1, 2, 0, 1, 2])
+ assert_series_equal(comb, expected)
+
+ def test_where_numeric_with_string(self):
+ # GH 9280
+ s = pd.Series([1, 2, 3])
+ w = s.where(s > 1, 'X')
+
+ assert not is_integer(w[0])
+ assert is_integer(w[1])
+ assert is_integer(w[2])
+ assert isinstance(w[0], str)
+ assert w.dtype == 'object'
+
+ w = s.where(s > 1, ['X', 'Y', 'Z'])
+ assert not is_integer(w[0])
+ assert is_integer(w[1])
+ assert is_integer(w[2])
+ assert isinstance(w[0], str)
+ assert w.dtype == 'object'
+
+ w = s.where(s > 1, np.array(['X', 'Y', 'Z']))
+ assert not is_integer(w[0])
+ assert is_integer(w[1])
+ assert is_integer(w[2])
+ assert isinstance(w[0], str)
+ assert w.dtype == 'object'
+
+ def test_where_timedelta_coerce(self):
+ s = Series([1, 2], dtype='timedelta64[ns]')
+ expected = Series([10, 10])
+ mask = np.array([False, False])
+
+ rs = s.where(mask, [10, 10])
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, 10)
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, 10.0)
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, [10.0, 10.0])
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, [10.0, np.nan])
+ expected = Series([10, None], dtype='object')
+ assert_series_equal(rs, expected)
+
+ def test_where_datetime_conversion(self):
+ s = Series(date_range('20130102', periods=2))
+ expected = Series([10, 10])
+ mask = np.array([False, False])
+
+ rs = s.where(mask, [10, 10])
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, 10)
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, 10.0)
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, [10.0, 10.0])
+ assert_series_equal(rs, expected)
+
+ rs = s.where(mask, [10.0, np.nan])
+ expected = Series([10, None], dtype='object')
+ assert_series_equal(rs, expected)
+
+ # GH 15701
+ timestamps = ['2016-12-31 12:00:04+00:00',
+ '2016-12-31 12:00:04.010000+00:00']
+ s = Series([pd.Timestamp(t) for t in timestamps])
+ rs = s.where(Series([False, True]))
+ expected = Series([pd.NaT, s[1]])
+ assert_series_equal(rs, expected)
+
+ def test_mask(self):
+ # compare with tested results in test_where
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ rs = s.where(~cond, np.nan)
+ assert_series_equal(rs, s.mask(cond))
+
+ rs = s.where(~cond)
+ rs2 = s.mask(cond)
+ assert_series_equal(rs, rs2)
+
+ rs = s.where(~cond, -s)
+ rs2 = s.mask(cond, -s)
+ assert_series_equal(rs, rs2)
+
+ cond = Series([True, False, False, True, False], index=s.index)
+ s2 = -(s.abs())
+ rs = s2.where(~cond[:3])
+ rs2 = s2.mask(cond[:3])
+ assert_series_equal(rs, rs2)
+
+ rs = s2.where(~cond[:3], -s2)
+ rs2 = s2.mask(cond[:3], -s2)
+ assert_series_equal(rs, rs2)
+
+ pytest.raises(ValueError, s.mask, 1)
+ pytest.raises(ValueError, s.mask, cond[:3].values, -s)
+
+ # dtype changes
+ s = Series([1, 2, 3, 4])
+ result = s.mask(s > 2, np.nan)
+ expected = Series([1, 2, np.nan, np.nan])
+ assert_series_equal(result, expected)
+
+ def test_mask_broadcast(self):
+ # GH 8801
+ # copied from test_where_broadcast
+ for size in range(2, 6):
+ for selection in [
+ # First element should be set
+ np.resize([True, False, False, False, False], size),
+                    # Set alternating elements
+ np.resize([True, False], size),
+ # No element should be set
+ np.resize([False], size)
+ ]:
+ for item in [2.0, np.nan, np.finfo(np.float).max,
+ np.finfo(np.float).min]:
+ for arr in [np.array([item]), [item], (item,)]:
+ data = np.arange(size, dtype=float)
+ s = Series(data)
+ result = s.mask(selection, arr)
+                        expected = Series([item if use_item else data[i]
+                                           for i, use_item
+                                           in enumerate(selection)])
+ assert_series_equal(result, expected)
+
+ def test_mask_inplace(self):
+ s = Series(np.random.randn(5))
+ cond = s > 0
+
+ rs = s.copy()
+ rs.mask(cond, inplace=True)
+ assert_series_equal(rs.dropna(), s[~cond])
+ assert_series_equal(rs, s.mask(cond))
+
+ rs = s.copy()
+ rs.mask(cond, -s, inplace=True)
+ assert_series_equal(rs, s.mask(cond, -s))
diff --git a/pandas/tests/series/indexing/test_datetime.py b/pandas/tests/series/indexing/test_datetime.py
new file mode 100644
index 0000000000000..f8ea50366a73d
--- /dev/null
+++ b/pandas/tests/series/indexing/test_datetime.py
@@ -0,0 +1,690 @@
+# coding=utf-8
+# pylint: disable-msg=E1101,W0612
+
+import pytest
+
+from datetime import datetime, timedelta
+
+import numpy as np
+import pandas as pd
+
+from pandas import (Series, DataFrame,
+ date_range, Timestamp, DatetimeIndex, NaT)
+
+from pandas.compat import lrange, range
+from pandas.util.testing import (assert_series_equal,
+ assert_frame_equal, assert_almost_equal)
+
+import pandas.util.testing as tm
+
+import pandas._libs.index as _index
+from pandas._libs import tslib
+
+JOIN_TYPES = ['inner', 'outer', 'left', 'right']
+
+
+class TestDatetimeIndexing(object):
+ """
+ Also test support for datetime64[ns] in Series / DataFrame
+ """
+
+ def setup_method(self, method):
+ dti = DatetimeIndex(start=datetime(2005, 1, 1),
+ end=datetime(2005, 1, 10), freq='Min')
+ self.series = Series(np.random.rand(len(dti)), dti)
+
+ def test_fancy_getitem(self):
+ dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005, 1, 1),
+ end=datetime(2010, 1, 1))
+
+ s = Series(np.arange(len(dti)), index=dti)
+
+ assert s[48] == 48
+ assert s['1/2/2009'] == 48
+ assert s['2009-1-2'] == 48
+ assert s[datetime(2009, 1, 2)] == 48
+ assert s[Timestamp(datetime(2009, 1, 2))] == 48
+ pytest.raises(KeyError, s.__getitem__, '2009-1-3')
+
+ assert_series_equal(s['3/6/2009':'2009-06-05'],
+ s[datetime(2009, 3, 6):datetime(2009, 6, 5)])
+
+ def test_fancy_setitem(self):
+ dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005, 1, 1),
+ end=datetime(2010, 1, 1))
+
+ s = Series(np.arange(len(dti)), index=dti)
+ s[48] = -1
+ assert s[48] == -1
+ s['1/2/2009'] = -2
+ assert s[48] == -2
+ s['1/2/2009':'2009-06-05'] = -3
+ assert (s[48:54] == -3).all()
+
+ def test_dti_snap(self):
+ dti = DatetimeIndex(['1/1/2002', '1/2/2002', '1/3/2002', '1/4/2002',
+ '1/5/2002', '1/6/2002', '1/7/2002'], freq='D')
+
+ res = dti.snap(freq='W-MON')
+ exp = date_range('12/31/2001', '1/7/2002', freq='w-mon')
+ exp = exp.repeat([3, 4])
+ assert (res == exp).all()
+
+ res = dti.snap(freq='B')
+
+ exp = date_range('1/1/2002', '1/7/2002', freq='b')
+ exp = exp.repeat([1, 1, 1, 2, 2])
+ assert (res == exp).all()
+
+ def test_dti_reset_index_round_trip(self):
+ dti = DatetimeIndex(start='1/1/2001', end='6/1/2001', freq='D')
+ d1 = DataFrame({'v': np.random.rand(len(dti))}, index=dti)
+ d2 = d1.reset_index()
+ assert d2.dtypes[0] == np.dtype('M8[ns]')
+ d3 = d2.set_index('index')
+ assert_frame_equal(d1, d3, check_names=False)
+
+ # #2329
+ stamp = datetime(2012, 11, 22)
+ df = DataFrame([[stamp, 12.1]], columns=['Date', 'Value'])
+ df = df.set_index('Date')
+
+ assert df.index[0] == stamp
+ assert df.reset_index()['Date'][0] == stamp
+
+ def test_series_set_value(self):
+ # #1561
+
+ dates = [datetime(2001, 1, 1), datetime(2001, 1, 2)]
+ index = DatetimeIndex(dates)
+
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ s = Series().set_value(dates[0], 1.)
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ s2 = s.set_value(dates[1], np.nan)
+
+ exp = Series([1., np.nan], index=index)
+
+ assert_series_equal(s2, exp)
+
+ # s = Series(index[:1], index[:1])
+ # s2 = s.set_value(dates[1], index[1])
+ # assert s2.values.dtype == 'M8[ns]'
+
+ @pytest.mark.slow
+ def test_slice_locs_indexerror(self):
+ times = [datetime(2000, 1, 1) + timedelta(minutes=i * 10)
+ for i in range(100000)]
+ s = Series(lrange(100000), times)
+ s.loc[datetime(1900, 1, 1):datetime(2100, 1, 1)]
+
+ def test_slicing_datetimes(self):
+ # GH 7523
+
+ # unique
+ df = DataFrame(np.arange(4., dtype='float64'),
+ index=[datetime(2001, 1, i, 10, 00)
+ for i in [1, 2, 3, 4]])
+ result = df.loc[datetime(2001, 1, 1, 10):]
+ assert_frame_equal(result, df)
+ result = df.loc[:datetime(2001, 1, 4, 10)]
+ assert_frame_equal(result, df)
+ result = df.loc[datetime(2001, 1, 1, 10):datetime(2001, 1, 4, 10)]
+ assert_frame_equal(result, df)
+
+ result = df.loc[datetime(2001, 1, 1, 11):]
+ expected = df.iloc[1:]
+ assert_frame_equal(result, expected)
+ result = df.loc['20010101 11':]
+ assert_frame_equal(result, expected)
+
+ # duplicates
+ df = pd.DataFrame(np.arange(5., dtype='float64'),
+ index=[datetime(2001, 1, i, 10, 00)
+ for i in [1, 2, 2, 3, 4]])
+
+ result = df.loc[datetime(2001, 1, 1, 10):]
+ assert_frame_equal(result, df)
+ result = df.loc[:datetime(2001, 1, 4, 10)]
+ assert_frame_equal(result, df)
+ result = df.loc[datetime(2001, 1, 1, 10):datetime(2001, 1, 4, 10)]
+ assert_frame_equal(result, df)
+
+ result = df.loc[datetime(2001, 1, 1, 11):]
+ expected = df.iloc[1:]
+ assert_frame_equal(result, expected)
+ result = df.loc['20010101 11':]
+ assert_frame_equal(result, expected)
+
+ def test_frame_datetime64_duplicated(self):
+ dates = date_range('2010-07-01', end='2010-08-05')
+
+ tst = DataFrame({'symbol': 'AAA', 'date': dates})
+ result = tst.duplicated(['date', 'symbol'])
+        assert (~result).all()
+
+ tst = DataFrame({'date': dates})
+ result = tst.duplicated()
+        assert (~result).all()
+
+ def test_getitem_setitem_datetime_tz_pytz(self):
+ from pytz import timezone as tz
+ from pandas import date_range
+
+ N = 50
+ # testing with timezone, GH #2785
+ rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
+ ts = Series(np.random.randn(N), index=rng)
+
+ # also test Timestamp tz handling, GH #2789
+ result = ts.copy()
+ result["1990-01-01 09:00:00+00:00"] = 0
+ result["1990-01-01 09:00:00+00:00"] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts.copy()
+ result["1990-01-01 03:00:00-06:00"] = 0
+ result["1990-01-01 03:00:00-06:00"] = ts[4]
+ assert_series_equal(result, ts)
+
+ # repeat with datetimes
+ result = ts.copy()
+ result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0
+ result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts.copy()
+
+ # comparison dates with datetime MUST be localized!
+ date = tz('US/Central').localize(datetime(1990, 1, 1, 3))
+ result[date] = 0
+ result[date] = ts[4]
+ assert_series_equal(result, ts)
+
+ def test_getitem_setitem_datetime_tz_dateutil(self):
+ from dateutil.tz import tzutc
+ from pandas._libs.tslibs.timezones import dateutil_gettz as gettz
+
+        # dateutil treats UTC specially, so map 'UTC' to tzutc() directly
+        tz = lambda x: tzutc() if x == 'UTC' else gettz(x)
+
+ from pandas import date_range
+
+ N = 50
+
+ # testing with timezone, GH #2785
+ rng = date_range('1/1/1990', periods=N, freq='H',
+ tz='America/New_York')
+ ts = Series(np.random.randn(N), index=rng)
+
+ # also test Timestamp tz handling, GH #2789
+ result = ts.copy()
+ result["1990-01-01 09:00:00+00:00"] = 0
+ result["1990-01-01 09:00:00+00:00"] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts.copy()
+ result["1990-01-01 03:00:00-06:00"] = 0
+ result["1990-01-01 03:00:00-06:00"] = ts[4]
+ assert_series_equal(result, ts)
+
+ # repeat with datetimes
+ result = ts.copy()
+ result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0
+ result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts.copy()
+ result[datetime(1990, 1, 1, 3, tzinfo=tz('America/Chicago'))] = 0
+ result[datetime(1990, 1, 1, 3, tzinfo=tz('America/Chicago'))] = ts[4]
+ assert_series_equal(result, ts)
+
+ def test_getitem_setitem_datetimeindex(self):
+ N = 50
+ # testing with timezone, GH #2785
+ rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
+ ts = Series(np.random.randn(N), index=rng)
+
+ result = ts["1990-01-01 04:00:00"]
+ expected = ts[4]
+ assert result == expected
+
+ result = ts.copy()
+ result["1990-01-01 04:00:00"] = 0
+ result["1990-01-01 04:00:00"] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts["1990-01-01 04:00:00":"1990-01-01 07:00:00"]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = 0
+ result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = ts[4:8]
+ assert_series_equal(result, ts)
+
+ lb = "1990-01-01 04:00:00"
+ rb = "1990-01-01 07:00:00"
+ # GH#18435 strings get a pass from tzawareness compat
+ result = ts[(ts.index >= lb) & (ts.index <= rb)]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ lb = "1990-01-01 04:00:00-0500"
+ rb = "1990-01-01 07:00:00-0500"
+ result = ts[(ts.index >= lb) & (ts.index <= rb)]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ # repeat all the above with naive datetimes
+ result = ts[datetime(1990, 1, 1, 4)]
+ expected = ts[4]
+ assert result == expected
+
+ result = ts.copy()
+ result[datetime(1990, 1, 1, 4)] = 0
+ result[datetime(1990, 1, 1, 4)] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = 0
+ result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = ts[4:8]
+ assert_series_equal(result, ts)
+
+ lb = datetime(1990, 1, 1, 4)
+ rb = datetime(1990, 1, 1, 7)
+ with pytest.raises(TypeError):
+ # tznaive vs tzaware comparison is invalid
+ # see GH#18376, GH#18162
+ ts[(ts.index >= lb) & (ts.index <= rb)]
+
+ lb = pd.Timestamp(datetime(1990, 1, 1, 4)).tz_localize(rng.tzinfo)
+ rb = pd.Timestamp(datetime(1990, 1, 1, 7)).tz_localize(rng.tzinfo)
+ result = ts[(ts.index >= lb) & (ts.index <= rb)]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts[ts.index[4]]
+ expected = ts[4]
+ assert result == expected
+
+ result = ts[ts.index[4:8]]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result[ts.index[4:8]] = 0
+ result[4:8] = ts[4:8]
+ assert_series_equal(result, ts)
+
+ # also test partial date slicing
+ result = ts["1990-01-02"]
+ expected = ts[24:48]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result["1990-01-02"] = 0
+ result["1990-01-02"] = ts[24:48]
+ assert_series_equal(result, ts)
+
+ def test_getitem_setitem_periodindex(self):
+ from pandas import period_range
+
+ N = 50
+ rng = period_range('1/1/1990', periods=N, freq='H')
+ ts = Series(np.random.randn(N), index=rng)
+
+ result = ts["1990-01-01 04"]
+ expected = ts[4]
+ assert result == expected
+
+ result = ts.copy()
+ result["1990-01-01 04"] = 0
+ result["1990-01-01 04"] = ts[4]
+ assert_series_equal(result, ts)
+
+ result = ts["1990-01-01 04":"1990-01-01 07"]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result["1990-01-01 04":"1990-01-01 07"] = 0
+ result["1990-01-01 04":"1990-01-01 07"] = ts[4:8]
+ assert_series_equal(result, ts)
+
+ lb = "1990-01-01 04"
+ rb = "1990-01-01 07"
+ result = ts[(ts.index >= lb) & (ts.index <= rb)]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ # GH 2782
+ result = ts[ts.index[4]]
+ expected = ts[4]
+ assert result == expected
+
+ result = ts[ts.index[4:8]]
+ expected = ts[4:8]
+ assert_series_equal(result, expected)
+
+ result = ts.copy()
+ result[ts.index[4:8]] = 0
+ result[4:8] = ts[4:8]
+ assert_series_equal(result, ts)
+
+ def test_getitem_median_slice_bug(self):
+ index = date_range('20090415', '20090519', freq='2B')
+ s = Series(np.random.randn(13), index=index)
+
+ indexer = [slice(6, 7, None)]
+ result = s[indexer]
+ expected = s[indexer[0]]
+ assert_series_equal(result, expected)
+
+ def test_datetime_indexing(self):
+ from pandas import date_range
+
+ index = date_range('1/1/2000', '1/7/2000')
+ index = index.repeat(3)
+
+ s = Series(len(index), index=index)
+ stamp = Timestamp('1/8/2000')
+
+ pytest.raises(KeyError, s.__getitem__, stamp)
+ s[stamp] = 0
+ assert s[stamp] == 0
+
+ # not monotonic
+ s = Series(len(index), index=index)
+ s = s[::-1]
+
+ pytest.raises(KeyError, s.__getitem__, stamp)
+ s[stamp] = 0
+ assert s[stamp] == 0
+
+
+class TestTimeSeriesDuplicates(object):
+
+ def setup_method(self, method):
+ dates = [datetime(2000, 1, 2), datetime(2000, 1, 2),
+ datetime(2000, 1, 2), datetime(2000, 1, 3),
+ datetime(2000, 1, 3), datetime(2000, 1, 3),
+ datetime(2000, 1, 4), datetime(2000, 1, 4),
+ datetime(2000, 1, 4), datetime(2000, 1, 5)]
+
+ self.dups = Series(np.random.randn(len(dates)), index=dates)
+
+ def test_constructor(self):
+ assert isinstance(self.dups, Series)
+ assert isinstance(self.dups.index, DatetimeIndex)
+
+ def test_is_unique_monotonic(self):
+ assert not self.dups.index.is_unique
+
+ def test_index_unique(self):
+ uniques = self.dups.index.unique()
+ expected = DatetimeIndex([datetime(2000, 1, 2), datetime(2000, 1, 3),
+ datetime(2000, 1, 4), datetime(2000, 1, 5)])
+ assert uniques.dtype == 'M8[ns]' # sanity
+ tm.assert_index_equal(uniques, expected)
+ assert self.dups.index.nunique() == 4
+
+ # #2563
+ assert isinstance(uniques, DatetimeIndex)
+
+ dups_local = self.dups.index.tz_localize('US/Eastern')
+ dups_local.name = 'foo'
+ result = dups_local.unique()
+ expected = DatetimeIndex(expected, name='foo')
+ expected = expected.tz_localize('US/Eastern')
+ assert result.tz is not None
+ assert result.name == 'foo'
+ tm.assert_index_equal(result, expected)
+
+        # NaT is excluded from nunique() by default (dropna=True)
+ arr = [1370745748 + t for t in range(20)] + [tslib.iNaT]
+ idx = DatetimeIndex(arr * 3)
+ tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
+ assert idx.nunique() == 20
+ assert idx.nunique(dropna=False) == 21
+
+ arr = [Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t)
+ for t in range(20)] + [NaT]
+ idx = DatetimeIndex(arr * 3)
+ tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
+ assert idx.nunique() == 20
+ assert idx.nunique(dropna=False) == 21
+
+ def test_index_dupes_contains(self):
+ d = datetime(2011, 12, 5, 20, 30)
+ ix = DatetimeIndex([d, d])
+ assert d in ix
+
+ def test_duplicate_dates_indexing(self):
+ ts = self.dups
+
+ uniques = ts.index.unique()
+ for date in uniques:
+ result = ts[date]
+
+ mask = ts.index == date
+ total = (ts.index == date).sum()
+ expected = ts[mask]
+ if total > 1:
+ assert_series_equal(result, expected)
+ else:
+ assert_almost_equal(result, expected[0])
+
+ cp = ts.copy()
+ cp[date] = 0
+ expected = Series(np.where(mask, 0, ts), index=ts.index)
+ assert_series_equal(cp, expected)
+
+ pytest.raises(KeyError, ts.__getitem__, datetime(2000, 1, 6))
+
+ # new index
+ ts[datetime(2000, 1, 6)] = 0
+ assert ts[datetime(2000, 1, 6)] == 0
+
+ def test_range_slice(self):
+ idx = DatetimeIndex(['1/1/2000', '1/2/2000', '1/2/2000', '1/3/2000',
+ '1/4/2000'])
+
+ ts = Series(np.random.randn(len(idx)), index=idx)
+
+ result = ts['1/2/2000':]
+ expected = ts[1:]
+ assert_series_equal(result, expected)
+
+ result = ts['1/2/2000':'1/3/2000']
+ expected = ts[1:4]
+ assert_series_equal(result, expected)
+
+ def test_groupby_average_dup_values(self):
+ result = self.dups.groupby(level=0).mean()
+ expected = self.dups.groupby(self.dups.index).mean()
+ assert_series_equal(result, expected)
+
+ def test_indexing_over_size_cutoff(self):
+ import datetime
+ # #1821
+
+ old_cutoff = _index._SIZE_CUTOFF
+ try:
+ _index._SIZE_CUTOFF = 1000
+
+ # create large list of non periodic datetime
+ dates = []
+ sec = datetime.timedelta(seconds=1)
+ half_sec = datetime.timedelta(microseconds=500000)
+ d = datetime.datetime(2011, 12, 5, 20, 30)
+ n = 1100
+ for i in range(n):
+ dates.append(d)
+ dates.append(d + sec)
+ dates.append(d + sec + half_sec)
+ dates.append(d + sec + sec + half_sec)
+ d += 3 * sec
+
+ # duplicate some values in the list
+ duplicate_positions = np.random.randint(0, len(dates) - 1, 20)
+ for p in duplicate_positions:
+ dates[p + 1] = dates[p]
+
+ df = DataFrame(np.random.randn(len(dates), 4),
+ index=dates,
+ columns=list('ABCD'))
+
+ pos = n * 3
+ timestamp = df.index[pos]
+ assert timestamp in df.index
+
+ # it works!
+ df.loc[timestamp]
+ assert len(df.loc[[timestamp]]) > 0
+ finally:
+ _index._SIZE_CUTOFF = old_cutoff
+
+ def test_indexing_unordered(self):
+ # GH 2437
+ rng = date_range(start='2011-01-01', end='2011-01-15')
+ ts = Series(np.random.rand(len(rng)), index=rng)
+ ts2 = pd.concat([ts[0:4], ts[-4:], ts[4:-4]])
+
+ for t in ts.index:
+ # TODO: unused?
+ s = str(t) # noqa
+
+ expected = ts[t]
+ result = ts2[t]
+ assert expected == result
+
+ # GH 3448 (ranges)
+ def compare(slobj):
+ result = ts2[slobj].copy()
+ result = result.sort_index()
+ expected = ts[slobj]
+ assert_series_equal(result, expected)
+
+ compare(slice('2011-01-01', '2011-01-15'))
+ compare(slice('2010-12-30', '2011-01-15'))
+ compare(slice('2011-01-01', '2011-01-16'))
+
+ # partial ranges
+ compare(slice('2011-01-01', '2011-01-6'))
+ compare(slice('2011-01-06', '2011-01-8'))
+ compare(slice('2011-01-06', '2011-01-12'))
+
+ # single values
+ result = ts2['2011'].sort_index()
+ expected = ts['2011']
+ assert_series_equal(result, expected)
+
+ # diff freq
+ rng = date_range(datetime(2005, 1, 1), periods=20, freq='M')
+ ts = Series(np.arange(len(rng)), index=rng)
+ ts = ts.take(np.random.permutation(20))
+
+ result = ts['2005']
+ for t in result.index:
+ assert t.year == 2005
+
+ def test_indexing(self):
+
+ idx = date_range("2001-1-1", periods=20, freq='M')
+ ts = Series(np.random.rand(len(idx)), index=idx)
+
+ # getting
+
+ # GH 3070, make sure semantics work on Series/Frame
+ expected = ts['2001']
+ expected.name = 'A'
+
+ df = DataFrame(dict(A=ts))
+ result = df['2001']['A']
+ assert_series_equal(expected, result)
+
+ # setting
+ ts['2001'] = 1
+ expected = ts['2001']
+ expected.name = 'A'
+
+ df.loc['2001', 'A'] = 1
+
+ result = df['2001']['A']
+ assert_series_equal(expected, result)
+
+ # GH3546 (not including times on the last day)
+ idx = date_range(start='2013-05-31 00:00', end='2013-05-31 23:00',
+ freq='H')
+ ts = Series(lrange(len(idx)), index=idx)
+ expected = ts['2013-05']
+ assert_series_equal(expected, ts)
+
+ idx = date_range(start='2013-05-31 00:00', end='2013-05-31 23:59',
+ freq='S')
+ ts = Series(lrange(len(idx)), index=idx)
+ expected = ts['2013-05']
+ assert_series_equal(expected, ts)
+
+ idx = [Timestamp('2013-05-31 00:00'),
+ Timestamp(datetime(2013, 5, 31, 23, 59, 59, 999999))]
+ ts = Series(lrange(len(idx)), index=idx)
+ expected = ts['2013']
+ assert_series_equal(expected, ts)
+
+ # GH14826, indexing with a seconds resolution string / datetime object
+ df = DataFrame(np.random.rand(5, 5),
+ columns=['open', 'high', 'low', 'close', 'volume'],
+ index=date_range('2012-01-02 18:01:00',
+ periods=5, tz='US/Central', freq='s'))
+ expected = df.loc[[df.index[2]]]
+
+ # this is a single date, so will raise
+        pytest.raises(KeyError, df.__getitem__, '2012-01-02 18:01:02')
+        pytest.raises(KeyError, df.__getitem__, df.index[2])
+
+
+class TestNatIndexing(object):
+
+ def setup_method(self, method):
+ self.series = Series(date_range('1/1/2000', periods=10))
+
+ # ---------------------------------------------------------------------
+ # NaT support
+
+ def test_set_none_nan(self):
+ self.series[3] = None
+ assert self.series[3] is NaT
+
+ self.series[3:5] = None
+ assert self.series[4] is NaT
+
+ self.series[5] = np.nan
+ assert self.series[5] is NaT
+
+ self.series[5:7] = np.nan
+ assert self.series[6] is NaT
+
+ def test_nat_operations(self):
+ # GH 8617
+ s = Series([0, pd.NaT], dtype='m8[ns]')
+ exp = s[0]
+ assert s.median() == exp
+ assert s.min() == exp
+ assert s.max() == exp
+
+ def test_round_nat(self):
+ # GH14940
+ s = Series([pd.NaT])
+ expected = Series(pd.NaT)
+ for method in ["round", "floor", "ceil"]:
+ round_method = getattr(s.dt, method)
+ for freq in ["s", "5s", "min", "5min", "h", "5h"]:
+ assert_series_equal(round_method(freq), expected)
diff --git a/pandas/tests/series/indexing/test_iloc.py b/pandas/tests/series/indexing/test_iloc.py
new file mode 100644
index 0000000000000..c86b50a46a665
--- /dev/null
+++ b/pandas/tests/series/indexing/test_iloc.py
@@ -0,0 +1,43 @@
+# coding=utf-8
+# pylint: disable-msg=E1101,W0612
+
+import numpy as np
+
+from pandas import Series
+
+from pandas.compat import lrange, range
+from pandas.util.testing import (assert_series_equal,
+ assert_almost_equal)
+
+JOIN_TYPES = ['inner', 'outer', 'left', 'right']
+
+from pandas.tests.series.common import TestData
+
+
+class TestIloc(TestData):
+
+ def test_iloc(self):
+ s = Series(np.random.randn(10), index=lrange(0, 20, 2))
+
+ for i in range(len(s)):
+ result = s.iloc[i]
+ exp = s[s.index[i]]
+ assert_almost_equal(result, exp)
+
+ # pass a slice
+ result = s.iloc[slice(1, 3)]
+ expected = s.loc[2:4]
+ assert_series_equal(result, expected)
+
+ # test slice is a view
+ result[:] = 0
+ assert (s[1:3] == 0).all()
+
+ # list of integers
+ result = s.iloc[[0, 2, 3, 4, 5]]
+ expected = s.reindex(s.index[[0, 2, 3, 4, 5]])
+ assert_series_equal(result, expected)
+
+ def test_iloc_nonunique(self):
+ s = Series([0, 1, 2], index=[0, 1, 0])
+ assert s.iloc[2] == 2
diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
new file mode 100644
index 0000000000000..8b0d5121486ec
--- /dev/null
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -0,0 +1,808 @@
+# coding=utf-8
+# pylint: disable-msg=E1101,W0612
+
+""" test get/set & misc """
+
+import pytest
+
+from datetime import timedelta
+
+import numpy as np
+import pandas as pd
+
+from pandas.core.dtypes.common import is_scalar
+from pandas import (Series, DataFrame, MultiIndex,
+ Timestamp, Timedelta, Categorical)
+from pandas.tseries.offsets import BDay
+
+from pandas.compat import lrange, range
+
+from pandas.util.testing import (assert_series_equal)
+import pandas.util.testing as tm
+
+from pandas.tests.series.common import TestData
+
+JOIN_TYPES = ['inner', 'outer', 'left', 'right']
+
+
+class TestMisc(TestData):
+
+ def test_getitem_setitem_ellipsis(self):
+ s = Series(np.random.randn(10))
+
+ np.fix(s)
+
+ result = s[...]
+ assert_series_equal(result, s)
+
+ s[...] = 5
+ assert (result == 5).all()
+
+ def test_pop(self):
+ # GH 6600
+ df = DataFrame({'A': 0, 'B': np.arange(5, dtype='int64'), 'C': 0, })
+ k = df.iloc[4]
+
+ result = k.pop('B')
+ assert result == 4
+
+ expected = Series([0, 0], index=['A', 'C'], name=4)
+ assert_series_equal(k, expected)
+
+ def test_getitem_get(self):
+ idx1 = self.series.index[5]
+ idx2 = self.objSeries.index[5]
+
+ assert self.series[idx1] == self.series.get(idx1)
+ assert self.objSeries[idx2] == self.objSeries.get(idx2)
+
+ assert self.series[idx1] == self.series[5]
+ assert self.objSeries[idx2] == self.objSeries[5]
+
+ assert self.series.get(-1) == self.series.get(self.series.index[-1])
+ assert self.series[5] == self.series.get(self.series.index[5])
+
+ # missing
+ d = self.ts.index[0] - BDay()
+ pytest.raises(KeyError, self.ts.__getitem__, d)
+
+ # None
+ # GH 5652
+ for s in [Series(), Series(index=list('abc'))]:
+ result = s.get(None)
+ assert result is None
+
+ def test_getitem_int64(self):
+ idx = np.int64(5)
+ assert self.ts[idx] == self.ts[5]
+
+ def test_getitem_fancy(self):
+ slice1 = self.series[[1, 2, 3]]
+ slice2 = self.objSeries[[1, 2, 3]]
+ assert self.series.index[2] == slice1.index[1]
+ assert self.objSeries.index[2] == slice2.index[1]
+ assert self.series[2] == slice1[1]
+ assert self.objSeries[2] == slice2[1]
+
+ def test_getitem_generator(self):
+ gen = (x > 0 for x in self.series)
+ result = self.series[gen]
+ result2 = self.series[iter(self.series > 0)]
+ expected = self.series[self.series > 0]
+ assert_series_equal(result, expected)
+ assert_series_equal(result2, expected)
+
+ def test_type_promotion(self):
+ # GH12599
+ s = pd.Series()
+ s["a"] = pd.Timestamp("2016-01-01")
+ s["b"] = 3.0
+ s["c"] = "foo"
+ expected = Series([pd.Timestamp("2016-01-01"), 3.0, "foo"],
+ index=["a", "b", "c"])
+ assert_series_equal(s, expected)
+
+ @pytest.mark.parametrize(
+ 'result_1, duplicate_item, expected_1',
+ [
+ [
+ pd.Series({1: 12, 2: [1, 2, 2, 3]}), pd.Series({1: 313}),
+ pd.Series({1: 12, }, dtype=object),
+ ],
+ [
+ pd.Series({1: [1, 2, 3], 2: [1, 2, 2, 3]}),
+ pd.Series({1: [1, 2, 3]}), pd.Series({1: [1, 2, 3], }),
+ ],
+ ])
+ def test_getitem_with_duplicates_indices(
+ self, result_1, duplicate_item, expected_1):
+ # GH 17610
+ result = result_1.append(duplicate_item)
+ expected = expected_1.append(duplicate_item)
+ assert_series_equal(result[1], expected)
+ assert result[2] == result_1[2]
+
+ def test_getitem_out_of_bounds(self):
+ # don't segfault, GH #495
+ pytest.raises(IndexError, self.ts.__getitem__, len(self.ts))
+
+ # GH #917
+ s = Series([])
+ pytest.raises(IndexError, s.__getitem__, -1)
+
+ def test_getitem_setitem_integers(self):
+ # caused bug without test
+ s = Series([1, 2, 3], ['a', 'b', 'c'])
+
+ assert s.iloc[0] == s['a']
+ s.iloc[0] = 5
+ tm.assert_almost_equal(s['a'], 5)
+
+ def test_getitem_box_float64(self):
+ value = self.ts[5]
+ assert isinstance(value, np.float64)
+
+ def test_series_box_timestamp(self):
+ rng = pd.date_range('20090415', '20090519', freq='B')
+ ser = Series(rng)
+
+ assert isinstance(ser[5], pd.Timestamp)
+
+ rng = pd.date_range('20090415', '20090519', freq='B')
+ ser = Series(rng, index=rng)
+ assert isinstance(ser[5], pd.Timestamp)
+
+ assert isinstance(ser.iat[5], pd.Timestamp)
+
+ def test_getitem_ambiguous_keyerror(self):
+ s = Series(lrange(10), index=lrange(0, 20, 2))
+ pytest.raises(KeyError, s.__getitem__, 1)
+ pytest.raises(KeyError, s.loc.__getitem__, 1)
+
+ def test_getitem_unordered_dup(self):
+ obj = Series(lrange(5), index=['c', 'a', 'a', 'b', 'b'])
+ assert is_scalar(obj['c'])
+ assert obj['c'] == 0
+
+ def test_getitem_dups_with_missing(self):
+
+ # breaks reindex, so need to use .loc internally
+ # GH 4246
+ s = Series([1, 2, 3, 4], ['foo', 'bar', 'foo', 'bah'])
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ expected = s.loc[['foo', 'bar', 'bah', 'bam']]
+
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ result = s[['foo', 'bar', 'bah', 'bam']]
+ assert_series_equal(result, expected)
+
+ def test_getitem_dups(self):
+ s = Series(range(5), index=['A', 'A', 'B', 'C', 'C'], dtype=np.int64)
+ expected = Series([3, 4], index=['C', 'C'], dtype=np.int64)
+ result = s['C']
+ assert_series_equal(result, expected)
+
+ def test_getitem_dataframe(self):
+ rng = list(range(10))
+ s = pd.Series(10, index=rng)
+ df = pd.DataFrame(rng, index=rng)
+ pytest.raises(TypeError, s.__getitem__, df > 5)
+
+ def test_getitem_callable(self):
+ # GH 12533
+ s = pd.Series(4, index=list('ABCD'))
+ result = s[lambda x: 'A']
+ assert result == s.loc['A']
+
+ result = s[lambda x: ['A', 'B']]
+ tm.assert_series_equal(result, s.loc[['A', 'B']])
+
+ result = s[lambda x: [True, False, True, True]]
+ tm.assert_series_equal(result, s.iloc[[0, 2, 3]])
+
+ def test_setitem_ambiguous_keyerror(self):
+ s = Series(lrange(10), index=lrange(0, 20, 2))
+
+ # equivalent of an append
+ s2 = s.copy()
+ s2[1] = 5
+ expected = s.append(Series([5], index=[1]))
+ assert_series_equal(s2, expected)
+
+ s2 = s.copy()
+ s2.loc[1] = 5
+ expected = s.append(Series([5], index=[1]))
+ assert_series_equal(s2, expected)
+
+ def test_setitem_callable(self):
+ # GH 12533
+ s = pd.Series([1, 2, 3, 4], index=list('ABCD'))
+ s[lambda x: 'A'] = -1
+ tm.assert_series_equal(s, pd.Series([-1, 2, 3, 4], index=list('ABCD')))
+
+ def test_setitem_other_callable(self):
+ # GH 13299
+ inc = lambda x: x + 1
+
+ s = pd.Series([1, 2, -1, 4])
+ s[s < 0] = inc
+
+ expected = pd.Series([1, 2, inc, 4])
+ tm.assert_series_equal(s, expected)
+
+ def test_slice(self):
+ numSlice = self.series[10:20]
+ numSliceEnd = self.series[-10:]
+ objSlice = self.objSeries[10:20]
+
+ assert self.series.index[9] not in numSlice.index
+ assert self.objSeries.index[9] not in objSlice.index
+
+ assert len(numSlice) == len(numSlice.index)
+ assert self.series[numSlice.index[0]] == numSlice[numSlice.index[0]]
+
+ assert numSlice.index[1] == self.series.index[11]
+ assert tm.equalContents(numSliceEnd, np.array(self.series)[-10:])
+
+ # Test return view.
+ sl = self.series[10:20]
+ sl[:] = 0
+
+ assert (self.series[10:20] == 0).all()
+
+ def test_slice_can_reorder_not_uniquely_indexed(self):
+ s = Series(1, index=['a', 'a', 'b', 'b', 'c'])
+ s[::-1] # it works!
+
+ def test_setitem(self):
+ self.ts[self.ts.index[5]] = np.NaN
+ self.ts[[1, 2, 17]] = np.NaN
+ self.ts[6] = np.NaN
+ assert np.isnan(self.ts[6])
+ assert np.isnan(self.ts[2])
+ self.ts[np.isnan(self.ts)] = 5
+ assert not np.isnan(self.ts[2])
+
+ # caught this bug when writing tests
+ series = Series(tm.makeIntIndex(20).astype(float),
+ index=tm.makeIntIndex(20))
+
+ series[::2] = 0
+ assert (series[::2] == 0).all()
+
+ # set item that's not contained
+ s = self.series.copy()
+ s['foobar'] = 1
+
+ app = Series([1], index=['foobar'], name='series')
+ expected = self.series.append(app)
+ assert_series_equal(s, expected)
+
+ # Test for issue #10193
+ key = pd.Timestamp('2012-01-01')
+ series = pd.Series()
+ series[key] = 47
+ expected = pd.Series(47, [key])
+ assert_series_equal(series, expected)
+
+ series = pd.Series([], pd.DatetimeIndex([], freq='D'))
+ series[key] = 47
+ expected = pd.Series(47, pd.DatetimeIndex([key], freq='D'))
+ assert_series_equal(series, expected)
+
+ def test_setitem_dtypes(self):
+
+ # change dtypes
+ # GH 4463
+ expected = Series([np.nan, 2, 3])
+
+ s = Series([1, 2, 3])
+ s.iloc[0] = np.nan
+ assert_series_equal(s, expected)
+
+ s = Series([1, 2, 3])
+ s.loc[0] = np.nan
+ assert_series_equal(s, expected)
+
+ s = Series([1, 2, 3])
+ s[0] = np.nan
+ assert_series_equal(s, expected)
+
+ s = Series([False])
+ s.loc[0] = np.nan
+ assert_series_equal(s, Series([np.nan]))
+
+ s = Series([False, True])
+ s.loc[0] = np.nan
+ assert_series_equal(s, Series([np.nan, 1.0]))
+
+ def test_set_value(self):
+ idx = self.ts.index[10]
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ res = self.ts.set_value(idx, 0)
+ assert res is self.ts
+ assert self.ts[idx] == 0
+
+ # equiv
+ s = self.series.copy()
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ res = s.set_value('foobar', 0)
+ assert res is s
+ assert res.index[-1] == 'foobar'
+ assert res['foobar'] == 0
+
+ s = self.series.copy()
+ s.loc['foobar'] = 0
+ assert s.index[-1] == 'foobar'
+ assert s['foobar'] == 0
+
+ def test_setslice(self):
+ sl = self.ts[5:20]
+ assert len(sl) == len(sl.index)
+ assert sl.index.is_unique
+
+ def test_basic_getitem_setitem_corner(self):
+ # invalid tuples, e.g. self.ts[:, None] vs. self.ts[:, 2]
+ with tm.assert_raises_regex(ValueError, 'tuple-index'):
+ self.ts[:, 2]
+ with tm.assert_raises_regex(ValueError, 'tuple-index'):
+ self.ts[:, 2] = 2
+
+ # weird lists. [slice(0, 5)] will work but not two slices
+ result = self.ts[[slice(None, 5)]]
+ expected = self.ts[:5]
+ assert_series_equal(result, expected)
+
+ # OK
+ pytest.raises(Exception, self.ts.__getitem__,
+ [5, slice(None, None)])
+ pytest.raises(Exception, self.ts.__setitem__,
+ [5, slice(None, None)], 2)
+
+ def test_basic_getitem_with_labels(self):
+ indices = self.ts.index[[5, 10, 15]]
+
+ result = self.ts[indices]
+ expected = self.ts.reindex(indices)
+ assert_series_equal(result, expected)
+
+ result = self.ts[indices[0]:indices[2]]
+ expected = self.ts.loc[indices[0]:indices[2]]
+ assert_series_equal(result, expected)
+
+ # integer indexes, be careful
+ s = Series(np.random.randn(10), index=lrange(0, 20, 2))
+ inds = [0, 2, 5, 7, 8]
+ arr_inds = np.array([0, 2, 5, 7, 8])
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ result = s[inds]
+ expected = s.reindex(inds)
+ assert_series_equal(result, expected)
+
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ result = s[arr_inds]
+ expected = s.reindex(arr_inds)
+ assert_series_equal(result, expected)
+
+ # GH12089
+ # with tz for values
+ s = Series(pd.date_range("2011-01-01", periods=3, tz="US/Eastern"),
+ index=['a', 'b', 'c'])
+ expected = Timestamp('2011-01-01', tz='US/Eastern')
+ result = s.loc['a']
+ assert result == expected
+ result = s.iloc[0]
+ assert result == expected
+ result = s['a']
+ assert result == expected
+
+ def test_setitem_with_tz(self):
+ for tz in ['US/Eastern', 'UTC', 'Asia/Tokyo']:
+ orig = pd.Series(pd.date_range('2016-01-01', freq='H', periods=3,
+ tz=tz))
+ assert orig.dtype == 'datetime64[ns, {0}]'.format(tz)
+
+ # scalar
+ s = orig.copy()
+ s[1] = pd.Timestamp('2011-01-01', tz=tz)
+ exp = pd.Series([pd.Timestamp('2016-01-01 00:00', tz=tz),
+ pd.Timestamp('2011-01-01 00:00', tz=tz),
+ pd.Timestamp('2016-01-01 02:00', tz=tz)])
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.loc[1] = pd.Timestamp('2011-01-01', tz=tz)
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.iloc[1] = pd.Timestamp('2011-01-01', tz=tz)
+ tm.assert_series_equal(s, exp)
+
+ # vector
+ vals = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
+ pd.Timestamp('2012-01-01', tz=tz)], index=[1, 2])
+ assert vals.dtype == 'datetime64[ns, {0}]'.format(tz)
+
+ s[[1, 2]] = vals
+ exp = pd.Series([pd.Timestamp('2016-01-01 00:00', tz=tz),
+ pd.Timestamp('2011-01-01 00:00', tz=tz),
+ pd.Timestamp('2012-01-01 00:00', tz=tz)])
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.loc[[1, 2]] = vals
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.iloc[[1, 2]] = vals
+ tm.assert_series_equal(s, exp)
+
+ def test_setitem_with_tz_dst(self):
+ # GH XXX
+ tz = 'US/Eastern'
+ orig = pd.Series(pd.date_range('2016-11-06', freq='H', periods=3,
+ tz=tz))
+ assert orig.dtype == 'datetime64[ns, {0}]'.format(tz)
+
+ # scalar
+ s = orig.copy()
+ s[1] = pd.Timestamp('2011-01-01', tz=tz)
+ exp = pd.Series([pd.Timestamp('2016-11-06 00:00-04:00', tz=tz),
+ pd.Timestamp('2011-01-01 00:00-05:00', tz=tz),
+ pd.Timestamp('2016-11-06 01:00-05:00', tz=tz)])
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.loc[1] = pd.Timestamp('2011-01-01', tz=tz)
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.iloc[1] = pd.Timestamp('2011-01-01', tz=tz)
+ tm.assert_series_equal(s, exp)
+
+ # vector
+ vals = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
+ pd.Timestamp('2012-01-01', tz=tz)], index=[1, 2])
+ assert vals.dtype == 'datetime64[ns, {0}]'.format(tz)
+
+ s[[1, 2]] = vals
+ exp = pd.Series([pd.Timestamp('2016-11-06 00:00', tz=tz),
+ pd.Timestamp('2011-01-01 00:00', tz=tz),
+ pd.Timestamp('2012-01-01 00:00', tz=tz)])
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.loc[[1, 2]] = vals
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.iloc[[1, 2]] = vals
+ tm.assert_series_equal(s, exp)
+
+ def test_categorial_assigning_ops(self):
+ orig = Series(Categorical(["b", "b"], categories=["a", "b"]))
+ s = orig.copy()
+ s[:] = "a"
+ exp = Series(Categorical(["a", "a"], categories=["a", "b"]))
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s[1] = "a"
+ exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s[s.index > 0] = "a"
+ exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s[[False, True]] = "a"
+ exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
+ tm.assert_series_equal(s, exp)
+
+ s = orig.copy()
+ s.index = ["x", "y"]
+ s["y"] = "a"
+ exp = Series(Categorical(["b", "a"], categories=["a", "b"]),
+ index=["x", "y"])
+ tm.assert_series_equal(s, exp)
+
+ # ensure that one can set something to np.nan
+ s = Series(Categorical([1, 2, 3]))
+ exp = Series(Categorical([1, np.nan, 3], categories=[1, 2, 3]))
+ s[1] = np.nan
+ tm.assert_series_equal(s, exp)
+
+ def test_take(self):
+ s = Series([-1, 5, 6, 2, 4])
+
+ actual = s.take([1, 3, 4])
+ expected = Series([5, 2, 4], index=[1, 3, 4])
+ tm.assert_series_equal(actual, expected)
+
+ actual = s.take([-1, 3, 4])
+ expected = Series([4, 2, 4], index=[4, 3, 4])
+ tm.assert_series_equal(actual, expected)
+
+ pytest.raises(IndexError, s.take, [1, 10])
+ pytest.raises(IndexError, s.take, [2, 5])
+
+ with tm.assert_produces_warning(FutureWarning):
+ s.take([-1, 3, 4], convert=False)
+
+ def test_ix_setitem(self):
+ inds = self.series.index[[3, 4, 7]]
+
+ result = self.series.copy()
+ result.loc[inds] = 5
+
+ expected = self.series.copy()
+ expected[[3, 4, 7]] = 5
+ assert_series_equal(result, expected)
+
+ result.iloc[5:10] = 10
+ expected[5:10] = 10
+ assert_series_equal(result, expected)
+
+ # set slice with indices
+ d1, d2 = self.series.index[[5, 15]]
+ result.loc[d1:d2] = 6
+ expected[5:16] = 6 # because it's inclusive
+ assert_series_equal(result, expected)
+
+ # set index value
+ self.series.loc[d1] = 4
+ self.series.loc[d2] = 6
+ assert self.series[d1] == 4
+ assert self.series[d2] == 6
+
+ def test_setitem_na(self):
+ # these induce dtype changes
+ expected = Series([np.nan, 3, np.nan, 5, np.nan, 7, np.nan, 9, np.nan])
+ s = Series([2, 3, 4, 5, 6, 7, 8, 9, 10])
+ s[::2] = np.nan
+ assert_series_equal(s, expected)
+
+        # booleans get coerced to float
+ expected = Series([np.nan, 1, np.nan, 0])
+ s = Series([True, True, False, False])
+ s[::2] = np.nan
+ assert_series_equal(s, expected)
+
+ expected = Series([np.nan, np.nan, np.nan, np.nan, np.nan, 5, 6, 7, 8,
+ 9])
+ s = Series(np.arange(10))
+ s[:5] = np.nan
+ assert_series_equal(s, expected)
+
+ def test_basic_indexing(self):
+ s = Series(np.random.randn(5), index=['a', 'b', 'a', 'a', 'b'])
+
+ pytest.raises(IndexError, s.__getitem__, 5)
+ pytest.raises(IndexError, s.__setitem__, 5, 0)
+
+ pytest.raises(KeyError, s.__getitem__, 'c')
+
+ s = s.sort_index()
+
+ pytest.raises(IndexError, s.__getitem__, 5)
+ pytest.raises(IndexError, s.__setitem__, 5, 0)
+
+ def test_timedelta_assignment(self):
+ # GH 8209
+ s = Series([])
+ s.loc['B'] = timedelta(1)
+ tm.assert_series_equal(s, Series(Timedelta('1 days'), index=['B']))
+
+ s = s.reindex(s.index.insert(0, 'A'))
+ tm.assert_series_equal(s, Series(
+ [np.nan, Timedelta('1 days')], index=['A', 'B']))
+
+ result = s.fillna(timedelta(1))
+ expected = Series(Timedelta('1 days'), index=['A', 'B'])
+ tm.assert_series_equal(result, expected)
+
+ s.loc['A'] = timedelta(1)
+ tm.assert_series_equal(s, expected)
+
+ # GH 14155
+ s = Series(10 * [np.timedelta64(10, 'm')])
+ s.loc[[1, 2, 3]] = np.timedelta64(20, 'm')
+ expected = pd.Series(10 * [np.timedelta64(10, 'm')])
+ expected.loc[[1, 2, 3]] = pd.Timedelta(np.timedelta64(20, 'm'))
+ tm.assert_series_equal(s, expected)
+
+ def test_underlying_data_conversion(self):
+
+ # GH 4080
+ df = DataFrame({c: [1, 2, 3] for c in ['a', 'b', 'c']})
+ df.set_index(['a', 'b', 'c'], inplace=True)
+ s = Series([1], index=[(2, 2, 2)])
+ df['val'] = 0
+ df
+ df['val'].update(s)
+
+ expected = DataFrame(
+ dict(a=[1, 2, 3], b=[1, 2, 3], c=[1, 2, 3], val=[0, 1, 0]))
+ expected.set_index(['a', 'b', 'c'], inplace=True)
+ tm.assert_frame_equal(df, expected)
+
+ # GH 3970
+ # these are chained assignments as well
+ pd.set_option('chained_assignment', None)
+ df = DataFrame({"aa": range(5), "bb": [2.2] * 5})
+ df["cc"] = 0.0
+
+ ck = [True] * len(df)
+
+ df["bb"].iloc[0] = .13
+
+ # TODO: unused
+ df_tmp = df.iloc[ck] # noqa
+
+ df["bb"].iloc[0] = .15
+ assert df['bb'].iloc[0] == 0.15
+ pd.set_option('chained_assignment', 'raise')
+
+ # GH 3217
+ df = DataFrame(dict(a=[1, 3], b=[np.nan, 2]))
+ df['c'] = np.nan
+ df['c'].update(pd.Series(['foo'], index=[0]))
+
+ expected = DataFrame(dict(a=[1, 3], b=[np.nan, 2], c=['foo', np.nan]))
+ tm.assert_frame_equal(df, expected)
+
+ def test_preserveRefs(self):
+ seq = self.ts[[5, 10, 15]]
+ seq[1] = np.NaN
+ assert not np.isnan(self.ts[10])
+
+ def test_drop(self):
+
+ # unique
+ s = Series([1, 2], index=['one', 'two'])
+ expected = Series([1], index=['one'])
+ result = s.drop(['two'])
+ assert_series_equal(result, expected)
+ result = s.drop('two', axis='rows')
+ assert_series_equal(result, expected)
+
+ # non-unique
+ # GH 5248
+ s = Series([1, 1, 2], index=['one', 'two', 'one'])
+ expected = Series([1, 2], index=['one', 'one'])
+ result = s.drop(['two'], axis=0)
+ assert_series_equal(result, expected)
+ result = s.drop('two')
+ assert_series_equal(result, expected)
+
+ expected = Series([1], index=['two'])
+ result = s.drop(['one'])
+ assert_series_equal(result, expected)
+ result = s.drop('one')
+ assert_series_equal(result, expected)
+
+ # single string/tuple-like
+ s = Series(range(3), index=list('abc'))
+ pytest.raises(KeyError, s.drop, 'bc')
+ pytest.raises(KeyError, s.drop, ('a',))
+
+ # errors='ignore'
+ s = Series(range(3), index=list('abc'))
+ result = s.drop('bc', errors='ignore')
+ assert_series_equal(result, s)
+ result = s.drop(['a', 'd'], errors='ignore')
+ expected = s.iloc[1:]
+ assert_series_equal(result, expected)
+
+ # bad axis
+ pytest.raises(ValueError, s.drop, 'one', axis='columns')
+
+ # GH 8522
+ s = Series([2, 3], index=[True, False])
+ assert s.index.is_object()
+ result = s.drop(True)
+ expected = Series([3], index=[False])
+ assert_series_equal(result, expected)
+
+ # GH 16877
+ s = Series([2, 3], index=[0, 1])
+ with tm.assert_raises_regex(KeyError, 'not contained in axis'):
+ s.drop([False, True])
+
+ def test_select(self):
+
+ # deprecated: gh-12410
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ n = len(self.ts)
+ result = self.ts.select(lambda x: x >= self.ts.index[n // 2])
+ expected = self.ts.reindex(self.ts.index[n // 2:])
+ assert_series_equal(result, expected)
+
+ result = self.ts.select(lambda x: x.weekday() == 2)
+ expected = self.ts[self.ts.index.weekday == 2]
+ assert_series_equal(result, expected)
+
+ def test_cast_on_putmask(self):
+
+ # GH 2746
+
+ # need to upcast
+ s = Series([1, 2], index=[1, 2], dtype='int64')
+ s[[True, False]] = Series([0], index=[1], dtype='int64')
+ expected = Series([0, 2], index=[1, 2], dtype='int64')
+
+ assert_series_equal(s, expected)
+
+ def test_type_promote_putmask(self):
+
+ # GH8387: test that changing types does not break alignment
+ ts = Series(np.random.randn(100), index=np.arange(100, 0, -1)).round(5)
+ left, mask = ts.copy(), ts > 0
+ right = ts[mask].copy().map(str)
+ left[mask] = right
+ assert_series_equal(left, ts.map(lambda t: str(t) if t > 0 else t))
+
+ s = Series([0, 1, 2, 0])
+ mask = s > 0
+ s2 = s[mask].map(str)
+ s[mask] = s2
+ assert_series_equal(s, Series([0, '1', '2', 0]))
+
+ s = Series([0, 'foo', 'bar', 0])
+ mask = Series([False, True, True, False])
+ s2 = s[mask]
+ s[mask] = s2
+ assert_series_equal(s, Series([0, 'foo', 'bar', 0]))
+
+ def test_head_tail(self):
+ assert_series_equal(self.series.head(), self.series[:5])
+ assert_series_equal(self.series.head(0), self.series[0:0])
+ assert_series_equal(self.series.tail(), self.series[-5:])
+ assert_series_equal(self.series.tail(0), self.series[0:0])
+
+ def test_multilevel_preserve_name(self):
+ index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two',
+ 'three']],
+ labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+ [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+ names=['first', 'second'])
+ s = Series(np.random.randn(len(index)), index=index, name='sth')
+
+ result = s['foo']
+ result2 = s.loc['foo']
+ assert result.name == s.name
+ assert result2.name == s.name
+
+ def test_setitem_scalar_into_readonly_backing_data(self):
+ # GH14359: test that you cannot mutate a read only buffer
+
+ array = np.zeros(5)
+ array.flags.writeable = False # make the array immutable
+ series = Series(array)
+
+ for n in range(len(series)):
+ with pytest.raises(ValueError):
+ series[n] = 1
+
+ assert array[n] == 0
+
+ def test_setitem_slice_into_readonly_backing_data(self):
+ # GH14359: test that you cannot mutate a read only buffer
+
+ array = np.zeros(5)
+ array.flags.writeable = False # make the array immutable
+ series = Series(array)
+
+ with pytest.raises(ValueError):
+ series[1:3] = 1
+
+ assert not array.any()
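Both read-only-buffer tests above hinge on NumPy's `writeable` flag: once it is cleared, any write through the array (or, in the pandas version under test, through a Series viewing that buffer) raises `ValueError`. A standalone sketch of the NumPy side:

```python
import numpy as np

arr = np.zeros(5)
arr.flags.writeable = False  # freeze the underlying buffer

try:
    arr[0] = 1
    raised = False
except ValueError:
    # NumPy refuses to write into a read-only buffer
    raised = True
```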
diff --git a/pandas/tests/series/indexing/test_loc.py b/pandas/tests/series/indexing/test_loc.py
new file mode 100644
index 0000000000000..b2a835616cde2
--- /dev/null
+++ b/pandas/tests/series/indexing/test_loc.py
@@ -0,0 +1,148 @@
+# coding=utf-8
+# pylint: disable-msg=E1101,W0612
+
+import pytest
+
+import numpy as np
+import pandas as pd
+
+from pandas import (Series, Timestamp)
+
+from pandas.compat import lrange
+from pandas.util.testing import (assert_series_equal)
+
+from pandas.tests.series.common import TestData
+
+JOIN_TYPES = ['inner', 'outer', 'left', 'right']
+
+
+class TestLoc(TestData):
+
+ def test_loc_getitem(self):
+ inds = self.series.index[[3, 4, 7]]
+ assert_series_equal(self.series.loc[inds], self.series.reindex(inds))
+ assert_series_equal(self.series.iloc[5::2], self.series[5::2])
+
+ # slice with indices
+ d1, d2 = self.ts.index[[5, 15]]
+ result = self.ts.loc[d1:d2]
+ expected = self.ts.truncate(d1, d2)
+ assert_series_equal(result, expected)
+
+ # boolean
+ mask = self.series > self.series.median()
+ assert_series_equal(self.series.loc[mask], self.series[mask])
+
+ # ask for index value
+ assert self.ts.loc[d1] == self.ts[d1]
+ assert self.ts.loc[d2] == self.ts[d2]
+
+ def test_loc_getitem_not_monotonic(self):
+ d1, d2 = self.ts.index[[5, 15]]
+
+ ts2 = self.ts[::2][[1, 2, 0]]
+
+ pytest.raises(KeyError, ts2.loc.__getitem__, slice(d1, d2))
+ pytest.raises(KeyError, ts2.loc.__setitem__, slice(d1, d2), 0)
+
+ def test_loc_getitem_setitem_integer_slice_keyerrors(self):
+ s = Series(np.random.randn(10), index=lrange(0, 20, 2))
+
+ # this is OK
+ cp = s.copy()
+ cp.iloc[4:10] = 0
+ assert (cp.iloc[4:10] == 0).all()
+
+ # so is this
+ cp = s.copy()
+ cp.iloc[3:11] = 0
+ assert (cp.iloc[3:11] == 0).values.all()
+
+ result = s.iloc[2:6]
+ result2 = s.loc[3:11]
+ expected = s.reindex([4, 6, 8, 10])
+
+ assert_series_equal(result, expected)
+ assert_series_equal(result2, expected)
+
+ # non-monotonic, raise KeyError
+ s2 = s.iloc[lrange(5) + lrange(5, 10)[::-1]]
+ pytest.raises(KeyError, s2.loc.__getitem__, slice(3, 11))
+ pytest.raises(KeyError, s2.loc.__setitem__, slice(3, 11), 0)
+
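The slice assertions in the test above rest on the positional/label split for integer indexes: `.iloc` slices by position (half-open), while `.loc` slices by label (closed on both ends). A short sketch with an even-integer index like the test's:

```python
import numpy as np
import pandas as pd

# even integer labels 0, 2, ..., 18
s = pd.Series(np.arange(10.0), index=range(0, 20, 2))

assert s.iloc[2:6].index.tolist() == [4, 6, 8, 10]  # positions 2..5
assert s.loc[4:10].index.tolist() == [4, 6, 8, 10]  # labels 4..10, inclusive
```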
+ def test_loc_getitem_iterator(self):
+ idx = iter(self.series.index[:10])
+ result = self.series.loc[idx]
+ assert_series_equal(result, self.series[:10])
+
+ def test_loc_setitem_boolean(self):
+ mask = self.series > self.series.median()
+
+ result = self.series.copy()
+ result.loc[mask] = 0
+ expected = self.series.copy()
+ expected[mask] = 0
+ assert_series_equal(result, expected)
+
+ def test_loc_setitem_corner(self):
+ inds = list(self.series.index[[5, 8, 12]])
+ self.series.loc[inds] = 5
+ pytest.raises(Exception, self.series.loc.__setitem__,
+ inds + ['foo'], 5)
+
+ def test_basic_setitem_with_labels(self):
+ indices = self.ts.index[[5, 10, 15]]
+
+ cp = self.ts.copy()
+ exp = self.ts.copy()
+ cp[indices] = 0
+ exp.loc[indices] = 0
+ assert_series_equal(cp, exp)
+
+ cp = self.ts.copy()
+ exp = self.ts.copy()
+ cp[indices[0]:indices[2]] = 0
+ exp.loc[indices[0]:indices[2]] = 0
+ assert_series_equal(cp, exp)
+
+ # integer indexes, be careful
+ s = Series(np.random.randn(10), index=lrange(0, 20, 2))
+ inds = [0, 4, 6]
+ arr_inds = np.array([0, 4, 6])
+
+ cp = s.copy()
+ exp = s.copy()
+ cp[inds] = 0
+ exp.loc[inds] = 0
+ assert_series_equal(cp, exp)
+
+ cp = s.copy()
+ exp = s.copy()
+ cp[arr_inds] = 0
+ exp.loc[arr_inds] = 0
+ assert_series_equal(cp, exp)
+
+ inds_notfound = [0, 4, 5, 6]
+ arr_inds_notfound = np.array([0, 4, 5, 6])
+ pytest.raises(Exception, s.__setitem__, inds_notfound, 0)
+ pytest.raises(Exception, s.__setitem__, arr_inds_notfound, 0)
+
+ # GH12089
+ # with tz for values
+ s = Series(pd.date_range("2011-01-01", periods=3, tz="US/Eastern"),
+ index=['a', 'b', 'c'])
+ s2 = s.copy()
+ expected = Timestamp('2011-01-03', tz='US/Eastern')
+ s2.loc['a'] = expected
+ result = s2.loc['a']
+ assert result == expected
+
+ s2 = s.copy()
+ s2.iloc[0] = expected
+ result = s2.iloc[0]
+ assert result == expected
+
+ s2 = s.copy()
+ s2['a'] = expected
+ result = s2['a']
+ assert result == expected
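The `test_loc_setitem_boolean` case in test_loc.py above depends on `.loc` accepting a boolean mask for both reads and writes; a minimal standalone illustration:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])
mask = s > 2

# .loc accepts a boolean mask for both selection and assignment
assert s.loc[mask].tolist() == [3, 4]
s.loc[mask] = 0
assert s.tolist() == [1, 2, 0, 0]
```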
diff --git a/pandas/tests/series/indexing/test_numeric.py b/pandas/tests/series/indexing/test_numeric.py
new file mode 100644
index 0000000000000..ee53ec572bcae
--- /dev/null
+++ b/pandas/tests/series/indexing/test_numeric.py
@@ -0,0 +1,223 @@
+# coding=utf-8
+# pylint: disable-msg=E1101,W0612
+
+import pytest
+
+import numpy as np
+import pandas as pd
+
+from pandas import (Index, Series, DataFrame)
+
+from pandas.compat import lrange, range
+from pandas.util.testing import (assert_series_equal)
+
+from pandas.tests.series.common import TestData
+import pandas.util.testing as tm
+
+
+class TestNumericIndexers(TestData):
+
+ def test_get(self):
+ # GH 6383
+ s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56, 45,
+ 51, 39, 55, 43, 54, 52, 51, 54]))
+
+ result = s.get(25, 0)
+ expected = 0
+ assert result == expected
+
+ s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56,
+ 45, 51, 39, 55, 43, 54, 52, 51, 54]),
+ index=pd.Float64Index(
+ [25.0, 36.0, 49.0, 64.0, 81.0, 100.0,
+ 121.0, 144.0, 169.0, 196.0, 1225.0,
+ 1296.0, 1369.0, 1444.0, 1521.0, 1600.0,
+ 1681.0, 1764.0, 1849.0, 1936.0],
+ dtype='object'))
+
+ result = s.get(25, 0)
+ expected = 43
+ assert result == expected
+
+ # GH 7407
+ # with a boolean accessor
+ df = pd.DataFrame({'i': [0] * 3, 'b': [False] * 3})
+ vc = df.i.value_counts()
+ result = vc.get(99, default='Missing')
+ assert result == 'Missing'
+
+ vc = df.b.value_counts()
+ result = vc.get(False, default='Missing')
+ assert result == 3
+
+ result = vc.get(True, default='Missing')
+ assert result == 'Missing'
+
+ def test_get_nan(self):
+ # GH 8569
+ s = pd.Float64Index(range(10)).to_series()
+ assert s.get(np.nan) is None
+ assert s.get(np.nan, default='Missing') == 'Missing'
+
+ # ensure that fixing the above hasn't broken get
+ # with multiple elements
+ idx = [20, 30]
+ assert_series_equal(s.get(idx),
+ Series([np.nan] * 2, index=idx))
+ idx = [np.nan, np.nan]
+ assert_series_equal(s.get(idx),
+ Series([np.nan] * 2, index=idx))
+
+ def test_delitem(self):
+ # GH 5542
+ # should delete the item inplace
+ s = Series(lrange(5))
+ del s[0]
+
+ expected = Series(lrange(1, 5), index=lrange(1, 5))
+ assert_series_equal(s, expected)
+
+ del s[1]
+ expected = Series(lrange(2, 5), index=lrange(2, 5))
+ assert_series_equal(s, expected)
+
+ # empty
+ s = Series()
+
+ def f():
+ del s[0]
+
+ pytest.raises(KeyError, f)
+
+ # only 1 left, del, add, del
+ s = Series(1)
+ del s[0]
+ assert_series_equal(s, Series(dtype='int64', index=Index(
+ [], dtype='int64')))
+ s[0] = 1
+ assert_series_equal(s, Series(1))
+ del s[0]
+ assert_series_equal(s, Series(dtype='int64', index=Index(
+ [], dtype='int64')))
+
+ # Index(dtype=object)
+ s = Series(1, index=['a'])
+ del s['a']
+ assert_series_equal(s, Series(dtype='int64', index=Index(
+ [], dtype='object')))
+ s['a'] = 1
+ assert_series_equal(s, Series(1, index=['a']))
+ del s['a']
+ assert_series_equal(s, Series(dtype='int64', index=Index(
+ [], dtype='object')))
+
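`test_delitem` above relies on `del s[label]` removing a row by label, in place; a minimal sketch:

```python
import pandas as pd

s = pd.Series([10, 20, 30])
del s[1]  # removes the row labeled 1 (not position 1 in general)

assert s.index.tolist() == [0, 2]
assert s.tolist() == [10, 30]
```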
+ def test_slice_float64(self):
+ values = np.arange(10., 50., 2)
+ index = Index(values)
+
+ start, end = values[[5, 15]]
+
+ s = Series(np.random.randn(20), index=index)
+
+ result = s[start:end]
+ expected = s.iloc[5:16]
+ assert_series_equal(result, expected)
+
+ result = s.loc[start:end]
+ assert_series_equal(result, expected)
+
+ df = DataFrame(np.random.randn(20, 3), index=index)
+
+ result = df[start:end]
+ expected = df.iloc[5:16]
+ tm.assert_frame_equal(result, expected)
+
+ result = df.loc[start:end]
+ tm.assert_frame_equal(result, expected)
+
+ def test_getitem_negative_out_of_bounds(self):
+ s = Series(tm.rands_array(5, 10), index=tm.rands_array(10, 10))
+
+ pytest.raises(IndexError, s.__getitem__, -11)
+ pytest.raises(IndexError, s.__setitem__, -11, 'foo')
+
+ def test_getitem_regression(self):
+ s = Series(lrange(5), index=lrange(5))
+ result = s[lrange(5)]
+ assert_series_equal(result, s)
+
+ def test_getitem_setitem_slice_bug(self):
+ s = Series(lrange(10), lrange(10))
+ result = s[-12:]
+ assert_series_equal(result, s)
+
+ result = s[-7:]
+ assert_series_equal(result, s[3:])
+
+ result = s[:-12]
+ assert_series_equal(result, s[:0])
+
+ s = Series(lrange(10), lrange(10))
+ s[-12:] = 0
+ assert (s == 0).all()
+
+ s[:-12] = 5
+ assert (s == 0).all()
+
+ def test_getitem_setitem_slice_integers(self):
+ s = Series(np.random.randn(8), index=[2, 4, 6, 8, 10, 12, 14, 16])
+
+ result = s[:4]
+ expected = s.reindex([2, 4, 6, 8])
+ assert_series_equal(result, expected)
+
+ s[:4] = 0
+ assert (s[:4] == 0).all()
+ assert not (s[4:] == 0).any()
+
+ def test_setitem_float_labels(self):
+ # note labels are floats
+ s = Series(['a', 'b', 'c'], index=[0, 0.5, 1])
+ tmp = s.copy()
+
+ s.loc[1] = 'zoo'
+ tmp.iloc[2] = 'zoo'
+
+ assert_series_equal(s, tmp)
+
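`test_setitem_float_labels` above turns on the fact that with float labels, integer lookups through `.loc` match the label, not the position; a standalone sketch:

```python
import pandas as pd

s = pd.Series(['a', 'b', 'c'], index=[0, 0.5, 1])

# with float labels, s.loc[1] targets the label 1.0, i.e. the last row,
# which is position 2 under .iloc
s.loc[1] = 'zoo'
assert s.tolist() == ['a', 'b', 'zoo']
```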
+ def test_slice_float_get_set(self):
+ pytest.raises(TypeError, lambda: self.ts[4.0:10.0])
+
+ def f():
+ self.ts[4.0:10.0] = 0
+
+ pytest.raises(TypeError, f)
+
+ pytest.raises(TypeError, self.ts.__getitem__, slice(4.5, 10.0))
+ pytest.raises(TypeError, self.ts.__setitem__, slice(4.5, 10.0), 0)
+
+ def test_slice_floats2(self):
+ s = Series(np.random.rand(10), index=np.arange(10, 20, dtype=float))
+
+ assert len(s.loc[12.0:]) == 8
+ assert len(s.loc[12.5:]) == 7
+
+ i = np.arange(10, 20, dtype=float)
+ i[2] = 12.2
+ s.index = i
+ assert len(s.loc[12.0:]) == 8
+ assert len(s.loc[12.5:]) == 7
+
+ def test_int_indexing(self):
+ s = Series(np.random.randn(6), index=[0, 0, 1, 1, 2, 2])
+
+ pytest.raises(KeyError, s.__getitem__, 5)
+
+ pytest.raises(KeyError, s.__getitem__, 'c')
+
+ # not monotonic
+ s = Series(np.random.randn(6), index=[2, 2, 0, 0, 1, 1])
+
+ pytest.raises(KeyError, s.__getitem__, 5)
+
+ pytest.raises(KeyError, s.__getitem__, 'c')
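`test_get` in test_numeric.py (and its twin in the removed file) depends on `Series.get` looking up by label and returning a default instead of raising on a miss:

```python
import pandas as pd

s = pd.Series([1, 2, 3], index=[10, 20, 30])

# .get looks up by label and falls back to the default on a miss
assert s.get(20) == 2
assert s.get(99, 'missing') == 'missing'
```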
diff --git a/pandas/tests/series/test_indexing.py b/pandas/tests/series/test_indexing.py
deleted file mode 100644
index e5c3d6f7d3ee1..0000000000000
--- a/pandas/tests/series/test_indexing.py
+++ /dev/null
@@ -1,2874 +0,0 @@
-# coding=utf-8
-# pylint: disable-msg=E1101,W0612
-
-import pytest
-
-from datetime import datetime, timedelta
-
-from numpy import nan
-import numpy as np
-import pandas as pd
-
-import pandas._libs.index as _index
-from pandas.core.dtypes.common import is_integer, is_scalar
-from pandas import (Index, Series, DataFrame, isna,
- date_range, NaT, MultiIndex,
- Timestamp, DatetimeIndex, Timedelta,
- Categorical)
-from pandas.core.indexing import IndexingError
-from pandas.tseries.offsets import BDay
-from pandas._libs import tslib
-
-from pandas.compat import lrange, range
-from pandas import compat
-from pandas.util.testing import (assert_series_equal,
- assert_almost_equal,
- assert_frame_equal)
-import pandas.util.testing as tm
-
-from pandas.tests.series.common import TestData
-
-JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-
-
-class TestSeriesIndexing(TestData):
-
- def test_get(self):
-
- # GH 6383
- s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56, 45,
- 51, 39, 55, 43, 54, 52, 51, 54]))
-
- result = s.get(25, 0)
- expected = 0
- assert result == expected
-
- s = Series(np.array([43, 48, 60, 48, 50, 51, 50, 45, 57, 48, 56,
- 45, 51, 39, 55, 43, 54, 52, 51, 54]),
- index=pd.Float64Index(
- [25.0, 36.0, 49.0, 64.0, 81.0, 100.0,
- 121.0, 144.0, 169.0, 196.0, 1225.0,
- 1296.0, 1369.0, 1444.0, 1521.0, 1600.0,
- 1681.0, 1764.0, 1849.0, 1936.0],
- dtype='object'))
-
- result = s.get(25, 0)
- expected = 43
- assert result == expected
-
- # GH 7407
- # with a boolean accessor
- df = pd.DataFrame({'i': [0] * 3, 'b': [False] * 3})
- vc = df.i.value_counts()
- result = vc.get(99, default='Missing')
- assert result == 'Missing'
-
- vc = df.b.value_counts()
- result = vc.get(False, default='Missing')
- assert result == 3
-
- result = vc.get(True, default='Missing')
- assert result == 'Missing'
-
- def test_get_nan(self):
- # GH 8569
- s = pd.Float64Index(range(10)).to_series()
- assert s.get(np.nan) is None
- assert s.get(np.nan, default='Missing') == 'Missing'
-
- # ensure that fixing the above hasn't broken get
- # with multiple elements
- idx = [20, 30]
- assert_series_equal(s.get(idx),
- Series([np.nan] * 2, index=idx))
- idx = [np.nan, np.nan]
- assert_series_equal(s.get(idx),
- Series([np.nan] * 2, index=idx))
-
- def test_delitem(self):
-
- # GH 5542
- # should delete the item inplace
- s = Series(lrange(5))
- del s[0]
-
- expected = Series(lrange(1, 5), index=lrange(1, 5))
- assert_series_equal(s, expected)
-
- del s[1]
- expected = Series(lrange(2, 5), index=lrange(2, 5))
- assert_series_equal(s, expected)
-
- # empty
- s = Series()
-
- def f():
- del s[0]
-
- pytest.raises(KeyError, f)
-
- # only 1 left, del, add, del
- s = Series(1)
- del s[0]
- assert_series_equal(s, Series(dtype='int64', index=Index(
- [], dtype='int64')))
- s[0] = 1
- assert_series_equal(s, Series(1))
- del s[0]
- assert_series_equal(s, Series(dtype='int64', index=Index(
- [], dtype='int64')))
-
- # Index(dtype=object)
- s = Series(1, index=['a'])
- del s['a']
- assert_series_equal(s, Series(dtype='int64', index=Index(
- [], dtype='object')))
- s['a'] = 1
- assert_series_equal(s, Series(1, index=['a']))
- del s['a']
- assert_series_equal(s, Series(dtype='int64', index=Index(
- [], dtype='object')))
-
- def test_getitem_setitem_ellipsis(self):
- s = Series(np.random.randn(10))
-
- np.fix(s)
-
- result = s[...]
- assert_series_equal(result, s)
-
- s[...] = 5
- assert (result == 5).all()
-
- def test_getitem_negative_out_of_bounds(self):
- s = Series(tm.rands_array(5, 10), index=tm.rands_array(10, 10))
-
- pytest.raises(IndexError, s.__getitem__, -11)
- pytest.raises(IndexError, s.__setitem__, -11, 'foo')
-
- def test_pop(self):
- # GH 6600
- df = DataFrame({'A': 0, 'B': np.arange(5, dtype='int64'), 'C': 0, })
- k = df.iloc[4]
-
- result = k.pop('B')
- assert result == 4
-
- expected = Series([0, 0], index=['A', 'C'], name=4)
- assert_series_equal(k, expected)
-
- def test_getitem_get(self):
- idx1 = self.series.index[5]
- idx2 = self.objSeries.index[5]
-
- assert self.series[idx1] == self.series.get(idx1)
- assert self.objSeries[idx2] == self.objSeries.get(idx2)
-
- assert self.series[idx1] == self.series[5]
- assert self.objSeries[idx2] == self.objSeries[5]
-
- assert self.series.get(-1) == self.series.get(self.series.index[-1])
- assert self.series[5] == self.series.get(self.series.index[5])
-
- # missing
- d = self.ts.index[0] - BDay()
- pytest.raises(KeyError, self.ts.__getitem__, d)
-
- # None
- # GH 5652
- for s in [Series(), Series(index=list('abc'))]:
- result = s.get(None)
- assert result is None
-
- def test_iloc(self):
-
- s = Series(np.random.randn(10), index=lrange(0, 20, 2))
-
- for i in range(len(s)):
- result = s.iloc[i]
- exp = s[s.index[i]]
- assert_almost_equal(result, exp)
-
- # pass a slice
- result = s.iloc[slice(1, 3)]
- expected = s.loc[2:4]
- assert_series_equal(result, expected)
-
- # test slice is a view
- result[:] = 0
- assert (s[1:3] == 0).all()
-
- # list of integers
- result = s.iloc[[0, 2, 3, 4, 5]]
- expected = s.reindex(s.index[[0, 2, 3, 4, 5]])
- assert_series_equal(result, expected)
-
- def test_iloc_nonunique(self):
- s = Series([0, 1, 2], index=[0, 1, 0])
- assert s.iloc[2] == 2
-
- def test_getitem_regression(self):
- s = Series(lrange(5), index=lrange(5))
- result = s[lrange(5)]
- assert_series_equal(result, s)
-
- def test_getitem_setitem_slice_bug(self):
- s = Series(lrange(10), lrange(10))
- result = s[-12:]
- assert_series_equal(result, s)
-
- result = s[-7:]
- assert_series_equal(result, s[3:])
-
- result = s[:-12]
- assert_series_equal(result, s[:0])
-
- s = Series(lrange(10), lrange(10))
- s[-12:] = 0
- assert (s == 0).all()
-
- s[:-12] = 5
- assert (s == 0).all()
-
- def test_getitem_int64(self):
- idx = np.int64(5)
- assert self.ts[idx] == self.ts[5]
-
- def test_getitem_fancy(self):
- slice1 = self.series[[1, 2, 3]]
- slice2 = self.objSeries[[1, 2, 3]]
- assert self.series.index[2] == slice1.index[1]
- assert self.objSeries.index[2] == slice2.index[1]
- assert self.series[2] == slice1[1]
- assert self.objSeries[2] == slice2[1]
-
- def test_getitem_boolean(self):
- s = self.series
- mask = s > s.median()
-
- # passing list is OK
- result = s[list(mask)]
- expected = s[mask]
- assert_series_equal(result, expected)
- tm.assert_index_equal(result.index, s.index[mask])
-
- def test_getitem_boolean_empty(self):
- s = Series([], dtype=np.int64)
- s.index.name = 'index_name'
- s = s[s.isna()]
- assert s.index.name == 'index_name'
- assert s.dtype == np.int64
-
- # GH5877
- # indexing with empty series
- s = Series(['A', 'B'])
- expected = Series(np.nan, index=['C'], dtype=object)
- result = s[Series(['C'], dtype=object)]
- assert_series_equal(result, expected)
-
- s = Series(['A', 'B'])
- expected = Series(dtype=object, index=Index([], dtype='int64'))
- result = s[Series([], dtype=object)]
- assert_series_equal(result, expected)
-
- # invalid because of the boolean indexer
- # that's empty or not-aligned
- def f():
- s[Series([], dtype=bool)]
-
- pytest.raises(IndexingError, f)
-
- def f():
- s[Series([True], dtype=bool)]
-
- pytest.raises(IndexingError, f)
-
- def test_getitem_generator(self):
- gen = (x > 0 for x in self.series)
- result = self.series[gen]
- result2 = self.series[iter(self.series > 0)]
- expected = self.series[self.series > 0]
- assert_series_equal(result, expected)
- assert_series_equal(result2, expected)
-
- def test_type_promotion(self):
- # GH12599
- s = pd.Series()
- s["a"] = pd.Timestamp("2016-01-01")
- s["b"] = 3.0
- s["c"] = "foo"
- expected = Series([pd.Timestamp("2016-01-01"), 3.0, "foo"],
- index=["a", "b", "c"])
- assert_series_equal(s, expected)
-
- def test_getitem_boolean_object(self):
- # using column from DataFrame
-
- s = self.series
- mask = s > s.median()
- omask = mask.astype(object)
-
- # getitem
- result = s[omask]
- expected = s[mask]
- assert_series_equal(result, expected)
-
- # setitem
- s2 = s.copy()
- cop = s.copy()
- cop[omask] = 5
- s2[mask] = 5
- assert_series_equal(cop, s2)
-
- # nans raise exception
- omask[5:10] = np.nan
- pytest.raises(Exception, s.__getitem__, omask)
- pytest.raises(Exception, s.__setitem__, omask, 5)
-
- def test_getitem_setitem_boolean_corner(self):
- ts = self.ts
- mask_shifted = ts.shift(1, freq=BDay()) > ts.median()
-
- # these used to raise...??
-
- pytest.raises(Exception, ts.__getitem__, mask_shifted)
- pytest.raises(Exception, ts.__setitem__, mask_shifted, 1)
- # ts[mask_shifted]
- # ts[mask_shifted] = 1
-
- pytest.raises(Exception, ts.loc.__getitem__, mask_shifted)
- pytest.raises(Exception, ts.loc.__setitem__, mask_shifted, 1)
- # ts.loc[mask_shifted]
- # ts.loc[mask_shifted] = 2
-
- def test_getitem_setitem_slice_integers(self):
- s = Series(np.random.randn(8), index=[2, 4, 6, 8, 10, 12, 14, 16])
-
- result = s[:4]
- expected = s.reindex([2, 4, 6, 8])
- assert_series_equal(result, expected)
-
- s[:4] = 0
- assert (s[:4] == 0).all()
- assert not (s[4:] == 0).any()
-
- def test_getitem_setitem_datetime_tz_pytz(self):
- from pytz import timezone as tz
- from pandas import date_range
-
- N = 50
- # testing with timezone, GH #2785
- rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
- ts = Series(np.random.randn(N), index=rng)
-
- # also test Timestamp tz handling, GH #2789
- result = ts.copy()
- result["1990-01-01 09:00:00+00:00"] = 0
- result["1990-01-01 09:00:00+00:00"] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts.copy()
- result["1990-01-01 03:00:00-06:00"] = 0
- result["1990-01-01 03:00:00-06:00"] = ts[4]
- assert_series_equal(result, ts)
-
- # repeat with datetimes
- result = ts.copy()
- result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0
- result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts.copy()
-
- # comparison dates with datetime MUST be localized!
- date = tz('US/Central').localize(datetime(1990, 1, 1, 3))
- result[date] = 0
- result[date] = ts[4]
- assert_series_equal(result, ts)
-
- def test_getitem_setitem_datetime_tz_dateutil(self):
- from dateutil.tz import tzutc
- from pandas._libs.tslibs.timezones import dateutil_gettz as gettz
-
- tz = lambda x: tzutc() if x == 'UTC' else gettz(
- x) # handle special case for utc in dateutil
-
- from pandas import date_range
-
- N = 50
-
- # testing with timezone, GH #2785
- rng = date_range('1/1/1990', periods=N, freq='H',
- tz='America/New_York')
- ts = Series(np.random.randn(N), index=rng)
-
- # also test Timestamp tz handling, GH #2789
- result = ts.copy()
- result["1990-01-01 09:00:00+00:00"] = 0
- result["1990-01-01 09:00:00+00:00"] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts.copy()
- result["1990-01-01 03:00:00-06:00"] = 0
- result["1990-01-01 03:00:00-06:00"] = ts[4]
- assert_series_equal(result, ts)
-
- # repeat with datetimes
- result = ts.copy()
- result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = 0
- result[datetime(1990, 1, 1, 9, tzinfo=tz('UTC'))] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts.copy()
- result[datetime(1990, 1, 1, 3, tzinfo=tz('America/Chicago'))] = 0
- result[datetime(1990, 1, 1, 3, tzinfo=tz('America/Chicago'))] = ts[4]
- assert_series_equal(result, ts)
-
- def test_getitem_setitem_datetimeindex(self):
- N = 50
- # testing with timezone, GH #2785
- rng = date_range('1/1/1990', periods=N, freq='H', tz='US/Eastern')
- ts = Series(np.random.randn(N), index=rng)
-
- result = ts["1990-01-01 04:00:00"]
- expected = ts[4]
- assert result == expected
-
- result = ts.copy()
- result["1990-01-01 04:00:00"] = 0
- result["1990-01-01 04:00:00"] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts["1990-01-01 04:00:00":"1990-01-01 07:00:00"]
- expected = ts[4:8]
- assert_series_equal(result, expected)
-
- result = ts.copy()
- result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = 0
- result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = ts[4:8]
- assert_series_equal(result, ts)
-
- lb = "1990-01-01 04:00:00"
- rb = "1990-01-01 07:00:00"
- # GH#18435 strings get a pass from tzawareness compat
- result = ts[(ts.index >= lb) & (ts.index <= rb)]
- expected = ts[4:8]
- assert_series_equal(result, expected)
-
- lb = "1990-01-01 04:00:00-0500"
- rb = "1990-01-01 07:00:00-0500"
- result = ts[(ts.index >= lb) & (ts.index <= rb)]
- expected = ts[4:8]
- assert_series_equal(result, expected)
-
- # repeat all the above with naive datetimes
- result = ts[datetime(1990, 1, 1, 4)]
- expected = ts[4]
- assert result == expected
-
- result = ts.copy()
- result[datetime(1990, 1, 1, 4)] = 0
- result[datetime(1990, 1, 1, 4)] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)]
- expected = ts[4:8]
- assert_series_equal(result, expected)
-
- result = ts.copy()
- result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = 0
- result[datetime(1990, 1, 1, 4):datetime(1990, 1, 1, 7)] = ts[4:8]
- assert_series_equal(result, ts)
-
- lb = datetime(1990, 1, 1, 4)
- rb = datetime(1990, 1, 1, 7)
- with pytest.raises(TypeError):
- # tznaive vs tzaware comparison is invalid
- # see GH#18376, GH#18162
- ts[(ts.index >= lb) & (ts.index <= rb)]
-
- lb = pd.Timestamp(datetime(1990, 1, 1, 4)).tz_localize(rng.tzinfo)
- rb = pd.Timestamp(datetime(1990, 1, 1, 7)).tz_localize(rng.tzinfo)
- result = ts[(ts.index >= lb) & (ts.index <= rb)]
- expected = ts[4:8]
- assert_series_equal(result, expected)
-
- result = ts[ts.index[4]]
- expected = ts[4]
- assert result == expected
-
- result = ts[ts.index[4:8]]
- expected = ts[4:8]
- assert_series_equal(result, expected)
-
- result = ts.copy()
- result[ts.index[4:8]] = 0
- result[4:8] = ts[4:8]
- assert_series_equal(result, ts)
-
- # also test partial date slicing
- result = ts["1990-01-02"]
- expected = ts[24:48]
- assert_series_equal(result, expected)
-
- result = ts.copy()
- result["1990-01-02"] = 0
- result["1990-01-02"] = ts[24:48]
- assert_series_equal(result, ts)
-
- def test_getitem_setitem_periodindex(self):
- from pandas import period_range
-
- N = 50
- rng = period_range('1/1/1990', periods=N, freq='H')
- ts = Series(np.random.randn(N), index=rng)
-
- result = ts["1990-01-01 04"]
- expected = ts[4]
- assert result == expected
-
- result = ts.copy()
- result["1990-01-01 04"] = 0
- result["1990-01-01 04"] = ts[4]
- assert_series_equal(result, ts)
-
- result = ts["1990-01-01 04":"1990-01-01 07"]
- expected = ts[4:8]
- assert_series_equal(result, expected)
-
- result = ts.copy()
- result["1990-01-01 04":"1990-01-01 07"] = 0
- result["1990-01-01 04":"1990-01-01 07"] = ts[4:8]
- assert_series_equal(result, ts)
-
- lb = "1990-01-01 04"
- rb = "1990-01-01 07"
- result = ts[(ts.index >= lb) & (ts.index <= rb)]
- expected = ts[4:8]
- assert_series_equal(result, expected)
-
- # GH 2782
- result = ts[ts.index[4]]
- expected = ts[4]
- assert result == expected
-
- result = ts[ts.index[4:8]]
- expected = ts[4:8]
- assert_series_equal(result, expected)
-
- result = ts.copy()
- result[ts.index[4:8]] = 0
- result[4:8] = ts[4:8]
- assert_series_equal(result, ts)
-
- @pytest.mark.parametrize(
- 'result_1, duplicate_item, expected_1',
- [
- [
- pd.Series({1: 12, 2: [1, 2, 2, 3]}), pd.Series({1: 313}),
- pd.Series({1: 12, }, dtype=object),
- ],
- [
- pd.Series({1: [1, 2, 3], 2: [1, 2, 2, 3]}),
- pd.Series({1: [1, 2, 3]}), pd.Series({1: [1, 2, 3], }),
- ],
- ])
- def test_getitem_with_duplicates_indices(
- self, result_1, duplicate_item, expected_1):
- # GH 17610
- result = result_1.append(duplicate_item)
- expected = expected_1.append(duplicate_item)
- assert_series_equal(result[1], expected)
- assert result[2] == result_1[2]
-
- def test_getitem_median_slice_bug(self):
- index = date_range('20090415', '20090519', freq='2B')
- s = Series(np.random.randn(13), index=index)
-
- indexer = [slice(6, 7, None)]
- result = s[indexer]
- expected = s[indexer[0]]
- assert_series_equal(result, expected)
-
- def test_getitem_out_of_bounds(self):
- # don't segfault, GH #495
- pytest.raises(IndexError, self.ts.__getitem__, len(self.ts))
-
- # GH #917
- s = Series([])
- pytest.raises(IndexError, s.__getitem__, -1)
-
- def test_getitem_setitem_integers(self):
- # caused bug without test
- s = Series([1, 2, 3], ['a', 'b', 'c'])
-
- assert s.iloc[0] == s['a']
- s.iloc[0] = 5
- tm.assert_almost_equal(s['a'], 5)
-
- def test_getitem_box_float64(self):
- value = self.ts[5]
- assert isinstance(value, np.float64)
-
- def test_series_box_timestamp(self):
- rng = pd.date_range('20090415', '20090519', freq='B')
- ser = Series(rng)
-
- assert isinstance(ser[5], pd.Timestamp)
-
- rng = pd.date_range('20090415', '20090519', freq='B')
- ser = Series(rng, index=rng)
- assert isinstance(ser[5], pd.Timestamp)
-
- assert isinstance(ser.iat[5], pd.Timestamp)
-
- def test_getitem_ambiguous_keyerror(self):
- s = Series(lrange(10), index=lrange(0, 20, 2))
- pytest.raises(KeyError, s.__getitem__, 1)
- pytest.raises(KeyError, s.loc.__getitem__, 1)
-
- def test_getitem_unordered_dup(self):
- obj = Series(lrange(5), index=['c', 'a', 'a', 'b', 'b'])
- assert is_scalar(obj['c'])
- assert obj['c'] == 0
-
- def test_getitem_dups_with_missing(self):
-
- # breaks reindex, so need to use .loc internally
- # GH 4246
- s = Series([1, 2, 3, 4], ['foo', 'bar', 'foo', 'bah'])
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- expected = s.loc[['foo', 'bar', 'bah', 'bam']]
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = s[['foo', 'bar', 'bah', 'bam']]
- assert_series_equal(result, expected)
-
- def test_getitem_dups(self):
- s = Series(range(5), index=['A', 'A', 'B', 'C', 'C'], dtype=np.int64)
- expected = Series([3, 4], index=['C', 'C'], dtype=np.int64)
- result = s['C']
- assert_series_equal(result, expected)
-
- def test_getitem_dataframe(self):
- rng = list(range(10))
- s = pd.Series(10, index=rng)
- df = pd.DataFrame(rng, index=rng)
- pytest.raises(TypeError, s.__getitem__, df > 5)
-
- def test_getitem_callable(self):
- # GH 12533
- s = pd.Series(4, index=list('ABCD'))
- result = s[lambda x: 'A']
- assert result == s.loc['A']
-
- result = s[lambda x: ['A', 'B']]
- tm.assert_series_equal(result, s.loc[['A', 'B']])
-
- result = s[lambda x: [True, False, True, True]]
- tm.assert_series_equal(result, s.iloc[[0, 2, 3]])
-
- def test_setitem_ambiguous_keyerror(self):
- s = Series(lrange(10), index=lrange(0, 20, 2))
-
- # equivalent of an append
- s2 = s.copy()
- s2[1] = 5
- expected = s.append(Series([5], index=[1]))
- assert_series_equal(s2, expected)
-
- s2 = s.copy()
- s2.loc[1] = 5
- expected = s.append(Series([5], index=[1]))
- assert_series_equal(s2, expected)
-
- def test_setitem_float_labels(self):
- # note labels are floats
- s = Series(['a', 'b', 'c'], index=[0, 0.5, 1])
- tmp = s.copy()
-
- s.loc[1] = 'zoo'
- tmp.iloc[2] = 'zoo'
-
- assert_series_equal(s, tmp)
-
- def test_setitem_callable(self):
- # GH 12533
- s = pd.Series([1, 2, 3, 4], index=list('ABCD'))
- s[lambda x: 'A'] = -1
- tm.assert_series_equal(s, pd.Series([-1, 2, 3, 4], index=list('ABCD')))
-
- def test_setitem_other_callable(self):
- # GH 13299
- inc = lambda x: x + 1
-
- s = pd.Series([1, 2, -1, 4])
- s[s < 0] = inc
-
- expected = pd.Series([1, 2, inc, 4])
- tm.assert_series_equal(s, expected)
-
- def test_slice(self):
- numSlice = self.series[10:20]
- numSliceEnd = self.series[-10:]
- objSlice = self.objSeries[10:20]
-
- assert self.series.index[9] not in numSlice.index
- assert self.objSeries.index[9] not in objSlice.index
-
- assert len(numSlice) == len(numSlice.index)
- assert self.series[numSlice.index[0]] == numSlice[numSlice.index[0]]
-
- assert numSlice.index[1] == self.series.index[11]
- assert tm.equalContents(numSliceEnd, np.array(self.series)[-10:])
-
- # Test return view.
- sl = self.series[10:20]
- sl[:] = 0
-
- assert (self.series[10:20] == 0).all()
-
- def test_slice_can_reorder_not_uniquely_indexed(self):
- s = Series(1, index=['a', 'a', 'b', 'b', 'c'])
- s[::-1] # it works!
-
- def test_slice_float_get_set(self):
-
- pytest.raises(TypeError, lambda: self.ts[4.0:10.0])
-
- def f():
- self.ts[4.0:10.0] = 0
-
- pytest.raises(TypeError, f)
-
- pytest.raises(TypeError, self.ts.__getitem__, slice(4.5, 10.0))
- pytest.raises(TypeError, self.ts.__setitem__, slice(4.5, 10.0), 0)
-
- def test_slice_floats2(self):
- s = Series(np.random.rand(10), index=np.arange(10, 20, dtype=float))
-
- assert len(s.loc[12.0:]) == 8
- assert len(s.loc[12.5:]) == 7
-
- i = np.arange(10, 20, dtype=float)
- i[2] = 12.2
- s.index = i
- assert len(s.loc[12.0:]) == 8
- assert len(s.loc[12.5:]) == 7
-
- def test_slice_float64(self):
-
- values = np.arange(10., 50., 2)
- index = Index(values)
-
- start, end = values[[5, 15]]
-
- s = Series(np.random.randn(20), index=index)
-
- result = s[start:end]
- expected = s.iloc[5:16]
- assert_series_equal(result, expected)
-
- result = s.loc[start:end]
- assert_series_equal(result, expected)
-
- df = DataFrame(np.random.randn(20, 3), index=index)
-
- result = df[start:end]
- expected = df.iloc[5:16]
- tm.assert_frame_equal(result, expected)
-
- result = df.loc[start:end]
- tm.assert_frame_equal(result, expected)
-
- def test_setitem(self):
- self.ts[self.ts.index[5]] = np.NaN
- self.ts[[1, 2, 17]] = np.NaN
- self.ts[6] = np.NaN
- assert np.isnan(self.ts[6])
- assert np.isnan(self.ts[2])
- self.ts[np.isnan(self.ts)] = 5
- assert not np.isnan(self.ts[2])
-
- # caught this bug when writing tests
- series = Series(tm.makeIntIndex(20).astype(float),
- index=tm.makeIntIndex(20))
-
- series[::2] = 0
- assert (series[::2] == 0).all()
-
- # set item that's not contained
- s = self.series.copy()
- s['foobar'] = 1
-
- app = Series([1], index=['foobar'], name='series')
- expected = self.series.append(app)
- assert_series_equal(s, expected)
-
- # Test for issue #10193
- key = pd.Timestamp('2012-01-01')
- series = pd.Series()
- series[key] = 47
- expected = pd.Series(47, [key])
- assert_series_equal(series, expected)
-
- series = pd.Series([], pd.DatetimeIndex([], freq='D'))
- series[key] = 47
- expected = pd.Series(47, pd.DatetimeIndex([key], freq='D'))
- assert_series_equal(series, expected)
-
- def test_setitem_dtypes(self):
-
- # change dtypes
- # GH 4463
- expected = Series([np.nan, 2, 3])
-
- s = Series([1, 2, 3])
- s.iloc[0] = np.nan
- assert_series_equal(s, expected)
-
- s = Series([1, 2, 3])
- s.loc[0] = np.nan
- assert_series_equal(s, expected)
-
- s = Series([1, 2, 3])
- s[0] = np.nan
- assert_series_equal(s, expected)
-
- s = Series([False])
- s.loc[0] = np.nan
- assert_series_equal(s, Series([np.nan]))
-
- s = Series([False, True])
- s.loc[0] = np.nan
- assert_series_equal(s, Series([np.nan, 1.0]))
-
- def test_set_value(self):
- idx = self.ts.index[10]
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- res = self.ts.set_value(idx, 0)
- assert res is self.ts
- assert self.ts[idx] == 0
-
- # equiv
- s = self.series.copy()
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- res = s.set_value('foobar', 0)
- assert res is s
- assert res.index[-1] == 'foobar'
- assert res['foobar'] == 0
-
- s = self.series.copy()
- s.loc['foobar'] = 0
- assert s.index[-1] == 'foobar'
- assert s['foobar'] == 0
-
- def test_setslice(self):
- sl = self.ts[5:20]
- assert len(sl) == len(sl.index)
- assert sl.index.is_unique
-
- def test_basic_getitem_setitem_corner(self):
- # invalid tuples, e.g. self.ts[:, None] vs. self.ts[:, 2]
- with tm.assert_raises_regex(ValueError, 'tuple-index'):
- self.ts[:, 2]
- with tm.assert_raises_regex(ValueError, 'tuple-index'):
- self.ts[:, 2] = 2
-
- # weird lists. [slice(0, 5)] will work but not two slices
- result = self.ts[[slice(None, 5)]]
- expected = self.ts[:5]
- assert_series_equal(result, expected)
-
- # OK
- pytest.raises(Exception, self.ts.__getitem__,
- [5, slice(None, None)])
- pytest.raises(Exception, self.ts.__setitem__,
- [5, slice(None, None)], 2)
-
- def test_basic_getitem_with_labels(self):
- indices = self.ts.index[[5, 10, 15]]
-
- result = self.ts[indices]
- expected = self.ts.reindex(indices)
- assert_series_equal(result, expected)
-
- result = self.ts[indices[0]:indices[2]]
- expected = self.ts.loc[indices[0]:indices[2]]
- assert_series_equal(result, expected)
-
- # integer indexes, be careful
- s = Series(np.random.randn(10), index=lrange(0, 20, 2))
- inds = [0, 2, 5, 7, 8]
- arr_inds = np.array([0, 2, 5, 7, 8])
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = s[inds]
- expected = s.reindex(inds)
- assert_series_equal(result, expected)
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- result = s[arr_inds]
- expected = s.reindex(arr_inds)
- assert_series_equal(result, expected)
-
- # GH12089
- # with tz for values
- s = Series(pd.date_range("2011-01-01", periods=3, tz="US/Eastern"),
- index=['a', 'b', 'c'])
- expected = Timestamp('2011-01-01', tz='US/Eastern')
- result = s.loc['a']
- assert result == expected
- result = s.iloc[0]
- assert result == expected
- result = s['a']
- assert result == expected
-
- def test_basic_setitem_with_labels(self):
- indices = self.ts.index[[5, 10, 15]]
-
- cp = self.ts.copy()
- exp = self.ts.copy()
- cp[indices] = 0
- exp.loc[indices] = 0
- assert_series_equal(cp, exp)
-
- cp = self.ts.copy()
- exp = self.ts.copy()
- cp[indices[0]:indices[2]] = 0
- exp.loc[indices[0]:indices[2]] = 0
- assert_series_equal(cp, exp)
-
- # integer indexes, be careful
- s = Series(np.random.randn(10), index=lrange(0, 20, 2))
- inds = [0, 4, 6]
- arr_inds = np.array([0, 4, 6])
-
- cp = s.copy()
- exp = s.copy()
- s[inds] = 0
- s.loc[inds] = 0
- assert_series_equal(cp, exp)
-
- cp = s.copy()
- exp = s.copy()
- s[arr_inds] = 0
- s.loc[arr_inds] = 0
- assert_series_equal(cp, exp)
-
- inds_notfound = [0, 4, 5, 6]
- arr_inds_notfound = np.array([0, 4, 5, 6])
- pytest.raises(Exception, s.__setitem__, inds_notfound, 0)
- pytest.raises(Exception, s.__setitem__, arr_inds_notfound, 0)
-
- # GH12089
- # with tz for values
- s = Series(pd.date_range("2011-01-01", periods=3, tz="US/Eastern"),
- index=['a', 'b', 'c'])
- s2 = s.copy()
- expected = Timestamp('2011-01-03', tz='US/Eastern')
- s2.loc['a'] = expected
- result = s2.loc['a']
- assert result == expected
-
- s2 = s.copy()
- s2.iloc[0] = expected
- result = s2.iloc[0]
- assert result == expected
-
- s2 = s.copy()
- s2['a'] = expected
- result = s2['a']
- assert result == expected
-
- def test_loc_getitem(self):
- inds = self.series.index[[3, 4, 7]]
- assert_series_equal(self.series.loc[inds], self.series.reindex(inds))
- assert_series_equal(self.series.iloc[5::2], self.series[5::2])
-
- # slice with indices
- d1, d2 = self.ts.index[[5, 15]]
- result = self.ts.loc[d1:d2]
- expected = self.ts.truncate(d1, d2)
- assert_series_equal(result, expected)
-
- # boolean
- mask = self.series > self.series.median()
- assert_series_equal(self.series.loc[mask], self.series[mask])
-
- # ask for index value
- assert self.ts.loc[d1] == self.ts[d1]
- assert self.ts.loc[d2] == self.ts[d2]
-
- def test_loc_getitem_not_monotonic(self):
- d1, d2 = self.ts.index[[5, 15]]
-
- ts2 = self.ts[::2][[1, 2, 0]]
-
- pytest.raises(KeyError, ts2.loc.__getitem__, slice(d1, d2))
- pytest.raises(KeyError, ts2.loc.__setitem__, slice(d1, d2), 0)
-
- def test_loc_getitem_setitem_integer_slice_keyerrors(self):
- s = Series(np.random.randn(10), index=lrange(0, 20, 2))
-
- # this is OK
- cp = s.copy()
- cp.iloc[4:10] = 0
- assert (cp.iloc[4:10] == 0).all()
-
- # so is this
- cp = s.copy()
- cp.iloc[3:11] = 0
- assert (cp.iloc[3:11] == 0).values.all()
-
- result = s.iloc[2:6]
- result2 = s.loc[3:11]
- expected = s.reindex([4, 6, 8, 10])
-
- assert_series_equal(result, expected)
- assert_series_equal(result2, expected)
-
- # non-monotonic, raise KeyError
- s2 = s.iloc[lrange(5) + lrange(5, 10)[::-1]]
- pytest.raises(KeyError, s2.loc.__getitem__, slice(3, 11))
- pytest.raises(KeyError, s2.loc.__setitem__, slice(3, 11), 0)
-
- def test_loc_getitem_iterator(self):
- idx = iter(self.series.index[:10])
- result = self.series.loc[idx]
- assert_series_equal(result, self.series[:10])
-
- def test_setitem_with_tz(self):
- for tz in ['US/Eastern', 'UTC', 'Asia/Tokyo']:
- orig = pd.Series(pd.date_range('2016-01-01', freq='H', periods=3,
- tz=tz))
- assert orig.dtype == 'datetime64[ns, {0}]'.format(tz)
-
- # scalar
- s = orig.copy()
- s[1] = pd.Timestamp('2011-01-01', tz=tz)
- exp = pd.Series([pd.Timestamp('2016-01-01 00:00', tz=tz),
- pd.Timestamp('2011-01-01 00:00', tz=tz),
- pd.Timestamp('2016-01-01 02:00', tz=tz)])
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.loc[1] = pd.Timestamp('2011-01-01', tz=tz)
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.iloc[1] = pd.Timestamp('2011-01-01', tz=tz)
- tm.assert_series_equal(s, exp)
-
- # vector
- vals = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
- pd.Timestamp('2012-01-01', tz=tz)], index=[1, 2])
- assert vals.dtype == 'datetime64[ns, {0}]'.format(tz)
-
- s[[1, 2]] = vals
- exp = pd.Series([pd.Timestamp('2016-01-01 00:00', tz=tz),
- pd.Timestamp('2011-01-01 00:00', tz=tz),
- pd.Timestamp('2012-01-01 00:00', tz=tz)])
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.loc[[1, 2]] = vals
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.iloc[[1, 2]] = vals
- tm.assert_series_equal(s, exp)
-
- def test_setitem_with_tz_dst(self):
- # GH XXX
- tz = 'US/Eastern'
- orig = pd.Series(pd.date_range('2016-11-06', freq='H', periods=3,
- tz=tz))
- assert orig.dtype == 'datetime64[ns, {0}]'.format(tz)
-
- # scalar
- s = orig.copy()
- s[1] = pd.Timestamp('2011-01-01', tz=tz)
- exp = pd.Series([pd.Timestamp('2016-11-06 00:00-04:00', tz=tz),
- pd.Timestamp('2011-01-01 00:00-05:00', tz=tz),
- pd.Timestamp('2016-11-06 01:00-05:00', tz=tz)])
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.loc[1] = pd.Timestamp('2011-01-01', tz=tz)
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.iloc[1] = pd.Timestamp('2011-01-01', tz=tz)
- tm.assert_series_equal(s, exp)
-
- # vector
- vals = pd.Series([pd.Timestamp('2011-01-01', tz=tz),
- pd.Timestamp('2012-01-01', tz=tz)], index=[1, 2])
- assert vals.dtype == 'datetime64[ns, {0}]'.format(tz)
-
- s[[1, 2]] = vals
- exp = pd.Series([pd.Timestamp('2016-11-06 00:00', tz=tz),
- pd.Timestamp('2011-01-01 00:00', tz=tz),
- pd.Timestamp('2012-01-01 00:00', tz=tz)])
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.loc[[1, 2]] = vals
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.iloc[[1, 2]] = vals
- tm.assert_series_equal(s, exp)
-
- def test_take(self):
- s = Series([-1, 5, 6, 2, 4])
-
- actual = s.take([1, 3, 4])
- expected = Series([5, 2, 4], index=[1, 3, 4])
- tm.assert_series_equal(actual, expected)
-
- actual = s.take([-1, 3, 4])
- expected = Series([4, 2, 4], index=[4, 3, 4])
- tm.assert_series_equal(actual, expected)
-
- pytest.raises(IndexError, s.take, [1, 10])
- pytest.raises(IndexError, s.take, [2, 5])
-
- with tm.assert_produces_warning(FutureWarning):
- s.take([-1, 3, 4], convert=False)
-
- def test_where_raise_on_error_deprecation(self):
-
- # gh-14968
- # deprecation of raise_on_error
- s = Series(np.random.randn(5))
- cond = s > 0
- with tm.assert_produces_warning(FutureWarning):
- s.where(cond, raise_on_error=True)
- with tm.assert_produces_warning(FutureWarning):
- s.mask(cond, raise_on_error=True)
-
- def test_where(self):
- s = Series(np.random.randn(5))
- cond = s > 0
-
- rs = s.where(cond).dropna()
- rs2 = s[cond]
- assert_series_equal(rs, rs2)
-
- rs = s.where(cond, -s)
- assert_series_equal(rs, s.abs())
-
- rs = s.where(cond)
- assert (s.shape == rs.shape)
- assert (rs is not s)
-
- # test alignment
- cond = Series([True, False, False, True, False], index=s.index)
- s2 = -(s.abs())
-
- expected = s2[cond].reindex(s2.index[:3]).reindex(s2.index)
- rs = s2.where(cond[:3])
- assert_series_equal(rs, expected)
-
- expected = s2.abs()
- expected.iloc[0] = s2[0]
- rs = s2.where(cond[:3], -s2)
- assert_series_equal(rs, expected)
-
- def test_where_error(self):
-
- s = Series(np.random.randn(5))
- cond = s > 0
-
- pytest.raises(ValueError, s.where, 1)
- pytest.raises(ValueError, s.where, cond[:3].values, -s)
-
- # GH 2745
- s = Series([1, 2])
- s[[True, False]] = [0, 1]
- expected = Series([0, 2])
- assert_series_equal(s, expected)
-
- # failures
- pytest.raises(ValueError, s.__setitem__, tuple([[[True, False]]]),
- [0, 2, 3])
- pytest.raises(ValueError, s.__setitem__, tuple([[[True, False]]]),
- [])
-
- def test_where_unsafe(self):
-
- # unsafe dtype changes
- for dtype in [np.int8, np.int16, np.int32, np.int64, np.float16,
- np.float32, np.float64]:
- s = Series(np.arange(10), dtype=dtype)
- mask = s < 5
- s[mask] = lrange(2, 7)
- expected = Series(lrange(2, 7) + lrange(5, 10), dtype=dtype)
- assert_series_equal(s, expected)
- assert s.dtype == expected.dtype
-
- # these are allowed operations, but are upcasted
- for dtype in [np.int64, np.float64]:
- s = Series(np.arange(10), dtype=dtype)
- mask = s < 5
- values = [2.5, 3.5, 4.5, 5.5, 6.5]
- s[mask] = values
- expected = Series(values + lrange(5, 10), dtype='float64')
- assert_series_equal(s, expected)
- assert s.dtype == expected.dtype
-
- # GH 9731
- s = Series(np.arange(10), dtype='int64')
- mask = s > 5
- values = [2.5, 3.5, 4.5, 5.5]
- s[mask] = values
- expected = Series(lrange(6) + values, dtype='float64')
- assert_series_equal(s, expected)
-
- # can't do these as we are forced to change the itemsize of the input
- # to something we cannot
- for dtype in [np.int8, np.int16, np.int32, np.float16, np.float32]:
- s = Series(np.arange(10), dtype=dtype)
- mask = s < 5
- values = [2.5, 3.5, 4.5, 5.5, 6.5]
- pytest.raises(Exception, s.__setitem__, tuple(mask), values)
-
- # GH3235
- s = Series(np.arange(10), dtype='int64')
- mask = s < 5
- s[mask] = lrange(2, 7)
- expected = Series(lrange(2, 7) + lrange(5, 10), dtype='int64')
- assert_series_equal(s, expected)
- assert s.dtype == expected.dtype
-
- s = Series(np.arange(10), dtype='int64')
- mask = s > 5
- s[mask] = [0] * 4
- expected = Series([0, 1, 2, 3, 4, 5] + [0] * 4, dtype='int64')
- assert_series_equal(s, expected)
-
- s = Series(np.arange(10))
- mask = s > 5
-
- def f():
- s[mask] = [5, 4, 3, 2, 1]
-
- pytest.raises(ValueError, f)
-
- def f():
- s[mask] = [0] * 5
-
- pytest.raises(ValueError, f)
-
- # dtype changes
- s = Series([1, 2, 3, 4])
- result = s.where(s > 2, np.nan)
- expected = Series([np.nan, np.nan, 3, 4])
- assert_series_equal(result, expected)
-
- # GH 4667
- # setting with None changes dtype
- s = Series(range(10)).astype(float)
- s[8] = None
- result = s[8]
- assert isna(result)
-
- s = Series(range(10)).astype(float)
- s[s > 8] = None
- result = s[isna(s)]
- expected = Series(np.nan, index=[9])
- assert_series_equal(result, expected)
-
- def test_where_array_like(self):
- # see gh-15414
- s = Series([1, 2, 3])
- cond = [False, True, True]
- expected = Series([np.nan, 2, 3])
- klasses = [list, tuple, np.array, Series]
-
- for klass in klasses:
- result = s.where(klass(cond))
- assert_series_equal(result, expected)
-
- def test_where_invalid_input(self):
- # see gh-15414: only boolean arrays accepted
- s = Series([1, 2, 3])
- msg = "Boolean array expected for the condition"
-
- conds = [
- [1, 0, 1],
- Series([2, 5, 7]),
- ["True", "False", "True"],
- [Timestamp("2017-01-01"),
- pd.NaT, Timestamp("2017-01-02")]
- ]
-
- for cond in conds:
- with tm.assert_raises_regex(ValueError, msg):
- s.where(cond)
-
- msg = "Array conditional must be same shape as self"
- with tm.assert_raises_regex(ValueError, msg):
- s.where([True])
-
- def test_where_ndframe_align(self):
- msg = "Array conditional must be same shape as self"
- s = Series([1, 2, 3])
-
- cond = [True]
- with tm.assert_raises_regex(ValueError, msg):
- s.where(cond)
-
- expected = Series([1, np.nan, np.nan])
-
- out = s.where(Series(cond))
- tm.assert_series_equal(out, expected)
-
- cond = np.array([False, True, False, True])
- with tm.assert_raises_regex(ValueError, msg):
- s.where(cond)
-
- expected = Series([np.nan, 2, np.nan])
-
- out = s.where(Series(cond))
- tm.assert_series_equal(out, expected)
-
- def test_where_setitem_invalid(self):
-
- # GH 2702
- # make sure correct exceptions are raised on invalid list assignment
-
- # slice
- s = Series(list('abc'))
-
- def f():
- s[0:3] = list(range(27))
-
- pytest.raises(ValueError, f)
-
- s[0:3] = list(range(3))
- expected = Series([0, 1, 2])
- assert_series_equal(s.astype(np.int64), expected, )
-
- # slice with step
- s = Series(list('abcdef'))
-
- def f():
- s[0:4:2] = list(range(27))
-
- pytest.raises(ValueError, f)
-
- s = Series(list('abcdef'))
- s[0:4:2] = list(range(2))
- expected = Series([0, 'b', 1, 'd', 'e', 'f'])
- assert_series_equal(s, expected)
-
- # neg slices
- s = Series(list('abcdef'))
-
- def f():
- s[:-1] = list(range(27))
-
- pytest.raises(ValueError, f)
-
- s[-3:-1] = list(range(2))
- expected = Series(['a', 'b', 'c', 0, 1, 'f'])
- assert_series_equal(s, expected)
-
- # list
- s = Series(list('abc'))
-
- def f():
- s[[0, 1, 2]] = list(range(27))
-
- pytest.raises(ValueError, f)
-
- s = Series(list('abc'))
-
- def f():
- s[[0, 1, 2]] = list(range(2))
-
- pytest.raises(ValueError, f)
-
- # scalar
- s = Series(list('abc'))
- s[0] = list(range(10))
- expected = Series([list(range(10)), 'b', 'c'])
- assert_series_equal(s, expected)
-
- def test_where_broadcast(self):
- # Test a variety of differently sized series
- for size in range(2, 6):
- # Test a variety of boolean indices
- for selection in [
- # First element should be set
- np.resize([True, False, False, False, False], size),
- # Set alternating elements]
- np.resize([True, False], size),
- # No element should be set
- np.resize([False], size)]:
-
- # Test a variety of different numbers as content
- for item in [2.0, np.nan, np.finfo(np.float).max,
- np.finfo(np.float).min]:
- # Test numpy arrays, lists and tuples as the input to be
- # broadcast
- for arr in [np.array([item]), [item], (item, )]:
- data = np.arange(size, dtype=float)
- s = Series(data)
- s[selection] = arr
- # Construct the expected series by taking the source
- # data or item based on the selection
- expected = Series([item if use_item else data[
- i] for i, use_item in enumerate(selection)])
- assert_series_equal(s, expected)
-
- s = Series(data)
- result = s.where(~selection, arr)
- assert_series_equal(result, expected)
-
- def test_where_inplace(self):
- s = Series(np.random.randn(5))
- cond = s > 0
-
- rs = s.copy()
-
- rs.where(cond, inplace=True)
- assert_series_equal(rs.dropna(), s[cond])
- assert_series_equal(rs, s.where(cond))
-
- rs = s.copy()
- rs.where(cond, -s, inplace=True)
- assert_series_equal(rs, s.where(cond, -s))
-
- def test_where_dups(self):
- # GH 4550
- # where crashes with dups in index
- s1 = Series(list(range(3)))
- s2 = Series(list(range(3)))
- comb = pd.concat([s1, s2])
- result = comb.where(comb < 2)
- expected = Series([0, 1, np.nan, 0, 1, np.nan],
- index=[0, 1, 2, 0, 1, 2])
- assert_series_equal(result, expected)
-
- # GH 4548
- # inplace updating not working with dups
- comb[comb < 1] = 5
- expected = Series([5, 1, 2, 5, 1, 2], index=[0, 1, 2, 0, 1, 2])
- assert_series_equal(comb, expected)
-
- comb[comb < 2] += 10
- expected = Series([5, 11, 2, 5, 11, 2], index=[0, 1, 2, 0, 1, 2])
- assert_series_equal(comb, expected)
-
- def test_where_datetime_conversion(self):
- s = Series(date_range('20130102', periods=2))
- expected = Series([10, 10])
- mask = np.array([False, False])
-
- rs = s.where(mask, [10, 10])
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, 10)
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, 10.0)
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, [10.0, 10.0])
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, [10.0, np.nan])
- expected = Series([10, None], dtype='object')
- assert_series_equal(rs, expected)
-
- # GH 15701
- timestamps = ['2016-12-31 12:00:04+00:00',
- '2016-12-31 12:00:04.010000+00:00']
- s = Series([pd.Timestamp(t) for t in timestamps])
- rs = s.where(Series([False, True]))
- expected = Series([pd.NaT, s[1]])
- assert_series_equal(rs, expected)
-
- def test_where_timedelta_coerce(self):
- s = Series([1, 2], dtype='timedelta64[ns]')
- expected = Series([10, 10])
- mask = np.array([False, False])
-
- rs = s.where(mask, [10, 10])
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, 10)
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, 10.0)
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, [10.0, 10.0])
- assert_series_equal(rs, expected)
-
- rs = s.where(mask, [10.0, np.nan])
- expected = Series([10, None], dtype='object')
- assert_series_equal(rs, expected)
-
- def test_mask(self):
- # compare with tested results in test_where
- s = Series(np.random.randn(5))
- cond = s > 0
-
- rs = s.where(~cond, np.nan)
- assert_series_equal(rs, s.mask(cond))
-
- rs = s.where(~cond)
- rs2 = s.mask(cond)
- assert_series_equal(rs, rs2)
-
- rs = s.where(~cond, -s)
- rs2 = s.mask(cond, -s)
- assert_series_equal(rs, rs2)
-
- cond = Series([True, False, False, True, False], index=s.index)
- s2 = -(s.abs())
- rs = s2.where(~cond[:3])
- rs2 = s2.mask(cond[:3])
- assert_series_equal(rs, rs2)
-
- rs = s2.where(~cond[:3], -s2)
- rs2 = s2.mask(cond[:3], -s2)
- assert_series_equal(rs, rs2)
-
- pytest.raises(ValueError, s.mask, 1)
- pytest.raises(ValueError, s.mask, cond[:3].values, -s)
-
- # dtype changes
- s = Series([1, 2, 3, 4])
- result = s.mask(s > 2, np.nan)
- expected = Series([1, 2, np.nan, np.nan])
- assert_series_equal(result, expected)
-
- def test_mask_broadcast(self):
- # GH 8801
- # copied from test_where_broadcast
- for size in range(2, 6):
- for selection in [
- # First element should be set
- np.resize([True, False, False, False, False], size),
- # Set alternating elements]
- np.resize([True, False], size),
- # No element should be set
- np.resize([False], size)]:
- for item in [2.0, np.nan, np.finfo(np.float).max,
- np.finfo(np.float).min]:
- for arr in [np.array([item]), [item], (item, )]:
- data = np.arange(size, dtype=float)
- s = Series(data)
- result = s.mask(selection, arr)
- expected = Series([item if use_item else data[
- i] for i, use_item in enumerate(selection)])
- assert_series_equal(result, expected)
-
- def test_mask_inplace(self):
- s = Series(np.random.randn(5))
- cond = s > 0
-
- rs = s.copy()
- rs.mask(cond, inplace=True)
- assert_series_equal(rs.dropna(), s[~cond])
- assert_series_equal(rs, s.mask(cond))
-
- rs = s.copy()
- rs.mask(cond, -s, inplace=True)
- assert_series_equal(rs, s.mask(cond, -s))
-
- def test_ix_setitem(self):
- inds = self.series.index[[3, 4, 7]]
-
- result = self.series.copy()
- result.loc[inds] = 5
-
- expected = self.series.copy()
- expected[[3, 4, 7]] = 5
- assert_series_equal(result, expected)
-
- result.iloc[5:10] = 10
- expected[5:10] = 10
- assert_series_equal(result, expected)
-
- # set slice with indices
- d1, d2 = self.series.index[[5, 15]]
- result.loc[d1:d2] = 6
- expected[5:16] = 6 # because it's inclusive
- assert_series_equal(result, expected)
-
- # set index value
- self.series.loc[d1] = 4
- self.series.loc[d2] = 6
- assert self.series[d1] == 4
- assert self.series[d2] == 6
-
- def test_where_numeric_with_string(self):
- # GH 9280
- s = pd.Series([1, 2, 3])
- w = s.where(s > 1, 'X')
-
- assert not is_integer(w[0])
- assert is_integer(w[1])
- assert is_integer(w[2])
- assert isinstance(w[0], str)
- assert w.dtype == 'object'
-
- w = s.where(s > 1, ['X', 'Y', 'Z'])
- assert not is_integer(w[0])
- assert is_integer(w[1])
- assert is_integer(w[2])
- assert isinstance(w[0], str)
- assert w.dtype == 'object'
-
- w = s.where(s > 1, np.array(['X', 'Y', 'Z']))
- assert not is_integer(w[0])
- assert is_integer(w[1])
- assert is_integer(w[2])
- assert isinstance(w[0], str)
- assert w.dtype == 'object'
-
- def test_setitem_boolean(self):
- mask = self.series > self.series.median()
-
- # similar indexed series
- result = self.series.copy()
- result[mask] = self.series * 2
- expected = self.series * 2
- assert_series_equal(result[mask], expected[mask])
-
- # needs alignment
- result = self.series.copy()
- result[mask] = (self.series * 2)[0:5]
- expected = (self.series * 2)[0:5].reindex_like(self.series)
- expected[-mask] = self.series[mask]
- assert_series_equal(result[mask], expected[mask])
-
- def test_ix_setitem_boolean(self):
- mask = self.series > self.series.median()
-
- result = self.series.copy()
- result.loc[mask] = 0
- expected = self.series
- expected[mask] = 0
- assert_series_equal(result, expected)
-
- def test_ix_setitem_corner(self):
- inds = list(self.series.index[[5, 8, 12]])
- self.series.loc[inds] = 5
- pytest.raises(Exception, self.series.loc.__setitem__,
- inds + ['foo'], 5)
-
- def test_get_set_boolean_different_order(self):
- ordered = self.series.sort_values()
-
- # setting
- copy = self.series.copy()
- copy[ordered > 0] = 0
-
- expected = self.series.copy()
- expected[expected > 0] = 0
-
- assert_series_equal(copy, expected)
-
- # getting
- sel = self.series[ordered > 0]
- exp = self.series[self.series > 0]
- assert_series_equal(sel, exp)
-
- def test_setitem_na(self):
- # these induce dtype changes
- expected = Series([np.nan, 3, np.nan, 5, np.nan, 7, np.nan, 9, np.nan])
- s = Series([2, 3, 4, 5, 6, 7, 8, 9, 10])
- s[::2] = np.nan
- assert_series_equal(s, expected)
-
- # gets coerced to float, right?
- expected = Series([np.nan, 1, np.nan, 0])
- s = Series([True, True, False, False])
- s[::2] = np.nan
- assert_series_equal(s, expected)
-
- expected = Series([np.nan, np.nan, np.nan, np.nan, np.nan, 5, 6, 7, 8,
- 9])
- s = Series(np.arange(10))
- s[:5] = np.nan
- assert_series_equal(s, expected)
-
- def test_basic_indexing(self):
- s = Series(np.random.randn(5), index=['a', 'b', 'a', 'a', 'b'])
-
- pytest.raises(IndexError, s.__getitem__, 5)
- pytest.raises(IndexError, s.__setitem__, 5, 0)
-
- pytest.raises(KeyError, s.__getitem__, 'c')
-
- s = s.sort_index()
-
- pytest.raises(IndexError, s.__getitem__, 5)
- pytest.raises(IndexError, s.__setitem__, 5, 0)
-
- def test_int_indexing(self):
- s = Series(np.random.randn(6), index=[0, 0, 1, 1, 2, 2])
-
- pytest.raises(KeyError, s.__getitem__, 5)
-
- pytest.raises(KeyError, s.__getitem__, 'c')
-
- # not monotonic
- s = Series(np.random.randn(6), index=[2, 2, 0, 0, 1, 1])
-
- pytest.raises(KeyError, s.__getitem__, 5)
-
- pytest.raises(KeyError, s.__getitem__, 'c')
-
- def test_datetime_indexing(self):
- from pandas import date_range
-
- index = date_range('1/1/2000', '1/7/2000')
- index = index.repeat(3)
-
- s = Series(len(index), index=index)
- stamp = Timestamp('1/8/2000')
-
- pytest.raises(KeyError, s.__getitem__, stamp)
- s[stamp] = 0
- assert s[stamp] == 0
-
- # not monotonic
- s = Series(len(index), index=index)
- s = s[::-1]
-
- pytest.raises(KeyError, s.__getitem__, stamp)
- s[stamp] = 0
- assert s[stamp] == 0
-
- def test_timedelta_assignment(self):
- # GH 8209
- s = Series([])
- s.loc['B'] = timedelta(1)
- tm.assert_series_equal(s, Series(Timedelta('1 days'), index=['B']))
-
- s = s.reindex(s.index.insert(0, 'A'))
- tm.assert_series_equal(s, Series(
- [np.nan, Timedelta('1 days')], index=['A', 'B']))
-
- result = s.fillna(timedelta(1))
- expected = Series(Timedelta('1 days'), index=['A', 'B'])
- tm.assert_series_equal(result, expected)
-
- s.loc['A'] = timedelta(1)
- tm.assert_series_equal(s, expected)
-
- # GH 14155
- s = Series(10 * [np.timedelta64(10, 'm')])
- s.loc[[1, 2, 3]] = np.timedelta64(20, 'm')
- expected = pd.Series(10 * [np.timedelta64(10, 'm')])
- expected.loc[[1, 2, 3]] = pd.Timedelta(np.timedelta64(20, 'm'))
- tm.assert_series_equal(s, expected)
-
- def test_underlying_data_conversion(self):
-
- # GH 4080
- df = DataFrame({c: [1, 2, 3] for c in ['a', 'b', 'c']})
- df.set_index(['a', 'b', 'c'], inplace=True)
- s = Series([1], index=[(2, 2, 2)])
- df['val'] = 0
- df
- df['val'].update(s)
-
- expected = DataFrame(
- dict(a=[1, 2, 3], b=[1, 2, 3], c=[1, 2, 3], val=[0, 1, 0]))
- expected.set_index(['a', 'b', 'c'], inplace=True)
- tm.assert_frame_equal(df, expected)
-
- # GH 3970
- # these are chained assignments as well
- pd.set_option('chained_assignment', None)
- df = DataFrame({"aa": range(5), "bb": [2.2] * 5})
- df["cc"] = 0.0
-
- ck = [True] * len(df)
-
- df["bb"].iloc[0] = .13
-
- # TODO: unused
- df_tmp = df.iloc[ck] # noqa
-
- df["bb"].iloc[0] = .15
- assert df['bb'].iloc[0] == 0.15
- pd.set_option('chained_assignment', 'raise')
-
- # GH 3217
- df = DataFrame(dict(a=[1, 3], b=[np.nan, 2]))
- df['c'] = np.nan
- df['c'].update(pd.Series(['foo'], index=[0]))
-
- expected = DataFrame(dict(a=[1, 3], b=[np.nan, 2], c=['foo', np.nan]))
- tm.assert_frame_equal(df, expected)
-
- def test_preserveRefs(self):
- seq = self.ts[[5, 10, 15]]
- seq[1] = np.NaN
- assert not np.isnan(self.ts[10])
-
- def test_drop(self):
-
- # unique
- s = Series([1, 2], index=['one', 'two'])
- expected = Series([1], index=['one'])
- result = s.drop(['two'])
- assert_series_equal(result, expected)
- result = s.drop('two', axis='rows')
- assert_series_equal(result, expected)
-
- # non-unique
- # GH 5248
- s = Series([1, 1, 2], index=['one', 'two', 'one'])
- expected = Series([1, 2], index=['one', 'one'])
- result = s.drop(['two'], axis=0)
- assert_series_equal(result, expected)
- result = s.drop('two')
- assert_series_equal(result, expected)
-
- expected = Series([1], index=['two'])
- result = s.drop(['one'])
- assert_series_equal(result, expected)
- result = s.drop('one')
- assert_series_equal(result, expected)
-
- # single string/tuple-like
- s = Series(range(3), index=list('abc'))
- pytest.raises(KeyError, s.drop, 'bc')
- pytest.raises(KeyError, s.drop, ('a', ))
-
- # errors='ignore'
- s = Series(range(3), index=list('abc'))
- result = s.drop('bc', errors='ignore')
- assert_series_equal(result, s)
- result = s.drop(['a', 'd'], errors='ignore')
- expected = s.iloc[1:]
- assert_series_equal(result, expected)
-
- # bad axis
- pytest.raises(ValueError, s.drop, 'one', axis='columns')
-
- # GH 8522
- s = Series([2, 3], index=[True, False])
- assert s.index.is_object()
- result = s.drop(True)
- expected = Series([3], index=[False])
- assert_series_equal(result, expected)
-
- # GH 16877
- s = Series([2, 3], index=[0, 1])
- with tm.assert_raises_regex(KeyError, 'not contained in axis'):
- s.drop([False, True])
-
- def test_align(self):
- def _check_align(a, b, how='left', fill=None):
- aa, ab = a.align(b, join=how, fill_value=fill)
-
- join_index = a.index.join(b.index, how=how)
- if fill is not None:
- diff_a = aa.index.difference(join_index)
- diff_b = ab.index.difference(join_index)
- if len(diff_a) > 0:
- assert (aa.reindex(diff_a) == fill).all()
- if len(diff_b) > 0:
- assert (ab.reindex(diff_b) == fill).all()
-
- ea = a.reindex(join_index)
- eb = b.reindex(join_index)
-
- if fill is not None:
- ea = ea.fillna(fill)
- eb = eb.fillna(fill)
-
- assert_series_equal(aa, ea)
- assert_series_equal(ab, eb)
- assert aa.name == 'ts'
- assert ea.name == 'ts'
- assert ab.name == 'ts'
- assert eb.name == 'ts'
-
- for kind in JOIN_TYPES:
- _check_align(self.ts[2:], self.ts[:-5], how=kind)
- _check_align(self.ts[2:], self.ts[:-5], how=kind, fill=-1)
-
- # empty left
- _check_align(self.ts[:0], self.ts[:-5], how=kind)
- _check_align(self.ts[:0], self.ts[:-5], how=kind, fill=-1)
-
- # empty right
- _check_align(self.ts[:-5], self.ts[:0], how=kind)
- _check_align(self.ts[:-5], self.ts[:0], how=kind, fill=-1)
-
- # both empty
- _check_align(self.ts[:0], self.ts[:0], how=kind)
- _check_align(self.ts[:0], self.ts[:0], how=kind, fill=-1)
-
- def test_align_fill_method(self):
- def _check_align(a, b, how='left', method='pad', limit=None):
- aa, ab = a.align(b, join=how, method=method, limit=limit)
-
- join_index = a.index.join(b.index, how=how)
- ea = a.reindex(join_index)
- eb = b.reindex(join_index)
-
- ea = ea.fillna(method=method, limit=limit)
- eb = eb.fillna(method=method, limit=limit)
-
- assert_series_equal(aa, ea)
- assert_series_equal(ab, eb)
-
- for kind in JOIN_TYPES:
- for meth in ['pad', 'bfill']:
- _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth)
- _check_align(self.ts[2:], self.ts[:-5], how=kind, method=meth,
- limit=1)
-
- # empty left
- _check_align(self.ts[:0], self.ts[:-5], how=kind, method=meth)
- _check_align(self.ts[:0], self.ts[:-5], how=kind, method=meth,
- limit=1)
-
- # empty right
- _check_align(self.ts[:-5], self.ts[:0], how=kind, method=meth)
- _check_align(self.ts[:-5], self.ts[:0], how=kind, method=meth,
- limit=1)
-
- # both empty
- _check_align(self.ts[:0], self.ts[:0], how=kind, method=meth)
- _check_align(self.ts[:0], self.ts[:0], how=kind, method=meth,
- limit=1)
-
- def test_align_nocopy(self):
- b = self.ts[:5].copy()
-
- # do copy
- a = self.ts.copy()
- ra, _ = a.align(b, join='left')
- ra[:5] = 5
- assert not (a[:5] == 5).any()
-
- # do not copy
- a = self.ts.copy()
- ra, _ = a.align(b, join='left', copy=False)
- ra[:5] = 5
- assert (a[:5] == 5).all()
-
- # do copy
- a = self.ts.copy()
- b = self.ts[:5].copy()
- _, rb = a.align(b, join='right')
- rb[:3] = 5
- assert not (b[:3] == 5).any()
-
- # do not copy
- a = self.ts.copy()
- b = self.ts[:5].copy()
- _, rb = a.align(b, join='right', copy=False)
- rb[:2] = 5
- assert (b[:2] == 5).all()
-
- def test_align_same_index(self):
- a, b = self.ts.align(self.ts, copy=False)
- assert a.index is self.ts.index
- assert b.index is self.ts.index
-
- a, b = self.ts.align(self.ts, copy=True)
- assert a.index is not self.ts.index
- assert b.index is not self.ts.index
-
- def test_align_multiindex(self):
- # GH 10665
-
- midx = pd.MultiIndex.from_product([range(2), range(3), range(2)],
- names=('a', 'b', 'c'))
- idx = pd.Index(range(2), name='b')
- s1 = pd.Series(np.arange(12, dtype='int64'), index=midx)
- s2 = pd.Series(np.arange(2, dtype='int64'), index=idx)
-
- # these must be the same results (but flipped)
- res1l, res1r = s1.align(s2, join='left')
- res2l, res2r = s2.align(s1, join='right')
-
- expl = s1
- tm.assert_series_equal(expl, res1l)
- tm.assert_series_equal(expl, res2r)
- expr = pd.Series([0, 0, 1, 1, np.nan, np.nan] * 2, index=midx)
- tm.assert_series_equal(expr, res1r)
- tm.assert_series_equal(expr, res2l)
-
- res1l, res1r = s1.align(s2, join='right')
- res2l, res2r = s2.align(s1, join='left')
-
- exp_idx = pd.MultiIndex.from_product([range(2), range(2), range(2)],
- names=('a', 'b', 'c'))
- expl = pd.Series([0, 1, 2, 3, 6, 7, 8, 9], index=exp_idx)
- tm.assert_series_equal(expl, res1l)
- tm.assert_series_equal(expl, res2r)
- expr = pd.Series([0, 0, 1, 1] * 2, index=exp_idx)
- tm.assert_series_equal(expr, res1r)
- tm.assert_series_equal(expr, res2l)
-
- def test_reindex(self):
-
- identity = self.series.reindex(self.series.index)
-
- # __array_interface__ is not defined for older numpies
- # and on some pythons
- try:
- assert np.may_share_memory(self.series.index, identity.index)
- except AttributeError:
- pass
-
- assert identity.index.is_(self.series.index)
- assert identity.index.identical(self.series.index)
-
- subIndex = self.series.index[10:20]
- subSeries = self.series.reindex(subIndex)
-
- for idx, val in compat.iteritems(subSeries):
- assert val == self.series[idx]
-
- subIndex2 = self.ts.index[10:20]
- subTS = self.ts.reindex(subIndex2)
-
- for idx, val in compat.iteritems(subTS):
- assert val == self.ts[idx]
- stuffSeries = self.ts.reindex(subIndex)
-
- assert np.isnan(stuffSeries).all()
-
- # This is extremely important for the Cython code to not screw up
- nonContigIndex = self.ts.index[::2]
- subNonContig = self.ts.reindex(nonContigIndex)
- for idx, val in compat.iteritems(subNonContig):
- assert val == self.ts[idx]
-
- # return a copy the same index here
- result = self.ts.reindex()
- assert not (result is self.ts)
-
- def test_reindex_nan(self):
- ts = Series([2, 3, 5, 7], index=[1, 4, nan, 8])
-
- i, j = [nan, 1, nan, 8, 4, nan], [2, 0, 2, 3, 1, 2]
- assert_series_equal(ts.reindex(i), ts.iloc[j])
-
- ts.index = ts.index.astype('object')
-
- # reindex coerces index.dtype to float, loc/iloc doesn't
- assert_series_equal(ts.reindex(i), ts.iloc[j], check_index_type=False)
-
- def test_reindex_series_add_nat(self):
- rng = date_range('1/1/2000 00:00:00', periods=10, freq='10s')
- series = Series(rng)
-
- result = series.reindex(lrange(15))
- assert np.issubdtype(result.dtype, np.dtype('M8[ns]'))
-
- mask = result.isna()
- assert mask[-5:].all()
- assert not mask[:-5].any()
-
- def test_reindex_with_datetimes(self):
- rng = date_range('1/1/2000', periods=20)
- ts = Series(np.random.randn(20), index=rng)
-
- result = ts.reindex(list(ts.index[5:10]))
- expected = ts[5:10]
- tm.assert_series_equal(result, expected)
-
- result = ts[list(ts.index[5:10])]
- tm.assert_series_equal(result, expected)
-
- def test_reindex_corner(self):
- # (don't forget to fix this) I think it's fixed
- self.empty.reindex(self.ts.index, method='pad') # it works
-
- # corner case: pad empty series
- reindexed = self.empty.reindex(self.ts.index, method='pad')
-
- # pass non-Index
- reindexed = self.ts.reindex(list(self.ts.index))
- assert_series_equal(self.ts, reindexed)
-
- # bad fill method
- ts = self.ts[::2]
- pytest.raises(Exception, ts.reindex, self.ts.index, method='foo')
-
- def test_reindex_pad(self):
-
- s = Series(np.arange(10), dtype='int64')
- s2 = s[::2]
-
- reindexed = s2.reindex(s.index, method='pad')
- reindexed2 = s2.reindex(s.index, method='ffill')
- assert_series_equal(reindexed, reindexed2)
-
- expected = Series([0, 0, 2, 2, 4, 4, 6, 6, 8, 8], index=np.arange(10))
- assert_series_equal(reindexed, expected)
-
- # GH4604
- s = Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])
- new_index = ['a', 'g', 'c', 'f']
- expected = Series([1, 1, 3, 3], index=new_index)
-
- # this changes dtype because the ffill happens after
- result = s.reindex(new_index).ffill()
- assert_series_equal(result, expected.astype('float64'))
-
- result = s.reindex(new_index).ffill(downcast='infer')
- assert_series_equal(result, expected)
-
- expected = Series([1, 5, 3, 5], index=new_index)
- result = s.reindex(new_index, method='ffill')
- assert_series_equal(result, expected)
-
- # inference of new dtype
- s = Series([True, False, False, True], index=list('abcd'))
- new_index = 'agc'
- result = s.reindex(list(new_index)).ffill()
- expected = Series([True, True, False], index=list(new_index))
- assert_series_equal(result, expected)
-
- # GH4618 shifted series downcasting
- s = Series(False, index=lrange(0, 5))
- result = s.shift(1).fillna(method='bfill')
- expected = Series(False, index=lrange(0, 5))
- assert_series_equal(result, expected)
-
- def test_reindex_nearest(self):
- s = Series(np.arange(10, dtype='int64'))
- target = [0.1, 0.9, 1.5, 2.0]
- actual = s.reindex(target, method='nearest')
- expected = Series(np.around(target).astype('int64'), target)
- assert_series_equal(expected, actual)
-
- actual = s.reindex_like(actual, method='nearest')
- assert_series_equal(expected, actual)
-
- actual = s.reindex_like(actual, method='nearest', tolerance=1)
- assert_series_equal(expected, actual)
- actual = s.reindex_like(actual, method='nearest',
- tolerance=[1, 2, 3, 4])
- assert_series_equal(expected, actual)
-
- actual = s.reindex(target, method='nearest', tolerance=0.2)
- expected = Series([0, 1, np.nan, 2], target)
- assert_series_equal(expected, actual)
-
- actual = s.reindex(target, method='nearest',
- tolerance=[0.3, 0.01, 0.4, 3])
- expected = Series([0, np.nan, np.nan, 2], target)
- assert_series_equal(expected, actual)
-
- def test_reindex_backfill(self):
- pass
-
- def test_reindex_int(self):
- ts = self.ts[::2]
- int_ts = Series(np.zeros(len(ts), dtype=int), index=ts.index)
-
- # this should work fine
- reindexed_int = int_ts.reindex(self.ts.index)
-
- # if NaNs introduced
- assert reindexed_int.dtype == np.float_
-
- # NO NaNs introduced
- reindexed_int = int_ts.reindex(int_ts.index[::2])
- assert reindexed_int.dtype == np.int_
-
- def test_reindex_bool(self):
-
- # A series other than float, int, string, or object
- ts = self.ts[::2]
- bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
-
- # this should work fine
- reindexed_bool = bool_ts.reindex(self.ts.index)
-
- # if NaNs introduced
- assert reindexed_bool.dtype == np.object_
-
- # NO NaNs introduced
- reindexed_bool = bool_ts.reindex(bool_ts.index[::2])
- assert reindexed_bool.dtype == np.bool_
-
- def test_reindex_bool_pad(self):
- # fail
- ts = self.ts[5:]
- bool_ts = Series(np.zeros(len(ts), dtype=bool), index=ts.index)
- filled_bool = bool_ts.reindex(self.ts.index, method='pad')
- assert isna(filled_bool[:5]).all()
-
- def test_reindex_like(self):
- other = self.ts[::2]
- assert_series_equal(self.ts.reindex(other.index),
- self.ts.reindex_like(other))
-
- # GH 7179
- day1 = datetime(2013, 3, 5)
- day2 = datetime(2013, 5, 5)
- day3 = datetime(2014, 3, 5)
-
- series1 = Series([5, None, None], [day1, day2, day3])
- series2 = Series([None, None], [day1, day3])
-
- result = series1.reindex_like(series2, method='pad')
- expected = Series([5, np.nan], index=[day1, day3])
- assert_series_equal(result, expected)
-
- def test_reindex_fill_value(self):
- # -----------------------------------------------------------
- # floats
- floats = Series([1., 2., 3.])
- result = floats.reindex([1, 2, 3])
- expected = Series([2., 3., np.nan], index=[1, 2, 3])
- assert_series_equal(result, expected)
-
- result = floats.reindex([1, 2, 3], fill_value=0)
- expected = Series([2., 3., 0], index=[1, 2, 3])
- assert_series_equal(result, expected)
-
- # -----------------------------------------------------------
- # ints
- ints = Series([1, 2, 3])
-
- result = ints.reindex([1, 2, 3])
- expected = Series([2., 3., np.nan], index=[1, 2, 3])
- assert_series_equal(result, expected)
-
- # don't upcast
- result = ints.reindex([1, 2, 3], fill_value=0)
- expected = Series([2, 3, 0], index=[1, 2, 3])
- assert issubclass(result.dtype.type, np.integer)
- assert_series_equal(result, expected)
-
- # -----------------------------------------------------------
- # objects
- objects = Series([1, 2, 3], dtype=object)
-
- result = objects.reindex([1, 2, 3])
- expected = Series([2, 3, np.nan], index=[1, 2, 3], dtype=object)
- assert_series_equal(result, expected)
-
- result = objects.reindex([1, 2, 3], fill_value='foo')
- expected = Series([2, 3, 'foo'], index=[1, 2, 3], dtype=object)
- assert_series_equal(result, expected)
-
- # ------------------------------------------------------------
- # bools
- bools = Series([True, False, True])
-
- result = bools.reindex([1, 2, 3])
- expected = Series([False, True, np.nan], index=[1, 2, 3], dtype=object)
- assert_series_equal(result, expected)
-
- result = bools.reindex([1, 2, 3], fill_value=False)
- expected = Series([False, True, False], index=[1, 2, 3])
- assert_series_equal(result, expected)
-
- def test_reindex_categorical(self):
-
- index = date_range('20000101', periods=3)
-
- # reindexing to an invalid Categorical
- s = Series(['a', 'b', 'c'], dtype='category')
- result = s.reindex(index)
- expected = Series(Categorical(values=[np.nan, np.nan, np.nan],
- categories=['a', 'b', 'c']))
- expected.index = index
- tm.assert_series_equal(result, expected)
-
- # partial reindexing
- expected = Series(Categorical(values=['b', 'c'], categories=['a', 'b',
- 'c']))
- expected.index = [1, 2]
- result = s.reindex([1, 2])
- tm.assert_series_equal(result, expected)
-
- expected = Series(Categorical(
- values=['c', np.nan], categories=['a', 'b', 'c']))
- expected.index = [2, 3]
- result = s.reindex([2, 3])
- tm.assert_series_equal(result, expected)
-
- def test_rename(self):
-
- # GH 17407
- s = Series(range(1, 6), index=pd.Index(range(2, 7), name='IntIndex'))
- result = s.rename(str)
- expected = s.rename(lambda i: str(i))
- assert_series_equal(result, expected)
-
- assert result.name == expected.name
-
- def test_select(self):
-
- # deprecated: gh-12410
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- n = len(self.ts)
- result = self.ts.select(lambda x: x >= self.ts.index[n // 2])
- expected = self.ts.reindex(self.ts.index[n // 2:])
- assert_series_equal(result, expected)
-
- result = self.ts.select(lambda x: x.weekday() == 2)
- expected = self.ts[self.ts.index.weekday == 2]
- assert_series_equal(result, expected)
-
- def test_cast_on_putmask(self):
-
- # GH 2746
-
- # need to upcast
- s = Series([1, 2], index=[1, 2], dtype='int64')
- s[[True, False]] = Series([0], index=[1], dtype='int64')
- expected = Series([0, 2], index=[1, 2], dtype='int64')
-
- assert_series_equal(s, expected)
-
- def test_type_promote_putmask(self):
-
- # GH8387: test that changing types does not break alignment
- ts = Series(np.random.randn(100), index=np.arange(100, 0, -1)).round(5)
- left, mask = ts.copy(), ts > 0
- right = ts[mask].copy().map(str)
- left[mask] = right
- assert_series_equal(left, ts.map(lambda t: str(t) if t > 0 else t))
-
- s = Series([0, 1, 2, 0])
- mask = s > 0
- s2 = s[mask].map(str)
- s[mask] = s2
- assert_series_equal(s, Series([0, '1', '2', 0]))
-
- s = Series([0, 'foo', 'bar', 0])
- mask = Series([False, True, True, False])
- s2 = s[mask]
- s[mask] = s2
- assert_series_equal(s, Series([0, 'foo', 'bar', 0]))
-
- def test_head_tail(self):
- assert_series_equal(self.series.head(), self.series[:5])
- assert_series_equal(self.series.head(0), self.series[0:0])
- assert_series_equal(self.series.tail(), self.series[-5:])
- assert_series_equal(self.series.tail(0), self.series[0:0])
-
- def test_multilevel_preserve_name(self):
- index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'], ['one', 'two',
- 'three']],
- labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
- [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
- names=['first', 'second'])
- s = Series(np.random.randn(len(index)), index=index, name='sth')
-
- result = s['foo']
- result2 = s.loc['foo']
- assert result.name == s.name
- assert result2.name == s.name
-
- def test_setitem_scalar_into_readonly_backing_data(self):
- # GH14359: test that you cannot mutate a read only buffer
-
- array = np.zeros(5)
- array.flags.writeable = False # make the array immutable
- series = Series(array)
-
- for n in range(len(series)):
- with pytest.raises(ValueError):
- series[n] = 1
-
- assert array[n] == 0
-
- def test_setitem_slice_into_readonly_backing_data(self):
- # GH14359: test that you cannot mutate a read only buffer
-
- array = np.zeros(5)
- array.flags.writeable = False # make the array immutable
- series = Series(array)
-
- with pytest.raises(ValueError):
- series[1:3] = 1
-
- assert not array.any()
-
- def test_categorial_assigning_ops(self):
- orig = Series(Categorical(["b", "b"], categories=["a", "b"]))
- s = orig.copy()
- s[:] = "a"
- exp = Series(Categorical(["a", "a"], categories=["a", "b"]))
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s[1] = "a"
- exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s[s.index > 0] = "a"
- exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s[[False, True]] = "a"
- exp = Series(Categorical(["b", "a"], categories=["a", "b"]))
- tm.assert_series_equal(s, exp)
-
- s = orig.copy()
- s.index = ["x", "y"]
- s["y"] = "a"
- exp = Series(Categorical(["b", "a"], categories=["a", "b"]),
- index=["x", "y"])
- tm.assert_series_equal(s, exp)
-
- # ensure that one can set something to np.nan
- s = Series(Categorical([1, 2, 3]))
- exp = Series(Categorical([1, np.nan, 3], categories=[1, 2, 3]))
- s[1] = np.nan
- tm.assert_series_equal(s, exp)
-
-
-class TestTimeSeriesDuplicates(object):
-
- def setup_method(self, method):
- dates = [datetime(2000, 1, 2), datetime(2000, 1, 2),
- datetime(2000, 1, 2), datetime(2000, 1, 3),
- datetime(2000, 1, 3), datetime(2000, 1, 3),
- datetime(2000, 1, 4), datetime(2000, 1, 4),
- datetime(2000, 1, 4), datetime(2000, 1, 5)]
-
- self.dups = Series(np.random.randn(len(dates)), index=dates)
-
- def test_constructor(self):
- assert isinstance(self.dups, Series)
- assert isinstance(self.dups.index, DatetimeIndex)
-
- def test_is_unique_monotonic(self):
- assert not self.dups.index.is_unique
-
- def test_index_unique(self):
- uniques = self.dups.index.unique()
- expected = DatetimeIndex([datetime(2000, 1, 2), datetime(2000, 1, 3),
- datetime(2000, 1, 4), datetime(2000, 1, 5)])
- assert uniques.dtype == 'M8[ns]' # sanity
- tm.assert_index_equal(uniques, expected)
- assert self.dups.index.nunique() == 4
-
- # #2563
- assert isinstance(uniques, DatetimeIndex)
-
- dups_local = self.dups.index.tz_localize('US/Eastern')
- dups_local.name = 'foo'
- result = dups_local.unique()
- expected = DatetimeIndex(expected, name='foo')
- expected = expected.tz_localize('US/Eastern')
- assert result.tz is not None
- assert result.name == 'foo'
- tm.assert_index_equal(result, expected)
-
- # NaT, note this is excluded
- arr = [1370745748 + t for t in range(20)] + [tslib.iNaT]
- idx = DatetimeIndex(arr * 3)
- tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
- assert idx.nunique() == 20
- assert idx.nunique(dropna=False) == 21
-
- arr = [Timestamp('2013-06-09 02:42:28') + timedelta(seconds=t)
- for t in range(20)] + [NaT]
- idx = DatetimeIndex(arr * 3)
- tm.assert_index_equal(idx.unique(), DatetimeIndex(arr))
- assert idx.nunique() == 20
- assert idx.nunique(dropna=False) == 21
-
- def test_index_dupes_contains(self):
- d = datetime(2011, 12, 5, 20, 30)
- ix = DatetimeIndex([d, d])
- assert d in ix
-
- def test_duplicate_dates_indexing(self):
- ts = self.dups
-
- uniques = ts.index.unique()
- for date in uniques:
- result = ts[date]
-
- mask = ts.index == date
- total = (ts.index == date).sum()
- expected = ts[mask]
- if total > 1:
- assert_series_equal(result, expected)
- else:
- assert_almost_equal(result, expected[0])
-
- cp = ts.copy()
- cp[date] = 0
- expected = Series(np.where(mask, 0, ts), index=ts.index)
- assert_series_equal(cp, expected)
-
- pytest.raises(KeyError, ts.__getitem__, datetime(2000, 1, 6))
-
- # new index
- ts[datetime(2000, 1, 6)] = 0
- assert ts[datetime(2000, 1, 6)] == 0
-
- def test_range_slice(self):
- idx = DatetimeIndex(['1/1/2000', '1/2/2000', '1/2/2000', '1/3/2000',
- '1/4/2000'])
-
- ts = Series(np.random.randn(len(idx)), index=idx)
-
- result = ts['1/2/2000':]
- expected = ts[1:]
- assert_series_equal(result, expected)
-
- result = ts['1/2/2000':'1/3/2000']
- expected = ts[1:4]
- assert_series_equal(result, expected)
-
- def test_groupby_average_dup_values(self):
- result = self.dups.groupby(level=0).mean()
- expected = self.dups.groupby(self.dups.index).mean()
- assert_series_equal(result, expected)
-
- def test_indexing_over_size_cutoff(self):
- import datetime
- # #1821
-
- old_cutoff = _index._SIZE_CUTOFF
- try:
- _index._SIZE_CUTOFF = 1000
-
- # create large list of non periodic datetime
- dates = []
- sec = datetime.timedelta(seconds=1)
- half_sec = datetime.timedelta(microseconds=500000)
- d = datetime.datetime(2011, 12, 5, 20, 30)
- n = 1100
- for i in range(n):
- dates.append(d)
- dates.append(d + sec)
- dates.append(d + sec + half_sec)
- dates.append(d + sec + sec + half_sec)
- d += 3 * sec
-
- # duplicate some values in the list
- duplicate_positions = np.random.randint(0, len(dates) - 1, 20)
- for p in duplicate_positions:
- dates[p + 1] = dates[p]
-
- df = DataFrame(np.random.randn(len(dates), 4),
- index=dates,
- columns=list('ABCD'))
-
- pos = n * 3
- timestamp = df.index[pos]
- assert timestamp in df.index
-
- # it works!
- df.loc[timestamp]
- assert len(df.loc[[timestamp]]) > 0
- finally:
- _index._SIZE_CUTOFF = old_cutoff
-
- def test_indexing_unordered(self):
- # GH 2437
- rng = date_range(start='2011-01-01', end='2011-01-15')
- ts = Series(np.random.rand(len(rng)), index=rng)
- ts2 = pd.concat([ts[0:4], ts[-4:], ts[4:-4]])
-
- for t in ts.index:
- # TODO: unused?
- s = str(t) # noqa
-
- expected = ts[t]
- result = ts2[t]
- assert expected == result
-
- # GH 3448 (ranges)
- def compare(slobj):
- result = ts2[slobj].copy()
- result = result.sort_index()
- expected = ts[slobj]
- assert_series_equal(result, expected)
-
- compare(slice('2011-01-01', '2011-01-15'))
- compare(slice('2010-12-30', '2011-01-15'))
- compare(slice('2011-01-01', '2011-01-16'))
-
- # partial ranges
- compare(slice('2011-01-01', '2011-01-6'))
- compare(slice('2011-01-06', '2011-01-8'))
- compare(slice('2011-01-06', '2011-01-12'))
-
- # single values
- result = ts2['2011'].sort_index()
- expected = ts['2011']
- assert_series_equal(result, expected)
-
- # diff freq
- rng = date_range(datetime(2005, 1, 1), periods=20, freq='M')
- ts = Series(np.arange(len(rng)), index=rng)
- ts = ts.take(np.random.permutation(20))
-
- result = ts['2005']
- for t in result.index:
- assert t.year == 2005
-
- def test_indexing(self):
-
- idx = date_range("2001-1-1", periods=20, freq='M')
- ts = Series(np.random.rand(len(idx)), index=idx)
-
- # getting
-
- # GH 3070, make sure semantics work on Series/Frame
- expected = ts['2001']
- expected.name = 'A'
-
- df = DataFrame(dict(A=ts))
- result = df['2001']['A']
- assert_series_equal(expected, result)
-
- # setting
- ts['2001'] = 1
- expected = ts['2001']
- expected.name = 'A'
-
- df.loc['2001', 'A'] = 1
-
- result = df['2001']['A']
- assert_series_equal(expected, result)
-
- # GH3546 (not including times on the last day)
- idx = date_range(start='2013-05-31 00:00', end='2013-05-31 23:00',
- freq='H')
- ts = Series(lrange(len(idx)), index=idx)
- expected = ts['2013-05']
- assert_series_equal(expected, ts)
-
- idx = date_range(start='2013-05-31 00:00', end='2013-05-31 23:59',
- freq='S')
- ts = Series(lrange(len(idx)), index=idx)
- expected = ts['2013-05']
- assert_series_equal(expected, ts)
-
- idx = [Timestamp('2013-05-31 00:00'),
- Timestamp(datetime(2013, 5, 31, 23, 59, 59, 999999))]
- ts = Series(lrange(len(idx)), index=idx)
- expected = ts['2013']
- assert_series_equal(expected, ts)
-
- # GH14826, indexing with a seconds resolution string / datetime object
- df = DataFrame(np.random.rand(5, 5),
- columns=['open', 'high', 'low', 'close', 'volume'],
- index=date_range('2012-01-02 18:01:00',
- periods=5, tz='US/Central', freq='s'))
- expected = df.loc[[df.index[2]]]
-
- # this is a single date, so will raise
- pytest.raises(KeyError, df.__getitem__, '2012-01-02 18:01:02', )
- pytest.raises(KeyError, df.__getitem__, df.index[2], )
-
-
-class TestDatetimeIndexing(object):
- """
- Also test support for datetime64[ns] in Series / DataFrame
- """
-
- def setup_method(self, method):
- dti = DatetimeIndex(start=datetime(2005, 1, 1),
- end=datetime(2005, 1, 10), freq='Min')
- self.series = Series(np.random.rand(len(dti)), dti)
-
- def test_fancy_getitem(self):
- dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005, 1, 1),
- end=datetime(2010, 1, 1))
-
- s = Series(np.arange(len(dti)), index=dti)
-
- assert s[48] == 48
- assert s['1/2/2009'] == 48
- assert s['2009-1-2'] == 48
- assert s[datetime(2009, 1, 2)] == 48
- assert s[Timestamp(datetime(2009, 1, 2))] == 48
- pytest.raises(KeyError, s.__getitem__, '2009-1-3')
-
- assert_series_equal(s['3/6/2009':'2009-06-05'],
- s[datetime(2009, 3, 6):datetime(2009, 6, 5)])
-
- def test_fancy_setitem(self):
- dti = DatetimeIndex(freq='WOM-1FRI', start=datetime(2005, 1, 1),
- end=datetime(2010, 1, 1))
-
- s = Series(np.arange(len(dti)), index=dti)
- s[48] = -1
- assert s[48] == -1
- s['1/2/2009'] = -2
- assert s[48] == -2
- s['1/2/2009':'2009-06-05'] = -3
- assert (s[48:54] == -3).all()
-
- def test_dti_snap(self):
- dti = DatetimeIndex(['1/1/2002', '1/2/2002', '1/3/2002', '1/4/2002',
- '1/5/2002', '1/6/2002', '1/7/2002'], freq='D')
-
- res = dti.snap(freq='W-MON')
- exp = date_range('12/31/2001', '1/7/2002', freq='w-mon')
- exp = exp.repeat([3, 4])
- assert (res == exp).all()
-
- res = dti.snap(freq='B')
-
- exp = date_range('1/1/2002', '1/7/2002', freq='b')
- exp = exp.repeat([1, 1, 1, 2, 2])
- assert (res == exp).all()
-
- def test_dti_reset_index_round_trip(self):
- dti = DatetimeIndex(start='1/1/2001', end='6/1/2001', freq='D')
- d1 = DataFrame({'v': np.random.rand(len(dti))}, index=dti)
- d2 = d1.reset_index()
- assert d2.dtypes[0] == np.dtype('M8[ns]')
- d3 = d2.set_index('index')
- assert_frame_equal(d1, d3, check_names=False)
-
- # #2329
- stamp = datetime(2012, 11, 22)
- df = DataFrame([[stamp, 12.1]], columns=['Date', 'Value'])
- df = df.set_index('Date')
-
- assert df.index[0] == stamp
- assert df.reset_index()['Date'][0] == stamp
-
- def test_series_set_value(self):
- # #1561
-
- dates = [datetime(2001, 1, 1), datetime(2001, 1, 2)]
- index = DatetimeIndex(dates)
-
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- s = Series().set_value(dates[0], 1.)
- with tm.assert_produces_warning(FutureWarning,
- check_stacklevel=False):
- s2 = s.set_value(dates[1], np.nan)
-
- exp = Series([1., np.nan], index=index)
-
- assert_series_equal(s2, exp)
-
- # s = Series(index[:1], index[:1])
- # s2 = s.set_value(dates[1], index[1])
- # assert s2.values.dtype == 'M8[ns]'
-
- @pytest.mark.slow
- def test_slice_locs_indexerror(self):
- times = [datetime(2000, 1, 1) + timedelta(minutes=i * 10)
- for i in range(100000)]
- s = Series(lrange(100000), times)
- s.loc[datetime(1900, 1, 1):datetime(2100, 1, 1)]
-
- def test_slicing_datetimes(self):
-
- # GH 7523
-
- # unique
- df = DataFrame(np.arange(4., dtype='float64'),
- index=[datetime(2001, 1, i, 10, 00)
- for i in [1, 2, 3, 4]])
- result = df.loc[datetime(2001, 1, 1, 10):]
- assert_frame_equal(result, df)
- result = df.loc[:datetime(2001, 1, 4, 10)]
- assert_frame_equal(result, df)
- result = df.loc[datetime(2001, 1, 1, 10):datetime(2001, 1, 4, 10)]
- assert_frame_equal(result, df)
-
- result = df.loc[datetime(2001, 1, 1, 11):]
- expected = df.iloc[1:]
- assert_frame_equal(result, expected)
- result = df.loc['20010101 11':]
- assert_frame_equal(result, expected)
-
- # duplicates
- df = pd.DataFrame(np.arange(5., dtype='float64'),
- index=[datetime(2001, 1, i, 10, 00)
- for i in [1, 2, 2, 3, 4]])
-
- result = df.loc[datetime(2001, 1, 1, 10):]
- assert_frame_equal(result, df)
- result = df.loc[:datetime(2001, 1, 4, 10)]
- assert_frame_equal(result, df)
- result = df.loc[datetime(2001, 1, 1, 10):datetime(2001, 1, 4, 10)]
- assert_frame_equal(result, df)
-
- result = df.loc[datetime(2001, 1, 1, 11):]
- expected = df.iloc[1:]
- assert_frame_equal(result, expected)
- result = df.loc['20010101 11':]
- assert_frame_equal(result, expected)
-
- def test_frame_datetime64_duplicated(self):
- dates = date_range('2010-07-01', end='2010-08-05')
-
- tst = DataFrame({'symbol': 'AAA', 'date': dates})
- result = tst.duplicated(['date', 'symbol'])
- assert (-result).all()
-
- tst = DataFrame({'date': dates})
- result = tst.duplicated()
- assert (-result).all()
-
-
-class TestNatIndexing(object):
-
- def setup_method(self, method):
- self.series = Series(date_range('1/1/2000', periods=10))
-
- # ---------------------------------------------------------------------
- # NaT support
-
- def test_set_none_nan(self):
- self.series[3] = None
- assert self.series[3] is NaT
-
- self.series[3:5] = None
- assert self.series[4] is NaT
-
- self.series[5] = np.nan
- assert self.series[5] is NaT
-
- self.series[5:7] = np.nan
- assert self.series[6] is NaT
-
- def test_nat_operations(self):
- # GH 8617
- s = Series([0, pd.NaT], dtype='m8[ns]')
- exp = s[0]
- assert s.median() == exp
- assert s.min() == exp
- assert s.max() == exp
-
- def test_round_nat(self):
- # GH14940
- s = Series([pd.NaT])
- expected = Series(pd.NaT)
- for method in ["round", "floor", "ceil"]:
- round_method = getattr(s.dt, method)
- for freq in ["s", "5s", "min", "5min", "h", "5h"]:
- assert_series_equal(round_method(freq), expected)
| - [x] closes #18614
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/20006 | 2018-03-05T22:39:25Z | 2018-03-06T12:25:38Z | 2018-03-06T12:25:38Z | 2018-03-06T12:25:41Z |
DOC: fixed dynamic import mechanics of make.py | diff --git a/doc/make.py b/doc/make.py
index 53635498f2adb..4967f30453fd1 100755
--- a/doc/make.py
+++ b/doc/make.py
@@ -11,6 +11,7 @@
$ python make.py html
$ python make.py latex
"""
+import importlib
import sys
import os
import shutil
@@ -20,8 +21,6 @@
import webbrowser
import jinja2
-import pandas
-
DOC_PATH = os.path.dirname(os.path.abspath(__file__))
SOURCE_PATH = os.path.join(DOC_PATH, 'source')
@@ -134,7 +133,7 @@ def _process_single_doc(self, single_doc):
self.single_doc = single_doc
elif single_doc is not None:
try:
- obj = pandas
+ obj = pandas # noqa: F821
for name in single_doc.split('.'):
obj = getattr(obj, name)
except AttributeError:
@@ -332,7 +331,7 @@ def main():
'compile, e.g. "indexing", "DataFrame.join"'))
argparser.add_argument('--python-path',
type=str,
- default=os.path.join(DOC_PATH, '..'),
+ default=os.path.dirname(DOC_PATH),
help='path')
argparser.add_argument('-v', action='count', dest='verbosity', default=0,
help=('increase verbosity (can be repeated), '
@@ -343,7 +342,13 @@ def main():
raise ValueError('Unknown command {}. Available options: {}'.format(
args.command, ', '.join(cmds)))
+ # Below we update both os.environ and sys.path. The former is used by
+ # external libraries (namely Sphinx) to compile this module and resolve
+ # the import of `python_path` correctly. The latter is used to resolve
+ # the import within the module, injecting it into the global namespace
os.environ['PYTHONPATH'] = args.python_path
+ sys.path.append(args.python_path)
+ globals()['pandas'] = importlib.import_module('pandas')
builder = DocBuilder(args.num_jobs, not args.no_api, args.single,
args.verbosity)
| - [X] closes #20001
- [ ] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/20005 | 2018-03-05T22:17:02Z | 2018-03-06T08:30:14Z | 2018-03-06T08:30:14Z | 2018-05-14T21:11:42Z |